<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>mit-6</title>
<link href="http://dspace.mit.edu:80" rel="alternate"/>
<subtitle>The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.</subtitle>
<id>http://dspace.mit.edu:80</id>
<updated>2026-03-16T06:53:20Z</updated>
<dc:date>2026-03-16T06:53:20Z</dc:date>
<entry>
<title>Pd-Catalyzed Amination of Base-Sensitive Five-Membered Heteroaryl Halides with Aliphatic Amines</title>
<link href="https://hdl.handle.net/1721.1/165110" rel="alternate"/>
<author>
<name>Reichert, Elaine C</name>
</author>
<author>
<name>Feng, Kaibo</name>
</author>
<author>
<name>Sather, Aaron C</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165110</id>
<updated>2026-03-14T05:51:44Z</updated>
<published>2023-01-31T00:00:00Z</published>
<summary type="text">Pd-Catalyzed Amination of Base-Sensitive Five-Membered Heteroaryl Halides with Aliphatic Amines
Reichert, Elaine C; Feng, Kaibo; Sather, Aaron C; Buchwald, Stephen L
We report a versatile and functional-group-tolerant method for the Pd-catalyzed C–N cross-coupling of five-membered heteroaryl halides with primary and secondary amines, an important but underexplored transformation. Coupling reactions of challenging, pharmaceutically relevant heteroarenes, such as 2-H-1,3-azoles, are reported in good-to-excellent yields. High-yielding coupling reactions of a wide set of five-membered heteroaryl halides with sterically demanding α-branched cyclic amines and acyclic secondary amines are reported for the first time. The key to the broad applicability of this method is the synergistic combination of (1) the moderate-strength base NaOTMS, which limits base-mediated decomposition of sensitive five-membered heteroarenes that ultimately leads to catalyst deactivation, and (2) the use of a GPhos-supported Pd catalyst, which effectively resists heteroarene-induced catalyst deactivation while promoting efficient coupling, even for challenging and sterically demanding amines. Cross-coupling reactions between a wide variety of five-membered heteroaryl halides and amines are demonstrated, including eight examples involving densely functionalized medicinal chemistry building blocks.
</summary>
<dc:date>2023-01-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Abiotic peptides as carriers of information for the encoding of small-molecule library synthesis</title>
<link href="https://hdl.handle.net/1721.1/165109" rel="alternate"/>
<author>
<name>Rössler, Simon L</name>
</author>
<author>
<name>Grob, Nathalie M</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<author>
<name>Pentelute, Bradley L</name>
</author>
<id>https://hdl.handle.net/1721.1/165109</id>
<updated>2026-03-14T05:51:45Z</updated>
<published>2023-03-02T00:00:00Z</published>
<summary type="text">Abiotic peptides as carriers of information for the encoding of small-molecule library synthesis
Rössler, Simon L; Grob, Nathalie M; Buchwald, Stephen L; Pentelute, Bradley L
Encoding small-molecule information in DNA has been leveraged to accelerate the discovery of ligands for therapeutic targets such as proteins. However, oligonucleotide-based encoding is hampered by inherent limitations of information stability and density. In this study, we establish abiotic peptides for next-generation information storage and apply them for the encoding of diverse small-molecule synthesis. The chemical stability of the peptide-based tag allows the use of palladium-mediated reactions to efficiently synthesize peptide-encoded libraries (PELs) with broad chemical diversity and high purity. We demonstrate the successful de novo discovery of small-molecule protein ligands from PELs by affinity selection against carbonic anhydrase IX and the oncogenic protein targets BRD4(1) and MDM2. Collectively, this work establishes abiotic peptides as carriers of information for the encoding of small-molecule synthesis, leveraged herein for the discovery of protein ligands.
</summary>
<dc:date>2023-03-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studying Regioisomer Formation in the Pd‐Catalyzed Fluorination of Cyclic Vinyl Triflates: Evidence for in situ Ligand Modification</title>
<link href="https://hdl.handle.net/1721.1/165108" rel="alternate"/>
<author>
<name>Ye, Yuxuan</name>
</author>
<author>
<name>Kim, Seoung‐Tae</name>
</author>
<author>
<name>King, Ryan P</name>
</author>
<author>
<name>Baik, Mu‐Hyun</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165108</id>
<updated>2026-03-14T05:51:43Z</updated>
<published>2023-02-12T00:00:00Z</published>
<summary type="text">Studying Regioisomer Formation in the Pd‐Catalyzed Fluorination of Cyclic Vinyl Triflates: Evidence for in situ Ligand Modification
Ye, Yuxuan; Kim, Seoung‐Tae; King, Ryan P; Baik, Mu‐Hyun; Buchwald, Stephen L
Pd-catalyzed nucleophilic fluorination reactions are important methods for the synthesis of fluoroarenes and fluoroalkenes. However, these reactions can generate a mixture of regioisomeric products that are often difficult to separate. While investigating the Pd-catalyzed fluorination of cyclic vinyl triflates, we observed that the addition of a substoichiometric quantity of TESCF3 significantly improved the regioselectivity of the reaction. Herein, we report a combined experimental and computational study on the mechanism of this transformation focusing on the role of TESCF3. The poor regioselectivity of the reaction in the absence of additives results from the formation of LPd-cyclohexyne complexes (L=biaryl monophosphine ligand). When TESCF3 is added to the reaction mixture, the generation of the Pd-cyclohexyne complexes is diminished by an unexpected pathway involving the dearomatization of the ligand by nucleophilic attack from a trifluoromethyl anion (CF3−).
</summary>
<dc:date>2023-02-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Room-Temperature Cu-Catalyzed Amination of Aryl Bromides Enabled by DFT-Guided Ligand Design</title>
<link href="https://hdl.handle.net/1721.1/165107" rel="alternate"/>
<author>
<name>Kim, Seoung-Tae</name>
</author>
<author>
<name>Strauss, Michael J</name>
</author>
<author>
<name>Cabré, Albert</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165107</id>
<updated>2026-03-14T05:51:46Z</updated>
<published>2023-03-16T00:00:00Z</published>
<summary type="text">Room-Temperature Cu-Catalyzed Amination of Aryl Bromides Enabled by DFT-Guided Ligand Design
Kim, Seoung-Tae; Strauss, Michael J; Cabré, Albert; Buchwald, Stephen L
Ullmann-type C–N coupling reactions represent an important alternative to well-established Pd-catalyzed approaches due to the differing reactivity and the lower cost of Cu. While the design of anionic Cu ligands, particularly those by Ma, has enabled the coupling of various classes of aryl halides and alkyl amines, most methods require conditions that can limit their utility on complex substrates. Herein, we disclose the development of anionic N1,N2-diarylbenzene-1,2-diamine ligands that promote the Cu-catalyzed amination of aryl bromides under mild conditions. Guided by DFT calculations, these ligands were designed to (1) increase the electron density on Cu, thereby increasing the rate of oxidative addition of aryl bromides, and (2) stabilize the active anionic Cu(I) complex via a π-interaction. Under optimized conditions, structurally diverse aryl and heteroaryl bromides and a broad range of alkyl amine nucleophiles, including pharmaceuticals bearing multiple functional groups, were efficiently coupled at room temperature. Combined computational and experimental studies support a mechanism of C–N bond formation that follows a catalytic cycle akin to the well-explored Pd-catalyzed variants. Modification of the ligand structure to include a naphthyl residue resulted in a lower energy barrier to oxidative addition, providing a 30-fold rate increase relative to what is seen with other ligands. Collectively, these results establish a new class of anionic ligands for Cu-catalyzed C–N couplings, which we anticipate may be extended to other Cu-catalyzed C–heteroatom and C–C bond-forming reactions.
</summary>
<dc:date>2023-03-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stereoselective Synthesis of Trisubstituted Alkenes via Copper Hydride-Catalyzed Alkyne Hydroalkylation</title>
<link href="https://hdl.handle.net/1721.1/165106" rel="alternate"/>
<author>
<name>Kutateladze, Dennis A</name>
</author>
<author>
<name>Mai, Binh Khanh</name>
</author>
<author>
<name>Dong, Yuyang</name>
</author>
<author>
<name>Zhang, Yu</name>
</author>
<author>
<name>Liu, Peng</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165106</id>
<updated>2026-03-14T05:51:40Z</updated>
<published>2023-08-04T00:00:00Z</published>
<summary type="text">Stereoselective Synthesis of Trisubstituted Alkenes via Copper Hydride-Catalyzed Alkyne Hydroalkylation
Kutateladze, Dennis A; Mai, Binh Khanh; Dong, Yuyang; Zhang, Yu; Liu, Peng; Buchwald, Stephen L
Alkenes are ubiquitous in organic chemistry, yet many classes of alkenes remain challenging to access by current synthetic methodology. Herein, we report a copper hydride-catalyzed approach for the synthesis of Z-configured trisubstituted alkenes with high stereo- and regioselectivity via alkyne hydroalkylation. A DTBM-dppf-supported Cu catalyst was found to be optimal, providing a substantial increase in product yield compared to reactions conducted with dppf as the ligand. DFT calculations show that the DTBM substitution accelerates alkyne hydrocupration through combined ground- and transition-state effects related to preventing catalyst dimerization and enhancing catalyst–substrate dispersion interactions, respectively. Alkyne hydroalkylation was successfully demonstrated with methyl and larger alkyl tosylate electrophiles to produce a variety of (hetero)aryl-substituted alkenes in moderate to high yields with complete selectivity for the Z-configured products. In the formation of the key C–C bond, computational studies revealed a direct SN2 pathway for alkylation of the vinylcopper intermediate with in situ-formed alkyl iodides.
</summary>
<dc:date>2023-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Room‐Temperature Copper‐Catalyzed Etherification of Aryl Bromides</title>
<link href="https://hdl.handle.net/1721.1/165105" rel="alternate"/>
<author>
<name>Strauss, Michael J</name>
</author>
<author>
<name>Greaves, Megan E</name>
</author>
<author>
<name>Kim, Seoung‐Tae</name>
</author>
<author>
<name>Teijaro, Christiana N</name>
</author>
<author>
<name>Schmidt, Michael A</name>
</author>
<author>
<name>Scola, Paul M</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165105</id>
<updated>2026-03-13T03:07:34Z</updated>
<published>2024-02-15T00:00:00Z</published>
<summary type="text">Room‐Temperature Copper‐Catalyzed Etherification of Aryl Bromides
Strauss, Michael J; Greaves, Megan E; Kim, Seoung‐Tae; Teijaro, Christiana N; Schmidt, Michael A; Scola, Paul M; Buchwald, Stephen L
We disclose the development of a Cu-catalyzed C−O coupling method utilizing a new N1,N2-diarylbenzene-1,2-diamine ligand, L8. Under optimized reaction conditions, structurally diverse aryl and heteroaryl bromides underwent efficient coupling with a variety of alcohols at room temperature using an L8-based catalyst. Notably, the L8-derived catalyst exhibited enhanced activity compared to the L4-based system previously disclosed for C−N coupling, namely the ability to functionalize aryl bromides containing acidic functional groups. Mechanistic studies demonstrate that C−O coupling utilizing L8·Cu involves rate-limiting alkoxide transmetallation, resulting in a mechanism of C−O bond formation that is distinct from previously described Pd-, Cu-, or Ni-based systems. This lower-energy pathway leads to rapid C−O bond formation, a 7-fold increase relative to what is seen with other ligands. The results presented in this report overcome limitations in previously described C−O coupling methods and introduce a new ligand that we anticipate may be useful in other Cu-catalyzed C-heteroatom bond-forming reactions.
</summary>
<dc:date>2024-02-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>CuH-Catalyzed Regio- and Enantioselective Formal Hydroformylation of Vinyl Arenes</title>
<link href="https://hdl.handle.net/1721.1/165104" rel="alternate"/>
<author>
<name>Garhwal, Subhash</name>
</author>
<author>
<name>Dong, Yuyang</name>
</author>
<author>
<name>Mai, Binh Khanh</name>
</author>
<author>
<name>Liu, Peng</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165104</id>
<updated>2026-03-13T03:07:21Z</updated>
<published>2024-05-09T00:00:00Z</published>
<summary type="text">CuH-Catalyzed Regio- and Enantioselective Formal Hydroformylation of Vinyl Arenes
Garhwal, Subhash; Dong, Yuyang; Mai, Binh Khanh; Liu, Peng; Buchwald, Stephen L
A highly enantioselective formal hydroformylation of vinyl arenes enabled by copper hydride (CuH) catalysis is reported. Key to the success of the method was the use of the mild Lewis acid zinc triflate to promote the formation of oxocarbenium electrophiles through the activation of diethoxymethyl acetate. Using the newly developed protocol, a broad range of vinyl arene substrates underwent efficient hydroacetalization reactions to provide access to highly enantioenriched α-aryl acetal products in good yields with exclusively branched regioselectivity. The acetal products could be converted to the corresponding aldehydes, alcohols, and amines with full preservation of the enantiomeric purity. Density functional theory studies support that the key C–C bond-forming event between the alkyl copper intermediate and the oxocarbenium electrophile takes place with inversion of configuration of the Cu–C bond in a backside SE2-type mechanism.
</summary>
<dc:date>2024-05-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cu-Catalyzed Amination of Base-Sensitive Aryl Bromides and the Chemoselective N- and O-Arylation of Amino Alcohols</title>
<link href="https://hdl.handle.net/1721.1/165103" rel="alternate"/>
<author>
<name>Strauss, Michael J</name>
</author>
<author>
<name>Liu, Kaylee X</name>
</author>
<author>
<name>Greaves, Megan E</name>
</author>
<author>
<name>Dahl, Jakob C</name>
</author>
<author>
<name>Kim, Seoung-Tae</name>
</author>
<author>
<name>Wu, Yong-Jin</name>
</author>
<author>
<name>Schmidt, Michael A</name>
</author>
<author>
<name>Scola, Paul M</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165103</id>
<updated>2026-03-13T03:07:37Z</updated>
<published>2024-06-26T00:00:00Z</published>
<summary type="text">Cu-Catalyzed Amination of Base-Sensitive Aryl Bromides and the Chemoselective N- and O-Arylation of Amino Alcohols
Strauss, Michael J; Liu, Kaylee X; Greaves, Megan E; Dahl, Jakob C; Kim, Seoung-Tae; Wu, Yong-Jin; Schmidt, Michael A; Scola, Paul M; Buchwald, Stephen L
We report a general and functional-group-tolerant method for the Cu-catalyzed amination of base-sensitive aryl bromides, including substrates possessing acidic functional groups and small five-membered heteroarenes. The results presented herein substantially expand the scope of Cu-catalyzed C–N coupling reactions. The combination of L8, an anionic N1,N2-diarylbenzene-1,2-diamine ligand, with the mild base NaOTMS leads to the formation of a stable yet reactive catalyst that resists deactivation from coordination to heterocycles or charged intermediates. This system enables the use of low catalyst and ligand loadings. Exploiting the differences in nucleophile deprotonation in C–O and C–N coupling reactions catalyzed by Cu·L8, we developed a method to chemoselectively N- and O-arylate a variety of amino alcohol substrates. Employing NaOt-Bu as the base resulted exclusively in C–O coupling when the amino alcohols featured primary alcohols and more hindered amines or aniline groups. Utilizing NaOTMS completely overrode the steric-based selectivity of these reactions and exclusively promoted C–N coupling regardless of the structure of the amino alcohol. The ability to invert the observed chemoselectivity is distinct from previously described methods that require protecting-group manipulations or rely entirely on steric effects to control reactivity. These results substantially improve the scope of Cu-catalyzed C–N coupling reactions using N1,N2-diarylbenzene-1,2-diamine ligands and introduce a new chemoselective method to arylate amino alcohols.
</summary>
<dc:date>2024-06-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Copper-Catalyzed Amination of Aryl Chlorides under Mild Reaction Conditions</title>
<link href="https://hdl.handle.net/1721.1/165102" rel="alternate"/>
<author>
<name>Ai, Han-Jun</name>
</author>
<author>
<name>Kim, Seoung-Tae</name>
</author>
<author>
<name>Liu, Cecilia</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165102</id>
<updated>2026-03-13T03:07:29Z</updated>
<published>2024-09-16T00:00:00Z</published>
<summary type="text">Copper-Catalyzed Amination of Aryl Chlorides under Mild Reaction Conditions
Ai, Han-Jun; Kim, Seoung-Tae; Liu, Cecilia; Buchwald, Stephen L
We report a mild method for the copper-catalyzed amination of aryl chlorides. Key to the success of the method was the use of highly sterically encumbered N1,N2-diaryl diamine ligands, which resist catalyst deactivation, allowing reactions to proceed at significantly lower temperatures and with a broader scope than current protocols. A sequence of highly chemoselective C–N and C–O cross-coupling reactions was demonstrated, and mechanistic studies indicate that oxidative addition of the Cu catalyst to the aryl chlorides is rate-limiting. We anticipate that the design principles disclosed herein will help motivate further advances in Cu-catalyzed transformations of aryl chlorides.
</summary>
<dc:date>2024-09-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Deactivation-Resistant Dialkylbiarylphosphine Ligand for Pd-Catalyzed Arylation of Secondary Amines</title>
<link href="https://hdl.handle.net/1721.1/165101" rel="alternate"/>
<author>
<name>Feng, Kaibo</name>
</author>
<author>
<name>Raguram, Elaine Reichert</name>
</author>
<author>
<name>Howard, James R</name>
</author>
<author>
<name>Peters, Ellyn</name>
</author>
<author>
<name>Liu, Cecilia</name>
</author>
<author>
<name>Sigman, Matthew S</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165101</id>
<updated>2026-03-13T03:07:27Z</updated>
<published>2024-09-17T00:00:00Z</published>
<summary type="text">Development of a Deactivation-Resistant Dialkylbiarylphosphine Ligand for Pd-Catalyzed Arylation of Secondary Amines
Feng, Kaibo; Raguram, Elaine Reichert; Howard, James R; Peters, Ellyn; Liu, Cecilia; Sigman, Matthew S; Buchwald, Stephen L
Despite the prevalence of N-heteroarenes in small-molecule pharmaceuticals, Pd-catalyzed C-N cross-coupling reactions of aryl halides and amines containing these rings remain challenging due to their ability to displace the supporting ligand via coordination to the metal center. To address this limitation, we report the development of a highly robust Pd catalyst supported by a new dialkylbiarylphosphine ligand, FPhos. The FPhos-supported catalyst effectively resists N-heteroarene-mediated catalyst deactivation to readily promote C-N coupling between a wide variety of Lewis-basic aryl halides and secondary amines, including densely functionalized pharmaceuticals. Mechanistic and structural investigations, as well as principal component analysis and density functional theory, elucidated two key design features that enable FPhos to overcome the limitations of previous ligands. First, the ligated Pd complex is stabilized through its conformational preference for the O-bound isomer, which likely resists coordination by N-heteroarenes. Second, 3',5'-disubstitution on the non-phosphorus-containing ring of FPhos creates the ideal steric environment around the Pd center, which facilitates binding by larger secondary amines while mitigating the formation of off-cycle palladacycle species.
</summary>
<dc:date>2024-09-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kinetic Modeling Enables Understanding of Off-Cycle Processes in Pd-Catalyzed Amination of Five-Membered Heteroaryl Halides</title>
<link href="https://hdl.handle.net/1721.1/165100" rel="alternate"/>
<author>
<name>Raguram, Elaine Reichert</name>
</author>
<author>
<name>Dahl, Jakob C</name>
</author>
<author>
<name>Jensen, Klavs F</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165100</id>
<updated>2026-03-13T03:07:30Z</updated>
<published>2024-11-20T00:00:00Z</published>
<summary type="text">Kinetic Modeling Enables Understanding of Off-Cycle Processes in Pd-Catalyzed Amination of Five-Membered Heteroaryl Halides
Raguram, Elaine Reichert; Dahl, Jakob C; Jensen, Klavs F; Buchwald, Stephen L
The mechanism of Pd-catalyzed amination of five-membered heteroaryl halides was investigated by integrating experimental kinetic analysis with kinetic modeling through predictive testing and likelihood ratio analysis, revealing an atypical productive coupling pathway and multiple off-cycle events. The GPhos-supported Pd catalyst, along with the moderate-strength base NaOTMS, was previously found to promote efficient coupling between five-membered heteroaryl halides and secondary amines. However, slight deviations from the optimal concentration, temperature, and/or solvent resulted in significantly lower yields, contrary to typical reaction optimization trends. We found that the coupling of 4-bromothiazole with piperidine proceeds through an uncommon mechanism in which the NaOTMS base, rather than the amine, binds first to the oxidative addition complex; the resulting OTMS-bound Pd species is the resting state. Formation of the Pd-amido complex via base/amine exchange was identified as the turnover-limiting step, unlike other reported catalyst systems for which reductive elimination is turnover-limiting. We determined that the amine-bound Pd complex, usually an on-cycle intermediate, is instead a reversibly generated off-cycle species, and that base-mediated decomposition of 4-bromothiazole is the primary irreversible catalyst deactivation pathway. Predictive testing and kinetic modeling were key to the identification of these off-cycle processes, providing insight into minor mechanistic pathways that are difficult to observe experimentally. Collectively, this report reveals the unique enabling features of the Pd-GPhos/NaOTMS system, implementing mechanistic insights to improve the yields of particularly challenging coupling reactions. Moreover, these findings highlight the utility of applying predictive tests to kinetic models for the rapid evaluation of mechanistic possibilities in small-molecule catalytic systems.
</summary>
<dc:date>2024-11-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of the response of radiochromic film to quasi-monoenergetic x rays through a cross-calibration with image plates</title>
<link href="https://hdl.handle.net/1721.1/165099" rel="alternate"/>
<author>
<name>Buschmann, BI</name>
</author>
<author>
<name>Cufari, M</name>
</author>
<author>
<name>Vanderloo, N</name>
</author>
<author>
<name>Vargas, J</name>
</author>
<author>
<name>Foo, BC</name>
</author>
<author>
<name>DeVault, A</name>
</author>
<author>
<name>Dannhoff, SG</name>
</author>
<author>
<name>Evans, TE</name>
</author>
<author>
<name>Johnson, TM</name>
</author>
<author>
<name>Kunimune, JH</name>
</author>
<author>
<name>Lawrence, Y</name>
</author>
<author>
<name>Pearcy, JA</name>
</author>
<author>
<name>Reichelt, BL</name>
</author>
<author>
<name>Wink, CW</name>
</author>
<author>
<name>Russell, L</name>
</author>
<author>
<name>Gatu Johnson, M</name>
</author>
<author>
<name>Petrasso, RD</name>
</author>
<author>
<name>Frenje, JA</name>
</author>
<id>https://hdl.handle.net/1721.1/165099</id>
<updated>2026-03-13T03:07:23Z</updated>
<published>2024-09-20T00:00:00Z</published>
<summary type="text">Characterization of the response of radiochromic film to quasi-monoenergetic x rays through a cross-calibration with image plates
Buschmann, BI; Cufari, M; Vanderloo, N; Vargas, J; Foo, BC; DeVault, A; Dannhoff, SG; Evans, TE; Johnson, TM; Kunimune, JH; Lawrence, Y; Pearcy, JA; Reichelt, BL; Wink, CW; Russell, L; Gatu Johnson, M; Petrasso, RD; Frenje, JA
Radiochromic film (RCF) and image plates (IPs) are both commonly used detectors in diagnostics fielded at inertial confinement fusion (ICF) and high-energy-density physics (HEDP) research facilities. Due to the intense x-ray background in all ICF/HEDP experiments, accurately calibrating the optical density of RCF as a function of x-ray dose, and the photostimulated luminescence per photon of IPs as a function of x-ray energy, is necessary for interpreting experimental results. Various measurements of the sensitivity curve of different IPs to x rays have been performed [Izumi et al., Proc. SPIE 8850, 885006 (2013) and Rosenberg et al., Rev. Sci. Instrum. 90(1), 013506 (2019)]; however, calibrating RCF is a tedious process that depends on factors such as the orientation in which the RCF is scanned in the film scanner and the batch of RCF used. These issues can be mitigated by cross-calibrating RCF with IPs to enable the use of IPs for the determination of dose on the RCF without scanning the RCF. Here, the first cross-calibration of RCF with IPs to quasi-monoenergetic titanium, copper, and molybdenum K-line x rays is presented. It is found that the IP-inferred dose rates on the RCF for the Ti and Mo x rays agree well with the measured dose rates, while the IP-inferred dose rate for the Cu x rays is larger than the measured dose rate by ∼2×. Explanations for this discrepancy and plans for future work are discussed.
</summary>
<dc:date>2024-09-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determination of the response for the National Ignition Facility particle time of flight (PTOF) detector using single particle counting</title>
<link href="https://hdl.handle.net/1721.1/165098" rel="alternate"/>
<author>
<name>Lawrence, Y</name>
</author>
<author>
<name>Reichelt, BL</name>
</author>
<author>
<name>Wink, CW</name>
</author>
<author>
<name>Rigon, G</name>
</author>
<author>
<name>Johnson, M Gatu</name>
</author>
<author>
<name>Li, CK</name>
</author>
<author>
<name>Frenje, JA</name>
</author>
<id>https://hdl.handle.net/1721.1/165098</id>
<updated>2026-03-13T03:07:36Z</updated>
<published>2024-10-02T00:00:00Z</published>
<summary type="text">Determination of the response for the National Ignition Facility particle time of flight (PTOF) detector using single particle counting
Lawrence, Y; Reichelt, BL; Wink, CW; Rigon, G; Johnson, M Gatu; Li, CK; Frenje, JA
The Particle Time of Flight (PTOF) detector is a chemical vapor deposition diamond-based detector used to measure bang times in low-yield (≲ 10¹⁵ neutrons) experiments at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL). Historically, the impulse response for PTOF diamond detectors has been obtained from x-ray timing shots on the NIF and shots on the MegaRay pulsed electron accelerator at LLNL. The impulse response may alternatively be obtained using single particle interactions with the detector, at substantially lower cost and higher frequency compared to NIF timing shots, which typically occur months apart. Here, the response of a PTOF detector setup is characterized by statistically averaging a large number of single particle waveforms. A high fidelity instrument response function can be constructed in this way. This is confirmed by comparison of the single particle counting-constructed response to the impulse response function measured for the same detector at LLNL’s MegaRay facility.
</summary>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a compact magnetic spectrometer for use at the OMEGA Laser Facility and the National Ignition Facility</title>
<link href="https://hdl.handle.net/1721.1/165097" rel="alternate"/>
<author>
<name>Pearcy, JA</name>
</author>
<author>
<name>Russell, L</name>
</author>
<author>
<name>Kabadi, NV</name>
</author>
<author>
<name>Johnson, TM</name>
</author>
<author>
<name>Adrian, PA</name>
</author>
<author>
<name>Gatu-Johnson, M</name>
</author>
<author>
<name>Casco, E</name>
</author>
<author>
<name>Palmisano, K</name>
</author>
<author>
<name>Gates, G</name>
</author>
<author>
<name>Burgett, T</name>
</author>
<author>
<name>Scott, M</name>
</author>
<author>
<name>Petrasso, RD</name>
</author>
<author>
<name>Li, CK</name>
</author>
<author>
<name>Frenje, J</name>
</author>
<id>https://hdl.handle.net/1721.1/165097</id>
<updated>2026-03-13T03:07:32Z</updated>
<published>2024-10-03T00:00:00Z</published>
<summary type="text">Development of a compact magnetic spectrometer for use at the OMEGA Laser Facility and the National Ignition Facility
Pearcy, JA; Russell, L; Kabadi, NV; Johnson, TM; Adrian, PA; Gatu-Johnson, M; Casco, E; Palmisano, K; Gates, G; Burgett, T; Scott, M; Petrasso, RD; Li, CK; Frenje, J
Measurement of proton spectra is an important diagnostic for a variety of high energy density physics experiments. Current diagnostics are either not designed to capture the spectrum of low-energy protons or are unsuitable for high debris experiments. To bridge the gap, a new CR-39 based compact magnetic spectrometer (MagSpec) has been developed to measure proton spectra in the 1–20 MeV energy range, with a particular focus on the low-energy (1–6 MeV) spectrum, for use in experiments at the OMEGA Laser Facility and the National Ignition Facility (NIF). In the MagSpec diagnostic, protons of different energies are dispersed as they pass through a magnetic field before impinging on a differentially filtered CR-39 surface, resulting in a spatial distribution of CR-39 tracks that corresponds to the energy spectrum. In this paper, we discuss details of the design and implementation of MagSpec on the NIF and OMEGA.
</summary>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temperature stabilization of a lab space at 10 mK-level over a day</title>
<link href="https://hdl.handle.net/1721.1/165096" rel="alternate"/>
<author>
<name>Fife, Dylan</name>
</author>
<author>
<name>Shin, Dong-Chel</name>
</author>
<author>
<name>Sudhir, Vivishek</name>
</author>
<id>https://hdl.handle.net/1721.1/165096</id>
<updated>2026-03-13T03:07:26Z</updated>
<published>2024-09-27T00:00:00Z</published>
<summary type="text">Temperature stabilization of a lab space at 10 mK-level over a day
Fife, Dylan; Shin, Dong-Chel; Sudhir, Vivishek
Temperature fluctuations over long time scales (≳ 1 h) are an insidious problem for precision measurements. In optical laboratories, the primary effect of temperature fluctuations is drifts in optical circuits over spatial scales of a few meters and temporal scales extending beyond a few minutes. We present a lab-scale environment temperature control system approaching 10 mK-level temperature instability across a lab for integration times above an hour and extending to a day. This is achieved by passive isolation of the laboratory space from the building walls using a circulating air gap and an active control system feeding back to heating coils at the outlet of the laboratory’s Heating-Ventilation-Air-Conditioning (HVAC) unit. These techniques together result in 20 dB suppression of the temperature power spectrum across the lab at 10⁻⁴ Hz—approaching the limit set by statistical coherence of the temperature field—and 10 mK Allan deviation around 15 °C after an hour of averaging, which is an order of magnitude better than any previous report for a full laboratory.
</summary>
<dc:date>2024-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incentives to Comply with the Minimum Wage in the United States and the United Kingdom</title>
<link href="https://hdl.handle.net/1721.1/165095" rel="alternate"/>
<author>
<name>Stansbury, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/165095</id>
<updated>2026-03-13T03:07:19Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Incentives to Comply with the Minimum Wage in the United States and the United Kingdom
Stansbury, Anna
There is substantial evidence of minimum wage non-compliance in the United States and the United Kingdom. In this article, the author compiles new, comprehensive data on the costs that minimum wage violators incur when non-compliance is detected. In both countries, the costs violators face are often little more than the money they saved by underpaying. To have an incentive to comply under existing penalty regimes, typical US firms would thus have to expect a 47% to 83% probability of detection by the Department of Labor (DOL), or a 25% probability of a successful Fair Labor Standards Act (FLSA) suit. In the United Kingdom, typical firms would have to expect a 44% to 56% probability of detection. Actual probabilities of detection are substantially lower than this for many firms and would likely remain so even with realistic increases in enforcement capacity. Improved enforcement alone is thus insufficient: Expected penalties must also substantially increase to ensure that most firms have an incentive to comply.
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Euclidean motion planning with graphs of geodesically convex sets</title>
<link href="https://hdl.handle.net/1721.1/165094" rel="alternate"/>
<author>
<name>Cohn, Thomas</name>
</author>
<author>
<name>Petersen, Mark</name>
</author>
<author>
<name>Simchowitz, Max</name>
</author>
<author>
<name>Tedrake, Russ</name>
</author>
<id>https://hdl.handle.net/1721.1/165094</id>
<updated>2026-03-13T03:07:12Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Non-Euclidean motion planning with graphs of geodesically convex sets
Cohn, Thomas; Petersen, Mark; Simchowitz, Max; Tedrake, Russ
Computing optimal, collision-free trajectories for high-dimensional systems is a challenging and important problem. Sampling-based planners struggle with the dimensionality, whereas trajectory optimizers may get stuck in local minima due to inherent nonconvexities in the optimization landscape. The use of mixed-integer programming to encapsulate these nonconvexities and find globally optimal trajectories has recently shown great promise, thanks in part to tight convex relaxations and efficient approximation strategies that greatly reduce runtimes. These approaches were previously limited to Euclidean configuration spaces, precluding their use with mobile bases or continuous revolute joints. In this paper, we handle such scenarios by modeling configuration spaces as Riemannian manifolds, and we describe a reduction procedure for the zero-curvature case to a mixed-integer convex optimization problem. We further present a method for obtaining approximate solutions via piecewise-linear approximations that is applicable to manifolds of arbitrary curvature. We demonstrate our results on various robot platforms, including producing efficient collision-free trajectories for a PR2 bimanual mobile manipulator.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extended state infrastructure power in an age of networked competition: The cases of Thailand and Taiwan</title>
<link href="https://hdl.handle.net/1721.1/165093" rel="alternate"/>
<author>
<name>Stokols, Andrew</name>
</author>
<author>
<name>Kollar, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/165093</id>
<updated>2026-03-13T03:07:28Z</updated>
<published>2024-11-17T00:00:00Z</published>
<summary type="text">Extended state infrastructure power in an age of networked competition: The cases of Thailand and Taiwan
Stokols, Andrew; Kollar, Justin
Scholars have highlighted the emergence of infrastructure as a key domain in the struggle over network centrality in what some call the ‘Second Cold War’ between the U.S. and China. We qualify this ‘infrastructural turn’ by drawing attention to the contingent nature of state infrastructural power as depending on key domestic firms that often serve as intermediaries between domestic infrastructure and global supply chains or international partners. Utilising empirical case studies based on field research conducted between 2021 and 2023 in Thailand and Taiwan, we analyse the ways in which state infrastructure power is exercised through strategic negotiation between national politics of the state and territorial investment decisions of multinational and major domestic firms within global supply chains. The study highlights how outcomes of state projects to foster connectivity or centrality in networks are shaped by contingent and sometimes ad-hoc coalitions between state agencies and domestic and multinational companies with their own interests and agency. In the case of Taiwan, the centrality of Taiwan Semiconductor Manufacturing Company (TSMC) to global supply chains makes it an important player amidst continued U.S.-China tension. In Thailand, CP Group’s connections to China have afforded it a role as an interlocutor between Thailand and China, allowing it to obtain state infrastructure contracts. Through comparative case studies the paper complicates both ‘globalist’ and methodologically nationalist perspectives on the ‘infrastructural turn’, and introduces the concept of ‘extended state infrastructural power’ to account for this complex, networked exercise of state authority.
</summary>
<dc:date>2024-11-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncovering Patterns in Overdose Deaths: An Analysis of Spike Identification in Fatal Drug Overdose Data in Massachusetts, 2017-2023</title>
<link href="https://hdl.handle.net/1721.1/165092" rel="alternate"/>
<author>
<name>Lee, Hannah</name>
</author>
<author>
<name>Otero-Leon, Daniel</name>
</author>
<author>
<name>Dong, Huiru</name>
</author>
<author>
<name>Stringfellow, Erin J</name>
</author>
<author>
<name>Jalali, Mohammad S</name>
</author>
<id>https://hdl.handle.net/1721.1/165092</id>
<updated>2026-03-13T03:07:33Z</updated>
<published>2026-01-01T00:00:00Z</published>
<summary type="text">Uncovering Patterns in Overdose Deaths: An Analysis of Spike Identification in Fatal Drug Overdose Data in Massachusetts, 2017-2023
Lee, Hannah; Otero-Leon, Daniel; Dong, Huiru; Stringfellow, Erin J; Jalali, Mohammad S
Objectives:&#13;
Yearly rolling aggregate trends or rates are commonly used to analyze trends in overdose deaths, but focusing on long-term trends can obscure short-term fluctuations (eg, daily spikes). We analyzed data on spikes in daily fatal overdoses and how various spike detection thresholds influence the identification of spikes.&#13;
Materials and Methods:&#13;
We used a spike detection algorithm to identify spikes among 16 660 drug-related overdose deaths (from any drug) reported in Massachusetts’ vital statistics from 2017 through 2023. We adjusted the parameters of the algorithm to define spikes in 3 distinct scenarios: deaths exceeding 2 adjusted moving SDs above the 7-, 30-, and 90-day adjusted moving average.&#13;
Results:&#13;
Our results confirmed the on-the-ground observation that there are days when many more people die of overdoses than would be expected based on fluctuations due to differences among people alone. We identified spikes on 5.8% to 20.6% of the days across the 3 scenarios, annually, constituting 11.1% to 31.6% of all overdose deaths. The absolute difference in percentage points of days identified as spikes varied from 5.2 to 11.5 between 7- and 30-day lags and from 0 to 4.6 between 30- and 90-day lags across years. When compared with the adjusted moving average across the 3 scenarios, in 2017 an average of 3.9 to 5.5 additional deaths occurred on spike days, while in 2023 the range was 3.7 to 6.0.&#13;
Practice Implications:&#13;
A substantial percentage of deaths occurred annually on spike days, highlighting the need for effectively monitoring short-term overdose trends. Moreover, our study serves as a foundational analysis for future research into exogenous events that may contribute to spikes in overdose deaths, aiming to prevent future deaths.
</summary>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experiential learning amid disequilibrium: Attuning to student emotions</title>
<link href="https://hdl.handle.net/1721.1/165091" rel="alternate"/>
<author>
<name>O’Flanagan, Sinead E</name>
</author>
<author>
<name>Y Jester, Michellana</name>
</author>
<id>https://hdl.handle.net/1721.1/165091</id>
<updated>2026-03-13T03:07:24Z</updated>
<published>2025-01-09T00:00:00Z</published>
<summary type="text">Experiential learning amid disequilibrium: Attuning to student emotions
O’Flanagan, Sinead E; Y Jester, Michellana
Educators recognize the significant role emotions play in experiential learning (EL), particularly in how they support students through the inherent emotion work. However, the traditional design of experiential learning theory (ELT) in higher education (HE) often presupposes a stable environment, which overlooks the impact of unpredictable external factors on students’ emotions and learning. Despite its critical importance, emotion work in EL remains underexplored, with emotional dynamics often obscured or dismissed as isolated incidents. This study sheds light on the heightened emotional challenges that arise during periods of sustained disequilibrium, such as the COVID-19-induced restrictions. It provides novel insights into the dynamic interplay of emotions and learning progression within EL frameworks, drawing on perspectives from EL educators, advisors, and students. The research underscores the importance of emotion-focused dialogue, educator-student connection, and assimilating autonomy needs in EL amid disequilibrium. It also identifies often-neglected elements in EL frameworks, such as students “sharing struggles” or “valuing work efforts,” alongside educator strategies like “personal anchoring.” The findings contribute to ELT by proposing adaptive strategies that integrate emotion work into pedagogical frameworks, enhancing reflection and conceptualization practices, and extending ELT’s applicability across diverse educational and work-based management learning settings.
</summary>
<dc:date>2025-01-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Associate Vice President for Research Administration</title>
<link href="https://hdl.handle.net/1721.1/165090" rel="alternate"/>
<author>
<name>White, Anne</name>
</author>
<id>https://hdl.handle.net/1721.1/165090</id>
<updated>2026-03-12T07:26:11Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Associate Vice President for Research Administration
White, Anne
This report contains the following sections: Overview, Research Administration Services, OSATT Core, Technology Licensing Office, Corporate Relations, Research Systems &amp; Support, and Cost Analysis.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Insulin Delivery Pumps for Human Spaceflight: Steps Toward an Accessible Space Future</title>
<link href="https://hdl.handle.net/1721.1/165089" rel="alternate"/>
<author>
<name>Horn, Kyle J</name>
</author>
<author>
<name>Hoffman, Jeffrey A</name>
</author>
<id>https://hdl.handle.net/1721.1/165089</id>
<updated>2026-03-12T07:25:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Insulin Delivery Pumps for Human Spaceflight: Steps Toward an Accessible Space Future
Horn, Kyle J; Hoffman, Jeffrey A
Commercially available insulin pumps for treatment of diabetes mellitus are currently not qualified to operate in the space environment. This work rigorously tested the fluid delivery performance of a Tandem t:slim X2 insulin pump in both micro- and hypergravity during a parabolic microgravity research flight. The parabolic research flight environment serves as an analogue to the types of transient gravitational loadings experienced during human-led missions, which provides a foundation to expand testing to suborbital and orbital flights in addition to other extreme environmental tests for wilderness dependency. The results of the flight data showed no significant difference between fluid delivery performance at 0, 1, and 2g acceleration regimes, nor at the transitions between gravity environments. Recommendations are made for further experimentation and qualification tests before use in future spaceflight missions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the nonlinear Eshelby inclusion problem and its isomorphic growth limit</title>
<link href="https://hdl.handle.net/1721.1/165088" rel="alternate"/>
<author>
<name>Bonavia, Joseph E</name>
</author>
<author>
<name>Chockalingam, S</name>
</author>
<author>
<name>Cohen, Tal</name>
</author>
<id>https://hdl.handle.net/1721.1/165088</id>
<updated>2026-03-12T07:24:57Z</updated>
<published>2026-01-01T00:00:00Z</published>
<summary type="text">On the nonlinear Eshelby inclusion problem and its isomorphic growth limit
Bonavia, Joseph E; Chockalingam, S; Cohen, Tal
In the late 1950s, Eshelby’s linear solutions for the deformation field inside an ellipsoidal inclusion and, subsequently, the infinite matrix in which it is embedded were published. The solutions’ ability to capture the behavior of an orthotropically symmetric shaped inclusion made them invaluable in efforts to understand the behavior of defects within, and the micromechanics of, metals and other stiff materials throughout the rest of the 20th century. Over half a century later, we wish to understand the analogous effects of microstructure on the behavior of soft materials, both organic and synthetic, but in order to do so, we must venture beyond the linear limit, far into the nonlinear regime. However, no solutions to these analogous problems currently exist for non-spherical inclusions. In this work, we present an accurate semi-inverse solution for the elastic field in an isotropically growing spheroidal inclusion embedded in an infinite matrix, both made of the same incompressible neo-Hookean material. We also investigate the behavior of such an inclusion as it grows infinitely large, demonstrating the existence of a non-spherical asymptotic shape and an associated asymptotic pressure. We call this the isomorphic limit, and the associated pressure the isomorphic pressure.
</summary>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Risk for Astronaut Involvement in In-Space Manufacturing: Analog Field Testing and Future Planetary Surface Procedures</title>
<link href="https://hdl.handle.net/1721.1/165087" rel="alternate"/>
<author>
<name>MacRobbie, Madelyn</name>
</author>
<author>
<name>Patel, Palak B.</name>
</author>
<id>https://hdl.handle.net/1721.1/165087</id>
<updated>2026-03-12T07:24:54Z</updated>
<published>2025-03-29T00:00:00Z</published>
<summary type="text">Evaluating Risk for Astronaut Involvement in In-Space Manufacturing: Analog Field Testing and Future Planetary Surface Procedures
MacRobbie, Madelyn; Patel, Palak B.
Introduction&#13;
A key objective of the NASA Artemis program, together with its international and commercial partners, is to establish a sustained human presence on the Moon. NASA aims to establish a lunar economy, increasing the need for infrastructure to support human habitation and facilitate growth. In-space manufacturing (ISM) coupled with in situ resource utilization (ISRU) can reduce launch mass and reduce the dependency on Earth resupply for long-term habitation, enabling rapid expansion. However, the space environment introduces unique challenges compared to Earth, such as the absence of an atmosphere, reduced gravity levels, and high consequences of human-machine interactions given the barrier to evacuating an astronaut injured in a manufacturing accident on the Moon, necessitating new safety standards for ISM processes.&#13;
Methods&#13;
This study proposes the application of a modified analytical hierarchy process (AHP) to identify high-risk aspects of crew procedures in molten regolith electrolysis (MRE) for both Earth-based analog testing and lunar production.&#13;
Results&#13;
The modified AHP assists in pinpointing areas needing hazard mitigation to protect crew members, enabling the improvement of safety standards for MRE in both environments.&#13;
Conclusion&#13;
Findings will inform the development of robust safety protocols for ISM, crucial for the success of NASA's Artemis missions and the broader goal of sustained human presence on the Moon and Mars.
</summary>
<dc:date>2025-03-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Origins of Face Responses in the Human Cortex: fNIRS and fMRI Evidence From Infants</title>
<link href="https://hdl.handle.net/1721.1/165086" rel="alternate"/>
<author>
<name>Saxe, Rebecca</name>
</author>
<author>
<name>Kosakowski, Heather L</name>
</author>
<id>https://hdl.handle.net/1721.1/165086</id>
<updated>2026-03-12T07:25:03Z</updated>
<published>2025-10-01T00:00:00Z</published>
<summary type="text">Origins of Face Responses in the Human Cortex: fNIRS and fMRI Evidence From Infants
Saxe, Rebecca; Kosakowski, Heather L
In adults, cortical regions in the fusiform face area (FFA), superior temporal sulcus (STS), and medial prefrontal cortex (MPFC) respond selectively to faces but underlie distinct perceptual and social processes. When do each of these regions, and their distinctive functions, develop? We reviewed recent studies of awake human infants’ cortical responses to faces using functional near-infrared spectroscopy (fNIRS) and functional MRI (fMRI). The results converge and do not support a slow, sequential posterior-to-anterior development of face-selective responses. Instead, cortical face-selective responses arise very early and simultaneously in infancy and may reflect distinctively social processes from the start.
</summary>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Basketball Analytics Investment on National Basketball Association (NBA) Team Performance</title>
<link href="https://hdl.handle.net/1721.1/165085" rel="alternate"/>
<author>
<name>Wang, Henry</name>
</author>
<author>
<name>Sarker, Arnab</name>
</author>
<author>
<name>Hosoi, Anette</name>
</author>
<id>https://hdl.handle.net/1721.1/165085</id>
<updated>2026-03-12T07:25:02Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">The Effect of Basketball Analytics Investment on National Basketball Association (NBA) Team Performance
Wang, Henry; Sarker, Arnab; Hosoi, Anette
In the National Basketball Association (NBA), basketball data and analytics is an area of significant financial investment for all 30 franchises, despite there being little quantitative evidence demonstrating analytics adoption actually improves team-level performance. This study seeks to measure the return on investment of analytics on NBA team success in a time of great demand for analytical front office personnel. Using a two-way fixed effects modeling approach, we identify the causal effect of analytics department headcounts on regular season wins using 12 years of season-level data for each team. We find a positive and statistically significant effect, suggesting clubs that invest more in analytics tend to outperform competitors when controlling for roster characteristics, injuries, difficulty of schedule, and team-specific and time-specific effects. This research contributes to the body of literature affirming the value of data analytics for organizational performance and supports current investments in analytics being made by NBA teams.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Workshop on Noninvasive Glucose Monitoring 2024</title>
<link href="https://hdl.handle.net/1721.1/165084" rel="alternate"/>
<author>
<name>Kang, Jeon Woong</name>
</author>
<author>
<name>Arnold, Mark A</name>
</author>
<author>
<name>Steenkamp, Devin</name>
</author>
<author>
<name>Tapsak, Mark A</name>
</author>
<author>
<name>Mäntele, Werner</name>
</author>
<author>
<name>Khang, Yoonho</name>
</author>
<author>
<name>Jue, Miyeon</name>
</author>
<author>
<name>So, Peter TC</name>
</author>
<id>https://hdl.handle.net/1721.1/165084</id>
<updated>2026-03-12T07:25:00Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Workshop on Noninvasive Glucose Monitoring 2024
Kang, Jeon Woong; Arnold, Mark A; Steenkamp, Devin; Tapsak, Mark A; Mäntele, Werner; Khang, Yoonho; Jue, Miyeon; So, Peter TC
This first workshop on noninvasive glucose monitoring (NIGM) was held at the Massachusetts Institute of Technology (MIT) on October 30, 2024. Six invited speakers, representing industry, academia, and clinics, gave presentations that covered (1) an overview of the NIGM technologies, (2) the state of the art in NIGM technologies, such as near-infrared (NIR), mid-infrared (IR), photoacoustic, and Raman spectroscopies, (3) minimally invasive implantable continuous glucose monitoring (CGM) sensors, and (4) a clinician’s perspective on the impact of the current CGM devices for patient care.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The community test tube of American civilization: Burt and Ethel Aginsky’s Social Science Field Laboratory, 1939–47</title>
<link href="https://hdl.handle.net/1721.1/165083" rel="alternate"/>
<author>
<name>Kapsalakis, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/165083</id>
<updated>2026-03-12T07:25:01Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">The community test tube of American civilization: Burt and Ethel Aginsky’s Social Science Field Laboratory, 1939–47
Kapsalakis, Lauren
The Social Science Field Laboratory (SSFL, 1939–47), a field school in the Ukiah Valley that trained students in social scientific and anthropological methodology, sheds light on a period in anthropology when methods were shifting from objective empiricism to meaningful participation. As analytic tools for framing the study of society failed to keep pace with social change, sociopolitical trends inside and outside anthropology situated a valley in northern California as the opportune place to gather a sample of ‘American history in vitro’. Founded by Columbia-trained anthropologists Burt and Ethel Aginsky, the SSFL responded to trends inside and outside anthropology. As the Great Depression directed anthropologists’ attention to the study of practical, modern problems in complex American communities—such as race relations, immigration, modernization, and urbanization—funding agencies strengthened the relations between sociology and anthropology and encouraged the development of interdisciplinary approaches. The Aginskys conceived of the Ukiah Valley as a ‘community test-tube of American civilization’, where scientists from all disciplines ‘can come for a convenient sample of the United States, past and present’. In teaching students how to collect data in the field, the Aginskys pierced the widely held notion that ethnographic technique cannot be taught but must be experienced by the lone individual in the field.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring How Organizational Actors Experience Evaluation and Its Influence: A Q-Methodological Study</title>
<link href="https://hdl.handle.net/1721.1/165082" rel="alternate"/>
<author>
<name>Kelly, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/165082</id>
<updated>2026-03-12T07:24:59Z</updated>
<published>2025-04-23T00:00:00Z</published>
<summary type="text">Exploring How Organizational Actors Experience Evaluation and Its Influence: A Q-Methodological Study
Kelly, Catherine
This article contributes to research on evaluation by examining how organizational actors respond to and use evaluation imposed on them within an evaluation system. Drawing on Henry and Mark's theory of evaluation influence, this study uses Q-methodology to explore how staff within English higher education providers experience evaluation and its influence on their widening participation practice and strategy decision-making. The experiences of organizational actors are examined and classified into four types: strategic practitioners, pragmatic practitioners, staff with indirect involvement in widening participation, and evaluation enthusiasts. Through analyzing these experiences, the findings illustrate the diverse ways organizational actors are influenced by evaluation within evaluation systems. To deepen our understanding of evaluation influence in the contexts of evaluation systems, this article recommends explicitly embedding organizational theories into future theories of evaluation influence and provides suggestions for future research on the topic.
</summary>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping the Caregiver Experience: Predicting Dimensions of Caregiver Strain Through Task-Based Profiles</title>
<link href="https://hdl.handle.net/1721.1/165081" rel="alternate"/>
<author>
<name>Brady, Samantha</name>
</author>
<author>
<name>Ashebir, Sophia</name>
</author>
<author>
<name>D’Ambrosio, Lisa</name>
</author>
<author>
<name>Balmuth, Alexa</name>
</author>
<author>
<name>Felts, Adam</name>
</author>
<author>
<name>Lee, Chaiwoo</name>
</author>
<id>https://hdl.handle.net/1721.1/165081</id>
<updated>2026-03-12T07:24:55Z</updated>
<published>2026-01-01T00:00:00Z</published>
<summary type="text">Mapping the Caregiver Experience: Predicting Dimensions of Caregiver Strain Through Task-Based Profiles
Brady, Samantha; Ashebir, Sophia; D’Ambrosio, Lisa; Balmuth, Alexa; Felts, Adam; Lee, Chaiwoo
Objective: Family caregiving is a prevalent, diverse, and often challenging experience. We develop caregiving activity profiles to better understand how sets of care-tasks contribute to various aspects of strain.&#13;
Methods: Using diary data from a survey of 213 family caregivers in the U.S., we perform latent class analysis to group commonly occurring care-related tasks into activity profiles. We then use these classifications to predict physical, financial, and emotional strain.&#13;
Main Findings: We identified 4 unique activity profiles based on a set of 36 daily caregiving activities performed. Activity profiles varied significantly across the three analyzed strain dimensions.&#13;
Conclusion: Activity profiles present opportunities to better understand how caregiving tasks are related to specific types of caregiving strain.
</summary>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Your home is not a school: The limits of homeschooling as a political practice</title>
<link href="https://hdl.handle.net/1721.1/165080" rel="alternate"/>
<author>
<name>Pavel, Sonia Maria</name>
</author>
<author>
<name>Cynamon, Jeremy Kingston</name>
</author>
<id>https://hdl.handle.net/1721.1/165080</id>
<updated>2026-03-12T07:24:53Z</updated>
<published>2025-04-18T00:00:00Z</published>
<summary type="text">Your home is not a school: The limits of homeschooling as a political practice
Pavel, Sonia Maria; Cynamon, Jeremy Kingston
Homeschooling is on the rise. It appeals to very different perspectives and ideologies that tend not to have common ground, from classical conservative to radical progressive. But the justifications for the practice are weak. In this paper, we build a case against the “home school” as a political practice using the existing commitments of liberal, conservative, and democratic theories of education. Whether education should aim at the cultivation of children's autonomy, their formation as members of cultural communities, or their training as democratic citizens, there are reasons to doubt that the practice of homeschooling can fulfill our educational goals. As such, we argue that liberals, conservatives, and democrats each have their own motivations to oppose homeschooling as an institutional alternative to traditional schools. Through our critiques, we also advance a metatheoretical argument in favor of centering the aims of education in our philosophical and political debates.
</summary>
<dc:date>2025-04-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Qualitative Assessment of Terrestrial Care Settings to Inform Self-sufficient Spaceflight Medical Care</title>
<link href="https://hdl.handle.net/1721.1/165079" rel="alternate"/>
<author>
<name>Porter, Allison</name>
</author>
<author>
<name>Arquilla, Katya</name>
</author>
<author>
<name>Stankovic, Aleksandra</name>
</author>
<id>https://hdl.handle.net/1721.1/165079</id>
<updated>2026-03-12T07:24:52Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Qualitative Assessment of Terrestrial Care Settings to Inform Self-sufficient Spaceflight Medical Care
Porter, Allison; Arquilla, Katya; Stankovic, Aleksandra
Introduction&#13;
Long communication latencies in exploration spaceflight will necessitate in situ resolution to medical problems. Integrating automation into the care paradigm can address challenges posed by resource gaps inherent to spaceflight operations. However, it is not clear what aspects of exploration care are most well suited for automation integration.&#13;
Methods&#13;
To probe the potential role of automation in spaceflight medicine, we began by decomposing the human-automation system to first characterize the work domain(s) of the human tasks. Using the lens of point-of-care ultrasound, we leveraged existing analogous Earth medical domains to conduct in situ observations in a hospital emergency department to understand how clinicians process contextual information to provide urgent care using ultrasound and semistructured interviews with specialists to identify key procedural information components for automation.&#13;
Results&#13;
This investigation allowed us to characterize the dynamic system surrounding a task that does not exist in its intended—currently inaccessible—use case (ie, point-of-care ultrasound on Mars) to guide future human-automation systems development.&#13;
Conclusion&#13;
We conclude that specific aspects of the care environment that influence the result of a task or process (“mediating factors”) from candidate work domains call for distinct, targeted guidance for automation support and are valuable in providing system developers with tunable automation level and implementation guidelines within and/or between those work domains. Such evidence-based design practice is directly translatable to automation assistance for medical providers in resource-limited environments as well as to any situation where a person's sensory processing, perception, decision making, or response selection could be aided by automation to accomplish a task.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solutions and Challenges for Addressing Misinformation</title>
<link href="https://hdl.handle.net/1721.1/165078" rel="alternate"/>
<author>
<name>Martel, Cameron</name>
</author>
<author>
<name>Rand, David G</name>
</author>
<id>https://hdl.handle.net/1721.1/165078</id>
<updated>2026-03-12T07:24:47Z</updated>
<published>2025-06-10T00:00:00Z</published>
<summary type="text">Solutions and Challenges for Addressing Misinformation
Martel, Cameron; Rand, David G
Research on mitigating the effects of misinformation has contributed to the development of multiple feasible interventions designed to reduce belief in, and sharing of, falsehoods. The authors review these interventions and discuss challenges and open questions for future research. First, they provide an overview of content-neutral and content-based interventions. Next, they discuss two practical challenges to deploying and assessing these interventions in the field: scalability and pushback against content moderation efforts due to perceived political bias. Finally, they highlight several open theoretical questions and common pitfalls of research on misinformation. In particular, they argue for critical evaluation of how interventions may be effective across different types of misinformative content, different key subpopulations, and different media and environmental contexts.
</summary>
<dc:date>2025-06-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atomic Transactions</title>
<link href="https://hdl.handle.net/1721.1/165077" rel="alternate"/>
<author>
<name>Lynch, Nancy</name>
</author>
<author>
<name>Merritt, Michael</name>
</author>
<author>
<name>Weihl, William</name>
</author>
<author>
<name>Fekete, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/165077</id>
<updated>2026-03-12T13:19:44Z</updated>
<published>1994-01-01T00:00:00Z</published>
<summary type="text">Atomic Transactions
Lynch, Nancy; Merritt, Michael; Weihl, William; Fekete, Alan
</summary>
<dc:date>1994-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A design of a low-pressure steam turbine</title>
<link href="https://hdl.handle.net/1721.1/165076" rel="alternate"/>
<author>
<name>Jones, Bradley.</name>
</author>
<id>https://hdl.handle.net/1721.1/165076</id>
<updated>2026-03-11T03:05:17Z</updated>
<published>1910-01-01T00:00:00Z</published>
<summary type="text">A design of a low-pressure steam turbine
Jones, Bradley.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1910
</summary>
<dc:date>1910-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational study of static replication for barrier options</title>
<link href="https://hdl.handle.net/1721.1/165075" rel="alternate"/>
<author>
<name>Sun, Hai Po.</name>
</author>
<id>https://hdl.handle.net/1721.1/165075</id>
<updated>2026-03-11T03:04:39Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Computational study of static replication for barrier options
Sun, Hai Po.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1997; Includes bibliographical references (leaves 75-76).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, control and experimentation of a two dimensional linear motor</title>
<link href="https://hdl.handle.net/1721.1/165074" rel="alternate"/>
<author>
<name>Castañeda Vega, José Israel.</name>
</author>
<id>https://hdl.handle.net/1721.1/165074</id>
<updated>2026-03-11T03:04:33Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Modeling, control and experimentation of a two dimensional linear motor
Castañeda Vega, José Israel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1997; Includes bibliographical references (leaf 118).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of anode dimensions in mercury-vapour thermionic rectifiers</title>
<link href="https://hdl.handle.net/1721.1/165073" rel="alternate"/>
<author>
<name>Fussell, Lewis.</name>
</author>
<id>https://hdl.handle.net/1721.1/165073</id>
<updated>2026-03-11T03:04:42Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">A study of anode dimensions in mercury-vapour thermionic rectifiers
Fussell, Lewis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1932; Includes bibliographical references (leaf 50).
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cloud-chamber study of cosmic ray showers in lead plates</title>
<link href="https://hdl.handle.net/1721.1/165072" rel="alternate"/>
<author>
<name>Fussell, Lewis.</name>
</author>
<id>https://hdl.handle.net/1721.1/165072</id>
<updated>2026-03-11T03:02:21Z</updated>
<published>1938-01-01T00:00:00Z</published>
<summary type="text">Cloud-chamber study of cosmic ray showers in lead plates
Fussell, Lewis.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1938; Includes bibliographical references (leaves [113]-[118]).
</summary>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a high-speed light source suitable for photoelastic studies</title>
<link href="https://hdl.handle.net/1721.1/165071" rel="alternate"/>
<author>
<name>Wyle, Frank S.</name>
</author>
<id>https://hdl.handle.net/1721.1/165071</id>
<updated>2026-03-11T03:05:14Z</updated>
<published>1941-01-01T00:00:00Z</published>
<summary type="text">Development of a high-speed light source suitable for photoelastic studies
Wyle, Frank S.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1941; Includes bibliographical references (leaf 25).
</summary>
<dc:date>1941-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boiling and spreading rates of instantaneous liquid methane spills on water</title>
<link href="https://hdl.handle.net/1721.1/165070" rel="alternate"/>
<author>
<name>Chatlos, David Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/165070</id>
<updated>2026-03-11T03:04:37Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Boiling and spreading rates of instantaneous liquid methane spills on water
Chatlos, David Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1982; Supervised by Robert C. Reid.; Includes bibliographical references (leaves 86-88).
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Manipulation and measurement of charge transfer kinetics at chemically modified electrodes</title>
<link href="https://hdl.handle.net/1721.1/165069" rel="alternate"/>
<author>
<name>Lewis, Nathan S. (Nathan Saul)</name>
</author>
<id>https://hdl.handle.net/1721.1/165069</id>
<updated>2026-03-11T03:02:10Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Manipulation and measurement of charge transfer kinetics at chemically modified electrodes
Lewis, Nathan S. (Nathan Saul)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1981; Includes bibliographical references.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.</title>
<link href="https://hdl.handle.net/1721.1/165068" rel="alternate"/>
<author>
<name>Wright, Francine Elaine.</name>
</author>
<id>https://hdl.handle.net/1721.1/165068</id>
<updated>2026-03-11T03:04:45Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.
Wright, Francine Elaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1975; Vita.; Bibliography: leaves 65-66.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Double valves</title>
<link href="https://hdl.handle.net/1721.1/165067" rel="alternate"/>
<author>
<name>Faunce, Linus.</name>
</author>
<id>https://hdl.handle.net/1721.1/165067</id>
<updated>2026-03-11T03:05:04Z</updated>
<published>1877-01-01T00:00:00Z</published>
<summary type="text">Double valves
Faunce, Linus.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1877
</summary>
<dc:date>1877-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recirculation through western boundary currents varies nonlinearly with the ocean basin's aspect ratio</title>
<link href="https://hdl.handle.net/1721.1/165066" rel="alternate"/>
<author>
<name>Gianchandani, Kaushal</name>
</author>
<id>https://hdl.handle.net/1721.1/165066</id>
<updated>2026-03-11T03:08:31Z</updated>
<published>2024-09-17T00:00:00Z</published>
<summary type="text">Recirculation through western boundary currents varies nonlinearly with the ocean basin's aspect ratio
Gianchandani, Kaushal
Recirculation gyres adjacent to western boundary currents (WBCs) in the ocean enhance the poleward transport of these currents. While it is well established that the WBC in a barotropic ocean strengthens with an increase in the basin's aspect ratio (the meridional-to-zonal extent ratio), how the intensity of the recirculation through the western boundary layer varies with this parameter remains unexplored. I address this using the non-dimensional form of the nonlinear, wind-driven Stommel–Munk model of westward intensification that comprises three parameters—the aspect ratio (δ), the damping coefficient (ϵ), and the β-Rossby number (Rβ). Here, ϵ is set by the ratio of the Rayleigh friction coefficient (or eddy viscosity) to the meridional gradient of the Coriolis frequency and the basin's zonal dimension, while Rβ is proportional to wind stress amplitude and quantifies the strength of nonlinearity. In the weak-to-moderate nonlinearity limit (Rβ &lt;∼ ϵ), perturbation analysis reveals that recirculation varies concavely with aspect ratio, suggesting the existence of an optimal aspect ratio (δopt) for which the recirculation is maximum; for typical values of ϵ (10⁻³–10⁻²), δopt follows the power-law relation δopt=4.3ϵ. Numerical simulations further validate the existence of δopt. For large ϵ (&gt;5×10⁻³), the power law predicts δopt for the numerical solutions rather accurately, but it does not hold for smaller ϵ (2×10⁻³) due to the increased importance of nonlinear terms. Nevertheless, the nonlinear variation in recirculation through the western boundary layer with aspect ratio is observed for all ϵ values and may contribute to the heterogeneous increase in the WBCs' transport across different ocean basins in a warming climate.
</summary>
<dc:date>2024-09-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Streamlining Physics Problem Generation to Support Physics Teachers in Using Generative Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/165065" rel="alternate"/>
<author>
<name>El-Adawy, Shams</name>
</author>
<author>
<name>Liao, Isaac</name>
</author>
<author>
<name>Lad, Vedang</name>
</author>
<author>
<name>Abdelhafez, Mohamed</name>
</author>
<author>
<name>Dourmashkin, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/165065</id>
<updated>2026-03-11T03:08:30Z</updated>
<published>2024-10-01T00:00:00Z</published>
<summary type="text">Streamlining Physics Problem Generation to Support Physics Teachers in Using Generative Artificial Intelligence
El-Adawy, Shams; Liao, Isaac; Lad, Vedang; Abdelhafez, Mohamed; Dourmashkin, Peter
The rapid advancement of large language models (LLMs) presents a unique opportunity for educators to find ways to include artificial intelligence (AI) in physics course design. By critically engaging with LLMs to help with the task of generating problems, physics teachers can not only model a potentially effective way to use LLMs for other teachers, but also showcase to students ways to productively engage with LLMs. This article presents a workflow with two different starting points to generate physics problems using ChatGPT 3.5. The first initialization involves interacting with ChatGPT in a conversational manner, guiding iterative problem creation by breaking tasks into smaller tasks. The second initialization harnesses ChatGPT’s generative abilities, aligning problem generation with established problem styles by instructing the model to emulate contexts from question banks. We discuss the implications of this workflow for other physics instructors exploring productive ways to incorporate the use of AI in their own course design.
</summary>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ion optical design of the magnetic proton recoil neutron spectrometer for the SPARC tokamak</title>
<link href="https://hdl.handle.net/1721.1/165064" rel="alternate"/>
<author>
<name>Mackie, S</name>
</author>
<author>
<name>Wink, CW</name>
</author>
<author>
<name>Dalla Rosa, M</name>
</author>
<author>
<name>Berg, GPA</name>
</author>
<author>
<name>Ball, JL</name>
</author>
<author>
<name>Wang, X</name>
</author>
<author>
<name>Carmichael, J</name>
</author>
<author>
<name>Tinguely, RA</name>
</author>
<author>
<name>Rigamonti, D</name>
</author>
<author>
<name>Tardocchi, M</name>
</author>
<author>
<name>Raj, P</name>
</author>
<author>
<name>Frenje, J</name>
</author>
<author>
<name>Rice, J</name>
</author>
<id>https://hdl.handle.net/1721.1/165064</id>
<updated>2026-03-11T03:08:29Z</updated>
<published>2024-10-01T00:00:00Z</published>
<summary type="text">Ion optical design of the magnetic proton recoil neutron spectrometer for the SPARC tokamak
Mackie, S; Wink, CW; Dalla Rosa, M; Berg, GPA; Ball, JL; Wang, X; Carmichael, J; Tinguely, RA; Rigamonti, D; Tardocchi, M; Raj, P; Frenje, J; Rice, J
A magnetic proton recoil (MPR) neutron spectrometer is being designed for SPARC, a high magnetic field (BT = 12 T), compact (R0 = 1.85 m, a = 0.57 m) tokamak currently under construction in Devens, MA, USA. MPR neutron spectrometers are versatile tools for making high fidelity ab initio calibrated measurements of fusion neutron flux spectra and have been used to infer fusion power, ion temperature, fuel ion ratio, and suprathermal fuel populations at several high performance fusion experiments. The performance of an MPR neutron spectrometer is in large part determined by the design of the magnetic field, which disperses and focuses recoil protons. This article details the ion optical design of a high-resolution MPR neutron spectrometer, including the amelioration of image aberrations due to nonlinear effects. An optimized design is presented that achieves ion optical energy resolution δE/E &lt; 1% and focal plane properties that enable straightforward integration with the hodoscope detector array.
</summary>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance predictions of the SPARC x-ray crystal spectrometers for ion temperature and toroidal rotation measurements</title>
<link href="https://hdl.handle.net/1721.1/165063" rel="alternate"/>
<author>
<name>Perks, C</name>
</author>
<author>
<name>Vezinet, D</name>
</author>
<author>
<name>Rice, JE</name>
</author>
<author>
<name>Reinke, ML</name>
</author>
<id>https://hdl.handle.net/1721.1/165063</id>
<updated>2026-03-11T03:08:34Z</updated>
<published>2024-08-27T00:00:00Z</published>
<summary type="text">Performance predictions of the SPARC x-ray crystal spectrometers for ion temperature and toroidal rotation measurements
Perks, C; Vezinet, D; Rice, JE; Reinke, ML
SPARC will be outfitted with three systems of x-ray crystal spectrometer arrays. Two of these are designed using cylindrically bent crystals to achieve high spectral resolution for ion temperature and toroidal velocity measurements via imaging He-like Kr and Ne-like Xe. The last acts as a spectral survey system to monitor Ne-like W and nearby H- and He-like emission from Cr, Fe, Co, Ni, and Cu. Line radiation intensities are calculated using the Flexible Atomic Code for atomic data and ColRadPy for collisional-radiative modeling, then convolved with a Voigt line shape. Free–free, free-bound, and two-photon continuum radiation is also included. The ToFu code is used to perform volume-of-sight integration to produce synthetic detector images. In addition, cross-validation performed using the XICSRT Monte Carlo ray-tracing code is presented. Ion temperature and toroidal velocity profiles are reconstructed using ToFu via tomographic inversion.
</summary>
<dc:date>2024-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Edge scanning reflectometry for density profile measurement on the SPARC tokamak</title>
<link href="https://hdl.handle.net/1721.1/165062" rel="alternate"/>
<author>
<name>Lin, Y</name>
</author>
<author>
<name>Nikolaeva, V</name>
</author>
<author>
<name>Hachmeister, D</name>
</author>
<author>
<name>Kowalski, E</name>
</author>
<author>
<name>Reinke, ML</name>
</author>
<id>https://hdl.handle.net/1721.1/165062</id>
<updated>2026-03-11T03:08:33Z</updated>
<published>2024-08-21T00:00:00Z</published>
<summary type="text">Edge scanning reflectometry for density profile measurement on the SPARC tokamak
Lin, Y; Nikolaeva, V; Hachmeister, D; Kowalski, E; Reinke, ML
Edge scanning reflectometry (ESRL) on the SPARC tokamak aims to measure the electron density profile from the far scrape-off layer to the top of the typical H-mode pedestal and provide real-time data for plasma control. ESRL uses a standard frequency-modulated continuous wave technique from 18 to 90 GHz. By implementing both the O-mode and left-hand-cutoff X-mode, it covers densities from ∼4 × 10¹⁸ to ∼4 × 10²⁰ m⁻³ at B0 ∼12 T. A voltage-controlled oscillator acts as the frequency sweep source. Phase-locked dielectric resonator oscillators and bandpass filters generate base signals ∼9–15 GHz. The signals are then frequency multiplied and amplified to reach the K (18–26 GHz), Ka (26–40 GHz), U (40–60 GHz), and E (60–90 GHz) bands. Multi-band signals are combined via the quasi-optical technique. ESRL plans to use oversized waveguides (∼20 m one-way) and a bi-static arrangement to minimize signal losses and distortions while allowing system flexibility. A COMSOL Multiphysics RF model in 2D has been set up to simulate the reflectometry process and help decide the layout of the horn antennas. Engineering analyses of the key parts of the system have been carried out in support of its preliminary design.
</summary>
<dc:date>2024-08-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neutronics simulations for the design of neutron flux monitors in SPARC</title>
<link href="https://hdl.handle.net/1721.1/165061" rel="alternate"/>
<author>
<name>Wang, X</name>
</author>
<author>
<name>Gocht, R</name>
</author>
<author>
<name>Ball, J</name>
</author>
<author>
<name>Mackie, S</name>
</author>
<author>
<name>Panontin, E</name>
</author>
<author>
<name>Tinguely, RA</name>
</author>
<author>
<name>Raj, P</name>
</author>
<author>
<name>Holmes, I</name>
</author>
<author>
<name>Saltos, AA</name>
</author>
<author>
<name>Johnson, A</name>
</author>
<author>
<name>Grieve, A</name>
</author>
<id>https://hdl.handle.net/1721.1/165061</id>
<updated>2026-03-11T03:08:27Z</updated>
<published>2024-08-30T00:00:00Z</published>
<summary type="text">Neutronics simulations for the design of neutron flux monitors in SPARC
Wang, X; Gocht, R; Ball, J; Mackie, S; Panontin, E; Tinguely, RA; Raj, P; Holmes, I; Saltos, AA; Johnson, A; Grieve, A
This paper presents the development and application of high-fidelity neutronic models of the SPARC tokamak for the design of neutron flux monitors (NFM) for application during plasma operations. NFMs measure the neutron flux in the tokamak hall, which is related to fusion power via calibration. We have explored Boron-10 gamma-compensated ionization chambers (ICs) and parallel-plate Uranium-238 fission chambers (FCs). We plan for all NFMs to be located by the wall in the tokamak hall and directly exposed to neutrons streaming through a shielded opening in a midplane port. Our simulations primarily use a constructive solid geometry-based OpenMC model based on the true SPARC geometry. The OpenMC model is benchmarked against a detailed CAD-based MCNP6 model. The B10 ICs are equipped with high-density polyethylene (HDPE) sleeves, borated HDPE housings, and borated aluminum covers to shield out scattered neutrons, optimize detector response levels, and make calibration robust against changes in the tokamak hall. The B10 neutron absorption branching ratio may cause the detectors’ responses to be non-linear to neutron flux &gt;200 keV. However, our simulations unveil that, in the SPARC environment and with the proposed housings and sleeves, &gt;99% of the detector responses are induced by &lt;100 keV neutrons. U238’s insensitivity to slow neutrons makes this FC a promising candidate for direct fusion neutron measurements. Along with a borated HDPE sleeve, about 60% of the FCs’ responses are induced by direct neutrons.
</summary>
<dc:date>2024-08-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Image plate multi-scan response to fusion protons in the range of 1–14 MeV</title>
<link href="https://hdl.handle.net/1721.1/165060" rel="alternate"/>
<author>
<name>Vanderloo, N</name>
</author>
<author>
<name>Cufari, M</name>
</author>
<author>
<name>Russell, L</name>
</author>
<author>
<name>Johnson, TM</name>
</author>
<author>
<name>Vargas, J</name>
</author>
<author>
<name>Foo, BC</name>
</author>
<author>
<name>Buschmann, BI</name>
</author>
<author>
<name>Dannhoff, SG</name>
</author>
<author>
<name>DeVault, A</name>
</author>
<author>
<name>Evans, TE</name>
</author>
<author>
<name>Kunimune, JH</name>
</author>
<author>
<name>Lawrence, Y</name>
</author>
<author>
<name>Pearcy, JA</name>
</author>
<author>
<name>Reichelt, BL</name>
</author>
<author>
<name>Wink, CW</name>
</author>
<author>
<name>Gatu Johnson, M</name>
</author>
<author>
<name>Petrasso, RD</name>
</author>
<author>
<name>Frenje, JA</name>
</author>
<author>
<name>Li, CK</name>
</author>
<id>https://hdl.handle.net/1721.1/165060</id>
<updated>2026-03-11T03:08:25Z</updated>
<published>2024-09-24T00:00:00Z</published>
<summary type="text">Image plate multi-scan response to fusion protons in the range of 1–14 MeV
Vanderloo, N; Cufari, M; Russell, L; Johnson, TM; Vargas, J; Foo, BC; Buschmann, BI; Dannhoff, SG; DeVault, A; Evans, TE; Kunimune, JH; Lawrence, Y; Pearcy, JA; Reichelt, BL; Wink, CW; Gatu Johnson, M; Petrasso, RD; Frenje, JA; Li, CK
Image plates (IPs) are a quickly recoverable and reusable radiation detector often used to measure proton and x-ray fluence in laser-driven experiments. Recently, IPs have been used in a proton radiography detector stack on the OMEGA laser, a diagnostic historically implemented with CR-39 or radiochromic film. The IPs used in this and other diagnostics detect charged particles, neutrons, and x-rays indiscriminately. IPs detect radiation using a photo-stimulated luminescence (PSL) material, often phosphor, in which electrons are excited to metastable states by ionizing radiation. Protons at MeV energies deposit energy deeper into the IP compared with x rays below ∼20 keV due to the Bragg peak present for protons. This property is exploited to discriminate between radiation types. Doses of mono-energetic protons between 1.7 and 14 MeV are applied to IPs using the MIT linear electrostatic ion accelerator. This paper presents the results from consecutive scans of IPs irradiated with different proton energies. The PSL ratios between subsequent scans are shown to depend on proton energy, with higher energy protons having lower PSL ratios for each scan. This finding is separate from the known energy dependence in the absolute sensitivity of IPs. The results can be compared to complementary work on x rays, showing a difference between protons and x rays, forging a path to discriminate between proton and x-ray fluence in mixed radiation environments.
</summary>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>A compact and portable gamma-ray spectrometer (GRASP) for inertial confinement fusion and basic science experiments</title>
<link href="https://hdl.handle.net/1721.1/165059" rel="alternate"/>
<author>
<name>Dannhoff, SG</name>
</author>
<author>
<name>Wink, CW</name>
</author>
<author>
<name>Mackie, S</name>
</author>
<author>
<name>Berg, GPA</name>
</author>
<author>
<name>Frenje, JA</name>
</author>
<id>https://hdl.handle.net/1721.1/165059</id>
<updated>2026-03-11T03:08:24Z</updated>
<published>2024-08-05T00:00:00Z</published>
<summary type="text">A compact and portable gamma-ray spectrometer (GRASP) for inertial confinement fusion and basic science experiments
Dannhoff, SG; Wink, CW; Mackie, S; Berg, GPA; Frenje, JA
A compact and portable gamma-ray spectrometer has been designed to diagnose different components of the inertial confinement fusion-relevant γ-ray spectrum with energies between ∼3.7 and 17.9 MeV. The system is designed to be as compact as possible for convenient transportation and fielding in diagnostic ports on the OMEGA laser, the National Ignition Facility, and other photon-source facilities. The system consists of a conversion foil for Compton scattering in front of four magnetic spectrometer “arms,” each covering a different energy range and constructed out of cylindrical permanent magnet Halbach arrays. Monte Carlo simulations have been used to optimize and assess the performance of the conversion foil, and COSY INFINITY ion-optical simulations have been used to optimize the spectrometer magnets. The performance of the design is assessed for a simulated direct-drive γ-ray spectrum. Spanning its total γ-ray energy bandwidth and using a 1.7 mm thick boron conversion foil, the system’s total energy resolution and efficiency are ∼15.8%–4.5% and 5.4 × 10⁻⁷–3.7 × 10⁻⁷ e⁻/γ, respectively, with room for improvement. Spectral γ-ray measurements will provide guidance to the inertial confinement fusion program toward achieving high-energy gain relevant to inertial fusion energy and enable new measurement capabilities for basic discovery science.
</summary>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of the image plate multi-scan response to mono-energetic x-rays</title>
<link href="https://hdl.handle.net/1721.1/165058" rel="alternate"/>
<author>
<name>Cufari, M</name>
</author>
<author>
<name>Vanderloo, N</name>
</author>
<author>
<name>Buschmann, BI</name>
</author>
<author>
<name>DeVault, A</name>
</author>
<author>
<name>Foo, BC</name>
</author>
<author>
<name>Vargas, J</name>
</author>
<author>
<name>Dannhoff, SG</name>
</author>
<author>
<name>Evans, TE</name>
</author>
<author>
<name>Johnson, TM</name>
</author>
<author>
<name>Kunimune, J</name>
</author>
<author>
<name>Lawrence, Y</name>
</author>
<author>
<name>Pearcy, JA</name>
</author>
<author>
<name>Reichelt, BL</name>
</author>
<author>
<name>Russell, L</name>
</author>
<author>
<name>Wink, CW</name>
</author>
<author>
<name>Gatu Johnson, M</name>
</author>
<author>
<name>Petrasso, RD</name>
</author>
<author>
<name>Frenje, JA</name>
</author>
<id>https://hdl.handle.net/1721.1/165058</id>
<updated>2026-03-11T03:08:21Z</updated>
<published>2024-09-24T00:00:00Z</published>
<summary type="text">Characterization of the image plate multi-scan response to mono-energetic x-rays
Cufari, M; Vanderloo, N; Buschmann, BI; DeVault, A; Foo, BC; Vargas, J; Dannhoff, SG; Evans, TE; Johnson, TM; Kunimune, J; Lawrence, Y; Pearcy, JA; Reichelt, BL; Russell, L; Wink, CW; Gatu Johnson, M; Petrasso, RD; Frenje, JA
Image plates (IPs), or phosphor storage screens, are a technology employed frequently in inertial confinement fusion (ICF) and high energy density plasma (HEDP) diagnostics because of their sensitivity to many types of radiation, including x rays, protons, alphas, beta particles, and neutrons. Prior studies characterizing IPs are predicated on the signal level remaining below the scanner saturation threshold. Since the scanning process removes some signal from the IP via photostimulated luminescence, repeatedly scanning an IP can bring the signal level below the scanner saturation threshold. This process, in turn, raises concerns about the signal response of IPs after an arbitrary number of scans and whether such a process yields, for example, a constant ratio of signal between the nth and (n + 1)st scan. Here, the sensitivity of IPs is investigated when scanned multiple times. It is demonstrated that the ratio of signal decay is not constant with the number of scans and that the signal decay depends on the x-ray energy. As such, repeatedly scanning an IP with a mixture of signal types (e.g., x rays, neutrons, and protons) enables ICF and HEDP diagnostics employing IPs to better isolate a particular signal type.
</summary>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microbially-enhanced dissolution of calcite in sinking marine particles</title>
<link href="https://hdl.handle.net/1721.1/165057" rel="alternate"/>
<author>
<name>Borer, Benedict</name>
</author>
<author>
<name>Subhas, Adam V.</name>
</author>
<author>
<name>Hayden, Matthew G.</name>
</author>
<author>
<name>Woosley, Ryan J.</name>
</author>
<author>
<name>Babbin, Andrew R.</name>
</author>
<id>https://hdl.handle.net/1721.1/165057</id>
<updated>2026-03-10T03:07:40Z</updated>
<published>2026-03-09T00:00:00Z</published>
<summary type="text">Microbially-enhanced dissolution of calcite in sinking marine particles
Borer, Benedict; Subhas, Adam V.; Hayden, Matthew G.; Woosley, Ryan J.; Babbin, Andrew R.
Evidence for the shallow cycling of calcium carbonate in the global ocean is mounting, but the mechanisms driving the dissolution of thermodynamically stable polymorphs, like aragonite and calcite, in the surface ocean remain unconstrained. Here, we quantify how microbial metabolism creates acidic microenvironments in marine particles that enhance the local dissolution of calcite despite supersaturated conditions in bulk waters. A temporal decoupling of particle deoxygenation and acidification suggests that respiration-derived carbon dioxide is not the sole driver of the observed undersaturation. Rapid dissolution occurs in particles exhibiting bacterial growth, with rates exceeding abiotic dissolution at the same bulk saturation by more than an order of magnitude. We observe the highest particle-associated dissolution rates at intermediate settling velocities, indicating that a trade-off between elevated mass transfer due to settling and bacterial respiration governs the ensuing dissolution rates. Translation of our experiments to the water column suggests that microbially driven undersaturation in marine particles may dissolve sufficient calcite in the mesopelagic ocean to extend particle transit times by eliminating this vital ballast mineral, reducing the efficiency of organic carbon sequestration.
</summary>
<dc:date>2026-03-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process cost analysis of performance challenges and their mitigations in sodium-ion battery cathode materials</title>
<link href="https://hdl.handle.net/1721.1/165056" rel="alternate"/>
<author>
<name>Munjal, Mrigi</name>
</author>
<author>
<name>Prein, Thorben</name>
</author>
<author>
<name>Ramadan, Mahmoud M.</name>
</author>
<author>
<name>Smith, Hugh B.</name>
</author>
<author>
<name>Venugopal, Vineeth</name>
</author>
<author>
<name>Rupp, Jennifer L.M.</name>
</author>
<author>
<name>Abate, Iwnetim I.</name>
</author>
<author>
<name>Olivetti, Elsa A.</name>
</author>
<author>
<name>Huang, Kevin J.</name>
</author>
<id>https://hdl.handle.net/1721.1/165056</id>
<updated>2026-03-10T03:07:47Z</updated>
<published>2025-05-21T00:00:00Z</published>
<summary type="text">Process cost analysis of performance challenges and their mitigations in sodium-ion battery cathode materials
Munjal, Mrigi; Prein, Thorben; Ramadan, Mahmoud M.; Smith, Hugh B.; Venugopal, Vineeth; Rupp, Jennifer L.M.; Abate, Iwnetim I.; Olivetti, Elsa A.; Huang, Kevin J.
The success of sodium-ion batteries (SIBs) hinges on mitigating underperformance in ways that are cost effective, manufacturable, and scalable. This work investigates interfacial, morphological, and bulk interventions to enhance the performance of layered metal oxide cathode active materials (CAMs) for SIBs. We mapped the full space of literature-reported SIB CAM challenges and their mitigations. We then estimated the manufacturing costs for a diverse and representative set of mitigation approaches. Adding sacrificial salts can be cost effective, given low materials costs and minimal process changes. By contrast, many methods are reported to tune CAM morphology. Several are likely challenging at scale due to process throughput and yield limitations. Finally, bulk modifications can mitigate the moisture sensitivity of some CAMs, a likely less costly route than expanding stringent atmosphere controls during manufacturing. We end by discussing the limits and promise of process cost analysis, given the current state of battery reporting in the literature.
</summary>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy Analytics for Cybersecurity of Cyber-Physical Systems</title>
<link href="https://hdl.handle.net/1721.1/165055" rel="alternate"/>
<author>
<name>Choucri, Nazli</name>
</author>
<id>https://hdl.handle.net/1721.1/165055</id>
<updated>2026-03-06T03:00:51Z</updated>
<published>2024-10-08T00:00:00Z</published>
<summary type="text">Policy Analytics for Cybersecurity of Cyber-Physical Systems
Choucri, Nazli
Mounting concerns about safety and security have resulted in an intricate ecosystem of guidelines, compliance measures, directives, and policy reports for cybersecurity of all critical infrastructure. The policy paradox is that the text form of policy documents is an impediment to the implementation of policies and directives and creates potentially powerful opportunity costs. As a general practice, guidelines, directives, and policy documents are presented in text form, page-by-page and word-by-word, supported by figures, diagrams, and tables as needed. By definition, text obscures properties of both policy and system-target in terms of dynamic relationships, feedback, “drill-down”, leads and lags, and so forth.
The challenge is to develop analytics for cybersecurity policy of cyber-physical systems. We begin by constructing (a) a structured model of the system, in order to (b) identify major policy-defined system-wide parameters, (c) situate system vulnerabilities, (d) map security requirements to security objectives, and (e) advance research on how system properties respond to diverse policy controls for security of cyber-physical systems.
This Project addresses the hard problem of policy-governed secure collaboration related to cyber-physical security of critical infrastructure (focusing on a generic and fundamental feature, namely the smart grid of electric power systems). The purpose is to (a) reduce, if not eliminate, barriers to full understanding of policy text as transmitted by the source, (b) explore system-wide or targeted implications, (c) help contextualize generic directives for specific applications, and (d) facilitate contingency analysis, as needed.
This Compilation is based on the Quarterly Research Reports submitted by MIT to the Cyber-Physical Systems Organization of Vanderbilt University. The Compilation is the first of several Reports highlighting the research process and products of the MIT Project on Policy Analytics for Cybersecurity of Cyber-Physical Systems. Gaurav Agarwal [a.k.a. Gaurav], MIT alumnus, served as Lead Researcher for the Proof-of-Concept case presented here.
</summary>
<dc:date>2024-10-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Summary of the Fourth ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2016 April 3-8</title>
<link href="https://hdl.handle.net/1721.1/165054" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, G. B.</name>
</author>
<author>
<name>Fish, Vincent</name>
</author>
<id>https://hdl.handle.net/1721.1/165054</id>
<updated>2026-03-06T03:01:04Z</updated>
<published>2017-03-28T00:00:00Z</published>
<summary type="text">Summary of the Fourth ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2016 April 3-8
Matthews, Lynn D.; Crew, G. B.; Fish, Vincent
The primary objectives of the fourth APP CSV campaign were twofold: (1) to execute Very Long Baseline Interferometry (VLBI) observing mode (VOM) observations using Schedule Blocks (SBs); (2) to carry out the first end-to-end testing of intercontinental VLBI sessions in both Bands 3 and 6. While intercontinental VLBI fringes with ALMA have already been obtained during previous CSV campaigns (Matthews &amp; Crew 2016b, c), those sessions were not conducted in a manner identical to future VLBI science campaigns (i.e., they used manual execution of observing commands rather than SBs) and did not involve observations of a full suite of ALMA and VLBI calibrators. Secondary objectives of the fourth CSV mission included further development work on an ALMA Phasing System (APS) graphical user interface (GUI), additional testing of the fast phasing loop (under a wider variety of weather conditions), tests of the phasing system in Band 7 (in support of an ongoing North America ALMA Study award), and the training of ALMA staff in the operation of the VOM and the APS hardware and software. This is a report of the activities of this campaign.
This report documents APP commissioning progress and is provided as ALMA Technical Note #19.
</summary>
<dc:date>2017-03-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Summary of the Third ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2015 July 28-August 3</title>
<link href="https://hdl.handle.net/1721.1/165053" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, G. B.</name>
</author>
<id>https://hdl.handle.net/1721.1/165053</id>
<updated>2026-03-06T03:01:12Z</updated>
<published>2015-09-14T00:00:00Z</published>
<summary type="text">Summary of the Third ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2015 July 28-August 3
Matthews, Lynn D.; Crew, G. B.
The primary objective for the third APP CSV campaign was to perform intercontinental Very Long Baseline Interferometry (VLBI) fringe tests between phased ALMA and remote stations in Band 3, Band 6, and (conditions permitting) Band 7. Secondary objectives included preparations for making the APS available to the community for ALMA Cycle 4 and training ALMA staff in the operation of the ALMA Phasing System (APS).  This is a report of those activities.
This report documents APP commissioning progress and is provided as ALMA Technical Note #18.
</summary>
<dc:date>2015-09-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Summary of the Second ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2015 March 24-30</title>
<link href="https://hdl.handle.net/1721.1/165052" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, G. B.</name>
</author>
<id>https://hdl.handle.net/1721.1/165052</id>
<updated>2026-03-06T03:01:14Z</updated>
<published>2015-09-04T00:00:00Z</published>
<summary type="text">Summary of the Second ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2015 March 24-30
Matthews, Lynn D.; Crew, G. B.
The primary objective for the second APP CSV campaign was to test and characterize the phasing system, including the recent changes in the handling of the front-end delays. Secondary goals were to repeat the local VLBI test between ALMA and an antenna at the Operations Support Facility (OSF) (also attempted during the January mission) and to obtain a short VLBI recording on a calibrator source with ALMA and one or more stations operating as part of the Event Horizon Telescope (EHT) network, thus allowing demonstration of an intercontinental VLBI fringe.  This is a report of the week's activities.
This report documents APP commissioning progress and is provided as ALMA Technical Note #17.
</summary>
<dc:date>2015-09-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Summary of the First ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2015 January 6-13</title>
<link href="https://hdl.handle.net/1721.1/165051" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, G. B.</name>
</author>
<id>https://hdl.handle.net/1721.1/165051</id>
<updated>2026-03-06T03:01:08Z</updated>
<published>2015-03-29T00:00:00Z</published>
<summary type="text">Summary of the First ALMA Phasing Project (APP) Commissioning and Science Verification Mission: 2015 January 6-13
Matthews, Lynn D.; Crew, G. B.
The first Commissioning and Science Verification (CSV) mission for the ALMA Phasing Project (APP) took place during the ALMA EOC Week from 2015 January 6-13. The formal commencement of APP CSV activities followed the provisional acceptance of the APP hardware during a formal review by JAO that took place on 2014 December 11.  This is a report of activities during the week.
This report documents APP commissioning progress and is provided as ALMA Technical Note #16.
</summary>
<dc:date>2015-03-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>ALMA North America Cycle 4 Study Project Final Report: Diversifying the Applications of the ALMA Phasing System</title>
<link href="https://hdl.handle.net/1721.1/165050" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, G.</name>
</author>
<author>
<name>Hecht, M. H.</name>
</author>
<id>https://hdl.handle.net/1721.1/165050</id>
<updated>2026-03-06T03:01:06Z</updated>
<published>2018-09-01T00:00:00Z</published>
<summary type="text">ALMA North America Cycle 4 Study Project Final Report: Diversifying the Applications of the ALMA Phasing System
Matthews, Lynn D.; Crew, G.; Hecht, M. H.
The Atacama Millimeter/submillimeter Array (ALMA) Phasing Project (APP) produced the hardware and software modifications necessary to bring Very Long Baseline Interferometry (VLBI) capabilities to ALMA. The resulting VLBI observing mode was introduced to the science community in ALMA Cycle 4 (2017), and two VLBI science campaigns have now been carried out successfully at ALMA. The current Cycle 4 ALMA North America (NA) Study was proposed to lay the groundwork for a variety of enhancements to the ALMA Phasing System (APS) that were not within the scope of the original APP project. These include: (1) devising an improved method for the handling of baseband delays; (2) development of procedures for use of the APS on fainter astronomical sources than is presently possible; (3) development of data acquisition and correlation techniques to allow the APS to be used for spectral line VLBI experiments. These tasks were intended as preparatory steps for a future full-scale implementation project (if approved). Formal approval of this implementation work has now been granted and is funded through an ALMA Cycle 5 NA Development project known as APP “Phase 2” (APP-2). As a result, efforts to implement capabilities designed and explored under the current Cycle 4 Study, as well as a previous Cycle 3 Study award, are now underway. This report provides a status summary of Cycle 4 activities and outlines follow-on work that is continuing as part of the ongoing Cycle 5 Development efforts.
</summary>
<dc:date>2018-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ALMA North America Cycle 3 Study Project Final Report: Extensions and Enhancements to the ALMA Phasing System</title>
<link href="https://hdl.handle.net/1721.1/165049" rel="alternate"/>
<author>
<name>Matthews, L.</name>
</author>
<author>
<name>Crew, G.</name>
</author>
<author>
<name>Hecht, M. H.</name>
</author>
<id>https://hdl.handle.net/1721.1/165049</id>
<updated>2026-03-06T03:01:09Z</updated>
<published>2018-05-01T00:00:00Z</published>
<summary type="text">ALMA North America Cycle 3 Study Project Final Report: Extensions and Enhancements to the ALMA Phasing System
Matthews, L.; Crew, G.; Hecht, M. H.
The Atacama Millimeter/submillimeter Array (ALMA) Phasing Project (APP) has successfully brought Very Long Baseline Interferometry (VLBI) to ALMA. Nine VLBI science projects were observed in 2017 during ALMA’s inaugural VLBI campaign as part of Cycle 4. This marked the culmination of an international 5-year effort that involved both hardware and software contributions from the APP Team to the ALMA Observatory. A Cycle 3 ALMA North America (NA) Study was proposed to enable ongoing support of VLBI at ALMA and the investigation of enhancements to the ALMA Phasing System (APS) that were not within the scope of the original APP project. These included: (1) an extension of phasing capabilities to the submillimeter (Band 7); (2) an exploration of correlation techniques to compensate for the mismatch in sampling rates between ALMA and other VLBI stations; (3) prescriptions for optimization of ALMA baseband delay application; (4) defining and documenting data calibration and analysis pathways for experiments utilizing phased ALMA data.
This report summarizes outcomes from the Cycle 3 Study. Work on the APP remains ongoing under a Cycle 4 Study award and will continue under a pending ALMA NA Cycle 5 Development Project that is expected to enable full implementation of the capabilities explored under the Cycle 3 and Cycle 4 Studies.
</summary>
<dc:date>2018-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pulsars, Magnetars, and Transients with Phased ALMA, Final Report</title>
<link href="https://hdl.handle.net/1721.1/165048" rel="alternate"/>
<author>
<name>Cordes, James</name>
</author>
<author>
<name>Blackburn, Lindy</name>
</author>
<author>
<name>Chatterjee, Shami</name>
</author>
<author>
<name>Crew, Geoffrey</name>
</author>
<author>
<name>Devignes, Gregory</name>
</author>
<author>
<name>Doeleman, Shep</name>
</author>
<author>
<name>Kramer, Michael</name>
</author>
<author>
<name>Lazio, Joe</name>
</author>
<author>
<name>Liu, Kuo</name>
</author>
<author>
<name>Ransom, Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/165048</id>
<updated>2026-03-06T03:01:05Z</updated>
<published>2017-10-01T00:00:00Z</published>
<summary type="text">Pulsars, Magnetars, and Transients with Phased ALMA, Final Report
Cordes, James; Blackburn, Lindy; Chatterjee, Shami; Crew, Geoffrey; Devignes, Gregory; Doeleman, Shep; Kramer, Michael; Lazio, Joe; Liu, Kuo; Ransom, Scott
The present study developed fast time-domain capability for the ALMA phased-array system that is needed for observations of compact objects in the Galactic center and elsewhere in the Galaxy. ALMA can provide unparalleled sensitivity to a spectral region that has been poorly explored for neutron stars.  Observations at mm and sub-mm wavelengths have the potential for providing decisive observational constraints on emission processes from the magnetospheres of neutron stars. ALMA is also important for surveys for pulsars and transients in the Galactic center.  This is a final report of the work performed.
</summary>
<dc:date>2017-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Final Report: ALMA Phasing Project Augmentation</title>
<link href="https://hdl.handle.net/1721.1/165047" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<id>https://hdl.handle.net/1721.1/165047</id>
<updated>2026-03-06T03:01:10Z</updated>
<published>2017-05-24T00:00:00Z</published>
<summary type="text">Final Report: ALMA Phasing Project Augmentation
Matthews, Lynn D.
This report provides a summary of activities carried out under the NA ALMA Development Fund award to augment the National Science Foundation MRI award for the ALMA Beamformer proposal, performed as the ALMA Phasing Project.
</summary>
<dc:date>2017-05-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>MRI: Development of an ALMA Beamformer for Ultra High Resolution VLBI and High Frequency Phased Array Science</title>
<link href="https://hdl.handle.net/1721.1/165046" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<id>https://hdl.handle.net/1721.1/165046</id>
<updated>2026-03-06T03:01:11Z</updated>
<published>2016-12-12T00:00:00Z</published>
<summary type="text">MRI: Development of an ALMA Beamformer for Ultra High Resolution VLBI and High Frequency Phased Array Science
Matthews, Lynn D.
This is the final report covering the activities of the National Science Foundation MRI Award number 1126433.
</summary>
<dc:date>2016-12-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cycle 11 VLBI Acceptance: Delay Fix Final Report</title>
<link href="https://hdl.handle.net/1721.1/165045" rel="alternate"/>
<author>
<name>Crew, Geoffrey B.</name>
</author>
<author>
<name>Matthews, Lynn D.</name>
</author>
<id>https://hdl.handle.net/1721.1/165045</id>
<updated>2026-03-06T03:01:07Z</updated>
<published>2024-08-20T00:00:00Z</published>
<summary type="text">Cycle 11 VLBI Acceptance: Delay Fix Final Report
Crew, Geoffrey B.; Matthews, Lynn D.
This report presents details of changes to the VLBI software system made on the path to the Cycle 11 Acceptance. The principal new feature is the long-awaited "delay fix" to the APS, which is presented in considerable detail here. This was proposed as a new ADF implementation project, APP2. The delay fix is a technology that allows the full 2-GHz continuum band to be used in active phasing, resulting in a lower flux density limit for direct observation of science targets. It also allows a greater range of passive phasing targets in support of weaker science targets. Work on the delay fix began during Cycle 3 and is only now concluded, in time for additional testing as desired in Cycle 11 and then full use in Cycle 12. Other than the delay fix, a new software device was delivered to support the new hydrogen maser, and there was the usual round of minor scripting updates in the SSR component. This report is written to present a largely self-contained history of the delay fix effort and its fruit, and it also covers the other, minor development items which are formally part of the Cycle 11 Acceptance. As the software phasing engine which supports this mode is also now the recommended engine for the APS, this document also serves to present relatively complete documentation on the final implementation of the SSR as well as the TelCal sides of the APS. The underlying VOM remains as it was deployed and used in Cycle 4.
</summary>
<dc:date>2024-08-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>ObsMode2022 Cycle 10 Go/No-Go Report for VLBI Capabilities</title>
<link href="https://hdl.handle.net/1721.1/165044" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, Geoff</name>
</author>
<author>
<name>Fish, Vincent</name>
</author>
<author>
<name>Messias, Hugo</name>
</author>
<author>
<name>Titus, Mike</name>
</author>
<author>
<name>Krichbaum, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/165044</id>
<updated>2026-03-06T03:01:13Z</updated>
<published>2022-11-04T00:00:00Z</published>
<summary type="text">ObsMode2022 Cycle 10 Go/No-Go Report for VLBI Capabilities
Matthews, Lynn D.; Crew, Geoff; Fish, Vincent; Messias, Hugo; Titus, Mike; Krichbaum, Thomas
We present here a report on the Very Long Baseline Interferometry (VLBI) development efforts under consideration at ALMA as new offerings in Cycle 10. These activities are being carried out under the ALMA North America Development Project known as the ALMA Phasing Project Phase 3 (APP3). The two VLBI priorities previously identified for Cycle 10 by the ObsMode process are: (1) spectral line VLBI with flexible tuning; (2) panchromatic VLBI for spectral line and continuum (Bands 1, 3, 6 and 7, with provisions for extension to any other band). This document provides an overview of the development, testing, and readiness of these capabilities. Some updates on minor development efforts are also provided.
This report was prepared for the formal acceptance of the software required for Cycle 10. Notionally it is ALMA Technical Note #25, though not yet published as such.
</summary>
<dc:date>2022-11-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cycle 8 (2021–2022) VLBI Delta Acceptance Report</title>
<link href="https://hdl.handle.net/1721.1/165043" rel="alternate"/>
<author>
<name>Crew, Geoff</name>
</author>
<author>
<name>Vila-Vilaro, Baltasar</name>
</author>
<id>https://hdl.handle.net/1721.1/165043</id>
<updated>2026-03-06T03:01:03Z</updated>
<published>2022-02-09T00:00:00Z</published>
<summary type="text">Cycle 8 (2021–2022) VLBI Delta Acceptance Report
Crew, Geoff; Vila-Vilaro, Baltasar
This report summarizes the acceptance process for VLBI which was carried out in 2021 as part of the normal Cycle 8 Acceptance and later, through the 2021–2022 preparations for the 2022 VLBI Campaigns. It reviews the hardware setup and checks that must be made at various times prior to any observations with VLBI peers. Then there is a suite of offline tests of the SB-generation that include observation simulation. There is also a suite of on-sky regression tests that exercise the ALMA Phasing System (APS). The final step of the acceptance has historically been the execution of a short “dress rehearsal” (DR), usually in January of the cycle year, with the Event Horizon Telescope (EHT) to provide end-to-end validation of the system via fringes to remote peers. Collectively these tests establish that the science VLBI projects may proceed without issue.
This report was prepared for the formal acceptance of the software required for Cycle 8. Notionally, it is ALMA Technical Note #24, though not yet published as such.
</summary>
<dc:date>2022-02-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a diamond-based in-vessel soft x-ray detector for the SPARC tokamak</title>
<link href="https://hdl.handle.net/1721.1/165042" rel="alternate"/>
<author>
<name>Normile, S</name>
</author>
<author>
<name>Vezinet, D</name>
</author>
<author>
<name>Perks, C</name>
</author>
<author>
<name>Bombarda, F</name>
</author>
<author>
<name>Verona-Rinati, G</name>
</author>
<author>
<name>Rice, JE</name>
</author>
<author>
<name>Verona, C</name>
</author>
<author>
<name>Raso, AM</name>
</author>
<author>
<name>Angelone, M</name>
</author>
<id>https://hdl.handle.net/1721.1/165042</id>
<updated>2026-03-06T03:09:00Z</updated>
<published>2024-09-24T00:00:00Z</published>
<summary type="text">Design of a diamond-based in-vessel soft x-ray detector for the SPARC tokamak
Normile, S; Vezinet, D; Perks, C; Bombarda, F; Verona-Rinati, G; Rice, JE; Verona, C; Raso, AM; Angelone, M
The in-vessel silicon diode arrays that are used for soft x-ray detection in many tokamaks are sensitive to neutron damage, making them unsuitable for burning plasma devices such as SPARC. In such a device, the silicon diodes would need to be placed far from the plasma—limiting their field of view—or an alternative detector could be used. Here, we present the design of a camera containing an array of chemical vapor deposition single-crystal diamonds, which will be placed in the upper and lower port plugs of the SPARC tokamak with a large enough view of the poloidal cross section to enable tomographic inversion. The camera design presented here is optimized to provide a wide field of view of the poloidal cross section. Simulated plasma conditions are used to estimate the x-ray signal that this detector array will receive and to fine-tune the camera placement within the tokamak.
</summary>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of the prototype for the SPARC hard X-ray monitor</title>
<link href="https://hdl.handle.net/1721.1/165041" rel="alternate"/>
<author>
<name>Panontin, E</name>
</author>
<author>
<name>Tinguely, RA</name>
</author>
<author>
<name>Hartwig, ZS</name>
</author>
<author>
<name>Saltos, AA</name>
</author>
<author>
<name>Vezinet, D</name>
</author>
<author>
<name>Rice, J</name>
</author>
<id>https://hdl.handle.net/1721.1/165041</id>
<updated>2026-03-06T03:08:57Z</updated>
<published>2024-08-05T00:00:00Z</published>
<summary type="text">Development of the prototype for the SPARC hard X-ray monitor
Panontin, E; Tinguely, RA; Hartwig, ZS; Saltos, AA; Vezinet, D; Rice, J
The SPARC tokamak will be equipped with a hard X-ray (HXR) monitor system capable of measuring the bremsstrahlung emission from runaway electrons with photon energies in excess of about 100 keV. This diagnostic will detect the formation of runaway electron beams during plasma start-up and inform the plasma control system to terminate the discharge early to protect the machine. In this work, we present a 0D estimate of the HXR emission in SPARC during plasma start-up. Then we discuss the characterization of a prototype of the HXR monitor. The detector comprises a 1 × 1 in. LaBr3 inorganic scintillator coupled with a photomultiplier tube and has been tested with γ-ray sources to determine its dynamic range. Finally, two possible modes of operation for spectroscopic and current-mode measurements on SPARC are proposed.
</summary>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perspectives on pilot-wave hydrodynamics</title>
<link href="https://hdl.handle.net/1721.1/165040" rel="alternate"/>
<author>
<name>Bush, John WM</name>
</author>
<author>
<name>Frumkin, Valeri</name>
</author>
<author>
<name>Sáenz, Pedro J</name>
</author>
<id>https://hdl.handle.net/1721.1/165040</id>
<updated>2026-03-06T03:08:58Z</updated>
<published>2024-07-15T00:00:00Z</published>
<summary type="text">Perspectives on pilot-wave hydrodynamics
Bush, John WM; Frumkin, Valeri; Sáenz, Pedro J
We present a number of fresh perspectives on pilot-wave hydrodynamics, the field initiated in 2005 by Couder and Fort's discovery that millimetric droplets self-propelling along the surface of a vibrating bath can capture certain features of quantum systems. A recurring theme will be that pilot-wave hydrodynamics furnishes a classical framework for reproducing many quantum phenomena and allows one to rationalize such phenomena mechanistically, from a local realist perspective, obviating the need to appeal to quantum nonlocality. The distinction is drawn between hydrodynamic pilot-wave theory and its quantum counterparts, Bohmian mechanics, the Bohm–Vigier stochastic pilot-wave theory, and de Broglie's theory of the double-solution. Each of these quantum predecessors provides a valuable touchstone as we take the physical picture engendered in the walking droplets and extend it into the quantum realm via theoretical modeling. Emphasis is given to recent developments in the field, both experimental and conceptual, and to forecasting potentially fruitful new directions.
</summary>
<dc:date>2024-07-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated transient grating spectroscopy mapping and signal control for large samples</title>
<link href="https://hdl.handle.net/1721.1/165039" rel="alternate"/>
<author>
<name>Weaver, Colin</name>
</author>
<author>
<name>Stapelberg, Myles</name>
</author>
<author>
<name>Short, Michael P</name>
</author>
<author>
<name>Wylie, Angus</name>
</author>
<author>
<name>Artalejo, Elena Botica</name>
</author>
<id>https://hdl.handle.net/1721.1/165039</id>
<updated>2026-03-06T03:08:55Z</updated>
<published>2024-07-10T00:00:00Z</published>
<summary type="text">Automated transient grating spectroscopy mapping and signal control for large samples
Weaver, Colin; Stapelberg, Myles; Short, Michael P; Wylie, Angus; Artalejo, Elena Botica
We present developments for the mapping of large areas using transient grating spectroscopy (TGS) that allow for smoother, larger, autonomous measurements of material samples. The addition of a precise linear stage, aligned parallel to the laser sampling direction and coupled with signal-optimizing control, allows for hands-free, self-correcting measurements. In addition, simplifying the sample holder to a form small enough to mount directly on the linear stage provides a straightforward, low-cost solution for automated TGS applications. This capability is demonstrated by taking large uninterrupted maps of gradient wafers, and the results are validated on calibrated tungsten samples and control TGS samples from gradient wafers.
</summary>
<dc:date>2024-07-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Manipulating the duration of picoinjection controls the injected volume of individual droplets</title>
<link href="https://hdl.handle.net/1721.1/165038" rel="alternate"/>
<author>
<name>Thakur, R.</name>
</author>
<author>
<name>Weitz, D.</name>
</author>
<id>https://hdl.handle.net/1721.1/165038</id>
<updated>2026-03-06T03:09:01Z</updated>
<published>2024-07-02T00:00:00Z</published>
<summary type="text">Manipulating the duration of picoinjection controls the injected volume of individual droplets
Thakur, R.; Weitz, D.
The ability to add reagents into droplets is required in many microfluidic workflows. Picoinjection can address this need; however, it is unable to control the injection volume for each individual droplet. Here, we present an improved picoinjection method that can inject controlled volumes into individual droplets. We achieve this by adjusting the injection duration for each picoinjection event. This improved picoinjection method can be used to create complex microfluidic workflows that are able to control the biochemical composition of individual droplets.
</summary>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multitask methods for predicting molecular properties from heterogeneous data</title>
<link href="https://hdl.handle.net/1721.1/165037" rel="alternate"/>
<author>
<name>Fisher, KE</name>
</author>
<author>
<name>Herbst, MF</name>
</author>
<author>
<name>Marzouk, YM</name>
</author>
<id>https://hdl.handle.net/1721.1/165037</id>
<updated>2026-03-06T03:08:49Z</updated>
<published>2024-07-03T00:00:00Z</published>
<summary type="text">Multitask methods for predicting molecular properties from heterogeneous data
Fisher, KE; Herbst, MF; Marzouk, YM
Data generation remains a bottleneck in training surrogate models to predict molecular properties. We demonstrate that multitask Gaussian process regression overcomes this limitation by leveraging both expensive and cheap data sources. In particular, we consider training sets constructed from coupled-cluster (CC) and density functional theory (DFT) data. We report that multitask surrogates can predict at CC-level accuracy with a reduction in data generation cost by over an order of magnitude. Of note, our approach allows the training set to include DFT data generated by a heterogeneous mix of exchange–correlation functionals without imposing any artificial hierarchy on functional accuracy. More generally, the multitask framework can accommodate a wider range of training set structures—including the full disparity between the different levels of fidelity—than existing kernel approaches based on Δ-learning although we show that the accuracy of the two approaches can be similar. Consequently, multitask regression can be a tool for reducing data generation costs even further by opportunistically exploiting existing data sources.
</summary>
<dc:date>2024-07-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.344 Cellular Metabolism and Cancer: Nature or Nurture?, Fall 2018</title>
<link href="https://hdl.handle.net/1721.1/165036" rel="alternate"/>
<author>
<name>Lau, Allison</name>
</author>
<author>
<name>Lien, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/165036</id>
<updated>2026-03-09T18:17:48Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">7.344 Cellular Metabolism and Cancer: Nature or Nurture?, Fall 2018
Lau, Allison; Lien, Evan
In this course we will explore how altered metabolism drives cancer progression. Students will learn (1) how to read, discuss, and critically evaluate scientific findings in the primary research literature, (2) how scientists experimentally approach fundamental issues in biology and medicine, (3) how recent findings have challenged the traditional “textbook” understanding of metabolism and given us new insight into cancer, and (4) how a local pharmaceutical company is developing therapeutics to target cancer metabolism in an effort to revolutionize cancer therapy.
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.343 Single-Molecule Imaging: Capturing Nanoscale Cellular Machines in Action, Fall 2021</title>
<link href="https://hdl.handle.net/1721.1/165035" rel="alternate"/>
<author>
<name>Kose, Hazal B.</name>
</author>
<id>https://hdl.handle.net/1721.1/165035</id>
<updated>2026-03-09T18:16:55Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">7.343 Single-Molecule Imaging: Capturing Nanoscale Cellular Machines in Action, Fall 2021
Kose, Hazal B.
Did you know that we have approximately 2 meters of DNA packed in our cells, which are less than 10 μm in diameter? Or that to replicate DNA it is copied at a rate of 70,000 base pairs per second by a cellular apparatus that coordinates at least six different enzymes? Or that microtubules form "railways" greater than 1 meter long upon which molecular machines transport cargo within nerve cells? In this course, we will explore how single-molecule imaging techniques capture the mega-cellular machines working in real-time.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.342 Immune Cell Migration: On the Move in Response to Pathogens and Cancer Immunotherapy, Fall 2021</title>
<link href="https://hdl.handle.net/1721.1/165034" rel="alternate"/>
<author>
<name>Fessenden, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/165034</id>
<updated>2026-03-09T18:16:00Z</updated>
<published>2021-01-01T00:00:00Z</published>
<summary type="text">7.342 Immune Cell Migration: On the Move in Response to Pathogens and Cancer Immunotherapy, Fall 2021
Fessenden, Timothy
The mammalian immune system is sometimes called a “liquid organ,” capable of rapidly initiating and then resolving potent responses to pathogens at almost any location in the organism. What protein machinery drives immune cells’ rapid migration? How do cells make pathfinding decisions around barriers? How do they find rare pathogens or target cells in complex environments?&#13;
&#13;
This course will begin by examining the general immunological functions of two major immune cell types—T cells and dendritic cells. Through our readings and discussions, we will examine the connections between immunotherapy as an emerging treatment modality for a variety of cancers and the migration of immune cells.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2021-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.341 Turning Evolutionary Dials: Directed Evolution Techniques for Climate Change and Beyond, Spring 2022</title>
<link href="https://hdl.handle.net/1721.1/165033" rel="alternate"/>
<author>
<name>Kizer, Megan</name>
</author>
<author>
<name>Wilson, Robbie</name>
</author>
<id>https://hdl.handle.net/1721.1/165033</id>
<updated>2026-03-09T18:15:19Z</updated>
<published>2022-01-01T00:00:00Z</published>
<summary type="text">7.341 Turning Evolutionary Dials: Directed Evolution Techniques for Climate Change and Beyond, Spring 2022
Kizer, Megan; Wilson, Robbie
This course will cover the many ways in which we have realized evolution in the laboratory toward functional biomolecules, such as protein and nucleic-acid-based therapeutics, enzymes that catalyze production of synthetic drugs, and carbon-dioxide capture molecules to lessen the impact of climate change. Students will both become familiar with the field of directed molecular evolution and learn how to critically analyze primary research papers, design research experiments, and present data relating to molecular biology and evolution. The importance of directed evolution in biomedical and biotechnological careers, both academic and industrial, will be highlighted.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.342 Synapse Remodeling in Health and Disease, Fall 2022</title>
<link href="https://hdl.handle.net/1721.1/165032" rel="alternate"/>
<author>
<name>Ordonez, Dalila</name>
</author>
<author>
<name>Boivin, Josiah</name>
</author>
<id>https://hdl.handle.net/1721.1/165032</id>
<updated>2026-03-09T18:14:38Z</updated>
<published>2022-01-01T00:00:00Z</published>
<summary type="text">7.342 Synapse Remodeling in Health and Disease, Fall 2022
Ordonez, Dalila; Boivin, Josiah
Our brains are remarkably adaptable throughout our lives. Individual brain cells called neurons form synapses, sites of physical connection and communication between neurons, and then repeatedly rewire those connections in response to new experiences or to neuronal cell death caused by injury, disease, or aging. In this course, we will explore how neurons establish their synapses in the healthy brain during childhood and later in life, and how this process goes awry in disease states. More specifically, we will discuss how the brain forms its synapses early in life, stabilizes a subset of those synapses for long-term maintenance, and continues to add and remove synapses throughout life. We will then explore synapse dysfunction in diseases such as autism and Alzheimer’s disease, which involve abnormal increases or losses of synaptic connections, respectively. We will also consider synapse remodeling, a process of adding and removing synaptic connections to optimize our brain network, in the context of neuroinflammation, recovery from traumatic brain injury, and psychological trauma following prolonged stress.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.342 How To Build An Animal: Cell Fate and Identity in Development and Disease, Fall 2017</title>
<link href="https://hdl.handle.net/1721.1/165031" rel="alternate"/>
<author>
<name>Blanton, Laura V</name>
</author>
<id>https://hdl.handle.net/1721.1/165031</id>
<updated>2026-03-09T18:14:04Z</updated>
<published>2017-01-01T00:00:00Z</published>
<summary type="text">7.342 How To Build An Animal: Cell Fate and Identity in Development and Disease, Fall 2017
Blanton, Laura V
In this course, we will explore how animals determine and maintain cell fate. We will discuss changes to DNA structure and packaging, special proteins (known as "master regulators") with the ability to alter cell fate via transcription, cell-cell signaling, and RNA localization.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2017-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.341 DNA's Sister Does All the Work: The Central Roles of RNA in Gene Expression, Spring 2019</title>
<link href="https://hdl.handle.net/1721.1/165030" rel="alternate"/>
<author>
<name>Fiszbein, Ana</name>
</author>
<author>
<name>Jens, Marvin</name>
</author>
<id>https://hdl.handle.net/1721.1/165030</id>
<updated>2026-03-09T18:13:29Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">7.341 DNA's Sister Does All the Work: The Central Roles of RNA in Gene Expression, Spring 2019
Fiszbein, Ana; Jens, Marvin
This course will explore the current frontiers of the world of RNA biology with primary research papers to trace how the original odd detail sometimes leads to major discoveries. As we discuss the different transcripts and processing events that enable this exciting diversity of RNA functions, we invite you to read landmark papers with us, think critically, and ask new questions, as we marvel at the wonders of RNA.  &#13;
  &#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.341 The Microbiome and Drug Delivery: Cross-species Communication in Health and Disease, Spring 2018</title>
<link href="https://hdl.handle.net/1721.1/165029" rel="alternate"/>
<author>
<name>Beyzavi, Ali</name>
</author>
<author>
<name>Jimenez, Miguel</name>
</author>
<id>https://hdl.handle.net/1721.1/165029</id>
<updated>2026-03-09T18:12:34Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">7.341 The Microbiome and Drug Delivery: Cross-species Communication in Health and Disease, Spring 2018
Beyzavi, Ali; Jimenez, Miguel
There are more microbes permanently living in our gut than there are cells in the human body. This rich community of bacteria, fungi and viruses, called the microbiome, plays a central role in human health and disease. Recent research has linked this passenger community to nutrition, circadian rhythms, infectious disease, inflammatory disease, cancer, diabetes, arthritis and even immune system and nervous system development. How can we analyze such a complex system? Can we exploit the microbiome to improve human health? Can interactions with microbes be harnessed for drug delivery?&#13;
&#13;
In this course, we will learn to critically assess the primary scientific literature to find answers to these questions and learn to distinguish between correlation and causality. We will learn how mechanistic insights and emerging tools, such as synthetic biology and microfluidics, together are transforming microbiome research, and might lead to new types of therapeutics and drug delivery for improving human health.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.341 Microbes at War: The Mechanisms That Drive Infectious Diseases, Fall 2022</title>
<link href="https://hdl.handle.net/1721.1/165028" rel="alternate"/>
<author>
<name>McLellan, Lisa</name>
</author>
<id>https://hdl.handle.net/1721.1/165028</id>
<updated>2026-03-09T18:11:52Z</updated>
<published>2022-01-01T00:00:00Z</published>
<summary type="text">7.341 Microbes at War: The Mechanisms That Drive Infectious Diseases, Fall 2022
McLellan, Lisa
How can a tick bite cause a meat allergy? And does cranberry juice do anything to help cure a urinary tract infection? To answer these and other questions, we are going to take a dive into the molecular world of microbes. In this class, we will use the primary research literature to explore the molecular interactions between pathogens and their hosts that allow microbes to cause infectious diseases. We will examine the factors that pathogens use to colonize a host and how the host response can impact the outcome of the infection. By the end of the class, students will have both developed critical scientific skills in evaluating scientific literature and an appreciation of the microbes influencing our lives and health every day.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2022-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.341 Biomaterials and Devices for Disease Diagnosis and Therapy, Fall 2018</title>
<link href="https://hdl.handle.net/1721.1/165027" rel="alternate"/>
<author>
<name>McHugh, Kevin</name>
</author>
<author>
<name>Beyzavi, Ali</name>
</author>
<id>https://hdl.handle.net/1721.1/165027</id>
<updated>2026-03-09T18:11:08Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">7.341 Biomaterials and Devices for Disease Diagnosis and Therapy, Fall 2018
McHugh, Kevin; Beyzavi, Ali
Students will learn about the use of biomaterials to create advanced diagnostic tools for detection of infectious and chronic diseases, restore insulin production to supplement lost pancreatic function in diabetes, provide cells with appropriate physical, mechanical, and biochemical cues to direct tissue regeneration, and enhance the efficacy of cancer immunotherapy.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.342 The Seeds and the Soil: Roles of Tumor Heterogeneity and the Tumor Microenvironment in Cancer Metastasis, Fall 2020</title>
<link href="https://hdl.handle.net/1721.1/165026" rel="alternate"/>
<author>
<name>Lambert, Arthur</name>
</author>
<author>
<name>Zhang, Yun</name>
</author>
<id>https://hdl.handle.net/1721.1/165026</id>
<updated>2026-03-09T18:09:46Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">7.342 The Seeds and the Soil: Roles of Tumor Heterogeneity and the Tumor Microenvironment in Cancer Metastasis, Fall 2020
Lambert, Arthur; Zhang, Yun
Metastatic disease is responsible for the vast majority of deaths associated with cancer, yet our understanding of how metastases arise is still developing. In this course, we will introduce various concepts and models that have been proposed to explain how cancer cells disseminate from a primary tumor to distant anatomical sites. We'll learn about the critical factors that influence cancer metastasis through analysis and discussion of relevant primary research articles, with an emphasis on mechanisms of metastasis that can be applied across different cancer types. Students will gain a broad understanding of the field of cancer metastasis, including state-of-the-art techniques that are being used to address pressing questions in the field.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.343 Microbial Megaproducers: Discovery, Biosynthesis, Engineering and Applications of Natural Products, Fall 2020</title>
<link href="https://hdl.handle.net/1721.1/165025" rel="alternate"/>
<author>
<name>Ulrich, Emily C</name>
</author>
<author>
<name>Hetrick, Kenton</name>
</author>
<id>https://hdl.handle.net/1721.1/165025</id>
<updated>2026-03-09T18:08:41Z</updated>
<published>2020-01-01T00:00:00Z</published>
<summary type="text">7.343 Microbial Megaproducers: Discovery, Biosynthesis, Engineering and Applications of Natural Products, Fall 2020
Ulrich, Emily C; Hetrick, Kenton
The natural world is a mega-factory of small molecules, peptides, fatty acids, phospholipids, and a host of other compounds, known as natural products (NPs). Immensely diverse in structure and function, NPs have strongly influenced how we treat infectious disease, cancer, pain, and a host of other conditions. Roughly half of the drugs that have been approved in the past 30 years are NPs, derivatives of NPs or NP-inspired. In this discussion-based course, we will delve into research on discovering NPs from producing organisms, investigating the biochemistry of NP production, and using synthetic biology to create NP derivatives—all with a particular emphasis on how genomic data guides and informs all these studies.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2020-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.342 A Double-Edged Sword: Cellular Immunity in Health and Disease, Fall 2018</title>
<link href="https://hdl.handle.net/1721.1/165024" rel="alternate"/>
<author>
<name>Ma, Haiting</name>
</author>
<id>https://hdl.handle.net/1721.1/165024</id>
<updated>2026-03-09T18:09:08Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">7.342 A Double-Edged Sword: Cellular Immunity in Health and Disease, Fall 2018
Ma, Haiting
Immune cells protect our bodies from both self-derived threats and exogenous pathogens, while keeping peace with normal cells and non-harmful commensal microbiota. They have various mechanisms to perform these tasks, a capacity that is essential for maintaining homeostasis. However, these same mechanisms can backfire, resulting in severe disorders such as immunodeficiency, chronic inflammation, allergy, degenerative diseases, and cancer. This course discusses the connections between normal physiology and disease by examining the developmental relationship between innate and adaptive immune cells as well as the functions and malfunctions of immune cells. The course familiarizes students with both basic biological principles (such as cell death and immune cell signaling) and clinical applications (such as immune checkpoint blockade). More generally, students learn to identify relevant primary research literature, critically evaluate experimental data, and reach their own conclusions based on primary data.&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Flow Synthesis of Artificial Heme Enzymes for Enantiodivergent Biocatalysis</title>
<link href="https://hdl.handle.net/1721.1/165023" rel="alternate"/>
<author>
<name>Fittolani, Giulio</name>
</author>
<author>
<name>Kutateladze, Dennis A</name>
</author>
<author>
<name>Loas, Andrei</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<author>
<name>Pentelute, Bradley L</name>
</author>
<id>https://hdl.handle.net/1721.1/165023</id>
<updated>2026-03-05T06:13:21Z</updated>
<published>2025-01-22T00:00:00Z</published>
<summary type="text">Automated Flow Synthesis of Artificial Heme Enzymes for Enantiodivergent Biocatalysis
Fittolani, Giulio; Kutateladze, Dennis A; Loas, Andrei; Buchwald, Stephen L; Pentelute, Bradley L
The remarkable efficiency with which enzymes catalyze small-molecule reactions has driven their widespread application in organic chemistry. Here, we employ automated fast-flow solid-phase synthesis to access catalytically active full-length enzymes without restrictions on the number and structure of noncanonical amino acids incorporated. We demonstrate the total syntheses of iron-dependent Bacillus subtilis myoglobin (BsMb) and sperm whale myoglobin (SwMb). The synthetic enzymes displayed excellent enantioselectivity and yield in carbene transfer reactions. Absolute control over enantioselectivity in styrene cyclopropanation was achieved using synthetic L- and D-BsMb mutants, which delivered each enantiomer of cyclopropane product in identical and opposite enantiomeric enrichment. BsMb mutants outfitted with noncanonical amino acids were used to facilitate detailed structure–activity relationship studies, revealing a previously unrecognized hydrogen-bonding interaction as the primary driver of enantioselectivity in styrene cyclopropanation. We anticipate that our approach will advance biocatalysis by providing reliable and rapid access to fully synthetic enzymes possessing noncanonical amino acids.
</summary>
<dc:date>2025-01-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Ligand for Cu-Catalyzed Amination of Base-Sensitive (Hetero)aryl Chlorides</title>
<link href="https://hdl.handle.net/1721.1/165022" rel="alternate"/>
<author>
<name>Ai, Han-Jun</name>
</author>
<author>
<name>Mai, Binh Khanh</name>
</author>
<author>
<name>Liu, Cecilia</name>
</author>
<author>
<name>Liu, Peng</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165022</id>
<updated>2026-03-05T06:13:11Z</updated>
<published>2025-10-13T00:00:00Z</published>
<summary type="text">Development of a Ligand for Cu-Catalyzed Amination of Base-Sensitive (Hetero)aryl Chlorides
Ai, Han-Jun; Mai, Binh Khanh; Liu, Cecilia; Liu, Peng; Buchwald, Stephen L
We report a new N1,N2-diarylbenzene-1,2-diamine ligand, L6, that supports a copper catalyst capable of coupling base-sensitive aryl chlorides and amines that were previously unsuccessful substrates for Cu-catalyzed C–N coupling. A detailed structure–activity relationship study, combined with density functional theory (DFT) calculations, was used to uncover two key structural features that contribute to the efficacy of the catalyst derived from L6. First, steric repulsion caused by a methyl substituent induces a conformational change that opens up additional space for ligand deprotonation and oxidative addition. Second, the trifluoromethyl groups create electrostatic interactions between the ligand and aryl chloride substrates that facilitate oxidative addition via through-space ligand–substrate interaction.
</summary>
<dc:date>2025-10-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ligand Design Enables Cu-Catalyzed Etherification of Aryl Bromides Using Mild Bases</title>
<link href="https://hdl.handle.net/1721.1/165021" rel="alternate"/>
<author>
<name>Strauss, Michael J</name>
</author>
<author>
<name>Greaves, Megan E</name>
</author>
<author>
<name>Kim, Seoung-Tae</name>
</author>
<author>
<name>Schmidt, Michael A</name>
</author>
<author>
<name>Scola, Paul M</name>
</author>
<author>
<name>Buchwald, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/165021</id>
<updated>2026-03-05T06:13:20Z</updated>
<published>2026-01-05T00:00:00Z</published>
<summary type="text">Ligand Design Enables Cu-Catalyzed Etherification of Aryl Bromides Using Mild Bases
Strauss, Michael J; Greaves, Megan E; Kim, Seoung-Tae; Schmidt, Michael A; Scola, Paul M; Buchwald, Stephen L
We report a Cu-catalyzed method for the efficient coupling of base-sensitive aryl bromides and alcohols utilizing a newly developed N1,N2-diarylbenzene-1,2-diamine ligand, L15. This ligand was developed to increase the Lewis acidity of the Cu center, thereby permitting the use of a substantially milder base (NaOTMS or NaOPh) relative to those required in a previous iteration of this methodology (NaOMe or NaOt-Bu). Under the optimized reaction conditions, several classes of previously incompatible aryl bromides were efficiently transformed, including base-sensitive heterocycles and those containing acidic functional groups. Kinetic analyses support that C–O coupling proceeds via a mechanism involving binding/deprotonation of alcohol nucleophiles, that the pKa of the base influences the overall rate law, and that substoichiometric quantities of strong base can be utilized to accelerate ligand activation and thereby increase the overall rate of the transformation.
</summary>
<dc:date>2026-01-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time-Resolved Line Shapes of Single Quantum Emitters via Machine Learned Photon Correlations</title>
<link href="https://hdl.handle.net/1721.1/165020" rel="alternate"/>
<author>
<name>Proppe, Andrew H</name>
</author>
<author>
<name>Lee, Kin Long Kelvin</name>
</author>
<author>
<name>Kaplan, Alexander EK</name>
</author>
<author>
<name>Ginterseder, Matthias</name>
</author>
<author>
<name>Krajewska, Chantalle J</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/165020</id>
<updated>2026-03-05T06:13:24Z</updated>
<published>2023-08-04T00:00:00Z</published>
<summary type="text">Time-Resolved Line Shapes of Single Quantum Emitters via Machine Learned Photon Correlations
Proppe, Andrew H; Lee, Kin Long Kelvin; Kaplan, Alexander EK; Ginterseder, Matthias; Krajewska, Chantalle J; Bawendi, Moungi G
Solid-state single-photon emitters (SPEs) are quantum light sources that combine atomlike optical properties with solid-state integration and fabrication capabilities. SPEs are hindered by spectral diffusion, where the emitter's surrounding environment induces random energy fluctuations. Timescales of spectral diffusion span nanoseconds to minutes and require probing single emitters to remove ensemble averaging. Photon correlation Fourier spectroscopy (PCFS) can be used to measure time-resolved single emitter line shapes, but is hindered by poor signal-to-noise ratio in the measured correlation functions at early times due to low photon counts. Here, we develop a framework to simulate PCFS correlation functions directly from diffusing spectra that match well with experimental data for single colloidal quantum dots. We use these simulated datasets to train a deep ensemble autoencoder machine learning model that outputs accurate, noiseless, and probabilistic reconstructions of the noisy correlations. Using this model, we obtain reconstructed time-resolved single dot emission line shapes at timescales as low as 10 ns, which are otherwise completely obscured by noise. This enables PCFS to extract optical coherence times on the same timescales as Hong-Ou-Mandel two-photon interference, but with the advantage of providing spectral information in addition to estimates of photon indistinguishability. Our machine learning approach is broadly applicable to different photon correlation spectroscopy techniques and SPE systems, offering an enhanced tool for probing single emitter line shapes on previously inaccessible timescales.
</summary>
<dc:date>2023-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncovering temperature-dependent exciton-polariton relaxation mechanisms in hybrid organic-inorganic perovskites</title>
<link href="https://hdl.handle.net/1721.1/165019" rel="alternate"/>
<author>
<name>Laitz, Madeleine</name>
</author>
<author>
<name>Kaplan, Alexander EK</name>
</author>
<author>
<name>Deschamps, Jude</name>
</author>
<author>
<name>Barotov, Ulugbek</name>
</author>
<author>
<name>Proppe, Andrew H</name>
</author>
<author>
<name>García-Benito, Inés</name>
</author>
<author>
<name>Osherov, Anna</name>
</author>
<author>
<name>Grancini, Giulia</name>
</author>
<author>
<name>deQuilettes, Dane W</name>
</author>
<author>
<name>Nelson, Keith A</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Bulović, Vladimir</name>
</author>
<id>https://hdl.handle.net/1721.1/165019</id>
<updated>2026-03-05T06:13:18Z</updated>
<published>2023-04-27T00:00:00Z</published>
<summary type="text">Uncovering temperature-dependent exciton-polariton relaxation mechanisms in hybrid organic-inorganic perovskites
Laitz, Madeleine; Kaplan, Alexander EK; Deschamps, Jude; Barotov, Ulugbek; Proppe, Andrew H; García-Benito, Inés; Osherov, Anna; Grancini, Giulia; deQuilettes, Dane W; Nelson, Keith A; Bawendi, Moungi G; Bulović, Vladimir
Hybrid perovskites have emerged as a promising material candidate for exciton-polariton (polariton) optoelectronics. Thermodynamically, low-threshold Bose-Einstein condensation requires efficient scattering to the polariton energy dispersion minimum, and many applications demand precise control of polariton interactions. Thus far, the primary mechanisms by which polaritons relax in perovskites remain unclear. In this work, we perform temperature-dependent measurements of polaritons in low-dimensional perovskite wedged microcavities, achieving a Rabi splitting of ℏΩRabi = 260 ± 5 meV. We change the Hopfield coefficients by moving the optical excitation along the cavity wedge and thus tune the strength of the primary polariton relaxation mechanisms in this material. We observe the polariton bottleneck regime and show that it can be overcome by harnessing the interplay between the different excitonic species whose corresponding dynamics are modified by strong coupling. This work provides an understanding of polariton relaxation in perovskites benefiting from efficient, material-specific relaxation pathways and intracavity pumping schemes from thermally brightened excitonic species.
Springer Science and Business Media LLC
</summary>
<dc:date>2023-04-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>ObsMode2021 Cycle 9 Go/No-Go Report for VLBI Capabilities</title>
<link href="https://hdl.handle.net/1721.1/165018" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, Geoff</name>
</author>
<author>
<name>Goddi, Ciriaco</name>
</author>
<author>
<name>Marti-Vidal, Ivan</name>
</author>
<author>
<name>Titus, Mike</name>
</author>
<author>
<name>Fish, Vincent</name>
</author>
<author>
<name>Wagner, Jan</name>
</author>
<author>
<name>Rottmann, Helge</name>
</author>
<author>
<name>Pridiprihora, Yurii</name>
</author>
<author>
<name>Krichbaum, Thomas</name>
</author>
<author>
<name>Liu, Kuo</name>
</author>
<author>
<name>Kramer, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/165018</id>
<updated>2026-03-05T18:45:41Z</updated>
<published>2021-10-26T00:00:00Z</published>
<summary type="text">ObsMode2021 Cycle 9 Go/No-Go Report for VLBI Capabilities
Matthews, Lynn D.; Crew, Geoff; Goddi, Ciriaco; Marti-Vidal, Ivan; Titus, Mike; Fish, Vincent; Wagner, Jan; Rottmann, Helge; Pridiprihora, Yurii; Krichbaum, Thomas; Liu, Kuo; Kramer, Michael
We present a status summary of the primary Very Long Baseline Interferometry (VLBI) development efforts which are under consideration at ALMA as new offerings in Cycle 9. These activities are being carried out under the ALMA North America Development Project known as the ALMA Phasing Project Phase 2 (APP2). The two VLBI priorities previously identified for Cycle 9 by the ObsMode process are: (1) a submillimeter (Band 7) VLBI observing capability and (2) a prototype spectral line VLBI mode (in Band 3 only). This document provides an overview of the development, testing, and readiness of these capabilities. Updates on other APP2 development efforts of relevance for future cycles, including the phased-array mode offered for the first time in Cycle 8, are also provided.&#13;
Warning: This report contains material that is considered proprietary to the Event Horizon Telescope Collaboration (EHTC). These results may not be publicly posted, cited, or shared in any form. They are presented here with permission from EHTC Management solely for the purpose of allowing an evaluation of ALMA's performance as a VLBI station for 345 GHz (Band 7) VLBI. This report also contains ALMA EOC (test) data which are subject to ALMA's standard policies on the use of such data. We are working to ensure that the EHTC follows the ALMA guidelines (Carpenter et al., 2019) in the appropriate publication and release of data to allow follow-up work for science.
This report was prepared for the formal acceptance of the software required for Cycle 9.
Notionally it is ALMA Technical Note #23, but not published (yet) as such.
</summary>
<dc:date>2021-10-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Self-Directed, Home-Like XR System for Sustained Intangible Cultural Heritage Practice: An Ikebana Case Study</title>
<link href="https://hdl.handle.net/1721.1/165017" rel="alternate"/>
<author>
<name>Wu, Yu</name>
</author>
<author>
<name>Li, Manxueying</name>
</author>
<author>
<name>Mai, Gelei</name>
</author>
<id>https://hdl.handle.net/1721.1/165017</id>
<updated>2026-03-05T06:13:11Z</updated>
<published>2026-02-05T00:00:00Z</published>
<summary type="text">A Self-Directed, Home-Like XR System for Sustained Intangible Cultural Heritage Practice: An Ikebana Case Study
Wu, Yu; Li, Manxueying; Mai, Gelei
Sustained Intangible Cultural Heritage (ICH) practices for novices depend more on curiosity and creative agency than on procedural training. Yet, most extended reality (XR) systems for ICH emphasize guided instruction or exhibitions, limiting self-direction and continuity beyond the device. Using Ikebana as a case study, we present a self-directed, home-like virtual reality (VR) experience built with 3D Gaussian Splatting (3D GS) and natural hand tracking, complemented by an augmented reality (AR) revisiting feature that exports creations for real-world placement and sharing. In a study with 11 novices, pre-post questionnaires showed gains in interest, likelihood to continue offline, and understanding (p ≤ .01). Interviews indicated that domestic realism reduced intimidation, natural gestures supported immersion, and AR revisiting extended reflection and engagement. We contribute (1) a home-like, self-directed XR design for ICH practice and (2) evidence that approachability, autonomy, and cross-reality continuity enhance motivation beyond the virtual world.
VRCAI ’25, Macau, China
</summary>
<dc:date>2026-02-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>CS Ed. in Prisons and Jails: Evidence of Computer Programming Self-Efficacy Growth Across Multiple Course Offerings</title>
<link href="https://hdl.handle.net/1721.1/165016" rel="alternate"/>
<author>
<name>Fishberg, Andrew</name>
</author>
<author>
<name>Gaetz, Marisa</name>
</author>
<author>
<name>Nisser, Martin</name>
</author>
<author>
<name>Cafferty, Carole</name>
</author>
<author>
<name>Perlman, Lee</name>
</author>
<author>
<name>Soicher, Raechel N.</name>
</author>
<author>
<name>Long, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/165016</id>
<updated>2026-03-05T06:13:06Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">CS Ed. in Prisons and Jails: Evidence of Computer Programming Self-Efficacy Growth Across Multiple Course Offerings
Fishberg, Andrew; Gaetz, Marisa; Nisser, Martin; Cafferty, Carole; Perlman, Lee; Soicher, Raechel N.; Long, Joshua
Incarcerated students enrolled in education programs in prisons and jails experience a range of benefits, from reduced recidivism to improved psychosocial well-being. With respect to computer science education, however, still little is known about how courses impact incarcerated students' experiences, though recent work has explored fears and confidence of incarcerated students enrolled in computer science courses. Our work investigates incarcerated students' changes in self-efficacy over multiple iterations of four different classes. Our findings showed that all subscales of computer programming self-efficacy (algorithm, control, cooperation, debugging, and logic), but not generalized self-efficacy, were statistically significantly increased at the end of the courses relative to the beginning (p &lt; 0.001, n = 36). A similar pattern of results across the full sample (n = 188) adds additional support for the veracity of the effects found in the subset of paired data. Additionally, we share students' qualitative data to add nuance to our findings and emphasize the importance of these educational experiences for incarcerated students' personal and professional development.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>HyProf: A Profiler for Programming Students that Offers Hypotheses about Performance Bugs</title>
<link href="https://hdl.handle.net/1721.1/165015" rel="alternate"/>
<author>
<name>Dargan, Hope</name>
</author>
<author>
<name>Hartz, Adam</name>
</author>
<author>
<name>Miller, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/165015</id>
<updated>2026-03-05T06:13:25Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">HyProf: A Profiler for Programming Students that Offers Hypotheses about Performance Bugs
Dargan, Hope; Hartz, Adam; Miller, Robert
Programming students often struggle to find and fix performance bugs in their code. To provide students additional performance debugging support, as well as expose them to profiling tools, we developed Hypothesis Profiler (HyProf). HyProf automatically profiles a slow student submission and produces a profile visualization suitable for learners. In addition to showing individual function and line times, HyProf shows details about the call graph, lines that made recursive calls or did not execute, and hypotheses about possible causes of slow performance, formulated by comparing the slow profile against fast submissions from other students. We deployed HyProf in a 400-student Python course and evaluated it through web logs, office hour observations, and surveys, which showed that 75% of respondents successfully used HyProf to find or fix a performance issue and 85% would recommend it to others.
SIGCSE TS 2026, St. Louis, MO, USA
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>KANELÉ: Kolmogorov–Arnold Networks for Efficient LUT-based Evaluation</title>
<link href="https://hdl.handle.net/1721.1/165014" rel="alternate"/>
<author>
<name>Hoang, Duc</name>
</author>
<author>
<name>Gupta, Aarush</name>
</author>
<author>
<name>Harris, Philip C</name>
</author>
<id>https://hdl.handle.net/1721.1/165014</id>
<updated>2026-03-05T06:13:27Z</updated>
<published>2026-02-21T00:00:00Z</published>
<summary type="text">KANELÉ: Kolmogorov–Arnold Networks for Efficient LUT-based Evaluation
Hoang, Duc; Gupta, Aarush; Harris, Philip C
Low-latency, resource-efficient neural network inference on FPGAs is essential for applications demanding real-time capability and low power. Lookup table (LUT)-based neural networks are a common solution, combining strong representational power with efficient FPGA implementation. In this work, we introduce KANELÉ, a framework that exploits the unique properties of Kolmogorov–Arnold Networks (KANs) for FPGA deployment. Unlike traditional multilayer perceptrons (MLPs), KANs employ learnable one-dimensional splines with fixed domains as edge activations, a structure naturally suited to discretization and efficient LUT mapping. We present the first systematic design flow for implementing KANs on FPGAs, co-optimizing training with quantization and pruning to enable compact, high-throughput, and low-latency KAN architectures. Our results demonstrate up to a 2700x speedup and orders of magnitude resource savings compared to prior KAN-on-FPGA approaches. Moreover, KANELÉ matches or surpasses other LUT-based architectures on widely used benchmarks, particularly for tasks involving symbolic or physical formulas, while balancing resource usage across FPGA hardware. Finally, we showcase the versatility of the framework by extending it to real-time, power-efficient control systems.
FPGA ’26, Seaside, CA, USA
</summary>
<dc:date>2026-02-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Séance: Recounts from designing artificial intelligence for transcendence, interpretive lenses and chance</title>
<link href="https://hdl.handle.net/1721.1/165013" rel="alternate"/>
<author>
<name>Schroeder, Hope</name>
</author>
<author>
<name>Smith, Amy</name>
</author>
<author>
<name>Epstein, Ziv</name>
</author>
<id>https://hdl.handle.net/1721.1/165013</id>
<updated>2026-03-05T06:13:22Z</updated>
<published>2025-07-17T00:00:00Z</published>
<summary type="text">AI Séance: Recounts from designing artificial intelligence for transcendence, interpretive lenses and chance
Schroeder, Hope; Smith, Amy; Epstein, Ziv
As AI becomes a prism through which we reflect, see, and make sense of the world, the way we create creative, transcendent experiences around AI can shape our relationship to it. Drawing inspiration from the ritual structures of Spiritualist séances and creative art-making séances of Hilma af Klint, we present reflections from a series of participatory experiments we called AI Séances. These gatherings brought together artists, technologists, and spiritual practitioners to engage with generative models in contexts shaped by ritual, randomness, and collaborative interpretation. We found that creative production with AI can yield transcendent user experiences (TUX), different communities bring distinct interpretive lenses to AI outputs, and increased technical control can paradoxically diminish serendipity and transcendence. Through our experiences, we suggest that reclaiming interpretive agency over AI outputs in the creative and spiritual context, rather than treating models as machines that produce answers, opens up new avenues for critical and creative engagement with these technologies and is critical to preserving our humanity. The AI Séance offers a model for human-centered interaction with generative systems where magic lies not in the machine’s capabilities, but in our collective ability to create meaning.
</summary>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>2.008 Design and Manufacturing II, Spring 2003</title>
<link href="https://hdl.handle.net/1721.1/165012" rel="alternate"/>
<author>
<name>Dow, David</name>
</author>
<author>
<name>Sachs, Emanuel</name>
</author>
<author>
<name>Chun, Jung-Hoon</name>
</author>
<author>
<name>McAtamney, Patrick</name>
</author>
<author>
<name>Sarma, Sanjay</name>
</author>
<id>https://hdl.handle.net/1721.1/165012</id>
<updated>2026-03-04T18:04:45Z</updated>
<published>2003-01-01T00:00:00Z</published>
<summary type="text">2.008 Design and Manufacturing II, Spring 2003
Dow, David; Sachs, Emanuel; Chun, Jung-Hoon; McAtamney, Patrick; Sarma, Sanjay
Integration of design, engineering, and management disciplines and practices for analysis and design of manufacturing enterprises. Emphasis is on the physics and stochastic nature of manufacturing processes and systems, and their effects on quality, rate, cost, and flexibility. Topics include process physics and control, design for manufacturing, and manufacturing systems. Group project requires design and fabrication of parts using mass-production and assembly methods to produce a product in quantity.
</summary>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>2.008 Design and Manufacturing II, Spring 2004</title>
<link href="https://hdl.handle.net/1721.1/165011" rel="alternate"/>
<author>
<name>Chun, Jung-Hoon</name>
</author>
<author>
<name>Kim, Sang-Gook</name>
</author>
<id>https://hdl.handle.net/1721.1/165011</id>
<updated>2026-03-04T18:05:26Z</updated>
<published>2004-01-01T00:00:00Z</published>
<summary type="text">2.008 Design and Manufacturing II, Spring 2004
Chun, Jung-Hoon; Kim, Sang-Gook
This course introduces you to modern manufacturing with four areas of emphasis: manufacturing processes, equipment/control, systems, and design for manufacturing. The course exposes you to integration of engineering and management disciplines for determining manufacturing rate, cost, quality and flexibility. Topics include process physics, equipment design and automation/control, quality, design for manufacturing, industrial management, and systems design and operation. Labs are integral parts of the course, and expose you to various manufacturing disciplines and practices.
</summary>
<dc:date>2004-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NeuSE: Neural SE(3)-equivariant embedding for long-term object-based simultaneous localization and mapping</title>
<link href="https://hdl.handle.net/1721.1/165010" rel="alternate"/>
<author>
<name>Fu, Jiahui</name>
</author>
<author>
<name>Du, Yilun</name>
</author>
<author>
<name>Singh, Kurran</name>
</author>
<author>
<name>Tenenbaum, Joshua B</name>
</author>
<author>
<name>Leonard, John J</name>
</author>
<id>https://hdl.handle.net/1721.1/165010</id>
<updated>2026-03-05T06:13:16Z</updated>
<published>2026-01-01T00:00:00Z</published>
<summary type="text">NeuSE: Neural SE(3)-equivariant embedding for long-term object-based simultaneous localization and mapping
Fu, Jiahui; Du, Yilun; Singh, Kurran; Tenenbaum, Joshua B; Leonard, John J
We present NeuSE, a novel Neural SE(3)-Equivariant Embedding for objects, and illustrate how it supports object-based Simultaneous Localization and Mapping (SLAM) for consistent spatial understanding with long-term scene changes. NeuSE is a set of latent object embeddings created from partial object observations. It serves as a compact point cloud surrogate for complete object models, encoding the full shape, scale, and transform information about an object. In addition, the inferred latent code is both SE(3) and scale equivariant, enabling strong generalization to objects of both unseen sizes and different SE(3) poses. This makes NeuSE particularly effective in real-world scenarios where objects may vary in size or spatial configuration. With NeuSE, relative frame transforms can be directly derived from inferred latent codes. Our proposed SLAM paradigm, using NeuSE for object shape, size, and pose characterization, can operate independently or in conjunction with typical SLAM systems. It directly infers SE(3) camera pose constraints that are compatible with general SLAM pose graph optimization, while maintaining a lightweight, object-centric map that adapts to real-world changes. Our evaluation is conducted on synthetic and real-world sequences with changes in both controlled and uncontrolled settings, featuring multi-category objects of various shapes and sizes. Our approach demonstrates improved localization capability and change-aware mapping consistency when working either independently or as a complement to common SLAM pipelines.
</summary>
<dc:date>2026-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>TravelAgent: Generative agents in the built environment</title>
<link href="https://hdl.handle.net/1721.1/165008" rel="alternate"/>
<author>
<name>Noyman, Ariel</name>
</author>
<author>
<name>Hu, Kai</name>
</author>
<author>
<name>Larson, Kent</name>
</author>
<id>https://hdl.handle.net/1721.1/165008</id>
<updated>2026-03-05T06:13:07Z</updated>
<published>2026-02-01T00:00:00Z</published>
<summary type="text">TravelAgent: Generative agents in the built environment
Noyman, Ariel; Hu, Kai; Larson, Kent
Understanding human behavior in the built environment is critical for designing highly functional, human-centered urban spaces. Traditional approaches, such as manual observations, surveys, and simple simulations, often struggle to capture the complexity and nuance of real-world human behavior and experience. Here we introduce TravelAgent, a novel agentic simulation platform that models pedestrian navigation, activity, and human-like decision-making in the built environment. TravelAgent is proposed to help design teams and decision-makers understand how different users might experience diverse built environments under varying environmental conditions. TravelAgent integrates Generative Agents, multi-modal sensory inputs, and virtual environments, enabling agents to perceive, navigate, and interact with their surroundings, with tasks ranging from goal-oriented navigation to free exploration. We share analysis from 200 simulations with 3364 decision points and a task completion rate of ∼80%, across diverse spatial layouts and agent archetypes. We present spatial, linguistic, and sentiment analysis, and show how agents react to and experience their surroundings. Finally, we suggest TravelAgent as a new paradigm for designing, simulating, and understanding human experiences in urban environments.
</summary>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cleaning a dark matter detector: A case of ontological and normative elusiveness</title>
<link href="https://hdl.handle.net/1721.1/165007" rel="alternate"/>
<author>
<name>de Swart, Jaco</name>
</author>
<author>
<name>Mol, Annemarie</name>
</author>
<id>https://hdl.handle.net/1721.1/165007</id>
<updated>2026-03-05T06:13:29Z</updated>
<published>2025-08-30T00:00:00Z</published>
<summary type="text">Cleaning a dark matter detector: A case of ontological and normative elusiveness
de Swart, Jaco; Mol, Annemarie
Laboratory sciences crucially depend on the cleanliness of experiments. But what is clean? In this article, we show that the salience of the valuation clean emerges through its relation to a particular ontological repertoire. Our case is the XENONnT experiment in the Gran Sasso Mountains of Italy, designed to detect dark matter in the form of hypothetical WIMPs (Weakly Interacting Massive Particles). In this experiment, dirt presents a significant disruption, as contaminations can mimic the signals of WIMPs, and electronegative molecules risk erasing such signals. The idiosyncratic cleanliness required makes the practice of cleaning the XENONnT detector exceedingly difficult. So far, the ontological question ‘do WIMPs exist?’ remains open, which means that the normative question ‘is the detector clean enough?’ cannot be answered either. In addition, more cleaning will make the detector sensitive to a background of unremovable neutrinos—hence irredeemably dirty. With the normative goal of a ‘clean detector’ out of reach, the ontological question ‘do WIMPs exist?’ is bound to remain open as well. Alternative experiments therefore hunt for different hypothetical dark matter candidates, with different equipment, requiring different kinds of cleanliness. At the same time, the XENONnT experiment must navigate tensions between its own cleanliness goals and rules meant to ensure the environmental cleanliness of the Gran Sasso National Park. Cleaning turns out to be dirty. This leads us to ask: Which goods deserve to be cherished, and, intertwined with that, which realities deserve to be cared for?
</summary>
<dc:date>2025-08-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-fidelity reinforcement learning for time-optimal quadrotor re-planning</title>
<link href="https://hdl.handle.net/1721.1/165006" rel="alternate"/>
<author>
<name>Ryou, Gilhyun</name>
</author>
<author>
<name>Wang, Geoffrey</name>
</author>
<author>
<name>Karaman, Sertac</name>
</author>
<id>https://hdl.handle.net/1721.1/165006</id>
<updated>2026-03-05T06:12:59Z</updated>
<published>2025-08-22T00:00:00Z</published>
<summary type="text">Multi-fidelity reinforcement learning for time-optimal quadrotor re-planning
Ryou, Gilhyun; Wang, Geoffrey; Karaman, Sertac
High-speed online trajectory planning for UAVs poses a significant challenge due to the need for precise modeling of complex dynamics while also being constrained by computational limitations. This paper presents a multi-fidelity reinforcement learning method (MFRL) that aims to effectively create a realistic dynamics model and simultaneously train a planning policy that can be readily deployed in real-time applications. The proposed method involves the co-training of a planning policy and a reward estimator; the latter predicts the performance of the policy’s output and is trained efficiently through multi-fidelity Bayesian optimization. This optimization approach models the correlation between different fidelity levels, thereby constructing a high-fidelity model based on a low-fidelity foundation, which enables the accurate development of the reward model with limited high-fidelity experiments. The framework is further extended to include real-world flight experiments in reinforcement learning training, allowing the reward model to precisely reflect real-world constraints and broadening the policy’s applicability to real-world scenarios. We present rigorous evaluations by training and testing the planning policy in both simulated and real-world environments. The resulting trained policy not only generates faster and more reliable trajectories compared to the baseline snap minimization method, but it also achieves trajectory updates in 2 ms on average, while the baseline method takes several minutes.
</summary>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Distribution Network Tariffs in the US with an Application to Increased Electric Vehicle Adoption</title>
<link href="https://hdl.handle.net/1721.1/165005" rel="alternate"/>
<author>
<name>Turk, Graham</name>
</author>
<author>
<name>Schittekatte, Tim</name>
</author>
<author>
<name>Duenas-Martinez, Pablo</name>
</author>
<author>
<name>Joskow, Paul L</name>
</author>
<author>
<name>Schmalensee, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/165005</id>
<updated>2026-03-05T06:13:23Z</updated>
<published>2025-11-01T00:00:00Z</published>
<summary type="text">Designing Distribution Network Tariffs in the US with an Application to Increased Electric Vehicle Adoption
Turk, Graham; Schittekatte, Tim; Duenas-Martinez, Pablo; Joskow, Paul L; Schmalensee, Richard
Time-of-use (TOU) tariffs that vary the cost per kWh to reflect wide variations in generation and wholesale market costs give incentives to shift all electric vehicle (EV) charging to low-price periods. As EV penetration increases, such tariffs would substantially raise the local kW demand in those low-priced periods, which eventually would lead to increasing network expansion costs. A straightforward way to mitigate this problem is to separate energy charges from network charges, with appropriate rate designs for each. This paper uses a realistic case study to investigate the implications of combining TOU energy charges with various network tariff designs in the face of increased EV penetration. Our results provide support for the adoption in the US of ex-ante subscribed capacity tariffs (subscription charges), which give consumers incentives to reduce their peak kW demands. Reducing costs of EV ownership (a priority for many US states) need not be pursued at the expense of broader affordability goals.
JEL classification: L51, L94, L97, Q41, D40
</summary>
<dc:date>2025-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geopolitical ecologies of cloud capitalism: Territorial restructuring and the making of national computing power in the U.S. and China</title>
<link href="https://hdl.handle.net/1721.1/165004" rel="alternate"/>
<author>
<name>Kollar, Justin</name>
</author>
<author>
<name>Stokols, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/165004</id>
<updated>2026-03-05T06:13:28Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">Geopolitical ecologies of cloud capitalism: Territorial restructuring and the making of national computing power in the U.S. and China
Kollar, Justin; Stokols, Andrew
As computing power becomes central to geopolitical rivalry, cloud infrastructure is increasingly framed as critical to national security, economic resilience and technological sovereignty. Current debates often focus on global competition – especially between the U.S. and China – highlighting strategic investments, export controls and infrastructure diplomacy abroad. Yet far less attention has been paid to the domestic territorial transformations that make such geopolitical projection possible. This paper argues that national strategies for AI and cloud dominance depend on the reorganization of land, energy and regulatory systems to sustain large-scale computation. Using a geopolitical ecology framework, we examine how the U.S. and China build national computing power as a strategic economic and military resource. In the U.S., cloud firms operate as state-aligned actors, drawing on fragmented regulatory authority, public subsidies and national security discourse to expand into rural and peri-urban regions. China pursues a more centralized strategy through its East Data, West Computing initiative, redistributing infrastructure to inland provinces under state-led development goals. Through comparative regional analysis, we show how domestic infrastructural expansion underpins geopolitical rivalry, producing new forms of territorial governance and socio-environmental inequality. Far from immaterial, the cloud is grounded in enclosure, extraction and the spatial foundations of techno-industrial power.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unlikely Organizers: The Rise of Tech Worker Labor Activism</title>
<link href="https://hdl.handle.net/1721.1/165003" rel="alternate"/>
<author>
<name>Tan, JS</name>
</author>
<author>
<name>Luka, Natalia</name>
</author>
<author>
<name>Mazo, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/165003</id>
<updated>2026-03-05T06:13:09Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">Unlikely Organizers: The Rise of Tech Worker Labor Activism
Tan, JS; Luka, Natalia; Mazo, Emily
Tech workers—professionals in the technology industry, such as software engineers, product managers, and UX designers—are not normally associated with labor activism. Yet, since 2017, there has been a significant rise in workplace activism over “bread-and-butter” issues among this group. Using an original data set, the authors demonstrate how, in the case of tech workers, periods of intense workplace social activism preceded later periods of heightened labor activism. Regression analysis confirms that participation in social activism increases the likelihood of labor activism six months to one year later at the same company. Extending Rick Fantasia’s cultures of solidarity to professional workers, the authors highlight a new mechanism by which professionals engage in labor organizing: First, tech workers, guided by their professional interest in socially beneficial work, engage in workplace social activism. This action generates solidarity among employee-participants but also creates conflict with management and leads to the emergence of labor activism among professionals.
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast detection of liver fibrosis with collagen-binding single-nanometer iron oxide nanoparticles via T1-weighted MRI</title>
<link href="https://hdl.handle.net/1721.1/165002" rel="alternate"/>
<author>
<name>Zhang, Juanye</name>
</author>
<author>
<name>Ning, Yingying</name>
</author>
<author>
<name>Zhu, Hua</name>
</author>
<author>
<name>Rotile, Nicholas J</name>
</author>
<author>
<name>Wei, He</name>
</author>
<author>
<name>Diyabalanage, Himashinie</name>
</author>
<author>
<name>Hansen, Eric C</name>
</author>
<author>
<name>Zhou, Iris Y</name>
</author>
<author>
<name>Barrett, Stephen C</name>
</author>
<author>
<name>Sojoodi, Mozhdeh</name>
</author>
<author>
<name>Tanabe, Kenneth K</name>
</author>
<author>
<name>Humblet, Valerie</name>
</author>
<author>
<name>Jasanoff, Alan</name>
</author>
<author>
<name>Caravan, Peter</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/165002</id>
<updated>2026-03-04T03:07:55Z</updated>
<published>2023-04-24T00:00:00Z</published>
<summary type="text">Fast detection of liver fibrosis with collagen-binding single-nanometer iron oxide nanoparticles via T1-weighted MRI
Zhang, Juanye; Ning, Yingying; Zhu, Hua; Rotile, Nicholas J; Wei, He; Diyabalanage, Himashinie; Hansen, Eric C; Zhou, Iris Y; Barrett, Stephen C; Sojoodi, Mozhdeh; Tanabe, Kenneth K; Humblet, Valerie; Jasanoff, Alan; Caravan, Peter; Bawendi, Moungi G
SNIO–CBP, a single-nanometer iron oxide (SNIO) nanoparticle functionalized with a type I collagen-binding peptide (CBP), was developed as a T1-weighted MRI contrast agent with only endogenous elements for fast and noninvasive detection of liver fibrosis. SNIO–CBP exhibits 6.7-fold higher relaxivity compared to a molecular gadolinium-based collagen-binding contrast agent CM-101 on a per CBP basis at 4.7 T. Unlike most iron oxide nanoparticles, SNIO–CBP exhibits fast elimination from the bloodstream with a 5.7 min half-life, high renal clearance, and low, transient liver enhancement in healthy mice. We show that a dose of SNIO–CBP that is 2.5-fold lower than that for CM-101 has comparable imaging efficacy in rapid (within 15 min following intravenous injection) detection of hepatotoxin-induced liver fibrosis using T1-weighted MRI in a carbon tetrachloride–induced mouse liver injury model. We further demonstrate the applicability of SNIO–CBP in detecting liver fibrosis in a choline-deficient L-amino acid-defined high-fat diet mouse model of nonalcoholic steatohepatitis. These results provide a platform with potential for the development of high relaxivity, gadolinium-free molecular MRI probes for characterizing chronic liver disease.
</summary>
<dc:date>2023-04-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Shell in a Shell: Engineering Colloidal Nanocrystals for a High-Intensity Excitation Regime</title>
<link href="https://hdl.handle.net/1721.1/165001" rel="alternate"/>
<author>
<name>Harankahage, Dulanjan</name>
</author>
<author>
<name>Cassidy, James</name>
</author>
<author>
<name>Beavon, Jacob</name>
</author>
<author>
<name>Huang, Jiamin</name>
</author>
<author>
<name>Brown, Niamh</name>
</author>
<author>
<name>Berkinsky, David B</name>
</author>
<author>
<name>Marder, Andrew</name>
</author>
<author>
<name>Kayira, Barbra</name>
</author>
<author>
<name>Montemurri, Michael</name>
</author>
<author>
<name>Anzenbacher, Pavel</name>
</author>
<author>
<name>Schaller, Richard D</name>
</author>
<author>
<name>Sun, Liangfeng</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Malko, Anton V</name>
</author>
<author>
<name>Diroll, Benjamin T</name>
</author>
<author>
<name>Zamkov, Mikhail</name>
</author>
<id>https://hdl.handle.net/1721.1/165001</id>
<updated>2026-03-04T03:07:49Z</updated>
<published>2023-06-06T00:00:00Z</published>
<summary type="text">Quantum Shell in a Shell: Engineering Colloidal Nanocrystals for a High-Intensity Excitation Regime
Harankahage, Dulanjan; Cassidy, James; Beavon, Jacob; Huang, Jiamin; Brown, Niamh; Berkinsky, David B; Marder, Andrew; Kayira, Barbra; Montemurri, Michael; Anzenbacher, Pavel; Schaller, Richard D; Sun, Liangfeng; Bawendi, Moungi G; Malko, Anton V; Diroll, Benjamin T; Zamkov, Mikhail
Many optoelectronic processes in colloidal semiconductor nanocrystals (NCs) suffer an efficiency decline under high-intensity excitation. This issue is caused by Auger recombination of multiple excitons, which converts the NC energy into excess heat, reducing the efficiency and life span of NC-based devices, including photodetectors, X-ray scintillators, lasers, and high-brightness light-emitting diodes (LEDs). Recently, semiconductor quantum shells (QSs) have emerged as a promising NC geometry for the suppression of Auger decay; however, their optoelectronic performance has been hindered by surface-related carrier losses. Here, we address this issue by introducing quantum shells with a CdS-CdSe-CdS-ZnS core-shell-shell-shell multilayer structure. The ZnS barrier inhibits the surface carrier decay, which increases the photoluminescence (PL) quantum yield (QY) to 90% while retaining a high biexciton emission QY of 79%. The improved QS morphology allows demonstrating one of the longest Auger lifetimes reported for colloidal NCs to date. The reduction of nonradiative losses in QSs also leads to suppressed blinking in single nanoparticles and low-threshold amplified spontaneous emission. We expect that ZnS-encapsulated quantum shells will benefit many applications exploiting high-power optical or electrical excitation regimes.
</summary>
<dc:date>2023-06-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis of Zwitterionic CsPbBr3 Nanocrystals with Controlled Anisotropy using Surface-Selective Ligand Pairs</title>
<link href="https://hdl.handle.net/1721.1/165000" rel="alternate"/>
<author>
<name>Zhu, Hua</name>
</author>
<author>
<name>Kick, Matthias</name>
</author>
<author>
<name>Ginterseder, Matthias</name>
</author>
<author>
<name>Krajewska, Chantalle J</name>
</author>
<author>
<name>Šverko, Tara</name>
</author>
<author>
<name>Li, Ruipeng</name>
</author>
<author>
<name>Lu, Yongli</name>
</author>
<author>
<name>Shih, Meng‐Chen</name>
</author>
<author>
<name>Van Voorhis, Troy</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/165000</id>
<updated>2026-03-04T03:07:31Z</updated>
<published>2023-07-24T00:00:00Z</published>
<summary type="text">Synthesis of Zwitterionic CsPbBr3 Nanocrystals with Controlled Anisotropy using Surface-Selective Ligand Pairs
Zhu, Hua; Kick, Matthias; Ginterseder, Matthias; Krajewska, Chantalle J; Šverko, Tara; Li, Ruipeng; Lu, Yongli; Shih, Meng‐Chen; Van Voorhis, Troy; Bawendi, Moungi G
Mechanistic studies of the morphology of lead halide perovskite nanocrystals (LHP‐NCs) are hampered by a lack of generalizable suitable synthetic strategies and ligand systems. Here, the synthesis of zwitterionic CsPbBr₃ NCs with controlled anisotropy is presented, using a proposed “surface‐selective ligand pairs” strategy. Such a strategy provides a platform to systematically study the binding affinity of capping ligand pairs and the resulting LHP morphologies. By using zwitterionic ligands (ZwL) with varying structures, majority ZwL‐capped LHP NCs with controlled morphology are obtained, including anisotropic nanoplatelets and nanorods, for the first time. Combining experiments with density functional theory calculations, factors that govern the ligand binding on the different surface facets of LHP‐NCs are revealed, including the steric bulkiness of the ligand, the number of binding sites, and the charge distance between binding moieties. This study provides guidance for the further exploration of anisotropic LHP‐NCs.
</summary>
<dc:date>2023-07-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theory of Photoluminescence Spectral Line Shapes of Semiconductor Nanocrystals</title>
<link href="https://hdl.handle.net/1721.1/164999" rel="alternate"/>
<author>
<name>Lin, Kailai</name>
</author>
<author>
<name>Jasrasaria, Dipti</name>
</author>
<author>
<name>Yoo, Jason J</name>
</author>
<author>
<name>Bawendi, Moungi</name>
</author>
<author>
<name>Utzat, Hendrik</name>
</author>
<author>
<name>Rabani, Eran</name>
</author>
<id>https://hdl.handle.net/1721.1/164999</id>
<updated>2026-03-04T03:07:30Z</updated>
<published>2023-08-08T00:00:00Z</published>
<summary type="text">Theory of Photoluminescence Spectral Line Shapes of Semiconductor Nanocrystals
Lin, Kailai; Jasrasaria, Dipti; Yoo, Jason J; Bawendi, Moungi; Utzat, Hendrik; Rabani, Eran
Single-molecule photoluminescence (PL) spectroscopy of semiconductor nanocrystals (NCs) reveals the nature of exciton-phonon interactions in NCs. Understanding the homogeneous spectral line shapes and their temperature dependence remains an open problem. Here, we develop an atomistic model to describe the PL spectrum of NCs, accounting for excitonic effects, phonon dispersion relations, and exciton-phonon couplings. We validate our model using single-NC measurements on CdSe/CdS NCs from T = 4 to 290 K, and we find that the slightly asymmetric main peak at low temperatures comprises a narrow zero-phonon line (ZPL) and acoustic phonon sidebands. Furthermore, we identify the specific phonon modes that give rise to the optical phonon sidebands. At temperatures above 200 K, the spectral line width shows a stronger dependence upon the temperature, which we demonstrate to be correlated with higher-order exciton-phonon couplings. We also identify the line width dependence upon reorganization energy, NC core sizes, and shell thicknesses.
</summary>
<dc:date>2023-08-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision</title>
<link href="https://hdl.handle.net/1721.1/164998" rel="alternate"/>
<author>
<name>Chen, Chi</name>
</author>
<author>
<name>Luo, Xin</name>
</author>
<author>
<name>Kaplan, Alexander EK</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Macfarlane, Robert J</name>
</author>
<author>
<name>Bathe, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/164998</id>
<updated>2026-03-04T03:07:50Z</updated>
<published>2023-08-11T00:00:00Z</published>
<summary type="text">Ultrafast dense DNA functionalization of quantum dots and rods for scalable 2D array fabrication with nanoscale precision
Chen, Chi; Luo, Xin; Kaplan, Alexander EK; Bawendi, Moungi G; Macfarlane, Robert J; Bathe, Mark
Scalable fabrication of two-dimensional (2D) arrays of quantum dots (QDs) and quantum rods (QRs) with nanoscale precision is required for numerous device applications. However, self-assembly–based fabrication of such arrays using DNA origami typically suffers from low yield due to inefficient QD and QR DNA functionalization. In addition, it is challenging to organize solution-assembled DNA origami arrays on 2D device substrates while maintaining their structural fidelity. Here, we reduced manufacturing time from a few days to a few minutes by preparing high-density DNA-conjugated QDs/QRs from organic solution using a dehydration and rehydration process. We used a surface-assisted large-scale assembly (SALSA) method to construct 2D origami lattices directly on solid substrates to template QD and QR 2D arrays with orientational control, with overall loading yields exceeding 90%. Our fabrication approach enables the scalable, high fidelity manufacturing of 2D addressable QDs and QRs with nanoscale orientational and spacing control for functional 2D photonic devices.
</summary>
<dc:date>2023-08-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rational Design of a Chemical Bath Deposition Based Tin Oxide Electron‐Transport Layer for Perovskite Photovoltaics</title>
<link href="https://hdl.handle.net/1721.1/164997" rel="alternate"/>
<author>
<name>Lu, Yongli</name>
</author>
<author>
<name>Shih, Meng‐Chen</name>
</author>
<author>
<name>Tan, Shaun</name>
</author>
<author>
<name>Grotevent, Matthias J</name>
</author>
<author>
<name>Wang, Lili</name>
</author>
<author>
<name>Zhu, Hua</name>
</author>
<author>
<name>Zhang, Ruiqi</name>
</author>
<author>
<name>Lee, Joo‐Hong</name>
</author>
<author>
<name>Lee, Jin‐Wook</name>
</author>
<author>
<name>Bulović, Vladimir</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/164997</id>
<updated>2026-03-04T03:07:44Z</updated>
<published>2023-07-18T00:00:00Z</published>
<summary type="text">Rational Design of a Chemical Bath Deposition Based Tin Oxide Electron‐Transport Layer for Perovskite Photovoltaics
Lu, Yongli; Shih, Meng‐Chen; Tan, Shaun; Grotevent, Matthias J; Wang, Lili; Zhu, Hua; Zhang, Ruiqi; Lee, Joo‐Hong; Lee, Jin‐Wook; Bulović, Vladimir; Bawendi, Moungi G
Chemical bath deposition (CBD) is widely used to deposit tin oxide (SnOx) as an electron-transport layer in perovskite solar cells (PSCs). The conventional recipe uses thioglycolic acid (TGA) to facilitate attachments of SnOx particles onto the substrate. However, nonvolatile TGA is reported to harm the operational stability of PSCs. In this work, a volatile oxalic acid (OA) is introduced as an alternative to TGA. OA, a dicarboxylic acid, functions as a chemical linker for the nucleation and attachment of particles to the substrate in the chemical bath. Moreover, OA can be readily removed through thermal annealing followed by a mild H2O2 treatment, as shown by FTIR measurements. Synergistically, the mild H2O2 treatment selectively oxidizes the surface of the SnOx layer, minimizing nonradiative interface carrier recombination. EELS (electron-energy-loss spectroscopy) confirms that the SnOx surface is dominated by Sn4+, while the bulk is a mixture of Sn2+ and Sn4+. This rational design of a CBD SnOx layer leads to devices with T85 ≈1500 h, a significant improvement over the TGA-based device with T80 ≈250 h. The champion device reached a power conversion efficiency of 24.6%. This work offers a rationale for optimizing the complex parameter space of CBD SnOx to achieve efficient and stable PSCs.
</summary>
<dc:date>2023-07-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced recombination via tunable surface fields in perovskite thin films</title>
<link href="https://hdl.handle.net/1721.1/164996" rel="alternate"/>
<author>
<name>deQuilettes, Dane W</name>
</author>
<author>
<name>Yoo, Jason J</name>
</author>
<author>
<name>Brenes, Roberto</name>
</author>
<author>
<name>Kosasih, Felix Utama</name>
</author>
<author>
<name>Laitz, Madeleine</name>
</author>
<author>
<name>Dou, Benjia Dak</name>
</author>
<author>
<name>Graham, Daniel J</name>
</author>
<author>
<name>Ho, Kevin</name>
</author>
<author>
<name>Shi, Yangwei</name>
</author>
<author>
<name>Shin, Seong Sik</name>
</author>
<author>
<name>Ducati, Caterina</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Bulović, Vladimir</name>
</author>
<id>https://hdl.handle.net/1721.1/164996</id>
<updated>2026-03-04T03:07:54Z</updated>
<published>2024-02-28T00:00:00Z</published>
<summary type="text">Reduced recombination via tunable surface fields in perovskite thin films
deQuilettes, Dane W; Yoo, Jason J; Brenes, Roberto; Kosasih, Felix Utama; Laitz, Madeleine; Dou, Benjia Dak; Graham, Daniel J; Ho, Kevin; Shi, Yangwei; Shin, Seong Sik; Ducati, Caterina; Bawendi, Moungi G; Bulović, Vladimir
The ability to reduce energy loss at semiconductor surfaces through passivation or surface field engineering is an essential step in the manufacturing of efficient photovoltaic (PV) and optoelectronic devices. Similarly, surface modification of emerging halide perovskites with quasi-two-dimensional (2D) heterostructures is now ubiquitous to achieve PV power conversion efficiencies (PCEs) &gt;25%, yet a fundamental understanding of how these treatments function is still generally lacking. Here we use a unique combination of depth-sensitive nanoscale characterization techniques to uncover a tunable passivation strategy and mechanism found in perovskite PV devices that were the first to reach the &gt;25% PCE milestone. Namely, treatment with hexylammonium bromide leads to the simultaneous formation of an iodide-rich 2D layer along with a Br halide gradient that extends from defective surfaces and grain boundaries into the bulk three-dimensional (3D) layer. This interface can be optimized to extend the charge carrier lifetime to record values &gt;30 μs and to reduce interfacial recombination velocities to values as low as &lt;7 cm s⁻¹.
</summary>
<dc:date>2024-02-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solution-phase sample-averaged single-particle spectroscopy of quantum emitters with femtosecond resolution</title>
<link href="https://hdl.handle.net/1721.1/164995" rel="alternate"/>
<author>
<name>Shi, Jiaojian</name>
</author>
<author>
<name>Shen, Yuejun</name>
</author>
<author>
<name>Pan, Feng</name>
</author>
<author>
<name>Sun, Weiwei</name>
</author>
<author>
<name>Mangu, Anudeep</name>
</author>
<author>
<name>Shi, Cindy</name>
</author>
<author>
<name>McKeown-Green, Amy</name>
</author>
<author>
<name>Moradifar, Parivash</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Moerner, WE</name>
</author>
<author>
<name>Dionne, Jennifer A</name>
</author>
<author>
<name>Liu, Fang</name>
</author>
<author>
<name>Lindenberg, Aaron M</name>
</author>
<id>https://hdl.handle.net/1721.1/164995</id>
<updated>2026-03-04T03:07:48Z</updated>
<published>2024-04-08T00:00:00Z</published>
<summary type="text">Solution-phase sample-averaged single-particle spectroscopy of quantum emitters with femtosecond resolution
Shi, Jiaojian; Shen, Yuejun; Pan, Feng; Sun, Weiwei; Mangu, Anudeep; Shi, Cindy; McKeown-Green, Amy; Moradifar, Parivash; Bawendi, Moungi G; Moerner, WE; Dionne, Jennifer A; Liu, Fang; Lindenberg, Aaron M
The development of many quantum optical technologies depends on the availability of single quantum emitters with near-perfect coherence. Systematic improvement is limited by a lack of understanding of the microscopic energy flow at the single-emitter level and ultrafast timescales. Here we utilize a combination of fluorescence correlation spectroscopy and ultrafast spectroscopy to capture the sample-averaged dynamics of defects with single-particle sensitivity. We employ this approach to study heterogeneous emitters in two-dimensional hexagonal boron nitride. From milliseconds to nanoseconds, the translational, shelving, rotational and antibunching features are disentangled in time, which quantifies the normalized two-photon emission quantum yield. Leveraging the femtosecond resolution of this technique, we visualize electron–phonon coupling and discover the acceleration of polaronic formation on multi-electron excitation. Corroborated with theory, this translates to the photon fidelity characterization of cascaded emission efficiency and decoherence time. Our work provides a framework for ultrafast spectroscopy in heterogeneous emitters, opening new avenues of extreme-scale characterization for quantum applications.
</summary>
<dc:date>2024-04-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Additive‐Free Oxidized Spiro‐MeOTAD Hole Transport Layer Significantly Improves Thermal Solar Cell Stability</title>
<link href="https://hdl.handle.net/1721.1/164994" rel="alternate"/>
<author>
<name>Grotevent, Matthias J</name>
</author>
<author>
<name>Lu, Yongli</name>
</author>
<author>
<name>Šverko, Tara</name>
</author>
<author>
<name>Shih, Meng‐Chen</name>
</author>
<author>
<name>Tan, Shaun</name>
</author>
<author>
<name>Zhu, Hua</name>
</author>
<author>
<name>Dang, Tong</name>
</author>
<author>
<name>Mwaura, Jeremiah K</name>
</author>
<author>
<name>Swartwout, Richard</name>
</author>
<author>
<name>Beiglböck, Finn</name>
</author>
<author>
<name>Kothe, Linda</name>
</author>
<author>
<name>Bulović, Vladimir</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/164994</id>
<updated>2026-03-04T03:07:51Z</updated>
<published>2024-06-06T00:00:00Z</published>
<summary type="text">Additive‐Free Oxidized Spiro‐MeOTAD Hole Transport Layer Significantly Improves Thermal Solar Cell Stability
Grotevent, Matthias J; Lu, Yongli; Šverko, Tara; Shih, Meng‐Chen; Tan, Shaun; Zhu, Hua; Dang, Tong; Mwaura, Jeremiah K; Swartwout, Richard; Beiglböck, Finn; Kothe, Linda; Bulović, Vladimir; Bawendi, Moungi G
Perovskite solar cells are among the most promising new solar technologies, already surpassing polycrystalline silicon solar cell efficiencies. The stability of the highest efficiency devices at elevated temperature is, however, poor. These cells typically use Spiro‐MeOTAD as the hole transporting layer. It is generally believed that additives, required for enhancing electrical conductivity and optimizing energy level alignment, are responsible for the reduced stability—inferring that Spiro‐MeOTAD based hole transporting layers are intrinsically unstable. Here, a reliable noble metal free synthesis of Spiro‐MeOTAD (bis(trifluoromethane)sulfonimide)&lt;jats:sub&gt;4&lt;/jats:sub&gt; is presented which is used as the oxidizing agent. No additives are added to the partially oxidized Spiro‐MeOTAD hole‐transporting layer. Device efficiencies up to 24.2% are achieved. Electrical conductivity is largely developed by the first 1% oxidation. Further oxidation shifts the energy levels away from the vacuum level, which allows tuning of the energy level alignment without the use of additives—contradicting the current understanding of this system. Without additives, devices demonstrate significant improvement in stability at elevated temperatures up to 85 °C under one sun over 1400 h continuous illumination. The remaining degradation is pinpointed to ion migration and reactions in the perovskite layer which may be further suppressed with compositional engineering and adequate ion barrier layers.
</summary>
<dc:date>2024-06-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bright and Fast Emission from Robust Supramolecular J-Aggregate Nanostructures through Silica-Encapsulation</title>
<link href="https://hdl.handle.net/1721.1/164993" rel="alternate"/>
<author>
<name>Thanippuli Arachchi, Dimuthu H</name>
</author>
<author>
<name>Barotov, Ulugbek</name>
</author>
<author>
<name>Perkinson, Collin F</name>
</author>
<author>
<name>Šverko, Tara</name>
</author>
<author>
<name>Kaplan, Alexander EK</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/164993</id>
<updated>2026-03-04T03:07:46Z</updated>
<published>2024-07-24T00:00:00Z</published>
<summary type="text">Bright and Fast Emission from Robust Supramolecular J-Aggregate Nanostructures through Silica-Encapsulation
Thanippuli Arachchi, Dimuthu H; Barotov, Ulugbek; Perkinson, Collin F; Šverko, Tara; Kaplan, Alexander EK; Bawendi, Moungi G
We introduce a two-step silica-encapsulation procedure to optimize both the optical efficiency and structural robustness of 5,5',6,6'-tetrachloro-1,1'-diethyl-3,3'-di(4-sulfobutyl)-benzimidazolocarbocyanine (TDBC), a two-dimensional sheet-like J-aggregate. We report a fluorescence quantum yield of ∼98%, the highest quantum yield recorded for any J-aggregate structure at room temperature, and a fast, emissive lifetime of 234 ps. Silica, as an encapsulating matrix, provides optical transparency, chemical inertness, and robustness to dilution, while rigidifying the J-aggregate structure. Our in situ encapsulation process preserves the excitonic structure in TDBC J-aggregates, maintaining their light absorption and emission properties. The homogeneous silica coating has an average thickness of 0.5-1 nm around J-aggregate sheets. Silica encapsulation permits extensive dilutions of J-aggregates without significant disintegration into monomers. The narrow absorbance and emission line widths exhibit further narrowing upon cooling to 79 K, which is consistent with J-type coupling in the encapsulated aggregates. This silica TDBC J-aggregate construct signifies (1) a bright, fast, and robust fluorophore system, (2) a platform for further manipulation of J-aggregates as building blocks for integration with other optical materials and structures, and (3) a system for fundamental studies of exciton delocalization, transport, and emission dynamics within a rigid matrix.
</summary>
<dc:date>2024-07-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward biophysical markers of depression vulnerability</title>
<link href="https://hdl.handle.net/1721.1/164992" rel="alternate"/>
<author>
<name>Pinotsis, DA</name>
</author>
<author>
<name>Fitzgerald, S</name>
</author>
<author>
<name>See, C</name>
</author>
<author>
<name>Sementsova, A</name>
</author>
<author>
<name>Widge, AS</name>
</author>
<id>https://hdl.handle.net/1721.1/164992</id>
<updated>2026-03-04T03:07:22Z</updated>
<published>2022-10-18T00:00:00Z</published>
<summary type="text">Toward biophysical markers of depression vulnerability
Pinotsis, DA; Fitzgerald, S; See, C; Sementsova, A; Widge, AS
A major difficulty with treating psychiatric disorders is their heterogeneity: different neural causes can lead to the same phenotype. To address this, we propose describing the underlying pathophysiology in terms of interpretable, biophysical parameters of a neural model derived from the electroencephalogram. We analyzed data from a small cohort of patients with depression and controls. Using dynamic causal modeling (DCM), we constructed biophysical models that describe neural dynamics in a cortical network activated during a task that is used to assess depression state. We show that biophysical model parameters are biomarkers, that is, variables that allow subtyping of depression at a biological level. They yield a low-dimensional, interpretable feature space that allowed description of differences between individual patients with depressive symptoms. They could capture internal heterogeneity/variance of depression state and achieve significantly better classification than commonly used EEG features. Our work is a proof of concept that a combination of biophysical models and machine learning may outperform earlier approaches based on classical statistics and raw brain data.
</summary>
<dc:date>2022-10-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks</title>
<link href="https://hdl.handle.net/1721.1/164991" rel="alternate"/>
<author>
<name>Karantzas, Nikos</name>
</author>
<author>
<name>Besier, Emma</name>
</author>
<author>
<name>Ortega Caro, Josue</name>
</author>
<author>
<name>Pitkow, Xaq</name>
</author>
<author>
<name>Tolias, Andreas S.</name>
</author>
<author>
<name>Patel, Ankit B.</name>
</author>
<author>
<name>Anselmi, Fabio</name>
</author>
<id>https://hdl.handle.net/1721.1/164991</id>
<updated>2026-03-04T03:07:37Z</updated>
<published>2022-07-12T00:00:00Z</published>
<summary type="text">Understanding Robustness and Generalization of Artificial Neural Networks Through Fourier Masks
Karantzas, Nikos; Besier, Emma; Ortega Caro, Josue; Pitkow, Xaq; Tolias, Andreas S.; Patel, Ankit B.; Anselmi, Fabio
Despite the enormous success of artificial neural networks (ANNs) in many disciplines, the characterization of their computations and the origin of key properties such as generalization and robustness remain open questions. Recent literature suggests that robust networks with good generalization properties tend to be biased toward processing low frequencies in images. To explore the frequency bias hypothesis further, we develop an algorithm that allows us to learn modulatory masks highlighting the essential input frequencies needed for preserving a trained network's performance. We achieve this by imposing invariance in the loss with respect to such modulations in the input frequencies. We first use our method to test the low-frequency preference hypothesis of adversarially trained or data-augmented networks. Our results suggest that adversarially robust networks indeed exhibit a low-frequency bias but we find this bias is also dependent on directions in frequency space. However, this is not necessarily true for other types of data augmentation. Our results also indicate that the essential frequencies in question are effectively the ones used to achieve generalization in the first place. Surprisingly, images seen through these modulatory masks are not recognizable and resemble texture-like patterns.
</summary>
<dc:date>2022-07-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancement of Cyanobacterial Bloom Monitoring in Lake Taihu Using Dual Red-Edge Bands of GF-6/WFV: Multi-Dimensional Feature Combination and Extraction Accuracy Analysis</title>
<link href="https://hdl.handle.net/1721.1/164990" rel="alternate"/>
<author>
<name>Sun, Yunxiao</name>
</author>
<author>
<name>Zhang, Ruolin</name>
</author>
<author>
<name>Zhao, Chunhong</name>
</author>
<author>
<name>Meng, Qingyan</name>
</author>
<author>
<name>Sun, Zhenhui</name>
</author>
<author>
<name>Wang, Jialong</name>
</author>
<author>
<name>Wu, Jun</name>
</author>
<author>
<name>Wang, Yao</name>
</author>
<author>
<name>Gao, Decai</name>
</author>
<author>
<name>Guan, Huyi</name>
</author>
<id>https://hdl.handle.net/1721.1/164990</id>
<updated>2026-03-04T03:07:41Z</updated>
<published>2026-02-20T00:00:00Z</published>
<summary type="text">Enhancement of Cyanobacterial Bloom Monitoring in Lake Taihu Using Dual Red-Edge Bands of GF-6/WFV: Multi-Dimensional Feature Combination and Extraction Accuracy Analysis
Sun, Yunxiao; Zhang, Ruolin; Zhao, Chunhong; Meng, Qingyan; Sun, Zhenhui; Wang, Jialong; Wu, Jun; Wang, Yao; Gao, Decai; Guan, Huyi
Cyanobacterial blooms pose a serious threat to freshwater ecosystems, necessitating accurate remote sensing monitoring. Although red-edge bands show potential in terrestrial monitoring, their multi-dimensional features (i.e., spectral, textural, and index-based characteristics) remain underutilized for aquatic blooms. This study leverages the dual red-edge bands (710 nm and 750 nm) of GF-6/WFV to enhance cyanobacterial bloom identification in Lake Taihu. Multi-temporal images from 2019–2023 were used to construct red-edge features in three dimensions: spectral (evaluated via an adaptive band selection method and the Jeffries–Matusita–Bhattacharyya distance), texture (based on the Gray Level Co-occurrence Matrix and principal component analysis), and indices (nine vegetation indices ranked by Random Forest importance). Twelve feature-combination schemes were designed and implemented with a Random Forest classifier. Results show that red-edge features consistently improve identification accuracy. Quantitatively, compared to the basic four-band (RGBN) combination, the 710 nm band improved spectral separability by an average of 9.63%, whereas the 750 nm band yielded a lower average improvement of 5.69%. Red-edge indices, especially the modified chlorophyll absorption reflectance index 1 (MCARI1) and normalized difference red-edge index (NDRE), exhibited higher importance than non-red-edge indices. All schemes incorporating red-edge features achieved mean overall accuracies of 92.8–94.9% and Kappa coefficients of 0.86–0.94, surpassing the basic four-band scheme. Among these features, red-edge indices contributed most significantly to accuracy gains, increasing the overall accuracy by an average of 0.36–6.06% and the Kappa coefficient by up to 0.06. This study demonstrates that multi-dimensional red-edge features effectively enhance the identification accuracy of cyanobacterial blooms and provides a methodological reference for operational GF-6 applications in water quality monitoring.
</summary>
<dc:date>2026-02-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biological Activity of Metal Complexes</title>
<link href="https://hdl.handle.net/1721.1/164989" rel="alternate"/>
<author>
<name>Sharma, Vinay K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164989</id>
<updated>2026-03-04T03:07:43Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">Biological Activity of Metal Complexes
Sharma, Vinay K.
Metal complexes play a fundamental role in biological systems and continue to attract sustained interest due to their remarkable potential in therapeutic, diagnostic, and biotechnological applications [1–8]. In recent years, the field of bioinorganic chemistry has advanced rapidly, driven by progress in coordination chemistry, spectroscopy, nanotechnology, and molecular biology [9–22]. These developments have enabled a deeper understanding of how metal ions and complexes interact with biomolecular targets and have opened new avenues for the rational design of metal-based agents for cancer therapy, antimicrobial treatment, imaging, and the study of metal-mediated biochemical processes [23–30].
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single Parameter Model for Galaxy Rotation Curves</title>
<link href="https://hdl.handle.net/1721.1/164988" rel="alternate"/>
<author>
<name>Cisneros, Sophia N.</name>
</author>
<author>
<name>Ott, Rich</name>
</author>
<author>
<name>Crowley, Meagan</name>
</author>
<author>
<name>Roberts, Amy</name>
</author>
<author>
<name>Paz, Marcus</name>
</author>
<id>https://hdl.handle.net/1721.1/164988</id>
<updated>2026-03-04T03:07:33Z</updated>
<published>2026-02-15T00:00:00Z</published>
<summary type="text">Single Parameter Model for Galaxy Rotation Curves
Cisneros, Sophia N.; Ott, Rich; Crowley, Meagan; Roberts, Amy; Paz, Marcus
One key piece of evidence for dark matter is the rotation-curve problem: the disagreement between measured galactic rotation curves and their luminous mass. A novel solution to this problem is presented here, in a model that predicts observed Doppler-shifted spectra based only on the luminous matter estimates and one free model parameter α. This model is applied to fit the rotation curves of the SPARC sample of 175 galaxies, yielding mass-to-light ratios, goodness of fit measurements, and α. The measured average χ²ᵣ = 2.24 compares favorably with the Navarro-Frenk-White dark matter model’s average of χ²ᵣ = 4.19 for the same data, and more galaxies are successfully fit by this model. The model provides a useful formulation linking luminous matter to the observed rotation curves, with the dark matter contribution to galaxies encoded in two transformation terms of the luminous mass. It also offers a lower-parameter characterization of the rotation curve problem, and a power law relationship between α and galactic photometric quantities is observed, potentially removing the need for the free parameter.
</summary>
<dc:date>2026-02-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Clinical Safety of GRAd Vector-Based COVID-19 and HIV Vaccines Supports a Platform Regulatory Approach</title>
<link href="https://hdl.handle.net/1721.1/164987" rel="alternate"/>
<author>
<name>Paalangara, Reji</name>
</author>
<author>
<name>Gohin, Stephanie</name>
</author>
<author>
<name>Menard, Alexis</name>
</author>
<author>
<name>Amy, Charlotte</name>
</author>
<author>
<name>Berrabah, Wahiba</name>
</author>
<author>
<name>Rogue, Alexandra</name>
</author>
<author>
<name>Getz, Matthew A.</name>
</author>
<author>
<name>Alrubayyi, Aljawharah</name>
</author>
<author>
<name>Battella, Simone</name>
</author>
<author>
<name>Raggioli, Angelo</name>
</author>
<author>
<name>Gentile, Michela</name>
</author>
<author>
<name>Di Rita, Anthea</name>
</author>
<author>
<name>Noto, Alessia</name>
</author>
<author>
<name>Miselli, Giuseppina</name>
</author>
<author>
<name>Grazioli, Fabiana</name>
</author>
<author>
<name>Napolitano, Federico</name>
</author>
<author>
<name>Sowcik, Dhurata</name>
</author>
<author>
<name>Soriani, Marco</name>
</author>
<author>
<name>Chmielewski, Benjamin</name>
</author>
<author>
<name>Molife, Lebohang</name>
</author>
<id>https://hdl.handle.net/1721.1/164987</id>
<updated>2026-03-04T03:07:38Z</updated>
<published>2026-02-06T00:00:00Z</published>
<summary type="text">Non-Clinical Safety of GRAd Vector-Based COVID-19 and HIV Vaccines Supports a Platform Regulatory Approach
Paalangara, Reji; Gohin, Stephanie; Menard, Alexis; Amy, Charlotte; Berrabah, Wahiba; Rogue, Alexandra; Getz, Matthew A.; Alrubayyi, Aljawharah; Battella, Simone; Raggioli, Angelo; Gentile, Michela; Di Rita, Anthea; Noto, Alessia; Miselli, Giuseppina; Grazioli, Fabiana; Napolitano, Federico; Sowcik, Dhurata; Soriani, Marco; Chmielewski, Benjamin; Molife, Lebohang
Background/Objectives: The rapid development of safe and efficacious vaccines is often hindered by extensive, mandated non-clinical safety evaluations in animals. With the aim to provide scientific evidence supporting a “vaccine platform approach”, here we present the complete non-clinical studies for two investigational vaccines, GRAd-COV2 and GRAdHIVNE1, based on GRAd, a gorilla-derived group C adenoviral vector. Methods: The biodistribution of GRAd genomes following the intramuscular administration of the vaccines was assessed in rats by a sensitive qPCR method. Local tolerance and systemic toxic effects were evaluated in single- and repeated-dose toxicity studies in rabbits. Results: GRAd-COV2 and GRAdHIVNE1 were well-tolerated. Distribution was highly confined to the injection site and draining lymph nodes, and toxicity profile consisted of transient, non-adverse inflammatory responses, while the expected immune responses to the encoded antigens were successfully induced. Notably, both vaccines demonstrated a consistent safety profile despite transgene and backbone differences, comparable to other replication-defective adenoviral vectors. Conclusions: The established non-clinical safety profile of the GRAd platform provides a robust foundation for a more efficient and streamlined regulatory pathway. By leveraging this prior knowledge, future GRAd-based vaccines can achieve accelerated clinical development while fully adhering to the ethical principles of replacement, reduction, and refinement of animal use in research.
</summary>
<dc:date>2026-02-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and User-Centered Field Evaluation of an Accessible Precision Irrigation Tool and Its Human–Machine Interaction on a Jordanian Farm</title>
<link href="https://hdl.handle.net/1721.1/164986" rel="alternate"/>
<author>
<name>Van de Zande, Georgia D.</name>
</author>
<author>
<name>Sheline, Carolyn</name>
</author>
<author>
<name>Pratt, Shane R.</name>
</author>
<author>
<name>Winter V, Amos G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164986</id>
<updated>2026-03-04T03:07:34Z</updated>
<published>2026-02-04T00:00:00Z</published>
<summary type="text">Design and User-Centered Field Evaluation of an Accessible Precision Irrigation Tool and Its Human–Machine Interaction on a Jordanian Farm
Van de Zande, Georgia D.; Sheline, Carolyn; Pratt, Shane R.; Winter V, Amos G.
This work aims to demonstrate the successful, long-term human use of an automatic scheduling-manual operation (AS-MO) precision irrigation tool by farmers on a medium-scale Jordanian farm. Innovation in low-cost, accessible, and water-efficient irrigation technologies is critical as water resources become scarce, especially on resource-constrained farms in the drought-prone Middle East and North Africa (MENA) region. Prior work has shown that a proposed AS-MO decision support tool could bridge the gap between fully manual irrigation—a common practice on many MENA farms—and existing precision agriculture solutions, which are often too expensive or complex for medium-scale farmers to adopt. Recent developments have also demonstrated that the scheduling theory behind the proposed AS-MO tool uses up to 44% less water compared to fully manual irrigation. However, a functional design of the AS-MO tool has not been realized nor has it been demonstrated on a farm with farmer users. This work documents the detailed design of an AS-MO tool’s human–machine interaction (HMI) and validates the human execution of the tool in context. Through an 11-week case study conducted on a Jordanian farm, we show that farmers used a functional prototype of the AS-MO tool as intended. The functional tool prototype was designed to deliver a long-term AS-MO user experience to study participants. The prototype monitored local weather conditions, generated water-efficient schedules using an existing scheduling theory, and notified users’ phones when they should manually open or close valves. The irrigation practices of participants using the AS-MO prototype were measured, and participants demonstrated successful use of the tool. Users correctly confirmed 93% of the scheduled events using the tool’s HMI. 
Despite manual operation, a majority of confirmed irrigation event durations fell within 15% of the automatically scheduled durations; relative to the length of scheduled irrigation event durations, the medians of confirmed and scheduled durations were 102% and 88%, respectively. These results demonstrate the success of the tool’s decision support ability. Feedback from study participants can support the AS-MO tool’s next design iteration and can inform the development of other decision support systems designed for resource-constrained, medium-scale farms. This work presents an important step towards developing a precision irrigation tool that, if adopted at scale, could increase the adoption of water-efficient irrigation practices on resource-constrained farms that are not served by existing technology, improving sustainable agriculture in MENA.
</summary>
<dc:date>2026-02-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Genipin Matrix Augmentation on the Retention of Glycosaminoglycans in the Intervertebral Disc—A Pilot Study</title>
<link href="https://hdl.handle.net/1721.1/164985" rel="alternate"/>
<author>
<name>Hedman, Thomas</name>
</author>
<author>
<name>Brown, Matthew</name>
</author>
<author>
<name>Slusarewicz, Pawel</name>
</author>
<id>https://hdl.handle.net/1721.1/164985</id>
<updated>2026-03-04T03:07:39Z</updated>
<published>2026-02-02T00:00:00Z</published>
<summary type="text">The Effect of Genipin Matrix Augmentation on the Retention of Glycosaminoglycans in the Intervertebral Disc—A Pilot Study
Hedman, Thomas; Brown, Matthew; Slusarewicz, Pawel
The degradation of intervertebral disc proteoglycans, including the loss or shortening of their hydrophilic glycosaminoglycan chains, causes a loss of disc hydration, leading to an increase in solid matrix stresses. This illustrates one aspect of the complex multifactorial relationship between tissue degradation and the resulting mechanical dysfunction. Genipin matrix augmentation has previously been evaluated with regard to its ability to improve mechanical properties of the disc, increasing joint stability and permeability. The study aim was to evaluate the ability of genipin augmentation to increase retention of glycosaminoglycans in disc specimens subjected to free swelling. Three different models were utilized: whole bovine caudal discs, partial bovine annulus specimens, and human thoracic disc specimens. Total glycosaminoglycan release to a surrounding bath was quantified using a modified dimethyl-methylene blue assay. Genipin solution injections reduced glycosaminoglycan loss by 44.0% in intact bovine discs compared to buffer-only controls (p = 0.027), by 75.8% in partial bovine annulus specimens (p = 0.0004), and by 51.9% in human annulus specimens (p = 0.017). The combination of increased permeability and glycosaminoglycan retention may produce beneficial effects on nutritional flow, diurnal irrigation, and reduction of matrix solid phase stress. Combining these effects with the ability to improve joint stability and augment tissue mechanical properties suggests this nano-scale device may be capable of arresting ongoing degeneration.
</summary>
<dc:date>2026-02-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Novel Recurrent Neural Network Framework for Prediction and Treatment of Oncogenic Mutation Progression</title>
<link href="https://hdl.handle.net/1721.1/164984" rel="alternate"/>
<author>
<name>Parthasarathy, Rishab</name>
</author>
<author>
<name>Bhowmik, Achintya K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164984</id>
<updated>2026-03-04T03:07:35Z</updated>
<published>2026-02-02T00:00:00Z</published>
<summary type="text">A Novel Recurrent Neural Network Framework for Prediction and Treatment of Oncogenic Mutation Progression
Parthasarathy, Rishab; Bhowmik, Achintya K.
Despite significant medical advancements, cancer remains the second leading cause of death in the US, causing over 600,000 deaths per year. One emerging field, pathway analysis, is promising but still relies on manually derived wet lab data, which is time-consuming to acquire. This work proposes an efficient, effective, end-to-end framework for Artificial Intelligence (AI)-based pathway analysis that predicts both cancer severity and mutation progression in order to recommend possible treatments. The proposed technique involves a novel combination of time-series machine learning models and pathway analysis. First, mutation sequences were isolated from The Cancer Genome Atlas (TCGA) Database. Then, a novel preprocessing algorithm was used to filter key mutations by mutation frequency. This data was fed into a Recurrent Neural Network (RNN) that predicted cancer severity. The model probabilistically used the RNN predictions, information from the preprocessing algorithm, and multiple drug-target databases to predict future mutations and recommend possible treatments. This framework achieved robust results, with Receiver Operating Characteristic (ROC) accuracies greater than 60%, comparable to existing cancer diagnostics. In addition, preprocessing played a key role in isolating a few hundred key driver mutations per cancer stage, consistent with current research. Heatmaps based on predicted gene frequency were also generated, highlighting key mutations in each cancer. Overall, this work is the first to propose an efficient, cost-effective end-to-end framework for projecting cancer prognosis and providing possible treatments without relying on expensive, time-consuming wet lab work.
</summary>
<dc:date>2026-02-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>All-Pay Auctions with Different Forfeits</title>
<link href="https://hdl.handle.net/1721.1/164983" rel="alternate"/>
<author>
<name>Kang, Benjamin</name>
</author>
<author>
<name>Unwin, James</name>
</author>
<id>https://hdl.handle.net/1721.1/164983</id>
<updated>2026-03-04T03:07:42Z</updated>
<published>2026-01-09T00:00:00Z</published>
<summary type="text">All-Pay Auctions with Different Forfeits
Kang, Benjamin; Unwin, James
In an auction, each party bids a certain amount, and the one who bids the highest is the winner. Interestingly, auctions can also be used as models for other real-world systems. In an all-pay auction, all parties must pay a forfeit for bidding. In the most commonly studied all-pay auction, parties forfeit their entire bid, and this has been considered as a model for expenditure on political campaigns. Here, we consider a number of alternative forfeits that might be used as models for different real-world competitions, such as preparing bids for defense or infrastructure contracts.
</summary>
<dc:date>2026-01-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spontaneous formation of robust two-dimensional perovskite phases</title>
<link href="https://hdl.handle.net/1721.1/164982" rel="alternate"/>
<author>
<name>Tan, Shaun</name>
</author>
<author>
<name>Shih, Meng-Chen</name>
</author>
<author>
<name>Lu, Yongli</name>
</author>
<author>
<name>Choi, Seung-Gu</name>
</author>
<author>
<name>Dong, Yifan</name>
</author>
<author>
<name>Lee, Joo-Hong</name>
</author>
<author>
<name>Yavuz, Ilhan</name>
</author>
<author>
<name>Larson, Bryon W</name>
</author>
<author>
<name>Park, So Yeon</name>
</author>
<author>
<name>Kodalle, Tim</name>
</author>
<author>
<name>Zhang, Ruiqi</name>
</author>
<author>
<name>Grotevent, Matthias J</name>
</author>
<author>
<name>Lin, Yu-Kuan</name>
</author>
<author>
<name>Zhu, Hua</name>
</author>
<author>
<name>Bulović, Vladimir</name>
</author>
<author>
<name>Sutter-Fella, Carolin M</name>
</author>
<author>
<name>Park, Nam-Gyu</name>
</author>
<author>
<name>Beard, Matthew C</name>
</author>
<author>
<name>Lee, Jin-Wook</name>
</author>
<author>
<name>Zhu, Kai</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/164982</id>
<updated>2026-03-03T03:07:04Z</updated>
<published>2025-05-08T00:00:00Z</published>
<summary type="text">Spontaneous formation of robust two-dimensional perovskite phases
Tan, Shaun; Shih, Meng-Chen; Lu, Yongli; Choi, Seung-Gu; Dong, Yifan; Lee, Joo-Hong; Yavuz, Ilhan; Larson, Bryon W; Park, So Yeon; Kodalle, Tim; Zhang, Ruiqi; Grotevent, Matthias J; Lin, Yu-Kuan; Zhu, Hua; Bulović, Vladimir; Sutter-Fella, Carolin M; Park, Nam-Gyu; Beard, Matthew C; Lee, Jin-Wook; Zhu, Kai; Bawendi, Moungi G
The two-dimensional on three-dimensional (2D/3D) perovskite bilayer heterostructure can improve the stability and performance of perovskite solar cells. We show that the 2D/3D perovskite stack in a device evolves dynamically during its end-of-life decomposition. Initially phase-pure 2D interlayers can evolve differently, resulting in different device stabilities. We show that a robust 2D interlayer can be formed using mixed solvents to regulate its crystallinity and phase purity. The resulting 2D/3D devices achieved 25.9% efficiency and had good durability, retaining 91% of their initial performance after 1074 hours at 85°C using maximum power point tracking.
</summary>
<dc:date>2025-05-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Third-order photon correlations extract single-nanocrystal multiexciton properties in solution</title>
<link href="https://hdl.handle.net/1721.1/164981" rel="alternate"/>
<author>
<name>Horowitz, Jonah R</name>
</author>
<author>
<name>Berkinsky, David B</name>
</author>
<author>
<name>Bendekgey, Henry C</name>
</author>
<author>
<name>Tye, Oliver J</name>
</author>
<author>
<name>Šverko, Tara</name>
</author>
<author>
<name>Shulenberger, Katherine E</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/164981</id>
<updated>2026-03-03T03:06:58Z</updated>
<published>2025-07-28T00:00:00Z</published>
<summary type="text">Third-order photon correlations extract single-nanocrystal multiexciton properties in solution
Horowitz, Jonah R; Berkinsky, David B; Bendekgey, Henry C; Tye, Oliver J; Šverko, Tara; Shulenberger, Katherine E; Bawendi, Moungi G
Colloidal semiconductor nanocrystals are considered promising materials for high-flux optical applications, including lasing, light-emitting diodes, biological imaging, and quantum optics. In high-flux applications, multiexcitons can significantly contribute to emission, influencing its brightness, spectral purity, and kinetics. As a result, understanding and controlling multiexciton emission in colloidal nanocrystal materials is of the utmost importance. In the past, single-nanocrystal photon correlation methods have been applied to understand biexciton and triexciton efficiencies, lifetimes, and spectra. While powerful, such methods suffer from user selection bias and require stable emission from single nanocrystals. To compensate for this shortcoming, second-order correlation methods were developed to extract sample-averaged biexciton properties from a solution of nanocrystals. Until now, however, the analogous third-order solution photon correlation methods remained unexplored. In this work, we present a pair of third-order photon correlation techniques to obtain the sample-averaged single-nanocrystal triexciton quantum yield and lifetime in a solution-phase experiment. These techniques derive from the relationship between the Poisson probability of nanocrystal photon absorption and the intrinsic probability of nanocrystal photon emission. We validate the theoretical background of these techniques by creating a numerical model to simulate the diffusion and emission of many nanocrystals in solution. Our simulations confirm that the average triexciton quantum yield and triexciton lifetime can be extracted from a solution of nanocrystals. These techniques will enable researchers to gain a better understanding of the fundamental multiexciton properties of colloidal nanocrystals.
</summary>
<dc:date>2025-07-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Challenges of II‐VI and III‐V Blue Quantum Dot Light‐Emitting Diodes</title>
<link href="https://hdl.handle.net/1721.1/164980" rel="alternate"/>
<author>
<name>Tan, Shaun</name>
</author>
<author>
<name>Horowitz, Jonah R</name>
</author>
<author>
<name>Tye, Oliver J</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/164980</id>
<updated>2026-03-03T03:07:00Z</updated>
<published>2025-09-22T00:00:00Z</published>
<summary type="text">Challenges of II‐VI and III‐V Blue Quantum Dot Light‐Emitting Diodes
Tan, Shaun; Horowitz, Jonah R; Tye, Oliver J; Bawendi, Moungi G
Quantum dot light-emitting diodes (QD-LEDs) are electroluminescent devices where the emissive layer consists of inorganic colloidal quantum dots. Recent breakthroughs have enabled the development of bright and efficient blue-emitting QD-LEDs based on heavy metal-free compositions. However, challenges remain that hinder their practical application in electroluminescent displays and lighting technologies. The primary obstacle is their limited operational lifetimes, which remain significantly below practical requirements, especially in comparison to red- and green-emitting QD-LEDs. Another important issue is the low color purity and broad spectral linewidths of heavy metal-free blue quantum dot compositions. Additional problems include transient electroluminescent behaviors such as fluorescence intermittency and positive aging effects. This review examines the current understanding of the physical mechanisms underlying these challenges faced by blue QD-LEDs. Often, contradictory explanations are proposed to account for the same phenomenon. Here, potential interpretations are suggested that may help reconcile the conflicting reports. Recent advances are further examined that have contributed to the development of state-of-the-art blue QD-LEDs.
</summary>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Processing Environment on Anti-Solvent Free FAPbI3 Films and Solar Cells</title>
<link href="https://hdl.handle.net/1721.1/164979" rel="alternate"/>
<author>
<name>Wall, Elizabeth M</name>
</author>
<author>
<name>Lin, Yu‐Kuan</name>
</author>
<author>
<name>Bawendi, Moungi</name>
</author>
<author>
<name>Burlingame, Quinn C</name>
</author>
<author>
<name>Loo, Yueh‐Lin</name>
</author>
<id>https://hdl.handle.net/1721.1/164979</id>
<updated>2026-03-03T03:06:51Z</updated>
<published>2025-12-17T00:00:00Z</published>
<summary type="text">Impact of Processing Environment on Anti-Solvent Free FAPbI3 Films and Solar Cells
Wall, Elizabeth M; Lin, Yu‐Kuan; Bawendi, Moungi; Burlingame, Quinn C; Loo, Yueh‐Lin
As perovskite solar cells approach commercialization, understanding the environmental sensitivities of perovskites during fabrication becomes increasingly important. In this work, the humidity-dependence of each deposition and annealing step in the anti-solvent-free two-step formamidinium lead iodide fabrication process is investigated in air and N2. In-situ grazing-incidence wide-angle X-ray scattering measurements during spin-coating indicate that humidity affects the formation and dynamics of intermediate phases in perovskite precursor films. These differences, and those induced by annealing in humidity, impact the structure, morphology, and composition of resultant perovskite films, though the initial performance of solar cells fabricated using these active layers is relatively insensitive to humidity across the range studied. In contrast, stability is maximized in devices with dry-processed active layers and those terminally annealed in humidity. Spin-coating of PbI2 is the most environmentally sensitive step—needle-like structures precipitate during spin-coating at 40% relative humidity, leading to significantly reduced photovoltaic performance and device stability. Additionally, films and solar cells fabricated in air appear virtually identical to those fabricated in N2. Collectively, these results show that optimal performance and stability of two-step processed formamidinium lead iodide solar cells are achieved when fabricating active layers in a dry atmosphere or with some humidity during the final anneal.
</summary>
<dc:date>2025-12-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cognitive Reinforcement: Capturing Tacit Knowledge and Enhancing Expertise with a Biofeedback Interface for Visual Attention</title>
<link href="https://hdl.handle.net/1721.1/164978" rel="alternate"/>
<author>
<name>Armengol-Urpi, Alexandre</name>
</author>
<author>
<name>Salazar-Gomez, Andres F.</name>
</author>
<author>
<name>Sinha, Pawan</name>
</author>
<author>
<name>Sarma, Sanjay E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164978</id>
<updated>2026-03-04T17:47:02Z</updated>
<published>2026-03-04T00:00:00Z</published>
<summary type="text">Cognitive Reinforcement: Capturing Tacit Knowledge and Enhancing Expertise with a Biofeedback Interface for Visual Attention
Armengol-Urpi, Alexandre; Salazar-Gomez, Andres F.; Sinha, Pawan; Sarma, Sanjay E.
Objective. Tacit or implicit knowledge refers to know-how that experts possess but often cannot articulate, codify, or explicitly transfer to others. This can present a significant challenge for learning, skill acquisition, and knowledge transfer across various domains, including those that rely on apprenticeships, craftsmanship, sports, and medical imaging diagnosis. This study explores whether expert tacit knowledge can be accessed and leveraged using an EEG and gaze-informed biofeedback interface to enhance expertise transfer and training. Approach. We designed an image classification task where novices were trained until they implicitly learned to classify images correctly, despite being unaware of which image regions or features guided their decisions. The task involved images with a hidden spatial asymmetry that even trained participants did not explicitly recognize. Using combined eye-tracking and EEG measures, we tracked both overt and covert visual attention to determine whether individuals unconsciously internalized this asymmetry during learning. We then investigated whether providing explicit gaze-informed feedback on their own implicit attention biases could further improve task performance of trained participants. Main Results. Our findings reveal that as participants became trained, their attention patterns—both overt and covert—consistently reflected an unconscious awareness of image asymmetry, with attention biased toward task-relevant image regions. Moreover, trained individuals who received explicit feedback derived from their own gaze behavior showed additional improvements in classification performance compared to an equally trained control group. Significance. These results open the door to novel uses of biofeedback interfaces to facilitate new forms of expertise transfer, training, and collective intelligence. By extracting and conveying tacit expert knowledge—ordinarily difficult to externalize—our interface enables its transmission to novices, trained individuals, or even machine learning systems. We refer to this process as cognitive reinforcement.
</summary>
<dc:date>2026-03-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Higher Siegel-Weil formulae over function fields</title>
<link href="https://hdl.handle.net/1721.1/164977" rel="alternate"/>
<author>
<name>Mkrtchyan, Mikayel</name>
</author>
<id>https://hdl.handle.net/1721.1/164977</id>
<updated>2026-02-28T03:02:13Z</updated>
<published>2026-02-01T00:00:00Z</published>
<summary type="text">Higher Siegel-Weil formulae over function fields
Mkrtchyan, Mikayel
In their seminal work, Feng-Yun-Zhang introduced function field analogues of Kudla-Rapoport cycles for moduli spaces of unitary shtukas, and initiated the study of their intersection theory. They proved a higher Siegel-Weil formula in the case of non-degenerate Fourier coefficients, relating the degrees of these cycles to higher derivatives of Siegel-Eisenstein series. In this thesis, we generalize their result in two directions: we 1) prove a higher Siegel-Weil formula for unitary groups for corank-1 degenerate coefficients, and 2) introduce analogous cycles on moduli spaces of symplectic shtukas, and prove a higher Siegel-Weil formula for such cycles in the non-degenerate case, relating their degrees to derivatives of Siegel-Eisenstein series on split orthogonal groups.
</summary>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Diverse Array of Synthetic Strategies for Phosphorus Group Transfer Chemistry: From Phosphinidenes to Phosphates</title>
<link href="https://hdl.handle.net/1721.1/164976" rel="alternate"/>
<author>
<name>Xin, Tiansi</name>
</author>
<id>https://hdl.handle.net/1721.1/164976</id>
<updated>2026-02-28T03:02:21Z</updated>
<published>2026-02-01T00:00:00Z</published>
<summary type="text">A Diverse Array of Synthetic Strategies for Phosphorus Group Transfer Chemistry: From Phosphinidenes to Phosphates
Xin, Tiansi
This thesis compiles the published scientific contributions of Tiansi Xin. Chapter 1 consists of a brief collection of eulogies from friends and colleagues, reflecting on his life and time at the Massachusetts Institute of Technology. The subsequent chapters describe the development of novel synthetic methods for the transfer of phosphorus-containing moieties, specifically metaphosphates and phosphinidenes. The work presented here has significant implications for both the fundamental understanding and practical advancement of synthetic inorganic and organic chemistry. Chapters 2 and 3 address the sustainable production and processing of phosphorus-containing chemicals, focusing on mechanochemical methods to synthesize reduced phosphorus species while circumventing the need to access hazardous white phosphorus as an intermediate. In particular, Chapter 2 describes a solvent-free mechanochemical approach to producing phosphite (HPO₃²⁻) via hydride-mediated reduction of condensed phosphates. Using potassium hydride, a range of inorganic phosphate sources—including pyrophosphate, triphosphate, trimetaphosphate, fluorophosphate, and polyphosphate—were converted to phosphite in moderate to high yields. Mechanistic studies identified overreduction pathways leading to hypophosphite and other low-oxidation P-species. Chapter 3 similarly applies this mechanochemical approach to phosphorus–carbon bond formation, reporting the phosphorylation of acetylides with condensed phosphates to afford phosphonates. Biogenic polyphosphates were also shown to be viable precursors, a proof of concept for closing the modern phosphorus cycle using recycled inputs. These results demonstrate the possibility of accessing organophosphorus chemicals directly from condensed phosphates and may offer an opportunity toward a “greener” phosphorus industry. Chapters 4 and 5 shift focus to phosphinidene transfer chemistry and the synthesis of novel phosphorus-containing heterocycles. This expands on previously published studies from the Cummins group on the chemistry of dibenzo-7-phosphanorbornadiene “RPA” reagents. Chapter 4 reports the preparation and structural characterization of iron–phosphido complexes relevant to phosphinidene group transfer catalysis and describes the development of an improved catalytic system based on a simple diiron precursor (Fp₂), enabling efficient synthesis of phosphiranes from electron-deficient alkenes. The mechanism was thoroughly interrogated both experimentally and computationally. Chapter 5 describes the novel synthesis of free, uncomplexed phosphet-2-ones via phosphinidene transfer to cyclopropenones, with experimental and theoretical studies supporting a mechanism involving ketene-derived intermediates and transformations to additional phosphorus heterocycles through subsequent reactions.
</summary>
<dc:date>2026-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Host–Guest Complexation by β-Cyclodextrin Enhances the Solubility of an Esterified Protein</title>
<link href="https://hdl.handle.net/1721.1/164975" rel="alternate"/>
<author>
<name>Cheah, Keith M</name>
</author>
<author>
<name>Jun, Joomyung V</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Raines, Ronald T</name>
</author>
<id>https://hdl.handle.net/1721.1/164975</id>
<updated>2026-02-27T04:16:04Z</updated>
<published>2022-08-29T00:00:00Z</published>
<summary type="text">Host–Guest Complexation by β-Cyclodextrin Enhances the Solubility of an Esterified Protein
Cheah, Keith M; Jun, Joomyung V; Wittrup, K Dane; Raines, Ronald T
The carboxyl groups of a protein can be esterified by reaction with a diazo compound, 2-diazo-2-(p-methylphenyl)-N,N-dimethylacetamide. This esterification enables the entry of the protein into the cytosol of a mammalian cell, where the nascent ester groups are hydrolyzed by endogenous esterases. The low aqueous solubility of the ensuing esterified protein is, however, a major practical challenge. Solubility screening revealed that β-cyclodextrin (β-CD) is an optimal solubilizing agent for esterified green fluorescent protein (est-GFP). Its addition can increase the recovery of est-GFP by 10-fold. α-CD, γ-CD, and cucurbit-7-uril are less effective excipients. 1H NMR titration experiments revealed that β-CD encapsulates the hydrophobic tolyl group of ester conjugates with Ka = 321 M–1. Combining l-arginine and sucrose with β-CD enables the nearly quantitative recovery of est-GFP. Thus, the insolubility of esterified proteins can be overcome with excipients.
</summary>
<dc:date>2022-08-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Role of Kinematic and Behavioral Features in Driver-Pedestrian Interaction across Environments: An Inverse Reinforcement Learning Approach</title>
<link href="https://hdl.handle.net/1721.1/164974" rel="alternate"/>
<author>
<name>Noonan, T Zach</name>
</author>
<author>
<name>Gershon, Pnina</name>
</author>
<author>
<name>Domeyer, Josh</name>
</author>
<author>
<name>Mehler, Bruce</name>
</author>
<author>
<name>Reimer, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/164974</id>
<updated>2026-02-27T04:16:16Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Quantifying the Role of Kinematic and Behavioral Features in Driver-Pedestrian Interaction across Environments: An Inverse Reinforcement Learning Approach
Noonan, T Zach; Gershon, Pnina; Domeyer, Josh; Mehler, Bruce; Reimer, Bryan
This study examined real-world driver-pedestrian encounters to identify key interaction features and assess how the importance of these features is mediated by protection afforded by the environment. Using inverse reinforcement learning, we estimated the utility functions to evaluate the relative importance of different aspects of the interaction for each road user and how they differ between undesignated (e.g., jaywalking) and designated (e.g., zebra crossings) crossings. Pedestrian pausing behavior and dynamic features like acceleration changes and time gaps were important at designated crossings, whereas undesignated crossings relied on distances and bidirectional gaze, highlighting reliance on non-verbal cues. This work builds on previous studies analyzing the role of environmental features on interaction, communication, and negotiation between drivers and pedestrians. Understanding driver-pedestrian communication and identifying the most important interaction features may enhance the design of effective and coordinated driver-pedestrian interaction strategies, especially in the context of automated driving systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preoperative Function, Previous SERM Treatment, and Triple-Negative Tumor Status are Independently Associated With 3-Month Postoperative Function After Surgical Decompression of Metastatic Breast Cancer</title>
<link href="https://hdl.handle.net/1721.1/164973" rel="alternate"/>
<author>
<name>Siraj, Layla</name>
</author>
<author>
<name>Duvall, Julia B.</name>
</author>
<author>
<name>Massaad, Elie</name>
</author>
<author>
<name>Fourman, Mitchell S.</name>
</author>
<author>
<name>Shin, John H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164973</id>
<updated>2026-02-27T04:16:12Z</updated>
<published>2025-11-04T00:00:00Z</published>
<summary type="text">Preoperative Function, Previous SERM Treatment, and Triple-Negative Tumor Status are Independently Associated With 3-Month Postoperative Function After Surgical Decompression of Metastatic Breast Cancer
Siraj, Layla; Duvall, Julia B.; Massaad, Elie; Fourman, Mitchell S.; Shin, John H.
Background:
The most common cancer in women worldwide, breast cancer most often metastasizes to the bone. Improved chemo- and radiotherapies and novel molecular therapies have prolonged survival in women with osseous metastatic breast cancer, but spinal metastases often cause cord compression that degrades their functional independence.
Purpose:
In women with breast cancer metastasized to the spine, we sought to (1) identify independent predictors of a functional deficit 3 months after surgical management and (2) assess the utility of existing metrics at highlighting patients at risk of a postoperative functional deficit.
Methods:
We performed a single-institution, retrospective analysis of 92 patients meeting our inclusion criteria between 2004 and 2021. Patients were classified by 3-month postoperative Eastern Cooperative Oncology Group (ECOG) scores into good/independent (ECOG 0 to 2) and poor/dependent (ECOG 3 to 5) functional outcome groups. Univariate and multivariate analyses were performed to identify patient and tumor factors associated with good vs. poor 3-month ECOG scores.
Results:
Preoperative use of selective estrogen receptor modulators (SERMs) was significantly associated with good postoperative functional outcomes. Poor preoperative function, the presence of visceral metastases at the time of surgery, and triple-negative primary or metastatic tumor status were independently associated with poor 3-month postoperative function. Host characteristics, sociodemographic factors, and indicators of surgical complexity, including estimated blood loss, front/back surgery, and corpectomy reconstruction, were not associated with 3-month ECOG score. A multivariate model including these significant univariate associations and normalized for patient demographics identified preoperative SERM use, poor preoperative function (ECOG score), and triple-negative primary or metastatic tumor status as independently associated with functional status 3 months after surgery.
Conclusions:
Our retrospective analysis found that preoperative SERM use was significantly associated with improved postoperative functional outcomes, while poor preoperative function and triple-negative tumor status were significantly associated with poor function 3 months after surgery. These factors may serve as indicators of function and independence after surgery for patients with metastatic breast cancer to the spine.
Level of Evidence:
Level IV: Prognostic Study
</summary>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Influence of Prior Semantic Knowledge in Noisy Channel Interpretation</title>
<link href="https://hdl.handle.net/1721.1/164972" rel="alternate"/>
<author>
<name>Chen, Sihan</name>
</author>
<author>
<name>Washington, Lia</name>
</author>
<author>
<name>Gibson, Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/164972</id>
<updated>2026-02-27T04:16:13Z</updated>
<published>2025-09-25T00:00:00Z</published>
<summary type="text">The Influence of Prior Semantic Knowledge in Noisy Channel Interpretation
Chen, Sihan; Washington, Lia; Gibson, Edward
How do comprehenders interpret semantically implausible sentences? Previous studies proposed a noisy-channel framework of sentence comprehension, in which communication between a speaker and a comprehender happens over a noisy channel. The comprehender rationally adopts an interpretation of a sentence based on how likely the interpretation is (the semantic prior) and how likely it is that the interpretation was corrupted into the perceived sentence by noise (the likelihood). The theory predicted that comprehenders would be more likely to adopt a literal interpretation of an implausible sentence if their prior for implausible sentences were higher. To test this hypothesis, Gibson et al. manipulated the proportion of implausible test sentences in two sets of experiments in which participants read a number of sentences and answered a comprehension question after each sentence. Although their results supported the hypothesis, the experiments could have been confounded (a) by participants’ adaptation (due to different experiment lengths) and (b) by different participants adopting different strategies for the task (due to the between-subject design). In our study, we manipulated the semantic prior while controlling for these potential confounds. We found that participants exposed to more implausible sentences were indeed more likely to interpret implausible sentences literally. Our results hence offer additional support for the noisy-channel framework.
</summary>
<dc:date>2025-09-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Politics of Engagement in Platform Governance</title>
<link href="https://hdl.handle.net/1721.1/164971" rel="alternate"/>
<author>
<name>Lewis, Becca</name>
</author>
<author>
<name>Christin, Angèle</name>
</author>
<id>https://hdl.handle.net/1721.1/164971</id>
<updated>2026-02-27T04:16:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Politics of Engagement in Platform Governance
Lewis, Becca; Christin, Angèle
In recent years, the concept of user engagement has dominated debate over the governance of online platforms, and critics use the term to assign crass commercial interests to social media companies. We argue that social media engagement is a multifaceted ideal that serves both economic and ideological functions for platforms. We show how Facebook’s early leadership used the concept to reconcile the competing demands of expansion, revenue generation, and community-building. In doing so, they synthesized three distinct ideas: the Silicon Valley belief that network expansion correlated with network strength, the ad industry’s contention that media should promote emotional investment from viewers, and the academic claim that civic participation is the most important democratic virtue. Even as the contradictions that these claims yield have come to the foreground, the multiple logics of engagement have proven difficult to evade, and engagement continues to shape discussions of platform governance.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Electrification and Partial Automation on Driver Speeding Behavior</title>
<link href="https://hdl.handle.net/1721.1/164970" rel="alternate"/>
<author>
<name>Gershon, Pnina</name>
</author>
<author>
<name>Noonan, T Zach</name>
</author>
<author>
<name>Lenneman, John</name>
</author>
<id>https://hdl.handle.net/1721.1/164970</id>
<updated>2026-02-27T04:16:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Impact of Electrification and Partial Automation on Driver Speeding Behavior
Gershon, Pnina; Noonan, T Zach; Lenneman, John
As electric vehicles (EVs) and partial automation systems become increasingly prevalent, their impact on everyday driving behavior remains underexplored. This study utilizes real-world naturalistic data to examine how vehicle type (electric versus internal combustion engine, ICE) and the use of partial automation are associated with speeding behavior. Data were collected from 24 drivers over the course of a month each, comparing Tesla Model 3s with Autopilot (EV) and Cadillac CT6s with Super Cruise (ICE), covering about 38,000 miles of driving. Results indicate that EV drivers tended to speed for shorter durations on arterial roads but exhibited higher speeding magnitudes on residential and controlled-access roads after their first week of driving. Notably, driving with partial automation, regardless of powertrain, was associated with significantly longer speeding durations and slightly greater speeding magnitudes compared with manual driving. These findings suggest that both electrification and automation contribute to evolving driver behaviors, changing speeding behavior in specific driving contexts. As drivers adapt to new vehicle technologies, understanding how these systems shape behavior is important. Insights from this study may inform the design of future in-vehicle systems and guide driver-education strategies to promote safe driving practices in an evolving transportation landscape.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cloud Capitalism and the AI Transition</title>
<link href="https://hdl.handle.net/1721.1/164969" rel="alternate"/>
<author>
<name>Tan, JS</name>
</author>
<author>
<name>Thelen, Kathleen</name>
</author>
<id>https://hdl.handle.net/1721.1/164969</id>
<updated>2026-02-27T04:16:14Z</updated>
<published>2025-12-26T00:00:00Z</published>
<summary type="text">Cloud Capitalism and the AI Transition
Tan, JS; Thelen, Kathleen
This article explores the origins and implications of a new cloud business model that is powering the advance of AI. We document how this model emerged within a handful of the most dominant IT firms whose reach into all corners of the economy makes them a powerful node or “choke point” in the political economy as a whole. We then elaborate how the features of the cloud business model differ from the traditional platform model out of which it grew, as it evolved from asset-light to asset-heavy, from hierarchical organization to semivertical integration, from domination over to collaboration with partner firms, and from embracing consumer- to enterprise-facing strategies. A final section considers the technological, political, and distributional impacts of the rise of this new business model—showing how the current race to artificial general intelligence (AGI) has reinforced and accelerated its underlying dynamics (above all, intensifying the drive for scale and ever-greater asset intensity), analyzing the new techno-nationalist alliance between industry leaders and the state that the model's development has inspired, and considering the new power-distributional dynamics this model has produced.
</summary>
<dc:date>2025-12-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Vice President for Research</title>
<link href="https://hdl.handle.net/1721.1/164968" rel="alternate"/>
<author>
<name>Waitz, Ian A</name>
</author>
<id>https://hdl.handle.net/1721.1/164968</id>
<updated>2026-02-27T04:17:20Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Vice President for Research
Waitz, Ian A
This report contains the following sections: Overview, Research Administration, Research Compliance, Office of Research Computing and Data, International Scholars Office, Postdoctoral Services, OVPR Administration, and Lab and Center Leadership.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hexamethylbenzene Elimination Enables the Generation of Transient, Sterically Unhindered Multiply Bonded Boron Species</title>
<link href="https://hdl.handle.net/1721.1/164967" rel="alternate"/>
<author>
<name>Zhang, Chonghe</name>
</author>
<author>
<name>Dabringhaus, Philipp</name>
</author>
<author>
<name>Tra, Bi Youan E.</name>
</author>
<author>
<name>Gilliard, Robert J. Jr</name>
</author>
<author>
<name>Cummins, Christopher C.</name>
</author>
<id>https://hdl.handle.net/1721.1/164967</id>
<updated>2026-02-27T04:16:10Z</updated>
<published>2025-05-16T00:00:00Z</published>
<summary type="text">Hexamethylbenzene Elimination Enables the Generation of Transient, Sterically Unhindered Multiply Bonded Boron Species
Zhang, Chonghe; Dabringhaus, Philipp; Tra, Bi Youan E.; Gilliard, Robert J. Jr; Cummins, Christopher C.
We present a method for the generation of boron-containing unsaturated small molecules via hexamethylbenzene elimination. The fragmentation precursors are obtained through bond insertion into phenyl boranorbornadiene (PhB(C6Me6), 1). Compound 1 undergoes 1,1-insertion with 2,6-xylyl isocyanide, affording a boron-doped bicyclo[2.2.2]octa-2,5-diene 2. Heating 2 in toluene results in the formation of a base-stabilized boraketenimine PhB(CNxyl)2 (i.e., borylene diisocyanide) as an intermediate via retro-Diels–Alder reaction. Surprisingly, PhB(CNxyl)2 dimerizes to give a boron-doped six-membered ring, (PhB)2C4(CNxyl)6 (4). The reaction of 1 with trimethylamine N-oxide and phenyl azide yields triphenyl boroxine and a BN4 ring, respectively, implying the involvement of transient oxoborane (PhB≡O) and iminoborane (PhB≡NPh) intermediates. Furthermore, boranorbornadiene also undergoes 2,3-insertion with mesityl isocyanate (MesNCO), affording a fused 6/5-membered heterocycle 11. This insertion profile is analogous to the insertion of phenyl azide into 1.
</summary>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanisms and Scale-up Potential of 3D Solar Interfacial-Evaporators</title>
<link href="https://hdl.handle.net/1721.1/164966" rel="alternate"/>
<author>
<name>Zhang, James H.</name>
</author>
<author>
<name>Mittapally, Rohith</name>
</author>
<author>
<name>Oluwade, Abimbola</name>
</author>
<author>
<name>Chen, Gang</name>
</author>
<id>https://hdl.handle.net/1721.1/164966</id>
<updated>2026-02-27T04:16:06Z</updated>
<published>2025-04-24T00:00:00Z</published>
<summary type="text">Mechanisms and Scale-up Potential of 3D Solar Interfacial-Evaporators
Zhang, James H.; Mittapally, Rohith; Oluwade, Abimbola; Chen, Gang
Evaporation fluxes from porous evaporators under sunlight have been reported to exceed the solar-thermal limit, determined by relating the incoming solar energy to the latent and sensible heat of water, for applications in desalination and brine-pond drying. Although flat two-dimensional (2D) evaporators exceeding the solar limit would imply a non-thermal process, tall three-dimensional (3D) solar evaporators can exceed it by absorbing additional environmental heat through their cold sidewalls. Through modeling, we explain the physics and identify the critical heights at which a fin transitions from 2D to 3D evaporation and exceeds the solar-thermal limit. Our analyses illustrate that environmental heat absorption in 3D evaporators is determined by the ambient relative humidity and the airflow velocity. The model is then coarse-grained into a meter-scale fin-array device to analyze scalability. We find that these devices are unlikely to scale favorably in closed environments such as solar stills. Our modeling clearly illustrates the benefits and limitations of 3D evaporating arrays and pinpoints design choices in previous works that hinder overall device performance. This work illustrates the importance of distinguishing 2D from 3D evaporation when assessing mechanisms by which interfacial evaporation exceeds the solar-thermal limit.
</summary>
<dc:date>2025-04-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization Approaches for Ethylene Production: Comparative Techno-Economic and Life-Cycle Analysis</title>
<link href="https://hdl.handle.net/1721.1/164965" rel="alternate"/>
<author>
<name>Shin, Woojae</name>
</author>
<author>
<name>Lin, Bosong</name>
</author>
<author>
<name>Lai, Haoxiang</name>
</author>
<author>
<name>Ibrahima, Gasim</name>
</author>
<author>
<name>Zang, Guiyan</name>
</author>
<id>https://hdl.handle.net/1721.1/164965</id>
<updated>2026-02-27T04:16:09Z</updated>
<published>2025-02-18T00:00:00Z</published>
<summary type="text">Decarbonization Approaches for Ethylene Production: Comparative Techno-Economic and Life-Cycle Analysis
Shin, Woojae; Lin, Bosong; Lai, Haoxiang; Ibrahima, Gasim; Zang, Guiyan
Ethylene, a building block of the chemical industry, significantly contributes to global greenhouse gas (GHG) emissions, prompting interest in decarbonization approaches aligned with recent carbon-neutrality initiatives. This paper presents a comprehensive techno-economic analysis (TEA) and life-cycle analysis (LCA) of GHG emissions, comparing conventional ethane-based ethylene plants with three decarbonization approaches. The study was conducted in the context of the U.S. average, with sensitivity analysis to identify key drivers affecting well-to-gate (WTG) GHG emissions and the levelized cost of ethylene (LCOE). The conventional plant exhibited GHG emissions of 869 kgCO2e per tonne-ethylene and an LCOE of $746 per tonne-ethylene. Substituting external natural gas fuel with grid or renewable electricity decreased emissions to 806 and 717 kgCO2e per tonne-ethylene, respectively. The emissions of the grid-powered and renewable-powered electrically heated crackers that export co-produced hydrogen to substitute conventional gray hydrogen were 1031 and −163 kgCO2e per tonne-ethylene, respectively. Applying CCS to the purge gas yielded 703 and 514 kgCO2e per tonne-ethylene, respectively. The electric cracker showed lower emissions than the conventional plant when upstream electricity emissions were below 380 kgCO2e per MWh, and at 60 kgCO2e per MWh it achieved carbon neutrality. Regarding LCOE with grid electricity, the no-external-natural-gas, electric-cracker, and purge-gas-CCS cases showed $743, $833, and $771 per tonne-ethylene, respectively. With renewable electricity, their LCOEs were $737, $746, and $757 per tonne-ethylene. Below an electricity price of $41.1 per MWh, the electric cracker had the lowest LCOE among all cases. With hydrogen prices of $0.5–3.0 per kg-H2, the electric cracker's LCOE ranged from a $45 cost to a $128 saving per tonne-ethylene relative to the conventional plant.
</summary>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Archerfish: A Retrofitted 3D Printer for High-throughput Combinatorial Experimentation via Continuous Printing</title>
<link href="https://hdl.handle.net/1721.1/164964" rel="alternate"/>
<author>
<name>Siemenn, Alexander E.</name>
</author>
<author>
<name>Das, Basita</name>
</author>
<author>
<name>Aissi, Eunice</name>
</author>
<author>
<name>Sheng, Fang</name>
</author>
<author>
<name>Elliott, Lleyton</name>
</author>
<author>
<name>Hudspeth, Blake</name>
</author>
<author>
<name>Meyers, Marilyn</name>
</author>
<author>
<name>Serdy, James</name>
</author>
<author>
<name>Buonassisi, Tonio</name>
</author>
<id>https://hdl.handle.net/1721.1/164964</id>
<updated>2026-02-27T04:16:03Z</updated>
<published>2025-01-31T00:00:00Z</published>
<summary type="text">Archerfish: A Retrofitted 3D Printer for High-throughput Combinatorial Experimentation via Continuous Printing
Siemenn, Alexander E.; Das, Basita; Aissi, Eunice; Sheng, Fang; Elliott, Lleyton; Hudspeth, Blake; Meyers, Marilyn; Serdy, James; Buonassisi, Tonio
The maturation of 3D printing technology has enabled low-cost, rapid prototyping capabilities for mainstreaming accelerated product design. The materials research community has recognized this need, but no universally accepted rapid prototyping technique currently exists for material design. Toward this end, we develop Archerfish, a 3D printer retrofitted to dispense liquid with in situ mixing capabilities for performing high-throughput combinatorial printing (HTCP) of material compositions. Using this HTCP design, we demonstrate continuous printing throughputs of up to 250 unique compositions per minute, 100× faster than similar tools such as Opentrons that utilize stepwise printing with ex situ mixing. We validate the formation of these combinatorial “prototype” material gradients using hyperspectral image analysis and energy-dispersive X-ray spectroscopy. Furthermore, we describe hardware challenges to realizing reproducible, accurate, and precise composition gradients with continuous printing, including those related to precursor dispensing, mixing, and deposition. Despite these limitations, the continuous printing and low-cost design of Archerfish demonstrate promising accelerated materials screening results across a range of materials systems from nanoparticles to perovskites.
</summary>
<dc:date>2025-01-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Borinine-FLP Ring Expansion: Isolation of Eight-Membered B-P Rings Bridged by µ2 Chalcogenide and Chloronium Ions</title>
<link href="https://hdl.handle.net/1721.1/164963" rel="alternate"/>
<author>
<name>Frey, Nathan C.</name>
</author>
<author>
<name>Sarkar, Samir Kumar</name>
</author>
<author>
<name>Dickie, Diane A.</name>
</author>
<author>
<name>Molino, Andrew</name>
</author>
<author>
<name>Gilliard, Robert J. Jr</name>
</author>
<id>https://hdl.handle.net/1721.1/164963</id>
<updated>2026-02-27T04:15:52Z</updated>
<published>2025-05-10T00:00:00Z</published>
<summary type="text">Borinine-FLP Ring Expansion: Isolation of Eight-Membered B-P Rings Bridged by µ2 Chalcogenide and Chloronium Ions
Frey, Nathan C.; Sarkar, Samir Kumar; Dickie, Diane A.; Molino, Andrew; Gilliard, Robert J. Jr
Boron–phosphorus (B–P) frustrated Lewis pairs (FLPs) are an important class of compounds for activating various small molecules. Utilizing the ring-expansion reactivity of 9-chloro-9-borafluorene, we synthesized a borinine-based FLP. Various five-membered main-group element heterocycles were obtained via the reaction of the FLP with Me3NO, S8, and Se. Subsequent reduction of these species yielded the ring-expanded compounds, each featuring bridging B–E–B (E = O, S, Se) bonds. Similarly, halide abstraction from the FLP with AgNTf2 led to the formation of a cationic ring-expanded compound with a bridging B–Cl–B motif. This motif constitutes one of the first examples of a boron-stabilized chloronium ion, as verified using in-depth bonding analysis methods. Mechanistic pathways for the reduction- and halide abstraction-mediated ring expansion reactions are proposed with the aid of density functional theory. Electronic structure computations were performed to determine the best representation of bonding interactions in each compound, suggesting phosphorus(V)–chalcogen double bonding and chalcogen–boron(III) dative interactions within the heterocycles.
</summary>
<dc:date>2025-05-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-resolution structure of Zn3(HOTP)2 (HOTP = hexaoxidotriphenylene), a three-dimensional conductive MOF</title>
<link href="https://hdl.handle.net/1721.1/164962" rel="alternate"/>
<author>
<name>Zhang, Kimberly J.</name>
</author>
<author>
<name>Chen, Tianyang</name>
</author>
<author>
<name>Oppenheim, Julius J.</name>
</author>
<author>
<name>Yang, Luming</name>
</author>
<author>
<name>Palatinus, Lukáš</name>
</author>
<author>
<name>Müller, Peter</name>
</author>
<author>
<name>Van Voorhis, Troy</name>
</author>
<author>
<name>Dincă, Mircea</name>
</author>
<id>https://hdl.handle.net/1721.1/164962</id>
<updated>2026-02-27T04:16:01Z</updated>
<published>2025-06-02T00:00:00Z</published>
<summary type="text">High-resolution structure of Zn3(HOTP)2 (HOTP = hexaoxidotriphenylene), a three-dimensional conductive MOF
Zhang, Kimberly J.; Chen, Tianyang; Oppenheim, Julius J.; Yang, Luming; Palatinus, Lukáš; Müller, Peter; Van Voorhis, Troy; Dincă, Mircea
Although two-dimensional (2D) electrically conducting metal–organic frameworks (cMOFs) have become prominent due to their numerous potential applications, their structures are often implied or assumed from rather crude powder X-ray diffraction data. Indeed, exceedingly few examples exist of atomic-level structural details coming from single crystal diffraction experiments. Most widely studied among cMOFs are materials based on triphenylene ligands, in particular M3(HOTP)2 (M = Cu, Zn) and [M3(HOTP)2][M3(HOTP)]2 (M = Mg, Ni, Co; H6HOTP = 2,3,6,7,10,11-hexahydroxytriphenylene), which are invariably described as 2D van der Waals materials with sheets of ligands connected by square planar or octahedral metal ions. Here, we employ electron diffraction to show that, unlike the Mg, Co, Ni, and Cu analogs, Zn3(HOTP)2 crystallizes into a three-dimensional network that is analogous to the structures of the lanthanide-based HOTP MOFs. Moreover, similar to the lanthanide frameworks, Zn3(HOTP)2 exhibits incommensurate modulation, likely originating from a frustration between the preferred π–π stacking distance and the Zn–O bond lengths, or from a Peierls distortion. This work reinforces the importance of employing single crystal diffraction measurements for the characterization of conductive MOFs, especially when trying to correlate electronic properties to structural details.
</summary>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intratumoral nanobody–IL-2 fusions that bind the tumor extracellular matrix suppress solid tumor growth in mice</title>
<link href="https://hdl.handle.net/1721.1/164961" rel="alternate"/>
<author>
<name>Lutz, Emi A</name>
</author>
<author>
<name>Jailkhani, Noor</name>
</author>
<author>
<name>Momin, Noor</name>
</author>
<author>
<name>Huang, Ying</name>
</author>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Kang, Byong H</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Hynes, Richard O</name>
</author>
<id>https://hdl.handle.net/1721.1/164961</id>
<updated>2026-02-26T03:07:36Z</updated>
<published>2022-11-01T00:00:00Z</published>
<summary type="text">Intratumoral nanobody–IL-2 fusions that bind the tumor extracellular matrix suppress solid tumor growth in mice
Lutz, Emi A; Jailkhani, Noor; Momin, Noor; Huang, Ying; Sheen, Allison; Kang, Byong H; Wittrup, K Dane; Hynes, Richard O
Confining cytokine exposure to tumors would greatly enhance cancer immunotherapy safety and efficacy. Immunocytokines, cytokines fused to tumor-targeting antibodies, have been developed with this intention, but without significant clinical success to date. A critical limitation is uptake by receptor-expressing cells in the blood, which decreases the dose reaching the tumor and engenders toxicity. Small-format immunocytokines, constructed with antibody fragments, are hypothesized to improve tumor specificity due to rapid systemic clearance. However, effective design criteria for small-format immunocytokines need further examination. Here, we engineer small interleukin-2 (IL-2) immunocytokines fused to nanobodies with nanomolar to picomolar affinities for the tumor-specific EIIIB domain of fibronectin (also known as EDB). Upon intravenous delivery into immunocompetent mice, such immunocytokines led to similar tumor growth delay as size-matched untargeted IL-2. Intratumoral (i.t.) delivery imparted improved survival that depended on affinity for EIIIB. I.t. administration offers a promising avenue for delivering small-format immunocytokines, given effective affinity for the tumor microenvironment.
</summary>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ablative radiotherapy improves survival but does not cure autochthonous mouse models of prostate and colorectal cancer</title>
<link href="https://hdl.handle.net/1721.1/164960" rel="alternate"/>
<author>
<name>Schmidt, Daniel R</name>
</author>
<author>
<name>Gramatikov, Iva Monique T</name>
</author>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Williams, Christopher L</name>
</author>
<author>
<name>Hurwitz, Martina</name>
</author>
<author>
<name>Dodge, Laura E</name>
</author>
<author>
<name>Holupka, Edward</name>
</author>
<author>
<name>Kiger, WS</name>
</author>
<author>
<name>Cornwall-Brady, Milton R</name>
</author>
<author>
<name>Huang, Wei</name>
</author>
<author>
<name>Mak, Howard H</name>
</author>
<author>
<name>Cormier, Kathleen S</name>
</author>
<author>
<name>Condon, Charlene</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Yilmaz, Ömer H</name>
</author>
<author>
<name>Stevenson, Mary Ann</name>
</author>
<author>
<name>Down, Julian D</name>
</author>
<author>
<name>Floyd, Scott R</name>
</author>
<author>
<name>Roper, Jatin</name>
</author>
<author>
<name>Vander Heiden, Matthew G</name>
</author>
<id>https://hdl.handle.net/1721.1/164960</id>
<updated>2026-02-26T03:07:27Z</updated>
<published>2023-08-09T00:00:00Z</published>
<summary type="text">Ablative radiotherapy improves survival but does not cure autochthonous mouse models of prostate and colorectal cancer
Schmidt, Daniel R; Gramatikov, Iva Monique T; Sheen, Allison; Williams, Christopher L; Hurwitz, Martina; Dodge, Laura E; Holupka, Edward; Kiger, WS; Cornwall-Brady, Milton R; Huang, Wei; Mak, Howard H; Cormier, Kathleen S; Condon, Charlene; Wittrup, K Dane; Yilmaz, Ömer H; Stevenson, Mary Ann; Down, Julian D; Floyd, Scott R; Roper, Jatin; Vander Heiden, Matthew G
Background&#13;
Genetically engineered mouse models (GEMMs) of cancer are powerful tools to study mechanisms of disease progression and therapy response, yet little is known about how these models respond to multimodality therapy used in patients. Radiation therapy (RT) is frequently used to treat localized cancers with curative intent, delay progression of oligometastases, and palliate symptoms of metastatic disease.&#13;
&#13;
Methods&#13;
Here we report the development, testing, and validation of a platform to immobilize and target tumors in mice with stereotactic ablative RT (SART). Xenograft and autochthonous tumor models were treated with hypofractionated ablative doses of radiotherapy.&#13;
&#13;
Results&#13;
We demonstrate that hypofractionated regimens used in clinical practice can be effectively delivered in mouse models. SART alters tumor stroma and the immune environment, improves survival in GEMMs of primary prostate and colorectal cancer, and synergizes with androgen deprivation in prostate cancer. Complete pathologic responses were achieved in xenograft models, but not in GEMMs.&#13;
&#13;
Conclusions&#13;
While SART is capable of fully ablating xenografts, it is unable to completely eradicate disease in GEMMs, arguing that resistance to potentially curative therapy can be modeled in GEMMs.&#13;
&#13;
Plain language summary&#13;
Mice can be used to model the types of cancer seen in people to investigate the effects of cancer therapies, such as radiation. Here, we apply radiation therapy treatments that are able to cure cancer in humans to mice that have cancer of the prostate or colorectum. We show that the mice do not experience many side effects and that the tumours reduce in size, but in some cases show progression after treatment. Our study demonstrates that mice can be used to better understand how human cancers respond to radiation treatment, which can lead to the development of improved treatments and treatment schedules.
</summary>
<dc:date>2023-08-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Anti–PD-1 and Extended Half-life IL2 Synergize for Treatment of Murine Glioblastoma Independent of Host MHC Class I Expression</title>
<link href="https://hdl.handle.net/1721.1/164959" rel="alternate"/>
<author>
<name>Tritz, Zachariah P</name>
</author>
<author>
<name>Ayasoufi, Katayoun</name>
</author>
<author>
<name>Wolf, Delaney M</name>
</author>
<author>
<name>Owens, Carley A</name>
</author>
<author>
<name>Malo, Courtney S</name>
</author>
<author>
<name>Himes, Benjamin T</name>
</author>
<author>
<name>Fain, Cori E</name>
</author>
<author>
<name>Goddery, Emma N</name>
</author>
<author>
<name>Yokanovich, Lila T</name>
</author>
<author>
<name>Jin, Fang</name>
</author>
<author>
<name>Hansen, Michael J</name>
</author>
<author>
<name>Parney, Ian F</name>
</author>
<author>
<name>Wang, Chensu</name>
</author>
<author>
<name>Moynihan, Kelly D</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Diaz Marcano, Rosa M</name>
</author>
<author>
<name>Vile, Richard G</name>
</author>
<author>
<name>Johnson, Aaron J</name>
</author>
<id>https://hdl.handle.net/1721.1/164959</id>
<updated>2026-02-26T03:07:45Z</updated>
<published>2023-06-02T00:00:00Z</published>
<summary type="text">Anti–PD-1 and Extended Half-life IL2 Synergize for Treatment of Murine Glioblastoma Independent of Host MHC Class I Expression
Tritz, Zachariah P; Ayasoufi, Katayoun; Wolf, Delaney M; Owens, Carley A; Malo, Courtney S; Himes, Benjamin T; Fain, Cori E; Goddery, Emma N; Yokanovich, Lila T; Jin, Fang; Hansen, Michael J; Parney, Ian F; Wang, Chensu; Moynihan, Kelly D; Irvine, Darrell J; Wittrup, K Dane; Diaz Marcano, Rosa M; Vile, Richard G; Johnson, Aaron J
Glioblastoma (GBM) is the most common malignant brain tumor in adults, responsible for approximately 225,000 deaths per year. Despite preclinical successes, most interventions have failed to extend patient survival by more than a few months. Treatment with anti–programmed cell death protein 1 (anti–PD-1) immune checkpoint blockade (ICB) monotherapy has been beneficial for malignant tumors such as melanoma and lung cancers but has yet to be effectively employed in GBM. This study aimed to determine whether supplementing anti–PD-1 ICB with engineered extended half-life IL2, a potent lymphoproliferative cytokine, could improve outcomes. This combination therapy, subsequently referred to as enhanced checkpoint blockade (ECB), delivered intraperitoneally, reliably cures approximately 50% of C57BL/6 mice bearing orthotopic GL261 gliomas and extends median survival of the treated cohort. In the CT2A model, characterized as resistant to ICB, ECB decreased tumor volume in half of the measured animals, similar to what was observed in GL261-bearing mice, producing a trend toward increased survival. ECB generates robust immunologic responses, including secondary lymphoid organ enlargement and increased activation of both CD4 and CD8 T cells. This immunity is durable, with long-term ECB survivors able to resist GL261 rechallenge. Depletion strategies showed that ECB's efficacy is independent of host MHC class I–restricted antigen presentation but reliant on CD4 T cells. These results demonstrate that ECB is efficacious against the GL261 glioma model through an MHC class I–independent mechanism and support further investigation into IL2-supplemented ICB therapies for tumors of the central nervous system.
</summary>
<dc:date>2023-06-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collagen-Anchored Interleukin-2 and Interleukin-12 Safely Reprogram the Tumor Microenvironment in Canine Soft-Tissue Sarcomas</title>
<link href="https://hdl.handle.net/1721.1/164958" rel="alternate"/>
<author>
<name>Stinson, Jordan A</name>
</author>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Momin, Noor</name>
</author>
<author>
<name>Hampel, Jordan</name>
</author>
<author>
<name>Bernstein, Rebecca</name>
</author>
<author>
<name>Kamerer, Rebecca</name>
</author>
<author>
<name>Fadl-Alla, Bahaa</name>
</author>
<author>
<name>Samuelson, Jonathan</name>
</author>
<author>
<name>Fink, Elizabeth</name>
</author>
<author>
<name>Fan, Timothy M</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<id>https://hdl.handle.net/1721.1/164958</id>
<updated>2026-02-26T03:07:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Collagen-Anchored Interleukin-2 and Interleukin-12 Safely Reprogram the Tumor Microenvironment in Canine Soft-Tissue Sarcomas
Stinson, Jordan A; Sheen, Allison; Momin, Noor; Hampel, Jordan; Bernstein, Rebecca; Kamerer, Rebecca; Fadl-Alla, Bahaa; Samuelson, Jonathan; Fink, Elizabeth; Fan, Timothy M; Wittrup, K Dane
Purpose:&#13;
Cytokine therapies such as IL2 and IL12 suffer from impractically small therapeutic windows driven by their on-target, off-tumor activity, limiting their clinical potential despite potent antitumor effects. We previously engineered cytokines that bind and anchor to tumor collagen following intratumoral injection, and sought to test their safety and biomarker activity in spontaneous canine soft-tissue sarcomas (STS).&#13;
&#13;
Experimental Design:&#13;
Collagen-binding cytokines were canine-ized to minimize immunogenicity and were used in a rapid dose-escalation study in healthy beagles to identify a maximum tolerated dose. Ten client-owned pet dogs with STS were then enrolled in the trial, receiving cytokines at different intervals prior to surgical tumor excision. Tumor tissue was analyzed through IHC and NanoString RNA profiling for dynamic changes within treated tumors. Archived, untreated STS samples were analyzed in parallel as controls.&#13;
&#13;
Results:&#13;
Intratumorally administered collagen-binding IL2 and IL12 were well tolerated by STS-bearing dogs, with only Grade 1/2 adverse events observed (mild fever, thrombocytopenia, neutropenia). IHC revealed enhanced T-cell infiltrates, corroborated by an enhancement in gene expression associated with cytotoxic immune function. We found concordant increases in expression of counter-regulatory genes that we hypothesize would contribute to a transient antitumor effect, and confirmed in mouse models that combination therapy to inhibit this counter-regulation can improve responses to cytokine therapy.&#13;
&#13;
Conclusions:&#13;
These results support the safety and activity of intratumorally delivered, collagen-anchoring cytokines for inflammatory polarization of the canine STS tumor microenvironment. We are further evaluating the efficacy of this approach in additional canine cancers, including oral malignant melanoma.&#13;
&#13;
Translational Relevance&#13;
Successful translation of novel cancer therapies could be accelerated through the inclusion of tumor models that accurately recapitulate natural evolution and malignant transformation processes operative in human tumor development. Spontaneous cancer in pet dogs provides an underutilized opportunity to assess the safety and activity of investigational cancer therapies in tumors that arise following years of immunoediting. Particularly for the evaluation of immunotherapies, canine tumors enable the assessment of clinical potential in the context of an experienced, and often senescent, immune background. Beyond efficacy, such evaluation provides meaningful insight into tumor resistance mechanisms that could influence eventual human clinical success. Herein, we characterize immune activities generated by intratumoral injections of engineered collagen-binding cytokines IL2 and IL12 into naturally occurring canine soft-tissue sarcomas, and demonstrate through comparative assessment in mouse tumors the differential learnings from each model and their combined role in guiding rational design of treatment combinations with greater expected efficacy.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Both intratumoral regulatory T cell depletion and CTLA-4 antagonism are required for maximum efficacy of anti-CTLA-4 antibodies</title>
<link href="https://hdl.handle.net/1721.1/164957" rel="alternate"/>
<author>
<name>Lax, Brianna M</name>
</author>
<author>
<name>Palmeri, Joseph R</name>
</author>
<author>
<name>Lutz, Emi A</name>
</author>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Stinson, Jordan A</name>
</author>
<author>
<name>Duhamel, Lauren</name>
</author>
<author>
<name>Santollani, Luciano</name>
</author>
<author>
<name>Kennedy, Alan</name>
</author>
<author>
<name>Rothschilds, Adrienne M</name>
</author>
<author>
<name>Spranger, Stefani</name>
</author>
<author>
<name>Sansom, David M</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<id>https://hdl.handle.net/1721.1/164957</id>
<updated>2026-02-26T03:07:38Z</updated>
<published>2023-07-24T00:00:00Z</published>
<summary type="text">Both intratumoral regulatory T cell depletion and CTLA-4 antagonism are required for maximum efficacy of anti-CTLA-4 antibodies
Lax, Brianna M; Palmeri, Joseph R; Lutz, Emi A; Sheen, Allison; Stinson, Jordan A; Duhamel, Lauren; Santollani, Luciano; Kennedy, Alan; Rothschilds, Adrienne M; Spranger, Stefani; Sansom, David M; Wittrup, K Dane
Anti-CTLA-4 antibodies have successfully elicited durable tumor regression in the clinic; however, long-term benefit is limited to a subset of patients for select cancer indications. The incomplete understanding of their mechanism of action has hindered efforts at improvement, with conflicting hypotheses proposing either antagonism of the CTLA-4:B7 axis or Fc effector-mediated regulatory T cell (Treg) depletion governing efficacy. Here, we report the engineering of a nonantagonistic CTLA-4 binding domain (b1s1e2) that depletes intratumoral Tregs as an Fc fusion. Comparison of b1s1e2-Fc to 9d9, an antagonistic anti-CTLA-4 antibody, allowed for interrogation of the separate contributions of CTLA-4 antagonism and Treg depletion to efficacy. Despite equivalent levels of intratumoral Treg depletion, 9d9 achieved more long-term cures than b1s1e2-Fc in MC38 tumors, demonstrating that CTLA-4 antagonism provided additional survival benefit. Consistent with prior reports that CTLA-4 antagonism enhances priming, treatment with 9d9, but not b1s1e2-Fc, increased the percentage of activated T cells in the tumor-draining lymph node (tdLN). Treg depletion with either construct was restricted to the tumor due to insufficient surface CTLA-4 expression on Tregs in other compartments. Through intratumoral administration of diphtheria toxin in Foxp3-DTR mice, we show that depletion of both intratumoral and nodal Tregs provided even greater survival benefit than 9d9, consistent with Treg-driven restraint of priming in the tdLN. Our data demonstrate that anti-CTLA-4 therapies require both CTLA-4 antagonism and intratumoral Treg depletion for maximum efficacy—but that potential future therapies also capable of depleting nodal Tregs could show efficacy in the absence of CTLA-4 antagonism.
</summary>
<dc:date>2023-07-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming lung cancer immunotherapy resistance by combining nontoxic variants of IL-12 and IL-2</title>
<link href="https://hdl.handle.net/1721.1/164956" rel="alternate"/>
<author>
<name>Horton, Brendan L</name>
</author>
<author>
<name>D’Souza, Alicia D</name>
</author>
<author>
<name>Zagorulya, Maria</name>
</author>
<author>
<name>McCreery, Chloe V</name>
</author>
<author>
<name>Abhiraman, Gita C</name>
</author>
<author>
<name>Picton, Lora</name>
</author>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Agarwal, Yash</name>
</author>
<author>
<name>Momin, Noor</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>White, Forest M</name>
</author>
<author>
<name>Garcia, K Christopher</name>
</author>
<author>
<name>Spranger, Stefani</name>
</author>
<id>https://hdl.handle.net/1721.1/164956</id>
<updated>2026-02-26T03:07:35Z</updated>
<published>2023-09-05T00:00:00Z</published>
<summary type="text">Overcoming lung cancer immunotherapy resistance by combining nontoxic variants of IL-12 and IL-2
Horton, Brendan L; D’Souza, Alicia D; Zagorulya, Maria; McCreery, Chloe V; Abhiraman, Gita C; Picton, Lora; Sheen, Allison; Agarwal, Yash; Momin, Noor; Wittrup, K Dane; White, Forest M; Garcia, K Christopher; Spranger, Stefani
Engineered cytokine-based approaches for immunotherapy of cancer are poised to enter the clinic, with IL-12 being at the forefront. However, little is known about potential mechanisms of resistance to cytokine therapies. We found that orthotopic murine lung tumors were resistant to systemically delivered IL-12 fused to murine serum albumin (MSA, IL12-MSA) because of low IL-12 receptor (IL-12R) expression on tumor-reactive CD8+ T cells. IL2-MSA increased binding of IL12-MSA by tumor-reactive CD8+ T cells, and combined administration of IL12-MSA and IL2-MSA led to enhanced tumor-reactive CD8+ T cell effector differentiation, decreased numbers of tumor-infiltrating CD4+ regulatory T cells, and increased survival of lung tumor-bearing mice. Predictably, the combination of IL-2 and IL-12 at therapeutic doses led to significant dose-limiting toxicity. Administering IL-12 and IL-2 analogs with preferential binding to cells expressing Il12rb1 and CD25, respectively, led to a significant extension of survival in mice with lung tumors while abrogating dose-limiting toxicity. These findings suggest that IL-12 and IL-2 represent a rational approach to combination cytokine therapy whose dose-limiting toxicity can be overcome with engineered cytokine variants.
</summary>
<dc:date>2023-09-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intratumoral aluminum hydroxide–anchored IL-12 drives potent antitumor activity by remodeling the tumor microenvironment</title>
<link href="https://hdl.handle.net/1721.1/164955" rel="alternate"/>
<author>
<name>Battula, Sailaja</name>
</author>
<author>
<name>Papastoitsis, Gregory</name>
</author>
<author>
<name>Kaufman, Howard L</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Schmidt, Michael M</name>
</author>
<id>https://hdl.handle.net/1721.1/164955</id>
<updated>2026-02-26T03:07:32Z</updated>
<published>2023-12-08T00:00:00Z</published>
<summary type="text">Intratumoral aluminum hydroxide–anchored IL-12 drives potent antitumor activity by remodeling the tumor microenvironment
Battula, Sailaja; Papastoitsis, Gregory; Kaufman, Howard L; Wittrup, K Dane; Schmidt, Michael M
IL-12 is a potent cytokine that can promote innate and adaptive anticancer immunity, but its clinical development has been limited by toxicity when delivered systemically. Intratumoral (i.t.) administration can expand the therapeutic window of IL-12 and other cytokines but is in turn limited by rapid drug clearance from the tumor, which reduces efficacy, necessitates frequent administration, and increases systemic accumulation. To address these limitations, we developed an anchored IL-12 designated ANK-101, composed of an engineered IL-12 variant that forms a stable complex with the FDA-approved vaccine adjuvant aluminum hydroxide (Alhydrogel). Following i.t. administration of murine ANK-101 (mANK-101) in early intervention syngeneic mouse tumors, the complex formed a depot that was locally retained for weeks as measured by IVIS or SPECT/CT imaging, while unanchored protein injected i.t. was cleared within hours. One or 2 i.t. injections of mANK-101 induced single-agent antitumor activity across a diverse range of syngeneic tumors, including models resistant to checkpoint blockade at doses where unanchored IL-12 had no efficacy. Local treatment with mANK-101 further induced regressions of noninjected lesions, especially when combined with systemic checkpoint blockade. Antitumor activity was associated with remodeling of the tumor microenvironment, including prolonged IFN-γ and chemokine expression, recruitment and activation of T and NK cells, M1 myeloid cell skewing, and increased antigen processing and presentation. Subcutaneous administration of ANK-101 in cynomolgus macaques was well tolerated. Together, these data demonstrate that ANK-101 has an enhanced efficacy and safety profile and warrants future clinical development.
</summary>
<dc:date>2023-12-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>CD8+ T cell priming that is required for curative intratumorally anchored anti-4-1BB immunotherapy is constrained by Tregs</title>
<link href="https://hdl.handle.net/1721.1/164954" rel="alternate"/>
<author>
<name>Palmeri, Joseph R</name>
</author>
<author>
<name>Lax, Brianna M</name>
</author>
<author>
<name>Peters, Joshua M</name>
</author>
<author>
<name>Duhamel, Lauren</name>
</author>
<author>
<name>Stinson, Jordan A</name>
</author>
<author>
<name>Santollani, Luciano</name>
</author>
<author>
<name>Lutz, Emi A</name>
</author>
<author>
<name>Pinney, William</name>
</author>
<author>
<name>Bryson, Bryan D</name>
</author>
<author>
<name>Dane Wittrup, K</name>
</author>
<id>https://hdl.handle.net/1721.1/164954</id>
<updated>2026-02-26T03:07:40Z</updated>
<published>2024-03-01T00:00:00Z</published>
<summary type="text">CD8+ T cell priming that is required for curative intratumorally anchored anti-4-1BB immunotherapy is constrained by Tregs
Palmeri, Joseph R; Lax, Brianna M; Peters, Joshua M; Duhamel, Lauren; Stinson, Jordan A; Santollani, Luciano; Lutz, Emi A; Pinney, William; Bryson, Bryan D; Dane Wittrup, K
Although co-stimulation of T cells with agonist antibodies targeting 4-1BB (CD137) improves antitumor immune responses in preclinical studies, clinical success has been limited by on-target, off-tumor activity. Here, we report the development of a tumor-anchored ɑ4-1BB agonist (ɑ4-1BB-LAIR), which consists of an ɑ4-1BB antibody fused to the collagen-binding protein LAIR. While combination treatment with an antitumor antibody (TA99) shows only modest efficacy, simultaneous depletion of CD4+ T cells boosts cure rates to over 90% of mice. Mechanistically, this synergy depends on ɑCD4 eliminating tumor draining lymph node regulatory T cells, resulting in priming and activation of CD8+ T cells which then infiltrate the tumor microenvironment. The cytotoxic program of these newly primed CD8+ T cells is then supported by the combined effect of TA99 and ɑ4-1BB-LAIR. The combination of TA99 and ɑ4-1BB-LAIR with a clinically approved ɑCTLA-4 antibody known for enhancing T cell priming results in equivalent cure rates, which validates the mechanistic principle, while the addition of ɑCTLA-4 also generates robust immunological memory against secondary tumor rechallenge. Thus, our study establishes the proof of principle for a clinically translatable cancer immunotherapy.
</summary>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Isotope Shifts for $^{227}$Th and $^{229}$Th in Th$^{3+}$: 5F$_{5/2} \rightarrow$ 6D$_{5/2}$ Transition</title>
<link href="https://hdl.handle.net/1721.1/164953" rel="alternate"/>
<author>
<name>Lam, P. Y. Ian</name>
</author>
<author>
<name>MohanMurthy, Prajwal</name>
</author>
<id>https://hdl.handle.net/1721.1/164953</id>
<updated>2026-02-26T03:01:06Z</updated>
<published>2026-02-25T00:00:00Z</published>
<summary type="text">Estimating Isotope Shifts for $^{227}$Th and $^{229}$Th in Th$^{3+}$: 5F$_{5/2} \rightarrow$ 6D$_{5/2}$ Transition
Lam, P. Y. Ian; MohanMurthy, Prajwal
\textbf{Motivation:} Accurate spectroscopic information for the isotopes $^{224-232}$Th is critical for experimental programs investigating such thorium isotopes as candidates for next-generation nuclear optical clocks and as platforms for searches of symmetry-violating effects. Direct experimental data on isotope shifts in the $\text{Th}^{3+}:5F_{5/2} \to 6D_{5/2}$ line at $690$ nm are sparse, with measurements available only for $^{229}\text{Th}$ and $^{230}\text{Th}$ relative to the reference isotope $^{232}\text{Th}$.\\&#13;
&#13;
\textbf{Method:} To address this gap, we employed a King-plot analysis comparing the well-characterized isotope shifts of the $\text{Th}^{+}$ transition at $583.9$ nm to the limited data available for the $690$ nm transition of $\text{Th}^{3+}$. Using nuclear structure information on mean-square charge radii and nuclear quadrupole deformations, we extracted the field-shift constant $F_{690}$ and mass-shift constant $M_{690}$ for the $690$ nm transition. We subsequently calculated the missing isotope shifts by incorporating published values of $\delta\langle r^2\rangle$ where available and estimating $\delta\langle r^2\rangle$ for unmeasured isotopes using nuclear quadrupole deformation coefficients $\beta_2$ from the FRDM model.\\&#13;
&#13;
\textbf{Key Results:} The calculated isotope shifts for the $\text{Th}^{3+}:5F_{5/2} \to 6D_{5/2}$ transition relative to the $690$ nm transition of $^{232}\text{Th}$ are:&#13;
&#13;
\begin{align}&#13;
&#13;
\delta\nu^{224,232}_{690} &amp;= -29296(5585) \text{MHz} \nonumber\\&#13;
&#13;
\delta\nu^{225,232}_{690} &amp;= -25840(4930) \text{MHz} \nonumber\\&#13;
&#13;
\delta\nu^{226,232}_{690} &amp;= -22113(4219) \text{MHz} \nonumber\\&#13;
&#13;
\delta\nu^{227,232}_{690} &amp;= -18631(6238) \text{MHz} \nonumber\\&#13;
&#13;
\delta\nu^{228,232}_{690} &amp;= -14970(6181) \text{MHz} \nonumber\\&#13;
&#13;
\delta\nu^{231,232}_{690} &amp;= -3742(715) \text{MHz} \nonumber&#13;
&#13;
\end{align}&#13;
&#13;
Our results for the isotopes of $^{227,228}$Th use experimental $\delta\langle r^2\rangle$ values, whereas the remaining ones use theoretical values of $\delta\langle r^2\rangle$ calculated from quadrupole deformation coefficients. These results provide essential spectroscopic data for future precision measurements with thorium isotopes in various ionized states.
This work is supported by BNL award #460913, a Phi Kappa Phi Fellowship, and generous support from Prof. R. P. Redwine and MIT LNS.
</summary>
<dc:date>2026-02-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Targeting of the CD161 inhibitory receptor enhances T-cell–mediated immunity against hematological malignancies</title>
<link href="https://hdl.handle.net/1721.1/164952" rel="alternate"/>
<author>
<name>Alvarez Calderon, Francesca</name>
</author>
<author>
<name>Kang, Byong H</name>
</author>
<author>
<name>Kyrysyuk, Oleksandr</name>
</author>
<author>
<name>Zheng, Shiwei</name>
</author>
<author>
<name>Wang, Hao</name>
</author>
<author>
<name>Mathewson, Nathan D</name>
</author>
<author>
<name>Luoma, Adrienne M</name>
</author>
<author>
<name>Ning, Xiaohan</name>
</author>
<author>
<name>Pyrdol, Jason</name>
</author>
<author>
<name>Cao, Xuan</name>
</author>
<author>
<name>Suvà, Mario L</name>
</author>
<author>
<name>Yuan, Guo-Cheng</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Wucherpfennig, Kai W</name>
</author>
<id>https://hdl.handle.net/1721.1/164952</id>
<updated>2026-02-26T03:07:30Z</updated>
<published>2024-03-21T00:00:00Z</published>
<summary type="text">Targeting of the CD161 inhibitory receptor enhances T-cell–mediated immunity against hematological malignancies
Alvarez Calderon, Francesca; Kang, Byong H; Kyrysyuk, Oleksandr; Zheng, Shiwei; Wang, Hao; Mathewson, Nathan D; Luoma, Adrienne M; Ning, Xiaohan; Pyrdol, Jason; Cao, Xuan; Suvà, Mario L; Yuan, Guo-Cheng; Wittrup, K Dane; Wucherpfennig, Kai W
The CD161 inhibitory receptor is highly upregulated by tumor-infiltrating T cells in multiple human solid tumor types, and its ligand, CLEC2D, is expressed by both tumor cells and infiltrating myeloid cells. Here, we assessed the role of the CD161 receptor in hematological malignancies. Systematic analysis of CLEC2D expression using the Cancer Cell Line Encyclopedia revealed that CLEC2D messenger RNA was most abundant in hematological malignancies, including B-cell and T-cell lymphomas as well as lymphocytic and myelogenous leukemias. CLEC2D protein was detected by flow cytometry on a panel of cell lines representing a diverse set of hematological malignancies. We, therefore, used yeast display to generate a panel of high-affinity, fully human CD161 monoclonal antibodies (mAbs) that blocked CLEC2D binding. These mAbs were specific for CD161 and had a similar affinity for human and nonhuman primate CD161, a property relevant for clinical translation. A high-affinity CD161 mAb enhanced key aspects of T-cell function, including cytotoxicity, cytokine production, and proliferation, against B-cell lines originating from patients with acute lymphoblastic leukemia, diffuse large B-cell lymphoma, and Burkitt lymphoma. In humanized mouse models, this CD161 mAb enhanced T-cell–mediated immunity, resulting in a significant survival benefit. Single cell RNA-seq data demonstrated that CD161 mAb treatment enhanced expression of cytotoxicity genes by CD4 T cells as well as a tissue-residency program by CD4 and CD8 T cells that is associated with favorable survival outcomes in multiple human cancer types. These fully human mAbs, thus, represent potential immunotherapy agents for hematological malignancies.
</summary>
<dc:date>2024-03-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Upcycling spent medium-Ni cathodes via novel liquified salts sourcing</title>
<link href="https://hdl.handle.net/1721.1/164951" rel="alternate"/>
<author>
<name>Yoon, Moonsu</name>
</author>
<author>
<name>Park, Jin-Sung</name>
</author>
<author>
<name>Chen, Weiyin</name>
</author>
<author>
<name>Huang, Yimeng</name>
</author>
<author>
<name>Dai, Tao</name>
</author>
<author>
<name>Lee, Yumin</name>
</author>
<author>
<name>Shin, Jungmin</name>
</author>
<author>
<name>Lee, Seungmi</name>
</author>
<author>
<name>Kim, Yongil</name>
</author>
<author>
<name>Lee, Dongsoo</name>
</author>
<author>
<name>Shin, Daiha</name>
</author>
<author>
<name>Cho, Jaephil</name>
</author>
<author>
<name>Dong, Yanhao</name>
</author>
<author>
<name>Li, Ju</name>
</author>
<id>https://hdl.handle.net/1721.1/164951</id>
<updated>2026-02-26T03:07:25Z</updated>
<published>2025-04-02T00:00:00Z</published>
<summary type="text">Upcycling spent medium-Ni cathodes via novel liquified salts sourcing
Yoon, Moonsu; Park, Jin-Sung; Chen, Weiyin; Huang, Yimeng; Dai, Tao; Lee, Yumin; Shin, Jungmin; Lee, Seungmi; Kim, Yongil; Lee, Dongsoo; Shin, Daiha; Cho, Jaephil; Dong, Yanhao; Li, Ju
The rapid growth in lithium-ion battery technology underscores the urgent need for sustainable recycling to address the environmental and economic challenges of battery waste. This study introduces a liquified-salts-assisted upcycling approach to transform spent medium-Ni cathodes into high-performance single-crystalline Ni-rich cathodes. Utilizing the LiOH–LiNO3–Ni(NO3)2·6H2O eutectic, this method leverages planetary centrifugal mixing to create a liquid-like environment for accelerated elemental diffusion and microstructural refinement. The in situ liquefaction of these salts ensures seamless precursor integration, achieving compositional uniformity and minimizing impurity formation. Compared to conventional solid-state methods, our method significantly suppresses rock-salt phase formation, and improves electrochemical performance with superior cycling stability and rate capability. The environmental and economic advantages of our approach highlight its potential to reduce greenhouse gas emissions and energy consumption. This scalable, energy-efficient strategy provides a transformative solution for battery waste management, paving the way for the sustainable production of next-generation cathode materials.
</summary>
<dc:date>2025-04-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>CLN-617 Retains IL2 and IL12 in Injected Tumors to Drive Robust and Systemic Immune-Mediated Antitumor Activity</title>
<link href="https://hdl.handle.net/1721.1/164950" rel="alternate"/>
<author>
<name>Mehta, Naveen K</name>
</author>
<author>
<name>Rakhra, Kavya</name>
</author>
<author>
<name>Meetze, Kristan A</name>
</author>
<author>
<name>Li, Bochong</name>
</author>
<author>
<name>Momin, Noor</name>
</author>
<author>
<name>Chang, Jason YH</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Baeuerle, Patrick A</name>
</author>
<author>
<name>Michaelson, Jennifer S</name>
</author>
<id>https://hdl.handle.net/1721.1/164950</id>
<updated>2026-02-26T03:07:34Z</updated>
<published>2024-08-01T00:00:00Z</published>
<summary type="text">CLN-617 Retains IL2 and IL12 in Injected Tumors to Drive Robust and Systemic Immune-Mediated Antitumor Activity
Mehta, Naveen K; Rakhra, Kavya; Meetze, Kristan A; Li, Bochong; Momin, Noor; Chang, Jason YH; Wittrup, K Dane; Baeuerle, Patrick A; Michaelson, Jennifer S
Despite clinical evidence of antitumor activity, the development of cytokine therapies has been hampered by a narrow therapeutic window and limited response rates. Two cytokines of high interest for clinical development are interleukin 2 (IL2) and interleukin 12 (IL12), which potently synergize to promote the activation and proliferation of T cells and NK cells. However, the only approved human IL2 therapy, Proleukin, is rarely used in the clinic due to systemic toxicities, and no IL12 product has been approved to date due to severe dose-limiting toxicities. Here, we describe CLN-617, a first-in-class therapeutic for intratumoral (IT) injection that co-delivers IL2 and IL12 on a single molecule in a safe and effective manner. CLN-617 is a single-chain fusion protein composed of IL2, leukocyte-associated immunoglobulin-like receptor 2 (LAIR2), human serum albumin (HSA), and IL12. LAIR2 and HSA function to retain CLN-617 in the treated tumor by binding collagen and increasing molecular weight, respectively. We found that IT administration of a murine surrogate of CLN-617, mCLN-617, eradicated established treated and untreated tumors in syngeneic models, significantly improved response to anti-PD1 checkpoint therapy, and generated a robust abscopal response dependent on cellular immunity and antigen cross-presentation. CLN-617 is being evaluated in a clinical trial in patients with advanced solid tumors (NCT06035744).
</summary>
<dc:date>2024-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Porous Organic Materials-Based Atomically Dispersed Metal Electrocatalysts</title>
<link href="https://hdl.handle.net/1721.1/164949" rel="alternate"/>
<author>
<name>Zhang, Hao</name>
</author>
<author>
<name>Wang, Suwen</name>
</author>
<author>
<name>Lv, Enmin</name>
</author>
<author>
<name>Qi, Menghui</name>
</author>
<author>
<name>He, Chengchao</name>
</author>
<author>
<name>Dong, Xinglong</name>
</author>
<author>
<name>Qiu, Jieshan</name>
</author>
<author>
<name>Wang, Yong</name>
</author>
<author>
<name>Wen, Zhenhai</name>
</author>
<id>https://hdl.handle.net/1721.1/164949</id>
<updated>2026-02-26T03:07:23Z</updated>
<published>2025-03-25T00:00:00Z</published>
<summary type="text">Porous Organic Materials-Based Atomically Dispersed Metal Electrocatalysts
Zhang, Hao; Wang, Suwen; Lv, Enmin; Qi, Menghui; He, Chengchao; Dong, Xinglong; Qiu, Jieshan; Wang, Yong; Wen, Zhenhai
The transition to renewable energy sources and the need for efficient energy conversion technologies have led to the development of various types of catalysts, among which atomically dispersed metal catalysts (ADMCs) supported by porous organic materials (POMs) have attracted attention for their high catalytic efficiency and stability. This review focuses on the development and application of ADMCs supported by POMs, such as MOFs, COFs, and HOFs, which offer excellent catalytic performance owing to their high atomic utilization, stability, and selectivity. It systematically explores various strategies for synthesizing ADMCs, including the use of organic linkers, metal nodes, and pore spaces within POMs to stabilize metal atoms and prevent aggregation. Key applications highlighted include energy conversion and storage technologies, such as fuel cells, water splitting, CO2 reduction, and nitrogen reduction, where ADMCs demonstrate the potential to replace noble metals. Despite this progress, challenges remain in achieving high metal loading, long-term stability, and cost-effective large-scale production. The review underscores the importance of advanced characterization techniques and computational models to deepen the understanding of ADMCs’ catalytic mechanisms and guide future material design, paving the way for their broader application in sustainable energy technologies.
</summary>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Mining Requirement and Waste for Energy Sustainability</title>
<link href="https://hdl.handle.net/1721.1/164948" rel="alternate"/>
<author>
<name>Ermakova, Dinara</name>
</author>
<author>
<name>Sen, Drishti</name>
</author>
<author>
<name>Wainwright, Haruko</name>
</author>
<author>
<name>Bae, Jin Whan</name>
</author>
<author>
<name>Chene, Lisha</name>
</author>
<author>
<name>Vujic, Jasmina</name>
</author>
<id>https://hdl.handle.net/1721.1/164948</id>
<updated>2026-02-26T03:07:18Z</updated>
<published>2025-04-13T00:00:00Z</published>
<summary type="text">Quantifying Mining Requirement and Waste for Energy Sustainability
Ermakova, Dinara; Sen, Drishti; Wainwright, Haruko; Bae, Jin Whan; Chene, Lisha; Vujic, Jasmina
This study presents a life-cycle assessment of different energy sources (coal, natural gas, solar, wind, nuclear, and hydro), with a particular focus on mining activities and waste per given electricity capacity and generation. It also includes carbon dioxide emissions generated during the transportation of raw materials to build and operate electricity generating systems and their environmental impacts in the US from 2023 to 2050. We identify the raw material and metal requirements for typical U.S.-based systems of each energy type and synthesize datasets on typical ore fraction and material recycling factors, while taking into account the capacity factor of the power plants. We then compute the total mass and volume of material requirements and of waste for the front-end (i.e., mining, material needed for construction), operation (i.e., fuel, maintenance), and back-end (i.e., decommissioning) activities. The key findings are that (1) the energy transition from fossil fuels to low-carbon energy sources would reduce mining waste as well as the shipping carbon footprint; (2) the difference between capacity and actual electricity generation is significant for the life-cycle assessment due to the low capacity factors of solar and wind energy; (3) several key metals with low abundance or high requirements dominate mining waste, which highlights the need for recycling and establishing a circular economy; (4) mining of critical minerals becomes important during the clean energy transition; and (5) nuclear energy generates the least waste and contributes the least to shipping emissions among the low-carbon sources due to its high energy density and capacity factor and the small mass of materials it requires.
Although the waste mass may not necessarily be equal to the environmental impact due to different waste isolation technologies, we aim to highlight the importance of considering mining and decommissioning waste, which are often ignored but important for accounting for the environmental impacts and addressing energy justice issues.
</summary>
<dc:date>2025-04-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Scholarly Knowledge Ecosystem: Challenges and Opportunities for the Field of Information</title>
<link href="https://hdl.handle.net/1721.1/164947" rel="alternate"/>
<author>
<name>Altman, Micah</name>
</author>
<author>
<name>Cohen, Philip N</name>
</author>
<id>https://hdl.handle.net/1721.1/164947</id>
<updated>2026-02-26T03:07:41Z</updated>
<published>2022-01-31T00:00:00Z</published>
<summary type="text">The Scholarly Knowledge Ecosystem: Challenges and Opportunities for the Field of Information
Altman, Micah; Cohen, Philip N
The scholarly knowledge ecosystem presents an outstanding exemplar of the challenges of understanding, improving, and governing information ecosystems at scale. This article draws upon significant reports on aspects of the ecosystem to characterize the most important research challenges and promising potential approaches. The focus of this review article is the fundamental scientific research challenges related to developing a better understanding of the scholarly knowledge ecosystem. Across a range of disciplines, we identify reports that are conceived broadly, published recently, and written collectively. We extract the critical research questions, summarize these using quantitative text analysis, and use this quantitative analysis to inform a qualitative synthesis. Three broad themes emerge from this analysis: the need for multi-sectoral cooperation and coordination, for mixed-methods analysis at multiple levels, and for interdisciplinary collaboration. Further, we draw attention to an emerging consensus that scientific research in this area should be guided by a set of core human values.
</summary>
<dc:date>2022-01-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of L2/Ln Pragmatic Competence: Its Core and Route Map</title>
<link href="https://hdl.handle.net/1721.1/164946" rel="alternate"/>
<author>
<name>Mao, Tiaoyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/164946</id>
<updated>2026-02-26T03:07:44Z</updated>
<published>2021-08-17T00:00:00Z</published>
<summary type="text">Investigation of L2/Ln Pragmatic Competence: Its Core and Route Map
Mao, Tiaoyuan
How to use language properly, and how the capacity for language use is acquired, has been a focus of linguists and philosophers for centuries. Pragmatic competence, which underlies language use, therefore arouses enormous interest among language acquisition practitioners. This study reveals the core properties of various models or theories of pragmatic competence, such as the communicative componential models, the form-function mapping proposal of the functionalist, the tripartite cognitive model, and the current integrated model of pragmatic competence. The common core includes (but is not limited to) the integration of thought and communication, a single uniform pragmatic mechanism, dynamic form-function mapping, and the complementarity between grammatical and pragmatic competence. With these findings as a point of departure, a brief outline for further investigation of pragmatic competence is finally proposed, including pathological and neurobiological examination of pragmatic competence.
</summary>
<dc:date>2021-08-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Structural Biophysical Features for Antigen-Binding Fragment Crystallization via Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/164945" rel="alternate"/>
<author>
<name>Chattaraj, Krishna Gopal</name>
</author>
<author>
<name>Ferreira, Joana</name>
</author>
<author>
<name>Myerson, Allan S.</name>
</author>
<author>
<name>Trout, Bernhardt L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164945</id>
<updated>2026-02-26T03:07:12Z</updated>
<published>2025-02-28T00:00:00Z</published>
<summary type="text">Investigating Structural Biophysical Features for Antigen-Binding Fragment Crystallization via Machine Learning
Chattaraj, Krishna Gopal; Ferreira, Joana; Myerson, Allan S.; Trout, Bernhardt L.
Antibody-based therapeutics continue to be an important pharmaceutical development modality. Crystallization of antibodies is important for structural characterization, but it also has potential for use as a separation method and as a dosage form. Nevertheless, bringing about controlled crystallization of an antibody remains a challenging task due to its large size, high degree of segmental flexibility, and the intricacy of all the interactions that occur (e.g., protein–protein interactions, protein–solvent interactions, etc.). Methods to predict important contact sites could help in developing such crystallization methods; however, limited data and understanding have hitherto prevented the development of robust predictive methods. To remedy that gap, this study employs machine learning combined with in silico modelling of crystal structures, using available experimental structures, to identify the physicochemical features crucial for successful antibody crystallization. The developed method can distinguish crystal-site residues from non-crystal-site residues with good accuracy. A set of 510 descriptors is utilized to characterize each residue, which is treated as a distinct data point. Moreover, new algorithms have been developed to design novel descriptors that improve the model's predictive capabilities. Fragment antigen-binding (Fab) regions are investigated due to the scarcity of full-length monoclonal antibody (mAb) crystal structures. The current findings show that the extreme gradient boosting (XGBoost) algorithm effectively identifies crystal-site residues, as evidenced by an AUPRC value more than 3-fold higher than that of the baseline model. The top-ranked descriptors indicate that crystal-site residues are primarily characterized by solvent-exposed residues with high spatial aggregation propensity (SAP), signifying hydrophobic patches, and by their immediate surface-exposed neighbors.
Moreover, these high SAP residues are often surrounded by other solvent-exposed residues that are either polar, charged, or both. In contrast, residues not involved in crystal interfaces generally lack these essential features, though some might be excluded due to specific crystal lattice arrangements. Additionally, reducing the feature set from 510 to the top 15% in the XGBoost model yields similar performance while significantly simplifying the model.
</summary>
<dc:date>2025-02-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Role of Supramolecular Clustering in Multivalent Assembly</title>
<link href="https://hdl.handle.net/1721.1/164944" rel="alternate"/>
<author>
<name>Sbalbi, Nicholas</name>
</author>
<author>
<name>Petrov, Artem</name>
</author>
<author>
<name>Sass, Jacob</name>
</author>
<author>
<name>Ye, Matthew</name>
</author>
<author>
<name>Alexander-Katz, Alfredo</name>
</author>
<author>
<name>Macfarlane, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164944</id>
<updated>2026-02-26T03:07:14Z</updated>
<published>2025-04-23T00:00:00Z</published>
<summary type="text">Modeling the Role of Supramolecular Clustering in Multivalent Assembly
Sbalbi, Nicholas; Petrov, Artem; Sass, Jacob; Ye, Matthew; Alexander-Katz, Alfredo; Macfarlane, Robert J.
In self-assembled systems, a combination of multiple weak supramolecular interactions is often utilized to enable strong yet reversible binding. When modeling the behavior of these multivalent interfaces, it is commonly assumed that binding pairs are independent, i.e., that the probability of a pair being bound is unaffected by the bound state of neighboring pairs. Inspired by recent experimental work, we report that for a variety of systems this assumption may not hold, leading to the formation of clusters at the binding interface. Through a series of analytical and numerical models of end-functionalized brushes, we reveal the role of cluster size in binding thermodynamics, detail how entropic contributions from polymer chains provide tunable control of cluster size, and provide predictions for cluster size as a function of system architecture. Investigation of these models yields surprising results: within the melting window, the enthalpy of binding of multivalent interfaces is predicted to depend only on cluster size and not on the overall valency of the multivalent system. Moreover, clustering is predicted to be significant even in systems with only weak dipole and dispersion interactions between neighboring groups. Combined, this work brings to light the potential impacts of clustering on multivalent self-assembly, providing theoretical justification for previous experimental observations and paving the way for future work in this area.
</summary>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidating the effect of Fe substitution on structural and redox stability of Na2Mn3O7</title>
<link href="https://hdl.handle.net/1721.1/164943" rel="alternate"/>
<author>
<name>Smith, Hugh B.</name>
</author>
<author>
<name>Lee, Gi-Hyeok</name>
</author>
<author>
<name>Kumar, Bachu Sravan</name>
</author>
<author>
<name>Penn, Aubrey N.</name>
</author>
<author>
<name>Venturi, Victor</name>
</author>
<author>
<name>Gao, Yifan</name>
</author>
<author>
<name>Davis, Ryan C.</name>
</author>
<author>
<name>Stone, Kevin Hunter</name>
</author>
<author>
<name>Hunt, Adrian</name>
</author>
<author>
<name>Waluyo, Iradwikanari</name>
</author>
<author>
<name>Stavitski, Eli</name>
</author>
<author>
<name>Yang, Wanli</name>
</author>
<author>
<name>Abate, Iwnetim I.</name>
</author>
<id>https://hdl.handle.net/1721.1/164943</id>
<updated>2026-02-26T03:07:21Z</updated>
<published>2025-03-11T00:00:00Z</published>
<summary type="text">Elucidating the effect of Fe substitution on structural and redox stability of Na2Mn3O7
Smith, Hugh B.; Lee, Gi-Hyeok; Kumar, Bachu Sravan; Penn, Aubrey N.; Venturi, Victor; Gao, Yifan; Davis, Ryan C.; Stone, Kevin Hunter; Hunt, Adrian; Waluyo, Iradwikanari; Stavitski, Eli; Yang, Wanli; Abate, Iwnetim I.
Sodium-ion batteries have the potential to meet the growing demand for energy storage due to their low costs stemming from the abundance of the natural resources they require, but their cathode energy densities must be improved to be comparable to those of lithium-ion batteries. One strategy is accessing high-voltage capacity through high-valent redox reactions. Such reactions usually cause instability in cathode materials, but Na2Mn3O7 (NMO) has demonstrated excellent performance and reversibility in the high-valent regime due to its unique lattice structure with ordered Mn vacancies. This work expands the universality of the ordered vacancy as a design principle and broadens the set of material candidates with such exceptional electrochemical behavior. Our approach involves synergizing cationic ordered vacancies with tunable metal–ligand hybridization through partial metal substitution. In particular, we successfully substituted Fe3+ for Mn4+ in NMO to make Na2.25Mn2.75Fe0.25O7 and achieved improved high-valent redox behavior. Fe substitution leads to larger specific capacities (171 vs. 159 mA h g−1 first cycle), enhanced cycle stability (97 vs. 60 mA h g−1 after 50 cycles), and superior rate performance. This study lays the foundation for developing new cathode materials with stable high-valent redox by employing cationic ordered vacancies and partial substitution of redox-active transition metals as design principles in tandem.
</summary>
<dc:date>2025-03-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Progress in Computational Methods and Mechanistic Insights on the Growth of Carbon Nanotubes</title>
<link href="https://hdl.handle.net/1721.1/164942" rel="alternate"/>
<author>
<name>Wang, Linzheng</name>
</author>
<author>
<name>Tricard, Nicolas</name>
</author>
<author>
<name>Chen, Zituo</name>
</author>
<author>
<name>Deng, Sili</name>
</author>
<id>https://hdl.handle.net/1721.1/164942</id>
<updated>2026-02-26T03:07:16Z</updated>
<published>2025-03-19T00:00:00Z</published>
<summary type="text">Progress in Computational Methods and Mechanistic Insights on the Growth of Carbon Nanotubes
Wang, Linzheng; Tricard, Nicolas; Chen, Zituo; Deng, Sili
Carbon nanotubes (CNTs), as a promising nanomaterial with broad applications across various fields, continuously attract significant research attention. Despite substantial progress in understanding their growth mechanisms, synthesis methods, and post-processing techniques, two major goals remain challenging: achieving property-targeted growth and efficient mass production. Recent advancements in computational methods, driven by increased computational resources, the development of platforms, and the refinement of theoretical models, have significantly deepened our understanding of the mechanisms underlying CNT growth. This review aims to comprehensively examine the latest computational techniques that shed light on various aspects of CNT synthesis. The first part of this review focuses on progress in computational methods. Beginning with atomistic simulation approaches, we introduce the fundamentals and advancements in density functional theory (DFT), molecular dynamics (MD) simulations, and kinetic Monte Carlo (kMC) simulations. We discuss the applicability and limitations of each method in studying mechanisms of CNT growth. Then, the focus shifts to multiscale modeling approaches, where we demonstrate the coupling of atomic-scale simulations with reactor-scale multiphase flow models. Given that CNT growth inherently spans multiple temporal and spatial scales, the development and application of multiscale modeling techniques are poised to become a central focus of future computational research in this field. Furthermore, this review emphasizes the growing role played by machine learning in CNT growth research. Compared with traditional physics-based simulation methods, data-driven machine learning approaches have rapidly emerged in recent years, revolutionizing research paradigms from molecular simulation to experimental design.
In the second part of this review, we highlight the latest advancements in CNT growth mechanisms and synthesis methods achieved through computational techniques. These include novel findings across fundamental growth stages, i.e., from nucleation to elongation and ultimately termination. We also examine the dynamic behaviors of catalyst nanoparticles and chirality-controlled growth processes, emphasizing how these insights contribute to advancing the field. Finally, in the concluding section, we propose future directions for advancements of computational approaches toward deeper understanding of CNT growth mechanisms and better support of CNT manufacturing.
</summary>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Superthermal Solar Interfacial Evaporation is not due to Reduced Latent Heat of Water</title>
<link href="https://hdl.handle.net/1721.1/164941" rel="alternate"/>
<author>
<name>Zhang, James H.</name>
</author>
<author>
<name>Mittapally, Rohith</name>
</author>
<author>
<name>Lv, Guangxin</name>
</author>
<author>
<name>Chen, Gang</name>
</author>
<id>https://hdl.handle.net/1721.1/164941</id>
<updated>2026-02-26T03:07:09Z</updated>
<published>2025-01-13T00:00:00Z</published>
<summary type="text">Superthermal Solar Interfacial Evaporation is not due to Reduced Latent Heat of Water
Zhang, James H.; Mittapally, Rohith; Lv, Guangxin; Chen, Gang
To explain reported solar interfacial-evaporation rates from porous materials beyond an apparent 100% efficiency using the thermal evaporation mechanism, many publications hypothesize that intermediate water inside porous materials has a reduced latent heat. Key supporting evidence is that water-only surfaces have lower natural evaporation rates than porous evaporators, with the ratio of the two rates taken as the latent heat reduction. Through simulations and experiments, we study natural evaporation of water and show that reported differences in evaporation rates between porous materials and water are likely due to experimental error from recessed evaporating surfaces. A recession of the water surface by a few millimeters relative to the container lip can reduce evaporation rates by over 50% due to a stagnant air layer, suggesting that the comparative experiments are prone to error. Furthermore, in the reduced-latent-heat picture, interfacial cooling must occur at the porous sample–water interface due to the enthalpy difference between bulk water and intermediate water. Our transport modeling shows that reduced latent heat cannot explain superthermal evaporation and that new mechanistic directions need to be pursued.
</summary>
<dc:date>2025-01-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative design and molecular mechanics characterization of silk proteins based on unfolding behavior</title>
<link href="https://hdl.handle.net/1721.1/164940" rel="alternate"/>
<author>
<name>Lu, Wei</name>
</author>
<author>
<name>Buehler, Markus J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164940</id>
<updated>2026-02-26T03:07:20Z</updated>
<published>2025-05-02T00:00:00Z</published>
<summary type="text">Generative design and molecular mechanics characterization of silk proteins based on unfolding behavior
Lu, Wei; Buehler, Markus J.
Spider silk exhibits exceptional mechanical properties, biocompatibility, and biodegradability, making it a promising material for bioengineered applications. However, the complexity and diversity of silk proteins, coupled with limited experimental data, have hindered the rational design of silk-based biomaterials. Furthermore, the mechanobiology of these proteins and their impact on silk fiber properties remain underexplored. In this study, we introduce a series of novel silk protein sequences and characterize their nonlinear unfolding behavior and mechanical properties through molecular dynamics (MD) simulations. Focusing on major ampullate spidroin (MaSp) silk proteins, we curate a dataset that integrates experimentally acquired sequences with synthetic sequences generated by SilkomeGPT, a generative model for silk-inspired proteins. Structural predictions are performed using OmegaFold, from which high-fidelity regions are extracted and analyzed. Their unfolding responses are assessed via implicit all-atom MD simulations, enabling characterization of their mechanical behavior. This computationally efficient framework facilitates the rational design of spider silk proteins by linking atomistic and sequence features to larger-scale properties. The developed dataset systematically captures structural uncertainties, while simulations provide atomic-level insights into how protein mechanics contribute to fiber properties, advancing the mechanobiological understanding of spider silk and supporting diverse applications in biomaterials design.
</summary>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tumor-Localized Interleukin-2 and Interleukin-12 Combine with Radiation Therapy to Safely Potentiate Regression of Advanced Malignant Melanoma in Pet Dogs</title>
<link href="https://hdl.handle.net/1721.1/164939" rel="alternate"/>
<author>
<name>Stinson, Jordan A</name>
</author>
<author>
<name>Barbosa, Matheus Moreno P</name>
</author>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Momin, Noor</name>
</author>
<author>
<name>Fink, Elizabeth</name>
</author>
<author>
<name>Hampel, Jordan</name>
</author>
<author>
<name>Selting, Kim A</name>
</author>
<author>
<name>Kamerer, Rebecca L</name>
</author>
<author>
<name>Bailey, Keith L</name>
</author>
<author>
<name>Wittrup, Karl D</name>
</author>
<author>
<name>Fan, Timothy M</name>
</author>
<id>https://hdl.handle.net/1721.1/164939</id>
<updated>2026-02-25T07:11:58Z</updated>
<published>2024-09-13T00:00:00Z</published>
<summary type="text">Tumor-Localized Interleukin-2 and Interleukin-12 Combine with Radiation Therapy to Safely Potentiate Regression of Advanced Malignant Melanoma in Pet Dogs
Stinson, Jordan A; Barbosa, Matheus Moreno P; Sheen, Allison; Momin, Noor; Fink, Elizabeth; Hampel, Jordan; Selting, Kim A; Kamerer, Rebecca L; Bailey, Keith L; Wittrup, Karl D; Fan, Timothy M
Purpose:&#13;
Cytokines IL2 and IL12 exhibit potent anticancer activity but suffer a narrow therapeutic window due to off-tumor immune cell activation. Engineering cytokines with the ability to bind and associate with tumor collagen after intratumoral injection potentiated response without toxicity in mice and was previously safe in pet dogs with sarcoma. Here, we sought to test the efficacy of this approach in dogs with advanced melanoma.&#13;
&#13;
Patients and Methods:&#13;
This study examined 15 client-owned dogs with histologically or cytologically confirmed malignant melanoma that received a single 9-Gy fraction of radiotherapy, followed by six cycles of combined collagen-anchored IL2 and IL12 therapy every 2 weeks. Cytokine dosing followed a 3 + 3 dose escalation design, with the initial cytokine dose chosen from prior evaluation in canine sarcomas. No exclusion criteria for tumor stage or metastatic burden, age, weight, or neuter status were applied for this trial.&#13;
&#13;
Results:&#13;
Median survival regardless of tumor stage or dose level was 256 days, and 10/13 (76.9%) dogs that completed treatment had CT-measured tumor regression at the treated lesion. In dogs with metastatic disease, 8/13 (61.5%) had partial responses across their combined lesions, evidence of a locoregional response. NanoString profiling of treatment-resistant dogs revealed that B2m loss was predictive of poor response to this therapy.&#13;
&#13;
Conclusions:&#13;
Collectively, these results confirm the ability of locally administered tumor-anchored cytokines to potentiate responses at regional disease sites when combined with radiation. This evidence supports the clinical translation of this approach and highlights the utility of comparative investigation in canine cancers.
</summary>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bivalent target-binding bioPROTACs induce potent degradation of oncogenic SHP2</title>
<link href="https://hdl.handle.net/1721.1/164938" rel="alternate"/>
<author>
<name>Hoffman, Megan</name>
</author>
<author>
<name>Krum, David</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<id>https://hdl.handle.net/1721.1/164938</id>
<updated>2026-02-25T07:11:56Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Bivalent target-binding bioPROTACs induce potent degradation of oncogenic SHP2
Hoffman, Megan; Krum, David; Wittrup, K Dane
Targeted protein degradation is an emergent and rapidly evolving therapeutic strategy. In particular, biologics-based targeted degradation modalities (bioPROTACs) are relatively underexplored compared to small molecules. Here, we investigate how the target affinity, cellular localization, and valency of bioPROTACs impact the efficacy of targeted degradation of the oncogenic phosphatase src-homology 2 containing protein tyrosine phosphatase-2 (SHP2). We identify bivalent recruitment of SHP2 by bioPROTACs as a broadly applicable strategy to improve potency. Moreover, we demonstrate that SHP2-targeted bioPROTACs can effectively counteract gain-of-function SHP2 mutants present in cancer, which are otherwise challenging to target selectively with small-molecule constructs. Overall, this study demonstrates the utility of bioPROTACs for challenging targets and further explicates design principles for therapeutic bioPROTACs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Local delivery of cell surface-targeted immunocytokines programs systemic antitumor immunity</title>
<link href="https://hdl.handle.net/1721.1/164937" rel="alternate"/>
<author>
<name>Santollani, Luciano</name>
</author>
<author>
<name>Maiorino, Laura</name>
</author>
<author>
<name>Zhang, Yiming J</name>
</author>
<author>
<name>Palmeri, Joseph R</name>
</author>
<author>
<name>Stinson, Jordan A</name>
</author>
<author>
<name>Duhamel, Lauren R</name>
</author>
<author>
<name>Qureshi, Kashif</name>
</author>
<author>
<name>Suggs, Jack R</name>
</author>
<author>
<name>Porth, Owen T</name>
</author>
<author>
<name>Pinney, William</name>
</author>
<author>
<name>Msari, Riyam Al</name>
</author>
<author>
<name>Walsh, Agnes A</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<id>https://hdl.handle.net/1721.1/164937</id>
<updated>2026-02-25T07:11:50Z</updated>
<published>2024-08-07T00:00:00Z</published>
<summary type="text">Local delivery of cell surface-targeted immunocytokines programs systemic antitumor immunity
Santollani, Luciano; Maiorino, Laura; Zhang, Yiming J; Palmeri, Joseph R; Stinson, Jordan A; Duhamel, Lauren R; Qureshi, Kashif; Suggs, Jack R; Porth, Owen T; Pinney, William; Msari, Riyam Al; Walsh, Agnes A; Wittrup, K Dane; Irvine, Darrell J
Systemically administered cytokines are potent immunotherapeutics but can cause severe dose-limiting toxicities. To overcome this challenge, cytokines have been engineered for intratumoral retention after local delivery. However, despite inducing regression of treated lesions, tumor-localized cytokines often elicit only modest responses at distal untreated tumors. In the present study, we report a localized cytokine therapy that safely elicits systemic antitumor immunity by targeting the ubiquitous leukocyte receptor CD45. CD45-targeted immunocytokines have lower internalization rates relative to wild-type counterparts, leading to sustained downstream cis and trans signaling between lymphocytes. A single intratumoral dose of αCD45-interleukin (IL)-12 followed by a single dose of αCD45-IL-15 eradicated treated tumors and untreated distal lesions in multiple syngeneic mouse tumor models without toxicity. Mechanistically, CD45-targeted cytokines reprogrammed tumor-specific CD8+ T cells in the tumor-draining lymph nodes to have an antiviral transcriptional signature. CD45 anchoring represents a broad platform for protein retention by host immune cells for use in immunotherapy.
</summary>
<dc:date>2024-08-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tumor Integrin-Targeted Glucose Oxidase Enzyme Promotes ROS-Mediated Cell Death that Combines with Interferon Alpha Therapy for Tumor Control</title>
<link href="https://hdl.handle.net/1721.1/164936" rel="alternate"/>
<author>
<name>Stinson, Jordan A</name>
</author>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Lax, Brianna M</name>
</author>
<author>
<name>Yang, Grace N</name>
</author>
<author>
<name>Duhamel, Lauren</name>
</author>
<author>
<name>Santollani, Luciano</name>
</author>
<author>
<name>Fink, Elizabeth</name>
</author>
<author>
<name>Palmeri, Joseph R</name>
</author>
<author>
<name>Wittrup, Karl Dane</name>
</author>
<id>https://hdl.handle.net/1721.1/164936</id>
<updated>2026-02-25T07:12:00Z</updated>
<published>2025-01-02T00:00:00Z</published>
<summary type="text">Tumor Integrin-Targeted Glucose Oxidase Enzyme Promotes ROS-Mediated Cell Death that Combines with Interferon Alpha Therapy for Tumor Control
Stinson, Jordan A; Sheen, Allison; Lax, Brianna M; Yang, Grace N; Duhamel, Lauren; Santollani, Luciano; Fink, Elizabeth; Palmeri, Joseph R; Wittrup, Karl Dane
Although heightened intratumoral levels of reactive oxygen species (ROS) are typically associated with a suppressive tumor microenvironment, under certain conditions ROS contribute to tumor elimination. Treatment approaches, including some chemotherapy and radiation protocols, increase cancer cell ROS levels that influence their mechanism of cell death and subsequent recognition by the immune system. Furthermore, activated myeloid cells rapidly generate ROS upon encountering pathogens or infected cells to eliminate disease, and recently, this effector function has been noted in cancer contexts as well. Collectively, ROS-induced cancer cell death may help initiate adaptive antitumor immune responses that could synergize with currently approved immunotherapies for improved control of solid tumors. In this work, we explore the use of glucose oxidase, an enzyme that produces hydrogen peroxide, a type of ROS, to therapeutically mimic the endogenous oxidative burst of myeloid cells and promote antigen generation within the tumor microenvironment. We engineer the enzyme to target pan-tumor-expressed integrins, both as a tumor-agnostic therapeutic approach and as a strategy to prolong local enzyme activity following intratumoral administration. We found that the targeted enzyme potently induced cancer cell death and enhanced cross-presentation by dendritic cells in vitro, and it further combined with interferon alpha for long-term tumor control in murine MC38 tumors in vivo. Optimizing the single-dose administration of this enzyme overcomes limitations with immunogenicity noted for other prooxidant enzyme approaches. Overall, our results suggest that ROS-induced cell death can be harnessed for tumor control and highlight the potential of designed enzyme therapies alongside immunotherapy against cancer.
</summary>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Yeast as a tool for exploring disulfide-rich peptides</title>
<link href="https://hdl.handle.net/1721.1/164935" rel="alternate"/>
<author>
<name>Yap, Kuok</name>
</author>
<author>
<name>Porth, Owen T</name>
</author>
<author>
<name>Xie, Jing</name>
</author>
<author>
<name>Wang, Conan K</name>
</author>
<author>
<name>Durek, Thomas</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Craik, David J</name>
</author>
<id>https://hdl.handle.net/1721.1/164935</id>
<updated>2026-02-25T07:12:01Z</updated>
<published>2025-12-18T00:00:00Z</published>
<summary type="text">Yeast as a tool for exploring disulfide-rich peptides
Yap, Kuok; Porth, Owen T; Xie, Jing; Wang, Conan K; Durek, Thomas; Wittrup, K Dane; Craik, David J
Cyclic disulfide-rich peptides have become increasingly popular in drug development because their structures enhance molecular stability and allow for mutagenesis to introduce non-native functions. This review focuses on yeast-based platform technologies and their utility in advancing cyclic disulfide-rich peptides as drug modalities and for large-scale biomanufacturing. These technologies include yeast surface display which facilitates the screening of large libraries to develop peptide binders with strong affinity and selectivity for protein targets, while maintaining the innate high stability of the peptide scaffold via protease-based selection pressure. We also describe a recently developed platform that leverages yeast’s ability to secrete correctly folded disulfide-rich peptides while simultaneously displaying peptide or protein tags on their surfaces. In combination with microfluidics technology, the platform creates single-cell yeast-in-droplets reactors, enabling the screening of large libraries based on functional output rather than solely on binding affinity. After identifying cyclic peptide candidates through library-based discovery, these candidates can be produced using a versatile yeast-based bioproduction platform. Traditionally, cyclic disulfide-rich peptides are produced through solid-phase synthesis, a method that generates significant amounts of toxic waste. In contrast, yeast-based bioproduction offers an environmentally sustainable alternative. It has the capability to produce structurally distinct peptides with minimal adjustments and is easily scalable using microbial fermenters, making it an ideal choice for large-scale production.
</summary>
<dc:date>2025-12-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aligning supply chain design for boosting resilience</title>
<link href="https://hdl.handle.net/1721.1/164934" rel="alternate"/>
<author>
<name>Sáenz, María Jesús</name>
</author>
<author>
<name>Revilla, Elena</name>
</author>
<author>
<name>Acero, Beatriz</name>
</author>
<id>https://hdl.handle.net/1721.1/164934</id>
<updated>2026-02-25T07:12:02Z</updated>
<published>2018-05-01T00:00:00Z</published>
<summary type="text">Aligning supply chain design for boosting resilience
Sáenz, María Jesús; Revilla, Elena; Acero, Beatriz
Many researchers have analyzed the effect of disruptive events, such as natural disasters and economic and market forces, on global supply chains. However, there is a lack of consensus on delineating a universal collection of supply chain risk management practices that will help companies operate in a global market with large-scale disruptions. In this article, we present an analysis, in conjunction with a worldwide online survey, based on successful global brands and their supply chains. We propose a framework that deploys the dynamics of building supply chain resilience, first linking the design of the supply chain portfolio (local versus global scope, as well as strategic responsiveness versus cost reduction) with supply chain vulnerabilities (external versus internal). We describe the transition between different supply chain structures as a way of coping with disruptions and thus proactively developing resilience. In this article, we introduce both a supply chain risk management approach and the reactive-by-deployment mode, as illustrated by successful global company examples.
</summary>
<dc:date>2018-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pennies, Penny Pools</title>
<link href="https://hdl.handle.net/1721.1/164933" rel="alternate"/>
<author>
<name>Bucciarelli, Louis</name>
</author>
<id>https://hdl.handle.net/1721.1/164933</id>
<updated>2026-02-25T03:01:14Z</updated>
<published>2026-02-24T00:00:00Z</published>
<summary type="text">Pennies, Penny Pools
Bucciarelli, Louis
On February 25, 2025, our president ordered the US Treasury to cease minting pennies. In this essay, I recount how members of Congress have tried to legislate the same - calling for rounding to the nearest nickel - but have not succeeded, some fearing that prevailing price structures (e.g., $xx.89) would leave customers on the short end of the stick over time, with rounding up (customer loses) being more prevalent than rounding down (customer wins). I analyze the situation, then suggest how a “penny pool” might be used by retailers to ease the transition to a just and fair penny-less society.
</summary>
<dc:date>2026-02-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Directed evolution-based discovery of ligands for in vivo restimulation of chimeric antigen receptor T cells</title>
<link href="https://hdl.handle.net/1721.1/164932" rel="alternate"/>
<author>
<name>Grzywa, Tomasz M</name>
</author>
<author>
<name>Neeser, Alexandra</name>
</author>
<author>
<name>Ramasubramanian, Ranjani</name>
</author>
<author>
<name>Romanov, Anna</name>
</author>
<author>
<name>Tannir, Ryan</name>
</author>
<author>
<name>Mehta, Naveen K</name>
</author>
<author>
<name>Cossette, Benjamin</name>
</author>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Goncalves, Beatriz</name>
</author>
<author>
<name>Sukaj, Ina</name>
</author>
<author>
<name>Bergaggio, Elisa</name>
</author>
<author>
<name>Kadauke, Stephan</name>
</author>
<author>
<name>Myers, Regina M</name>
</author>
<author>
<name>Paruzzo, Luca</name>
</author>
<author>
<name>Ghilardi, Guido</name>
</author>
<author>
<name>Cozzone, Austin</name>
</author>
<author>
<name>Schuster, Stephen J</name>
</author>
<author>
<name>Frey, Noelle</name>
</author>
<author>
<name>Zhang, Libin</name>
</author>
<author>
<name>Yousefpour, Parisa</name>
</author>
<author>
<name>Abraham, Wuhbet</name>
</author>
<author>
<name>Suh, Heikyung</name>
</author>
<author>
<name>Ruella, Marco</name>
</author>
<author>
<name>Grupp, Stephan A</name>
</author>
<author>
<name>Chiarle, Roberto</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Ma, Leyuan</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<id>https://hdl.handle.net/1721.1/164932</id>
<updated>2026-02-24T03:07:59Z</updated>
<published>2025-08-25T00:00:00Z</published>
<summary type="text">Directed evolution-based discovery of ligands for in vivo restimulation of chimeric antigen receptor T cells
Grzywa, Tomasz M; Neeser, Alexandra; Ramasubramanian, Ranjani; Romanov, Anna; Tannir, Ryan; Mehta, Naveen K; Cossette, Benjamin; Morgan, Duncan M; Goncalves, Beatriz; Sukaj, Ina; Bergaggio, Elisa; Kadauke, Stephan; Myers, Regina M; Paruzzo, Luca; Ghilardi, Guido; Cozzone, Austin; Schuster, Stephen J; Frey, Noelle; Zhang, Libin; Yousefpour, Parisa; Abraham, Wuhbet; Suh, Heikyung; Ruella, Marco; Grupp, Stephan A; Chiarle, Roberto; Wittrup, K Dane; Ma, Leyuan; Irvine, Darrell J
Chimeric antigen receptor (CAR) T cell therapy targeting CD19 elicits remarkable clinical efficacy in B cell malignancies, but many patients relapse owing to failed expansion and/or progressive loss of CAR-T cells. We recently reported a strategy to potently restimulate CAR-T cells in vivo, enhancing their functionality by administration of a vaccine-like stimulus comprised of surrogate peptide ligands for a CAR linked to a lymph node-targeting amphiphilic PEG-lipid (amph-vax). Here we demonstrate a general strategy to discover and optimize peptide mimotopes enabling amph-vax generation for any CAR. We use yeast surface display to identify peptide binders to FMC63 (the scFv used in clinical CD19 CARs), which are then subsequently affinity matured by directed evolution. CAR-T vaccines using these optimized mimotopes triggered marked expansion and memory development of CD19 CAR-T cells in both syngeneic and humanized mouse models of B-acute lymphoblastic leukaemia/lymphoma, and enhanced control of disease progression compared with CD19 CAR-T-only-treated mice. This approach enables amph-vax boosting to be applied to any clinically relevant CAR-T cell product.
</summary>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine learning prediction of antibody aggregation and viscosity for high concentration formulation development of protein therapeutics</title>
<link href="https://hdl.handle.net/1721.1/164931" rel="alternate"/>
<author>
<name>Lai, Pin-Kuang</name>
</author>
<author>
<name>Gallegos, Austin</name>
</author>
<author>
<name>Mody, Neil</name>
</author>
<author>
<name>Sathish, Hasige A</name>
</author>
<author>
<name>Trout, Bernhardt L</name>
</author>
<id>https://hdl.handle.net/1721.1/164931</id>
<updated>2026-03-08T03:40:22Z</updated>
<published>2022-01-25T00:00:00Z</published>
<summary type="text">Machine learning prediction of antibody aggregation and viscosity for high concentration formulation development of protein therapeutics
Lai, Pin-Kuang; Gallegos, Austin; Mody, Neil; Sathish, Hasige A; Trout, Bernhardt L
Machine learning has been recently used to predict therapeutic antibody aggregation rates and viscosity at high concentrations (150 mg/ml). These works focused on commercially available antibodies, which may have been optimized for stability. In this study, we measured accelerated aggregation rates at 45°C and viscosity at 150 mg/ml for 20 preclinical and clinical-stage antibodies. Features obtained from molecular dynamics simulations of the full-length antibody and sequences were used for machine learning model construction. We found a k-nearest neighbors regression model with two features, spatial positive charge map on the CDRH2 and solvent-accessible surface area of hydrophobic residues on the variable fragment, gives the best performance for predicting antibody aggregation rates (r = 0.89). For the viscosity classification model, the model with the highest accuracy is a logistic regression model with two features, spatial negative charge map on the heavy chain variable region and spatial negative charge map on the light chain variable region. The accuracy and the area under precision recall curve of the classification model from validation tests are 0.86 and 0.70, respectively. In addition, we combined data from another 27 commercial mAbs to develop a viscosity predictive model. The best model is a logistic regression model with two features, number of hydrophobic residues on the light chain variable region and net charges on the light chain variable region. The accuracy and the area under precision recall curve of the classification model are 0.85 and 0.6, respectively. The aggregation rates and viscosity models can be used to predict antibody stability to facilitate pharmaceutical development.
</summary>
<dc:date>2022-01-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhanced O-glycosylation site prediction using explainable machine learning technique with spatial local environment</title>
<link href="https://hdl.handle.net/1721.1/164930" rel="alternate"/>
<author>
<name>Hong, Seokyoung</name>
</author>
<author>
<name>Chattaraj, Krishna Gopal</name>
</author>
<author>
<name>Guo, Jing</name>
</author>
<author>
<name>Trout, Bernhardt L</name>
</author>
<author>
<name>Braatz, Richard D</name>
</author>
<id>https://hdl.handle.net/1721.1/164930</id>
<updated>2026-03-08T03:40:22Z</updated>
<published>2025-02-04T00:00:00Z</published>
<summary type="text">Enhanced O-glycosylation site prediction using explainable machine learning technique with spatial local environment
Hong, Seokyoung; Chattaraj, Krishna Gopal; Guo, Jing; Trout, Bernhardt L; Braatz, Richard D
Motivation: The accurate prediction of O-GlcNAcylation sites is crucial for understanding disease mechanisms and developing effective treatments. Previous machine learning (ML) models primarily relied on primary or secondary protein structural and related properties, which have limitations in capturing the spatial interactions of neighboring amino acids. This study introduces local environmental features as a novel approach that incorporates three-dimensional spatial information, significantly improving model performance by considering the spatial context around the target site. Additionally, we utilize sparse recurrent neural networks to effectively capture the sequential nature of the proteins and to identify key factors influencing O-GlcNAcylation as an explainable ML model.
Results: Our findings demonstrate the effectiveness of our proposed features, with the model achieving an F1 score of 28.3%, as well as feature selection capability, with the model using only the top 20% of features achieving the highest F1 score of 32.02%, a 1.4-fold improvement over existing PTM models. Statistical analysis of the top 20 features confirmed their consistency with the literature. This method not only boosts prediction accuracy but also paves the way for further research in understanding and targeting O-GlcNAcylation.
Availability and implementation: The entire code, data, and features used in this study are available in the GitHub repository: https://github.com/pseokyoung/o-glcnac-
</summary>
<dc:date>2025-02-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging microtopography to pattern multi-oriented muscle actuators</title>
<link href="https://hdl.handle.net/1721.1/164929" rel="alternate"/>
<author>
<name>Rossy, Tamara</name>
</author>
<author>
<name>Schwendeman, Laura</name>
</author>
<author>
<name>Kohli, Sonika</name>
</author>
<author>
<name>Bawa, Maheera</name>
</author>
<author>
<name>Umashankar, Pavankumar</name>
</author>
<author>
<name>Habba, Roi</name>
</author>
<author>
<name>Tchaicheeyan, Oren</name>
</author>
<author>
<name>Lesman, Ayelet</name>
</author>
<author>
<name>Raman, Ritu</name>
</author>
<id>https://hdl.handle.net/1721.1/164929</id>
<updated>2026-02-20T03:08:08Z</updated>
<published>2025-03-14T00:00:00Z</published>
<summary type="text">Leveraging microtopography to pattern multi-oriented muscle actuators
Rossy, Tamara; Schwendeman, Laura; Kohli, Sonika; Bawa, Maheera; Umashankar, Pavankumar; Habba, Roi; Tchaicheeyan, Oren; Lesman, Ayelet; Raman, Ritu
Engineering skeletal muscle tissue with precisely defined alignment is of significant importance for applications ranging from drug screening to biohybrid robotics. Aligning 2D contractile muscle monolayers, which are compatible with high-content imaging and can be deployed in planar soft robots, typically requires micropatterned cues. However, current protocols for integrating microscale topographical features in extracellular matrix hydrogels require expensive microfabrication equipment and multi-step procedures involving error-prone manual handling steps. To address this challenge, we present STAMP (simple templating of actuators via micro-topographical patterning), an easily accessible and cost-effective one-step method to pattern microtopography of various sizes and configurations on the surface of hydrogels using reusable 3D printed stamps. We demonstrate that STAMP enables precisely controlling the alignment of mouse and human skeletal muscle fibers without negatively impacting their maturation or function. To showcase the versatility of our technique, we designed a planar soft robot inspired by the iris, which leverages spatially segregated regions of concentric and radial muscle fibers to control pupil dilation. Optogenetic skeletal muscle fibers grown on a STAMPed iris substrate formed a multi-oriented actuator, and selective light stimulation of the radial and concentric fibers was used to control the function of the iris, including pupil constriction. Computational modeling of the biohybrid robot as an active bilayer matched experimental outcomes, showcasing the robustness of our STAMP method for designing, fabricating, and testing planar biohybrid robots capable of complex multi-DOF motion.
</summary>
<dc:date>2025-03-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Spray Retention Using Cloaked Droplets to Reduce Pesticide Pollution</title>
<link href="https://hdl.handle.net/1721.1/164928" rel="alternate"/>
<author>
<name>Jayaprakash, Vishnu</name>
</author>
<author>
<name>Rufer, Simon</name>
</author>
<author>
<name>Panata, Sreedath</name>
</author>
<author>
<name>Varanasi, Kripa K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164928</id>
<updated>2026-02-20T03:08:05Z</updated>
<published>2025-03-25T00:00:00Z</published>
<summary type="text">Enhancing Spray Retention Using Cloaked Droplets to Reduce Pesticide Pollution
Jayaprakash, Vishnu; Rufer, Simon; Panata, Sreedath; Varanasi, Kripa K.
Enhancing agrochemical spray retention on plant surfaces would have tremendous benefits to global health and the environment. The bouncing of sprayed pesticide droplets from hydrophobic leaves is a major source of water and soil pollution, and the resultant overuse of pesticides is a human health hazard and a financial burden for farmers. Here we report on the development of sustainable agricultural sprays consisting of cloaked droplets that significantly enhance droplet retention on plant surfaces. By leveraging wetting dynamics, we create cloaked droplets that consist of an ultra-thin food and environmentally safe oil layer (&lt;1% by volume) that encapsulates water droplets. We develop a fundamental understanding of the dynamics of cloaked droplet impact and retention on superhydrophobic surfaces. Using high-speed imaging, we capture how the oil cloak transforms into a wetting ridge that pins the droplets and suppresses their rebound. We span a wide range of impact conditions, oils, oil viscosities, and oil volume fractions to demonstrate the robustness of the approach. By considering a balance of kinetic energy, the work of adhesion, and viscous dissipation in this four-phase system, we develop a physical model that allows us to establish a regime map for rebound suppression. Finally, these findings are implemented into a prototype sprayer which leads to a ∼5-fold reduction in spray waste on crop leaves. We believe that our spray approach can greatly reduce agrochemical pollution as well as pesticide and surfactant usage.
</summary>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Total Synthesis and 13C NMR Revision of Nagelamide C</title>
<link href="https://hdl.handle.net/1721.1/164927" rel="alternate"/>
<author>
<name>Tong, Guanghu</name>
</author>
<author>
<name>Nguyen, Long V.</name>
</author>
<author>
<name>Jamison, Timothy F.</name>
</author>
<id>https://hdl.handle.net/1721.1/164927</id>
<updated>2026-02-20T03:08:04Z</updated>
<published>2025-06-04T00:00:00Z</published>
<summary type="text">Total Synthesis and 13C NMR Revision of Nagelamide C
Tong, Guanghu; Nguyen, Long V.; Jamison, Timothy F.
Nagelamide C (1), a dimeric pyrrole–imidazole alkaloid, exhibits antimicrobial and antibacterial activities. We demonstrate herein the first total synthesis of nagelamide C. This concise work was enabled by a series of significant transformations featuring: an imidazole benzylic Wittig olefination, a site selective bromination, and a regioselective trans-hydrostannylation/Stille coupling to construct a unique trisubstituted olefin. In addition, we show the original 13C NMR data of nagelamide C to be in error and revise the data.
</summary>
<dc:date>2025-06-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Symmetry-Constrained Generation of Diverse Low-Bandgap Molecules with Monte Carlo Tree Search</title>
<link href="https://hdl.handle.net/1721.1/164926" rel="alternate"/>
<author>
<name>Subramanian, Akshay</name>
</author>
<author>
<name>Damewood, James</name>
</author>
<author>
<name>Nam, Juno</name>
</author>
<author>
<name>Greenman, Kevin P.</name>
</author>
<author>
<name>Singhal, Avni P.</name>
</author>
<author>
<name>Gómez-Bombarelli, Rafael</name>
</author>
<id>https://hdl.handle.net/1721.1/164926</id>
<updated>2026-02-20T03:08:03Z</updated>
<published>2025-05-12T00:00:00Z</published>
<summary type="text">Symmetry-Constrained Generation of Diverse Low-Bandgap Molecules with Monte Carlo Tree Search
Subramanian, Akshay; Damewood, James; Nam, Juno; Greenman, Kevin P.; Singhal, Avni P.; Gómez-Bombarelli, Rafael
Organic optoelectronic materials are a promising avenue for next-generation electronic devices due to their solution processability, mechanical flexibility, and tunable electronic properties. In particular, near-infrared (NIR) sensitive molecules have unique applications in night-vision equipment and biomedical imaging. Molecular engineering has played a crucial role in developing non-fullerene acceptors (NFAs) such as the Y-series molecules, which feature a rigid fused-ring electron donor core flanked by electron-deficient end groups, leading to strong intramolecular charge-transfer and extended absorption into the NIR region. However, systematically designing molecules with targeted optoelectronic properties while ensuring synthetic accessibility remains a challenge. To address this, we leverage structural priors from domain-focused, patent-mined datasets of organic electronic molecules using a symmetry-aware fragment decomposition algorithm and a fragment-constrained Monte Carlo Tree Search (MCTS) generator. Our approach generates candidates that retain symmetry constraints from the patent dataset, while also exhibiting red-shifted absorption, as validated by TD-DFT calculations.
</summary>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular analysis and design using generative artificial intelligence via multi-agent modeling</title>
<link href="https://hdl.handle.net/1721.1/164925" rel="alternate"/>
<author>
<name>Stewart, Isabella</name>
</author>
<author>
<name>Buehler, Markus J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164925</id>
<updated>2026-02-20T03:08:09Z</updated>
<published>2025-01-24T00:00:00Z</published>
<summary type="text">Molecular analysis and design using generative artificial intelligence via multi-agent modeling
Stewart, Isabella; Buehler, Markus J.
We report the use of a multi-agent generative artificial intelligence framework, the X-LoRA-Gemma large language model (LLM), to analyze, design, and test molecular designs. The X-LoRA-Gemma model, inspired by biological principles and featuring 7 billion parameters, dynamically reconfigures its structure through a dual-pass inference strategy to enhance its problem-solving abilities across diverse scientific domains. The model is used to first identify molecular engineering targets through a systematic human–AI and AI–AI self-driving multi-agent approach to elucidate key targets for molecular optimization to improve interactions between molecules. Next, a multi-agent generative design process is used that includes rational steps, reasoning and autonomous knowledge extraction. Target properties of the molecule are identified either using a principal component analysis (PCA) of key molecular properties or sampling from the distribution of known molecular properties. The model is then used to generate a large set of candidate molecules, which are analyzed via their molecular structure, charge distribution, and other features. We validate that, as predicted, increased dipole moment and polarizability are indeed achieved in the designed molecules. We anticipate an increasing integration of these techniques into the molecular engineering workflow, ultimately enabling the development of innovative solutions to address a wide range of societal challenges. We conclude with a critical discussion of challenges and opportunities of the use of multi-agent generative AI for molecular engineering, analysis and design.
</summary>
<dc:date>2025-01-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated fast-flow synthesis of the immune checkpoint receptors PD-1 and PD-L1</title>
<link href="https://hdl.handle.net/1721.1/164924" rel="alternate"/>
<author>
<name>Fittolani, Giulio</name>
</author>
<author>
<name>Callahan, Alex J.</name>
</author>
<author>
<name>Loas, Andrei</name>
</author>
<author>
<name>Pentelute, Bradley L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164924</id>
<updated>2026-02-20T03:08:10Z</updated>
<published>2025-03-17T00:00:00Z</published>
<summary type="text">Automated fast-flow synthesis of the immune checkpoint receptors PD-1 and PD-L1
Fittolani, Giulio; Callahan, Alex J.; Loas, Andrei; Pentelute, Bradley L.
Programmed cell death protein 1 (PD-1) and programmed cell death ligand 1 (PD-L1) are key targets for cancer therapy. Here, we use automated fast-flow peptide synthesis (AFPS) to rapidly produce these challenging β-sheet-rich proteins in their active forms following oxidative refolding protocols. The methods presented here provide rapid access to synthetic, air-stable mutants of PD-1 and PD-L1 in which L-methionine residues are substituted with L-norleucine, potentially enabling investigation of post-translational modifications and mirror-image analogs for drug discovery.
</summary>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geology of Deception Gulch and the Verde Central mine</title>
<link href="https://hdl.handle.net/1721.1/164923" rel="alternate"/>
<author>
<name>Benedict, P. C.
            (Platt Carrico),
            1900-1969.</name>
</author>
<id>https://hdl.handle.net/1721.1/164923</id>
<updated>2026-02-20T03:04:15Z</updated>
<published>1923-01-01T00:00:00Z</published>
<summary type="text">Geology of Deception Gulch and the Verde Central mine
Benedict, P. C.
            (Platt Carrico),
            1900-1969.
Thesis: M.S., Massachusetts Institute of Technology, Department of Geology and Geophysics, 1923
</summary>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural geology of Eastern Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/164922" rel="alternate"/>
<author>
<name>Ilsley, Ralph,
            1896-</name>
</author>
<id>https://hdl.handle.net/1721.1/164922</id>
<updated>2026-02-20T03:02:14Z</updated>
<published>1934-01-01T00:00:00Z</published>
<summary type="text">Structural geology of Eastern Massachusetts
Ilsley, Ralph,
            1896-
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Geology, 1934; Vita.
</summary>
<dc:date>1934-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design of a hydraulic draft gear for railway passenger cars</title>
<link href="https://hdl.handle.net/1721.1/164921" rel="alternate"/>
<author>
<name>Pearson, Harry L.</name>
</author>
<author>
<name>McGrady, Charles T.</name>
</author>
<id>https://hdl.handle.net/1721.1/164921</id>
<updated>2026-02-20T03:05:00Z</updated>
<published>1922-01-01T00:00:00Z</published>
<summary type="text">The design of a hydraulic draft gear for railway passenger cars
Pearson, Harry L.; McGrady, Charles T.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1922; Includes bibliographical references.
</summary>
<dc:date>1922-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of "Chu-Ma" as a textile fiber</title>
<link href="https://hdl.handle.net/1721.1/164920" rel="alternate"/>
<author>
<name>Chou, Cheng Yu,
            1901-</name>
</author>
<author>
<name>Hsueh, Tsu Kang.</name>
</author>
<id>https://hdl.handle.net/1721.1/164920</id>
<updated>2026-02-20T03:04:57Z</updated>
<published>1924-01-01T00:00:00Z</published>
<summary type="text">A study of "Chu-Ma" as a textile fiber
Chou, Cheng Yu,
            1901-; Hsueh, Tsu Kang.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1924
</summary>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The transportation decision making process in metropolitan Boston</title>
<link href="https://hdl.handle.net/1721.1/164919" rel="alternate"/>
<author>
<name>Zinner, Richard Mark.</name>
</author>
<id>https://hdl.handle.net/1721.1/164919</id>
<updated>2026-02-20T03:04:52Z</updated>
<published>1967-01-01T00:00:00Z</published>
<summary type="text">The transportation decision making process in metropolitan Boston
Zinner, Richard Mark.
Thesis: B.S., Massachusetts Institute of Technology, Department of Political Science, 1967; One unnumbered page inserted.; Bibliography: leaf 74.
</summary>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oscillographic presentation of impedances on the reflection-coefficient plane</title>
<link href="https://hdl.handle.net/1721.1/164918" rel="alternate"/>
<author>
<name>Eckhart, Myron.</name>
</author>
<author>
<name>Fowler, Earl Bealle.</name>
</author>
<id>https://hdl.handle.net/1721.1/164918</id>
<updated>2026-02-20T03:04:47Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">Oscillographic presentation of impedances on the reflection-coefficient plane
Eckhart, Myron.; Fowler, Earl Bealle.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1949
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Radiation transfer in massive binary x-ray systems</title>
<link href="https://hdl.handle.net/1721.1/164917" rel="alternate"/>
<author>
<name>Lewis, Wayne Lloyd.</name>
</author>
<id>https://hdl.handle.net/1721.1/164917</id>
<updated>2026-02-20T03:02:25Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">Radiation transfer in massive binary x-ray systems
Lewis, Wayne Lloyd.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1991; Includes bibliographical references (leaves 167-173).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laser induced photoionization of helium</title>
<link href="https://hdl.handle.net/1721.1/164916" rel="alternate"/>
<author>
<name>Lewis, Wayne Lloyd.</name>
</author>
<id>https://hdl.handle.net/1721.1/164916</id>
<updated>2026-02-20T03:04:45Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Laser induced photoionization of helium
Lewis, Wayne Lloyd.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonlinear elastic analysis of reinforced concrete structures by the finite element method</title>
<link href="https://hdl.handle.net/1721.1/164915" rel="alternate"/>
<author>
<name>Tulga, Said Şahin.</name>
</author>
<id>https://hdl.handle.net/1721.1/164915</id>
<updated>2026-02-20T03:04:07Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Nonlinear elastic analysis of reinforced concrete structures by the finite element method
Tulga, Said Şahin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subcontractor bidding strategy</title>
<link href="https://hdl.handle.net/1721.1/164914" rel="alternate"/>
<author>
<name>Gilbane, Thomas Freeman.</name>
</author>
<id>https://hdl.handle.net/1721.1/164914</id>
<updated>2026-02-20T03:04:10Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Subcontractor bidding strategy
Gilbane, Thomas Freeman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1975; Bibliography: leaves 104-105.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Petrography and geology of the Shoshone mining region in northwestern Wyoming</title>
<link href="https://hdl.handle.net/1721.1/164913" rel="alternate"/>
<author>
<name>Benedict, P. C.
            (Platt Carrico),
            1900-1969.</name>
</author>
<id>https://hdl.handle.net/1721.1/164913</id>
<updated>2026-02-20T03:04:35Z</updated>
<published>1922-01-01T00:00:00Z</published>
<summary type="text">Petrography and geology of the Shoshone mining region in northwestern Wyoming
Benedict, P. C.
            (Platt Carrico),
            1900-1969.
Thesis: B.S., Massachusetts Institute of Technology, Department of Geology, 1922
</summary>
<dc:date>1922-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GeoXCP: uncertainty quantification of spatial explanations in explainable AI</title>
<link href="https://hdl.handle.net/1721.1/164912" rel="alternate"/>
<author>
<name>Lou, Xiayin</name>
</author>
<author>
<name>Luo, Peng</name>
</author>
<author>
<name>Li, Ziqi</name>
</author>
<author>
<name>Gao, Song</name>
</author>
<author>
<name>Meng, Liqiu</name>
</author>
<id>https://hdl.handle.net/1721.1/164912</id>
<updated>2026-02-20T03:08:01Z</updated>
<published>2025-10-27T00:00:00Z</published>
<summary type="text">GeoXCP: uncertainty quantification of spatial explanations in explainable AI
Lou, Xiayin; Luo, Peng; Li, Ziqi; Gao, Song; Meng, Liqiu
Understanding and explaining complex geographic phenomena—ranging from climate change to socioeconomic disparities—is a central focus in both geography and the broader scientific community. Various methods have been developed to elucidate relationships between variables, from coefficient estimates in linear regression models to the increasingly dominant use of feature attribution scores in Explainable AI (XAI) techniques. However, explanations generated by XAI methods often carry uncertainty, stemming from the model itself and the data used to train the model. Despite the critical importance of accounting for such uncertainty, this issue remains largely overlooked in the geospatial domain. In this study, we developed an uncertainty quantification framework for XAI explanations based on conformal prediction, termed Geospatial eXplanation Conformal Prediction (GeoXCP). By incorporating spatial dependence into the modeling process, GeoXCP produced spatially adaptive explanations with calibrated uncertainty estimates. We validated the effectiveness of GeoXCP through extensive simulation experiments and real-world datasets. The results demonstrated that GeoXCP provided reliable explanations while effectively quantifying uncertainty across diverse geospatial scenarios. Our approach represented a significant advancement in explainable geospatial machine learning, enabling decision-makers to better assess the trustworthiness of model-driven insights. The proposed framework was implemented in a python package, named GeoXCP.
</summary>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Open-source device for high sensitivity magnetic particle spectroscopy, relaxometry, and hysteresis loop tracing</title>
<link href="https://hdl.handle.net/1721.1/164911" rel="alternate"/>
<author>
<name>Mattingly, E.</name>
</author>
<author>
<name>Barksdale, A. C.</name>
</author>
<author>
<name>Śliwiak, M.</name>
</author>
<author>
<name>Chacon-Caldera, J.</name>
</author>
<author>
<name>Mason, E. E.</name>
</author>
<author>
<name>Wald, L. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164911</id>
<updated>2026-02-19T04:36:04Z</updated>
<published>2024-06-26T00:00:00Z</published>
<summary type="text">Open-source device for high sensitivity magnetic particle spectroscopy, relaxometry, and hysteresis loop tracing
Mattingly, E.; Barksdale, A. C.; Śliwiak, M.; Chacon-Caldera, J.; Mason, E. E.; Wald, L. L.
Magnetic nanoparticles (MNPs) are used extensively across numerous disciplines, with applications including Magnetic Particle Imaging (MPI), targeted hyperthermia, deep brain stimulation, immunoassays, and thermometry. The assessment of MNPs, especially those being designed for MPI, is performed with magnetic particle spectrometers, relaxometers, loop tracers, or similar devices. Despite the many applications and the need for particle assessment, there are few consolidated resources for designing or building such an MNP assessment system. Here, we describe the design and performance of an open-source device capable of spectroscopy, relaxometry, and loop tracing. We show example measurements from the device and quantify the detection sensitivity by measuring a dilution series of Synomag-D 70 nm (from 0.5 mg Fe/ml to 7 ng Fe/ml) with a 10 mT drive field at 23.8 kHz. The device measures 260 pg Fe with SNR = 1 and 1.3 ng at SNR = 5 in spectroscopy mode in under one second of measurement time. The system has a dynamic range of 60 μg to 260 pg Fe without changing the hardware configuration. As an example application, we characterize Synomag-D’s relaxation time constant for drive fields 2–18 mT and compare the magnetization responses of two commonly used MNPs.
</summary>
<dc:date>2024-06-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precise Fermi level engineering in a topological Weyl semimetal via fast ion implantation</title>
<link href="https://hdl.handle.net/1721.1/164910" rel="alternate"/>
<author>
<name>Mandal, Manasi</name>
</author>
<author>
<name>Chotrattanapituk, Abhijatmedhi</name>
</author>
<author>
<name>Woller, Kevin</name>
</author>
<author>
<name>Wu, Lijun</name>
</author>
<author>
<name>Xu, Haowei</name>
</author>
<author>
<name>Hung, Nguyen Tuan</name>
</author>
<author>
<name>Mao, Nannan</name>
</author>
<author>
<name>Okabe, Ryotaro</name>
</author>
<author>
<name>Boonkird, Artittaya</name>
</author>
<author>
<name>Nguyen, Thanh</name>
</author>
<author>
<name>Drucker, Nathan C</name>
</author>
<author>
<name>Chen, Xiaoqian M</name>
</author>
<author>
<name>Momiki, Takashi</name>
</author>
<author>
<name>Li, Ju</name>
</author>
<author>
<name>Kong, Jing</name>
</author>
<author>
<name>Zhu, Yimei</name>
</author>
<author>
<name>Li, Mingda</name>
</author>
<id>https://hdl.handle.net/1721.1/164910</id>
<updated>2026-02-19T04:36:09Z</updated>
<published>2024-06-25T00:00:00Z</published>
<summary type="text">Precise Fermi level engineering in a topological Weyl semimetal via fast ion implantation
Mandal, Manasi; Chotrattanapituk, Abhijatmedhi; Woller, Kevin; Wu, Lijun; Xu, Haowei; Hung, Nguyen Tuan; Mao, Nannan; Okabe, Ryotaro; Boonkird, Artittaya; Nguyen, Thanh; Drucker, Nathan C; Chen, Xiaoqian M; Momiki, Takashi; Li, Ju; Kong, Jing; Zhu, Yimei; Li, Mingda
The precise controllability of the Fermi level is a critical aspect of quantum materials. For topological Weyl semimetals, there is a pressing need to fine-tune the Fermi level to the Weyl nodes and unlock exotic electronic and optoelectronic effects associated with the divergent Berry curvature. However, in contrast to two-dimensional materials, where the Fermi level can be controlled through various techniques, the situation for bulk crystals beyond laborious chemical doping poses significant challenges. Here, we report the milli-electron-volt (meV) level ultra-fine-tuning of the Fermi level of bulk topological Weyl semimetal tantalum phosphide using accelerator-based high-energy hydrogen implantation and theory-driven planning. By calculating the desired carrier density and controlling the accelerator profiles, the Fermi level can be experimentally fine-tuned from 5 meV below, to 3.8 meV below, to 3.2 meV above the Weyl nodes. High-resolution transmission electron microscopy reveals the crystalline structure is largely maintained under irradiation, while electrical transport indicates that Weyl nodes are preserved and carrier mobility is also largely retained. Our work demonstrates the viability of this generic approach to tune the Fermi level in semimetal systems and could serve to achieve property fine-tuning for other bulk quantum materials with ultrahigh precision.
</summary>
<dc:date>2024-06-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>A facility for cryogenic ion irradiation and in situ characterization of rare-earth barium copper oxide superconducting tapes</title>
<link href="https://hdl.handle.net/1721.1/164909" rel="alternate"/>
<author>
<name>Devitre, AR</name>
</author>
<author>
<name>Fischer, DX</name>
</author>
<author>
<name>Woller, KB</name>
</author>
<author>
<name>Clark, BC</name>
</author>
<author>
<name>Short, MP</name>
</author>
<author>
<name>Whyte, DG</name>
</author>
<author>
<name>Hartwig, ZS</name>
</author>
<id>https://hdl.handle.net/1721.1/164909</id>
<updated>2026-02-19T04:36:11Z</updated>
<published>2024-06-26T00:00:00Z</published>
<summary type="text">A facility for cryogenic ion irradiation and in situ characterization of rare-earth barium copper oxide superconducting tapes
Devitre, AR; Fischer, DX; Woller, KB; Clark, BC; Short, MP; Whyte, DG; Hartwig, ZS
Superconducting magnets based on Rare Earth Barium Copper Oxides (REBCO) offer transformative capabilities in the fields of fusion energy, high energy physics, and space exploration. A challenge shared by these applications is the limited lifetime of REBCO due to radiation damage sustained during operation. Here we present a new ion-beam facility that enables simultaneous cryogenic irradiation and in situ characterization of commercial REBCO tapes. The ion source provides spatially uniform fluxes up to 10¹⁸ protons/m²s with kinetic energies up to 3.4 MeV, in addition to helium and higher-Z species. Using this facility, we can induce uniform damage profiles in the first 10–20 µm of REBCO tapes with less than 0.25 appm of hydrogen implanted in REBCO after a dose of 10²⁰ protons/m². The tape can be held between 20 and 300 K with an accuracy of ±0.1 K and is connected to a four-point probe measuring the critical current, Ic, and critical temperature, Tc, before, during, and after irradiation with transport current ranging from 100 nA to 100 A, and a typical voltage noise less than 0.1 μV. These capabilities are presently used to study the effect of irradiation temperature on REBCO performance change during and after proton bombardment, to assess the possibility of Ic and Tc recovery after irradiation through thermal annealing, and to explore the instantaneous and recoverable suppression of Ic and Tc observed during irradiation.
</summary>
<dc:date>2024-06-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>High temperature stability of regrown and alloyed Ohmic contacts to AlGaN/GaN heterostructure up to 500 °C</title>
<link href="https://hdl.handle.net/1721.1/164908" rel="alternate"/>
<author>
<name>Niroula, John</name>
</author>
<author>
<name>Xie, Qingyun</name>
</author>
<author>
<name>Rajput, Nitul S</name>
</author>
<author>
<name>Darmawi-Iskandar, Patrick K</name>
</author>
<author>
<name>Rahman, Sheikh Ifatur</name>
</author>
<author>
<name>Luo, Shisong</name>
</author>
<author>
<name>Palash, Rafid Hassan</name>
</author>
<author>
<name>Sikder, Bejoy</name>
</author>
<author>
<name>Yuan, Mengyang</name>
</author>
<author>
<name>Yadav, Pradyot</name>
</author>
<author>
<name>Micale, Gillian K</name>
</author>
<author>
<name>Chowdhury, Nadim</name>
</author>
<author>
<name>Zhao, Yuji</name>
</author>
<author>
<name>Rajan, Siddharth</name>
</author>
<author>
<name>Palacios, Tomás</name>
</author>
<id>https://hdl.handle.net/1721.1/164908</id>
<updated>2026-02-19T04:36:06Z</updated>
<published>2024-05-15T00:00:00Z</published>
<summary type="text">High temperature stability of regrown and alloyed Ohmic contacts to AlGaN/GaN heterostructure up to 500 °C
Niroula, John; Xie, Qingyun; Rajput, Nitul S; Darmawi-Iskandar, Patrick K; Rahman, Sheikh Ifatur; Luo, Shisong; Palash, Rafid Hassan; Sikder, Bejoy; Yuan, Mengyang; Yadav, Pradyot; Micale, Gillian K; Chowdhury, Nadim; Zhao, Yuji; Rajan, Siddharth; Palacios, Tomás
This Letter reports the stability of regrown and alloyed Ohmic contacts to AlGaN/GaN-on-Si high electron mobility transistors (HEMTs) for high temperature applications up to 500 °C. Transfer length method (TLM) measurements from 25 to 500 °C in air show that the regrown contacts appear to be stable up to 500 °C during short term (approximately 1 h) testing, while alloyed contacts appear to decrease in contact resistance from 300 to 500 °C, though increases in the error bounds due to increased sheet resistance make it difficult to conclude definitively. Additionally, longer term testing shows both technologies remain stable at least up to 48 h at 500 °C, after which the large increase in sheet resistance makes the measurement uncertainty too large to conclude definitively. Advanced microscopy images indicate both the regrown and alloyed contact regions remain structurally intact after prolonged high temperature exposure with no visible degradation in crystallinity or metal composition.
</summary>
<dc:date>2024-05-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical-pump–terahertz-probe spectroscopy in high magnetic fields with kHz single-shot detection</title>
<link href="https://hdl.handle.net/1721.1/164907" rel="alternate"/>
<author>
<name>Dastrup, Blake S</name>
</author>
<author>
<name>Miedaner, Peter R</name>
</author>
<author>
<name>Zhang, Zhuquan</name>
</author>
<author>
<name>Nelson, Keith A</name>
</author>
<id>https://hdl.handle.net/1721.1/164907</id>
<updated>2026-02-19T04:36:14Z</updated>
<published>2024-03-12T00:00:00Z</published>
<summary type="text">Optical-pump–terahertz-probe spectroscopy in high magnetic fields with kHz single-shot detection
Dastrup, Blake S; Miedaner, Peter R; Zhang, Zhuquan; Nelson, Keith A
We demonstrate optical pump–THz probe (OPTP) spectroscopy with a variable external magnetic field (0–9 T), in which the time-dependent THz signal is measured by echelon-based single-shot detection at a repetition rate of 1 kHz. The method reduces data acquisition times by more than an order of magnitude compared to conventional electro-optic sampling using a scanning delay stage. The approach illustrates the wide applicability of the single-shot measurement approach to non-equilibrium systems that are studied through OPTP spectroscopy, especially in cases where parameters such as magnetic field strength (B) or other experimental parameters are varied. We demonstrate the capabilities of our measurement by performing cyclotron resonance experiments in bulk silicon, where we observe B-field-dependent carrier relaxation and distinct relaxation rates for different carrier types. We use a pair of economical linear array detectors to measure 500 time points on each shot, offering an equivalent performance to camera-based detection with possibilities for higher repetition rates.
</summary>
<dc:date>2024-03-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Validation of the OpenMC Code for Fusion Applications: The FNG-Streaming Benchmark Case</title>
<link href="https://hdl.handle.net/1721.1/164906" rel="alternate"/>
<author>
<name>Segantin, Stefano</name>
</author>
<author>
<name>Ebiwonjumi, Bamidele</name>
</author>
<author>
<name>Peterson, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/164906</id>
<updated>2026-02-19T04:36:08Z</updated>
<published>2025-07-04T00:00:00Z</published>
<summary type="text">Validation of the OpenMC Code for Fusion Applications: The FNG-Streaming Benchmark Case
Segantin, Stefano; Ebiwonjumi, Bamidele; Peterson, Ethan
In this work, we benchmark OpenMC against the FNG-ITER streaming experiment. FNG-ITER streaming, a high-quality experiment carried out at the ENEA laboratories in Frascati, Italy, was initially included in SINBAD (Shielding Integral Benchmark Archive and Database). More recently, the benchmark was included in the Compilation of Nuclear Data Experiments for Radiation Characterization as well. It consists of a neutron shielding experiment with a rather complex geometry that constitutes an appropriate validation study for the use of weight windows within OpenMC. Measurements include flux detection via four different types of activation foils divided into three batches and a set of thermoluminescent detectors for nuclear heating. The OpenMC results are in very good agreement with those of MCNP and the experimental measurements, with the majority of the discrepancies within the combined statistical error and experimental uncertainty (less than 10% computed-to-measured discrepancy).
</summary>
<dc:date>2025-07-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Use of Bayesian decision analysis to maximize value in patient-centered randomized clinical trials in Parkinson’s disease</title>
<link href="https://hdl.handle.net/1721.1/164905" rel="alternate"/>
<author>
<name>Chaudhuri, Shomesh E</name>
</author>
<author>
<name>Ben Chaouch, Zied</name>
</author>
<author>
<name>Hauber, Brett</name>
</author>
<author>
<name>Mange, Brennan</name>
</author>
<author>
<name>Zhou, Mo</name>
</author>
<author>
<name>Christopher, Stephanie</name>
</author>
<author>
<name>Bardot, Dawn</name>
</author>
<author>
<name>Sheehan, Margaret</name>
</author>
<author>
<name>Donnelly, Anne</name>
</author>
<author>
<name>McLaughlin, Lauren</name>
</author>
<author>
<name>Caldwell, Brittany</name>
</author>
<author>
<name>Benz, Heather L</name>
</author>
<author>
<name>Ho, Martin</name>
</author>
<author>
<name>Saha, Anindita</name>
</author>
<author>
<name>Gwinn, Katrina</name>
</author>
<author>
<name>Sheldon, Murray</name>
</author>
<author>
<name>Lo, Andrew W</name>
</author>
<id>https://hdl.handle.net/1721.1/164905</id>
<updated>2026-02-19T04:36:13Z</updated>
<published>2025-09-03T00:00:00Z</published>
<summary type="text">Use of Bayesian decision analysis to maximize value in patient-centered randomized clinical trials in Parkinson’s disease
Chaudhuri, Shomesh E; Ben Chaouch, Zied; Hauber, Brett; Mange, Brennan; Zhou, Mo; Christopher, Stephanie; Bardot, Dawn; Sheehan, Margaret; Donnelly, Anne; McLaughlin, Lauren; Caldwell, Brittany; Benz, Heather L; Ho, Martin; Saha, Anindita; Gwinn, Katrina; Sheldon, Murray; Lo, Andrew W
A fixed one-sided significance level of 5% is commonly used to interpret the statistical significance of randomized clinical trial (RCT) outcomes. While it is necessary to reduce the false positive rate, the threshold used could be chosen quantitatively and transparently to specifically reflect patient preferences regarding benefit–risk tradeoffs as well as other considerations. How can patient preferences be explicitly incorporated into RCTs in Parkinson’s disease (PD), and what is the impact on statistical thresholds for device approval? In this analysis, we apply Bayesian decision analysis (BDA) to PD patient preference scores elicited from survey data. BDA allows us to choose a sample size (&#119899;) and significance level (&#120572;) that maximizes the overall expected value to patients of a balanced two-arm fixed-sample RCT, where the expected value is computed under both null and alternative hypotheses. For PD patients who had previously received deep brain stimulation (DBS) treatment, the BDA-optimal significance levels fell between 4.0% and 10.0%, similar to or greater than the traditional value of 5%. Conversely, for patients who had never received DBS, the optimal significance level ranged from 0.2% to 4.4%. In both of these populations, the optimal significance level increased with the severity of the patients’ cognitive and motor function symptoms. By explicitly incorporating patient preferences into clinical trial designs and the regulatory decision-making process, BDA provides a quantitative and transparent approach to combine clinical and statistical significance. For PD patients who have never received DBS treatment, a 5% significance threshold may not be conservative enough to reflect their risk-aversion level. However, this study shows that patients who previously received DBS treatment present a higher tolerance to accept therapeutic risks in exchange for improved efficacy which is reflected in a higher statistical threshold.
</summary>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Choice denied: impact of income and credit-based tenant screening on the Housing Choice Voucher program</title>
<link href="https://hdl.handle.net/1721.1/164904" rel="alternate"/>
<author>
<name>So, Wonyoung</name>
</author>
<author>
<name>Gade, Anisha</name>
</author>
<author>
<name>Hangen, Forrest</name>
</author>
<id>https://hdl.handle.net/1721.1/164904</id>
<updated>2026-02-19T04:35:56Z</updated>
<published>2025-04-30T00:00:00Z</published>
<summary type="text">Choice denied: impact of income and credit-based tenant screening on the Housing Choice Voucher program
So, Wonyoung; Gade, Anisha; Hangen, Forrest
The Housing Choice Voucher program supports over 2.5 million households by subsidizing rent payments within the private housing market. However, challenges arise due to exclusionary practices, undermining the program’s goal of ‘choice.’ Tenant screening practices have been critical in exacerbating these challenges, yet their impact remains understudied. Drawing on tenant screening criteria documents from property management websites and the Survey of Consumer Finances, this study finds that while voucher holders generally meet rent-to-income thresholds due to the subsidies—which keep their rent burden low relative to their income—they still face barriers related to credit scores, bankruptcy history, and debt. These criteria, which apply to both voucher and non-voucher renters, may exclude approximately one in ten voucher holders, despite the guaranteed portion of rent covered by public assistance. These findings show an urgent need for policy interventions to address the potential exclusionary impacts of tenant screening services.
</summary>
<dc:date>2025-04-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>AgentNexus: Accelerating AI Agent Development and Enhancing Interoperability with MCP</title>
<link href="https://hdl.handle.net/1721.1/164903" rel="alternate"/>
<author>
<name>Yae, Jung</name>
</author>
<author>
<name>Hamilton, Lei</name>
</author>
<id>https://hdl.handle.net/1721.1/164903</id>
<updated>2026-02-18T03:09:03Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">AgentNexus: Accelerating AI Agent Development and Enhancing Interoperability with MCP
Yae, Jung; Hamilton, Lei
The DoD faces significant challenges in its pursuit of AI superiority, as disparate data and development platforms create redundant efforts and limit interoperability. Additionally, existing DoD systems are ill-equipped to handle the recent paradigm shift toward agentic AI, which requires modern standards and tools. To address these gaps, this paper introduces AgentNexus, an application designed to streamline the development, deployment, and servicing of AI agents. AgentNexus features an advanced agent-processing backend, a scalable service layer, and an intuitive user interface. It provides pre-built toolkits, a sophisticated RAG pipeline, and MCP for enhanced interoperability. The successful development of an Education Assistant agent validates the application’s capacity to support the rapid implementation of multi-agent workflows. By fostering a collaborative and standardized environment, AgentNexus mitigates critical barriers of interoperability and duplicated effort, accelerating the delivery of multi-agent AI to warfighters.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intelligent C-17 Load Planning for Flight Optimization</title>
<link href="https://hdl.handle.net/1721.1/164902" rel="alternate"/>
<author>
<name>McAlister, Catherine</name>
</author>
<author>
<name>Jones, Mathew</name>
</author>
<author>
<name>McConville, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/164902</id>
<updated>2026-02-18T03:08:58Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">Intelligent C-17 Load Planning for Flight Optimization
McAlister, Catherine; Jones, Mathew; McConville, Sean
C-17 Globemaster III cargo capacity is significantly underutilized, with many sorties transporting only a few pallets despite the aircraft’s 170,900-pound payload capability. Historical flight data analysis reveals inefficient scheduling practices that increase operational costs and crew workload and negatively affect overall mission capability. This paper details the development of an AI-powered optimization model to improve C-17 cargo utilization and reduce required flight operations. We analyzed historical C-17 transportation data and created both traditional optimization algorithms and predictive AI models to determine optimal flight scheduling for 3-week operational periods. The AI model achieved 97.9% accuracy in predicting optimal flight count requirements and 89.3% accuracy in predicting optimal flight assignment for specific cargo, representing a 23% reduction in total flights and a 15% increase in average cargo utilization. These results demonstrate that data-driven flight scheduling can significantly improve C-17 operational efficiency, reduce costs across the airlift community, and enable additional time for advanced training, contingency support, and critical warfighter operations, ultimately increasing the lethality and readiness of the Department of Defense.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Securing Intelligence: The Strategic Necessity of Air-Gapped AI Systems in the Age of Cloud-Based LLMs</title>
<link href="https://hdl.handle.net/1721.1/164901" rel="alternate"/>
<author>
<name>Viggh, Herbert</name>
</author>
<author>
<name>Tsagaratos, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/164901</id>
<updated>2026-02-18T03:09:05Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">Securing Intelligence: The Strategic Necessity of Air-Gapped AI Systems in the Age of Cloud-Based LLMs
Viggh, Herbert; Tsagaratos, Jennifer
The increasing use of large language models (LLMs) in applications, from military strategy to customer service, raises concerns about data sovereignty, security, and privacy. Cloud-based API models, created by companies such as OpenAI, pose significant risks due to training data exposure and prompt injection attacks, which can compromise sensitive information, as well as hidden biases that could influence reporting or executive decision-making processes. Real-world incidents, such as the leakage of Samsung’s proprietary source code through ChatGPT, highlight the dangers of relying on cloud providers with complete visibility into client queries. Furthermore, data localization laws and regulations, such as the General Data Protection Regulation (GDPR), underscore the risks associated with outsourcing intelligence and decision support systems to foreign entities. Air-gapped AI solutions, which run on isolated networks disconnected from the outside world, offer a secure alternative for sensitive environments such as national defense, research laboratories, and critical infrastructure. By maintaining control over AI processes, organizations can ensure information safety, comply with regulations, and mitigate risks associated with cloud-based AI infrastructure, ultimately safeguarding their data integrity, privacy, and operational independence.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>RAIMOND Requirements AI for Military Operational Needs Development</title>
<link href="https://hdl.handle.net/1721.1/164900" rel="alternate"/>
<author>
<name>Garcia, Fabio</name>
</author>
<author>
<name>Steilberg, Jackson</name>
</author>
<id>https://hdl.handle.net/1721.1/164900</id>
<updated>2026-02-18T03:09:07Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">RAIMOND Requirements AI for Military Operational Needs Development
Garcia, Fabio; Steilberg, Jackson
The Joint Capabilities Integration and Development System (JCIDS) was created as a means to overhaul military procurement processes. Ideally, the requirements development process is meant to take a total of 2-4 years from concept to manufacturing. However, the actual length of concept development is much longer. As a result, technologies that are conceptualized through the analytical process often enter the acquisition process too late to meet the needs of the warrior. To reduce the lengthy timeline in requirements development, we used Large Language Models (LLMs) to conduct the necessary research and synthesize documents that abide by strict JCIDS guidelines. Prompt engineering can achieve these results as a proof of concept. However, the output responses lack the content length and depth necessary to pass through the requirements validation process. Therefore, a combination of agentic workflows, prompt engineering, and sufficient context is needed to achieve the desired outcomes. This project utilizes a novel framework to derive Capabilities Based Assessments (CBAs) at an approximately 80 percent readiness level, requiring the final steps of validation and verification by subject matter experts.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for the Enhancement of Adaptive Optics</title>
<link href="https://hdl.handle.net/1721.1/164899" rel="alternate"/>
<author>
<name>Hall, Robert</name>
</author>
<author>
<name>Chen, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/164899</id>
<updated>2026-02-18T03:09:05Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">Machine Learning for the Enhancement of Adaptive Optics
Hall, Robert; Chen, Justin
Optical systems (telescopes, lasers, microscopes, etc.) have degraded performance over long distances due to scintillation caused by Earth’s atmosphere; adaptive optics (AO) is often used to enhance the signal-to-noise ratio (SNR) or image quality. Astronomers have found success in laser-based adaptive optics, where they survey the atmosphere with a laser and subtract its effects on the resultant image. Although effective in most cases, these systems can be extremely costly, are computationally intensive in real time, and fall short in some edge cases. We propose an autoencoder/decoder and a generalized sequence-to-sequence model (LSTM) as a cost-effective method to off-load computational complexity from real time and enhance performance in edge cases. This study utilizes four simulated datasets of wavefront sensor frames for a variety of atmospheric conditions, produced in collaboration with MIT Lincoln Laboratory [1]. We found auto-encoding performance just shy of traditional methodology, and LSTM performance that predicts the general shape on the WFS well but suffers from scaling issues.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Hype to Reality: Real-World Lessons and Recommendations for AI in Military Applications</title>
<link href="https://hdl.handle.net/1721.1/164898" rel="alternate"/>
<author>
<name>Lynch, Joshua</name>
</author>
<author>
<name>Niss, Laura</name>
</author>
<id>https://hdl.handle.net/1721.1/164898</id>
<updated>2026-02-18T03:09:06Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">From Hype to Reality: Real-World Lessons and Recommendations for AI in Military Applications
Lynch, Joshua; Niss, Laura
The current use cases, limitations, and future capacity of large language models (LLMs) as assistants to military personnel remain an open question. This paper presents a case study of an Airman’s interaction with and trust calibration of LLMs over three months, both as an everyday assistant and for development of ROMAD-AI, a tactical military application. Through intuitive, AI-generated software development, an approach relying on iterative code generation through natural language prompting of LLMs by a technical novice rather than human-generated programming by a technical expert, the research reveals significant gaps between industry-curated AI capability demonstrations and operational reality, requiring systematic trust calibration and realistic scope management. Outcomes are analyzed through operational and technical expertise perspectives to provide practical guidance for both military service members seeking effective AI integration and researchers developing military-focused AI systems.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Models and Defense Strategy: Escalation Risks and National Security Challenges</title>
<link href="https://hdl.handle.net/1721.1/164897" rel="alternate"/>
<author>
<name>Hou, Jonathan</name>
</author>
<author>
<name>Lax, Edwin</name>
</author>
<id>https://hdl.handle.net/1721.1/164897</id>
<updated>2026-02-18T03:09:04Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">Large Language Models and Defense Strategy: Escalation Risks and National Security Challenges
Hou, Jonathan; Lax, Edwin
This literature review examines the strategic vulnerabilities posed by Large Language Models (LLMs) in military and national security contexts. It synthesizes recent research on their propensity for escalatory reasoning, cultural misalignment, semantic manipulation, and dual-use ambiguity. Findings from conflict simulations and coalition planning models reveal how LLMs may default to aggressive or biased outputs under ambiguity. These tendencies threaten alliance cohesion, distort decision-making, and undermine trust in AI-enabled operations. The review concludes by advocating for safeguards such as culturally calibrated training, rigorous output verification, and the integration of human-AI intermediaries to prevent destabilizing outcomes.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synchronization-Aware Diffusion Models for Intra-Family RF Signal Classification</title>
<link href="https://hdl.handle.net/1721.1/164896" rel="alternate"/>
<author>
<name>Hayden, Hunter</name>
</author>
<author>
<name>Botero, Joey</name>
</author>
<id>https://hdl.handle.net/1721.1/164896</id>
<updated>2026-02-18T03:09:02Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">Synchronization-Aware Diffusion Models for Intra-Family RF Signal Classification
Hayden, Hunter; Botero, Joey
Classification of radio frequency (RF) signals in the presence of channel-induced synchronization errors remains a critical challenge in spectrum awareness systems. Traditional classification pipelines generally rely on fixed synchronization algorithms or assume aligned signals, which limits robustness under real-world timing, phase, and frequency distortions. We introduce SyncDiff, a novel encoder-only diffusion model architecture that predicts synchronization parameters through iterative denoising steps prior to classification. By replacing conventional synchronization algorithms with a learned, data-driven correction mechanism, our approach enables adaptive signal alignment based on current channel distortions in unsynchronized input data. SyncDiff employs a UNet-based encoder to refine synchronization parameters across multiple inference steps, dynamically reducing channel-induced alignment errors while preserving the inherent modulation-specific characteristics that make these signals discriminable. Evaluations on the RadioML2018 standard RF benchmark dataset [1] demonstrate improved classification accuracy across varying SNRs, modulation schemes, and synchronization impairments. Our findings highlight the potential of diffusion-based synchronization learning to improve downstream RF classification without reliance on expert-engineered synchronization routines.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Three-Dimensional Full-Core BEAVRS Using OpenMOC with Transport Equivalence</title>
<link href="https://hdl.handle.net/1721.1/164895" rel="alternate"/>
<author>
<name>Giudicelli, G</name>
</author>
<author>
<name>Forget, B</name>
</author>
<author>
<name>Smith, K</name>
</author>
<id>https://hdl.handle.net/1721.1/164895</id>
<updated>2026-02-18T03:07:45Z</updated>
<published>2025-04-04T00:00:00Z</published>
<summary type="text">Three-Dimensional Full-Core BEAVRS Using OpenMOC with Transport Equivalence
Giudicelli, G; Forget, B; Smith, K
Using an optimized implementation of the three-dimensional (3D) method of characteristics for neutron transport, along with a novel equivalence method for transport calculations that was designed to correct self-shielding errors from neglecting the angular dependence of resonant group absorption, a 3D full-core light water reactor hybrid stochastic-deterministic eigenvalue calculation was achieved. This paper presents the optimizations developed and compares the transport solutions obtained. For the statepoint, run times near 10,000 CPU hours are achieved—improving on previous works by an order of magnitude—with near 1% error on pin fission to 238U capture ratios and a few dozen pcm on the eigenvalue.
</summary>
<dc:date>2025-04-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Considering a US-Supported Self-Defense Option for Taiwan</title>
<link href="https://hdl.handle.net/1721.1/164894" rel="alternate"/>
<author>
<name>Glaser, Charles L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164894</id>
<updated>2026-02-18T03:07:50Z</updated>
<published>2025-01-02T00:00:00Z</published>
<summary type="text">Considering a US-Supported Self-Defense Option for Taiwan
Glaser, Charles L.
There is wide agreement that Taiwan is the most dangerous issue dividing the United States and China. China believes Taiwan is part of its homeland, views unification with Taiwan as a core interest, and is determined to gain full control of the island. China continues to prefer peaceful unification, but explicitly retains the option of using military forces to achieve unification and seeks to use the threat of military force to strengthen its negotiating hand. Current US policy includes an ambiguous commitment to defend Taiwan if attacked or severely coerced by China—it leaves open whether and how the United States would respond. In addition, the United States provides Taiwan with weapons to improve its ability to defend itself. The United States is pressing Taiwan to deploy smaller mobile weapons that would increase the survivability and lethality of its forces; these forces would support a “porcupine strategy” that makes Taiwan harder to invade and conquer and would, at a minimum, provide time for US forces to arrive to aid Taiwan’s defense.
</summary>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Needs, Wants . . . and Excuses: What Executives Can Learn from Zig Ziglar About Working with Universities</title>
<link href="https://hdl.handle.net/1721.1/164893" rel="alternate"/>
<author>
<name>Wright, Randall S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164893</id>
<updated>2026-02-18T03:07:49Z</updated>
<published>2025-04-09T00:00:00Z</published>
<summary type="text">Needs, Wants . . . and Excuses: What Executives Can Learn from Zig Ziglar About Working with Universities
Wright, Randall S.
Zig Ziglar was a famous sales trainer, motivational speaker, and author on salesmanship. When he died on November 28, 2012, Kevin Kruse (2024)—best-selling author of Emotional Intelligence: 52 Strategies, coach to Fortune 500 CEOs, Marine Corps generals, and Silicon Valley entrepreneurs—wrote this in Forbes: “Zig Ziglar died today at age 86. A World War II veteran, Zig Ziglar became the top salesperson in several organizations before striking out on his own as a motivational speaker and trainer. With a Southern charm and lessons grounded in Christianity, Ziglar wrote over two dozen books and amassed a following of millions who were encouraged by his lessons for success.”
</summary>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational studies of electric field effects in CO2 methanation on Ni metal surfaces</title>
<link href="https://hdl.handle.net/1721.1/164892" rel="alternate"/>
<author>
<name>Wakamatsu, Katsuhiro</name>
</author>
<author>
<name>Yasuda, Takaaki</name>
</author>
<author>
<name>Aratani, Masato</name>
</author>
<author>
<name>Ogura, Teppei</name>
</author>
<id>https://hdl.handle.net/1721.1/164892</id>
<updated>2026-02-18T03:07:48Z</updated>
<published>2024-03-14T00:00:00Z</published>
<summary type="text">Computational studies of electric field effects in CO2 methanation on Ni metal surfaces
Wakamatsu, Katsuhiro; Yasuda, Takaaki; Aratani, Masato; Ogura, Teppei
Non-Faradaic electrochemical modification of catalytic activity (NEMCA) with an electric field (EF) has attracted attention as one method of improving catalyst performance. However, this activation mechanism is still not clear. In this study, we focused on the NEMCA mechanism in CO2 methanation on a Ni metal catalyst with a solid oxide electrolysis cell (SOEC) and calculated two possible contributions to the NEMCA mechanism, direct EF application and oxygen atom co-adsorption, using density functional theory calculations and detailed kinetic simulations. Comparing these effects in terms of kinetic energy changes in the rate-determining steps revealed that the spillover effect of lattice oxygen toward the catalyst surface is dominant in the NEMCA mechanism. We also found that overall CO2 methanation is promoted in SOEC mode with oxygen atom co-adsorption at both Ni flat and step sites.
</summary>
<dc:date>2024-03-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equilibrium configurations of line arrays with respect to the deviatoric mean drift forces</title>
<link href="https://hdl.handle.net/1721.1/164891" rel="alternate"/>
<author>
<name>Tokić, Grgur</name>
</author>
<author>
<name>Yue, Dick KP</name>
</author>
<id>https://hdl.handle.net/1721.1/164891</id>
<updated>2026-02-18T03:07:47Z</updated>
<published>2024-05-02T00:00:00Z</published>
<summary type="text">Equilibrium configurations of line arrays with respect to the deviatoric mean drift forces
Tokić, Grgur; Yue, Dick KP
Monochromatic waves incident on an array of structures give rise to nonlinear, time-constant mean drift forces (MDFs). These forces depend on the array's spatial configuration; their magnitude and direction are, in general, different for every structure in the array. If the spatial configuration of an array is not fixed, as is the case in arrays of individually anchor-moored structures, the time-constant differences in MDF on individual bodies can lead to a change in spatial configuration, which could, in turn, significantly affect both the first-order, time-harmonic response of the array and the downwave component of the MDF. Here, we explore the dependence of these deviatoric forces on array configurations and on the frequency of the incident monochromatic waves. We consider configurations of line arrays (consisting of 2–5 vertical circular cylinders) that are described by 1 or 2 parameters, and we focus on the along-array component of deviatoric forces. Using multiple scattering computational simulations, we identify the array configurations in which the deviatoric drift forces are zero, and we discuss the stability of these equilibrium configurations with respect to class-preserving configuration perturbations. Both stable and unstable equilibria exist, but the relative number of unstable equilibria grows as the number of degrees of freedom of the configuration perturbations increases. Interestingly, the stable configurations experience a generally lower downwave mean drift force on the entire array than the unstable ones. Overall, the variations in the deviatoric and the downwave MDFs between equilibria are significant (on the order of the isolated-body MDF).
</summary>
<dc:date>2024-05-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>1000-MW CSP with 100-gigawatt-hour crushed-rock heat storage to replace dispatchable fossil-fuel electricity</title>
<link href="https://hdl.handle.net/1721.1/164890" rel="alternate"/>
<author>
<name>Forsberg, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/164890</id>
<updated>2026-02-18T03:07:41Z</updated>
<published>2023-10-06T00:00:00Z</published>
<summary type="text">1000-MW CSP with 100-gigawatt-hour crushed-rock heat storage to replace dispatchable fossil-fuel electricity
Forsberg, Charles
We are developing 100-GWh heat-storage systems for use with 1000-MW Concentrated Solar Power (CSP) and nuclear reactor systems with capital cost goals of several dollars per kWh of heat storage—a factor of 50 under lithium-ion batteries per unit of electricity. The capabilities of a 100-GWh heat storage system are similar to the Tennessee Valley Authority Raccoon Mountain pumped hydro facility, which can provide 1652 MW(e) for 22 hours to address daily to weekly storage. The low capital cost of the Crushed Rock Ultra-large Stored Heat (CRUSH) system is only possible in large-capacity systems; thus, the CSP system's average 24/7 heat inputs may exceed 1000 MW to match the heat storage capacity. Hot oil or nitrate salt is pumped from multiple solar farms or towers to the central CRUSH system and associated power block. The peak power block output may be 2 to 4 times the average output, with large economies of scale relative to the smaller power blocks associated with existing CSP systems. The cost savings from the large storage and the power block exceed the cost of insulated hot-oil or hot-nitrate-salt pipelines over 10+ kilometers. The heat is stored in crushed rock in piles 20 m high and up to 250 m by 250 m on a side within an insulated floor and building structure. The sides of the rock pile are sloped rock that allows rock expansion and contraction with temperature without generating mechanical forces against walls. Heat is transferred from CSP to the crushed rock and then to the power cycle using (1) heat transfer oils for lower-temperature power systems to 400°C or (2) nitrate salts for higher-temperature power systems to 600°C. In charging mode, hot heat transfer fluid is sprayed over crushed rock and drains through the rock to the collection pans at the bottom to be reheated. Sections of rock are heated sequentially. In discharge mode, cold heat transfer fluid is sprayed over crushed rock and drains through the rock to the collection pans below to deliver hot fluid to the power cycle. Heat storage costs are minimized by three features. Crushed rock is the lowest-cost storage material. The large building size minimizes the surface-to-volume ratio and thus building, insulation, and foundation costs. The inventory, and thus cost, of oil and nitrate salt is minimized by using these fluids to transfer heat from CSP collectors to storage and then to the power block—but not for heat storage.
</summary>
<dc:date>2023-10-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>The way forward: The path to monolithic additive manufacture of lower hybrid current drive launchers</title>
<link href="https://hdl.handle.net/1721.1/164889" rel="alternate"/>
<author>
<name>Seltzman, AH</name>
</author>
<author>
<name>Wukitch, SJ</name>
</author>
<id>https://hdl.handle.net/1721.1/164889</id>
<updated>2026-02-18T03:07:38Z</updated>
<published>2023-08-18T00:00:00Z</published>
<summary type="text">The way forward: The path to monolithic additive manufacture of lower hybrid current drive launchers
Seltzman, AH; Wukitch, SJ
Additive Manufacturing (AM) is a key enabling technology for the rapid production of complex radio-frequency (RF) structures used in lower hybrid current drive (LHCD) launchers. Glenn Research Copper 84 (GRCop-84), a Niobium Chromide (Cr2Nb) 8 at. % Cr, 4 at. % Nb precipitation hardened alloy, is suitable for AM with Laser Powder Bed Fusion (L-PBF), achieving 99.5% density, Ra=3-4 µm surface roughness, yield strength of 470 MPa and an ultimate tensile strength (UTS) of 710 MPa in as-printed condition. AM of a high field side (HFS) lower LHCD launcher from GRCop-84 alloy demonstrated several critical advancements in AM of RF launchers. Waveguides with a pentagonal cross-section were designed to support the top internal waveguide surface with 45-degree chamfers from the sidewall, eliminating collapse of the ceiling, while maintaining RF properties near identical to a rectangular cross section. Hot Isostatic Pressing (HIPing) consolidated residual voids within the material, increasing density from 99.5% to 100%. Chemical-Mechanical Polishing (CMP) reduced residual surface roughness from the L-PBF process to Ra=0.1 µm / Rq=0.4 µm to lower RF losses. Advancements in L-PBF for the AM of copper alloys have increased the maximum build volume from 250x250x300mm on the Concept Laser M2 printer to 400x400x400mm on the EOS M400 printer. This increased build volume now enables monolithic AM of complete LHCD launchers with integrated cooling channels that eliminate the time-consuming laser welding assembly of launcher segments previously required by the smaller build volume.
</summary>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Particle-in-cell simulations of parasitic electrostatic wave excitation in the ion cyclotron range of frequencies and high harmonic fast wave regimes</title>
<link href="https://hdl.handle.net/1721.1/164888" rel="alternate"/>
<author>
<name>Diab, Raymond</name>
</author>
<author>
<name>Baek, Seung-Gyou</name>
</author>
<author>
<name>Bonoli, Paul</name>
</author>
<author>
<name>Jenkins, Thomas G</name>
</author>
<author>
<name>Ono, Masayuki</name>
</author>
<author>
<name>Smithe, David</name>
</author>
<id>https://hdl.handle.net/1721.1/164888</id>
<updated>2026-02-18T03:07:31Z</updated>
<published>2023-08-18T00:00:00Z</published>
<summary type="text">Particle-in-cell simulations of parasitic electrostatic wave excitation in the ion cyclotron range of frequencies and high harmonic fast wave regimes
Diab, Raymond; Baek, Seung-Gyou; Bonoli, Paul; Jenkins, Thomas G; Ono, Masayuki; Smithe, David
Using the open-source code SMILEI [J. Derouillat et al., Comput. Phys. Commun. 222, 351-373 (2018)], we perform one-dimensional full-f particle-in-cell (PIC) simulations of parasitic electrostatic wave excitation in the Ion Cyclotron Range of Frequencies (ICRF) and High Harmonic Fast Wave (HHFW) regimes in an inhomogeneous plasma. We first study direct coupling from the fast wave to electrostatic waves at the lower hybrid (LH) resonance (S=0). In the ICRF regime, we show that the fast wave can couple to the Ion Bernstein Wave (IBW), which propagates beyond the LH resonance layer. On the other hand, in the HHFW regime, no direct coupling to the IBW is observed, but electrostatic waves, likely to be Hot Ion Plasma Waves (HIPW or HPW), are seen on the low-density side of the LH resonance layer. The coupling efficiency to electrostatic waves is seen to increase with ion temperature. Parametric decay instabilities (PDIs) are then investigated in both regimes. In the ICRF regime, both resonant and non-resonant decay channels are observed and compared with theory. In the HHFW regime, we observe multiple sidebands separated by the ion cyclotron frequency, as measured experimentally on NSTX [J. R. Wilson et al., AIP Conf. Proc. 787, 66 (2005)]. The nature of these waves is discussed. Perpendicular ion heating is also found in the region where PDIs occur, consistent with experimental observations.
</summary>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards fast, accurate predictions of RF simulations via data-driven modeling: Forward and lateral models</title>
<link href="https://hdl.handle.net/1721.1/164887" rel="alternate"/>
<author>
<name>Wallace, GM</name>
</author>
<author>
<name>Bai, Z</name>
</author>
<author>
<name>Bertelli, N</name>
</author>
<author>
<name>Bethel, EW</name>
</author>
<author>
<name>Perciano, T</name>
</author>
<author>
<name>Shiraiwa, S</name>
</author>
<author>
<name>Wright, JC</name>
</author>
<id>https://hdl.handle.net/1721.1/164887</id>
<updated>2026-02-18T03:07:43Z</updated>
<published>2023-08-18T00:00:00Z</published>
<summary type="text">Towards fast, accurate predictions of RF simulations via data-driven modeling: Forward and lateral models
Wallace, GM; Bai, Z; Bertelli, N; Bethel, EW; Perciano, T; Shiraiwa, S; Wright, JC
Three machine learning techniques (multilayer perceptron, random forest, and Gaussian process) provide fast surrogate models for lower hybrid current drive (LHCD) simulations. A single GENRAY/CQL3D simulation without radial diffusion of fast electrons requires several minutes of wall-clock time to complete, which is acceptable for many purposes, but too slow for integrated modeling and real-time control applications. More accurate simulations with fast electron diffusion are even slower, requiring multiple hours of run time with parallel processing. The machine learning models use a database of 16,000+ GENRAY/CQL3D simulations for training, validation, and testing. Latin hypercube sampling methods implemented in πScope ensure that the database covers the range of 9 input parameters (ne0, Te0, Ip, Bt, R0, n∥, Zeff, Vloop, PLHCD) with sufficient density in all regions of parameter space. The surrogate models reduce the computation time from minutes or hours to milliseconds with high accuracy across the input parameter space. Data-driven surrogate models also allow for solving inverse and “lateral” problems. A surrogate model for the inverse problem maps from a desired current drive or power deposition profile to a set of input parameters that would result in such a profile, while a surrogate model for the lateral problem maps from a measured experimental quantity such as hard x-ray emission to a current drive or power deposition profile. The πScope database creation workflow is flexible and applicable to other RF simulation codes such as TORIC.
</summary>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decrypting the mechanisms of wicking and evaporation heat transfer on micro-pillars during the pool boiling of water using high-resolution infrared thermometry</title>
<link href="https://hdl.handle.net/1721.1/164886" rel="alternate"/>
<author>
<name>Wang, Chi</name>
</author>
<author>
<name>Rahman, Md Mahamudur</name>
</author>
<author>
<name>Bucci, Matteo</name>
</author>
<id>https://hdl.handle.net/1721.1/164886</id>
<updated>2026-02-18T03:07:40Z</updated>
<published>2023-03-08T00:00:00Z</published>
<summary type="text">Decrypting the mechanisms of wicking and evaporation heat transfer on micro-pillars during the pool boiling of water using high-resolution infrared thermometry
Wang, Chi; Rahman, Md Mahamudur; Bucci, Matteo
Surfaces with micrometer-scale pillars have shown great potential in delaying the boiling crisis and enhancing the critical heat flux (CHF). However, the physical mechanisms enabling this enhancement remain unclear. This knowledge gap is due to a lack of diagnostics that allow elucidating how micro-pillars affect thermal transport phenomena on the engineered surface. In this study, for the first time, we are able to measure time-dependent temperature and heat flux distributions on a boiling surface with engineered micro-pillars using infrared thermometry. Using these data, we reveal the presence of an intra-pillar liquid layer, created by the nucleation of bubbles and partially refilled by capillary effects. However, contrary to conventional wisdom, the energy removed by the evaporation of this liquid cannot explain the observed CHF enhancement. Yet, predicting its dryout is the key to delaying the boiling crisis. We achieve this goal using simple analytic models and demonstrate that this process is driven by conduction effects in the boiling substrates and, importantly, in the intra-pillar liquid layer itself. These effects also control the wicking flow rate and its penetration length. The boiling crisis occurs when, by coalescing, the intra-pillar liquid layer becomes too large for the wicking flow to reach its innermost region. Our study reveals and quantifies previously unidentified physical aspects that are key to the performance optimization of boiling surfaces for cooling applications.
</summary>
<dc:date>2023-03-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>APP2 Status Summary: Proposed New VLBI Capabilities for Cycle 8</title>
<link href="https://hdl.handle.net/1721.1/164885" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, Geoffrey B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164885</id>
<updated>2026-03-05T18:44:02Z</updated>
<published>2019-11-21T00:00:00Z</published>
<summary type="text">APP2 Status Summary: Proposed New VLBI Capabilities for Cycle 8
Matthews, Lynn D.; Crew, Geoffrey B.
This document contains material regarding the new capabilities proposed for ALMA Cycle 8. These included a passive phasing mode (for weaker sources), an ALMA Band 3 pulsar observing mode, a prototype spectral line mode, also for Band 3, and minor improvements to the code for operational reasons. In the event, the first two were accepted for a Cycle 8 that was delayed by the COVID-19 pandemic. The spectral line capability was deferred until Cycle 9. In response to reviewer questions, a modified figure was prepared and presented as an addendum. Both documents are presented here as a single PDF.
This report was prepared for the formal acceptance of the software required for Cycle 8. Notionally it is ALMA Technical Note #21, but not (yet) published as such.
</summary>
<dc:date>2019-11-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Final Report: Enabling New VLBI Science with the ALMA Phasing System - Phase 3 (APP3): An ALMA North America Development Project</title>
<link href="https://hdl.handle.net/1721.1/164884" rel="alternate"/>
<author>
<name>Matthews, Lynn D.</name>
</author>
<author>
<name>Crew, Geoffrey B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164884</id>
<updated>2026-03-04T18:34:05Z</updated>
<published>2024-07-19T00:00:00Z</published>
<summary type="text">Final Report: Enabling New VLBI Science with the ALMA Phasing System - Phase 3 (APP3): An ALMA North America Development Project
Matthews, Lynn D.; Crew, Geoffrey B.
Executive Summary: This document provides a summary of activities undertaken as part of the ALMA North America Development Project “Enabling New VLBI Science with the ALMA Phasing System - Phase 3 (APP3)”, whose period of performance extended from January 17, 2022 to July 16, 2024. The successful completion of this project has resulted in the introduction of flexible tuning for ALMA very long baseline interferometry (VLBI) operations, a fully flexible spectral line VLBI observing mode, and the enabling of panchromatic VLBI, allowing ALMA in principle to operate as a phased array in any available receiver band. The project also carried out a series of activities aimed at maintaining and optimizing existing VLBI infrastructure and provided training to staff at the Joint ALMA Observatory (JAO) to enable a transition to autonomous VLBI observing. A video feature and accompanying news article were produced near the conclusion of this project to make the results accessible to a broader audience.
This report was prepared as a final report on the activities undertaken under the NA Development program mentioned in the abstract.
</summary>
<dc:date>2024-07-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Open Access Task Force - Implementation Team Progress Report</title>
<link href="https://hdl.handle.net/1721.1/164883" rel="alternate"/>
<author>
<name>Bebergal, Peter</name>
</author>
<author>
<name>Bourg, Chris</name>
</author>
<author>
<name>Dunn, Katharine H.</name>
</author>
<author>
<name>Nurnberger, Amy</name>
</author>
<author>
<name>Pierce, Marianna</name>
</author>
<author>
<name>Shirer, Karen</name>
</author>
<author>
<name>Weeramuni, Lindsey</name>
</author>
<author>
<name>Wilcoxson, Jaren D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164883</id>
<updated>2026-02-14T03:08:05Z</updated>
<published>2020-11-09T00:00:00Z</published>
<summary type="text">Open Access Task Force - Implementation Team Progress Report
Bebergal, Peter; Bourg, Chris; Dunn, Katharine H.; Nurnberger, Amy; Pierce, Marianna; Shirer, Karen; Weeramuni, Lindsey; Wilcoxson, Jaren D.
This report outlines the progress to date of the Open Access Task Force Implementation Team (OATF-IT), first convened in December 2019 to prioritize, shepherd, and support the final recommendations (October 2019) of the MIT-wide Ad Hoc Task Force on Open Access to MIT’s Research (OATF). The OATF launched in 2017 to update and revise the Institute’s policies and practices around open publications, data, educational materials, and software.
</summary>
<dc:date>2020-11-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recommendations of the MIT Ad Hoc Task Force on Open Access to MIT's Research</title>
<link href="https://hdl.handle.net/1721.1/164882" rel="alternate"/>
<author>
<name>Abelson, Harold</name>
</author>
<author>
<name>Bourg, Chris</name>
</author>
<author>
<name>Bebergal, Peter</name>
</author>
<author>
<name>Bond, Robert A.</name>
</author>
<author>
<name>Cheng, Herng Yi</name>
</author>
<author>
<name>Chuang, Isaac L.</name>
</author>
<author>
<name>Cummins, Christopher C</name>
</author>
<author>
<name>Fitzgerald, Deborah K</name>
</author>
<author>
<name>Jarzombek, Mark</name>
</author>
<author>
<name>Lindsay, Nick</name>
</author>
<author>
<name>Pollard, Tom Joseph</name>
</author>
<author>
<name>Reid, Jack</name>
</author>
<author>
<name>Shirer, Karen</name>
</author>
<author>
<name>Trout, Bernhardt L.</name>
</author>
<author>
<name>Vander Heiden, Matthew G.</name>
</author>
<author>
<name>von Hippel, Eric A</name>
</author>
<author>
<name>Wilcoxson, Jaren D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164882</id>
<updated>2026-02-14T03:08:04Z</updated>
<published>2019-10-17T00:00:00Z</published>
<summary type="text">Recommendations of the MIT Ad Hoc Task Force on Open Access to MIT's Research
Abelson, Harold; Bourg, Chris; Bebergal, Peter; Bond, Robert A.; Cheng, Herng Yi; Chuang, Isaac L.; Cummins, Christopher C; Fitzgerald, Deborah K; Jarzombek, Mark; Lindsay, Nick; Pollard, Tom Joseph; Reid, Jack; Shirer, Karen; Trout, Bernhardt L.; Vander Heiden, Matthew G.; von Hippel, Eric A; Wilcoxson, Jaren D.
</summary>
<dc:date>2019-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regulating Wait-Driven Requests in Queues</title>
<link href="https://hdl.handle.net/1721.1/164881" rel="alternate"/>
<author>
<name>Freund, Daniel</name>
</author>
<author>
<name>Hausman, David</name>
</author>
<author>
<name>Weng, Wentao</name>
</author>
<id>https://hdl.handle.net/1721.1/164881</id>
<updated>2026-03-08T03:22:53Z</updated>
<published>2025-07-02T00:00:00Z</published>
<summary type="text">Regulating Wait-Driven Requests in Queues
Freund, Daniel; Hausman, David; Weng, Wentao
The study of rational queueing has a long and distinguished history focused on individuals' preference to avoid waiting. Surprisingly, there are settings in which some potential arrivals (which we also refer to as requests) derive utility from waiting and disutility from service. Our primary example is the U.S. affirmative asylum process. In this context, applicants obtain a work permit while waiting for an asylum interview; hence, if the (expected) wait is long enough, then even an applicant who knows that their application will be denied and lead to deportation proceedings may find it in their interest to apply, and thus benefit from legally working during the wait. Similar dynamics could occur in other settings, such as content moderation in social networks.
The common thread of these examples is the potentially self-exciting queue: when wait times are long, many arrivals are incentivized to join, and wait times become even longer. However, the system designer usually wants to avoid a large backlog. Indeed, the U.S. Citizenship and Immigration Services (USCIS) mostly schedules asylum interviews in a Last-In-First-Out (LIFO) manner with the explicit goal of dissuading applicants with non-meritorious cases from trying to exploit the long backlog. Despite this interesting scheduling choice in practice, and the potential prevalence of similar settings in other applications, the existing literature on rational queueing lacks frameworks to study the impact of wait-driven requests.
Motivated by this gap in the literature, we formalize a dynamical system where, in each round, a given scheduling policy and a realized request rate determine the wait time distribution in a fluid queueing system. Observing the expected benefit from waiting in one round, requests update their decisions, setting the request rate for the next round. Assuming a concave benefit function from waiting, alongside general conditions, we prove that, for minimizing the backlog, LIFO is most effective while First-In-First-Out (FIFO) is least effective among all work-conserving policies. Moreover, we show that the dynamical system exhibits metastability: for either FIFO or LIFO, the system converges to either a zero-wait or a congested equilibrium.
Although some asylum practitioners support the use of LIFO, critics often admonish its real-world use for failing to maintain FIFO's order fairness: earlier requests should get earlier service. Our results demonstrate this trade-off between LIFO and FIFO. But we also show limitations of hybrid policies, which probabilistically follow either LIFO or FIFO, in navigating the trade-off between LIFO's efficiency and FIFO's fairness. Our work formalizes the concept of order fairness in queueing systems with abandonment and demonstrates that hybrid policies can be Pareto-dominated by LIFO: they may have both a longer backlog and worse order fairness. Finally, we use real-world data on the scheduling of affirmative asylum applications to evaluate the change in fairness over the past 20 years under different policies.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</summary>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>DeepSeek Inside: Origins, Technology, and Impact</title>
<link href="https://hdl.handle.net/1721.1/164880" rel="alternate"/>
<author>
<name>Cusumano, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164880</id>
<updated>2026-03-08T03:23:14Z</updated>
<published>2025-06-18T00:00:00Z</published>
<summary type="text">DeepSeek Inside: Origins, Technology, and Impact
Cusumano, Michael
The release of DeepSeek V3 and R1 in January 2025 caused steep declines in the stock prices of companies that provide generative artificial intelligence (GenAI) infrastructure technology and datacenter services. These two large language models (LLMs) came from a little-known Chinese startup with approximately 200 employees compared to at least 3,500 employees for industry-leader OpenAI. DeepSeek seemed to have developed this powerful technology much more cheaply than previously thought possible. If true, DeepSeek had the potential to disrupt the economics of the entire GenAI ecosystem and the dominance of U.S. companies ranging from OpenAI to Nvidia.
</summary>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Density-Dependent Graph Orientation and Coloring in Scalable MPC</title>
<link href="https://hdl.handle.net/1721.1/164879" rel="alternate"/>
<author>
<name>Ghaffari, Mohsen</name>
</author>
<author>
<name>Grunau, Christoph</name>
</author>
<id>https://hdl.handle.net/1721.1/164879</id>
<updated>2026-03-08T03:23:16Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Density-Dependent Graph Orientation and Coloring in Scalable MPC
Ghaffari, Mohsen; Grunau, Christoph
This paper presents massively parallel computation (MPC) algorithms in the strongly sublinear memory regime (aka, scalable MPC) for orienting and coloring graphs as a function of their subgraph density. Our algorithms run in poly(log log n) rounds and compute an orientation of the edges with maximum outdegree O(α log log n) as well as a coloring of the vertices with O(α log log n) colors. Here, α denotes the density of the densest subgraph. Our algorithm's round complexity is notable because it breaks the [EQUATION] barrier, which applies to the previously best known density-dependent orientation algorithm [Ghaffari, Lattanzi, and Mitrovic ICML'19] and is common to many other scalable MPC algorithms.
PODC ’25, Huatulco, Mexico
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Privacy-Preserving Mechanisms for Coordinating Airspace Usage in Advanced Air Mobility</title>
<link href="https://hdl.handle.net/1721.1/164878" rel="alternate"/>
<author>
<name>Maheshwari, Chinmay</name>
</author>
<author>
<name>Mendoza, Maria</name>
</author>
<author>
<name>Tuck, Victoria</name>
</author>
<author>
<name>Su, Pan-Yang</name>
</author>
<author>
<name>Qin, Victor</name>
</author>
<author>
<name>Seshia, Sanjit</name>
</author>
<author>
<name>Balakrishnan, Hamsa</name>
</author>
<author>
<name>Sastry, Shankar</name>
</author>
<id>https://hdl.handle.net/1721.1/164878</id>
<updated>2026-03-08T03:22:45Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Privacy-Preserving Mechanisms for Coordinating Airspace Usage in Advanced Air Mobility
Maheshwari, Chinmay; Mendoza, Maria; Tuck, Victoria; Su, Pan-Yang; Qin, Victor; Seshia, Sanjit; Balakrishnan, Hamsa; Sastry, Shankar
Advanced Air Mobility (AAM) operations are expected to transform air transportation while challenging current air traffic management practices. By introducing a novel market-based mechanism, we address the problem of on-demand allocation of capacity-constrained airspace to AAM vehicles with heterogeneous and private valuations. We model airspace and air infrastructure as a collection of contiguous regions (or sectors) with constraints on the number of vehicles that simultaneously enter, stay, or exit each region. Vehicles request access to airspace with trajectories spanning multiple regions at different times. We use the graph structure of our airspace model to formulate the allocation problem as a path allocation problem on a time-extended graph. To ensure that the cost information of AAM vehicles remains private, we introduce a novel mechanism that allocates each vehicle a budget of "air-credits" (an artificial currency) and anonymously charges prices for traversing the edges of the time-extended graph. We seek to compute a competitive equilibrium that ensures that: (i) capacity constraints are satisfied, (ii) a strictly positive resource price implies that the sector capacity is fully utilized, and (iii) the allocation is integral and optimal for each AAM vehicle given current prices, without requiring access to individual vehicle utilities. However, a competitive equilibrium with integral allocations may not always exist. We provide sufficient conditions for the existence and computation of a fractional-competitive equilibrium, where allocations can be fractional. Building on these theoretical insights, we propose a distributed, iterative, two-step algorithm that: 1) computes a fractional competitive equilibrium,  and 2) derives an integral allocation from this equilibrium. We validate the effectiveness of our approach in allocating trajectories for the emerging urban air mobility service of drone delivery.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meschers: Geometry Processing of Impossible Objects</title>
<link href="https://hdl.handle.net/1721.1/164877" rel="alternate"/>
<author>
<name>Dodik, Ana</name>
</author>
<author>
<name>Yu, Isabella</name>
</author>
<author>
<name>Chandra, Kartik</name>
</author>
<author>
<name>Ragan-Kelley, Jonathan</name>
</author>
<author>
<name>Tenenbaum, Joshua</name>
</author>
<author>
<name>Sitzmann, Vincent</name>
</author>
<author>
<name>Solomon, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/164877</id>
<updated>2026-03-08T03:23:14Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Meschers: Geometry Processing of Impossible Objects
Dodik, Ana; Yu, Isabella; Chandra, Kartik; Ragan-Kelley, Jonathan; Tenenbaum, Joshua; Sitzmann, Vincent; Solomon, Justin
Impossible objects, geometric constructions that humans can perceive but that cannot exist in real life, have been a topic of intrigue in visual arts, perception, and graphics, yet no satisfying computer representation of such objects exists. Previous work embeds impossible objects in 3D, cutting them or twisting/bending them in the depth axis. Cutting an impossible object changes its local geometry at the cut, which can hamper downstream graphics applications, such as smoothing, while bending makes it difficult to relight the object. Both of these can invalidate geometry operations, such as distance computation. As an alternative, we introduce Meschers, meshes capable of representing impossible constructions akin to those found in M.C. Escher's woodcuts.  Our representation has a theoretical foundation in discrete exterior calculus and supports the use-cases above, as we demonstrate in a number of example applications. Moreover, because we can do discrete geometry processing on our representation, we can inverse-render impossible objects. We also compare our representation to cut and bend representations of impossible objects.
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints</title>
<link href="https://hdl.handle.net/1721.1/164876" rel="alternate"/>
<author>
<name>Lechowicz, Adam</name>
</author>
<author>
<name>Christianson, Nicolas</name>
</author>
<author>
<name>Sun, Bo</name>
</author>
<author>
<name>Bashir, Noman</name>
</author>
<author>
<name>Hajiesmaili, Mohammad</name>
</author>
<author>
<name>Wierman, Adam</name>
</author>
<author>
<name>Shenoy, Prashant</name>
</author>
<id>https://hdl.handle.net/1721.1/164876</id>
<updated>2026-03-08T03:23:15Z</updated>
<published>2025-06-09T00:00:00Z</published>
<summary type="text">Learning-Augmented Competitive Algorithms for Spatiotemporal Online Allocation with Deadline Constraints
Lechowicz, Adam; Christianson, Nicolas; Sun, Bo; Bashir, Noman; Hajiesmaili, Mohammad; Wierman, Adam; Shenoy, Prashant
We introduce and study spatiotemporal online allocation with deadline constraints (SOAD), a new online problem motivated by emerging challenges in sustainability and energy. In SOAD, an online player completes a workload by allocating and scheduling it on the points of a metric space (X, d) while subject to a deadline T. At each time step, a service cost function is revealed that represents the cost of servicing the workload at each point, and the player must irrevocably decide the current allocation of work to points. Whenever the player moves this allocation, they incur a movement cost defined by the distance metric d(•, •) that captures, e.g., an overhead cost. SOAD formalizes the open problem of combining general metrics and deadline constraints in the online algorithms literature, unifying problems such as metrical task systems and online search. We propose a competitive algorithm for SOAD along with a matching lower bound establishing its optimality. Our main algorithm, ST-CLIP, is a learning-augmented algorithm that takes advantage of predictions (e.g., forecasts of relevant costs) and achieves an optimal consistency-robustness trade-off. We evaluate our proposed algorithms in a simulated case study of carbon-aware spatiotemporal workload management, an application in sustainable computing that schedules a delay-tolerant batch compute job on a distributed network of data centers. In these experiments, we show that ST-CLIP substantially improves on heuristic baseline methods.
SIGMETRICS Abstracts ’25, Stony Brook, NY, USA
</summary>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Faraday Cage Estimation of Normals for Point Clouds and Ribbon Sketches</title>
<link href="https://hdl.handle.net/1721.1/164875" rel="alternate"/>
<author>
<name>Scrivener, Daniel</name>
</author>
<author>
<name>Cui, Daniel</name>
</author>
<author>
<name>Coldren, Ellis</name>
</author>
<author>
<name>Abulnaga, Mazdak</name>
</author>
<author>
<name>Bessmeltsev, Mikhail</name>
</author>
<author>
<name>Chien, Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/164875</id>
<updated>2026-03-08T03:23:22Z</updated>
<published>2025-07-25T00:00:00Z</published>
<summary type="text">Faraday Cage Estimation of Normals for Point Clouds and Ribbon Sketches
Scrivener, Daniel; Cui, Daniel; Coldren, Ellis; Abulnaga, Mazdak; Bessmeltsev, Mikhail; Chien, Edward
We propose a novel method (FaCE) for normal estimation of unoriented point clouds and VR ribbon sketches that leverages a modeling of the Faraday cage effect. Input points, or a sampling of the ribbons, form a conductive cage and shield the interior from external fields. The gradient of the maximum field strength over external field scenarios is used to estimate a normal at each input point or ribbon. The electrostatic effect is modeled with a simple Poisson system, accommodating intuitive user-driven sculpting via the specification of point charges and Faraday cage points. On inputs sampled from clean, watertight meshes, our method achieves comparable normal quality to existing methods tailored for this scenario. On inputs containing interior structures and artifacts, our method produces superior surfacing output when combined with Poisson Surface Reconstruction. In the case of ribbon sketches, our method accommodates sparser ribbon input while maintaining an accurate geometry, allowing for greater flexibility in the artistic process. We demonstrate superior performance to an existing approach for surfacing ribbon sketches in this sparse setting.
</summary>
<dc:date>2025-07-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Draft Recommendations of the MIT Ad Hoc Faculty Task Force on Open Access to MIT's Research</title>
<link href="https://hdl.handle.net/1721.1/164874" rel="alternate"/>
<author>
<name>Abelson, Harold</name>
</author>
<author>
<name>Bourg, Chris</name>
</author>
<author>
<name>Bebergal, Peter</name>
</author>
<author>
<name>Bond, Robert A.</name>
</author>
<author>
<name>Cheng, Herng Yi</name>
</author>
<author>
<name>Chuang, Isaac L.</name>
</author>
<author>
<name>Cummins, Christopher C</name>
</author>
<author>
<name>Fitzgerald, Deborah K</name>
</author>
<author>
<name>Jarzombek, Mark</name>
</author>
<author>
<name>Lindsay, Nick</name>
</author>
<author>
<name>Pollard, Tom Joseph</name>
</author>
<author>
<name>Reid, Jack</name>
</author>
<author>
<name>Shirer, Karen</name>
</author>
<author>
<name>Trout, Bernhardt L.</name>
</author>
<author>
<name>Vander Heiden, Matthew G.</name>
</author>
<author>
<name>von Hippel, Eric A</name>
</author>
<author>
<name>Wilcoxson, Jaren D.</name>
</author>
<author>
<name>Finnie, Ellen</name>
</author>
<author>
<name>Dunn, Katharine H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164874</id>
<updated>2026-02-14T03:08:03Z</updated>
<published>2019-03-16T00:00:00Z</published>
<summary type="text">Draft Recommendations of the MIT Ad Hoc Faculty Task Force on Open Access to MIT's Research
Abelson, Harold; Bourg, Chris; Bebergal, Peter; Bond, Robert A.; Cheng, Herng Yi; Chuang, Isaac L.; Cummins, Christopher C; Fitzgerald, Deborah K; Jarzombek, Mark; Lindsay, Nick; Pollard, Tom Joseph; Reid, Jack; Shirer, Karen; Trout, Bernhardt L.; Vander Heiden, Matthew G.; von Hippel, Eric A; Wilcoxson, Jaren D.; Finnie, Ellen; Dunn, Katharine H.
</summary>
<dc:date>2019-03-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Networking Systems for Video Anomaly Detection: A Tutorial and Survey</title>
<link href="https://hdl.handle.net/1721.1/164873" rel="alternate"/>
<author>
<name>Liu, Jing</name>
</author>
<author>
<name>Liu, Yang</name>
</author>
<author>
<name>Lin, Jieyu</name>
</author>
<author>
<name>Li, Jielin</name>
</author>
<author>
<name>Cao, Liang</name>
</author>
<author>
<name>Sun, Peng</name>
</author>
<author>
<name>Hu, Bo</name>
</author>
<author>
<name>Song, Liang</name>
</author>
<author>
<name>Boukerche, Azzedine</name>
</author>
<author>
<name>Leung, Victor</name>
</author>
<id>https://hdl.handle.net/1721.1/164873</id>
<updated>2026-03-08T03:22:52Z</updated>
<published>2025-05-07T00:00:00Z</published>
<summary type="text">Networking Systems for Video Anomaly Detection: A Tutorial and Survey
Liu, Jing; Liu, Yang; Lin, Jieyu; Li, Jielin; Cao, Liang; Sun, Peng; Hu, Bo; Song, Liang; Boukerche, Azzedine; Leung, Victor
The increasing utilization of surveillance cameras in smart cities, coupled with the surge of online video applications, has heightened concerns regarding public security and privacy protection, propelling automated Video Anomaly Detection (VAD) into a fundamental research task within the Artificial Intelligence (AI) community. With advances in deep learning and edge computing, VAD has made significant progress in synergy with emerging applications in smart cities and the video internet, moving beyond the conventional research scope of algorithm engineering to deployable Networking Systems for VAD (NSVAD), a practical hotspot for exploration at the intersection of the AI, IoVT, and computing fields. In this article, we delineate the foundational assumptions, learning frameworks, and applicable scenarios of various deep learning-driven VAD routes, offering an exhaustive tutorial for novices in NSVAD. In addition, this article elucidates core concepts by reviewing recent advances and typical solutions and aggregating available research resources accessible at https://github.com/fdjingliu/NSVAD. Lastly, this article projects future development trends and discusses how the integration of AI and computing technologies can address existing research challenges and promote open opportunities, serving as an insightful guide for prospective researchers and engineers.
</summary>
<dc:date>2025-05-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis and Performance Evaluation of Blockchain Consensus Mechanisms for Network Sharing</title>
<link href="https://hdl.handle.net/1721.1/164872" rel="alternate"/>
<author>
<name>Zeydan, Engin</name>
</author>
<author>
<name>MANGUES-BAFALLUY, JOSEP</name>
</author>
<author>
<name>Arslan, Suayb</name>
</author>
<author>
<name>Turk, Yekta</name>
</author>
<author>
<name>Antevski, Kiril</name>
</author>
<id>https://hdl.handle.net/1721.1/164872</id>
<updated>2026-03-08T03:23:06Z</updated>
<published>2026-01-27T00:00:00Z</published>
<summary type="text">Analysis and Performance Evaluation of Blockchain Consensus Mechanisms for Network Sharing
Zeydan, Engin; MANGUES-BAFALLUY, JOSEP; Arslan, Suayb; Turk, Yekta; Antevski, Kiril
The growing demand for mobile data services has made it necessary to find efficient and cost-effective ways to share networks. Blockchain technology is a promising solution to the challenges of network sharing, such as interoperability, trust, and accountability. This paper presents a comprehensive classification and categorization of blockchain-based network sharing scenarios, highlighting their advantages and limitations. Seven network sharing scenarios are identified, ranging from centralized network sharing to fully decentralized spectrum sharing. The suitability of selected blockchain consensus algorithms (namely Proof-of-Work (PoW) with Ethereum, Proof-of-Authority (PoA) with Ethereum, Practical Byzantine Fault Tolerance (PBFT) with Tendermint, and Proof-of-Stake (PoS) with Cosmos) is assessed for selected scenarios through extensive evaluations. This paper also identifies gaps and opportunities in blockchain-based network sharing solutions, and presents future research directions.
</summary>
<dc:date>2026-01-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>MapTune: Versatile ASIC Technology Mapping via Reinforcement Learning Guided Library Tuning</title>
<link href="https://hdl.handle.net/1721.1/164871" rel="alternate"/>
<author>
<name>Liu, Mingju</name>
</author>
<author>
<name>Robinson, Daniel</name>
</author>
<author>
<name>Li, Yingjie</name>
</author>
<author>
<name>Maximilian Kuehn, Johannes</name>
</author>
<author>
<name>Liang, Rongjian</name>
</author>
<author>
<name>Ren, Haoxing</name>
</author>
<author>
<name>Yu, Cunxi</name>
</author>
<id>https://hdl.handle.net/1721.1/164871</id>
<updated>2026-03-08T03:22:54Z</updated>
<published>2025-07-11T00:00:00Z</published>
<summary type="text">MapTune: Versatile ASIC Technology Mapping via Reinforcement Learning Guided Library Tuning
Liu, Mingju; Robinson, Daniel; Li, Yingjie; Maximilian Kuehn, Johannes; Liang, Rongjian; Ren, Haoxing; Yu, Cunxi
Technology mapping involves mapping logical circuits to a library of cells. Traditionally, the full technology library is used, leading to a large search space and potential overhead. Motivated by randomly sampled technology mapping case studies, we propose the MapTune framework, which addresses this challenge by utilizing reinforcement learning to make design-specific choices during cell selection. By learning from the environment, MapTune refines the cell selection process, resulting in a reduced search space and potentially improved mapping quality. The effectiveness of MapTune is evaluated on a wide range of benchmarks, different technology libraries, and various technology mappers. The experimental results demonstrate that MapTune achieves higher mapping accuracy and reduces delay/area across diverse circuit designs, technology libraries, and mappers. The paper also discusses Pareto-optimal exploration and confirms the perpetual delay-area trade-off. Conducted on the ISCAS 85/89, ITC/ISCAS 99, VTR 8.0, and EPFL benchmark suites, the post-technology-mapping and post-sizing quality-of-results (QoR) are significantly improved, with an average Area-Delay Product (ADP) improvement of 16.56% across all exploration settings in MapTune. The improvements remain consistent for four different technologies (7 nm, 45 nm, 130 nm, and 180 nm) with various mappers from both state-of-the-art open-source and commercial synthesis tools.
</summary>
<dc:date>2025-07-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Two-Stage Approach to Improve Poverty Mapping Spatial Resolution</title>
<link href="https://hdl.handle.net/1721.1/164870" rel="alternate"/>
<author>
<name>Salas, Joaquín</name>
</author>
<author>
<name>Zea-Ortiz, Marivel</name>
</author>
<author>
<name>Vera, Pablo</name>
</author>
<author>
<name>Wood, Danielle</name>
</author>
<id>https://hdl.handle.net/1721.1/164870</id>
<updated>2026-03-08T03:39:51Z</updated>
<published>2026-01-28T00:00:00Z</published>
<summary type="text">A Two-Stage Approach to Improve Poverty Mapping Spatial Resolution
Salas, Joaquín; Zea-Ortiz, Marivel; Vera, Pablo; Wood, Danielle
Global extreme poverty has fallen dramatically over the past two centuries, yet hundreds of millions remain impoverished, underscoring the need for scalable monitoring tools. In Mexico, poverty metrics are available only sporadically in time and space (e.g., every 5 years at the municipal level), making it difficult for decision-makers to access reliable, up-to-date, and sufficiently detailed information and highlighting the need for higher-resolution, timely methods. To address this problem, we propose a two-stage approach that combines socioeconomic and Earth Observations-based data. Initially, a machine learning model maps census variables to official poverty indicators belonging to a multidimensional model, yielding fine-scale poverty estimates. A census-based model trained with eXtreme Gradient Boosting (XGBoost) achieved a coefficient of determination (R²) of approximately 0.842, indicating strong agreement with official poverty figures and providing high-resolution proxies. Afterward, we use features based on remote observations to predict these poverty estimates at a 469 m grid scale. In this case, advanced foundation models outperformed other machine learning (ML) approaches, achieving an R² of 0.683. While foundation models enable more accurate, fine-scale poverty mapping and could accelerate poverty assessments, their use comes at a heavy price in terms of carbon emissions.
</summary>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effectiveness of a Participatory Voice Intervention on Psychological Well-Being Among Warehouse Workers: Results From the Fulfillment Center Intervention Study, United States, 2021‒2023</title>
<link href="https://hdl.handle.net/1721.1/164869" rel="alternate"/>
<author>
<name>Siebach, Kirsten F.</name>
</author>
<author>
<name>Diaz-Linhart, Yaminette</name>
</author>
<author>
<name>Kubzansky, Laura D.</name>
</author>
<author>
<name>Berkman, Lisa F.</name>
</author>
<author>
<name>Wang, Molin</name>
</author>
<author>
<name>Ge, Lin</name>
</author>
<author>
<name>Kowalski, Alexander M.</name>
</author>
<author>
<name>Rahmandad, Hazhir</name>
</author>
<author>
<name>Kelly, Erin L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164869</id>
<updated>2026-03-08T03:39:54Z</updated>
<published>2026-01-15T00:00:00Z</published>
<summary type="text">Effectiveness of a Participatory Voice Intervention on Psychological Well-Being Among Warehouse Workers: Results From the Fulfillment Center Intervention Study, United States, 2021‒2023
Siebach, Kirsten F.; Diaz-Linhart, Yaminette; Kubzansky, Laura D.; Berkman, Lisa F.; Wang, Molin; Ge, Lin; Kowalski, Alexander M.; Rahmandad, Hazhir; Kelly, Erin L.
Objectives. To examine whether a novel workplace intervention designed to increase worker voice can reduce psychological distress and improve emotional vitality at 6- and 12-month follow-ups.&#13;
Methods. We conducted a cluster-randomized trial in 16 fulfillment centers throughout the United States between 2021 and 2023. Data were collected at three time points; 2813 workers participated in at least one survey. Treated fulfillment centers established a new, participatory committee called the Health and Well-Being Committee (HaWC). We compared differences in psychological distress and emotional vitality and explored differential treatment effects by gender.&#13;
Results. At baseline, the prevalence of moderate or severe psychological distress was 51%. Intervention sites had lower average psychological distress at the 6-month follow-up compared to control sites, with no significant differences at the 12-month follow-up. Gender moderation analyses suggest the HaWC was particularly effective in reducing psychological distress among men at the 6-month follow-up.&#13;
Conclusions. Our findings suggest that opportunities for workers to share concerns with a committee of their peers tasked with identifying solutions can support mental health. Our study contributes important experimental evidence on workplace interventions that improve the well-being of low-wage U.S. populations.
</summary>
<dc:date>2026-01-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>ML Prediction Models to Identify Novel Beyond Visual Range Tactics and Error Analysis for DARPA AIR Agents</title>
<link href="https://hdl.handle.net/1721.1/164868" rel="alternate"/>
<author>
<name>Li, William</name>
</author>
<author>
<name>Castor, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/164868</id>
<updated>2026-02-13T05:07:40Z</updated>
<published>2026-02-12T00:00:00Z</published>
<summary type="text">ML Prediction Models to Identify Novel Beyond Visual Range Tactics and Error Analysis for DARPA AIR Agents
Li, William; Castor, Jeremy
This paper investigates the utility of using machine learning models to predict the outcome of simulated 2 vs. 2 Tactical Intercept engagements flown by autonomous agents in support of the DARPA Artificial Intelligence Reinforcements (AIR) program. We investigated the performance of four models: Feed Forward Neural Network, Random Forest, Extreme Gradient Boost (XGBoost), and Long Short-Term Memory (LSTM). We examined their ability to successfully predict the outcomes of simulated engagements, tactical errors, and the execution of novel game plans by autonomous agents. The models were trained on 53 features pertaining to the agents, including distance between aircraft, altitude, speed, missile availability, and other event-based features from simulated runs. The LSTM model had the best performance toward the beginning of a run and was able to predict the correct winner with 87.8% accuracy only one minute into a run, while the XGBoost model achieved the best overall performance with a 91.7% classification accuracy and an R² of 0.712. The XGBoost model was also able to correctly predict the winner of 84.7% of the runs only seven minutes into the simulated engagement. These results demonstrate the utility of these models and the need for further investigation into the potential of other ML models to identify unique attributes and provide predictive analysis of more complex multi-agent scenarios that include additional criteria such as varying rules of engagement, acceptable levels of risk, and other requirements fighter pilots must take into account during the offensive and defensive operations needed to gain air superiority and support the objectives of the Joint Forces Commander.
</summary>
<dc:date>2026-02-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Open Access at MIT and Beyond: A White Paper of the MIT Ad Hoc Task Force on Open Access to MIT's Research</title>
<link href="https://hdl.handle.net/1721.1/164867" rel="alternate"/>
<author>
<name>Dunn, Katharine H.</name>
</author>
<author>
<name>Abelson, Harold</name>
</author>
<author>
<name>Bourg, Chris</name>
</author>
<author>
<name>Finnie, Ellen</name>
</author>
<id>https://hdl.handle.net/1721.1/164867</id>
<updated>2026-02-13T04:26:28Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">Open Access at MIT and Beyond: A White Paper of the MIT Ad Hoc Task Force on Open Access to MIT's Research
Dunn, Katharine H.; Abelson, Harold; Bourg, Chris; Finnie, Ellen
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling New Science with the ALMA Phasing System - Phase 2 (APP2):  An ALMA North America Development Project</title>
<link href="https://hdl.handle.net/1721.1/164864" rel="alternate"/>
<author>
<name>Matthews, L. D.</name>
</author>
<author>
<name>Crew, G. B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164864</id>
<updated>2026-03-04T18:32:50Z</updated>
<published>2024-10-18T00:00:00Z</published>
<summary type="text">Enabling New Science with the ALMA Phasing System - Phase 2 (APP2):  An ALMA North America Development Project
Matthews, L. D.; Crew, G. B.
This document provides a summary of activities undertaken as part of the Cycle 5 ALMA North America Development Project “Enabling New Science with the ALMA Phasing System - Phase 2 (APP2)”, whose period of performance extended from January 1, 2018 to August 31, 2024. APP2 provided a series of enhancements to ALMA’s very long baseline interferometry (VLBI) and phased array capabilities, leading to the introduction of submillimeter (Band 7) phasing and VLBI capabilities, a passive phasing mode, a Phased Array (pulsar) observing mode, a prototype spectral line VLBI capability, an improved method of handling baseband delays, and a number of other minor system enhancements.
This report was prepared as a final report on the activities undertaken under the NA Development program mentioned in the abstract.
</summary>
<dc:date>2024-10-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanisms of Interaction Between Hydraulic and Natural Fractures in Shale Rocks</title>
<link href="https://hdl.handle.net/1721.1/164863" rel="alternate"/>
<author>
<name>Arzuaga García, Ignacio Martín</name>
</author>
<id>https://hdl.handle.net/1721.1/164863</id>
<updated>2026-02-13T03:10:59Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Mechanisms of Interaction Between Hydraulic and Natural Fractures in Shale Rocks
Arzuaga García, Ignacio Martín
Understanding the interaction between hydraulically induced fractures and pre-existing natural fractures in geologic formations is key for optimizing subsurface energy systems that rely on fluid injection into fractured rocks. These include Enhanced Geothermal Systems (EGS), CO₂ sequestration, hydrogen storage in depleted reservoirs, unconventional oil and gas development in shale formations, and nuclear waste disposal, among others. In all these applications, controlling fracture propagation and interaction is essential for ensuring operational efficiency, safety, and long-term integrity of the system. This thesis presents a comprehensive experimental and theoretical investigation of hydraulic fracture (HF) interactions with natural fractures (NFs), using Opalinus Clayshale as a representative anisotropic material.&#13;
&#13;
The experimental work involved a series of hydraulic fracturing tests on Opalinus Clayshale specimens under controlled quasi-true-triaxial stress conditions, comparing normal and dried states. Novel monitoring techniques, including high-resolution imaging, high-speed video, acoustic emissions (AE), and pressure tracking, were employed to capture the fracturing process in real-time. Three dominant interaction modes (Crossing, Arrest, and Opening) were systematically characterized and linked to key parameters, including stress ratio, fracture geometry, and injection rates. A critical stress ratio (σ₁/σ₃) of approximately 20 was identified as the threshold for achieving fracture crossing under our experimental conditions: cohesionless, “open” natural fractures, with a low viscosity injection fluid, in a toughness-dominated regime. In dried specimens, high flaw pressurization rates were necessary to overcome matrix fluid loss and achieve crossing.&#13;
&#13;
To complement and interpret the experimental results, existing theoretical models were reviewed and implemented. Furthermore, a simplified version of the OpenT model (Chuprakov et al., 2014) was developed and applied to Opalinus Clayshale, incorporating stress, energy, friction, and permeability effects. By integrating laboratory results with theoretical frameworks, this thesis offers an integrated approach to the predictive understanding of fracture propagation in naturally fractured rocks, showing that the mechanism of interaction is determined not only by the characteristics of the discontinuity and the far-field stresses, but also by the dynamic energy balance at the fracture tip, which is influenced by injection rate, fluid viscosity, and discontinuity properties.&#13;
&#13;
Overall, this thesis bridges the gap between laboratory experiments and theoretical models, advancing a more comprehensive understanding of fracture propagation in naturally fractured media. The findings highlight the importance of considering both mechanical and hydraulic parameters, particularly in low-viscosity, toughness-dominated regimes, for accurately predicting fracture behavior.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Satellite Drag and Sustainable Space Operations in a Dynamic Thermosphere</title>
<link href="https://hdl.handle.net/1721.1/164862" rel="alternate"/>
<author>
<name>Parker, William E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164862</id>
<updated>2026-02-13T03:11:02Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Satellite Drag and Sustainable Space Operations in a Dynamic Thermosphere
Parker, William E.
Earth’s orbit has become increasingly congested and contested in recent years. The surge in launched payloads, combined with satellite failures, explosions, and collisions, has contributed to a large and growing population of orbital debris objects that can remain in orbit for decades, centuries, or longer. Meanwhile, decreasing launch costs and maturing satellite technology have created conditions favorable for rapid commercialization across orbital regimes, especially in low Earth orbit (LEO). Today, a small number of commercial entities operate the large majority of the world’s active satellites as part of proliferated LEO constellations. Sustaining productive activity in an increasingly crowded orbital environment has made satellite conjunction assessment and collision avoidance essential for safe operations. These efforts require not just accurate trajectory predictions, but also credible estimates of uncertainty. In LEO, variability in atmospheric drag is by far the dominant source of propagation error, often leading to deviations of several kilometers per day due to unpredictable solar and geomagnetic activity. Even over short timescales, trajectory prediction is challenging because existing forecasts exhibit limited predictive skill. Although forecast errors are often non-Gaussian and heteroscedastic, operational products are generally presented as deterministic, and atmospheric models rarely provide rigorous uncertainty characterization. This work introduces a new approach for probabilistic satellite drag modeling based on historical correlations between space weather drivers and satellite dynamics. Unlike traditional methods, it models satellite behavior directly without reconstructing thermospheric mass density or requiring detailed knowledge of satellite properties such as the ballistic coefficient. This end-to-end strategy offers substantial computational and operational advantages for many space domain awareness tasks. 
Capturing both trajectory predictions and their associated uncertainty is critical for enabling informed collision avoidance decisions, particularly during geomagnetic storms when current infrastructure frequently fails. Because the orbital lifetime of debris objects can exceed hundreds of years, population dynamics in space critically depend on long-term variability in the composition of Earth’s thermosphere. Rising concentrations of carbon dioxide and other greenhouse gases have caused warming in the troposphere but cooling and contraction in the upper atmosphere. This contraction decreases atmospheric density in LEO, reducing drag and extending the orbital lifetime of debris objects. Longer-lived debris populations pose a persistent collision hazard for all active satellites as long as they remain in orbit. Even natural events, such as a prolonged grand solar minimum, could further reduce thermospheric density and contribute to longer debris lifetimes in LEO. With little ability to predict such an event, it is necessary to understand the potential consequences and to identify strategies that enable the continued safe and productive use of LEO. This work models the impact of such long-term environmental changes on limits for sustainable satellite deployments. LEO is a finite resource increasingly at risk of overexploitation. Conserving it and sharing it fairly requires that we first understand its fundamental capacity and our current occupation of that capacity. Some metrics have been proposed to measure the satellite carrying capacity of Earth’s orbit, but none have previously accounted for the potential influence of a changing space climate. This work develops new methods for defining carrying capacity as a common currency, enabling clear constraint-driven thresholds on activity and a better understanding of how existing and proposed missions consume available capacity. 
These new metrics provide insight into how environmental variability may affect the long-term sustainability of operations in LEO. Respecting and understanding this influence that the natural environment has on our collective ability to operate spacecraft in LEO is critical to preventing the overexploitation of this regime and protecting it for future generations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Large Language Models as Circuit Design Assistants</title>
<link href="https://hdl.handle.net/1721.1/164861" rel="alternate"/>
<author>
<name>Cox, Matthew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164861</id>
<updated>2026-02-13T03:49:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Evaluating Large Language Models as Circuit Design Assistants
Cox, Matthew J.
Large language models (LLMs) have exploded in capability in recent years. Previous attempts at AI systems for circuit design have had limited proficiency and been restricted in problem scope. LLMs, with their breadth of knowledge and reasoning ability, are a promising technology for a much more general-purpose circuit design assistant. We developed a dataset of electrical engineering problems and solutions with which to test an LLM-based system, since no such publicly available dataset exists to our knowledge; unmodified GPT-4 was able to solve 42% of the problems. We did a preliminary comparison of several knowledge bases to use for RAG knowledge injection, finding that a small, curated set of resources performed better than a larger, less-focused set of resources, though there were confounding factors which may have skewed the result. While this work is a start, significant future work is needed to continue developing an LLM-based circuit design assistant.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration</title>
<link href="https://hdl.handle.net/1721.1/164860" rel="alternate"/>
<author>
<name>Nguyen, Gary</name>
</author>
<id>https://hdl.handle.net/1721.1/164860</id>
<updated>2026-02-13T03:49:33Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration
Nguyen, Gary
Code coverage is a longstanding metric for evaluating how thoroughly a program has been tested. Achieving high coverage remains a priority goal for quality assurance and software stability. Exhaustive enumeration of possible input paths to every code region is desirable in theory but computationally infeasible in practice, especially in large-scale codebases. Fuzzing is a widely used technique for input generation and is effective at exploring smaller programs but often struggles with more complex conditional logic and nested modules. Concolic execution, which exhaustively explores paths using constraint solving, can work effectively with complex conditional logic but suffers from path explosion. Targeted branch exploration is a similar approach for input generation but sidesteps the path explosion problem by focusing more on specific constraint paths of interest.&#13;
&#13;
In this thesis, I introduce a hybrid system that combines fuzzing and targeted branch exploration with the goal of improving code coverage by leveraging the complementary strengths of each. The system uses fuzzing to quickly generate a broad input corpus and follows up with targeted branch exploration to explore paths that fuzzing struggles to reach. Findings from experiments on two C projects of different complexities show that the system did not outperform the individual techniques in terms of raw coverage, revealing limitations of the approach and opportunities for future improvement.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Essays in Financial Economics and Econometrics</title>
<link href="https://hdl.handle.net/1721.1/164859" rel="alternate"/>
<author>
<name>Orestes, Victor M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164859</id>
<updated>2026-02-13T03:10:50Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Essays in Financial Economics and Econometrics
Orestes, Victor M.
This thesis comprises three essays in finance and econometrics. The first two essays focus on the role of credit access and liquidity in shaping real firm outcomes. The first essay examines the transmission of modern monetary policy through corporate asset markets. Exploiting quasi-experimental variation in the Central Bank of Brazil’s collateral framework and implementing a novel dynamic regression discontinuity design, it shows that monetary policy can ease expected future borrowing constraints, reduce firms’ precautionary cash holdings, and stimulate employment. The second essay analyzes how receivables financing through factoring helps firms smooth cash flows. Using a shift-share instrument and matched administrative data, it finds that cheaper liquidity leads firms to rely more on permanent labor. The third essay develops a new method for distributional inference—nonparametric quantile mixture models. This framework can be applied to financial settings such as tail risk estimation and density forecasting, as well as to causal inference when the objective is to estimate the distributional effects of interventions. It is used here to quantify the heterogeneous wage effects of a major environmental disaster.&#13;
&#13;
The first chapter (joint with Luis Alvarez and Thiago Christiano Silva) studies how modern monetary policy tools, which increasingly operate through corporate asset markets, affect real firm outcomes. We exploit quasi-experimental variation from the inclusion of specific corporate debt instruments in the Central Bank of Brazil’s collateral framework and implement a novel dynamic regression discontinuity design. We find that eligibility increases firms’ debt issuance, modestly lowers spreads, and reduces cash holdings, reflecting a decline in precautionary savings. These effects translate into higher employment and greater supply chain liquidity. We interpret the mechanism through the lens of segmented financial markets: by relaxing firms’ expected future borrowing constraints, the policy acts as a persistent borrowing subsidy and liquidity injection. This encourages firms to reduce cash hoarding and expand production. Using a semi-structural framework calibrated to our reduced-form estimates, we find that an induced 0.8% borrowing subsidy leads to a 1% increase in debt issuance, a 0.2% reduction in cash holdings, and a 0.4% increase in the wage bill.&#13;
&#13;
The second chapter (joint with Thiago Christiano Silva and Henry Zhang) shows that firms experience large increases in sales and purchases after receiving cheaper liquidity. We focus on factoring, defined as the supplier-initiated sale of receivables. In Brazil, receivables funds (FIDCs) securitize receivables for institutional investors. By assembling a novel transaction-level dataset of factoring with other credit operations for all registered firms and FIDCs, we construct a shift-share instrument for factoring financing supply based on FIDC flows. We then use a novel combination of electronic payments, trade credit, and employer-employee matched data to estimate the impacts. A flow-induced increase in receivables demand reduces firms’ factoring interest rate. In response, firms demand more permanent labor and less temporary labor. In our model, these effects arise from factoring’s role in reducing cash inflow volatility, helping firms match inflows to outflows, which firms otherwise achieve at an efficiency cost through substitution across labor types.&#13;
&#13;
The third chapter (joint with Luis Alvarez) introduces nonparametric quantile mixture models as a computationally convenient and flexible alternative to standard nonparametric density mixtures, which are widely used in Statistics and Econometrics but face significant computational and inferential challenges. We propose a sieve estimator based on a generalized method of L-moments and develop a full inferential theory. In doing so, we contribute to the statistical literature by extending a numerical bootstrap method to high-dimensional settings. As a direct application of our theory, we provide the first inference method for the distributional synthetic controls of Gunsilius (2023), a novel tool for counterfactual analysis that previously lacked formal inference procedures. We apply this method to evaluate the effects of the Brumadinho dam collapse—a large-scale environmental disaster—on the local wage distribution. The results reveal substantial heterogeneity across the distribution, with evidence of displacement effects in which median-paying jobs are replaced by lower-wage contracts.&#13;
JEL Codes: C1, E4, E5, G2, G3
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine-Learned Representations of Basis Sets and Their Application in Quantum Computational Chemistry</title>
<link href="https://hdl.handle.net/1721.1/164858" rel="alternate"/>
<author>
<name>He, Wenhao</name>
</author>
<id>https://hdl.handle.net/1721.1/164858</id>
<updated>2026-02-13T03:49:30Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Machine-Learned Representations of Basis Sets and Their Application in Quantum Computational Chemistry
He, Wenhao
Quantum simulations of electronic structure promise to deliver significant speedups over classical methods, but remain limited by the number of qubits on near-term devices. A key strategy to reduce quantum resource requirements is to truncate the molecular Hilbert space via compact and efficient basis sets. However, most optimized basis sets either rely on predefined heuristics or require expensive classical computations, such as CASSCF orbital optimization or ℓ1-norm minimization of the Hamiltonian. In this work, we introduce a general machine learning framework for fast basis set prediction in quantum computational chemistry. Our method employs an equivariant graph neural network that outputs a Hermitian matrix encoding optimized molecular orbitals. The eigenvectors of this matrix define a transferable and efficient basis set, trained on orbitals obtained via CASSCF and Hamiltonian ℓ1 norm optimization. We evaluate our model on hydrogen chains and demonstrate that the predicted bases achieve energy accuracy and Hamiltonian sparsity comparable to orbital-optimized methods, while reducing classical preprocessing time. In addition, the predicted orbitals can be directly used as high-quality initial guesses for CASSCF calculations, further accelerating their convergence.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Signaling at the Tumor-Immune Interface in Glioblastoma</title>
<link href="https://hdl.handle.net/1721.1/164857" rel="alternate"/>
<author>
<name>D'Souza, Alicia D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164857</id>
<updated>2026-02-13T03:10:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Signaling at the Tumor-Immune Interface in Glioblastoma
D'Souza, Alicia D.
Glioblastoma (GBM) is a devastating brain cancer, and the standard of care has not changed in over 20 years. GBM tumors are composed of a milieu of cancer cells and innate immune cells, which are co-opted by the cancer cells to promote an anti-inflammatory environment. Despite the tremendous success of immunotherapy in several cancers over the past 10 years, immunotherapies have failed to show efficacy in GBM. A systems biology approach to characterizing temporal changes at the tumor-immune interface of glioblastoma could illuminate new strategies to activate an anti-tumor immune response by examining changes in cell signaling and antigen presentation.&#13;
&#13;
In the first part of my thesis, I investigated how macrophages alter their phenotype in response to tumor co-culture and how these changes are reflected at the level of the phosphoproteome. To characterize signaling changes in distinct cell populations during co-culture, I developed a method to preserve and analyze cell-type-specific signaling using fixation. This approach enables phosphoproteomic profiling of two interacting cell types, capturing dynamic signaling events with cell-type resolution. I applied this method to study co-cultures of glioblastoma (GBM) cells and primary human macrophages. When cultured together, GBM cells induced an anti-inflammatory, immunosuppressive phenotype in macrophages, mirroring features observed in the glioblastoma tumor microenvironment. Phosphoproteomic analysis revealed that this phenotypic shift was accompanied by distinct signaling alterations in macrophages, including the upregulation of ABL kinase activity. To test this finding, I treated macrophages with an ABL kinase inhibitor and observed a reduction in the anti-inflammatory phenotype, suggesting that ABL signaling plays a role in supporting immunosuppressive macrophage polarization. Furthermore, in a mouse model of GBM, treatment with an ABL kinase inhibitor led to a reduction in the abundance of anti-inflammatory macrophages within the tumor and was associated with a modest extension of survival.&#13;
&#13;
In the second part, I examined changes in antigen presentation and signaling in glioblastoma tumors in response to treatment with an oncolytic virus (OV). In patient-derived xenograft (PDX) tumor models, mice treated with OV show increased antigen presentation, pointing to the use of OV therapy to reshape the tumor microenvironment to a more inflammatory state. Finally, tissue obtained from serial biopsies of GBM patients treated with OV shows an increase in antigen presentation and in both Class I and Class II MHC protein expression. We also observed an increase in interferon alpha and interferon gamma signaling pathways as well as early induction of apoptotic pathways. These findings highlight the role of therapeutics in altering the tumor microenvironment and potentially priming it for combination immunotherapies. This thesis explores the dynamic nature of the tumor and immune compartments in glioblastoma and underscores how therapies can act on the immune compartment to promote anti-tumor activity.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SmellNet: A Large-scale Dataset for Real-world Smell Recognition</title>
<link href="https://hdl.handle.net/1721.1/164856" rel="alternate"/>
<author>
<name>Feng, Dewei</name>
</author>
<id>https://hdl.handle.net/1721.1/164856</id>
<updated>2026-02-13T03:49:31Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">SmellNet: A Large-scale Dataset for Real-world Smell Recognition
Feng, Dewei
The ability of AI to sense and identify various substances based on their smell alone can have profound impacts on allergen detection (e.g., smelling gluten or peanuts in a cake), monitoring the manufacturing process, and sensing hormones that indicate emotional states, stress levels, and diseases. Despite these broad impacts, there are virtually no large-scale benchmarks, and therefore little progress, for training and evaluating AI systems’ ability to smell in the real world. In this paper, we use portable gas and chemical sensors to create SmellNet, the first large-scale database that digitizes a diverse range of smells in the natural world. SmellNet contains about 180,000 time steps of 50 substances (spanning nuts, spices, herbs, fruits, and vegetables) with 50 hours of data. Using SmellNet, we trained AI models for real-time classification of substances based on their smell alone. Our best methods leverage sequence models, contrastive learning to integrate high-resolution Gas Chromatography–Mass Spectrometry molecular data, and a new temporal difference method that identifies sharp changes in sensor readings. Our best models achieve up to 65.35% accuracy on pre-recorded data, and generalize to real-world conditions with 10.71% accuracy on nuts and 25.38% on spices in the challenging 50-way online classification task. Despite these promising results, SmellNet highlights many technical challenges in building AI for smell, including richer feature learning, on-edge smell models, and robustness to environmental changes.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization</title>
<link href="https://hdl.handle.net/1721.1/164855" rel="alternate"/>
<author>
<name>Meindl, Jamison Chivvis</name>
</author>
<id>https://hdl.handle.net/1721.1/164855</id>
<updated>2026-02-13T03:49:16Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization
Meindl, Jamison Chivvis
Global optimization of expensive, derivative-free black-box functions requires extreme sample efficiency. While Bayesian optimization (BO) is the current state-of-the-art, its performance hinges on surrogate and acquisition function hyperparameters that are often hand-tuned and fail to generalize across problem landscapes. We present ZeroShotOpt, the first general-purpose, pretrained model for continuous black-box optimization tasks ranging from 2D to 20D. Our approach leverages offline reinforcement learning on large-scale optimization trajectories collected from 12 BO variants. To scale pretraining, we generate millions of synthetic Gaussian process-based functions with diverse landscapes, enabling the model to learn transferable optimization policies. As a result, ZeroShotOpt achieves robust zero-shot generalization on a wide array of unseen synthetic and real-world benchmarks, matching or surpassing the sample efficiency of leading global optimizers, including BO, while also offering a reusable foundation for future extensions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temperature Characterization of Colloidal Quantum Dot Light Emitting Diodes</title>
<link href="https://hdl.handle.net/1721.1/164854" rel="alternate"/>
<author>
<name>Nguyen, Thienan D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164854</id>
<updated>2026-02-13T03:49:26Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Temperature Characterization of Colloidal Quantum Dot Light Emitting Diodes
Nguyen, Thienan D.
Colloidal quantum dot light emitting diodes have emerged as promising candidates for the next generation of display technologies. Their brighter emission, greater color purity, and higher efficiency make them highly desirable in consumer electronics. As such, research into the performance and stability of these novel LEDs is crucial for their operation in displays. These investigations are ongoing, with focused efforts on improving operating stability through different quantum dot materials and passivation methods. However, less attention has been paid to confidently understanding the fundamental relationships between current, voltage, and luminance by which these devices operate. These electrical characteristics reveal insights into the operation of these devices and the behavior of charge carriers. Additionally, temperature-dependent electrical measurements can expose distinct behavior at different temperatures and deviations from the expected performance, revealing temperature-dependent processes and yielding a better understanding of how the device operates. In this thesis, the temperature-dependent electrical characteristics of quantum dot light emitting diodes were investigated by measuring the current-voltage-luminance (JVL) relationships at various cryogenic temperatures, ranging from 78 K (the boiling point of liquid nitrogen) to 293 K (room temperature). This investigation revealed the temperature-dependent nature and origin of the turn-on voltage, current, EQE, EQE roll-off, and hysteresis.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering</title>
<link href="https://hdl.handle.net/1721.1/164853" rel="alternate"/>
<author>
<name>Rich, Benjamin R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164853</id>
<updated>2026-02-13T03:49:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering
Rich, Benjamin R.
Knowledge Graph Question Answering (KGQA) encompasses a set of techniques aimed at generating accurate, interpretable responses to natural language queries posed over structured, graph-based datasets. Recent approaches to KGQA involve reducing the knowledge graph (KG) to a relevant subgraph, which is then encoded in natural language as a series of triples (subject, predicate, object) and passed to a large language model (LLM) for interpretation and answer generation. These methods have shown state-of-the-art accuracy. However, this paradigm is undermined by a critical vulnerability: the retrieval of irrelevant or erroneous facts can amplify LLM hallucinations and degrade system trustworthiness, while the reasoning process remains opaque. This thesis addresses this challenge by extending an existing state-of-the-art KGQA architecture with uncertainty-aware subgraph retrieval methods. To achieve this, we modify the retrieval component to learn the epistemic uncertainty of each candidate triple’s relevance to a given query. We implement these modifications using Bayesian methods and learn a well-calibrated approximation of the posterior distribution over triple relevance. By explicitly modeling this uncertainty, the retriever model is shown to provide a fine-grained confidence score for each piece of evidence. We expose these metrics downstream to the LLM during reasoning and evaluate whether LLMs can reason over uncertainty-related metrics to improve KGQA. We find that LLMs cannot reason effectively over uncertainties in most cases, but that agentic workflows that provide selective access to uncertainty metrics may enhance performance. We evaluate our approach against established benchmarks using HIT-rate and set-comparison accuracy metrics. Additionally, we introduce reasoning-path and statistical trust metrics derived from calibrated uncertainty scores. 
Our analysis reveals a significant positive correlation between path-based uncertainty metrics and the veracity of the LLM’s answers. These findings establish a robust foundation for developing uncertainty-grounded trust mechanisms in LLM-agnostic KGQA systems. As a proof of concept, a lightweight classifier trained exclusively on the LLM’s inputs and outputs demonstrates substantial predictive power in identifying correct responses. Finally, we briefly explore using uncertainty to identify out-of-distribution (OOD) queries.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applied Compiler Optimizations for Proving Code</title>
<link href="https://hdl.handle.net/1721.1/164852" rel="alternate"/>
<author>
<name>Ruiz, Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/164852</id>
<updated>2026-02-13T03:49:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Applied Compiler Optimizations for Proving Code
Ruiz, Ricardo
The recent popularity of massively distributed, trustless systems has created a demand for cryptographic proofs: systems to prove that a piece of data is a valid output for a given program. These systems exist, but face very high runtimes for the generation of proofs. Significant effort has been invested in optimizing the prover systems, but relatively less has been focused on optimizing the code that gets read as an input. This paper proposes a new approach to optimizing prover systems by modifying the compiler to produce proof-ready code. It introduces a benchmarking framework for comparing the relative proof costs of RISC-V instructions; the resulting analysis finds that shift instructions do not offer heavy savings over multiplication. The finding suggests that strength reduction, a fundamental optimization in modern compilers, can sabotage end-to-end performance. The paper proposes methods for applying this knowledge to better optimize code, leaving the door open for future researchers to continue to make code proofs more performant and accessible.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reconstructing Cross-Species Ancestral Adeno-Associated Viruses for Enhanced Gene Therapy Delivery</title>
<link href="https://hdl.handle.net/1721.1/164850" rel="alternate"/>
<author>
<name>Xie, Yuxin</name>
</author>
<id>https://hdl.handle.net/1721.1/164850</id>
<updated>2026-02-13T03:49:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reconstructing Cross-Species Ancestral Adeno-Associated Viruses for Enhanced Gene Therapy Delivery
Xie, Yuxin
Adeno-associated viruses (AAV) are one of the most promising vectors for gene therapy because of their established safety, low immunogenicity, and capability to achieve sustained gene expression. However, many naturally occurring AAV variants have limitations in their potency, particularly in penetrating biological barriers like the blood-brain barrier (BBB). Additionally, their broad and nonspecific tropism can translate into suboptimal cross-species transduction efficiency and potential toxicity, complicating the clinical transition from animal models to humans. These challenges impede the use of naturally occurring AAVs for therapeutic gene delivery in many neurological disorders, such as autism spectrum disorders (ASD), Parkinson’s disease (PD), and Huntington’s disease (HD), as well as other systemic conditions like cystic fibrosis (CF). To overcome these barriers, we developed a computational framework based on ancestral sequence reconstruction (ASR) to engineer synthetic ancestral AAV capsids with the goal of enhancing targeting specificity and potency. We first validated this computational framework by replicating the previously engineered Anc80L65 capsid. Then, with 75 naturally occurring functional AAV sequences and additional experimentally screened variants exhibiting brain-targeting potency, we built an evolutionary framework. We applied multiple computational methods such as enhanced multiple sequence alignment, maximum-likelihood-based phylogenetic tree inference, and ancestral sequence reconstruction with Bayesian inference. With this methodology, we predicted several novel ancestral AAV capsid sequences at critical evolutionary nodes, particularly those representing functional transitions with potentially improved blood-brain barrier penetration and CNS tropism. Our computational framework thus streamlines and accelerates the process of designing ancestral AAV variants for targeted gene therapy applications.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intercellular flow-mediated force relaxation measurement on the three-dimensional multicellular tissue</title>
<link href="https://hdl.handle.net/1721.1/164849" rel="alternate"/>
<author>
<name>Liu, Fan</name>
</author>
<id>https://hdl.handle.net/1721.1/164849</id>
<updated>2026-02-13T03:10:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Intercellular flow-mediated force relaxation measurement on the three-dimensional multicellular tissue
Liu, Fan
Three-dimensional (3D) multicellular tissues are increasingly favored over 2D monolayers or single cells; their mechanical properties, such as stiffness, surface tension, and viscosity, have been shown to relate to diseases like fibrosis and tumor metastasis. Multicellular tissues have traditionally been modeled as viscoelastic materials based on their apparent shape rearrangement, a view that largely ignores the internal structure, including the extracellular matrix (ECM) and the resulting intercellular water flow. This intercellular communication often carries significant information about diseases such as tumor invasion, but direct supporting evidence of this behavior is lacking. In this work, we investigate the bulk response of 3D multicellular tissues to such intercellular flows and explore the underlying mechanism through a tailored micro-mechanics platform. &#13;
First, we design and build a micro-mechanics platform based on the parallel-plate compression (PPC) method. We adopt a precise micro-balance as the sensor to detect force variations in the sample during compression, and incorporate a piezo linear stage to apply the required tiny vertical displacements. In addition, a lateral microscope monitors the compression process in real time. This platform has proved applicable to various samples, including hydrogels, cell spheroids, and natural tissues or organs. &#13;
Next, we propose a critical criterion, the size dependence of the force relaxation time, to distinguish between two material behaviors: viscoelasticity and poroelasticity. In a poroelastic material, force relaxation is driven by water redistribution, so its speed depends strongly on sample size. In a viscoelastic material, by contrast, relaxation is set by the bulk material properties and is therefore independent of size. We verify this criterion theoretically via Abaqus simulations and experimentally on classic poro-/visco-elastic materials of various dimensions. &#13;
We then apply the size-dependence criterion to 3D multicellular tissues to identify poro- or visco-elasticity in this biomaterial. Performing PPC on cell spheroids of different sizes, we observe that the force relaxation times are linearly proportional to sample size for all tested cell lines, demonstrating poroelasticity within our experimental time range. Intriguingly, tests on a natural organ, the mouse islet, show the same linear correlation. Hence, both cultured spheroids and natural tissues are poroelastic.&#13;
Finally, we explore the mechanism governing poroelasticity inside 3D multicellular tissues. By inhibiting cell-cell junctions, we demonstrate that intercellular water flow through the extracellular gaps dominates the poroelastic force relaxation in this biomaterial. Further experiments show that the stiffness of the structure and the extracellular gaps within the tissue jointly control the intercellular water flow: the stiffer the structure and/or the larger the gaps, the faster the water flows, and the more quickly the force decays after compression.&#13;
These findings highlight the fundamental role of intercellular water flow in the mechanical properties of 3D multicellular tissues. The designed micro-mechanics platform also benefits tissue-level research involving micro-newton forces, supporting the development of artificial organoids for early disease diagnosis and treatment.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Unprecedented Extreme Scenarios with Limited Data</title>
<link href="https://hdl.handle.net/1721.1/164848" rel="alternate"/>
<author>
<name>Chang, Kai</name>
</author>
<id>https://hdl.handle.net/1721.1/164848</id>
<updated>2026-02-13T03:49:19Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Generating Unprecedented Extreme Scenarios with Limited Data
Chang, Kai
Quantifying and predicting rare and extreme events remains a crucial yet challenging task in understanding complex dynamical systems, which are ubiquitous in science and engineering. Many practical challenges arise from the infrequency and severity of these events, including the considerable variance of simple sampling methods and the substantial computational cost of high-fidelity numerical simulations. Numerous data-driven methods have recently been developed to tackle these challenges. However, a typical assumption for the success of these methods is the occurrence of multiple extreme events, either within the training dataset or during the sampling process. This leads to accurate models in regions of quiescent events but with high epistemic uncertainty in regions associated with extremes. To overcome this limitation, we introduce the Extreme Event Aware (e2a, or η-) learning framework, which does not assume the existence of extreme events in the available data. η-learning reduces the uncertainty even in ‘uncharted’ extreme event regions by enforcing the extreme event statistics of a few observables during training; these statistics can be available or assumed through qualitative arguments or other forms of analysis. This type of statistical regularization results in models that fit the observed data while remaining consistent with the prescribed statistics of some observables, enabling the generation of unprecedented extreme events even when the training data lack extremes. Theoretical results based on optimal transport offer a rigorous justification and highlight the optimality of the introduced method. Additionally, extensive numerical experiments illustrate the favorable properties of the η-learning framework on several prototype problems and real-world precipitation downscaling problems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A*-Decoding: Token-Efficient Inference Scaling</title>
<link href="https://hdl.handle.net/1721.1/164846" rel="alternate"/>
<author>
<name>Chatziveroglou, Ioannis</name>
</author>
<id>https://hdl.handle.net/1721.1/164846</id>
<updated>2026-02-13T03:49:18Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A*-Decoding: Token-Efficient Inference Scaling
Chatziveroglou, Ioannis
Inference-time scaling has emerged as a powerful alternative to parameter scaling for improving language model performance on complex reasoning tasks. While existing methods have shown strong performance gains under fixed compute budgets, there has been little focus on optimally utilizing that budget during inference. In this work, we introduce A*-decoding, a search-based inference-time strategy that builds on the A* search algorithm to optimally utilize a fixed compute budget by prioritizing high-quality reasoning paths during generation. We frame language model decoding as a structured search in a state space of partial solutions, applying the A* transition model to identify promising continuations guided by an external process supervision signal. In our experiments, A*-decoding reaches the performance levels of strong inference scaling baselines like best-of-N and particle filtering while using up to 3x fewer tokens and 30% fewer PRM passes under equivalent compute budgets. On the MATH500 and AIME 2024 benchmarks, A*-decoding enables Llama-3.2-1B-Instruct to match the performance of the 70x larger Llama-3.1-70B-Instruct, and allows Qwen3-1.7B to reach o1-like reasoning accuracy. These results highlight the power of structured search in decoding, offering an alternative to brute-force sampling or scale-driven gains. Our work demonstrates how thoughtful inference-time strategies can enhance reasoning in SLMs, pointing toward future advances in more efficient and scalable language model deployment.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics</title>
<link href="https://hdl.handle.net/1721.1/164845" rel="alternate"/>
<author>
<name>Varma, Vikram</name>
</author>
<id>https://hdl.handle.net/1721.1/164845</id>
<updated>2026-02-13T03:49:27Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics
Varma, Vikram
Imaging the structural and functional connections between cells in the brain allows neuroscientists to understand the brain by studying neuronal wiring diagrams. To automatically segment and classify the images used to construct these neuronal wiring diagrams, or connectomes, today's machine learning segmentation techniques require images scanned with an electron microscope at either a slow dwell time or with small pixel sizes. However, a scalable and more rapid implementation of connectome construction has not yet been realized because of the significant cost of multi-beam electron microscopes and the relatively slow rate at which connectomes can be constructed using a single-beam electron microscope. Segmented connectomes include sections that can be segmented properly from a fast-scanned image as well as sections that require slow scanning for proper segmentation. A potential way to reduce the time in which connectomes are produced and segmented is therefore to first scan samples quickly and perform segmentation using a convolutional neural network, identify the areas of interest that require more detailed imaging through a learning-based error detection network, and then rescan only those high-interest areas to produce a fused image for segmentation. This thesis analyzes various machine learning methods for segmentation using the U-Net network and reviews proposed enhancements to the U-Net network that can better utilize electron microscopy images for the construction of segmented connectomes. The successful use of fused electron microscopy images could enable higher-speed and lower-cost electron microscopy imaging for connectomics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications</title>
<link href="https://hdl.handle.net/1721.1/164844" rel="alternate"/>
<author>
<name>Zhang, Erin Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/164844</id>
<updated>2026-02-13T03:49:21Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications
Zhang, Erin Wei
Waveguide-integrated devices that operate in the mid-infrared (mid-IR) wavelength range (2.5-12 µm) are used for sensing the fundamental absorption bands of a variety of molecules. Germanium (Ge) is commonly used for photodetection in the near-infrared (near-IR) wavelength range of 1.2-1.6 µm due to its strong absorption from a 0.8 eV direct band gap. At longer wavelengths in the mid-IR range, Ge exhibits transparency that makes it a desirable waveguide material for sensing applications. Its epitaxial growth compatibility with silicon (Si) substrates makes Ge-on-Si an effective platform for mid-IR waveguides. For back-end-of-line (BEOL) integration of waveguides in sensing applications, the thermal budget limits processing temperatures to below 450°C. In this work, we investigated the use of h-line exposure as a commercially viable, low-cost option for patterning low-temperature (LT) Ge-on-Si waveguides using direct-write lithography. Waveguide dimensions for optimal confinement in single-mode transverse electric (TE) polarization at wavelengths of 3 µm and 10.4-11.3 µm were modeled, and the direct lithography process was refined. Through dose testing and adjustments to the raster direction and pixel resolution, it was found that direct-write lithography lacked the resolution required for low-loss waveguides. Scanning electron microscopy (SEM) revealed inconsistent waveguide widths and sidewall roughness, and e-beam lithography was identified as the preferred lithography process. For future integration of LT-Ge in a foundry process design kit (PDK), a universal thickness of 1.7 µm was found to support single-mode waveguide operation across the 3-11.3 µm wavelength range.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Log-Based Coordination Systems for Managed Cloud Environments</title>
<link href="https://hdl.handle.net/1721.1/164843" rel="alternate"/>
<author>
<name>Jimenez, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/164843</id>
<updated>2026-02-13T03:49:14Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Assessing Log-Based Coordination Systems for Managed Cloud Environments
Jimenez, Gabriel
The distributed systems landscape is undergoing a significant shift toward managed cloud environments, reducing the prevalence of self-hosted coordination services such as ZooKeeper. While ZooKeeper remains a proven and feature-rich solution for coordination tasks, its deployment in cloud environments can introduce component redundancy, because the underlying cloud platform already provides internal mechanisms to ensure coordination guarantees. This thesis investigates the design and evaluates the performance of a log-based coordination service library tailored for managed cloud environments. The proposed library removes the ensemble management overhead inherent in ZooKeeper by delegating durability and consistency responsibilities to the cloud provider’s data layer. This architectural simplification enables a modular design, allowing for tailored implementations that exploit the strengths and mitigate the limitations of a system's specified data layer. The library demonstrated feature parity with ZooKeeper for a targeted subset of coordination features, including leader election, membership tracking, and ephemeral state management. Likewise, migrating an existing ZooKeeper-based application to this work's library required minimal design changes while preserving coordination guarantees. While the results show that this design does not yet match mature coordination services in raw performance, they highlight potential avenues for further research, particularly in optimizing log-based coordination systems for the unique characteristics of cloud-managed data layers. Given the industry’s steady movement toward cloud-native infrastructure, these findings provide a foundation for future exploration into lightweight, platform-integrated coordination solutions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Language Comprehension, Production, and Reasoning in&#13;
Humans and Neural Language Models</title>
<link href="https://hdl.handle.net/1721.1/164842" rel="alternate"/>
<author>
<name>Eisape, Tiwalayo</name>
</author>
<id>https://hdl.handle.net/1721.1/164842</id>
<updated>2026-02-13T03:10:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Language Comprehension, Production, and Reasoning in&#13;
Humans and Neural Language Models
Eisape, Tiwalayo
How closely do neural language models mirror human language processing, and what can this alignment teach us about cognition? This dissertation presents convergent evidence in comprehension, production, and reasoning that neural language models (LMs) can serve as productive instruments for understanding naturalistic human language use at scale. Studies 1-2 examine comprehension with complementary methods. First, Cloze Distillation, a novel method for aligning models with human next-word predictions, improves both language modeling and reading time prediction, demonstrating that LMs and humans make distinct, complementary predictions. Second, new methods for identifying syntactic information in LM hidden states demonstrate that models learn to implicitly represent incremental syntactic state. These probes also enable targeted interventions, allowing us to manipulate representations to resolve (or induce) temporary misinterpretations, confirming mechanistic understanding. While these studies demonstrate prediction’s role in comprehension, a complete account requires examining whether these mechanisms also shape how humans produce language in real time. Study 3 analyzes a massive corpus of 2.3 million competitive typing events from TypeRacer.com, uncovering the first evidence of in-context predictability effects in this domain of production. Finally, Study 4 compares human and LM reasoning systematically: LMs achieve higher syllogistic reasoning accuracy than humans while still replicating several fine-grained human-like error patterns that are orthogonal to logical accuracy, including premise ordering effects. These converging findings reveal prediction as a fundamental mechanism in comprehension, production, and reasoning in both humans and LMs.
While models achieve this through statistical learning rather than specialized cognitive architecture—often outperforming humans yet replicating their systematic biases—this alignment supports predictive processing theories of cognition. This work establishes LMs as scalable cognitive laboratories that can complement traditional experiments, and contributes psycholinguistically principled methods for understanding and controlling LMs.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks</title>
<link href="https://hdl.handle.net/1721.1/164841" rel="alternate"/>
<author>
<name>Echezona, Chukwuemekalum</name>
</author>
<id>https://hdl.handle.net/1721.1/164841</id>
<updated>2026-02-13T03:49:20Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks
Echezona, Chukwuemekalum
As the Internet continues to grow in size and complexity, Distributed Denial of Service (DDoS) attacks grow alongside it. One particularly common form of DDoS attack is the TCP SYN flood, which exploits the TCP handshake process to exhaust server resources. This thesis investigates a novel proof-of-work (PoW) based mitigation method for such attacks, specifically in the context of WebRTC video conferencing applications. PoW shifts the computational burden from the server to the client by means of a hard-to-solve puzzle that is easily verifiable. Guided by the same evaluation framework used by the original contributors, we conducted controlled experiments using SPHERE, a national research testbed, and the open-source Jitsi Meet video conferencing application to simulate DDoS attacks and measure their impact on video quality metrics such as upload/download bitrate and video framerate. Our experiments covered multiple scenarios with and without active attacks and with and without PoW mitigation active. Results demonstrate that PoW imposes minimal overhead on legitimate clients while maintaining high efficacy against SYN flood attacks, regardless of whether the attackers perform the proof-of-work before sending traffic. These findings highlight PoW as a promising low-overhead mitigation method for WebRTC conferencing systems under the threat of DDoS attacks.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding</title>
<link href="https://hdl.handle.net/1721.1/164840" rel="alternate"/>
<author>
<name>Huang, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/164840</id>
<updated>2026-02-13T03:49:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding
Huang, Natalie
The lifelong Multi-Agent Path Finding (MAPF) problem requires planning collision-free trajectories for agents operating continuously in dynamic environments. Traditional solvers such as Priority-Based Search (PBS) use fixed branching heuristics, which can be inefficient in high-congestion scenarios. This work explores how learning-based methods can improve PBS decision-making. We develop supervised learning (SL) policies trained from high-quality beam search trajectories and reinforcement learning (RL) policies learned directly through simulation, enabling adaptive branching strategies. Evaluations on warehouse-style and Kiva-style maps with varying agent densities show that learned policies can significantly boost throughput in congested warehouse layouts, while identifying scenarios where classical heuristics remain competitive. Our findings provide guidance on solver selection based on environment layout and congestion characteristics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Interpret Language Model Diffs</title>
<link href="https://hdl.handle.net/1721.1/164839" rel="alternate"/>
<author>
<name>Goel, Avichal</name>
</author>
<id>https://hdl.handle.net/1721.1/164839</id>
<updated>2026-02-13T03:49:23Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Learning to Interpret Language Model Diffs
Goel, Avichal
Finetuning-induced changes to a model’s weights (a “model diff”) are semantically meaningful but often difficult to interpret. This raises a natural question: can we describe the content of an unknown model diff using natural language? We introduce diff interpretation training, a method that teaches a model to describe its own finetuning-induced modifications. Our approach uses synthetic model diffs to train a lightweight adapter, which in turn can be applied to a compatible finetuned model to make it self-describing. In two simple task settings, we demonstrate that our method can successfully decode model diffs into accurate natural language descriptions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Product architectures for solar-powered drip irrigation (SPDI) systems in the Middle East and North Africa</title>
<link href="https://hdl.handle.net/1721.1/164838" rel="alternate"/>
<author>
<name>Grant, Fiona R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164838</id>
<updated>2026-02-13T03:10:44Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Product architectures for solar-powered drip irrigation (SPDI) systems in the Middle East and North Africa
Grant, Fiona R.
To feed the growing global population, agricultural production must be intensified using existing land and resources. Sustainable agriculture intensification is particularly important in the Middle East and North Africa (MENA), the most water-stressed region in the world. Solar-powered drip irrigation (SPDI) has the potential to increase water use efficiency and reduce fossil fuel use for irrigation. Despite these benefits, SPDI adoption is limited by its high investment cost and the misalignment between farmers' risk tolerance and broader sustainability goals. Past work has explored three areas of SPDI innovation: low-pressure drip emitters, system cost optimization, and precision irrigation control. This thesis integrates previous innovations in an end-to-end design process to generate SPDI architectures that are accessible to resource-constrained farmers.&#13;
A market study was conducted to understand farmers' priorities and constraints and articulate SPDI value propositions for the target users. Stakeholder surveys were conducted in Jordan and Morocco for farms ranging from 1–130 hectares. Three market segments were identified, grouping farmers who face similar economic and knowledge barriers. While farmers generally prioritized irrigation reliability and low system costs, the observed variety in farm size, production volume, and technical expertise suggested that SPDI architectures must be tailored to each market segment.&#13;
This thesis proposes an energetic framework that captures system parametric relationships to identify feasible SPDI design trade-offs. The optimized solar power systems were 14%–80% less expensive than conventionally-sized designs. Despite significant changes to the hydraulic operating parameters, the proposed SPDI architectures were as reliable as existing systems. For farms with long irrigation times, it was optimal to pair low-pressure drip emitters with an irrigation schedule that tracks the daily solar profile, termed “solar profile matching” (SPM), to maximize direct solar power use. The SPM schedule reduced system cost by minimizing the battery capacity. An economic analysis demonstrated that the optimal SPDI designs could be made cost-competitive with grid power through SPDI retrofit subsidies, which some local governments already support. Researchers and industry professionals could use the energetic framework and techno-economic analysis presented in this thesis to inform system design and policy decisions and promote SPDI adoption.&#13;
Finally, this work created guidelines for designing a precision irrigation controller in resource-constrained markets. A controller was conceptualized to implement the SPDI-SPM architecture. The controller functional requirements and design specifications were iteratively defined with stakeholders, and a prototype was tested on two farms in the MENA region. The controller reduced water and energy use by up to 44% and 43%, respectively, while maintaining crop yield. However, the controller relied on battery power to execute the irrigation schedule. A yield loss sensitivity analysis found that using 72%–79% of the available solar energy on average, an increase of about 40% over the experimental SPM schedules, would have been sufficient to reliably irrigate with solar alone. The results suggest that, with software modifications, the proposed controller could eliminate the need for a battery and enable low-cost SPDI systems. If adopted, the proposed controller could make sustainable irrigation practices more accessible to farmers.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods</title>
<link href="https://hdl.handle.net/1721.1/164837" rel="alternate"/>
<author>
<name>Botto Tornielli, Marcos Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/164837</id>
<updated>2026-02-13T03:49:17Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods
Botto Tornielli, Marcos Julian
With the substantial computing resources available today, computational fluid dynamics simulations allow scientists and engineers to simulate physical problems very accurately. However, achieving this accuracy requires a sufficiently refined computational mesh, which is a primary driver for the high cost of complex simulations. Mesh adaptation methods provide an automated way to determine the regions where a mesh needs the most refinement and generate a new mesh that efficiently targets these regions. In this thesis, we build on previous work in a posteriori error estimation and mesh adaptation for finite element methods to propose a new mesh adaptation method based on L² error control by solution post-processing. A key feature of our method is its natural extension to higher-order discretizations while providing a problem-independent adaptation methodology. Problem-independent adaptation methods do not depend on specific information about the partial differential equation (PDE) problem being solved, and can therefore be applied to a wide range of problems without modification. We present numerical results applying the approximate L² error control method to a two-dimensional advection-diffusion problem with anisotropic features. These results demonstrate the proposed method’s ability to generate well-adapted anisotropic meshes for solutions with polynomial orders 1, 2, and 3. We also apply the approximate L² error control method to a more complex two-dimensional Reynolds-Averaged Navier-Stokes problem with turbulent flow over a flat plate. We compare the convergence of the drag coefficient and the characteristics of adapted meshes obtained with the proposed method and with an output-based adaptation approach. As expected, the approximate L² error control method is not as effective as the output-based approach in reaching a converged drag coefficient value, but it nevertheless demonstrates the ability to effectively control the approximate L² error in the Mach field.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Ubiquitous Tactile Sensing through&#13;
Comprehensive Tooling for Resistive Matrix-Based&#13;
Sensors</title>
<link href="https://hdl.handle.net/1721.1/164836" rel="alternate"/>
<author>
<name>Murphy, Devin</name>
</author>
<id>https://hdl.handle.net/1721.1/164836</id>
<updated>2026-02-13T03:49:25Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Advancing Ubiquitous Tactile Sensing through&#13;
Comprehensive Tooling for Resistive Matrix-Based&#13;
Sensors
Murphy, Devin
Resistive matrix-based tactile sensors offer a scalable and intuitive approach to capturing human-environment interactions, yet deploying them in real-world systems remains challenging because they must be portable, adaptive, and long-lasting. This thesis presents the WiReSens Toolkit, an open-source hardware and software platform for developing resistive tactile sensing systems that meet the demands of real-world applications. The toolkit features adaptive hardware for interfacing with resistive sensors and a web-based GUI that mediates access to otherwise complex functionality, including (1) multi-device programming and wireless visualization across three distinct communication protocols, (2) autocalibration methods for adaptive sensitivity, and (3) intermittent data transmission for low-power operation. As a use case for the toolkit, the thesis then introduces a method for the automatic design and fabrication of custom tactile sensing gloves using flexible printed circuit boards (FPCBs), enabling rapid, scalable production. Together, these contributions lower barriers to adoption and support broader exploration of tactile sensing in HCI, robotics, and ubiquitous computing.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topics in Geometric Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/164835" rel="alternate"/>
<author>
<name>Tahmasebi, Behrooz</name>
</author>
<id>https://hdl.handle.net/1721.1/164835</id>
<updated>2026-02-13T03:10:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Topics in Geometric Machine Learning
Tahmasebi, Behrooz
Recent advances and the widespread adoption of neural networks have revolutionized machine learning and artificial intelligence. These developments demand learning paradigms capable of processing data from diverse applications and sources. In structured domains such as molecules, graphs, sets, and 3D objects, as well as fields such as drug discovery, materials science, and astronomy, models must account for data structures. The emerging field of geometric machine learning has gained attention for enabling neural networks to handle geometric structures, unlocking novel solutions across scientific disciplines. Despite recent advances, theoretical gaps remain. This thesis aims to address these gaps by studying the benefits and limitations of leveraging geometric structures and symmetries in data. We explore sample complexity, generalization bounds, hypothesis testing for the presence of symmetries in data, time complexity of learning under symmetries, and regularization and optimization in symmetric settings. The goal is to build a robust theoretical framework that validates recent successes and sheds light on unexplored aspects, fostering future progress in geometric machine learning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantization Methods for Matrix Multiplication and Efficient Transformers</title>
<link href="https://hdl.handle.net/1721.1/164834" rel="alternate"/>
<author>
<name>Savkin, Semyon</name>
</author>
<id>https://hdl.handle.net/1721.1/164834</id>
<updated>2026-02-13T03:49:20Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Quantization Methods for Matrix Multiplication and Efficient Transformers
Savkin, Semyon
We study quantization in machine learning. First, we introduce NestQuant, a technique for quantization of matrix products and post-training quantization of LLMs. Beyond reducing the memory footprint, quantization accelerates inference, as the primary bottleneck during autoregressive generation is often memory bandwidth. NestQuant leverages two nested lattices to construct an efficient vector codebook for quantization, along with practical encoding and decoding algorithms. The approach is grounded in recent theoretical work that characterizes the optimal rate–distortion trade-off for matrix products. Empirically, on Llama-3-8B, it reduces the perplexity gap between full-precision and quantized models by more than 55% relative to the current state-of-the-art technique (SpinQuant). Second, we investigate data-domain quantization for RF signals. We propose a tokenized transformer for source separation that discretizes RF waveforms into learned tokens and operates directly on the resulting sequences, outperforming strong convolutional baselines. Together, these contributions connect information-theoretic limits with deployable systems: structured vector quantizers accelerate LLM inference and enable competitive discrete representations for RF tasks.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors</title>
<link href="https://hdl.handle.net/1721.1/164833" rel="alternate"/>
<author>
<name>Chun, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/164833</id>
<updated>2026-02-13T03:49:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors
Chun, Ethan
Barometric tactile sensors offer a cheap, robust, and customizable means for robots to perceive the world. Central to their operation are models that extract useful information from the sensors’ raw pressure readings. In this work, I focus on improving data-driven methods for single-point contact localization and force estimation using a previously presented three-quarter-sphere barometric tactile sensor. To allow modeling of time-dependent effects in the sensor material, I introduce a multi-threaded data collection system that captures ground-truth contact and sensor data at exactly 100 Hz. I construct both feed-forward and recurrent networks using this data, finding that a recurrent network achieves a 15% lower mean absolute error for angular contact localization on the sphere compared to prior methods. The recurrent architecture’s computational efficiency ensures that it can still run within the constraints of the sensor’s microcontroller. Despite this improvement, I find that more expressive models such as LSTMs tend to overfit the collected data, and that physical phenomena observed during deployment were not well represented by the training metrics. To better understand the extent to which these data-driven methods alone can improve sensor performance, I shift focus away from the modeling and analyze the physical sensor instead. I find that viscous effects in the sensor can render the prediction task unlearnable without historical data and that thermal effects introduce a train-test distribution shift. Finally, I discuss design criteria for a theoretical future barometric tactile sensor that may mitigate the effects found during my modeling and analysis.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Programming over Heterogeneous Language and Hardware Targets</title>
<link href="https://hdl.handle.net/1721.1/164832" rel="alternate"/>
<author>
<name>Rojas Collins, Elias G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164832</id>
<updated>2026-02-13T03:49:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Probabilistic Programming over Heterogeneous Language and Hardware Targets
Rojas Collins, Elias G.
Modern probabilistic programming applications, from large-scale Bayesian inference to real-time decision making, require both the expressiveness of CPU-oriented languages such as Gen.jl and the massive parallelism of GPU-backed array languages such as GenJAX, yet existing platforms force users to trade modeling flexibility for performance. This thesis introduces GenUflect, a metalanguage that embeds multiple Gen-compatible dialects inside a single program, allowing each sub-component to run on the most appropriate language and hardware target while preserving Gen’s programmable-inference interface. GenUflect extends Gen’s dynamic-modeling language with the @union, @vmap, @amortize, @amortize≤, and @runtime_union combinators; these macros compile at build-time (or just-in-time) to autonomous generative functions written in the target dialect, link them through a lightweight FFI layer, and manage cross-device data via zero-copy MirrorArrays and lazily materialized traces. The resulting programs remain sound by construction because each foreign subtrace is itself a valid Gen generative function. Empirical studies demonstrate that this hybrid approach yields large practical gains. On a split linear-vs-sinusoidal regression task, GenUflect matches pure GenJAX throughput while running higher-order control logic on the CPU, and is up to two orders of magnitude faster than a pure Gen implementation for datasets of 10⁵ points. In a collapsed-Gibbs sampler for a Dirichlet-process mixture model, GenUflect’s elastic allocation (@amortize≤) lets vectorized GPU kernels adapt to a growing number of clusters; the same inference that takes over an hour in Gen executes in seconds with GenUflect. A probabilistic inverse-graphics pipeline further showcases how heterogeneous submodels can cooperate seamlessly within unified inference code.
By coupling language interoperability with automated data movement and compile-time code generation, GenUflect bridges the gap between flexibility and speed, enabling scalable, expressive probabilistic programs that natively exploit both CPUs and accelerators.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Under-Coverage of Double Machine Learning Due to Implementation Choices</title>
<link href="https://hdl.handle.net/1721.1/164831" rel="alternate"/>
<author>
<name>Siegmann, Charlotte B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164831</id>
<updated>2026-02-13T03:49:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Under-Coverage of Double Machine Learning Due to Implementation Choices
Siegmann, Charlotte B.
Double machine learning (DML) estimators can estimate coefficients of interest with far fewer functional-form assumptions than linear econometric methods. However, DML requires researchers to make a range of implementation choices, including the selection of the function class, the random seed, and hyperparameter configurations. While asymptotic theory suggests these choices should not affect final estimates, we show that for 10 economic analyses (8 of them published and peer-reviewed), implementation choices affect the results. In half of the datasets, different implementation choices even change the interpretation of findings between negative, null, or positive effects. We link these results to a framework for empirically assessing the performance of machine-learning-based estimators, focusing on precision, coverage, and susceptibility to manipulation, which is meant to complement asymptotic theory. We demonstrate that the coverage of DML confidence intervals is too low, placing an upper bound of 48% on the expected coverage of conventional 95% confidence intervals for published DML economics papers. We show that in the status quo, the susceptibility of DML to manipulation by researchers is high, but propose ways to mitigate this susceptibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm</title>
<link href="https://hdl.handle.net/1721.1/164830" rel="alternate"/>
<author>
<name>Zhu, Qianyu Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/164830</id>
<updated>2026-02-13T03:49:05Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm
Zhu, Qianyu Julie
A central task in Bayesian inference and scientific computing is to compute expectations with respect to probability distributions that are only known up to a normalizing constant. Markov chain Monte Carlo (MCMC) methods, and in particular Langevin dynamics, provide a powerful framework for this task by constructing stochastic processes that converge to the target distribution. However, practical implementations face two challenges: slow mixing when the target distribution is anisotropic or multimodal, and persistent discretization bias introduced by numerical schemes. This thesis investigates irreversible perturbations of overdamped Langevin dynamics, aiming to accelerate mixing while controlling discretization error. Irreversible perturbations introduce skew-symmetric drift terms that preserve the target distribution while inducing rotational flow, thereby enhancing exploration. Although prior work has established their benefits in continuous-time settings, the impact of discretization and the design of optimal perturbations for discrete-time algorithms remain open problems. We develop a framework for optimizing constant (position-independent) irreversible perturbations in the Unadjusted Langevin Algorithm (ULA). Our approach balances two competing objectives: maximizing the spectral gap of the continuous dynamics to accelerate convergence, and minimizing discretization error that drives estimation bias. Motivated by this, we introduce new criteria that jointly evaluate bias and efficiency, and we show how these criteria identify perturbations that improve performance beyond existing constructions. Theoretical analysis is complemented by numerical experiments on Gaussian and non-Gaussian targets. These experiments demonstrate that appropriately designed irreversible perturbations can reduce mean-squared error without sacrificing stability, while poorly chosen perturbations can degrade performance.
The results highlight the importance of geometry-aware design and motivate systematic optimization strategies for irreversible perturbations. Overall, this work extends the theoretical and practical understanding of irreversible Langevin dynamics, bridging the gap between continuous-time spectral analysis and discrete-time numerical performance. It provides principled tools for constructing efficient MCMC samplers, with potential applications in high-dimensional Bayesian inference and modern machine learning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single Camera Motion Compensated Viewpoint Shift</title>
<link href="https://hdl.handle.net/1721.1/164829" rel="alternate"/>
<author>
<name>Snowdon, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/164829</id>
<updated>2026-02-13T03:49:03Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Single Camera Motion Compensated Viewpoint Shift
Snowdon, Adam
Eye contact is a necessary tool for human connection, yet in most video conferencing situations it is not possible. Standard laptop and webcam configurations position the camera at the top of the screen, meaning that when the user looks at other people’s faces in the center of the screen, the camera captures the user looking downward, creating the impression of poor eye contact for remote participants. Solutions involving 3D modeling of the face to synthesize a gaze-corrected view have been explored but are too computationally costly for most personal computers. To address this computational challenge, we draw inspiration from 2D frame interpolation techniques to synthesize a virtual camera view that repositions the user’s apparent gaze toward the camera. Our method uses a single camera located at the top of the user’s screen and requires only a brief setup period. Assuming there is only one user, our approach creates a virtual camera view that transforms the user’s viewpoint from the screen center to the camera position, enabling more realistic eye contact in video conference calls.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data</title>
<link href="https://hdl.handle.net/1721.1/164828" rel="alternate"/>
<author>
<name>Pan, Jessica N.</name>
</author>
<id>https://hdl.handle.net/1721.1/164828</id>
<updated>2026-02-13T03:48:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data
Pan, Jessica N.
Mapping the brain’s complex neural networks requires tracing the long-distance pathways of individual axons, a task that demands a comprehensive 3D reconstruction of the brain. Recently developed spatially resolved transcriptomics (SRT) methods enable the study of gene expression and biomolecule distribution in each neuron in its spatial context, opening the door to more thoroughly investigating cell-cell interactions between neurons. However, SRT methods are limited to slices of tissue; therefore, computational alignment is essential to reconstruct a cohesive 3D volume while correcting for both batch effects and inherent sample variability. This thesis presents a novel framework that addresses these challenges through three primary contributions. First, a memory-efficient, non-reference-based algorithm was developed to align the superficial surfaces of adjacent, high-resolution tissue slices. Second, these surface transformations were interpolated through the tissue slices on a proof-of-concept dataset of three adjacent slices. Third, methods for co-transforming fluorescent protein imaging data were explored to fully resolve the cell boundaries between neurons. These three methods are necessary steps towards creating a fully resolved, multimodal 3D model of the brain.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Non-Convex Objectives to Plan More Optimal Motion for Manipulators</title>
<link href="https://hdl.handle.net/1721.1/164827" rel="alternate"/>
<author>
<name>Garg, Shruti</name>
</author>
<id>https://hdl.handle.net/1721.1/164827</id>
<updated>2026-02-13T03:49:11Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Non-Convex Objectives to Plan More Optimal Motion for Manipulators
Garg, Shruti
Non-convex optimization is essential to tackle increasingly complex and practical problems in kinematic motion planning. Although introducing non-convexity often sacrifices guarantees of feasibility and optimality (making solutions more susceptible to local minima or failure to converge), many robotic systems and tasks are non-convex by nature, necessitating at least somewhat non-convex formulations. In this thesis, we aim to mostly constrain non-convexity to the objective. This optimization structure helps preserve certain feasibility guarantees in theory and usability in practice while enhancing the optimality of solutions, even if global optimality is not achieved. In the first chapter, we demonstrate the effectiveness of non-convex objectives in scenarios where motion planning involves a non-convex parameterization of the configuration space. We keep constraints strictly convex, with the non-convexity quarantined to the objective. This structure guarantees a feasible solution given a feasible initial guess. We primarily use our method to post-process Graphs of Convex Sets solutions in three domains: constrained bimanual motion, motion with guaranteed non-collision, and planning in SO(3). In each case, the non-convex objective compensates for distortion introduced by the parameterization, resulting in more efficient and natural motion. In the second chapter, we propose a teleoperation scheme with full-body motion planning for non-holonomic mobile manipulators. Our key contribution is a Differential Inverse Kinematics (DiffIK) formulation that crafts non-convex objectives to avoid singularities and joint limits, leading to more robust, feasible motion. Unlike in the first chapter, the constraints are not strictly convex, so the optimization has no guarantees of feasibility. However, we mitigate the non-convexity in the constraints as much as we can by linearizing around the robot’s current position and approximating the highly non-convex non-holonomic constraint.
We explore multiple formulations for singularity avoidance and empirically demonstrate that integrating these objectives into DiffIK improves motion quality for teleoperation for the RBY-1 robot.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation</title>
<link href="https://hdl.handle.net/1721.1/164826" rel="alternate"/>
<author>
<name>Pai, Sameer</name>
</author>
<id>https://hdl.handle.net/1721.1/164826</id>
<updated>2026-02-13T03:49:02Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation
Pai, Sameer
A key challenge in the robotic manipulation of deformable objects is the lack of accurate and efficient systems for estimating their pose in real-time, especially in the presence of occlusion. In this thesis we propose CableSplat, a novel non-parametric method leveraging 3D Gaussian Splatting to estimate the pose of a linear deformable object given RGB images of the object from multiple viewpoints. To facilitate the evaluation of the performance of this method, we develop both simulated and real-world pipelines to collect calibrated and segmented recordings of cables undergoing various manipulations and transformations. We find that our method is consistently able to estimate cable pose to within an average error of ∼2.5mm across simulated tasks. Furthermore, performance on a scene reconstruction metric drops only slightly between simulated and real-world data, suggesting high-fidelity state estimation even in the real world. CableSplat is therefore a promising candidate for the extension of existing manipulation systems to deformables.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>scPhen: Single-Cell Phenotype Predictor for Alzheimer’s Disease</title>
<link href="https://hdl.handle.net/1721.1/164825" rel="alternate"/>
<author>
<name>Guo, Sophie J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164825</id>
<updated>2026-02-13T03:49:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">scPhen: Single-Cell Phenotype Predictor for Alzheimer’s Disease
Guo, Sophie J.
Advances in artificial intelligence (AI) and generative AI for representation learning have transformed our ability to model complex biological systems. Single-cell RNA sequencing (scRNA-seq) provides unprecedented resolution into cellular heterogeneity, offering a powerful substrate for modeling disease circuitry. However, predicting patient-level phenotypes from scRNA-seq remains challenging due to limited sample sizes, variable cell counts, and the computational burden of modeling long-context dependencies. We present scPhen, a flexible, parametric deep-learning framework for phenotype prediction from single-cell transcriptomic data, applied here to Alzheimer’s disease (AD) as a paradigm of complex, heterogeneous pathology. scPhen consists of a cell embedding module and a patient embedding module, designed to capture both fine-grained molecular patterns and higher-order cell–cell relationships. The framework supports multiple architectural backbones, including Transformers, Graph Neural Networks (GNNs), and state-space models such as Mamba, Mamba2, and BiMamba2, allowing exploration of tunable components for optimized performance. Across classification and regression tasks, state-space models, and in particular BiMamba2, demonstrated superior predictive accuracy and computational efficiency compared to Transformer-based and hybrid approaches. We further integrated attention-based multiple instance learning to enable variable cell counts per patient and to prioritize phenotype-informative cellular subsets. Interpretability analyses using Integrated Gradients and cell-level attention scores revealed gene programs and cell populations associated with AD progression, highlighting known neuroinflammatory signatures and suggesting novel molecular targets. By unifying cutting-edge sequence modeling architectures with scalable single-cell analysis, scPhen provides a generalizable, high-resolution approach to phenotype prediction. 
While demonstrated here in AD, this framework is readily extensible to other complex diseases and multi-modal cellular datasets, bridging computational innovation and biological discovery.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Task Functional Localizers Using Naturalistic fMRI</title>
<link href="https://hdl.handle.net/1721.1/164824" rel="alternate"/>
<author>
<name>Wilke, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/164824</id>
<updated>2026-02-13T03:49:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Predicting Task Functional Localizers Using Naturalistic fMRI
Wilke, Jordan
Functional magnetic resonance imaging (fMRI) data collected during naturalistic stimuli have shown promise for predicting individual traits, biomarkers of disease, and functional brain localizations, potentially offering advantages over traditional resting-state approaches. This study investigated the use of interpretable deep learning models to predict demographics and functional task localizer activations from fMRI time-series data collected while participants viewed naturalistic stimuli. Using data from 143 subjects in the Human Connectome Project, I analyzed 7T fMRI scans from participants watching movies to predict sex, age, and functional localizer activations across multiple cognitive tasks. I employed state-of-the-art machine learning architectures, including DICE and Glacier models, specifically chosen for their interpretable design features that build directed connectivity matrices and produce weighted temporal attention maps. These models aimed to capture dynamic brain activity patterns while maintaining the ability to understand which temporal features drive predictions. The results successfully reproduced previous findings for sex classification but showed poor performance for age prediction, with correlations ranging from -0.175 to 0.243. For functional localizer predictions, models initially appeared to achieve high performance, with some specific contrasts having correlations around 0.9 and Dice scores generally above 0.6. However, detailed analysis revealed that these models were primarily predicting group averages rather than learning meaningful inter-subject variability, as evidenced by chance-level subject identification accuracy. This finding contrasts with previous works that demonstrated successful prediction of individual differences in functional localizations.
The failure to capture inter-subject variability represents a significant limitation, as individual differences in functional regions of interest are crucial for applications such as pre-surgical mapping and disease prediction. My findings suggest that predicting from raw fMRI time-series may require different approaches than those used here, with preprocessed functional connectivity matrices showing promising results, and highlight the importance of sufficient training data to separate signal from noise when learning directly from naturalistic stimuli. Despite these challenges, this work establishes important methodological foundations and identifies key limitations that must be addressed in future research combining naturalistic stimuli with machine learning for fMRI prediction tasks. The findings emphasize the need for models that can capture individual functional differences while maintaining the interpretability necessary for understanding how naturalistic stimuli drive brain-based predictions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference</title>
<link href="https://hdl.handle.net/1721.1/164823" rel="alternate"/>
<author>
<name>Chung, Karen</name>
</author>
<id>https://hdl.handle.net/1721.1/164823</id>
<updated>2026-02-13T03:49:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference
Chung, Karen
GPU-compatible probabilistic programming languages (PPLs) have enabled high-performance, data-parallel programmable inference. However, these systems face fundamental trade-offs between expressiveness and performance, as their GPU code generation is automated and black-boxed, limiting optimization opportunities and imposing restrictions on program expressivity. This thesis introduces GenCUDA, a probabilistic programming system that addresses this limitation by embedding the CUDA GPU programming language directly into a C++/CUDA frontend, enabling GPU programmable inference with fine-grained control over runtime and memory profiles. GenCUDA extends the Gen probabilistic programming architecture by providing a dynamic modeling language (DML) that allows users to write performance-critical sections of generative functions as CUDA kernels while maintaining automatic trace management and the generative function interface (GFI). The system supports both sequential and parallel execution contexts through specialized effect handlers that seamlessly compose CPU and GPU code paths. Key technical contributions include: (1) a high-performance GPU distributions library achieving 10-100× speedups over TensorFlow Probability, (2) memory-efficient trace management via template-optimized parallel effect handlers, and (3) vectorized generative functions that enable massive parallelization of inference algorithms. We demonstrate GenCUDA’s capabilities through comprehensive benchmarks on inference algorithms applied to diverse models including factor graphs, mixture models, and hidden Markov models. Results show significant performance improvements over JAX-based implementations: up to 3× speedup for importance sampling on a hierarchical model, 5.7× speedup for parallel Gibbs sampling on factor graphs, and memory efficiency improvements for large-scale mixture models supporting up to 6× as many clusters compared to existing frameworks’ limits.
The system maintains the composability and expressiveness of probabilistic programming while unlocking GPU performance optimization techniques such as kernel fusion and memory hierarchy exploitation that are inaccessible to higher-level frameworks. GenCUDA demonstrates that embedding low-level GPU programming within automated probabilistic inference workflows can achieve both performance gains and algorithmic expressivity without sacrificing the modularity of probabilistic programming paradigms.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simplifying Equivariant GPU Kernels through Tile-based Programming</title>
<link href="https://hdl.handle.net/1721.1/164822" rel="alternate"/>
<author>
<name>Kotak, Mit</name>
</author>
<id>https://hdl.handle.net/1721.1/164822</id>
<updated>2026-02-13T03:49:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Simplifying Equivariant GPU Kernels through Tile-based Programming
Kotak, Mit
E(3)-equivariant neural networks have demonstrated success across a wide range of 3D modeling tasks. Until recently, they were bottlenecked by their high memory and wall-time requirements. In this thesis we first provide an overview of recent GPU kernel efforts by both academia and industry that address this issue. These approaches trade off performance for engineering complexity, while still being algorithmically bottlenecked at 10% GPU utilization. We instead trade off engineering complexity for performance. This not only lowers the barrier to GPU programming but also builds an abstraction layer to reason about future algorithmic innovations that can improve GPU utilization. Our kernel, &#119861;3, is based on tiling optimizations and implemented in just 100 lines of PyTorch-like code. We explore the performance-simplicity tradeoff with two case studies and demonstrate the practicality of our kernel workflow through downstream integration with a production model. We hope this work serves as inspiration to broaden and deepen existing equivariant kernel efforts.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chemical exposures in drinking water: contaminant analysis and physicochemical behavior</title>
<link href="https://hdl.handle.net/1721.1/164821" rel="alternate"/>
<author>
<name>Bugher, Nicolette A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164821</id>
<updated>2026-02-13T03:10:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Chemical exposures in drinking water: contaminant analysis and physicochemical behavior
Bugher, Nicolette A.
Environmental chemical exposures pose an understudied risk to human health. The quality and accessibility of data on occurrence in the environment and physicochemical behavior of industrial chemicals are integral for accurate exposure risk assessment. In this dissertation, analytical chemistry techniques were developed and leveraged to characterize human exposures to contaminants in drinking water and improve methods for assessing such risks. The occurrence of organic industrial pollutants in domestic well waters was investigated, with a particular focus on the impacts of region-specific industrial activity (e.g., hydraulic fracturing), legacy pollution sites (e.g., Superfund sites), and geochemistry. The exposure risk to water contaminants of domestic well users was further interrogated by evaluating trends in contaminant concentrations resulting from the implementation and maintenance of in-home water treatment devices. The results show widespread, low-dose mixtures of organic pollutants, where the efficacy of removal by in-home water treatment varied by water contaminant class and maintenance frequency. Additionally, analytical methods were optimized to quantify a group of organic water contaminants (i.e., probable carcinogens, N-nitrosamines), improving method sensitivity and critically identifying false-positive interferences. Finally, methods were evaluated and deployed for the determination of physicochemical properties of N-nitrosamines, the results of which demonstrate gaps in existing experimental data, provide a valuable methodological intercomparison (two experimental and two computational approaches), and contribute novel partitioning data. This dissertation addresses gaps in occurrence data, analytical method sensitivity, and reliability of physicochemical parameters for risk assessment.
The combination of method development and implementation enables the study of exposures to water contaminant mixtures at health-relevant concentrations, representative of prevalent exposure pathways.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From coarse fate choice to precise pattern: post-mitotic progenitor targeting</title>
<link href="https://hdl.handle.net/1721.1/164820" rel="alternate"/>
<author>
<name>Nie, Mel F.</name>
</author>
<id>https://hdl.handle.net/1721.1/164820</id>
<updated>2026-02-13T03:49:07Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From coarse fate choice to precise pattern: post-mitotic progenitor targeting
Nie, Mel F.
Planarians possess remarkable regenerative abilities, driven by pluripotent stem cells called neoblasts. While neoblasts are known to give rise to progenitor cells that form various tissues, whether and the extent to which these progenitors migrate across the animal remains unclear. Irradiation experiments eliminate all neoblasts outside shielded areas, allowing for the visualization of cell migration from the remaining neoblasts, but irradiated animals may not reflect homeostatic progenitor migration patterns. To address this, 5-ethynyl-2’-deoxyuridine (EdU) labeling and plug transplant techniques were used to trace progenitor movement in non-irradiated planarians. Using whole-mount fluorescence in situ hybridization (FISH) and the quantification of EdU-labeled cells, this study demonstrates that progenitor cells are capable of migrating long distances and exhibit a pronounced anterior bias in their movement and integration.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Data Layouts for Evolving Cloud Table Storage</title>
<link href="https://hdl.handle.net/1721.1/164819" rel="alternate"/>
<author>
<name>Sudhir, Sivaprasad</name>
</author>
<id>https://hdl.handle.net/1721.1/164819</id>
<updated>2026-02-13T03:10:39Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Data Layouts for Evolving Cloud Table Storage
Sudhir, Sivaprasad
Modern data analytics platforms increasingly adopt disaggregated architectures, storing data in cost-effective cloud object stores. While this approach enables a clean separation of concerns, allowing each layer to be independently managed and scaled, it introduces significant performance bottlenecks due to expensive data movement. Effective data layouts, which organize data to minimize unnecessary data reads, are thus critical to achieving high query performance. However, existing techniques typically rely on manually specified layouts, collect limited metadata, or lack mechanisms to dynamically adapt to changing data and workloads.

This thesis investigates adaptive, metadata-rich, expressive data layouts for cloud table storage. First, we introduce Pando, a correlation-aware layout technique that leverages rich metadata on query predicates to significantly improve data skipping. Next, we propose CopyRight, a partial replication strategy that selectively replicates subsets of data and optimizes each replica differently, efficiently serving heterogeneous query patterns. Finally, we describe Self-Organizing Data Containers (SDCs), a practical table storage layer for the cloud that incrementally reorganizes complex data layouts based on changes in data and workload distributions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Ensemble Strategies for Generalization in Deepfake Image Detection</title>
<link href="https://hdl.handle.net/1721.1/164818" rel="alternate"/>
<author>
<name>Wagh, Rohan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164818</id>
<updated>2026-02-13T03:49:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Development of Ensemble Strategies for Generalization in Deepfake Image Detection
Wagh, Rohan M.
The growing accessibility of generative models has enabled the rapid proliferation of deepfake content, posing significant challenges in image-based biometric security and media authenticity. In this thesis, six diverse facial deepfake image datasets are assembled, and four modern detection models are evaluated in a cross-domain scenario. We observe that individual models fail to generalize to images generated by techniques outside the scope of their training data. This often hinders the applicability of a single model in real-world deepfake detection. This thesis proposes ensemble strategies as a means of addressing this lack of generalization. We find that the ensemble models outperform individual models in classifying deepfake images, particularly in terms of accuracy and recall. An exhaustive evaluation of combinations of models shows that ensembles of similar models provide limited benefit, whereas ensembles of complementary models lead to significant improvements in classification performance. Ensembling models based specifically on accuracy and recall metrics also produces models that lower the rate of more harmful false negative predictions. This work highlights the value of ensemble models in improving generalization across diverse image families and provides a framework for building robustness in real-world deepfake detection systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum information science and underground facilities</title>
<link href="https://hdl.handle.net/1721.1/164817" rel="alternate"/>
<author>
<name>Formaggio, Joseph A</name>
</author>
<id>https://hdl.handle.net/1721.1/164817</id>
<updated>2026-03-08T03:40:02Z</updated>
<published>2023-09-05T00:00:00Z</published>
<summary type="text">Quantum information science and underground facilities
Formaggio, Joseph A
As both nuclear physics and particle physics involve the quantum interactions of many sub-atomic particles, there has always existed a strong interplay between these fields and the study of quantum physics and quantum information systems (QIS). This interplay has accelerated in recent years, particularly with the emergence of new, highly sensitive technologies, nascent access to quantum computing environments at the O(10)-O(100)-bit scale, and the use of coherence and entanglement to enhance sensitivity to novel and exotic phenomena. One unusual area of interplay between the two disciplines that has recently emerged is the role of background radiation and background mitigation on highly sensitive systems such as qubits.
</summary>
<dc:date>2023-09-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced-order model to predict thermal conductivity of dimensionally confined materials</title>
<link href="https://hdl.handle.net/1721.1/164816" rel="alternate"/>
<author>
<name>Hosseini, S Aria</name>
</author>
<author>
<name>Greaney, Alex</name>
</author>
<author>
<name>Romano, Giuseppe</name>
</author>
<id>https://hdl.handle.net/1721.1/164816</id>
<updated>2026-03-08T03:40:08Z</updated>
<published>2023-06-27T00:00:00Z</published>
<summary type="text">Reduced-order model to predict thermal conductivity of dimensionally confined materials
Hosseini, S Aria; Greaney, Alex; Romano, Giuseppe
Predicting nanoscale thermal transport in dielectrics requires models, such as the Boltzmann transport equation (BTE), that account for phonon boundary scattering in structures with complex geometries. Although the BTE has been validated against several key experiments, its computational expense limits its applicability. Here, we demonstrate the use of an analytic reduced-order model for predicting the thermal conductivity in dimensionally confined materials, i.e., monolithic and porous thin films, and rectangular and cylindrical nanowires. The approach uses the recently developed “Ballistic Correction Model,” which accounts for materials' full distribution of phonon mean-free-paths. The model is validated against BTE simulations for a selection of base materials, obtaining excellent agreement. By furnishing a precise yet easy-to-use prediction of thermal transport in nanostructures, our work strives to accelerate the identification of materials for energy-conversion and thermal-management applications.
</summary>
<dc:date>2023-06-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric decay instabilities driven by high power helicon waves in DIII-D</title>
<link href="https://hdl.handle.net/1721.1/164815" rel="alternate"/>
<author>
<name>Porkolab, M</name>
</author>
<author>
<name>Pinsker, RI</name>
</author>
<author>
<name>DeGrandchamp, GH</name>
</author>
<author>
<name>Baek, SG</name>
</author>
<author>
<name>Compernolle, B Van</name>
</author>
<author>
<name>Denk, S</name>
</author>
<author>
<name>Petty, CC</name>
</author>
<author>
<name>Tang, SX</name>
</author>
<author>
<name>Thome, KE</name>
</author>
<id>https://hdl.handle.net/1721.1/164815</id>
<updated>2026-03-08T03:40:07Z</updated>
<published>2023-08-18T00:00:00Z</published>
<summary type="text">Parametric decay instabilities driven by high power helicon waves in DIII-D
Porkolab, M; Pinsker, RI; DeGrandchamp, GH; Baek, SG; Compernolle, B Van; Denk, S; Petty, CC; Tang, SX; Thome, KE
High power helicon waves (whistler or very high harmonic fast lower hybrid waves) at a frequency of 476 MHz are being tested for efficient off-axis current drive on DIII-D with the goal of demonstrating profile control in AT plasmas [1-4]. In agreement with earlier theoretical predictions, strong Parametric Decay Instability (PDI) has been observed at injected RF power levels in the range of 0.05-0.5 MW with corresponding electric fields of 10-30 kV/m [5,6]. The dominant driver of the PDI is the E×B and the polarization drift velocity, which can drive ion cyclotron quasi-modes and lower hybrid (or IBW) sideband waves unstable [5,6]. Initial experimental results have been obtained with powers up to 0.3 MW showing evidence of strong PDI measured with high-frequency one-turn magnetic probes located at both the outboard and the inboard wall at frequencies set by the usual selection rules [7,8]. Here we review the appropriate analytic formulation to predict such instabilities and present numerical evaluation of frequencies and growth rates relevant to DIII-D plasma parameters. We also assess the convective thresholds for the PDIs and compare them with experimental observations.
</summary>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental footprints of a water-rich depletion layer in the Herschel–Bulkley pipe flow of solidifying polyelectrolytes</title>
<link href="https://hdl.handle.net/1721.1/164814" rel="alternate"/>
<author>
<name>Nazari, B.</name>
</author>
<author>
<name>Moghimi, E.</name>
</author>
<author>
<name>Bousfield, D. W.</name>
</author>
<id>https://hdl.handle.net/1721.1/164814</id>
<updated>2026-03-08T03:40:12Z</updated>
<published>2023-01-27T00:00:00Z</published>
<summary type="text">Experimental footprints of a water-rich depletion layer in the Herschel–Bulkley pipe flow of solidifying polyelectrolytes
Nazari, B.; Moghimi, E.; Bousfield, D. W.
A fundamental understanding of the transition from fluid-like to gel-like behavior is critical for a range of applications including personal care, pharmaceuticals, food products, batteries, painting, biomaterials, and concrete. The pipe flow behavior of a Herschel–Bulkley fluid is examined by a combination of rheology, ultrasound imaging velocimetry, and pressure measurements together with modeling. The system is a solution of 0.50 wt. % polyelectrolytes of sulfated polysaccharides in water that solidifies on cooling. Fluids with different ionic strengths were pumped at various rates from a reservoir at 80 °C into a pipe submerged in a bath maintained at 20 °C. The fluid velocity, pressure drop ΔP, and temperature were monitored. The same quantities were extracted by solving continuity, energy, and momentum equations. Moreover, the modeling results demonstrate that the local pressure gradient along the pipe, dP/dx|x, is related to the local yield stress near the pipe wall, τy,wall|x, which explains the variations of dP/dx|x along the pipe. Experimental results show much lower values for ΔP compared to those from modeling. This discrepancy is exacerbated at higher ionic strengths and smaller flow rates, where the fluid shows a higher degree of solidification. The tabulated experimental ΔP data against the solidification onset length Lonset (where the fluid is cool enough to solidify), along with the ultrasound imaging velocimetry, associate these discrepancies between experiments and models with a depletion layer of ∼1 μm, reflecting the lubrication effects caused by the water layer at the wall.
</summary>
<dc:date>2023-01-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Radiation pressure of radio frequency waves on turbulence in edge plasmas</title>
<link href="https://hdl.handle.net/1721.1/164813" rel="alternate"/>
<author>
<name>Ram, Abhay K</name>
</author>
<author>
<name>Hizanidis, Kyriakos</name>
</author>
<id>https://hdl.handle.net/1721.1/164813</id>
<updated>2026-03-08T03:40:06Z</updated>
<published>2023-08-18T00:00:00Z</published>
<summary type="text">Radiation pressure of radio frequency waves on turbulence in edge plasmas
Ram, Abhay K; Hizanidis, Kyriakos
The scattering of radio frequency (RF) waves – lower hybrid and helicon waves – by a single cylindrical filament, embedded in a background plasma, is studied using a full-wave analytical theory. While a filament can affect the propagation of RF waves, the radiation force exerted by the waves can influence the filament. The force on a filament is determined using the Maxwell stress tensor. The radiation force can either pull the filament towards the RF source or push it away. The radiation force, in the two frequency ranges, is large enough to impact the motion of a filament and could be measured experimentally. Consequently, it may be possible to modify the edge turbulence using RF waves.
</summary>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Replies to Moran, Gallois, and Bar-On and Johnson</title>
<link href="https://hdl.handle.net/1721.1/164812" rel="alternate"/>
<author>
<name>Byrne, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/164812</id>
<updated>2026-03-08T03:40:05Z</updated>
<published>2026-01-04T00:00:00Z</published>
<summary type="text">Replies to Moran, Gallois, and Bar-On and Johnson
Byrne, Alex
I am very grateful to Dorit Bar-On, Drew Johnson, André Gallois, and Dick Moran for their thoughtful commentaries. Bar-On, Gallois, and Moran are discussed extensively in Transparency and Self-Knowledge (hereafter T&amp;SK), and their work has been an important source of inspiration for my own. In order to make my contribution to this symposium reasonably compact, I have not attempted to reply to every single point. (One especially notable omission is the alternative account of self-knowledge sketched by Bar-On and Johnson.) Instead, I have concentrated on the main objections.
</summary>
<dc:date>2026-01-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>The end of MAD? Technological innovation and the future of nuclear retaliatory capabilities</title>
<link href="https://hdl.handle.net/1721.1/164811" rel="alternate"/>
<author>
<name>Glaser, Charles L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164811</id>
<updated>2026-03-08T03:40:10Z</updated>
<published>2025-01-30T00:00:00Z</published>
<summary type="text">The end of MAD? Technological innovation and the future of nuclear retaliatory capabilities
Glaser, Charles L.
This article motivates the special issue, explaining the new debate over whether emerging technologies – including small satellites, machine learning, cyber weapons, and quantum technologies – will enable major powers to undermine each other’s nuclear retaliatory capabilities. The first article analyzes key relevant emerging technologies. Following articles explore how emerging technologies will influence the vulnerability of mobile missiles, ballistic missile submarines, and nuclear command-and-control, and the effectiveness of missile defenses against intercontinental-range missiles. The final article explores China’s views on the requirements of nuclear deterrence. Overall, the articles suggest that U.S. prospects for achieving a damage-limitation capability are poor and declining.
</summary>
<dc:date>2025-01-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prototyping longevity services: Tech-driven or human-assisted service?</title>
<link href="https://hdl.handle.net/1721.1/164810" rel="alternate"/>
<author>
<name>Lee, Sheng-Hung</name>
</author>
<author>
<name>Coughlin, Joseph F</name>
</author>
<author>
<name>Yang, Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/164810</id>
<updated>2026-03-08T03:40:05Z</updated>
<published>2025-05-04T00:00:00Z</published>
<summary type="text">Prototyping longevity services: Tech-driven or human-assisted service?
Lee, Sheng-Hung; Coughlin, Joseph F; Yang, Maria
The study investigates the design of longevity services through an experimental comparison of tech-driven and human-assisted service encounters, focusing on six key features: learnability, efficiency, safety, trustworthiness, confidence, and satisfaction. The controlled experiment, which involved 12 gender-balanced participants from Boston, USA, employed four qualitative methods, including surveys, the Think-aloud technique, semi-structured interviews, and transcript analysis supported by computer-assisted qualitative data analysis software (CAQDAS) and its AI-empowered coding function. The study concluded with two insights: 1. Tech-driven services can improve safety, trust, confidence, and satisfaction; and 2. both service encounters are context-sensitive, shaped by participants’ demographics, personality, culture, and environmental factors. Although the small sample size limits the study’s generalizability, the participants’ stories and perceptions offered valuable insights into their implicit needs and subtle behaviors in learning, experiencing, and addressing sensitive, private, and vulnerable topics related to longevity planning.
</summary>
<dc:date>2025-05-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Iraq Petroleum Company’s Infrastructure of “Desert Control” during the British Mandate in the Middle East</title>
<link href="https://hdl.handle.net/1721.1/164809" rel="alternate"/>
<author>
<name>Freeman, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/164809</id>
<updated>2026-03-08T03:40:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Iraq Petroleum Company’s Infrastructure of “Desert Control” during the British Mandate in the Middle East
Freeman, Margaret
This article discusses the infrastructure of the Iraq Petroleum Company (IPC) in the interwar British Mandatory Middle East as belonging to a larger British imperial project for “desert control” through architecture. Britain’s so-called “desert control” was, more accurately, a programme for control over the pastoralist Bedouin tribespeople who were the primary inhabitants of the Mandatory territories’ desert zones. This article identifies the two pillars of Britain’s “desert control” strategy: the use of Bedouin police forces, and the architectural annexation and restriction of water resources from Bedouin tribes. It argues that Mandate Britain’s “desert control” programme was replicated and adapted by the IPC for its own needs to protect its commercial infrastructural investment, the Iraq–Mediterranean Pipeline, in the British Mandatory territories. It compares two building typologies, the Mandate’s “desert outposts” and the IPC pipeline’s pumping stations, as sites where the Bedouin were alternately welcomed into and excluded from imperial and commercial projects in the interest of controlling them.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Surrogate modelling of surface roughness for asphalt pavements using artificial neural networks: a mechanistic-empirical approach</title>
<link href="https://hdl.handle.net/1721.1/164808" rel="alternate"/>
<author>
<name>Li, Haoran</name>
</author>
<author>
<name>AzariJafari, Hessam</name>
</author>
<author>
<name>Kirchain, Randolph</name>
</author>
<author>
<name>Santos, João</name>
</author>
<author>
<name>Khazanovich, Lev</name>
</author>
<id>https://hdl.handle.net/1721.1/164808</id>
<updated>2026-03-08T03:39:59Z</updated>
<published>2024-12-09T00:00:00Z</published>
<summary type="text">Surrogate modelling of surface roughness for asphalt pavements using artificial neural networks: a mechanistic-empirical approach
Li, Haoran; AzariJafari, Hessam; Kirchain, Randolph; Santos, João; Khazanovich, Lev
Pavement surface smoothness (or roughness) is crucial for traffic safety, driving comfort, and fuel efficiency. The International Roughness Index (IRI) is a widely applied roughness indicator, and accurate forecasting of the IRI and its deterioration is essential for the design, maintenance, and management of asphalt pavements. Previous studies have used field measurement data or AASHTOWare Pavement ME Design simulations for the development of machine learning (ML) models to streamline IRI modelling. However, these models frequently lack the accuracy and robustness of the measurement data or high-fidelity computational simulations they are intended to surrogate. To address this issue, we employed a new adaptive sampling technique to generate an informative yet efficient pavement damage database from Pavement ME simulations. Utilising Artificial Neural Networks (ANNs), we engineered two types of surrogate ML models: (a) Model I, an ANN-based IRI predictive model, and (b) Model II, a hybrid model combining ANN-based predictions of rutting, fatigue damage, and thermal cracking with closed-form relationships between these indicators and IRI. Our findings show that Model II outperforms Model I in IRI modelling accuracy both globally and locally. Moreover, Model II matches the IRI simulations of Pavement ME while providing enhanced efficiency and adaptability to a broader spectrum of design considerations.
</summary>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Honoring practices of community-based educators: lessons learned from the collaborative design of a creative mobile app</title>
<link href="https://hdl.handle.net/1721.1/164807" rel="alternate"/>
<author>
<name>Rusk, Natalie</name>
</author>
<author>
<name>Jain, Rupal</name>
</author>
<author>
<name>Martin, Caitlin K.</name>
</author>
<author>
<name>Roque, Ricarose</name>
</author>
<author>
<name>Freitas, João Adriano</name>
</author>
<author>
<name>Molaodi, Linford</name>
</author>
<id>https://hdl.handle.net/1721.1/164807</id>
<updated>2026-03-08T03:39:52Z</updated>
<published>2024-11-29T00:00:00Z</published>
<summary type="text">Honoring practices of community-based educators: lessons learned from the collaborative design of a creative mobile app
Rusk, Natalie; Jain, Rupal; Martin, Caitlin K.; Roque, Ricarose; Freitas, João Adriano; Molaodi, Linford
This paper shares reflections and stories from a collaborative design process between the Lifelong Kindergarten group at the MIT Media Lab and a global network of community-based educators to develop a creative coding app called OctoStudio, which supports children and families to create and share interactive projects on mobile devices. The app design is grounded in practices that community-based educators who are primarily from the Global South have developed around strengths, needs, and interests of children and their communities, as well as constraints and affordances of local infrastructure. We use the lens of minimal computing – which focuses on community context and constraints in decisions about technology – to describe our collaborative work on OctoStudio. We describe trade-offs involved in the design decisions, and highlight insights from the process of collaboration to develop tools and practices that are more responsive and meaningful to communities who are often excluded from design decisions that impact them.
</summary>
<dc:date>2024-11-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expected Constant Round Byzantine Broadcast under Dishonest Majority</title>
<link href="https://hdl.handle.net/1721.1/164806" rel="alternate"/>
<author>
<name>Wan, Jun</name>
</author>
<author>
<name>Xiao, Hanshen</name>
</author>
<author>
<name>Shi, Elaine</name>
</author>
<author>
<name>Devadas, Srinivas</name>
</author>
<id>https://hdl.handle.net/1721.1/164806</id>
<updated>2026-02-12T03:06:52Z</updated>
<summary type="text">Expected Constant Round Byzantine Broadcast under Dishonest Majority
Wan, Jun; Xiao, Hanshen; Shi, Elaine; Devadas, Srinivas
Byzantine Broadcast (BB) is a central question in distributed systems, and an important challenge is to understand its round complexity. Under the honest majority setting, it has long been known that there exist randomized protocols that can achieve BB in expected constant rounds, regardless of the number of nodes n. However, whether we can match the expected constant round complexity in the corrupt majority setting --- or more precisely, when f &gt; n/2 --- remains unknown, where f denotes the number of corrupt nodes. In this paper, we are the first to resolve this long-standing question. We show how to achieve BB in expected O((n/(n-f))^2) rounds. Our results hold under a weakly adaptive adversary who cannot perform "after-the-fact removal" of messages already sent by a node before it becomes corrupt. We also assume trusted setup and the Decision Linear (DLIN) assumption in bilinear groups.
</summary>
</entry>
<entry>
<title>Minimum Plane Bichromatic Spanning Trees</title>
<link href="https://hdl.handle.net/1721.1/164805" rel="alternate"/>
<author>
<name>Akitaya, Hugo</name>
</author>
<author>
<name>Biniaz, Ahmad</name>
</author>
<author>
<name>Demaine, Erik</name>
</author>
<author>
<name>Kleist, Linda</name>
</author>
<author>
<name>Stock, Frederick</name>
</author>
<author>
<name>Tóth, Csaba D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164805</id>
<updated>2026-02-12T03:06:50Z</updated>
<summary type="text">Minimum Plane Bichromatic Spanning Trees
Akitaya, Hugo; Biniaz, Ahmad; Demaine, Erik; Kleist, Linda; Stock, Frederick; Tóth, Csaba D.
For a set of red and blue points in the plane, a minimum bichromatic spanning tree (MinBST) is a shortest spanning tree of the points such that every edge has a red and a blue endpoint. A MinBST can be computed in O(n log n) time, where n is the number of points. In contrast to the standard Euclidean MST, which is always plane (noncrossing), a MinBST may have edges that cross each other. However, we prove that a MinBST is quasi-plane, that is, it does not contain three pairwise crossing edges, and we determine the maximum number of crossings. Moreover, we study the problem of finding a minimum plane bichromatic spanning tree (MinPBST), which is a shortest bichromatic spanning tree with pairwise noncrossing edges. This problem is known to be NP-hard. The previous best approximation algorithm, due to Borgelt et al. (2009), has a ratio of O(sqrt(n)). It is also known that the optimum solution can be computed in polynomial time in some special cases, for instance, when the points are in convex position, collinear, semi-collinear, or when one color class has constant size. We present an O(log n)-factor approximation algorithm for the general case.
</summary>
</entry>
<entry>
<title>Nested Dissection Meets IPMs: Planar Min-Cost Flow in Nearly-Linear Time</title>
<link href="https://hdl.handle.net/1721.1/164804" rel="alternate"/>
<author>
<name>Dong, Sally</name>
</author>
<author>
<name>Gao, Yu</name>
</author>
<author>
<name>Goranci, Gramoz</name>
</author>
<author>
<name>Lee, Yin Tat</name>
</author>
<author>
<name>Sachdeva, Sushant</name>
</author>
<author>
<name>Peng, Richard</name>
</author>
<author>
<name>Ye, Guanghao</name>
</author>
<id>https://hdl.handle.net/1721.1/164804</id>
<updated>2026-02-12T03:07:19Z</updated>
<published>2025-07-26T00:00:00Z</published>
<summary type="text">Nested Dissection Meets IPMs: Planar Min-Cost Flow in Nearly-Linear Time
Dong, Sally; Gao, Yu; Goranci, Gramoz; Lee, Yin Tat; Sachdeva, Sushant; Peng, Richard; Ye, Guanghao
We present a nearly-linear time algorithm for finding a minimum-cost flow in planar graphs with polynomially bounded integer costs and capacities. The previous fastest algorithm for this problem is based on interior point methods (IPMs) and works for general sparse graphs in O(n^1.5 polylog n) time [Daitch-Spielman, STOC'08]. Intuitively, Ω(n^1.5) is a natural runtime barrier for IPM-based methods, since they require sqrt(n) iterations, each routing a possibly-dense electrical flow. To break this barrier, we develop a new implicit representation for flows based on generalized nested dissection [Lipton-Rose-Tarjan, SINUM'79] and approximate Schur complements [Kyng-Sachdeva, FOCS'16]. This implicit representation permits us to design a data structure to route an electrical flow with sparse demands in roughly sqrt(n) update time, resulting in a total runtime of O(n polylog n). Our results immediately extend to all families of separable graphs.
</summary>
<dc:date>2025-07-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Electric Vehicle Security and Privacy through Decentralized Identity Management</title>
<link href="https://hdl.handle.net/1721.1/164803" rel="alternate"/>
<author>
<name>Aydeger, Abdullah</name>
</author>
<author>
<name>Zeydan, Engin</name>
</author>
<author>
<name>Mangues-Bafalluy, Josep</name>
</author>
<author>
<name>Arslan, Suayb</name>
</author>
<author>
<name>Turk, Yekta</name>
</author>
<id>https://hdl.handle.net/1721.1/164803</id>
<updated>2026-02-12T03:06:49Z</updated>
<published>2025-09-12T00:00:00Z</published>
<summary type="text">Enhancing Electric Vehicle Security and Privacy through Decentralized Identity Management
Aydeger, Abdullah; Zeydan, Engin; Mangues-Bafalluy, Josep; Arslan, Suayb; Turk, Yekta
In the next decade, electric vehicles (EVs) are expected to contribute to reducing climate change and transforming road mobility significantly. However, the security and privacy of EV charging systems present considerable challenges that need to be addressed. This paper introduces a novel approach by integrating blockchain-based self-sovereign identity (SSI) to enhance the security and privacy of EV charging systems. By leveraging the decentralized and immutable nature of blockchain, the proposed SSI framework can ensure secure and private data exchanges between EVs, charging stations, and backend systems. This three-way integration addresses the vulnerabilities identified in existing EV charging methods, such as conductive, inductive, and battery swapping, and complies with cybersecurity regulations like UNECE R155. This paper provides a comprehensive analysis, practical case study, and evaluation of the security and privacy enhancements achieved through the proposed SSI framework, offering valuable insights for industry professionals and researchers. We have conducted extensive end-to-end testing to evaluate the performance of our blockchain-based SSI framework in the EV charging ecosystem, focusing on identity verification, credential management and service orchestration. The results show that the system enables fast wallet creation, efficient metadata retrieval and low-latency service deployment, ensuring seamless identity management and service orchestration.
</summary>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Local Distributed Rounding: Generalized to MIS, Matching, Set Cover, and Beyond</title>
<link href="https://hdl.handle.net/1721.1/164802" rel="alternate"/>
<author>
<name>Faour, Salwa</name>
</author>
<author>
<name>Ghaffari, Mohsen</name>
</author>
<author>
<name>Grunau, Christoph</name>
</author>
<author>
<name>Kuhn, Fabian</name>
</author>
<author>
<name>Rozhoň, Václav</name>
</author>
<id>https://hdl.handle.net/1721.1/164802</id>
<updated>2026-02-12T03:06:46Z</updated>
<summary type="text">Local Distributed Rounding: Generalized to MIS, Matching, Set Cover, and Beyond
Faour, Salwa; Ghaffari, Mohsen; Grunau, Christoph; Kuhn, Fabian; Rozhoň, Václav
We develop a general deterministic distributed method for locally rounding fractional solutions of graph problems for which the analysis can be broken down into analyzing pairs of vertices. Roughly speaking, the method can transform fractional/probabilistic label assignments of the vertices into integral/deterministic label assignments for the vertices, while approximately preserving a potential function that is a linear combination of functions, each of which depends on at most two vertices (subject to some conditions usually satisfied in pairwise analyses). The method unifies and significantly generalizes prior work on deterministic local rounding techniques [Ghaffari, Kuhn FOCS'21; Harris FOCS'19; Fischer, Ghaffari, Kuhn FOCS'17; Fischer DISC'17] to obtain polylogarithmic-time deterministic distributed solutions for combinatorial graph problems. Our general rounding result enables us to locally and efficiently derandomize a range of distributed algorithms for local graph problems, including maximal independent set (MIS), maximum-weight independent set approximation, and minimum-cost set cover approximation.
</summary>
</entry>
<entry>
<title>Feasibility Study on Heat Pipes for Radio Frequency Antennas</title>
<link href="https://hdl.handle.net/1721.1/164801" rel="alternate"/>
<author>
<name>Jung, Minuk</name>
</author>
<author>
<name>Watterson, Amy</name>
</author>
<author>
<name>Wallace, Gregory M</name>
</author>
<id>https://hdl.handle.net/1721.1/164801</id>
<updated>2026-02-12T03:07:33Z</updated>
<published>2026-02-17T00:00:00Z</published>
<summary type="text">Feasibility Study on Heat Pipes for Radio Frequency Antennas
Jung, Minuk; Watterson, Amy; Wallace, Gregory M
The applicability of a heat pipe is investigated for the cooling of radio frequency antennas in fusion reactors operating at high temperatures. A heat pipe is a passive cooling device that transfers a large amount of heat through the liquid-vapor phase change and pumps the working fluid by the surface tension of the wick structure without moving parts. As the heat pipe is expected to operate near 1000 K, refractory metals or ceramics should be used for wall materials, and liquid metals are primarily considered as the working fluid. However, liquid metals are electrically conductive, and the strong magnetic field perpendicular to the flow direction imposes significant magnetohydrodynamic (MHD) flow resistance in addition to viscous friction, which impairs heat transfer performance. Since a strong magnetic field is inevitable in magnetic confinement fusion reactors, materials with low electrical conductivity should be applied to wall coatings to reduce the MHD effect. Heat flux limitations at a magnetic field of 10 T and a condenser coolant temperature of 773 K are estimated using COMSOL Multiphysics, which can capture the fully developed MHD wick flow, laminar/turbulent vapor flow, and heat transfer simultaneously. For simplicity, the generic heat pipe geometry of a straight horizontal cylinder with a length of 2 ft (0.6096 m) is employed. Optimal geometrical parameters are evaluated to meet radial evaporator/condenser heat fluxes greater than 0.1 MW/m², even under a strong MHD effect.
</summary>
<dc:date>2026-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regional incidence and persistence of high-growth firms: testing ideas from the entrepreneurial ecosystems literature</title>
<link href="https://hdl.handle.net/1721.1/164800" rel="alternate"/>
<author>
<name>Coad, Alex</name>
</author>
<author>
<name>Domnick, Clemens</name>
</author>
<author>
<name>Santoleri, Pietro</name>
</author>
<author>
<name>Srhoj, Stjepan</name>
</author>
<id>https://hdl.handle.net/1721.1/164800</id>
<updated>2026-02-12T03:07:25Z</updated>
<published>2025-01-08T00:00:00Z</published>
<summary type="text">Regional incidence and persistence of high-growth firms: testing ideas from the entrepreneurial ecosystems literature
Coad, Alex; Domnick, Clemens; Santoleri, Pietro; Srhoj, Stjepan
Policymakers and scholars often assume that a higher incidence of high-growth firms (HGFs) is synonymous with vibrant regional economic dynamics, and that HGF shares are persistent over time as entrepreneurial ecosystems (EEs) have slowly changing features. In this paper we test these hypotheses, which are deeply rooted in the EE literature. Results do not provide strong support for the hypothesis that more developed regions feature higher HGF shares. We do find evidence consistent with HGF shares displaying persistency over time. However, we show that more developed regions do not have higher persistence in their HGF shares, and that the strength in persistence does not increase across the HGFs distribution, which does not support path-dependency as the main mechanism behind the observed persistence. Overall, we call for a more nuanced interpretation of both regional HGF shares and the EEs literature.
</summary>
<dc:date>2025-01-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knives out: response to critics</title>
<link href="https://hdl.handle.net/1721.1/164799" rel="alternate"/>
<author>
<name>Khoo, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/164799</id>
<updated>2026-02-12T03:07:34Z</updated>
<published>2025-08-09T00:00:00Z</published>
<summary type="text">Knives out: response to critics
Khoo, Justin
Writing a book can feel like a solitary endeavor. You labor for (in my case) years, sometimes talking about parts of the project with others, but mostly toiling alone to work out the consequences of commitments you made months and years prior. I'm grateful for the opportunity to engage with three brilliant interlocutors about these ideas, which for so long seemed to matter to no one besides myself (and maybe my publisher).
</summary>
<dc:date>2025-08-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Ontological Alignment: Coordinating Student Ideas with the Representational System of a Computational Modeling Unit for Science Learning</title>
<link href="https://hdl.handle.net/1721.1/164798" rel="alternate"/>
<author>
<name>Wagh, Aditi</name>
</author>
<author>
<name>Rosenbaum, Leah F.</name>
</author>
<author>
<name>Fuhrmann, Tamar</name>
</author>
<author>
<name>Eloy, Adelmo</name>
</author>
<author>
<name>Blikstein, Paulo</name>
</author>
<author>
<name>Wilkerson, Michelle</name>
</author>
<id>https://hdl.handle.net/1721.1/164798</id>
<updated>2026-02-12T03:07:10Z</updated>
<published>2024-11-18T00:00:00Z</published>
<summary type="text">Toward Ontological Alignment: Coordinating Student Ideas with the Representational System of a Computational Modeling Unit for Science Learning
Wagh, Aditi; Rosenbaum, Leah F.; Fuhrmann, Tamar; Eloy, Adelmo; Blikstein, Paulo; Wilkerson, Michelle
Computational modeling tools present unique opportunities and challenges for student learning. Each tool has a representational system that impacts the kinds of explorations students engage in. Inquiry aligned with a tool’s representational system can support more productive engagement toward target learning goals. However, little research has examined how teachers can make visible the ways students’ ideas about a phenomenon can be expressed and explored within a tool’s representational system. In this paper, we elaborate on the construct of ontological alignment—that is, identifying and leveraging points of resonance between students’ existing ideas and the representational system of a tool. Using interaction analysis, we identify alignment practices adopted by a science teacher and her students in a computational agent-based modeling unit. Specifically, we describe three practices: (1) Elevating student ideas relevant to the tool’s representational system; (2) Exploring and testing links between students’ conceptual and computational models; and (3) Drawing on evidence resonant with the tool’s representational system to differentiate between theories. Finally, we discuss the pedagogical value of ontological alignment as a way to leverage students’ ideas in alignment with a tool’s representational system and suggest the presented practices as exemplary ways to support students’ computational modeling for science learning.
</summary>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stand Up and Split. Desiring Desertion in Jean Giono and Emmanuelle Lambert</title>
<link href="https://hdl.handle.net/1721.1/164797" rel="alternate"/>
<author>
<name>Perreau, Bruno</name>
</author>
<id>https://hdl.handle.net/1721.1/164797</id>
<updated>2026-02-12T03:07:22Z</updated>
<published>2024-10-19T00:00:00Z</published>
<summary type="text">Stand Up and Split. Desiring Desertion in Jean Giono and Emmanuelle Lambert
Perreau, Bruno
To face the powers that be, contemporary French writer Virginie Despentes proposes a straightforward solution: “stand up and split!” But where to go and with whom? How do we stop the proliferation of contested norms if we clear the decks? In a context of ecological crisis, desiring desertion is not rare even if we have only one world to inhabit. This article analyzes the desire to desert from two texts: Le Déserteur et autres récits (1966 [1973]) by Jean Giono and La Désertion (2018a) by Emmanuelle Lambert. It demonstrates that desertion does not make a clean sweep of the past but rather accepts the desert at the heart of existence. That is, both presence and disappearance.
</summary>
<dc:date>2024-10-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fuel Behavior Implications of Reactor Design Choices in Pressurized Water SMRs</title>
<link href="https://hdl.handle.net/1721.1/164796" rel="alternate"/>
<author>
<name>Halimi, Assil</name>
</author>
<author>
<name>Shirvan, Koroush</name>
</author>
<id>https://hdl.handle.net/1721.1/164796</id>
<updated>2026-02-12T03:07:18Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">Fuel Behavior Implications of Reactor Design Choices in Pressurized Water SMRs
Halimi, Assil; Shirvan, Koroush
Small pressurized water reactors can feature boron-free operation, natural circulation mode, reduced-height assemblies, and/or long refueling cycles. This paper attempts to explore core design optimization for each of these design evolutions. In consequence, five core design layouts are developed incorporating boron-free operation with continuous control rod insertion, natural circulation with low burnup/low power density design, natural circulation with high burnup/low power density design, forced circulation with standard core power density design, and forced circulation with high power density design. These cores’ performance is compared to a standard four-loop pressurized water reactor. The design process aims to improve the fuel cycle cost under safety constraints through core design optimization using the CASMO4E/SIMULATE3 reactor physics codes and the FRAPCON4.1 fuel performance assessment tool. Core modeling assumes standard 17×17 PWR fuel assemblies loaded with low enriched uranium up to 5 wt% or low enriched uranium plus (i.e. below 10 wt% enrichment) pellets with gadolinium oxide as the burnable poison. Satisfactory core and fuel performances are obtained for all the designed cores under steady state and considered overpower transients. For low power density operation, long cycle lengths are achieved reaching 2.5-year and 5-year cycles, and peak rod-average burnup is pushed to 83 MWd/kgU. Other cycle lengths are maintained at 18 months. Boron-free operation exhibits the ability to achieve longer cycle lengths at the cost of higher peaking factors leading to high local power and fuel temperatures, which prevents sizable power uprates and is deemed uneconomical. Fuel assembly height reduction allows coolant velocity retrofit, which enables higher core power density without violating the structural integrity of the fuel assembly. As a result, a core power density of 123 kW/L is reached where total cladding hoop strain becomes the limiting parameter.
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shades of authoritarian digital sovereignty: divergences in Russian and Chinese data localisation regimes</title>
<link href="https://hdl.handle.net/1721.1/164795" rel="alternate"/>
<author>
<name>Khasanova, Liliya</name>
</author>
<author>
<name>Tai, Katharin</name>
</author>
<id>https://hdl.handle.net/1721.1/164795</id>
<updated>2026-02-12T03:07:21Z</updated>
<published>2024-01-02T00:00:00Z</published>
<summary type="text">Shades of authoritarian digital sovereignty: divergences in Russian and Chinese data localisation regimes
Khasanova, Liliya; Tai, Katharin
The concept of sovereignty is now referred to in cyberspace-related policy by a range of governments, both authoritarian and democratic. At the same time, the most prominent proponents of state – or sovereignty-centric models of internet governance are Russia and China, whose positions are often characterised as a shared ‘Sino-Russian’ model. This paper subjects this idea of a shared Sino-Russian approach to empirical scrutiny by conducting a comparative analysis of rules, regulations and policies on data localisation in both countries. By delimiting the research question to regulations on data localisation and cross-border data transfers in both countries, we identify an important set of similarities and differences between the Russian and Chinese approaches. They share some features associated with authoritarian regimes, such as uncertainty around the selective enforcement of broadly formulated rules and a centralised assessment of outbound data transfers. However, we also find significant differences in the level of institutional centralisation, degrees of responsiveness within the policymaking process, and economic logics driving data localisation and cross-border transfer regulations. Based on these findings, we argue that despite a perception that Russia and China adhere to a similar model of authoritarian digital sovereignty, there are significant disparities in their data localisation regimes.
</summary>
<dc:date>2024-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Race, profit, and algorithms: Neighborhood-level analysis of iBuyers’ profit margin</title>
<link href="https://hdl.handle.net/1721.1/164794" rel="alternate"/>
<author>
<name>So, Wonyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/164794</id>
<updated>2026-02-12T03:07:08Z</updated>
<published>2024-10-29T00:00:00Z</published>
<summary type="text">Race, profit, and algorithms: Neighborhood-level analysis of iBuyers’ profit margin
So, Wonyoung
iBuyers are firms that use automated valuation models (AVMs), streamline home buying processes, and provide all-cash offers to purchase homes. Although the previous literature has explored the roles and limitations of iBuyers in the housing market, empirical research on the racial implications of these algorithmic home buying processes remains understudied. Using a spatial lag model, this study shows the spatial clustering of iBuyer profit margins, that iBuyers gain more profits when they resell to individuals than institutions, and that some iBuyers have a statistically significant correlation between their profit margins and the proportion of marginalized racial groups within a census tract, while controlling for individual housing characteristics, neighborhood housing quality and demand, and neighborhood amenities and socioeconomic factors. These findings suggest that the more adeptly iBuyers can forecast housing values, the greater the potential to maximize profits from homeowners in communities of color. Consequently, this research contributes to the understanding of how technological mechanisms operate within a purportedly race-neutral framework and advocates for the development and deployment of algorithmic systems guided by the principles of antisubordination, rather than relying solely on notions of “fairness” and anticlassification.
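For readers unfamiliar with the estimator named above, a spatial lag model has the standard textbook form (conventional notation, assumed here rather than taken from the paper):

```latex
% y:    vector of tract-level outcomes (here, iBuyer profit margins)
% W:    spatial weights matrix encoding neighborhood adjacency
% rho:  spatial autoregressive coefficient (spatial clustering of margins)
% X b:  housing, neighborhood-quality, and socioeconomic controls
y = \rho W y + X\beta + \varepsilon
```

A significantly positive rho is what "spatial clustering of iBuyer profit margins" corresponds to in this specification.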
</summary>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Powering Through the Turn: Finding Time for Concept Exploration Before Industry Stagnation</title>
<link href="https://hdl.handle.net/1721.1/164793" rel="alternate"/>
<author>
<name>Noble, Connery</name>
</author>
<author>
<name>Cameron, Bruce G</name>
</author>
<id>https://hdl.handle.net/1721.1/164793</id>
<updated>2026-02-12T03:07:29Z</updated>
<published>2025-03-15T00:00:00Z</published>
<summary type="text">Powering Through the Turn: Finding Time for Concept Exploration Before Industry Stagnation
Noble, Connery; Cameron, Bruce G
This study examines how the tension between exploration and exploitation affects early-stage development within the engineering teams of large corporations. Using survey data collected from over 900 system engineers and managers, it was observed that exploration decreased as an organization’s market growth declined, but dire market projections prompted a refocus on exploration. In addition, engineers routinely desire more concept exploration time than they perceive that they have available. The authors argue that engineering teams should more intentionally consider their innovation strategy, and that companies with stagnant market growth should invest in concept exploration before they get to a period of market decline.
</summary>
<dc:date>2025-03-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vital Biodiversity Systems: A Companion Paper</title>
<link href="https://hdl.handle.net/1721.1/164792" rel="alternate"/>
<author>
<name>Westerlaken, Michelle</name>
</author>
<author>
<name>Bischoff, Amanda</name>
</author>
<author>
<name>Mertens, Krishen</name>
</author>
<author>
<name>Pertusa, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/164792</id>
<updated>2026-02-12T03:01:00Z</updated>
<published>2026-02-11T00:00:00Z</published>
<summary type="text">Vital Biodiversity Systems: A Companion Paper
Westerlaken, Michelle; Bischoff, Amanda; Mertens, Krishen; Pertusa, Alejandro
Regenerative and diverse ecosystems are essential to living futures. Healthy ecosystems are more resilient to climate change and are better able to absorb and store carbon. Communities and corporations worldwide are currently establishing how environmental data can best support these processes. This Companion Paper provides the rationale for the Design Brief and synthesizes findings from four years of research across academia, corporate sustainability teams, and community stakeholders. It argues that biodiversity data systems are not neutral repositories but designed artefacts that embed assumptions and values. To redirect innovation, the paper supports the Brief by expanding on its key design principles, criteria, constraints, and propositions that together chart a pathway for ‘vital biodiversity systems’: platforms that embed the aliveness of the ecosystems they mediate.
</summary>
<dc:date>2026-02-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular dynamics simulations and structural bioinformatics of bacterial integral alpha-helical membrane enzymes and their AlphaFold2-predicted water-soluble QTY analogues</title>
<link href="https://hdl.handle.net/1721.1/164791" rel="alternate"/>
<author>
<name>Sajeev-Sheeja, Akash</name>
</author>
<author>
<name>Karagöl, Alper</name>
</author>
<author>
<name>Karagöl, Taner</name>
</author>
<author>
<name>Zhang, Shuguang</name>
</author>
<id>https://hdl.handle.net/1721.1/164791</id>
<updated>2026-02-12T03:07:30Z</updated>
<published>2025-09-29T00:00:00Z</published>
<summary type="text">Molecular dynamics simulations and structural bioinformatics of bacterial integral alpha-helical membrane enzymes and their AlphaFold2-predicted water-soluble QTY analogues
Sajeev-Sheeja, Akash; Karagöl, Alper; Karagöl, Taner; Zhang, Shuguang
The study of integral membrane proteins has long been challenging because of their poor solubility in aqueous environments. We previously used the QTY code to enhance the hydrophilicity of alpha-helices, beta-barrels, and monoclonal antibodies by systematically pairwise replacing the hydrophobic amino acids L (leucine) with Q (glutamine), V (valine)/I (isoleucine) with T (threonine), and F (phenylalanine) with Y (tyrosine). The superposed AlphaFold2-predicted structures of alpha-helical transmembrane enzyme variants with &gt;41% amino acid substitutions displayed remarkable similarity to native structures (RMSD 0.3-0.7 Å). We conducted molecular dynamics (MD) simulations, which revealed that, even in the absence of a lipid bilayer, the QTY-modified enzymes retained stable dynamics comparable to their membrane-bound forms. Root mean square fluctuation (RMSF) values remained below 2 Å across the transmembrane and core regions, and residue-wise root mean square deviation (RMSD) values were minimal (&lt;3 Å), indicating that the structural integrity of the protein core was largely preserved. These results suggest that the QTY variants, designed for soluble environments, effectively mimic the stability and conformational rigidity of natural membrane-bound enzymes. Our findings show that the QTY code is a simple method for designing water-soluble membrane protein enzymes in different biological scenarios, and it may encourage further experiments to validate our structural bioinformatics research.
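The substitution rule described above is concrete enough to sketch in code; a minimal illustration (the helper name is hypothetical, not the authors' software):

```python
# Sketch of the QTY code: pairwise replace hydrophobic residues,
# L (leucine) with Q (glutamine), V (valine) and I (isoleucine)
# with T (threonine), and F (phenylalanine) with Y (tyrosine).
QTY_TABLE = str.maketrans({"L": "Q", "V": "T", "I": "T", "F": "Y"})

def qty_variant(seq):
    """Return the water-soluble QTY analogue of a one-letter sequence."""
    return seq.upper().translate(QTY_TABLE)

print(qty_variant("MLVIF"))  # prints MQTTY
```

All other residues pass through unchanged, which is why the predicted QTY structures can superpose so closely on the native folds.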
</summary>
<dc:date>2025-09-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>City of ‘social saints’: the role of place in driving impact entrepreneurship in Turin, Italy</title>
<link href="https://hdl.handle.net/1721.1/164790" rel="alternate"/>
<author>
<name>Burke, Mary Kathleen</name>
</author>
<author>
<name>Sydow, Alisa</name>
</author>
<author>
<name>Torchia, Daniel</name>
</author>
<author>
<name>Corazza, Laura</name>
</author>
<id>https://hdl.handle.net/1721.1/164790</id>
<updated>2026-02-12T03:07:27Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">City of ‘social saints’: the role of place in driving impact entrepreneurship in Turin, Italy
Burke, Mary Kathleen; Sydow, Alisa; Torchia, Daniel; Corazza, Laura
This paper theorizes impact entrepreneurship (IE) in relation to place by examining dynamics at the individual, community, and organizational levels. While existing IE literature emphasizes entrepreneurship aimed at addressing grand challenges, it often adopts an aggregate view that overlooks how locally embedded entrepreneurs access and mobilize social and economic resources. We introduce a novel, multidimensional framework to show how sense of place, community embeddedness and IE interrelate to shape approaches to current social/environmental challenges. Adopting a qualitative approach, this paper investigates how different actors in Turin, Italy, contribute to IE through building on a legacy of social sector institutions. We find that individuals identifying with a place-based vocation of social impact find communities with a shared volition to work together and across organizations. We contribute to understanding how individuals’ senses of place can be leveraged into wider community efforts to support IE in the region. The paper advances the IE concept to account for the individual perspectives influencing local organizing practices and visions for IE rooted in place.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plane Delivery: Towards a Physical Grammar for Large-Scale Digital Fabrication</title>
<link href="https://hdl.handle.net/1721.1/164789" rel="alternate"/>
<author>
<name>Sass, Lawrence</name>
</author>
<id>https://hdl.handle.net/1721.1/164789</id>
<updated>2026-02-12T03:07:12Z</updated>
<published>2025-07-03T00:00:00Z</published>
<summary type="text">Plane Delivery: Towards a Physical Grammar for Large-Scale Digital Fabrication
Sass, Lawrence
There will come a day when computers and robots will participate regularly in designing, fabricating, and delivering homes as customized kits of parts (Sass 2008). They will not replace builders. Instead, one possible future is where computers and robots operate as intelligent assistants, discovering, reasoning, and inferring the best solutions using large language models (LLMs). This language will be vector-based, built on the points, lines, and planes of the type Stiny described (Stiny 2006). A standard design and builder language is a first step towards automation. The proposed system is a Lego-style approach to physical house production, used to manage costs, enhance design variety, improve design quality, and, most importantly, facilitate building.
</summary>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-statecraft and industrial strategy: semiconductor development in Arizona</title>
<link href="https://hdl.handle.net/1721.1/164788" rel="alternate"/>
<author>
<name>Kollar, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/164788</id>
<updated>2026-02-12T03:07:24Z</updated>
<published>2025-05-27T00:00:00Z</published>
<summary type="text">Techno-statecraft and industrial strategy: semiconductor development in Arizona
Kollar, Justin
The resurgence of U.S. industrial strategy discourse is not a centralised return of the state but a territorially fragmented form of techno-statecraft. This article analyzes Arizona's semiconductor expansion as a case in which subnational actors – agencies, utilities, universities, and developers – mobilise infrastructure, land-use policy, and regulatory coordination to attract global capital. Rather than a coherent national plan, Arizona's strategy reflects speculative governance oriented toward risk absorption and territorial readiness. The article situates this conjuncture within longer histories of militarised growth and infrastructural overbuild, contributing to debates on state capitalism, industrial strategy, and the spatial politics of techno-industrial transformation.
</summary>
<dc:date>2025-05-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ideology, Equity, and Structure: Comments on Tzu-wei Hung’s ‘Equity and Marxist Buddhism’</title>
<link href="https://hdl.handle.net/1721.1/164787" rel="alternate"/>
<author>
<name>Haslanger, Sally</name>
</author>
<id>https://hdl.handle.net/1721.1/164787</id>
<updated>2026-02-12T03:07:35Z</updated>
<published>2024-10-01T00:00:00Z</published>
<summary type="text">Ideology, Equity, and Structure: Comments on Tzu-wei Hung’s ‘Equity and Marxist Buddhism’
Haslanger, Sally
In his essay, ‘Equity and Marxist Buddhism’, Tzu-wei Hung argues that Marxist Buddhism brings a commitment to social justice together with a distinctive form of virtue theory. In my commentary, I raise several questions from a Marxian perspective: (1) Might it be argued that Marxist Buddhism is (in the critical sense) ideological (similar to religion) because the spiritual goal of ‘transcendence’ distracts us from the need to fight for emancipation? (2) Can justice as equity be achieved by promoting individual altruism? (3) Aren’t both mainstream accounts of justice and Marxist Buddhism aspirational and so need to rely on non-ideal theory to achieve justice?
</summary>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A data-driven and context-aware approach for demand forecasting in the beverage industry</title>
<link href="https://hdl.handle.net/1721.1/164786" rel="alternate"/>
<author>
<name>Ma, Benedict Jun</name>
</author>
<author>
<name>Jackson, Ilya</name>
</author>
<author>
<name>Huang, Maggie</name>
</author>
<author>
<name>Villegas, Sebastian</name>
</author>
<author>
<name>Macias-Aguayo, Jaime</name>
</author>
<id>https://hdl.handle.net/1721.1/164786</id>
<updated>2026-02-12T03:07:07Z</updated>
<published>2025-10-10T00:00:00Z</published>
<summary type="text">A data-driven and context-aware approach for demand forecasting in the beverage industry
Ma, Benedict Jun; Jackson, Ilya; Huang, Maggie; Villegas, Sebastian; Macias-Aguayo, Jaime
Accurate demand forecasting is essential for logistics and supply chain management as it enables efficient inventory planning, reduces operational costs, and ensures high service levels across the network. However, in practice, the diverse demand patterns of items make this task challenging, and a one-size-fits-all forecasting approach is inadequate. This paper proposes a data-driven and context-aware forecasting framework and tests it using both endogenous data from a large private-label beverage manufacturer and exogenous features (such as holidays and temperature). Our method begins by classifying SKUs based on demand volume, volatility, and intermittency, and then refines the derived clusters by taking the volume distribution into account. In total, we obtain four distinct clusters: (i) stable and high volume, (ii) stable with low volume, (iii) erratic and intermittent, and (iv) lumpy. To explore the appropriate forecasting models for different demand patterns, we employ statistical models (exponential smoothing, ARIMA, and Croston), machine learning models (XGBoost), deep learning models (TiDE and N-BEATS), and even qualitative approaches such as collaborative planning, forecasting, and replenishment (CPFR). Our experimental results suggest which forecasting models are recommended for each demand pattern, and insightful implications are provided for managers.
</summary>
<dc:date>2025-10-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial Network Optimization under Bandit Feedback: Maximizing Utility in Non-Stationary Multi-Hop Networks</title>
<link href="https://hdl.handle.net/1721.1/164785" rel="alternate"/>
<author>
<name>Dai, Yan</name>
</author>
<author>
<name>Huang, Longbo</name>
</author>
<id>https://hdl.handle.net/1721.1/164785</id>
<updated>2026-02-11T04:36:10Z</updated>
<published>2025-06-09T00:00:00Z</published>
<summary type="text">Adversarial Network Optimization under Bandit Feedback: Maximizing Utility in Non-Stationary Multi-Hop Networks
Dai, Yan; Huang, Longbo
Stochastic Network Optimization (SNO) concerns scheduling in stochastic queueing systems and has been widely studied in network theory. Classical SNO algorithms require network conditions to be stationary w.r.t. time, which fails to capture the non-stationary components in increasingly many real-world scenarios. Moreover, most existing algorithms in network optimization assume perfect knowledge of network conditions before decision-making, which again rules out applications where unpredictability in network conditions is present.&#13;
Motivated by these issues, this paper considers Adversarial Network Optimization (ANO) under bandit feedback. Specifically, we consider the task of i) maximizing some unknown and time-varying utility function associated with scheduler's actions, where ii) the underlying network topology is a non-stationary multi-hop network whose conditions change arbitrarily with time, and iii) only bandit feedback (the effect of actually deployed actions) is revealed after decision-making. We propose the UMO2 algorithm, which does not require any pre-decision knowledge or counterfactual feedback, ensures network stability, and also matches the utility maximization performance of any ''mildly varying'' reference policy up to a polynomially decaying gap. To our knowledge, no previous algorithm can handle multi-hop networks or achieve utility maximization guarantees in ANO problems with bandit feedback, whereas ours is able to do both.&#13;
Technically, our method builds upon a novel integration of online learning techniques into the Lyapunov drift-plus-penalty method. Specifically, we propose meticulous analytical techniques to jointly balance online learning and Lyapunov arguments, which we use to handle the complex inter-dependency among queues in multi-hop networks. To tackle the learning obstacles due to potentially unbounded queue sizes and negative queue differences, we design a new online linear optimization algorithm that automatically adapts to the unknown (potentially negative) loss magnitudes. Finally, we also propose a bandit convex optimization algorithm with novel queue-dependent learning rate scheduling that suits the drastically varying queue lengths in utility maximization. Our new insights and techniques in online learning can also be of independent interest.&#13;
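The drift-plus-penalty method the authors build on can be summarized in its textbook form (standard notation, assumed rather than taken from this paper): each time slot, the scheduler greedily minimizes an upper bound on

```latex
% Delta(Theta(t)): one-slot conditional Lyapunov drift of queue state Theta(t)
% penalty(t):      negative of the utility earned in slot t
% V:               tunable parameter trading utility against queue backlog
\Delta(\Theta(t)) + V \cdot \mathrm{penalty}(t)
```

Bounding the drift term keeps queues stable while the penalty term steers toward utility maximization; the paper's contribution is doing this when the penalty is unknown in advance and only bandit feedback is observed.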
SIGMETRICS Abstracts ’25, Stony Brook, NY, USA
</summary>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents</title>
<link href="https://hdl.handle.net/1721.1/164784" rel="alternate"/>
<author>
<name>Mohanty, Shrestha</name>
</author>
<author>
<name>Arabzadeh, Negar</name>
</author>
<author>
<name>Tupini, Andrea</name>
</author>
<author>
<name>Sun, Yuxuan</name>
</author>
<author>
<name>Skrynnik, Alexey</name>
</author>
<author>
<name>Zholus, Artem</name>
</author>
<author>
<name>Côté, Marc-Alexandre</name>
</author>
<author>
<name>Kiseleva, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/164784</id>
<updated>2026-02-11T04:36:14Z</updated>
<published>2025-07-13T00:00:00Z</published>
<summary type="text">IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents
Mohanty, Shrestha; Arabzadeh, Negar; Tupini, Andrea; Sun, Yuxuan; Skrynnik, Alexey; Zholus, Artem; Côté, Marc-Alexandre; Kiseleva, Julia
Seamless interaction between AI agents and humans using natural language remains a key goal in AI research. This paper addresses the challenges of developing interactive agents capable of understanding and executing grounded natural language instructions through the IGLU competition. Despite advancements, challenges such as a scarcity of appropriate datasets and the need for effective evaluation platforms persist. We introduce a scalable data collection tool for gathering interactive grounded language instructions within a Minecraft-like environment, resulting in a multi-modal dataset with around 9,000 utterances and over 1,000 clarification questions. Additionally, we present a Human-in-the-Loop interactive evaluation platform for qualitative analysis and comparison of agent performance through multi-turn communication with human annotators. We offer these assets to the community as IDAT (IGLU Dataset And Toolkit), aiming to advance the development of intelligent, interactive AI agents and to provide essential resources for further research.
SIGIR ’25, Padua, Italy
</summary>
<dc:date>2025-07-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Theory to Estimate, Bound, and Manage Systemic Cyber-Risk</title>
<link href="https://hdl.handle.net/1721.1/164783" rel="alternate"/>
<author>
<name>Pal, Ranjan</name>
</author>
<author>
<name>Duan, Konnie</name>
</author>
<author>
<name>Sequeira, Rohan</name>
</author>
<id>https://hdl.handle.net/1721.1/164783</id>
<updated>2026-02-11T04:36:11Z</updated>
<published>2025-06-22T00:00:00Z</published>
<summary type="text">A Theory to Estimate, Bound, and Manage Systemic Cyber-Risk
Pal, Ranjan; Duan, Konnie; Sequeira, Rohan
The market for managing critical infrastructure cyber-risks using cyber insurance (CI) has been growing steadily (but not fast enough), as insurers remain skeptical about the extent of the economic and societal impact of systemic risk across networked supply chains in interdependent IT-driven enterprises. To address this skepticism, we first study in this paper the role of (a) the statistical nature of multiple enterprise cyber-risks contributing to aggregate supply chain risk and (b) the graph structure of the underlying enterprise supply chain network, in the statistical spread of aggregate cyber-risk. We provide statistical tail bounds on the aggregate cyber-risk that a risk-managing firm such as a cyber insurer is exposed to in a supply chain. Subsequently, we study the problem of aggregate cyber-risk management by cyber re-insurance firms via portfolio design to optimally diversify aggregate/systemic cyber-risk sourced from multiple CIs insuring enterprises on a supply chain. We propose the first mathematical framework for re-insurers to test the operational sustainability of systemic cyber-risk diversification portfolios with respect to the standard Value-at-Risk (VaR) metric for general aggregate cyber-risk distributions. We also propose a statistical copula methodology to make systemic cyber-risk portfolio diversification sustainable for re-insurers in scenarios where the sustainability test fails. We validate our theory via Monte Carlo simulations.
SIGSIM-PADS ’25, Santa Fe, NM, USA
</summary>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>14.41 Public Finance and Public Policy, Fall 2010</title>
<link href="https://hdl.handle.net/1721.1/164782" rel="alternate"/>
<author>
<name>Gruber, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164782</id>
<updated>2026-02-10T18:32:40Z</updated>
<published>2010-01-01T00:00:00Z</published>
<summary type="text">14.41 Public Finance and Public Policy, Fall 2010
Gruber, Jonathan
Explores the role of government in the economy, applying tools of basic microeconomics to answer important policy questions such as government response to global warming, school choice by K-12 students, Social Security versus private retirement savings accounts, government versus private health insurance, setting income tax rates for individuals and corporations.
</summary>
<dc:date>2010-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Incentive Allocation for City-Scale Deep Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/164781" rel="alternate"/>
<author>
<name>Sitaraman, Anupama</name>
</author>
<author>
<name>Lechowicz, Adam</name>
</author>
<author>
<name>Bashir, Noman</name>
</author>
<author>
<name>Liu, Xutong</name>
</author>
<author>
<name>Hajiesmaili, Mohammad</name>
</author>
<author>
<name>Shenoy, Prashant</name>
</author>
<id>https://hdl.handle.net/1721.1/164781</id>
<updated>2026-02-11T04:35:56Z</updated>
<published>2025-07-16T00:00:00Z</published>
<summary type="text">Dynamic Incentive Allocation for City-Scale Deep Decarbonization
Sitaraman, Anupama; Lechowicz, Adam; Bashir, Noman; Liu, Xutong; Hajiesmaili, Mohammad; Shenoy, Prashant
Greenhouse gas emissions from the residential sector represent a large fraction of global emissions and must be significantly curtailed to achieve ambitious climate goals. To stimulate the adoption of relevant technologies such as rooftop PV and heat pumps, governments and utilities have designed incentive programs to encourage their adoption. However, studies have shown that many of these incentives are inefficient, since a substantial fraction of spending does not actually promote adoption. Further, these incentives are not equitably distributed across socioeconomic groups. In this paper, we present a novel data-driven approach that adopts a holistic, emissions-based, and city-scale perspective on decarbonization.&#13;
We propose an optimization model that dynamically allocates a total incentive budget to households to directly maximize the resultant carbon emissions reduction -- this is in contrast to prior work, which focuses on metrics such as the number of new installations. We leverage techniques from the multi-armed bandits problem to estimate human factors, such as a household's willingness to adopt new technologies given a certain incentive. We apply our proposed dynamic incentive framework to a city in the Northeast U.S., using real household energy data, grid carbon intensity data, and future price scenarios. We compare our learning-based technique to two baselines: a "status-quo" baseline using incentives offered by a state and utility, and a simple heuristic baseline. Our learning-based technique significantly outperforms both, achieving up to 37.88% higher carbon reductions than the status-quo baseline and up to 28.76% higher carbon reductions than the heuristic baseline. Additionally, our incentive allocation approach achieves significant carbon reductions across a broad set of environments, with varying values for electricity and gas prices and for the carbon intensity of the grid. Finally, we show that our framework can accommodate equity-aware constraints to preserve an equitable allocation of incentives across socioeconomic groups while achieving 83.34% of the carbon reductions of the optimal solution on average.
</summary>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Economics of Large Language Models: Token Allocation, Fine-Tuning, and Optimal Pricing</title>
<link href="https://hdl.handle.net/1721.1/164780" rel="alternate"/>
<author>
<name>Bergemann, Dirk</name>
</author>
<author>
<name>Bonatti, Alessandro</name>
</author>
<author>
<name>Smolin, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/164780</id>
<updated>2026-02-11T04:36:05Z</updated>
<published>2025-07-02T00:00:00Z</published>
<summary type="text">The Economics of Large Language Models: Token Allocation, Fine-Tuning, and Optimal Pricing
Bergemann, Dirk; Bonatti, Alessandro; Smolin, Alex
We develop an economic framework to analyze the optimal pricing and product design of Large Language Models (LLM). Our framework captures several key features of LLMs: variable operational costs of processing input and output tokens; the ability to customize models through fine-tuning; and high-dimensional user heterogeneity in terms of task requirements and error sensitivity. In our model, a monopolistic seller offers multiple versions of LLMs through a menu of products. The optimal pricing structure depends on whether token allocation across tasks is contractible and whether users face scale constraints.&#13;
When it is possible to contract on the entire assignment of tokens to tasks, the seller's problem ("Token Allocations") is an infinite-dimensional screening problem, which is well-known to be difficult. We are nonetheless able to make progress in two important classes of environments: binary environments, and two-dimensional value-scale heterogeneity, in which case users with similar aggregate value-scale characteristics choose similar levels of fine-tuning and token consumption. When only the total number of tokens is contractible ("Token Packages"), we leverage the tractability of a constant elasticity of substitution framework to drastically simplify the problem: the buyer's type, a function mapping each task to a value of precision, is an index. This index for the value of precision allows the seller to solve a one-dimensional screening problem. The optimal mechanism can be implemented through menus of two-part tariffs, with higher markups for more intensive users. Our results rationalize observed industry practices such as tiered pricing based on model customization and usage levels.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</summary>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alternates, Assemble! Selecting Optimal Alternates for Citizens’ Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164779" rel="alternate"/>
<author>
<name>Assos, Angelos</name>
</author>
<author>
<name>Baharav, Carmel</name>
</author>
<author>
<name>Flanigan, Bailey</name>
</author>
<author>
<name>Procaccia, Ariel</name>
</author>
<id>https://hdl.handle.net/1721.1/164779</id>
<updated>2026-02-11T04:36:09Z</updated>
<published>2025-07-02T00:00:00Z</published>
<summary type="text">Alternates, Assemble! Selecting Optimal Alternates for Citizens’ Assemblies
Assos, Angelos; Baharav, Carmel; Flanigan, Bailey; Procaccia, Ariel
Citizens' assemblies are an increasingly influential form of deliberative democracy, where randomly selected people discuss policy questions. The legitimacy of these assemblies hinges on their representation of the broader population, but participant dropout often leads to an unbalanced composition. In practice, dropouts are replaced by preselected alternates, but existing methods do not address how to choose these alternates. To address this gap, we introduce an optimization framework for alternate selection. Our algorithmic approach, which leverages learning-theoretic machinery, estimates dropout probabilities using historical data and selects alternates to minimize expected misrepresentation. Our theoretical bounds provide guarantees on sample complexity (with implications for computational efficiency) and on loss due to dropout probability mis-estimation. Empirical evaluation using real-world data demonstrates that, compared to the status quo, our method significantly improves representation while requiring fewer alternates.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</summary>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Does Firm Size Influence the Collection of Sensitive Data?: A Study of Child-Orientated Apps</title>
<link href="https://hdl.handle.net/1721.1/164778" rel="alternate"/>
<author>
<name>Cecere, Grazia</name>
</author>
<author>
<name>Tucker, Catherine</name>
</author>
<author>
<name>Lefrere, Vincent</name>
</author>
<id>https://hdl.handle.net/1721.1/164778</id>
<updated>2026-02-11T04:36:07Z</updated>
<published>2025-07-02T00:00:00Z</published>
<summary type="text">Does Firm Size Influence the Collection of Sensitive Data?: A Study of Child-Orientated Apps
Cecere, Grazia; Tucker, Catherine; Lefrere, Vincent
How does firm size affect the privacy protections offered to customers? On the one hand, it could be that larger firms use their size to amass more data. On the other hand, smaller firms may be less careful in their data protection practices, because they have a different perception of risk. Using data from the Google Play Store over a three-year period, we explore this empirical question in the U.S. children's app market. Our findings indicate that larger app developers consistently implement stronger privacy protections, requesting less sensitive data compared to smaller developers. These results hold across empirical approaches, including instrumental variables and propensity-score matching. Additionally, our analysis shows that mergers between developers and sudden increases in the size of a product's user base are associated with reduced data collection. We show that newly created and updated apps produced by large developers collect less data compared to existing apps. Our findings indicate a trend toward standardized privacy practices across different national regulatory regimes. This research highlights the potential for growth-driven improvements in data privacy practices among app developers, regardless of their regulatory context.
EC ’25, July 7–10, 2025, Stanford, CA, USA
</summary>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inertial Coordination Games</title>
<link href="https://hdl.handle.net/1721.1/164777" rel="alternate"/>
<author>
<name>Koh, Andrew</name>
</author>
<author>
<name>Li, Ricky</name>
</author>
<author>
<name>Uzui, Kei</name>
</author>
<id>https://hdl.handle.net/1721.1/164777</id>
<updated>2026-02-11T04:36:06Z</updated>
<published>2025-07-02T00:00:00Z</published>
<summary type="text">Inertial Coordination Games
Koh, Andrew; Li, Ricky; Uzui, Kei
Coordination lies at the heart of many economic phenomena. A well-known example is currency crises, in which traders decide whether to launch a speculative attack. On one hand, shocks to the currency's fundamentals can propagate: as more traders attack, the central bank's foreign reserves are depleted, which in turn encourages further attacks as traders seek to exploit a weakening currency. On the other hand, shocks can also fizzle out: traders may quickly learn that the central bank's balance sheet is strong, causing pessimism to dissipate and attacks to subside. When do shocks propagate, and when do they fizzle out? In particular, how do these outcomes depend on the speed at which traders learn about the fundamental?&#13;
Motivated by these questions, we propose a model of inertial coordination games—dynamic coordination games where players repeatedly decide whether to take a risky action. The payoff from this risky action depends on (i) a persistent fundamental; and (ii) an endogenous component that depends on others' past play. Players receive private signals about the persistent fundamental over time and form beliefs about the current state. Notably, the current state depends on past play, which in turn depends on past beliefs about play yet farther back into the past. Thus, expectations about histories shape behavior in the present which, in turn, drives the evolution of future states and future play.&#13;
Our main result develops a tight connection between the speed of learning and limit aggregate play: the risk-dominant action is played in the limit if and only if posterior precisions grow sub-quadratically. This has sharp implications for the long-run propagation of shocks. With slow (sub-quadratic) learning, limit play exhibits history independence: initial shocks have no lasting effect, and limit play is determined solely by fundamentals. By contrast, with fast (super-quadratic) learning, limit play is history dependent: initial shocks can be self-fulfilling, and whether they propagate depends jointly on fundamentals, the size of the shock, and the speed of learning. Our results offer a novel perspective on whether 'history' or 'expectations' shape long-run coordination outcomes: in our model, expectations about histories are what matter for whether self-fulfilling spirals occur.&#13;
Finally, we show that the speed of learning also shapes the path of play, focusing on the case of sub-quadratic learning. When signals are precise, aggregate play exhibits a sudden transition from nearly all players choosing the non-risk-dominant action to nearly all players choosing the risk-dominant action. In contrast, when signals are noisy, the transition is gradual, with the share of players choosing the risk-dominant action increasing gradually over time. This suggests that "spikes" in aggregate behavior (such as a sudden and massive sell-off of a currency) can be consistent with transition to limit equilibrium play, and need not indicate an "equilibrium shift."
EC ’25, July 7–10, 2025, Stanford, CA, USA
</summary>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Galley: Modern Query Optimization for Sparse Tensor Programs</title>
<link href="https://hdl.handle.net/1721.1/164776" rel="alternate"/>
<author>
<name>Deeds, Kyle</name>
</author>
<author>
<name>Ahrens, Willow</name>
</author>
<author>
<name>Balazinska, Magdalena</name>
</author>
<author>
<name>Suciu, Dan</name>
</author>
<id>https://hdl.handle.net/1721.1/164776</id>
<updated>2026-02-11T04:36:13Z</updated>
<published>2025-06-18T00:00:00Z</published>
<summary type="text">Galley: Modern Query Optimization for Sparse Tensor Programs
Deeds, Kyle; Ahrens, Willow; Balazinska, Magdalena; Suciu, Dan
The tensor programming abstraction is a foundational paradigm which allows users to write high performance programs via a high-level imperative interface. Recent work on sparse tensor compilers has extended this paradigm to sparse tensors (i.e., tensors where most entries are not explicitly represented). With these systems, users define the semantics of the program and the algorithmic decisions in a concise language that can be compiled to efficient low-level code. However, these systems still require users to make complex decisions about program structure and memory layouts to write efficient programs.&#13;
This work presents Galley, a system for declarative tensor programming that allows users to write efficient tensor programs without making complex algorithmic decisions. Galley is the first system to perform cost-based lowering of sparse tensor algebra to the imperative language of sparse tensor compilers, and the first to optimize arbitrary operators beyond Σ and *. First, it decomposes the input program into a sequence of aggregation steps through a novel extension of the FAQ framework. Second, Galley optimizes and converts each aggregation step to a concrete program, which is compiled and executed with a sparse tensor compiler. We show that Galley produces programs that are 1-300x faster than competing methods for machine learning over joins and 5-20x faster than a state-of-the-art relational database for subgraph counting workloads, with minimal optimization overhead.
</summary>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Virtualizing Cloud Data Infrastructures with BRAD</title>
<link href="https://hdl.handle.net/1721.1/164775" rel="alternate"/>
<author>
<name>Yu, Geoffrey</name>
</author>
<author>
<name>Wu, Ziniu</name>
</author>
<author>
<name>Kossmann, Ferdi</name>
</author>
<author>
<name>Li, Tianyu</name>
</author>
<author>
<name>Markakis, Markos</name>
</author>
<author>
<name>Ngom, Amadou</name>
</author>
<author>
<name>Zhang, Sophie</name>
</author>
<author>
<name>Kraska, Tim</name>
</author>
<author>
<name>Madden, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/164775</id>
<updated>2026-02-11T04:36:16Z</updated>
<published>2025-06-22T00:00:00Z</published>
<summary type="text">Virtualizing Cloud Data Infrastructures with BRAD
Yu, Geoffrey; Wu, Ziniu; Kossmann, Ferdi; Li, Tianyu; Markakis, Markos; Ngom, Amadou; Zhang, Sophie; Kraska, Tim; Madden, Samuel
Organizations usually manage their data using multiple specialized cloud database engines (e.g., Aurora and BigQuery). However, designing and managing multi-engine infrastructures is hard; there can be many designs, each with different performance and costs. Changing the design afterwards (e.g., due to growth) is even more challenging, since application code usually ends up tightly coupled to the engines. We propose data infrastructure virtualization. The key idea is to declare a set of virtual database engines (VDBEs), which specify an engine's application-facing properties (e.g., query interface, performance) and its tables, but do not prescribe a concrete engine. An automated planner then decides how to best realize the VDBEs onto physical engines based on the workload. Clients connect to VDBE endpoints and are oblivious to the underlying physical engines, allowing for seamless infrastructure changes. We implemented VDBEs and an automated planner in BRAD: the first data infrastructure virtualization runtime. Our demo will showcase VDBEs and BRAD's automated planner under different workloads.
SIGMOD-Companion ’25, Berlin, Germany
</summary>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Efficient Subcritical Multiplication Mode for Monte Carlo Solvers</title>
<link href="https://hdl.handle.net/1721.1/164774" rel="alternate"/>
<author>
<name>Forget, Benoit</name>
</author>
<id>https://hdl.handle.net/1721.1/164774</id>
<updated>2026-02-11T04:36:46Z</updated>
<published>2025-10-20T00:00:00Z</published>
<summary type="text">An Efficient Subcritical Multiplication Mode for Monte Carlo Solvers
Forget, Benoit
This paper presents an efficient Monte Carlo mode for simulating subcritical systems with external sources. While solving these systems as a fixed source is possible, the length of the histories grows significantly as the system nears criticality, making run times significant. Instead, a hybrid method is proposed that leverages the traditional eigensolver while including elements of the external source. The method builds on prior work, but proposes an approach that maintains the size of the source bank and also provides a natural way of scaling tallies with the true multiplication factor. The method is demonstrated on a subcritical sphere with varying point source position and energy spectrum, as well as an approach-to-criticality problem. The results demonstrate good agreement with the fixed-source mode, with much improved particle tracking rates for near-critical problems.
</summary>
<dc:date>2025-10-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Afterword: Reflections from Afar, with Hope for our Collective Future</title>
<link href="https://hdl.handle.net/1721.1/164773" rel="alternate"/>
<author>
<name>Henderson, Diana E</name>
</author>
<id>https://hdl.handle.net/1721.1/164773</id>
<updated>2026-02-11T04:36:44Z</updated>
<published>2025-10-02T00:00:00Z</published>
<summary type="text">Afterword: Reflections from Afar, with Hope for our Collective Future
Henderson, Diana E
Appearing in the wake of a decade of rapid growth in Indian screen Shakespeare as an academic subspeciality, ‘Adapting Shakespearean Romance in Indian Cinema’ reveals how cross-cultural comparison and attention to popular reception can profitably modify inherited critical assumptions for all Shakespeare’s readership. Taking ‘romance’ as a key term, this afterword considers the possibilities and potential problems of recasting its dominant meaning as thematic, focusing on modern love, rather than as a dramatic subgenre. In a time of increasing political censorship and existential threats to gender studies, greater engagement and exchange between those in other areas of Shakespeare studies and this rich cinematic corpus, with its aligned subfield of cross-disciplinary criticism, provide reasons for hope and renewed community.
</summary>
<dc:date>2025-10-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resolving the Contested Future of the GSEs: The Stakes Are High</title>
<link href="https://hdl.handle.net/1721.1/164772" rel="alternate"/>
<author>
<name>Golding, Edward</name>
</author>
<author>
<name>Wachter, Susan</name>
</author>
<id>https://hdl.handle.net/1721.1/164772</id>
<updated>2026-02-11T04:36:37Z</updated>
<published>2025-11-17T00:00:00Z</published>
<summary type="text">Resolving the Contested Future of the GSEs: The Stakes Are High
Golding, Edward; Wachter, Susan
Seventeen years after entering conservatorship, Fannie Mae and Freddie Mac remain central to the future of U.S. housing finance. This paper evaluates the feasibility of their exit from conservatorship without Congressional action, assessing repayment of the federal bailout, capital adequacy under current regulatory frameworks, and the durability of structural reforms. It puts forth a utility model that preserves liquidity, affordability, and mission alignment while mitigating risks of increased mortgage costs. Treasury mechanisms—including commitment fees and stock warrant monetization—are examined as tools to support affordable housing and fulfill charter mandates. A carefully structured exit, supported by robust oversight and capital standards, can balance adequate financial returns with public purpose. A regulatory framework that maintains stable lending standards and pricing over the business cycle is essential to reducing investor-required returns and enhancing affordability, thereby resolving the contested future of the GSEs.
</summary>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stabilizing far-from-equilibrium (Mo,Ti)S2 thin films by metal sulfurization at reduced temperature</title>
<link href="https://hdl.handle.net/1721.1/164771" rel="alternate"/>
<author>
<name>Li, Yifei</name>
</author>
<author>
<name>Reidy, Kate</name>
</author>
<author>
<name>Penn, Aubrey</name>
</author>
<author>
<name>Lee, Seng Huat</name>
</author>
<author>
<name>Wang, Baoming</name>
</author>
<author>
<name>Ye, Kevin</name>
</author>
<author>
<name>Mao, Zhiqiang</name>
</author>
<author>
<name>Ross, Frances M</name>
</author>
<author>
<name>Jaramillo, R</name>
</author>
<id>https://hdl.handle.net/1721.1/164771</id>
<updated>2026-02-11T04:36:42Z</updated>
<published>2023-02-16T00:00:00Z</published>
<summary type="text">Stabilizing far-from-equilibrium (Mo,Ti)S2 thin films by metal sulfurization at reduced temperature
Li, Yifei; Reidy, Kate; Penn, Aubrey; Lee, Seng Huat; Wang, Baoming; Ye, Kevin; Mao, Zhiqiang; Ross, Frances M; Jaramillo, R
We report the synthesis of large-area, high-Ti-content, Mo1−xTixS2 alloy thin films in the 2H phase at temperatures as low as 500 °C using a scalable two-step method of metal film deposition, followed by sulfurization in H2S. Film processing at higher temperature accelerates Ti segregation, film coarsening, and the formation of TiS2 in the 1T phase. Crystal growth at higher temperature results in the formation of multiple binary sulfide phases, in agreement with the equilibrium phase diagram. Making highly metastable, smooth, and uniform single-phase alloy films, therefore, hinges on developing low-temperature processing. Our results are relevant to the development of technologies based on designer transition metal dichalcogenide alloys, including in photonic integrated circuits and gas sensing.
</summary>
<dc:date>2023-02-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental study of lower hybrid wave power absorption on EAST</title>
<link href="https://hdl.handle.net/1721.1/164770" rel="alternate"/>
<author>
<name>Baek, S-G</name>
</author>
<author>
<name>Li, MH</name>
</author>
<author>
<name>Bonoli, PT</name>
</author>
<author>
<name>Ding, BJ</name>
</author>
<author>
<name>Wallace, GM</name>
</author>
<author>
<name>Chen, JL</name>
</author>
<author>
<name>Duan, YM</name>
</author>
<author>
<name>Gong, XZ</name>
</author>
<author>
<name>Qian, JP</name>
</author>
<author>
<name>Wang, L</name>
</author>
<author>
<name>Yang, H</name>
</author>
<author>
<name>Zang, Q</name>
</author>
<author>
<name>Zhang, JY</name>
</author>
<author>
<name>Zhang, XJ</name>
</author>
<id>https://hdl.handle.net/1721.1/164770</id>
<updated>2026-02-11T04:36:40Z</updated>
<published>2023-08-18T00:00:00Z</published>
<summary type="text">Experimental study of lower hybrid wave power absorption on EAST
Baek, S-G; Li, MH; Bonoli, PT; Ding, BJ; Wallace, GM; Chen, JL; Duan, YM; Gong, XZ; Qian, JP; Wang, L; Yang, H; Zang, Q; Zhang, JY; Zhang, XJ
Lower hybrid power absorption analysis is presented for EAST high-density plasmas using the power modulation technique. The change in the plasma and magnetic energy is monitored to evaluate the power absorption coefficient by linearizing the change over the first 10 ms for the given input power. The evaluated power absorption coefficient is approximately 0.44 (0.35) for 4.6 GHz (2.45 GHz) at n̄e = 3.5×10¹⁹ m⁻³. GENRAY/CQL3D current drive modeling suggests that a combination of antenna spectrum, accessibility, and edge losses could primarily be responsible for the observed level of power absorption. Evidence of parasitic first-pass LH power flow causing impurity sputtering is also reported, suggesting a need for optimum power coupling. Implications of the experimental findings are discussed.
</summary>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limiting role of dislocations in high-current AlGaN/GaN hot electron transistors</title>
<link href="https://hdl.handle.net/1721.1/164769" rel="alternate"/>
<author>
<name>Daulton, J. W.</name>
</author>
<author>
<name>Molnar, R. J.</name>
</author>
<author>
<name>Brinkerhoff, J. A.</name>
</author>
<author>
<name>Weir, T. J.</name>
</author>
<author>
<name>Hollis, M. A.</name>
</author>
<author>
<name>Zaslavsky, A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164769</id>
<updated>2026-02-11T04:36:38Z</updated>
<published>2024-02-06T00:00:00Z</published>
<summary type="text">Limiting role of dislocations in high-current AlGaN/GaN hot electron transistors
Daulton, J. W.; Molnar, R. J.; Brinkerhoff, J. A.; Weir, T. J.; Hollis, M. A.; Zaslavsky, A.
III-nitride-based hot electron transistors (HETs) hold significant promise as high-speed, high-power devices. In our previous work, we demonstrated high current density and common-emitter gain at room temperature. Here, we measure multiple devices at cryogenic temperatures, extending the Gummel characteristics past the onset of intervalley scattering at 77 K. We demonstrate a Gummel current gain of 4.7 at a collector current density of 2.6 MA/cm2 at 77 K as well as a peak current density exceeding 3 MA/cm2. From these data, we determine that dislocation-associated inhomogeneities play a limiting role in AlGaN/GaN HETs, controlling the current gain, density, knee voltage, and base-collector leakage. A comparison of two nominally identical devices suggests that even a modest reduction in dislocation density would result in a substantial improvement in HET performance.
</summary>
<dc:date>2024-02-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>SeerCuts: Explainable Attribute Discretization</title>
<link href="https://hdl.handle.net/1721.1/164768" rel="alternate"/>
<author>
<name>Lai, Eugenie</name>
</author>
<author>
<name>Croitoru, Inbal</name>
</author>
<author>
<name>Bitton, Noam</name>
</author>
<author>
<name>Shalem, Ariel</name>
</author>
<author>
<name>Youngmann, Brit</name>
</author>
<author>
<name>Galhotra, Sainyam</name>
</author>
<author>
<name>Rezig, El Kindi</name>
</author>
<author>
<name>Cafarella, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164768</id>
<updated>2026-02-10T03:07:40Z</updated>
<published>2025-06-22T00:00:00Z</published>
<summary type="text">SeerCuts: Explainable Attribute Discretization
Lai, Eugenie; Croitoru, Inbal; Bitton, Noam; Shalem, Ariel; Youngmann, Brit; Galhotra, Sainyam; Rezig, El Kindi; Cafarella, Michael
This demonstration showcases SeerCuts, a tool that suggests useful and semantically meaningful discretization strategies (partitions) for numerical attributes. SeerCuts is a generic, interactive framework where users specify attributes to discretize and their utility measure for a downstream task of choice. It uses GPT-4o to assess the semantic meaningfulness of candidate partitions and employs an efficient search strategy to explore the vast space of discretization options. With hierarchical clustering to group related partitions and a multi-armed bandit policy to identify useful partitions with only a few samples, SeerCuts quickly finds meaningful and useful partitions. In the demo, we will provide an overview of SeerCuts and allow the audience to explore various datasets and tasks, including data visualization and comprehensive modeling. The users will be able to evaluate how SeerCuts identifies meaningful discretization strategies and compare the tradeoff between different discretization options.
SIGMOD-Companion ’25, Berlin, Germany
</summary>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>PalimpChat: Declarative and Interactive AI analytics</title>
<link href="https://hdl.handle.net/1721.1/164767" rel="alternate"/>
<author>
<name>Liu, Chunwei</name>
</author>
<author>
<name>Vitagliano, Gerardo</name>
</author>
<author>
<name>Rose, Brandon</name>
</author>
<author>
<name>Printz, Matthew</name>
</author>
<author>
<name>Samson, David Andrew</name>
</author>
<author>
<name>Cafarella, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164767</id>
<updated>2026-02-10T03:07:41Z</updated>
<published>2025-06-22T00:00:00Z</published>
<summary type="text">PalimpChat: Declarative and Interactive AI analytics
Liu, Chunwei; Vitagliano, Gerardo; Rose, Brandon; Printz, Matthew; Samson, David Andrew; Cafarella, Michael
Thanks to the advances in generative architectures and large language models, data scientists can now code pipelines of AI operations to process large collections of unstructured data. Recent progress has seen the rise of declarative AI frameworks (e.g., Palimpzest, Lotus, and DocETL) to build optimized and increasingly complex pipelines, but these systems often remain accessible only to expert programmers. In this demonstration, we present PalimpChat, a chat-based interface to Palimpzest that bridges this gap by letting users create and run sophisticated AI pipelines through natural language alone. By integrating Archytas, a ReAct-based reasoning agent, and Palimpzest's suite of relational and LLM-based operators, PalimpChat provides a practical illustration of how a chat interface can make declarative AI frameworks truly accessible to non-experts.&#13;
Our demo system is publicly available online. At SIGMOD'25, participants can explore three real-world scenarios-scientific discovery, legal discovery, and real estate search-or apply PalimpChat to their own datasets. In this paper, we focus on how PalimpChat, supported by the Palimpzest optimizer, simplifies complex AI workflows such as extracting and analyzing biomedical data.
SIGMOD-Companion ’25, Berlin, Germany
</summary>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>CauSumX: Summarized Causal Explanations For Group-By-Average Queries</title>
<link href="https://hdl.handle.net/1721.1/164766" rel="alternate"/>
<author>
<name>Levy, Nativ</name>
</author>
<author>
<name>Cafarella, Michael</name>
</author>
<author>
<name>Gilad, Amir</name>
</author>
<author>
<name>Roy, Sudeepa</name>
</author>
<author>
<name>Youngmann, Brit</name>
</author>
<id>https://hdl.handle.net/1721.1/164766</id>
<updated>2026-02-10T03:07:39Z</updated>
<published>2025-06-22T00:00:00Z</published>
<summary type="text">CauSumX: Summarized Causal Explanations For Group-By-Average Queries
Levy, Nativ; Cafarella, Michael; Gilad, Amir; Roy, Sudeepa; Youngmann, Brit
Group-by-average SQL queries are a cornerstone of data analysis, often employed to uncover patterns and trends within datasets. However, interpreting the results of these queries can be challenging and time-intensive, particularly when working with large, high-dimensional datasets. Automating the generation of explanations for such queries can greatly enhance analysts' ability to derive meaningful insights while reducing human effort. Effective explanations must balance succinctness and depth, offering insights into different patterns across aggregate results, while crucially reflecting cause-effect relationships rather than mere correlations. This ensures that users can make informed, data-driven decisions grounded in reality. In this demonstration, we present CauSumX, a system that produces concise and causal explanations for group-by-average queries. Leveraging background causal knowledge, CauSumX identifies the key causal factors driving variations in the outcome variable across different groups. The system employs an efficient algorithm based on a recently published paper. We will demonstrate the utility of CauSumX for generating useful summarized causal explanations by interacting with the SIGMOD'25 participants, who will act as data analysts aiming to explain their query results.
SIGMOD-Companion ’25, Berlin, Germany
</summary>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>CausaLens: A System for Summarizing Causal DAGs</title>
<link href="https://hdl.handle.net/1721.1/164765" rel="alternate"/>
<author>
<name>Chen, Noam</name>
</author>
<author>
<name>Zeng, Anna</name>
</author>
<author>
<name>Cafarella, Michael</name>
</author>
<author>
<name>Kenig, Batya</name>
</author>
<author>
<name>Markakis, Markos</name>
</author>
<author>
<name>Mishali, Oren</name>
</author>
<author>
<name>Youngmann, Brit</name>
</author>
<author>
<name>Salimi, Babak</name>
</author>
<id>https://hdl.handle.net/1721.1/164765</id>
<updated>2026-02-10T03:07:04Z</updated>
<published>2025-06-22T00:00:00Z</published>
<summary type="text">CausaLens: A System for Summarizing Causal DAGs
Chen, Noam; Zeng, Anna; Cafarella, Michael; Kenig, Batya; Markakis, Markos; Mishali, Oren; Youngmann, Brit; Salimi, Babak
Causal inference aids researchers in discovering causal relationships, leading to scientific insights. Pearl's causal model uses causal DAGs to estimate causal effects, so DAG correctness is essential for reliable causal conclusions. However, for high-dimensional data, the causal DAGs are often complex beyond human verifiability. Graph summarization is a logical next step, but current methods for general-purpose graph summarization are inadequate for causal DAG summarization, as they are not designed to preserve causal information. In this demonstration, we present a system called CausaLens that summarizes a given causal DAG, balancing graph simplification for better understanding against retention of essential causal information for reliable inference directly on the summary DAG. We illustrate that causal inference on the summary DAG is more robust to misspecification in the initial causal DAG compared to performing inference directly on the initial causal DAG, thereby enhancing the robustness of causal inference. We will demonstrate the utility of CausaLens for generating useful summary causal DAGs by interacting with the SIGMOD'25 participants, who will act as data analysts aiming to perform causal analysis on high-dimensional datasets.
SIGMOD-Companion ’25, Berlin, Germany
</summary>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>First Workshop on Novel Optimizations for Visionary AI Systems (NOVAS)</title>
<link href="https://hdl.handle.net/1721.1/164764" rel="alternate"/>
<author>
<name>Vitagliano, Gerardo</name>
</author>
<author>
<name>Liu, Chunwei</name>
</author>
<author>
<name>Cao, Lei</name>
</author>
<author>
<name>Sun, Huan</name>
</author>
<author>
<name>Papotti, Paolo</name>
</author>
<id>https://hdl.handle.net/1721.1/164764</id>
<updated>2026-02-10T03:06:46Z</updated>
<published>2025-06-22T00:00:00Z</published>
<summary type="text">First Workshop on Novel Optimizations for Visionary AI Systems (NOVAS)
Vitagliano, Gerardo; Liu, Chunwei; Cao, Lei; Sun, Huan; Papotti, Paolo
The first NOVAS workshop (Novel Optimizations for Visionary AI Systems) is aimed at hosting novel work at the intersection between artificial intelligence and data management. This area has emerged with the rise of transformer-based architectures, which have revolutionized data processing across modalities. While these models benefit from massive pre-training and large-context inference, there are significant challenges related to scalability, determinism, and resource constraints. These issues-long studied in the data management community-have sparked a convergence between generative AI and traditional database research.&#13;
The workshop will be held on June 22nd, in conjunction with SIGMOD/PODS 2025. The workshop solicits regular and short papers on topics including hardware and execution optimizations, high-level programming abstractions, integration of LLMs with relational databases, and new transformer architectures for structured data. By bringing together the different communities of machine learning, data systems, and information retrieval, NOVAS aims to become the venue to discuss and share ideas and early results, and to spark new research collaborations for the next generation of data-driven AI systems.
SIGMOD-Companion ’25, Berlin, Germany
</summary>
<dc:date>2025-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Efficient Discovery of Hyperelastic TPMS Metamaterials with Extreme Energy Dissipation</title>
<link href="https://hdl.handle.net/1721.1/164763" rel="alternate"/>
<author>
<name>Perroni-Scharf, Maxine</name>
</author>
<author>
<name>Ferguson, Zachary</name>
</author>
<author>
<name>Butruille, Thomas</name>
</author>
<author>
<name>Portela, Carlos</name>
</author>
<author>
<name>Konaković Luković, Mina</name>
</author>
<id>https://hdl.handle.net/1721.1/164763</id>
<updated>2026-02-10T03:07:38Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Data-Efficient Discovery of Hyperelastic TPMS Metamaterials with Extreme Energy Dissipation
Perroni-Scharf, Maxine; Ferguson, Zachary; Butruille, Thomas; Portela, Carlos; Konaković Luković, Mina
Triply periodic minimal surfaces (TPMS) are a class of metamaterials with a variety of applications and well-known primitive morphologies. We present a new method for discovering novel microscale TPMS structures with exceptional energy-dissipation capabilities, achieving double the energy absorption of the best existing TPMS primitive structure. Our approach employs a parametric representation, allowing seamless interpolation between structures and representing a rich TPMS design space. As simulations are intractable for efficiently optimizing microscale hyperelastic structures, we propose a sample-efficient computational strategy for rapid discovery with limited empirical data from 3D-printed and tested samples that ensures high-fidelity results. We achieve this by leveraging a predictive uncertainty-aware Deep Ensembles model to identify which structures to fabricate and test next. We iteratively refine our model through batch Bayesian optimization, selecting structures for fabrication that maximize exploration of the performance space and exploitation of our energy-dissipation objective. Using our method, we produce the first open-source dataset of hyperelastic microscale TPMS structures, including a set of novel structures that demonstrate extreme energy dissipation capabilities, and show several potential applications of these structures.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Splat and Replace: 3D Reconstruction with Repetitive Elements</title>
<link href="https://hdl.handle.net/1721.1/164762" rel="alternate"/>
<author>
<name>Violante, Nicolas</name>
</author>
<author>
<name>Meuleman, Andréas</name>
</author>
<author>
<name>Gauthier, Alban</name>
</author>
<author>
<name>Durand, Fredo</name>
</author>
<author>
<name>Groueix, Thibault</name>
</author>
<author>
<name>Drettakis, George</name>
</author>
<id>https://hdl.handle.net/1721.1/164762</id>
<updated>2026-02-10T03:07:03Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Splat and Replace: 3D Reconstruction with Repetitive Elements
Violante, Nicolas; Meuleman, Andréas; Gauthier, Alban; Durand, Fredo; Groueix, Thibault; Drettakis, George
We leverage repetitive elements in 3D scenes to improve novel view synthesis. Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have greatly improved novel view synthesis but renderings of unseen and occluded parts remain low-quality if the training views are not exhaustive enough. Our key observation is that our environment is often full of repetitive elements. We propose to leverage those repetitions to improve the reconstruction of low-quality parts of the scene due to poor coverage and occlusions. We propose a method that segments each repeated instance in a 3DGS reconstruction, registers them together, and allows information to be shared among instances. Our method improves the geometry while also accounting for appearance variations across instances. We demonstrate our method on a variety of synthetic and real scenes with typical repetitive elements, leading to a substantial improvement in the quality of novel view synthesis.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Variational Elastodynamic Simulation</title>
<link href="https://hdl.handle.net/1721.1/164761" rel="alternate"/>
<author>
<name>Mattos Da Silva, Leticia</name>
</author>
<author>
<name>Sellán, Silvia</name>
</author>
<author>
<name>Pacheco-Tallaj, Natalia</name>
</author>
<author>
<name>Solomon, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/164761</id>
<updated>2026-02-10T03:07:26Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Variational Elastodynamic Simulation
Mattos Da Silva, Leticia; Sellán, Silvia; Pacheco-Tallaj, Natalia; Solomon, Justin
Numerical schemes for time integration are the cornerstone of dynamical simulations for deformable solids. The most popular time integrators for isotropic distortion energies rely on nonlinear root-finding solvers, most prominently, Newton’s method. These solvers are computationally expensive and sensitive to ill-conditioned Hessians and poor initial guesses; these difficulties can particularly hamper the effectiveness of variational integrators, whose momentum conservation properties require reliable root-finding. To tackle these difficulties, this paper shows how to express variational time integration for a large class of elastic energies as an optimization problem with a “hidden” convex substructure. This hidden convexity suggests uses of optimization techniques with rigorous convergence analysis, guaranteed inversion-free elements, and conservation of physical invariants up to tolerance/numerical precision. In particular, we propose an Alternating Direction Method of Multipliers (ADMM) algorithm combined with a proximal operator step to solve our formulation. Empirically, our integrator improves the performance of elastic simulation tasks, as we demonstrate in a number of examples.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strain-Tunable Thermal Conductivity in Largely Amorphous Polyolefin Fibers via Alignment-Induced Vibrational Delocalization</title>
<link href="https://hdl.handle.net/1721.1/164760" rel="alternate"/>
<author>
<name/>
</author>
<id>https://hdl.handle.net/1721.1/164760</id>
<updated>2026-02-10T03:08:05Z</updated>
<published>2026-02-09T00:00:00Z</published>
<summary type="text">Strain-Tunable Thermal Conductivity in Largely Amorphous Polyolefin Fibers via Alignment-Induced Vibrational Delocalization
Developing fast, reversible, and recyclable thermal switches is essential to advance adaptive thermal management. Here, we present a strain-tunable thermal switch based on largely amorphous olefin block copolymer (OBC) fibers, achieving a continuous switching ratio above 2 over 1000 cycles, as well as very short response times below 0.22 s. Using Raman spectroscopy, we quantify vibrational delocalization with increasing strain and demonstrate its direct connection to the observed thermal conductivity changes. We show that, unlike prior assumptions linking propagating heat carriers primarily to crystalline domains, alignment in amorphous systems can enable phonon-like modes that dominate transport. To the best of our knowledge, this work is the first to experimentally probe vibrational delocalization using Raman spectroscopy and to demonstrate that alignment alone can govern the dominant carrier in disordered polymers. These findings establish design strategies for fatigue-resistant, high-performance, and recyclable polymer thermal switches for advanced thermal energy transport applications.
</summary>
<dc:date>2026-02-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Introducing synchromodality: One missing link between transportation and supply chain management</title>
<link href="https://hdl.handle.net/1721.1/164759" rel="alternate"/>
<author>
<name>Acero, Beatriz</name>
</author>
<author>
<name>Saenz, Maria Jesus</name>
</author>
<author>
<name>Luzzini, Davide</name>
</author>
<id>https://hdl.handle.net/1721.1/164759</id>
<updated>2026-02-10T03:07:55Z</updated>
<published>2021-05-24T00:00:00Z</published>
<summary type="text">Introducing synchromodality: One missing link between transportation and supply chain management
Acero, Beatriz; Saenz, Maria Jesus; Luzzini, Davide
This study develops and tests the synchromodality construct, a novel supply chain concept that integrates the flexible use of different transport modes based on real-time information. At a time when global supply chains are complex and subject to uncertainty, synchromodality has emerged at the forefront of research and practice as a tool to ensure efficient delivery performance and thus supply chain competitiveness. Although synchromodality is attracting the attention of leading companies and policy makers, only scholars within the transport research community have engaged with the topic so far. We believe a supply chain management perspective is missing, but essential, to develop the full potential of synchromodality. Our study shows that synchromodality capabilities encapsulate four key elements: visibility, integration, multi-modal transport, and flexibility. Thanks to a three-stage research approach exploiting multiple methods, this study conceptualizes, develops, and validates the first synchromodality measurement model, which reflects the multidimensional nature of the concept. We hope to set the stage for a number of potential future research opportunities that can explore synchromodality implementation and outcomes.
</summary>
<dc:date>2021-05-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shape Space Spectra</title>
<link href="https://hdl.handle.net/1721.1/164758" rel="alternate"/>
<author>
<name>Chang, Yue</name>
</author>
<author>
<name>Benchekroun, Otman</name>
</author>
<author>
<name>Chiaramonte, Maurizio M.</name>
</author>
<author>
<name>Chen, Peter Yichen</name>
</author>
<author>
<name>Grinspun, Eitan</name>
</author>
<id>https://hdl.handle.net/1721.1/164758</id>
<updated>2026-02-10T03:07:07Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Shape Space Spectra
Chang, Yue; Benchekroun, Otman; Chiaramonte, Maurizio M.; Chen, Peter Yichen; Grinspun, Eitan
Eigenanalysis of differential operators, such as the Laplace operator or elastic energy Hessian, is typically restricted to a single shape and its discretization, limiting reduced order modeling (ROM). We introduce the first eigenanalysis method for continuously parameterized shape families. Given a parametric shape, our method constructs spatial neural fields that represent eigen-functions across the entire shape space. It is agnostic to the specific shape representation, requiring only an inside/outside indicator function that depends on shape parameters. Eigenfunctions are computed by minimizing a variational principle over nested spaces with orthogonality constraints. Since eigenvalues may swap dominance at points of multiplicity, we jointly train multiple eigenfunctions while dynamically reordering them based on their eigenvalues at each step. Through causal gradient filtering, this reordering is reflected in backpropagation. Our method enables applications to operate over shape space, providing a single ROM that encapsulates vibration modes for all shapes, including previously unseen ones. Since our eigenanalysis is differentiable with respect to shape parameters, it facilitates eigenfunction-aware shape optimization. We evaluate our approach on shape optimization for sound synthesis and locomotion, as well as reduced-order modeling for elastodynamic simulation.
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Mesh Processing on the GPU</title>
<link href="https://hdl.handle.net/1721.1/164757" rel="alternate"/>
<author>
<name>Mahmoud, Ahmed H.</name>
</author>
<author>
<name>Porumbescu, Serban D.</name>
</author>
<author>
<name>Owens, John D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164757</id>
<updated>2026-03-08T03:23:09Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Dynamic Mesh Processing on the GPU
Mahmoud, Ahmed H.; Porumbescu, Serban D.; Owens, John D.
We present a system for dynamic triangle mesh processing entirely on the GPU. Our system features an efficient data structure that enables rapid updates to mesh connectivity and attributes. By partitioning the mesh into small patches, we process all dynamic updates for each patch within the GPU's fast shared memory. This approach leverages speculative processing for conflict handling, minimizing rollback costs, maximizing parallelism, and reducing locking overhead. Additionally, we introduce a new programming model for dynamic mesh processing. This model provides concise semantics for dynamic updates, abstracting away concerns about conflicting updates during parallel execution. At the core of our model is the cavity operator, a general mesh update operator that facilitates any dynamic operation by removing a set of mesh elements and inserting new ones into the resulting void. We applied our system to various GPU applications, including isotropic remeshing, surface tracking, mesh decimation, and Delaunay edge flips. On large inputs, our system achieves an order-of-magnitude speedup compared to multi-threaded CPU solutions and is more than two orders of magnitude faster than state-of-the-art single-threaded CPU solutions. Furthermore, our data structure outperforms state-of-the-art GPU static data structures in terms of both speed and memory efficiency.
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exciton Fine Structure in 2D Perovskites: The Out‐of‐Plane Excitonic State</title>
<link href="https://hdl.handle.net/1721.1/164756" rel="alternate"/>
<author>
<name>Posmyk, Katarzyna</name>
</author>
<author>
<name>Dyksik, Mateusz</name>
</author>
<author>
<name>Surrente, Alessandro</name>
</author>
<author>
<name>Maude, Duncan K</name>
</author>
<author>
<name>Zawadzka, Natalia</name>
</author>
<author>
<name>Babiński, Adam</name>
</author>
<author>
<name>Molas, Maciej R</name>
</author>
<author>
<name>Paritmongkol, Watcharaphol</name>
</author>
<author>
<name>Mączka, Mirosław</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Plochocka, Paulina</name>
</author>
<author>
<name>Baranowski, Michał</name>
</author>
<id>https://hdl.handle.net/1721.1/164756</id>
<updated>2026-03-08T03:39:49Z</updated>
<published>2024-07-23T00:00:00Z</published>
<summary type="text">Exciton Fine Structure in 2D Perovskites: The Out‐of‐Plane Excitonic State
Posmyk, Katarzyna; Dyksik, Mateusz; Surrente, Alessandro; Maude, Duncan K; Zawadzka, Natalia; Babiński, Adam; Molas, Maciej R; Paritmongkol, Watcharaphol; Mączka, Mirosław; Tisdale, William A; Plochocka, Paulina; Baranowski, Michał
2D Ruddlesden-Popper metal-halide perovskites feature particularly strong excitonic effects, making them a fascinating playground for studying exciton physics. A complete understanding of the properties of this quasi-particle is crucial to fully exploit the tremendous potential of 2D perovskites (2DP) in light emission applications. Despite intense investigations, some of the exciton properties remain elusive to date, for example, the energy ordering of the exciton states within the so-called fine structure manifold. Using optical spectroscopy, we demonstrate that in the archetypical 2DP (PEA)2PbI4, in contradiction to theoretical predictions, the energy of the bright out-of-plane exciton state is higher than that of the two in-plane states. Having elucidated the order of the exciton fine structure, we determine the g-factor of the dark exciton transition, together with the values of the electron and hole g-factors in the direction parallel to the c-axis of the crystal. In this way, we provide, for the first time, a complete picture of the exciton fine structure in (PEA)2PbI4 2DP.
</summary>
<dc:date>2024-07-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovery of enhanced lattice dynamics in a single-layered hybrid perovskite</title>
<link href="https://hdl.handle.net/1721.1/164755" rel="alternate"/>
<author>
<name>Zhang, Zhuquan</name>
</author>
<author>
<name>Zhang, Jiahao</name>
</author>
<author>
<name>Liu, Zi-Jie</name>
</author>
<author>
<name>Dahod, Nabeel S</name>
</author>
<author>
<name>Paritmongkol, Watcharaphol</name>
</author>
<author>
<name>Brown, Niamh</name>
</author>
<author>
<name>Stollmann, Alexia</name>
</author>
<author>
<name>Lee, Woo Seok</name>
</author>
<author>
<name>Chien, Yu-Che</name>
</author>
<author>
<name>Dai, Zhenbang</name>
</author>
<author>
<name>Nelson, Keith A</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Rappe, Andrew M</name>
</author>
<author>
<name>Baldini, Edoardo</name>
</author>
<id>https://hdl.handle.net/1721.1/164755</id>
<updated>2026-03-08T03:39:47Z</updated>
<published>2023-08-16T00:00:00Z</published>
<summary type="text">Discovery of enhanced lattice dynamics in a single-layered hybrid perovskite
Zhang, Zhuquan; Zhang, Jiahao; Liu, Zi-Jie; Dahod, Nabeel S; Paritmongkol, Watcharaphol; Brown, Niamh; Stollmann, Alexia; Lee, Woo Seok; Chien, Yu-Che; Dai, Zhenbang; Nelson, Keith A; Tisdale, William A; Rappe, Andrew M; Baldini, Edoardo
Layered hybrid perovskites exhibit emergent physical properties and exceptional functional performances, but the coexistence of lattice order and structural disorder severely hinders our understanding of these materials. One unsolved problem regards how the lattice dynamics are affected by the dimensional engineering of the inorganic frameworks and their interaction with the molecular moieties. Here, we address this question by using a combination of spontaneous Raman scattering, terahertz spectroscopy, and molecular dynamics simulations. This approach reveals the structural dynamics in and out of equilibrium and provides unexpected observables that differentiate single- and double-layered perovskites. While no distinct vibrational coherence is observed in double-layered perovskites, an off-resonant terahertz pulse can drive a long-lived coherent phonon mode in the single-layered system. This difference highlights the dramatic change in the lattice environment as the dimension is reduced, and the findings pave the way for ultrafast structural engineering and high-speed optical modulators based on layered perovskites.
</summary>
<dc:date>2023-08-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bright Excitonic Fine Structure in Metal-Halide Perovskites: From Two-Dimensional to Bulk</title>
<link href="https://hdl.handle.net/1721.1/164754" rel="alternate"/>
<author>
<name>Posmyk, Katarzyna</name>
</author>
<author>
<name>Zawadzka, Natalia</name>
</author>
<author>
<name>Łucja Kipczak</name>
</author>
<author>
<name>Dyksik, Mateusz</name>
</author>
<author>
<name>Surrente, Alessandro</name>
</author>
<author>
<name>Maude, Duncan K</name>
</author>
<author>
<name>Kazimierczuk, Tomasz</name>
</author>
<author>
<name>Babiński, Adam</name>
</author>
<author>
<name>Molas, Maciej R</name>
</author>
<author>
<name>Bumrungsan, Wakul</name>
</author>
<author>
<name>Chooseng, Chanisara</name>
</author>
<author>
<name>Paritmongkol, Watcharaphol</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Baranowski, Michał</name>
</author>
<author>
<name>Plochocka, Paulina</name>
</author>
<id>https://hdl.handle.net/1721.1/164754</id>
<updated>2026-03-08T03:40:20Z</updated>
<published>2024-02-07T00:00:00Z</published>
<summary type="text">Bright Excitonic Fine Structure in Metal-Halide Perovskites: From Two-Dimensional to Bulk
Posmyk, Katarzyna; Zawadzka, Natalia; Łucja Kipczak; Dyksik, Mateusz; Surrente, Alessandro; Maude, Duncan K; Kazimierczuk, Tomasz; Babiński, Adam; Molas, Maciej R; Bumrungsan, Wakul; Chooseng, Chanisara; Paritmongkol, Watcharaphol; Tisdale, William A; Baranowski, Michał; Plochocka, Paulina
The optical response of two-dimensional (2D) perovskites, often referred to as natural quantum wells, is primarily governed by excitons, whose properties can be readily tuned by adjusting the perovskite layer thickness. We have investigated the exciton fine structure splitting in the archetypal 2D perovskite (PEA)2(MA)n−1PbnI3n+1 with varying numbers of inorganic octahedral layers n = 1, 2, 3, and 4. We demonstrate that the in-plane excitonic states exhibit splitting and orthogonally oriented dipoles for all confinement regimes. The evolution of the exciton states in an external magnetic field provides further insights into the g-factors and diamagnetic coefficients. With increasing n, we observe a gradual evolution of the excitonic parameters characteristic of a 2D to three-dimensional transition. Our results provide valuable information concerning the evolution of the optoelectronic properties of 2D perovskites with the changing confinement strength.
</summary>
<dc:date>2024-02-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Persistent enhancement of exciton diffusivity in CsPbBr3 nanocrystal solids</title>
<link href="https://hdl.handle.net/1721.1/164753" rel="alternate"/>
<author>
<name>Shcherbakov-Wu, Wenbi</name>
</author>
<author>
<name>Saris, Seryio</name>
</author>
<author>
<name>Sheehan, Thomas John</name>
</author>
<author>
<name>Wong, Narumi Nagaya</name>
</author>
<author>
<name>Powers, Eric R</name>
</author>
<author>
<name>Krieg, Franziska</name>
</author>
<author>
<name>Kovalenko, Maksym V</name>
</author>
<author>
<name>Willard, Adam P</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<id>https://hdl.handle.net/1721.1/164753</id>
<updated>2026-03-08T03:39:47Z</updated>
<published>2024-02-21T00:00:00Z</published>
<summary type="text">Persistent enhancement of exciton diffusivity in CsPbBr3 nanocrystal solids
Shcherbakov-Wu, Wenbi; Saris, Seryio; Sheehan, Thomas John; Wong, Narumi Nagaya; Powers, Eric R; Krieg, Franziska; Kovalenko, Maksym V; Willard, Adam P; Tisdale, William A
In semiconductors, exciton or charge carrier diffusivity is typically described as an inherent material property. Here, we show that the transport of excitons among CsPbBr3 perovskite nanocrystals (NCs) depends markedly on how recently those NCs were occupied by a previous exciton. Using transient photoluminescence microscopy, we observe a striking dependence of the apparent exciton diffusivity on excitation laser power that does not arise from nonlinear exciton-exciton interactions or thermal heating. We interpret our observations with a model in which excitons cause NCs to transition to a long-lived metastable configuration that markedly increases exciton transport. The exciton diffusivity observed here (&gt;0.15 square centimeters per second) is considerably higher than that observed in other NC systems, revealing unusually strong excitonic coupling between NCs. The finding of a persistent enhancement in excitonic coupling may help explain other photophysical behaviors observed in CsPbBr3 NCs, such as superfluorescence, and inform the design of optoelectronic devices.
</summary>
<dc:date>2024-02-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>All-Perovskite Multicomponent Nanocrystal Superlattices</title>
<link href="https://hdl.handle.net/1721.1/164752" rel="alternate"/>
<author>
<name>Sekh, Taras V</name>
</author>
<author>
<name>Cherniukh, Ihor</name>
</author>
<author>
<name>Kobiyama, Etsuki</name>
</author>
<author>
<name>Sheehan, Thomas J</name>
</author>
<author>
<name>Manoli, Andreas</name>
</author>
<author>
<name>Zhu, Chenglian</name>
</author>
<author>
<name>Athanasiou, Modestos</name>
</author>
<author>
<name>Sergides, Marios</name>
</author>
<author>
<name>Ortikova, Oleksandra</name>
</author>
<author>
<name>Rossell, Marta D</name>
</author>
<author>
<name>Bertolotti, Federica</name>
</author>
<author>
<name>Guagliardi, Antonietta</name>
</author>
<author>
<name>Masciocchi, Norberto</name>
</author>
<author>
<name>Erni, Rolf</name>
</author>
<author>
<name>Othonos, Andreas</name>
</author>
<author>
<name>Itskos, Grigorios</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Stöferle, Thilo</name>
</author>
<author>
<name>Rainò, Gabriele</name>
</author>
<author>
<name>Bodnarchuk, Maryna I</name>
</author>
<author>
<name>Kovalenko, Maksym V</name>
</author>
<id>https://hdl.handle.net/1721.1/164752</id>
<updated>2026-03-08T03:39:46Z</updated>
<published>2024-03-06T00:00:00Z</published>
<summary type="text">All-Perovskite Multicomponent Nanocrystal Superlattices
Sekh, Taras V; Cherniukh, Ihor; Kobiyama, Etsuki; Sheehan, Thomas J; Manoli, Andreas; Zhu, Chenglian; Athanasiou, Modestos; Sergides, Marios; Ortikova, Oleksandra; Rossell, Marta D; Bertolotti, Federica; Guagliardi, Antonietta; Masciocchi, Norberto; Erni, Rolf; Othonos, Andreas; Itskos, Grigorios; Tisdale, William A; Stöferle, Thilo; Rainò, Gabriele; Bodnarchuk, Maryna I; Kovalenko, Maksym V
Nanocrystal superlattices (NC SLs) have long been sought as promising metamaterials, with nanoscale-engineered properties arising from collective and synergistic effects among the constituent building blocks. Lead halide perovskite (LHP) NCs come across as outstanding candidates for SL design, as they demonstrate collective light emission, known as superfluorescence, in single- and multicomponent SLs. Thus far, LHP NCs have only been assembled in single-component SLs or coassembled with dielectric NC building blocks acting solely as spacers between luminescent NCs. Here, we report the formation of multicomponent LHP NC-only SLs, i.e., using only CsPbBr3 NCs of different sizes as building blocks. The structural diversity of the obtained SLs encompasses the ABO6, ABO3, and NaCl structure types, all of which contain orientationally and positionally locked NCs. For the selected model system, the ABO6-type SL, we observed efficient NC coupling and Förster-like energy transfer from strongly confined 5.3 nm CsPbBr3 NCs to weakly confined 17.6 nm CsPbBr3 NCs, along with characteristic superfluorescence features at cryogenic temperatures. Spatiotemporal exciton dynamics measurements reveal that binary SLs exhibit enhanced exciton diffusivity compared to single-component NC assemblies across the entire temperature range (from 5 to 298 K). The observed coherent and incoherent NC coupling and controllable excitonic transport within the solid NC SLs hold promise for applications in quantum optoelectronic devices.
</summary>
<dc:date>2024-03-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrical manipulation of dissipation in microwave photon–magnon hybrid system through the spin Hall effect</title>
<link href="https://hdl.handle.net/1721.1/164751" rel="alternate"/>
<author>
<name>Hou, Justin T</name>
</author>
<author>
<name>Chou, Chung-Tao</name>
</author>
<author>
<name>Han, Jiahao</name>
</author>
<author>
<name>Fan, Yabin</name>
</author>
<author>
<name>Liu, Luqiao</name>
</author>
<id>https://hdl.handle.net/1721.1/164751</id>
<updated>2026-03-08T03:40:19Z</updated>
<published>2024-02-12T00:00:00Z</published>
<summary type="text">Electrical manipulation of dissipation in microwave photon–magnon hybrid system through the spin Hall effect
Hou, Justin T; Chou, Chung-Tao; Han, Jiahao; Fan, Yabin; Liu, Luqiao
Hybrid dynamic systems combine advantages from different subsystems for realizing information processing tasks in both classical and quantum domains. However, the lack of controlling knobs in tuning system parameters becomes a severe challenge in developing scalable, versatile hybrid systems for useful applications. Here, we report an on-chip microwave photon–magnon hybrid system where the dissipation rates and the coupling cooperativity can be electrically influenced by the spin Hall effect. Through magnon–photon coupling, the linewidths of the resonator photon mode and the hybridized magnon polariton modes are effectively changed by the spin injection into the magnetic wires from an applied direct current, which exhibit different trends in samples with low and high coupling strengths. Moreover, the linewidth modification by the spin Hall effect shows strong dependence on the detuning of the two subsystems, in contrast to the classical behavior of a standalone magnonic device. Our results point to a direction of realizing tunable, on-chip, scalable magnon-based hybrid dynamic systems, where spintronic effects provide useful control mechanisms.
</summary>
<dc:date>2024-02-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human Factors Observations in Flightcrew Response to System Failure Events in Transport Category Aircraft from 2000 to 2024</title>
<link href="https://hdl.handle.net/1721.1/164750" rel="alternate"/>
<author>
<name>Perez Gago, Cecilia</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/164750</id>
<updated>2026-02-06T03:00:48Z</updated>
<published>2026-02-05T00:00:00Z</published>
<summary type="text">Human Factors Observations in Flightcrew Response to System Failure Events in Transport Category Aircraft from 2000 to 2024
Perez Gago, Cecilia; Hansman, R. John
</summary>
<dc:date>2026-02-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Will (Game) Wars End?</title>
<link href="https://hdl.handle.net/1721.1/164749" rel="alternate"/>
<author>
<name>Bhatia, Manan</name>
</author>
<author>
<name>Chin, Byron</name>
</author>
<author>
<name>Mani, Nitya</name>
</author>
<author>
<name>Mossel, Elchanan</name>
</author>
<id>https://hdl.handle.net/1721.1/164749</id>
<updated>2026-03-08T03:40:17Z</updated>
<published>2026-01-02T00:00:00Z</published>
<summary type="text">When Will (Game) Wars End?
Bhatia, Manan; Chin, Byron; Mani, Nitya; Mossel, Elchanan
We study several variants of the classical card game war. As anyone who has played this game knows, the game can take some time to terminate, but it usually does. Here, we analyze a number of asymptotic variants of the game, where the number of cards is n, and show that all have an expected termination time of order n². This is the same expected termination time as the game where at each turn a fair coin toss decides which player wins a card, known as Gambler’s Ruin, which was studied by Blaise Pascal, Pierre de Fermat and others in the seventeenth century.
</summary>
<dc:date>2026-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semiconductor-free, monolithically 3D-printed logic gates and resettable fuses</title>
<link href="https://hdl.handle.net/1721.1/164748" rel="alternate"/>
<author>
<name>Cañada, Jorge</name>
</author>
<author>
<name>Velásquez-García, Luis Fernando</name>
</author>
<id>https://hdl.handle.net/1721.1/164748</id>
<updated>2026-03-08T03:40:18Z</updated>
<published>2024-09-21T00:00:00Z</published>
<summary type="text">Semiconductor-free, monolithically 3D-printed logic gates and resettable fuses
Cañada, Jorge; Velásquez-García, Luis Fernando
Additive manufacturing has the potential to enable the inexpensive, single-step fabrication of fully functional electromechanical devices. However, while the 3D printing of mechanical parts and passive electrical components is well developed, the fabrication of fully 3D-printed active electronics, which are the cornerstone of intelligent devices, remains a challenge. Existing examples of 3D-printed active electronics show potential but lack integrability and accessibility. This work reports the first active electronics fully 3D-printed via material extrusion, i.e. one of the most accessible and versatile additive manufacturing processes. The technology is proof-of-concept demonstrated through the implementation of the first fully 3D-printed, semiconductor-free, solid-state logic gates, and the first fully 3D-printed resettable fuses. The devices take advantage of a positive temperature coefficient phenomenon found to affect narrow traces of 3D-printed copper-reinforced, polylactic acid. Although the reported devices don’t perform competitively against semiconductor-enabled integrated circuits, the customisability and accessibility intrinsic to material extrusion additive manufacturing make this technology promisingly disruptive. This work serves as a steppingstone for the semiconductor-free democratisation of electronic device fabrication and is of immediate relevance for the manufacture of custom, intelligent devices far from traditional manufacturing centres.
</summary>
<dc:date>2024-09-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rapid large-scale building damage level classification after earthquakes using deep learning with Lidar and satellite optical data</title>
<link href="https://hdl.handle.net/1721.1/164747" rel="alternate"/>
<author>
<name>Liu, Chang</name>
</author>
<author>
<name>Ge, Linlin</name>
</author>
<author>
<name>Bai, Ting</name>
</author>
<id>https://hdl.handle.net/1721.1/164747</id>
<updated>2026-03-08T03:40:21Z</updated>
<published>2024-12-31T00:00:00Z</published>
<summary type="text">Rapid large-scale building damage level classification after earthquakes using deep learning with Lidar and satellite optical data
Liu, Chang; Ge, Linlin; Bai, Ting
In post-earthquake scenarios, the swift assessment of building damage levels is pivotal for efficient emergency response and recovery planning. Nevertheless, conventional in-situ damage evaluations consume time. Current satellite-based deep learning methods save time but often lack detail, usually classifying damage as either collapsed or intact. This two-level information is not enough for rescue or recovery planning. Light Detection and Ranging (Lidar)-based deep learning methods, which provide three-dimensional (3D) information, could address this issue of damage details. Therefore, this paper proposes a deep learning-based building damage level classification method using both Lidar and satellite data. The proposed method classifies damage into four levels, including no/minor damage, partially collapsed, totally collapsed, and story failure. The developed network builds upon RandLA-Net, incorporating surface normal vectors to enhance accuracy. A colourised Lidar dataset was created for the network. The network underscores the advantage of incorporating surface normal information. A framework is also proposed based on the damage level outcomes of the developed network, which aids in emergency response efforts. Consequently, this paper demonstrates the practical utility of deep learning networks in rapidly assessing detailed building damage levels after earthquakes. Its practical contribution is guiding decision-making during the critical phases of post-earthquake response and recovery.
</summary>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cortical somatostatin innervation follows a unique experience-independent developmental trajectory</title>
<link href="https://hdl.handle.net/1721.1/164746" rel="alternate"/>
<author>
<name>Boivin, Josiah R</name>
</author>
<author>
<name>Schmerl, Bettina</name>
</author>
<author>
<name>Martin, Kendyll B</name>
</author>
<author>
<name>Lee, Chia-Fang</name>
</author>
<author>
<name>Nedivi, Elly</name>
</author>
<id>https://hdl.handle.net/1721.1/164746</id>
<updated>2026-03-08T03:39:45Z</updated>
<published>2026-01-13T00:00:00Z</published>
<summary type="text">Cortical somatostatin innervation follows a unique experience-independent developmental trajectory
Boivin, Josiah R; Schmerl, Bettina; Martin, Kendyll B; Lee, Chia-Fang; Nedivi, Elly
Despite the critical role of inhibition in regulating developmental plasticity, there are significant gaps in our understanding of inhibitory synapse development, particularly for the vast majority of inhibitory synapses that reside on dendrites. Dendritic inhibitory synapses, canonically arising from somatostatin (SST)-expressing neurons, are challenging to detect electrophysiologically and difficult to visualize without a molecular tag. Here, we integrate a genetic synapse labeling strategy with epitope-preserving magnified analysis of proteome (eMAP), a combination of tissue expansion and clearing, to reveal the development of SST innervation in the primary visual cortex of male and female mice. Unlike excitatory innervation, which follows a deep to shallow progression and undergoes pruning, we find that SST bouton formation occurs simultaneously across all cortical layers and is not subject to a period of net pruning. SST bouton and synapse formation occur most dramatically in the days following eye opening and during the opening of the critical period for ocular dominance plasticity. Yet, despite a coincidence with these visual milestones, neither SST bouton nor synapse formation depend on visual experience. This is in contrast to excitatory and non-SST inhibitory synapses, whose development has been shown to depend heavily on visual experience. Thus, SST cortical innervation follows a unique developmental trajectory that is independent of sensory experience and is optimally timed to regulate processes that are fundamental to cortical circuit maturation.
Significance statement
During development, neurons form extensive synaptic connections while maintaining a delicate balance of excitation and inhibition. It is critical to understand how different subpopulations of synapses form during development, because perturbations in this precisely coordinated process can cause neurodevelopmental disorders. Here, we reveal at unprecedented resolution the development of cortical inhibitory innervation from somatostatin-expressing neurons, which canonically target dendrites. We show that somatostatin neurons follow different rules than other cell types during development, and somatostatin innervation is well-timed to contribute to developmental processes that are central to healthy cortical function. Our results provide new insights on how somatostatin neurons, a critically influential cell type, integrate into cortical circuitry during development.
</summary>
<dc:date>2026-01-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neruda through copper-coloured glasses: the role of place attachment in the embeddedness of Chilean entrepreneurship</title>
<link href="https://hdl.handle.net/1721.1/164745" rel="alternate"/>
<author>
<name>Burke, M. Kathleen</name>
</author>
<author>
<name>Conley, Mark A.</name>
</author>
<author>
<name>Jack, Sarah L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164745</id>
<updated>2026-03-08T03:40:17Z</updated>
<published>2025-01-11T00:00:00Z</published>
<summary type="text">Neruda through copper-coloured glasses: the role of place attachment in the embeddedness of Chilean entrepreneurship
Burke, M. Kathleen; Conley, Mark A.; Jack, Sarah L.
Despite scholarly interest in how emotional and instrumental place attachments motivate entrepreneurship, the influences on embeddedness remain underexplored. Building on the notion that entrepreneurship becomes embedded in a locality, we argue that this process is packed with place-based interpretations of the material and imagined reality. Engaging with the empirical setting of Chile, the world’s largest copper producer, we embark on a study examining the interactions between place attachment, embeddedness and natural resource-based entrepreneurship. We uncover these interactions through analysing several works of poetry by Nobel laureate Pablo Neruda, which focus on the diverging place attachment styles between local and multinational agents. Through reflecting on the poems, we show how historical changes within the Chilean mining industry and broader societal changes are visible in Neruda’s imagery of place attachments, emotions and concerns for local conditions. We problematize embeddedness and entrepreneurship through illuminating the place attachments shaping local actors’ entrepreneurial imagination, thus contributing to knowledge about being embedded in natural resource-based entrepreneurship contexts. We provide new insights into how place attachment can evolve alongside different forms of embedded entrepreneurship.
</summary>
<dc:date>2025-01-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-centric manufacturing culture: a research study of MedTech manufacturers in Ireland</title>
<link href="https://hdl.handle.net/1721.1/164744" rel="alternate"/>
<author>
<name>Rhodes, Donna H</name>
</author>
<author>
<name>Cuddy, Sara</name>
</author>
<author>
<name>Jeffers, Malcolm</name>
</author>
<author>
<name>O’Rourke, Fiona</name>
</author>
<id>https://hdl.handle.net/1721.1/164744</id>
<updated>2026-03-08T03:40:19Z</updated>
<published>2025-12-31T00:00:00Z</published>
<summary type="text">Human-centric manufacturing culture: a research study of MedTech manufacturers in Ireland
Rhodes, Donna H; Cuddy, Sara; Jeffers, Malcolm; O’Rourke, Fiona
Digital manufacturing is rapidly evolving; however, this transformation is predominantly technology centric. Human-centric manufacturing shifts the paradigm for the digital manufacturing enterprise towards a human focus in realising its envisioned digital future. In that context, Digital Manufacturing Ireland (DMI), Ireland’s expert body for driving digital adoption across manufacturing, initiated a research study in collaboration with two research partners, MIT and IAAE, in support of this important focus for future manufacturing. This paper discusses results of the DMI 2023 Human-Centric Manufacturing Culture Study, which engaged manufacturing leaders from 11 MedTech companies with major manufacturing sites in Ireland. Overall findings are discussed, with a focus on 12 emergent themes grouped in four categories: imperatives, values, strategies, and practices. Planned collaboration initiatives and anticipated future research are described. This paper also highlights considerations regarding new thinking needed by manufacturing leaders, along with recommendations as to what leaders can begin to do differently.
</summary>
<dc:date>2025-12-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decomposition of Frobenius pushforwards of line bundles on wonderful compactifications</title>
<link href="https://hdl.handle.net/1721.1/164743" rel="alternate"/>
<author>
<name>Cai, Merrick</name>
</author>
<author>
<name>Krylov, Vasily</name>
</author>
<id>https://hdl.handle.net/1721.1/164743</id>
<updated>2026-03-08T03:39:43Z</updated>
<published>2025-01-28T00:00:00Z</published>
<summary type="text">Decomposition of Frobenius pushforwards of line bundles on wonderful compactifications
Cai, Merrick; Krylov, Vasily
De Concini and Procesi introduced varieties known as wonderful compactifications, which are smooth projective compactifications of semisimple adjoint groups G. We study the Frobenius pushforwards of line bundles on the wonderful compactifications, and in particular we decompose them into a direct sum of vector subbundles and explicitly describe the ranks. We are especially interested in when these subbundles are line bundles, and in the case of G = PSL_n, we offer lower bounds on the multiplicities (as direct summands) for these line bundles.
</summary>
<dc:date>2025-01-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>BrepDiff: Single-Stage B-rep Diffusion Model</title>
<link href="https://hdl.handle.net/1721.1/164742" rel="alternate"/>
<author>
<name>Lee, Mingi</name>
</author>
<author>
<name>Zhang, Dongsu</name>
</author>
<author>
<name>Jambon, Clément</name>
</author>
<author>
<name>Kim, Young Min</name>
</author>
<id>https://hdl.handle.net/1721.1/164742</id>
<updated>2026-03-08T03:22:50Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">BrepDiff: Single-Stage B-rep Diffusion Model
Lee, Mingi; Zhang, Dongsu; Jambon, Clément; Kim, Young Min
The Boundary Representation (B-rep) is a widely used 3D model representation of most consumer products designed with CAD software. However, its highly irregular and sparse set of relationships poses significant challenges for designing a generative model tailored to B-reps. Existing approaches use multi-stage approaches to satisfy the complex constraints sequentially. As a result, the final geometry cannot incorporate user edits due to the non-deterministic dependencies between cascaded stages. In contrast, we propose BrepDiff, a single-stage diffusion model for B-rep generation. We present a masked UV grid representation consisting of structured point samples from faces, serving as input for a diffusion transformer. By introducing an asynchronous and shifted noise schedule, we improve the training signal, enabling the diffusion model to better capture the distribution of UV grids. The explicitness of our masked UV grid representation enables users to intuitively understand and freely design surface geometry without being constrained by topological validity. The interconnectivity can be derived from the face layout, which is later processed into a valid solid volume during post-processing. Our approach achieves performance on par with state-of-the-art cascaded models while offering complex and diverse manipulations of geometry and topology, such as shape completion, merging, and interpolation.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation</title>
<link href="https://hdl.handle.net/1721.1/164741" rel="alternate"/>
<author>
<name>Arar, Ellie</name>
</author>
<author>
<name>Frenkel, Yarden</name>
</author>
<author>
<name>Cohen-Or, Daniel</name>
</author>
<author>
<name>Shamir, Ariel</name>
</author>
<author>
<name>Vinker, Yael</name>
</author>
<id>https://hdl.handle.net/1721.1/164741</id>
<updated>2026-03-08T03:22:35Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation
Arar, Ellie; Frenkel, Yarden; Cohen-Or, Daniel; Shamir, Ariel; Vinker, Yael
Recent advancements in large vision-language models have enabled highly expressive and diverse vector sketch generation. However, state-of-the-art methods rely on a time-consuming optimization process involving repeated feedback from a pretrained model to determine stroke placement. Consequently, despite producing impressive sketches, these methods are limited in practical applications. In this work, we introduce SwiftSketch, a diffusion model for image-conditioned vector sketch generation that can produce high-quality sketches in less than a second. SwiftSketch operates by progressively denoising stroke control points sampled from a Gaussian distribution. Its transformer-decoder architecture is designed to effectively handle the discrete nature of vector representation and capture the inherent global dependencies between strokes. To train SwiftSketch, we construct a synthetic dataset of image-sketch pairs, addressing the limitations of existing sketch datasets, which are often created by non-artists and lack professional quality. For generating these synthetic sketches, we introduce ControlSketch, a method that enhances SDS-based techniques by incorporating precise spatial control through a depth-aware ControlNet. We demonstrate that SwiftSketch generalizes across diverse concepts, efficiently producing sketches that combine high fidelity with a natural and visually appealing style.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lifting the Winding Number: Precise Discontinuities in Neural Fields for Physics Simulation</title>
<link href="https://hdl.handle.net/1721.1/164740" rel="alternate"/>
<author>
<name>Chang, Yue</name>
</author>
<author>
<name>Liu, Mengfei</name>
</author>
<author>
<name>Wang, Zhecheng</name>
</author>
<author>
<name>Chen, Peter Yichen</name>
</author>
<author>
<name>Grinspun, Eitan</name>
</author>
<id>https://hdl.handle.net/1721.1/164740</id>
<updated>2026-03-08T03:22:40Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Lifting the Winding Number: Precise Discontinuities in Neural Fields for Physics Simulation
Chang, Yue; Liu, Mengfei; Wang, Zhecheng; Chen, Peter Yichen; Grinspun, Eitan
Cutting thin-walled deformable structures is common in daily life, but poses significant challenges for simulation due to the introduced spatial discontinuities. Traditional methods rely on mesh-based domain representations, which require frequent remeshing and refinement to accurately capture evolving discontinuities. These challenges are further compounded in reduced-space simulations, where the basis functions are inherently geometry- and mesh-dependent, making it difficult or even impossible for the basis to represent the diverse family of discontinuities introduced by cuts.&#13;
Recent advances in representing basis functions with neural fields offer a promising alternative, leveraging their discretization-agnostic nature to represent deformations across varying geometries. However, the inherent continuity of neural fields is an obstruction to generalization, particularly if discontinuities are encoded in neural network weights.&#13;
We present Wind Lifter, a novel neural representation designed to accurately model complex cuts in thin-walled deformable structures. Our approach constructs neural fields that reproduce discontinuities precisely at specified locations, without “baking in” the position of the cut line. To achieve this, we augment the input coordinates of the neural field with the generalized winding number of any given cut line, effectively lifting the input from two to three dimensions. Lifting allows the network to focus on the easier problem of learning a 3D everywhere-continuous volumetric field, while a corresponding restriction operator enables the final output field to precisely resolve strict discontinuities. Crucially, our approach does not embed the discontinuity in the neural network’s weights, opening avenues to generalization of cut placement.&#13;
Our method achieves real-time simulation speeds and supports dynamic updates to cut line geometry during the simulation. Moreover, the explicit representation of discontinuities makes our neural field intuitive to control and edit, offering a significant advantage over traditional neural fields, where discontinuities are embedded within the network’s weights, and enabling new applications that rely on general cut placement.
SIGGRAPH Conference Papers ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Concurrent Hardware Verification Sequential</title>
<link href="https://hdl.handle.net/1721.1/164739" rel="alternate"/>
<author>
<name>Bourgeat, Thomas</name>
</author>
<author>
<name>Liu, Jiazheng</name>
</author>
<author>
<name>Chlipala, Adam</name>
</author>
<author>
<name>Arvind</name>
</author>
<id>https://hdl.handle.net/1721.1/164739</id>
<updated>2026-03-08T03:39:50Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Making Concurrent Hardware Verification Sequential
Bourgeat, Thomas; Liu, Jiazheng; Chlipala, Adam; Arvind
Compared to familiar hardware-description languages like Verilog, rule-based languages like Bluespec offer&#13;
opportunities to import modularity features from software programming. While Verilog modules are about&#13;
connecting wires between submodules, Bluespec modules resemble objects in object-oriented programming,&#13;
where interactions with a module occur only through calls to its methods. However, while software objects&#13;
can typically be characterized one method at a time, the concurrent nature of hardware makes it essential to&#13;
consider the repercussions of invoking multiple methods simultaneously. Prior formalizations of rule-based&#13;
languages conceptualized modules by describing their semantics considering arbitrary sets of simultaneous&#13;
method calls. This internalized concurrency significantly complicates correctness proofs. Rather than analyzing&#13;
methods one-at-a-time, as is done when verifying software object methods, validating the correctness of&#13;
rule-based modules necessitated simultaneous consideration of arbitrary subsets of method calls. The result&#13;
was a number of proof cases that grew exponentially in the size of the module’s API.&#13;
In this work, we side-step the exponential blowup through a set of judicious language restrictions. We&#13;
introduce a new Bluespec-inspired formal language, Fjfj, that supports sequential characterization of modules,&#13;
while preserving the concurrent hardware nature of the language. We evaluated Fjfj by implementing it in&#13;
Coq, proving the key framework principle: the refinement theorem. We demonstrated Fjfj’s expressivity via&#13;
implementation and verification of three examples: a pipelined processor, a parameterized crossbar, and a&#13;
network switch.
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lilo: A Higher-Order, Relational Concurrent Separation Logic for Liveness</title>
<link href="https://hdl.handle.net/1721.1/164738" rel="alternate"/>
<author>
<name>Lee, Dongjae</name>
</author>
<author>
<name>Lee, Janggun</name>
</author>
<author>
<name>Yoon, Taeyoung</name>
</author>
<author>
<name>Cho, Minki</name>
</author>
<author>
<name>Kang, Jeehoon</name>
</author>
<author>
<name>Hur, Chung-Kil</name>
</author>
<id>https://hdl.handle.net/1721.1/164738</id>
<updated>2026-03-08T03:22:44Z</updated>
<published>2025-04-09T00:00:00Z</published>
<summary type="text">Lilo: A Higher-Order, Relational Concurrent Separation Logic for Liveness
Lee, Dongjae; Lee, Janggun; Yoon, Taeyoung; Cho, Minki; Kang, Jeehoon; Hur, Chung-Kil
Concurrent separation logic (CSL) has excelled in verifying safety properties across various applications, yet its application to liveness properties remains limited. While existing approaches like TaDA Live and Fair Operational Semantics (FOS) have made significant strides, they still face limitations. TaDA Live struggles to verify certain classes of programs, particularly concurrent objects with non-local linearization points, and lacks support for general liveness properties such as "good things happen infinitely often". On the other hand, FOS’s scalability is hindered by the absence of thread modular reasoning principles and modular specifications.&#13;
&#13;
This paper introduces Lilo, a higher-order, relational CSL designed to overcome these limitations. Our core observation is that FOS helps us to maintain simple primitives for our logic, which enable us to explore the design space with fewer restrictions. As a result, Lilo adapts various successful techniques from the literature. It supports reasoning about non-terminating programs via refinement proofs, and also provides Iris-style invariants and modular specifications to facilitate modular verification. To support higher-order reasoning without relying on step-indexing, we develop a technique called stratified propositions inspired by Nola. In particular, we develop novel abstractions for liveness reasoning that bring these techniques together in a uniform way. We show Lilo’s scalability through case studies, including the first termination-guaranteeing modular verification of the elimination stack. Lilo and examples in this paper are mechanized in Coq.
</summary>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Lazy Knowledge Compilation for Inference in Discrete Probabilistic Programs</title>
<link href="https://hdl.handle.net/1721.1/164737" rel="alternate"/>
<author>
<name>Bowers, Maddy</name>
</author>
<author>
<name>Lew, Alexander K.</name>
</author>
<author>
<name>Tenenbaum, Joshua B.</name>
</author>
<author>
<name>Solar-Lezama, Armando</name>
</author>
<author>
<name>Mansinghka, Vikash K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164737</id>
<updated>2026-03-08T03:22:51Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Stochastic Lazy Knowledge Compilation for Inference in Discrete Probabilistic Programs
Bowers, Maddy; Lew, Alexander K.; Tenenbaum, Joshua B.; Solar-Lezama, Armando; Mansinghka, Vikash K.
We present new techniques for exact and approximate inference in discrete probabilistic programs, based on two new ways of exploiting lazy evaluation. First, we show how knowledge compilation, a state-of-the-art technique for exact inference in discrete probabilistic programs, can be made lazy, enabling asymptotic speed-ups. Second, we show how a probabilistic program’s lazy semantics naturally give rise to a division of its random choices into subproblems, which can be solved in sequence by sequential Monte Carlo with locally-optimal proposals automatically computed via lazy knowledge compilation. We implement our approach in a new tool, Pluck, and evaluate its performance against state-of-the-art approaches to inference in discrete probabilistic languages. We find that on a suite of inference benchmarks, lazy knowledge compilation can be faster than state-of-the-art approaches, sometimes by orders of magnitude.
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Programming with Vectorized Programmable Inference</title>
<link href="https://hdl.handle.net/1721.1/164736" rel="alternate"/>
<author>
<name>Becker, McCoy R.</name>
</author>
<author>
<name>Huot, Mathieu</name>
</author>
<author>
<name>Matheos, George</name>
</author>
<author>
<name>Wang, Xiaoyan</name>
</author>
<author>
<name>Chung, Karen</name>
</author>
<author>
<name>Smith, Colin</name>
</author>
<author>
<name>Ritchie, Sam</name>
</author>
<author>
<name>Saurous, Rif A.</name>
</author>
<author>
<name>Lew, Alexander K.</name>
</author>
<author>
<name>Rinard, Martin C.</name>
</author>
<author>
<name>Mansinghka, Vikash K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164736</id>
<updated>2026-03-08T03:39:39Z</updated>
<published>2026-01-08T00:00:00Z</published>
<summary type="text">Probabilistic Programming with Vectorized Programmable Inference
Becker, McCoy R.; Huot, Mathieu; Matheos, George; Wang, Xiaoyan; Chung, Karen; Smith, Colin; Ritchie, Sam; Saurous, Rif A.; Lew, Alexander K.; Rinard, Martin C.; Mansinghka, Vikash K.
We present GenJAX, a new language and compiler for vectorized programmable probabilistic inference.&#13;
GenJAX integrates the vectorizing map (vmap) operation from array programming frameworks such as JAX&#13;
into the programmable inference paradigm, enabling compositional&#13;
vectorization of features such as probabilistic program traces, stochastic branching&#13;
(for expressing mixture models), and programmable inference interfaces&#13;
for writing custom probabilistic inference algorithms.  &#13;
We formalize vectorization as a source-to-source program transformation on a core calculus for probabilistic programming ($\gen$), and&#13;
prove that it correctly vectorizes both modeling and inference operations.&#13;
We have implemented our approach in the GenJAX language and compiler (https://github.com/probcomp/genjax), and have empirically evaluated this implementation on&#13;
several benchmarks and case studies. Our results show that our implementation&#13;
supports a wide and expressive set of programmable inference patterns and delivers&#13;
performance comparable to hand-optimized JAX code.
</summary>
<dc:date>2026-01-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Waste-Efficient Work Stealing</title>
<link href="https://hdl.handle.net/1721.1/164735" rel="alternate"/>
<author>
<name>Singer, Kyle</name>
</author>
<author>
<name>Agrawal, Kunal</name>
</author>
<author>
<name>Schardl, Tao B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164735</id>
<updated>2026-03-08T03:39:38Z</updated>
<published>2026-01-28T00:00:00Z</published>
<summary type="text">Waste-Efficient Work Stealing
Singer, Kyle; Agrawal, Kunal; Schardl, Tao B.
Although randomized work stealing is effective at automatically load-balancing task-parallel programs, it can waste computational resources when scheduling programs that lack sufficient parallelism to use all available threads. For such programs, threads will waste cycles attempting to steal parallel tasks when none are available. This wasted effort reduces the machine’s efficiency, consuming computational resources and energy and needlessly burdening the operating system.&#13;
This paper introduces WEWS, a simple, practical, and provably efficient extension to randomized work stealing that mitigates waste. WEWS dynamically adjusts the number of active threads to reduce the waste of randomized work stealing. WEWS executes a parallel computation with the same asymptotic running time as traditional randomized work stealing while bounding the waste to O(min{PT∞, T1 + P2}) instructions. WEWS also follows the work-first principle to perform well in practice.&#13;
WEWS requires no special support from the operating system or hardware, which simplifies its implementation. We implemented WEWS within the OpenCilk runtime system and compared it to other common waste-mitigation strategies. Across 10 parallel benchmarks, we find that WEWS has minimal impact on parallel running times while, on programs with limited parallelism, substantially reducing waste.
PPoPP ’26, Sydney, NSW, Australia
</summary>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>UniTe: A Universal Tensor Abstraction for Capturing Spatial Relationships</title>
<link href="https://hdl.handle.net/1721.1/164734" rel="alternate"/>
<author>
<name>Ray, Jessica</name>
</author>
<author>
<name>Collin, Teodoro</name>
</author>
<author>
<name>Sze, Vivienne</name>
</author>
<author>
<name>Reuther, Albert</name>
</author>
<author>
<name>Amarasinghe, Saman</name>
</author>
<id>https://hdl.handle.net/1721.1/164734</id>
<updated>2026-03-08T03:39:41Z</updated>
<summary type="text">UniTe: A Universal Tensor Abstraction for Capturing Spatial Relationships
Ray, Jessica; Collin, Teodoro; Sze, Vivienne; Reuther, Albert; Amarasinghe, Saman
Tensors are an integral part of numerous domains, and while significant effort has been put into the design of tensor data structures in isolation, little attention has been paid to the relationships that exist across tensors and how this affects their representation and use. In this paper, we focus on spatial relationships across tensors in a program, where such tensors are defined relative to a common reference coordinate system. These relationships are complicated by the fact that the tensors may differ in their representations, such as having variations in their axes, spacings, origins, and overall shape. Due to the lack of existing abstractions and language support for these types of tensor semantics, users are currently forced to manually perform the bookkeeping necessary to account for these varying relationships and representations. Unfortunately, we cannot rely on a simple library to capture these relationships, as computations on these types of tensors often happen at the innermost levels of programs; we find that the overheads associated with an unoptimized implementation quickly accumulate, leading to performance up to nearly 65x slower than a reference C implementation on a series of image and video compression benchmarks.     In this paper, we introduce the novel UniTe abstraction, which captures spatial relationships across all such tensors in a program. We also introduce two domain-specific languages and optimizing compilers, CoLa for Python and SHiM for C/C++, built off of UniTe. Both CoLa and SHiM provide users an intuitive set of tensor primitives based on spatial relationships, hiding the complexity that goes into maintaining the tensors and computing accesses across them. In addition, we discuss the optimizations necessary to remove the associated abstraction overhead, and describe their implementations. 
On the benchmarks, we show that both CoLa and SHiM successfully remove the overheads, achieving performance parity with existing C implementations.
</summary>
</entry>
<entry>
<title>Triplet Exciton Sensitization of Silicon Mediated by Defect States in Hafnium Oxynitride</title>
<link href="https://hdl.handle.net/1721.1/164733" rel="alternate"/>
<author>
<name>Nagaya, Narumi</name>
</author>
<author>
<name>Alexiu, Alexandra</name>
</author>
<author>
<name>Perkinson, Collin F</name>
</author>
<author>
<name>Nix, Oliver M</name>
</author>
<author>
<name>Koh, Dooyong</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Van Voorhis, Troy</name>
</author>
<author>
<name>Baldo, Marc A</name>
</author>
<id>https://hdl.handle.net/1721.1/164733</id>
<updated>2026-03-08T03:39:56Z</updated>
<published>2024-12-23T00:00:00Z</published>
<summary type="text">Triplet Exciton Sensitization of Silicon Mediated by Defect States in Hafnium Oxynitride
Nagaya, Narumi; Alexiu, Alexandra; Perkinson, Collin F; Nix, Oliver M; Koh, Dooyong; Bawendi, Moungi G; Tisdale, William A; Van Voorhis, Troy; Baldo, Marc A
Singlet exciton fission has the potential to increase the efficiency of crystalline silicon solar cells beyond the conventional single junction limit. Perhaps the largest obstacle to achieving this enhancement is uncertainty about energy coupling mechanisms at the interfaces between silicon and exciton fission materials such as tetracene. Here, the previously reported silicon‐hafnium oxynitride‐tetracene structure is studied and a combination of magnetic‐field‐dependent silicon photoluminescence measurements and density functional theory calculations is used to probe the influence of the interlayer composition on the triplet transfer process across the hafnium oxynitride interlayer. It is found that hafnium oxide interlayers do not show triplet exciton sensitization of silicon, and that nitrogen content in hafnium oxynitride layers is correlated with enhanced sensitization. Calculation results reveal that defects in hafnium oxynitride interlayers with higher nitrogen content introduce states close to the band‐edge of silicon, which can mediate the triplet exciton transfer process. Some defects introduce additional deleterious mid‐gap states, which may explain observed silicon photoluminescence quenching. These results show that band‐edge states can mediate the triplet exciton transfer process, potentially through a sequential charge transfer mechanism.
</summary>
<dc:date>2024-12-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Layered Metal–Organic Chalcogenides: 2D Optoelectronics in 3D Self-Assembled Semiconductors</title>
<link href="https://hdl.handle.net/1721.1/164732" rel="alternate"/>
<author>
<name>Paritmongkol, Watcharaphol</name>
</author>
<author>
<name>Feng, Zhifu</name>
</author>
<author>
<name>Refaely-Abramson, Sivan</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Kastl, Christoph</name>
</author>
<author>
<name>Maserati, Lorenzo</name>
</author>
<id>https://hdl.handle.net/1721.1/164732</id>
<updated>2026-03-08T03:40:10Z</updated>
<published>2025-03-26T00:00:00Z</published>
<summary type="text">Layered Metal–Organic Chalcogenides: 2D Optoelectronics in 3D Self-Assembled Semiconductors
Paritmongkol, Watcharaphol; Feng, Zhifu; Refaely-Abramson, Sivan; Tisdale, William A; Kastl, Christoph; Maserati, Lorenzo
Molecular self-assembly offers an effective and scalable way to design nanostructured materials with tunable optoelectronic properties. In the past 30 years, organic chemistry has delivered a plethora of metal-organic structures based on the combination of organic groups, chalcogens, and a broad range of metals. Among these, several layered metal-organic chalcogenides (MOCs)─including "mithrene" (AgSePh)─recently emerged as interesting platforms to host 2D physics embedded in 3D crystals. Their combination of broad tunability, easy processability, and promising optoelectronic performance is driving a renewed interest in the more general material group of "low-dimensional" hybrids. In addition, the covalent MOC lattice provides higher stability compared with polar materials in operating devices. Here, we provide a perspective on the rise of 2D MOCs in terms of their synthesis approaches, 2D quantum confined exciton physics, and potential future applications in UV and X-ray photodetection, chemical sensors, and electrocatalysis.
</summary>
<dc:date>2025-03-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exciton fission enhanced silicon solar cell</title>
<link href="https://hdl.handle.net/1721.1/164731" rel="alternate"/>
<author>
<name>Nagaya, Narumi</name>
</author>
<author>
<name>Lee, Kangmin</name>
</author>
<author>
<name>Perkinson, Collin F</name>
</author>
<author>
<name>Li, Aaron</name>
</author>
<author>
<name>Lee, Youri</name>
</author>
<author>
<name>Zhong, Xinjue</name>
</author>
<author>
<name>Lee, Sujin</name>
</author>
<author>
<name>Weisburn, Leah P</name>
</author>
<author>
<name>Wang, Janet Z</name>
</author>
<author>
<name>Baikie, Tomi K</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Van Voorhis, Troy</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Kahn, Antoine</name>
</author>
<author>
<name>Seo, Kwanyong</name>
</author>
<author>
<name>Baldo, Marc A</name>
</author>
<id>https://hdl.handle.net/1721.1/164731</id>
<updated>2026-03-08T03:39:57Z</updated>
<published>2025-07-16T00:00:00Z</published>
<summary type="text">Exciton fission enhanced silicon solar cell
Nagaya, Narumi; Lee, Kangmin; Perkinson, Collin F; Li, Aaron; Lee, Youri; Zhong, Xinjue; Lee, Sujin; Weisburn, Leah P; Wang, Janet Z; Baikie, Tomi K; Bawendi, Moungi G; Van Voorhis, Troy; Tisdale, William A; Kahn, Antoine; Seo, Kwanyong; Baldo, Marc A
While silicon solar cells dominate global photovoltaic energy production, their continued improvement is hindered by the single-junction limit. One potential solution is to use molecular singlet exciton fission to generate two electrons from each absorbed high-energy photon. We demonstrate that the long-standing challenge of coupling molecular excited states to silicon solar cells can be overcome using sequential charge transfer. Combining zinc phthalocyanine, aluminum oxide, and a shallow junction crystalline silicon microwire solar cell, the peak charge generation efficiency per photon absorbed in tetracene is (138% ± 6%), comfortably surpassing the quantum efficiency limit for conventional silicon solar cells and establishing a new, scalable approach to low-cost, high-efficiency photovoltaics.
</summary>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>1D Silver Organochalcogenide Semiconductors: Color Tunable Luminescence, Polarized Emission, and Long-Range Exciton Diffusion</title>
<link href="https://hdl.handle.net/1721.1/164730" rel="alternate"/>
<author>
<name>Sakurada, Tomoaki</name>
</author>
<author>
<name>Pathoor, Nithin</name>
</author>
<author>
<name>Matsumoto, Takuma</name>
</author>
<author>
<name>Khamlue, Rattapon</name>
</author>
<author>
<name>Chatsiri, Petcharaphorn</name>
</author>
<author>
<name>Valenta, Jan</name>
</author>
<author>
<name>Kawamoto, Tadashi</name>
</author>
<author>
<name>Omagari, Shun</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<author>
<name>Paritmongkol, Watcharaphol</name>
</author>
<author>
<name>Cho, Yeongsu</name>
</author>
<author>
<name>Vacha, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/164730</id>
<updated>2026-03-08T03:40:03Z</updated>
<published>2025-10-14T00:00:00Z</published>
<summary type="text">1D Silver Organochalcogenide Semiconductors: Color Tunable Luminescence, Polarized Emission, and Long-Range Exciton Diffusion
Sakurada, Tomoaki; Pathoor, Nithin; Matsumoto, Takuma; Khamlue, Rattapon; Chatsiri, Petcharaphorn; Valenta, Jan; Kawamoto, Tadashi; Omagari, Shun; Tisdale, William A; Paritmongkol, Watcharaphol; Cho, Yeongsu; Vacha, Martin
Metal organochalcogenides (MOCs) represent a promising class of organic-inorganic hybrid semiconductors with unique light-matter interactions. Their hybrid nature enables extensive structural and optoelectronic tunability via ligand engineering. In this study, we systematically modulated the electronic properties of ligands using Cl and Me functional groups, achieving precise control over the optoelectronic properties of Ag-based MOCs. Structural analysis revealed that these MOCs adopt a one-dimensional (1D) chain structure with organic ligands surrounding a Ag-chalcogen core. Density functional theory (DFT) calculations demonstrated that MOCs exhibit characteristics of 1D semiconductors with strongly dispersive conduction and valence bands aligned along the crystal rod directions. Experimentally, the MOCs displayed bright luminescence, with peaks centered between 560 and 690 nm. The substitution of Cl with Me groups in the benzene ligands induced a red shift in both absorption and photoluminescence, corroborated by experimental and theoretical analyses. Further optical measurements indicated that the emission from the MOCs is strongly polarized along the chain directions. Notably, Se-based MOCs exhibited enhanced exciton diffusivity along the chain axis with a diffusion length of 130 nm, which is among the highest reported for covalent systems. The observed trend in carrier diffusivity among individual compounds is attributed to differences in the effective masses of the carriers, as determined by DFT calculations. Our findings offer valuable insights into the systematic structural and property tuning of hybrid semiconductors and highlight the unique characteristics of the 1D MOC family.
</summary>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Excitonic Anisotropy in Single‐Crystalline 2D Silver Phenylchalcogenides</title>
<link href="https://hdl.handle.net/1721.1/164729" rel="alternate"/>
<author>
<name>Lee, Woo Seok</name>
</author>
<author>
<name>Cho, Yeongsu</name>
</author>
<author>
<name>Posmyk, Katarzyna</name>
</author>
<author>
<name>Peksa, Paulina</name>
</author>
<author>
<name>Dyksik, Mateusz</name>
</author>
<author>
<name>Samulewicz, Nicholas</name>
</author>
<author>
<name>Plochocka, Paulina</name>
</author>
<author>
<name>Baranowski, Michał</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<id>https://hdl.handle.net/1721.1/164729</id>
<updated>2026-03-08T03:40:01Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Excitonic Anisotropy in Single‐Crystalline 2D Silver Phenylchalcogenides
Lee, Woo Seok; Cho, Yeongsu; Posmyk, Katarzyna; Peksa, Paulina; Dyksik, Mateusz; Samulewicz, Nicholas; Plochocka, Paulina; Baranowski, Michał; Kulik, Heather J; Tisdale, William A
2D materials exhibiting in‐plane anisotropy enable new applications in directional energy transport and polarized optical response. Silver phenylchalcogenides (AgEPh) – including mithrene (AgSePh), tethrene (AgTePh), and thiorene (AgSPh) – represent an exciting new addition to this family, with optical response spanning the visible to near‐UV. Here, excitonic anisotropy is predicted and characterized in this family of materials using a combination of ab initio theory and optical micro‐spectroscopy of single‐crystalline flakes. Using density functional theory and GW with the Bethe–Salpeter equation calculations, it is revealed that all AgEPh compounds exhibit anisotropic electronic band structure and host multiple delocalized excitons with in‐plane anisotropy. Room‐temperature polarization‐resolved optical micro‐spectroscopy shows that orthogonally polarized excitons with similar energy lead to nearly isotropic absorption in AgSPh, whereas energy separation between excitonic resonances in AgSePh and AgTePh leads to strong absorption and emission anisotropy. Cryogenic reflectance micro‐spectroscopy further reveals exciton fine structure in AgSePh, reconciling the discrepancies between room‐temperature experiments and theoretical predictions. Finally, it is demonstrated that the optical response of thicker AgEPh crystals is influenced by photonic effects arising from finite crystal size. Overall, this work advances the understanding of the relationship between anisotropic structure, composition, and excitonic properties in AgEPh, providing a foundation for technological integration.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revolutionize cold chain: an AI/ML driven approach to overcome capacity shortages</title>
<link href="https://hdl.handle.net/1721.1/164728" rel="alternate"/>
<author>
<name>Jackson, Ilya</name>
</author>
<author>
<name>Namdar, Jafar</name>
</author>
<author>
<name>Saénz, Maria Jesús</name>
</author>
<author>
<name>Elmquist III, Richard Augustus</name>
</author>
<author>
<name>Dávila Novoa, Luis Rodrigo</name>
</author>
<id>https://hdl.handle.net/1721.1/164728</id>
<updated>2026-03-08T03:40:12Z</updated>
<published>2025-03-19T00:00:00Z</published>
<summary type="text">Revolutionize cold chain: an AI/ML driven approach to overcome capacity shortages
Jackson, Ilya; Namdar, Jafar; Saénz, Maria Jesús; Elmquist III, Richard Augustus; Dávila Novoa, Luis Rodrigo
This research investigates how Artificial Intelligence (AI) and Machine Learning (ML) forecasting methodologies can be leveraged for cold chain capacity planning, specifically utilising Prophet and Seasonal Autoregressive Integrated Moving Average (SARIMA) models parametrised through grid search. In collaboration with Americold, the world's second-largest refrigerated logistics service provider, the study explores the challenges and opportunities in applying AI/ML techniques to complex operations covering 385 customers and a capacity of 73,296 pallet positions. We train and test several AI/ML and traditional statistical models using extensive data for every customer over 3.5 years. Based on the results, a MAPE of 5.28% was achieved at the whole-site level, and SARIMA outperformed ML models in most cases. Next, we show that developing and applying a Customer Segmentation Matrix has enabled more accurate forecasting and planning across various customer segments, addressing the issue of forecasting inaccuracies. This approach effectively reduces forecasting inaccuracies, underscoring the significance of tailoring AI/ML models for demand forecasting within the cold-chain industry. Ultimately, this research presents an AI-driven approach that transcends mere forecasting, offering a practical pathway to manage capacity in light of the constraints.
</summary>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond binary group categorization: towards a dynamic view of human groups</title>
<link href="https://hdl.handle.net/1721.1/164727" rel="alternate"/>
<author>
<name>Kish Bar-On, Kati</name>
</author>
<id>https://hdl.handle.net/1721.1/164727</id>
<updated>2026-03-08T03:39:57Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Beyond binary group categorization: towards a dynamic view of human groups
Kish Bar-On, Kati
Society is a composite of interacting people and groups. These groups play a significant role in maintaining social status, establishing group identity and social identity, and enforcing norms. As such, groups are essential for understanding human behavior. Nevertheless, the study of groups in everyday group life yields many diverse and sometimes contradicting theories of group behavior, and researchers tend to agree that we have yet to understand the emergence of groups out of aggregates of individuals. The current paper aims to shed new light on the convoluted interrelation between groups and individuals by focusing on individuals’ social identities and group categorization. It does so by exploring the dynamic nature of the self and its implications on identity and group membership, and introducing a framework recognizing the fluidity of groups and group categorization. Incorporating historical insights with contemporary theories, this paper argues for a flexible understanding of group dynamics that surpasses rigid in-group and out-group classifications, proposing instead that group affiliations exist along a continuum that reflects the ever-changing social landscape.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reparative Urban Science: Challenging the Myth of Neutrality and Crafting Data-Driven Narratives</title>
<link href="https://hdl.handle.net/1721.1/164726" rel="alternate"/>
<author>
<name>So, Wonyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/164726</id>
<updated>2026-03-08T03:40:00Z</updated>
<published>2024-05-26T00:00:00Z</published>
<summary type="text">Reparative Urban Science: Challenging the Myth of Neutrality and Crafting Data-Driven Narratives
So, Wonyoung
I offer how urban planning should approach technology within the context of systemic racism, advocating for a reparative approach to address the issues of urban technology perpetuating today’s racial inequality and hindering efforts to redress historical oppression. I identify three mechanisms – formalization, context removal and legitimization, and penalization and extraction – that illustrate how urban technology perpetuates historical inequalities, often penalizing marginalized groups under the pretext of neutrality and fairness. Then, I discuss methodologies of reparative urban science, aiming to use urban technology to challenge race-neutral ideologies and create data-driven narratives for reparations.
</summary>
<dc:date>2024-05-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>What determines EV architecture? An analysis of the most influential battery electric vehicle design decisions from market data</title>
<link href="https://hdl.handle.net/1721.1/164725" rel="alternate"/>
<author>
<name>Khan, Mumin</name>
</author>
<author>
<name>Cameron, Bruce</name>
</author>
<id>https://hdl.handle.net/1721.1/164725</id>
<updated>2026-03-08T03:39:58Z</updated>
<published>2025-08-12T00:00:00Z</published>
<summary type="text">What determines EV architecture? An analysis of the most influential battery electric vehicle design decisions from market data
Khan, Mumin; Cameron, Bruce
The penetration and variety of Battery Electric Vehicles (BEVs) in the automotive sector have been growing rapidly. While there is substantial research on hybrid ICE-battery vs. battery-only choices, little work has examined whether a dominant design for BEVs is emerging, as predicted by the innovation literature. This study provides a comprehensive exploration of BEV architectures, examining the influence of individual architectural decisions on vehicle performance and market prevalence. This study utilizes multivariate linear regression to analyze a curated dataset of global BEV models from 2022 and 2023, focusing on candidate architectural decisions such as battery cathode composition, battery voltage choice, number of motors, and drive layout. Our research aims to identify potential dominant designs by assessing their impact on performance metrics. The analysis then leverages statistical tools to evaluate the correlation between these architectural decisions and vehicle performance, using range as a primary indicator of consumer appeal. Findings from this research indicate significant variance in the adoption of specific BEV architectures, suggesting that the market has not yet consolidated down to a dominant design. We observe, however, that range is most strongly influenced by the architectural decisions for battery capacity, drive type, and motor type.
</summary>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chemical and Chemical-Mechanical Polishing of Surface Roughness on L-PBF/GRCop-42 Cu-Cr-Nb Additive Manufactured 10-GHz RF Structures</title>
<link href="https://hdl.handle.net/1721.1/164724" rel="alternate"/>
<author>
<name>Seltzman, AH</name>
</author>
<author>
<name>Wukitch, SJ</name>
</author>
<id>https://hdl.handle.net/1721.1/164724</id>
<updated>2026-03-08T03:39:53Z</updated>
<published>2025-09-30T00:00:00Z</published>
<summary type="text">Chemical and Chemical-Mechanical Polishing of Surface Roughness on L-PBF/GRCop-42 Cu-Cr-Nb Additive Manufactured 10-GHz RF Structures
Seltzman, AH; Wukitch, SJ
Laser-based powder bed fusion (L-PBF) allows additive manufacture (AM) of lower hybrid current drive (LHCD) radio-frequency (RF) launchers from Glenn Research Copper, a Cr2Nb precipitation-hardened alloy (GRCop-42) in configurations unachievable with conventional machining. Rough surfaces in AM components increase RF losses and lead to arcing in high-power vacuum RF applications. Chemical polishing, chemical-mechanical polishing, or a combination of both were utilized to planarize the internal surfaces of RF structures, resulting in surface roughness as low as Ra = 0.2 µm. Refinement in polishing techniques now enables GRCop-42 alloys (4 at. % Cr, 2 at. % Nb) to achieve similar surface roughness to GRCop-84 (8 at. % Cr, 4 at. % Nb) and equivalent cavity losses to extruded oxygen-free copper waveguides at 10 GHz.
</summary>
<dc:date>2025-09-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Name of Moses in an Egyptian Context—A Hypothetical Etymology</title>
<link href="https://hdl.handle.net/1721.1/164723" rel="alternate"/>
<author>
<name>Adair, Aaron</name>
</author>
<id>https://hdl.handle.net/1721.1/164723</id>
<updated>2026-03-08T03:40:06Z</updated>
<published>2025-01-02T00:00:00Z</published>
<summary type="text">The Name of Moses in an Egyptian Context—A Hypothetical Etymology
Adair, Aaron
The etymological origins of the name “Moses” have been unclear, but an Egyptian candidate is the most likely hypothesis. In this article, a new proposal is given that identifies the Demotic word mšꜥ as the best candidate, though it would fit the Hebrew Mōše only in the late Persian period or later. Evidence from Greek orthography and testimony from Manetho provide a stronger basis for this proposal over prior candidates. However, this results in a Hellenistic-era inclusion of “Moses” into the Exodus narrative.
</summary>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Sense of Models: Connecting Science and Math Through Decoding and Modifying Computational Models</title>
<link href="https://hdl.handle.net/1721.1/164722" rel="alternate"/>
<author>
<name>Lee, Irene A.</name>
</author>
<author>
<name>Sagartz, Mary</name>
</author>
<author>
<name>Meyer, Patricia</name>
</author>
<author>
<name>Anderson, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/164722</id>
<updated>2026-03-08T03:39:44Z</updated>
<published>2025-02-24T00:00:00Z</published>
<summary type="text">Making Sense of Models: Connecting Science and Math Through Decoding and Modifying Computational Models
Lee, Irene A.; Sagartz, Mary; Meyer, Patricia; Anderson, Emma
The Making Sense of Models (MSM) curriculum was designed to bridge math and science learning through agent-based modeling and rich computational thinking investigations that do not require teaching computer programming in middle school classrooms. The MSM curriculum supports students in the NGSS skill of reasoning about how and why a phenomenon happens. After developing decoding skills, students are able to assess the validity of a model based on comparing mechanisms in the model to what they learned about the phenomenon being modeled. In this article, the authors describe the decoding approach and how the MSM curriculum supports students’ ability to reason about scientific models and the real world.
</summary>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cycle 7 VLBI Acceptance Report</title>
<link href="https://hdl.handle.net/1721.1/164721" rel="alternate"/>
<author>
<name>Crew, Geoff</name>
</author>
<id>https://hdl.handle.net/1721.1/164721</id>
<updated>2026-03-05T18:39:37Z</updated>
<published>2020-12-09T00:00:00Z</published>
<summary type="text">Cycle 7 VLBI Acceptance Report
Crew, Geoff
This report summarizes the acceptance process for VLBI, which was carried out in early 2020 in preparation for the 2020 VLBI Campaigns. Even though the ALMA operations are suspended, the EHTC campaign has been cancelled, and the GMVA is going forward without ALMA, it is still a useful exercise to report on the Acceptance testing that was done. This is especially true since (for a variety of reasons) the testing was more extensive this past January. It has also been several years since the initial Acceptance of VLBI for Cycle 4, and new features are finally to become available in Cycle 8, so it is reasonable to capture the state of things at this time. Going forward, it has been suggested that the Acceptance be added to the normal ALMA Acceptance process. This report thus serves to detail the sort of checks that can and should be made in the future. Some Action items are also noted for the near term.
On the bright side, the system was totally ready.
This report reviews the setup and on-site checks that can be made in a stand-alone (no co-observing peers) mode. Then we present results from four VLBI sessions. For each, a CASA reduction is compared with a reduction of the VLBI data which (in three of the four cases) are correlated with participating EHTC sites.
This report was prepared for the formal acceptance of the software required for ALMA Observing Cycle 7.
Notionally it is ALMA Technical Note #20, but not published (yet) as such.
</summary>
<dc:date>2020-12-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal analog computing: Application to matrix-vector multiplication with inverse-designed metastructures</title>
<link href="https://hdl.handle.net/1721.1/164719" rel="alternate"/>
<author>
<name>Silva, Caio</name>
</author>
<author>
<name>Romano, Giuseppe</name>
</author>
<id>https://hdl.handle.net/1721.1/164719</id>
<updated>2026-02-04T03:08:34Z</updated>
<published>2026-01-29T00:00:00Z</published>
<summary type="text">Thermal analog computing: Application to matrix-vector multiplication with inverse-designed metastructures
Silva, Caio; Romano, Giuseppe
The rising computational demand of modern workloads has renewed interest in energy-efficient paradigms, such as neuromorphic and analog computing. A fundamental operation in these systems is matrix-vector multiplication (MVM), ubiquitous in signal processing and machine learning. Here, we demonstrate MVM using inverse-designed metastructures that exploit heat conduction as the signal carrier. The proposed approach is based on a generalization of effective thermal conductivity to systems with multiple input and output ports: The input signal is encoded as a set of applied temperatures, while the output is represented by the power collected at designated terminals. The metastructures are obtained via density-based topology optimization, enabled by a differentiable thermal transport solver and automatic differentiation, achieving an accuracy greater than 99% in most cases across a pool of matrices with dimensions 2×2 and 3×3. We apply this methodology—termed thermal analog computing—to realize matrices relevant to practical tasks, including the discrete Fourier transform and convolutional filters. These findings open avenues for analog information processing in thermally active environments, including temperature-gradient sensing in microelectronics and thermal control systems.
</summary>
<dc:date>2026-01-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>When competition becomes contagious: Strategic arms racing spillovers, alliance politics, and the Sino-American nuclear competition</title>
<link href="https://hdl.handle.net/1721.1/164718" rel="alternate"/>
<author>
<name>Seitz, Samuel M.</name>
</author>
<author>
<name>Ji, Elliot S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164718</id>
<updated>2026-02-04T03:08:48Z</updated>
<published>2025-08-10T00:00:00Z</published>
<summary type="text">When competition becomes contagious: Strategic arms racing spillovers, alliance politics, and the Sino-American nuclear competition
Seitz, Samuel M.; Ji, Elliot S.
The development of new conventional counterforce systems and improved missile defence systems enables non-nuclear states to directly influence the strategic nuclear balance. These dynamics increase the possibility of strategic arms racing spillovers, where arms racing in one dyad yields capabilities that threaten third parties’ arsenals and thus creates a type of security dilemma. It also increases the risk of non-nuclear allies entrapping their nuclear patrons in strategic arms racing. We illustrate this argument via the case of North and South Korea’s arms racing.
</summary>
<dc:date>2025-08-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building communities of critical inquiry in the language classroom</title>
<link href="https://hdl.handle.net/1721.1/164717" rel="alternate"/>
<author>
<name>Dessein, Eva</name>
</author>
<author>
<name>Ledford, Julian A</name>
</author>
<id>https://hdl.handle.net/1721.1/164717</id>
<updated>2026-02-04T03:08:44Z</updated>
<published>2025-10-02T00:00:00Z</published>
<summary type="text">Building communities of critical inquiry in the language classroom
Dessein, Eva; Ledford, Julian A
Addressing issues of power, difference, and social stratification is essential in language education, where systemic inequities shape classroom experiences. This study examines the design and impact of three targeted modules implemented in beginner and intermediate French courses at two U.S. institutions. Grounded in critical pedagogical principles, the modules focused on language and power, inclusive language practices, and cultural and intercultural awareness. They aimed to foster critical inquiry through individual reflection and engagement with socially relevant topics. Analysis of student reflections and survey responses indicates that the modules supported learners in critically examining how language reinforces or challenges inequities, particularly in relation to gender biases and colonial legacies. Students reported increased awareness of linguistic hierarchies, a stronger sense of agency, and deeper reflection on language’s sociopolitical dimensions. The modules also encouraged engagement with inclusive language and cultural diversity. While the interventions promoted critical awareness and personal growth, findings point to limited peer interaction and community-building. This suggests a need for more structured opportunities for dialogic learning. Overall, the study highlights the transformative potential of critical pedagogy in language education and the importance of designing inclusive curricula that prepare students to reflect on and challenge systemic inequities.
</summary>
<dc:date>2025-10-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Managing technology-related disruptions and vulnerabilities in highly automated warehouse systems: an integrative review and research agenda</title>
<link href="https://hdl.handle.net/1721.1/164716" rel="alternate"/>
<author>
<name>Rodríguez-García, Miguel</name>
</author>
<author>
<name>Kembro, Joakim Hans</name>
</author>
<author>
<name>Betts, Kellen</name>
</author>
<author>
<name>Ponce-Cueto, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/164716</id>
<updated>2026-02-04T03:08:41Z</updated>
<published>2025-09-05T00:00:00Z</published>
<summary type="text">Managing technology-related disruptions and vulnerabilities in highly automated warehouse systems: an integrative review and research agenda
Rodríguez-García, Miguel; Kembro, Joakim Hans; Betts, Kellen; Ponce-Cueto, Eva
Recent technological developments in warehousing have introduced new risks. This paper presents an integrative review that combines insights from highly automated warehouse systems (HAWS) and risk management, providing a comprehensive understanding of technology-related warehouse disruptions and vulnerabilities. We identify five major disruptions that can affect HAWS: cyberattacks, technology sabotage, technology failures, power and network outages, and human-machine interaction issues. Moreover, we identify 48 technology-related vulnerabilities across all disruptions. In particular, HAWS have become vulnerable to cyberattacks due to the increasing number of warehouse technology suppliers, greater complexity of multi-robot networks such as AMRs, reliance on cloud-based systems, and the cascading effect of cyberattacks due to higher levels of interconnectivity in HAWS networks. Our review also shows that risk management strategies in HAWS are unevenly covered in the literature. In response, we propose a research agenda with 17 pathways aimed at enhancing prevention, detection, mitigation, and recovery strategies for HAWS. Managers also benefit from the identified disruptions and vulnerabilities, as they serve as a reference point for understanding their specific technology-related risks in HAWS. In addition, managers can use our review of current risk management practices as a benchmark and our research agenda to identify areas for further development.
</summary>
<dc:date>2025-09-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preliminary Investigation of Gamma Radiation on the Chemical and Physical Characteristics of an Organic Coolant</title>
<link href="https://hdl.handle.net/1721.1/164715" rel="alternate"/>
<author>
<name>Vasquez, Angel</name>
</author>
<author>
<name>Seshadri, Arunkumar</name>
</author>
<author>
<name>Shirvan, Koroush</name>
</author>
<author>
<name>Buongiorno, Jacopo</name>
</author>
<id>https://hdl.handle.net/1721.1/164715</id>
<updated>2026-02-04T03:08:46Z</updated>
<published>2025-12-05T00:00:00Z</published>
<summary type="text">Preliminary Investigation of Gamma Radiation on the Chemical and Physical Characteristics of an Organic Coolant
Vasquez, Angel; Seshadri, Arunkumar; Shirvan, Koroush; Buongiorno, Jacopo
Organic-cooled reactor concepts offer potential advantages over traditional light water reactors, including operation at elevated temperatures and reduced pressures. However, radiation-induced degradation of organic coolants remains a critical concern requiring thorough investigation. This study examines the effects of gamma irradiation (1-MGy dose) on Dowtherm A (27% biphenyl, 73% diphenyl ether) under varying atmospheric conditions (ambient air versus argon) and temperatures (room temperature versus 250°C). Chemical characterization using Fourier transform infrared spectroscopy, ultraviolet-visible spectroscopy (UV-Vis), and gas chromatography-mass spectrometry revealed the formation of higher molecular weight byproducts, including terphenyls and quaterphenyls, along with notable biphenyl degradation. Physical property measurements using differential scanning calorimetry, rheometry, and thermal conductivity analysis demonstrated significant changes in the thermophysical properties, including decreased heat capacity and viscosity, with increased thermal conductivity observed under argon irradiation conditions. Pronounced photodarkening occurred in all the irradiated samples, with atmospheric conditions significantly influencing degradation pathways. UV-Vis analysis indicated that oxygen presence during irradiation suppresses certain chromophoric species formation. These findings provide crucial insights into radiation-induced degradation mechanisms and their impact on coolant performance, informing future organic coolant system design and optimization strategies for advanced reactor applications.
</summary>
<dc:date>2025-12-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Housing data politics in the United States: Inequitable open data, informal networks, and strategic neutrality</title>
<link href="https://hdl.handle.net/1721.1/164714" rel="alternate"/>
<author>
<name>Aizman, Asya</name>
</author>
<author>
<name>So, Wonyoung</name>
</author>
<author>
<name>Navalkha, Chenab</name>
</author>
<author>
<name>D’Ignazio, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/164714</id>
<updated>2026-02-04T03:08:43Z</updated>
<published>2025-09-11T00:00:00Z</published>
<summary type="text">Housing data politics in the United States: Inequitable open data, informal networks, and strategic neutrality
Aizman, Asya; So, Wonyoung; Navalkha, Chenab; D’Ignazio, Catherine
Open housing data—property transactions, eviction filings, 311 complaints, and rental registries—have been a crucial resource for policymaking and real estate professionals. Meanwhile, housing data actors increasingly collect, analyze, and use data to address housing inequality, including efforts related to eviction prevention and land use reform, among others. This paper examines the motivations and practices of grassroots and institutional housing data actors. From a field scan of 67 entities engaged in housing data work across 12 U.S. states and 18 municipalities, we conducted 18 in-depth interviews to explore how housing data actors operate, their political goals, and data processes. We put forward a two-axis framework that positions housing data actors according to their organizational structure (institutional/grassroots) and their stated data ideology (neutral/political). This framework contributes to understanding how different actors navigate complex issues such as embedded power dynamics and ethics in housing data. This two-axis view supplies a vocabulary for tracing how normative commitments and material constraints shape housing data pipelines and, ultimately, housing outcomes across the broader housing information ecosystem.
</summary>
<dc:date>2025-09-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unconstrained Sovereignty: Delegation of Authority and Reversibility</title>
<link href="https://hdl.handle.net/1721.1/164713" rel="alternate"/>
<author>
<name>Grinberg, Mariya</name>
</author>
<id>https://hdl.handle.net/1721.1/164713</id>
<updated>2026-02-04T03:08:49Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Unconstrained Sovereignty: Delegation of Authority and Reversibility
Grinberg, Mariya
The concept of sovereignty shapes our understanding of the world. Yet our current understanding of sovereignty conflates delegation of authority with loss of sovereignty. Delegation is relatively cheap, quick, and leads to an assured outcome; it is an affirmation of sovereignty. Use of force, however, is required to regain lost sovereignty. I propose a definition of sovereignty that draws a clear distinction between sovereignty and delegated authority. Adopting this definition shows that sovereignty applies across time and space; it is indivisible; institutions do not place permanent constraints on supreme authority; and popular sovereignty is not a well-grounded concept.
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concurrent Balanced Augmented Trees</title>
<link href="https://hdl.handle.net/1721.1/164712" rel="alternate"/>
<author>
<name>Wrench, Evan</name>
</author>
<author>
<name>Singh, Ajay</name>
</author>
<author>
<name>Roh, Younghun</name>
</author>
<author>
<name>Fatourou, Panagiota</name>
</author>
<author>
<name>Jayanti, Siddhartha</name>
</author>
<author>
<name>Ruppert, Eric</name>
</author>
<author>
<name>Wei, Yuanhao</name>
</author>
<id>https://hdl.handle.net/1721.1/164712</id>
<updated>2026-02-03T05:03:30Z</updated>
<published>2026-01-28T00:00:00Z</published>
<summary type="text">Concurrent Balanced Augmented Trees
Wrench, Evan; Singh, Ajay; Roh, Younghun; Fatourou, Panagiota; Jayanti, Siddhartha; Ruppert, Eric; Wei, Yuanhao
Augmentation makes search trees tremendously more versatile, allowing them to support efficient aggregation queries, order-statistic queries, and range queries in addition to insertion, deletion, and lookup. In this paper, we present the first lock-free augmented balanced search tree supporting generic augmentation functions. Our algorithmic ideas build upon a recent augmented unbalanced search tree presented by Fatourou and Ruppert [DISC, 2024]. We implement both data structures, solving some memory reclamation challenges in the process, and provide an experimental performance analysis of them. We also present optimized versions of our balanced tree that use delegation to achieve better scalability and performance (by more than 2x in most workloads). Our experiments show that our augmented balanced tree completes updates 2.2 to 30 times faster than the unbalanced augmented tree, and outperforms unaugmented trees by up to several orders of magnitude on 120 threads.
PPoPP ’26, Sydney, NSW, Australia
</summary>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Intelligent Agents with Neuro-Symbolic Concepts</title>
<link href="https://hdl.handle.net/1721.1/164711" rel="alternate"/>
<author>
<name>Mao, Jiayuan</name>
</author>
<author>
<name>Tenenbaum, Joshua</name>
</author>
<author>
<name>Wu, Jiajun</name>
</author>
<id>https://hdl.handle.net/1721.1/164711</id>
<updated>2026-02-03T05:03:25Z</updated>
<published>2026-01-28T00:00:00Z</published>
<summary type="text">Building Intelligent Agents with Neuro-Symbolic Concepts
Mao, Jiayuan; Tenenbaum, Joshua; Wu, Jiajun
This article presents a concept-centric paradigm for building agents that can learn continually and reason flexibly. The concept-centric agent utilizes a vocabulary of neuro-symbolic concepts. These concepts, such as object, relation, and action concepts, are grounded on sensory inputs and actuation outputs. They are also compositional, allowing for the creation of novel concepts through their structural combination. To facilitate learning and reasoning, the concepts are typed and represented using a combination of symbolic programs and neural network representations. Leveraging such neuro-symbolic concepts, the agent can efficiently learn and recombine them to solve various tasks across different domains, including 2D images, videos, 3D scenes, and robotic manipulation tasks. This concept-centric framework offers several advantages, including data efficiency, compositional generalization, continual learning, and zero-shot transfer.
</summary>
<dc:date>2026-01-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundational Verification of Running-Time Bounds for Interactive Programs</title>
<link href="https://hdl.handle.net/1721.1/164710" rel="alternate"/>
<author>
<name>Tockman, Andy</name>
</author>
<author>
<name>Singh, Pratap</name>
</author>
<author>
<name>Erbsen, Andres</name>
</author>
<author>
<name>Gruetter, Samuel</name>
</author>
<author>
<name>Chlipala, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/164710</id>
<updated>2026-02-03T05:03:33Z</updated>
<published>2026-01-08T00:00:00Z</published>
<summary type="text">Foundational Verification of Running-Time Bounds for Interactive Programs
Tockman, Andy; Singh, Pratap; Erbsen, Andres; Gruetter, Samuel; Chlipala, Adam
Some important domains of software demand concrete bounds on how long functions may run, for instance for real-time cyberphysical systems where missed deadlines may damage industrial machinery. Such programs may interact with external devices throughout execution, where time deadlines ought to depend on, for instance, sensor readings (e.g. we only scramble to close a valve immediately when a sensor reports that a tank is about to overflow). We present the first software-development toolchain that delivers first-principles proofs of meaningful time bounds for interactive machine code, while allowing all per-application programming and verification to happen at the source-code level. We allow C-like programs to be proved against separation-logic specifications that also constrain their running time, and such proofs are composed with verification of a compiler to RISC-V machine code. All components are implemented and proved inside the Rocq proof assistant, producing final theorems whose statements depend only on machine-language formal semantics and some elementary specification constructions for describing running time. As a capstone case study, we extended a past verification (of a real microcontroller-based cyberphysical system) to bound time between arrival of network packets and actuation of an attached device.
CPP ’26, Rennes, France
</summary>
<dc:date>2026-01-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network-RBV for Critical Minerals: How Standards, Permits, and Licensing Shape Midstream Bottlenecks</title>
<link href="https://hdl.handle.net/1721.1/164709" rel="alternate"/>
<author>
<name>Kegenbekov, Zhandos</name>
</author>
<author>
<name>Alipova, Alima</name>
</author>
<author>
<name>Jackson, Ilya</name>
</author>
<id>https://hdl.handle.net/1721.1/164709</id>
<updated>2026-02-03T05:04:11Z</updated>
<published>2026-01-20T00:00:00Z</published>
<summary type="text">Network-RBV for Critical Minerals: How Standards, Permits, and Licensing Shape Midstream Bottlenecks
Kegenbekov, Zhandos; Alipova, Alima; Jackson, Ilya
Critical mineral supply chains underpin electric mobility, power electronics, clean hydrogen, and advanced manufacturing. Drawing on the resource-based view (RBV), the relational view, and dynamic capabilities, we conceptualize advantage not as ownership of ore bodies but as orchestration of multi-tier resource systems: upstream access, midstream processing know-how, standards and permits, and durable inter-organizational ties. In a world of high concentration at key stages (refining, separation, engineered materials), full “decoupling” is economically costly and technologically constraining. We argue for structured cooperation among the United States, European Union, China, and other producers and consumers, combined with selective domestic capability building for bona fide security needs. Methodologically, we conduct a structured conceptual synthesis integrating RBV, relational view, dynamic capabilities, and network-of-network research, combined with a structured comparative policy analysis of U.S./EU/Chinese instruments anchored in official documents. We operationalize the argument via technology–material dependency maps that identify midstream bottlenecks and the policy/standard levers most likely to expand qualified, compliant capacity.
</summary>
<dc:date>2026-01-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>REEV SENSE IMUs for Spatiotemporal Gait Analysis in Post-Stroke Patients: Validation Against Optical Motion Capture</title>
<link href="https://hdl.handle.net/1721.1/164708" rel="alternate"/>
<author>
<name>Marsan, Thibault</name>
</author>
<author>
<name>Clauzade, Sacha</name>
</author>
<author>
<name>Zhang, Xiang</name>
</author>
<author>
<name>Grandin, Nicolas</name>
</author>
<author>
<name>Urman, Tatiana</name>
</author>
<author>
<name>Linton, Evan</name>
</author>
<author>
<name>Sibachir, Samy</name>
</author>
<author>
<name>Ricciardi, Catherine E.</name>
</author>
<author>
<name>Temporelli, Robin</name>
</author>
<id>https://hdl.handle.net/1721.1/164708</id>
<updated>2026-02-03T05:04:02Z</updated>
<published>2026-01-18T00:00:00Z</published>
<summary type="text">REEV SENSE IMUs for Spatiotemporal Gait Analysis in Post-Stroke Patients: Validation Against Optical Motion Capture
Marsan, Thibault; Clauzade, Sacha; Zhang, Xiang; Grandin, Nicolas; Urman, Tatiana; Linton, Evan; Sibachir, Samy; Ricciardi, Catherine E.; Temporelli, Robin
Objective gait assessment is essential for post-stroke rehabilitation monitoring, yet optical motion capture systems remain inaccessible to most clinical settings due to cost and infrastructure constraints. This study assessed the validity of the REEV SENSE IMU for measuring spatiotemporal gait parameters in post-stroke individuals and evaluated assistive device effects on measurement accuracy. Twenty chronic post-stroke participants were enrolled, and fourteen completed the study (ten without an assistive device, four using a cane) after applying pre-defined exclusion criteria (walking speed &lt;0.28 m/s, n = 6). Participants walked at self-selected speed while simultaneously being recorded by REEV SENSE IMUs and optical motion capture. Spatiotemporal parameters from matched heel strikes were compared using intraclass correlation coefficients (ICC), mean relative error (MRE), and Bland–Altman analysis. Temporal parameters demonstrated excellent reliability: contact time (ICC 0.96–0.99, MRE 2.77–5.45%), stride duration (ICC 0.95–0.99, MRE 2.57–2.62%), and cadence (ICC 0.98–0.99, MRE 1.80–1.93%). Spatial parameters showed greater variability, with stride length degrading substantially in slow-walking conditions (Cane group: ICC 0.76, MRE 8.60%). REEV SENSE provides reliable temporal parameter measurement comparable to commercial systems, positioning it as a practical tool for clinical gait monitoring in post-stroke rehabilitation. However, spatial parameter accuracy requires cautious interpretation in slow-walking regimes, necessitating independent validation when clinical decisions depend on precise stride length estimates.
</summary>
<dc:date>2026-01-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Who Am I? Eyebrow Follicles Minimize Donor-Derived DNA for Germline Testing After Hematopoietic Stem Cell Transplantation</title>
<link href="https://hdl.handle.net/1721.1/164707" rel="alternate"/>
<author>
<name>Mertens, Matthias</name>
</author>
<author>
<name>Sadlo, Mona</name>
</author>
<author>
<name>Kühl, Jörn-Sven</name>
</author>
<author>
<name>Metzeler, Klaus</name>
</author>
<author>
<name>Zschenderlein, Louisa</name>
</author>
<author>
<name>Edelmann, Jeanett</name>
</author>
<author>
<name>Lehmann, Claudia</name>
</author>
<author>
<name>Thull, Sarah</name>
</author>
<author>
<name>Karakaya, Mert</name>
</author>
<author>
<name>Velmans, Clara</name>
</author>
<author>
<name>Tumewu, Theresa</name>
</author>
<author>
<name>Böhme, Matthias</name>
</author>
<author>
<name>Klötzer, Christina</name>
</author>
<author>
<name>Weigert, Anne</name>
</author>
<author>
<name>Vucinic, Vladan</name>
</author>
<author>
<name>Hentschel, Julia</name>
</author>
<author>
<name>Mertens, Mareike</name>
</author>
<id>https://hdl.handle.net/1721.1/164707</id>
<updated>2026-02-03T05:03:59Z</updated>
<published>2026-01-11T00:00:00Z</published>
<summary type="text">Who Am I? Eyebrow Follicles Minimize Donor-Derived DNA for Germline Testing After Hematopoietic Stem Cell Transplantation
Mertens, Matthias; Sadlo, Mona; Kühl, Jörn-Sven; Metzeler, Klaus; Zschenderlein, Louisa; Edelmann, Jeanett; Lehmann, Claudia; Thull, Sarah; Karakaya, Mert; Velmans, Clara; Tumewu, Theresa; Böhme, Matthias; Klötzer, Christina; Weigert, Anne; Vucinic, Vladan; Hentschel, Julia; Mertens, Mareike
Germline genetic testing plays a critical role in diagnosing inherited predispositions and increasingly guides therapeutic and surveillance choices—but becomes technically challenging after allogeneic hematopoietic stem cell transplantation (HSCT), when donor-derived DNA contaminates host tissues. To address this, we compared donor-derived DNA across three accessible tissues—buccal swab, nail, and eyebrow follicles—in recipients after hematopoietic stem cell transplantation using two orthogonal assays (34-SNP next-generation sequencing and a 27-marker short tandem repeat panel) and modeled clinical covariates that influence chimerism. Eyebrow follicles showed consistently low donor DNA (median 1% by NGS; 3% by STR) whereas buccal swabs and nails carried substantially higher donor fractions (+25 and +22 percentage points versus eyebrow, respectively; both p &lt; 0.01). Across methods, STR yielded on average ≈6 percentage points higher donor fractions than NGS at low-level chimerism. Several transplant covariates correlated with chimerism: matched-related donors and a perfect HLA match (10/10) were each associated with lower donor DNA (≈12–14 and 15–20 percentage points, respectively); longer times since hematopoietic stem cell transplantation correlated with lower levels for nail samples, and donor–recipient sex match correlated with higher donor DNA (~7–8 percentage points). Even low-level chimerism can distort germline variant interpretation. We propose a pragmatic protocol for post-hematopoietic stem cell transplantation germline testing that prioritizes eyebrow follicles as the default tissue. An SNP-based quality control assay is used to flag unsafe donor fractions (≥ 5–10%) before comprehensive germline analysis, reducing the risk that chimeric donor DNA distorts germline variant interpretation.
</summary>
<dc:date>2026-01-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of settlements on the stresses in building frames</title>
<link href="https://hdl.handle.net/1721.1/164706" rel="alternate"/>
<author>
<name>Granberg, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164706</id>
<updated>2026-02-03T04:59:35Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">The effect of settlements on the stresses in building frames
Granberg, Robert J.
Thesis: B.S., Massachusetts Institute of Technology, Department of Building Engineering and Construction, 1935; Includes bibliographical references.
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Irradiation grafting of styrene onto dacron fibers and films</title>
<link href="https://hdl.handle.net/1721.1/164705" rel="alternate"/>
<author>
<name>Schnetzer, L. J.</name>
</author>
<author>
<name>Hendren, J. W.</name>
</author>
<id>https://hdl.handle.net/1721.1/164705</id>
<updated>2026-02-03T04:59:32Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">Irradiation grafting of styrene onto dacron fibers and films
Schnetzer, L. J.; Hendren, J. W.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1959; Includes bibliographical references (leaves 43-44).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of sound transmission irregularity in a one dimensional enclosure</title>
<link href="https://hdl.handle.net/1721.1/164704" rel="alternate"/>
<author>
<name>Foster, Isaac C.</name>
</author>
<id>https://hdl.handle.net/1721.1/164704</id>
<updated>2026-02-03T04:59:29Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">An investigation of sound transmission irregularity in a one dimensional enclosure
Foster, Isaac C.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1949
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deposition and characterization of very low pressure CVD silicon/silicon-germanium heteroepitaxial structures</title>
<link href="https://hdl.handle.net/1721.1/164703" rel="alternate"/>
<author>
<name>Tsai, Curtis.</name>
</author>
<id>https://hdl.handle.net/1721.1/164703</id>
<updated>2026-02-03T03:48:14Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Deposition and characterization of very low pressure CVD silicon/silicon-germanium heteroepitaxial structures
Tsai, Curtis.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Includes bibliographical references (leaves 135-146).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An experimental study of the law of parity conservation in electromagnetic interactions.</title>
<link href="https://hdl.handle.net/1721.1/164702" rel="alternate"/>
<author>
<name>Hegblom, Edwin Richard.</name>
</author>
<id>https://hdl.handle.net/1721.1/164702</id>
<updated>2026-02-03T03:48:30Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">An experimental study of the law of parity conservation in electromagnetic interactions.
Hegblom, Edwin Richard.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of a dynamic sales call policy model.</title>
<link href="https://hdl.handle.net/1721.1/164701" rel="alternate"/>
<author>
<name>Karash, Richard Ivan.</name>
</author>
<id>https://hdl.handle.net/1721.1/164701</id>
<updated>2026-02-03T04:59:26Z</updated>
<published>1968-01-01T00:00:00Z</published>
<summary type="text">Analysis of a dynamic sales call policy model.
Karash, Richard Ivan.
Thesis: B.S., Massachusetts Institute of Technology, Department of Physics, 1968; Bibliography: leaf 97.
</summary>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests</title>
<link href="https://hdl.handle.net/1721.1/164700" rel="alternate"/>
<author>
<name>Tan, Lip-Bu.</name>
</author>
<id>https://hdl.handle.net/1721.1/164700</id>
<updated>2026-02-03T04:58:28Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests
Tan, Lip-Bu.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative tests of the Boston Elevated Co's surface cars</title>
<link href="https://hdl.handle.net/1721.1/164699" rel="alternate"/>
<author>
<name>Jones, Philip C.</name>
</author>
<author>
<name>Katsainos, Nicholas M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164699</id>
<updated>2026-02-03T04:59:24Z</updated>
<published>1912-01-01T00:00:00Z</published>
<summary type="text">Comparative tests of the Boston Elevated Co's surface cars
Jones, Philip C.; Katsainos, Nicholas M.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1912
</summary>
<dc:date>1912-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rules for ring closure and aspects of organolithium chemistry</title>
<link href="https://hdl.handle.net/1721.1/164698" rel="alternate"/>
<author>
<name>Dupont, William Alan.</name>
</author>
<id>https://hdl.handle.net/1721.1/164698</id>
<updated>2026-02-03T03:48:24Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Rules for ring closure and aspects of organolithium chemistry
Dupont, William Alan.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemistry, 1980; Vita.; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dracut nickel ore ; Geology and concentration, ore no. 2592</title>
<link href="https://hdl.handle.net/1721.1/164697" rel="alternate"/>
<author>
<name>Burton, Eugene.</name>
</author>
<author>
<name>Spalding, William Livingston.</name>
</author>
<id>https://hdl.handle.net/1721.1/164697</id>
<updated>2026-02-03T04:58:59Z</updated>
<published>1905-01-01T00:00:00Z</published>
<summary type="text">Dracut nickel ore ; Geology and concentration, ore no. 2592
Burton, Eugene.; Spalding, William Livingston.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1905
</summary>
<dc:date>1905-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation</title>
<link href="https://hdl.handle.net/1721.1/164696" rel="alternate"/>
<author>
<name>Smith, Mathew D. (Mathew Darin)</name>
</author>
<id>https://hdl.handle.net/1721.1/164696</id>
<updated>2026-02-03T04:58:24Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation
Smith, Mathew D. (Mathew Darin)
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 1997; Includes bibliographical references (leaves 43-45).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The crystallization of sucrose</title>
<link href="https://hdl.handle.net/1721.1/164695" rel="alternate"/>
<author>
<name>Brown, Ernest K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164695</id>
<updated>2026-02-03T04:58:17Z</updated>
<published>1929-01-01T00:00:00Z</published>
<summary type="text">The crystallization of sucrose
Brown, Ernest K.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1929; Includes bibliographical references (leaf 81).
</summary>
<dc:date>1929-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Circuits Are Just a Phase</title>
<link href="https://hdl.handle.net/1721.1/164694" rel="alternate"/>
<author>
<name>Heunen, Chris</name>
</author>
<author>
<name>Lemonnier, Louis</name>
</author>
<author>
<name>McNally, Christopher</name>
</author>
<author>
<name>Rice, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/164694</id>
<updated>2026-02-03T05:03:32Z</updated>
<published>2026-01-08T00:00:00Z</published>
<summary type="text">Quantum Circuits Are Just a Phase
Heunen, Chris; Lemonnier, Louis; McNally, Christopher; Rice, Alex
Quantum programs today are written at a low level of abstraction (quantum circuits akin to assembly languages), and the unitary parts of even advanced quantum programming languages essentially function as circuit description languages. This state of affairs impedes scalability, clarity, and support for higher-level reasoning. More abstract and expressive quantum programming constructs are needed.&#13;
&#13;
To this end, we introduce a simple syntax for generating unitaries from "just a phase": we combine a (global) phase operation that captures phase shifts with a quantum analogue of the "if let" construct that captures subspace selection via pattern matching. This minimal language lifts the focus from gates to eigendecomposition, conjugation, and controlled unitaries: common building blocks in quantum algorithm design.&#13;
&#13;
We demonstrate the expressive power of our language in several ways. First, we establish that our representation is universal by deriving a universal quantum gate set. Second, we show that important quantum algorithms can be expressed naturally and concisely, including Grover's search algorithm, Hamiltonian simulation, the Quantum Fourier Transform, Quantum Signal Processing, and the Quantum Eigenvalue Transformation. Furthermore, we give a clean denotational semantics grounded in categorical quantum mechanics. Finally, we implement a prototype compiler that efficiently translates terms of our language to quantum circuits, and prove that it is sound with respect to these semantics. Collectively, these contributions show that this construct offers a principled and practical step toward more abstract and structured quantum programming.
</summary>
<dc:date>2026-01-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Digital Engineering Framework for Piston Pin Bearings via Multi-Physics Thermo-Elasto-Hydrodynamic Modeling</title>
<link href="https://hdl.handle.net/1721.1/164693" rel="alternate"/>
<author>
<name>Shu, Zhiyuan</name>
</author>
<author>
<name>Tian, Tian</name>
</author>
<id>https://hdl.handle.net/1721.1/164693</id>
<updated>2026-02-03T05:04:01Z</updated>
<published>2026-01-10T00:00:00Z</published>
<summary type="text">A Digital Engineering Framework for Piston Pin Bearings via Multi-Physics Thermo-Elasto-Hydrodynamic Modeling
Shu, Zhiyuan; Tian, Tian
The piston pin operates under severe mechanical and thermal conditions, making accurate lubrication prediction essential for engine durability. This study presents a comprehensive digital engineering framework for piston pin bearings, built upon a fully coupled thermo-elasto-hydrodynamic (TEHD) formulation. The framework integrates: (1) a Reynolds-equation hydrodynamic solver with temperature-/pressure-dependent viscosity and cavitation; (2) elastic deformation obtained from FEA (finite element analysis)-based compliance matrices; (3) a break-in module that iteratively adjusts surface profiles before steady-state simulation; (4) a three-body heat transfer model resolving heat conduction, convection, and solid–liquid interfacial heat exchange. Applied to a heavy-duty diesel engine, the framework reproduces experimentally observed behaviors, including bottom-edge rounding at the small end and the slow unidirectional drift of the floating pin. By integrating multi-physics modeling with design-level flexibility, this work aims to provide a robust digital twin for the piston-pin system, enabling virtual diagnostics, early-stage failure prediction, and data-driven design optimization for engine development.
</summary>
<dc:date>2026-01-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Wafold: Curvature-Driven Termination and Dimensional Compression in Black Holes</title>
<link href="https://hdl.handle.net/1721.1/164692" rel="alternate"/>
<author>
<name>Viaña, Javier</name>
</author>
<id>https://hdl.handle.net/1721.1/164692</id>
<updated>2026-02-03T05:04:13Z</updated>
<published>2025-12-23T00:00:00Z</published>
<summary type="text">The Wafold: Curvature-Driven Termination and Dimensional Compression in Black Holes
Viaña, Javier
This work explores a geometric description of black holes in which spacetime terminates on a curvature-triggered hypersurface rather than extending to an interior singularity. We study the implications of a scenario in which, upon reaching a critical curvature threshold, the three-dimensional spatial geometry compresses into a thin, closed boundary identified here as the wafold. Beyond this, the manifold would no longer continue, and all mass–energy and information would be confined to the hypersurface itself. This framework combines two well-explored paths: (1) curvature-driven geometric compression, in which extreme curvature forces the bulk degrees of freedom to become supported on a thin hypersurface (without altering the underlying dimensionality of spacetime), and (2) the motivation underlying the holographic principle, namely that black-hole entropy scales with surface area rather than volume, suggesting that information is governed by a boundary geometry rather than a bulk volume. We elaborate a dimensional conversion law that would be required to describe the collapse of spatial volume into surface area as a conserved flux of geometric capacity across the wafold, and we analyze the resulting consequences of treating this hypersurface as the terminal boundary of the manifold.
</summary>
<dc:date>2025-12-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of an Automated Thermal Imaging Device for Lower Limb Prosthetic Applications</title>
<link href="https://hdl.handle.net/1721.1/164691" rel="alternate"/>
<author>
<name>Pizarro, Daniel</name>
</author>
<author>
<name>Huegel, Joel C.</name>
</author>
<author>
<name>Diaz, Elias</name>
</author>
<author>
<name>Alemon, Beatriz</name>
</author>
<author>
<name>Herr, Hugh</name>
</author>
<author>
<name>Felix-Herran, Luis C.</name>
</author>
<id>https://hdl.handle.net/1721.1/164691</id>
<updated>2026-02-03T05:04:10Z</updated>
<published>2025-12-18T00:00:00Z</published>
<summary type="text">Design and Implementation of an Automated Thermal Imaging Device for Lower Limb Prosthetic Applications
Pizarro, Daniel; Huegel, Joel C.; Diaz, Elias; Alemon, Beatriz; Herr, Hugh; Felix-Herran, Luis C.
Since elevated temperature and humidity may occur at the prosthetic socket–skin interface, it is essential to collect thermal data from the residual limb, as this information serves as an indicator of adverse effects such as irritation, postural problems, and significant damage to health. These data are obtained non-invasively through the execution of a thermal imaging (TI) procedure. However, the precision and repeatability of a TI procedure rely significantly on its execution technique. This work presents the design and implementation of a mechatronic device that automates a thermal imaging technique for lower-limb prosthetics evaluation. The proposed system improves data acquisition consistency by reducing execution time and minimizing human error, thereby enhancing the reproducibility and reliability of thermal measurements. The introduced device, the Thermal Imaging Booth, offers an automated solution for TI standardization in clinical and research settings. By minimizing inconsistencies, this system improves the diagnostic potential of thermography, facilitating its adoption in biomedical applications.
</summary>
<dc:date>2025-12-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peepco: Batch-Based Consistency Optimization</title>
<link href="https://hdl.handle.net/1721.1/164690" rel="alternate"/>
<author>
<name>Kuraj, Ivan</name>
</author>
<author>
<name>Feser, John</name>
</author>
<author>
<name>Polikarpova, Nadia</name>
</author>
<author>
<name>Solar-Lezama, Armando</name>
</author>
<id>https://hdl.handle.net/1721.1/164690</id>
<updated>2026-02-01T07:32:05Z</updated>
<published>2025-04-09T00:00:00Z</published>
<summary type="text">Peepco: Batch-Based Consistency Optimization
Kuraj, Ivan; Feser, John; Polikarpova, Nadia; Solar-Lezama, Armando
We present batch-based consistency, a new approach for consistency optimization that allows programmers to specialize consistency with application-level integrity properties. We implement the approach with a two-step process: we statically infer optimal consistency requirements for executions of bounded sets of operations, and then use the inferred requirements to parameterize a new distributed protocol that relaxes operation reordering at run time when it is safe to do so. Our approach supports standard notions of consistency. We implement batch-based consistency in Peepco, demonstrate its expressiveness for partial data replication, and examine Peepco’s run-time performance impact in different settings.
</summary>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finch: Sparse and Structured Tensor Programming with Control Flow</title>
<link href="https://hdl.handle.net/1721.1/164689" rel="alternate"/>
<author>
<name>Ahrens, Willow</name>
</author>
<author>
<name>Collin, Teodoro</name>
</author>
<author>
<name>Patel, Radha</name>
</author>
<author>
<name>Deeds, Kyle</name>
</author>
<author>
<name>Hong, Changwan</name>
</author>
<author>
<name>Amarasinghe, Saman</name>
</author>
<id>https://hdl.handle.net/1721.1/164689</id>
<updated>2026-02-01T07:31:58Z</updated>
<published>2025-04-09T00:00:00Z</published>
<summary type="text">Finch: Sparse and Structured Tensor Programming with Control Flow
Ahrens, Willow; Collin, Teodoro; Patel, Radha; Deeds, Kyle; Hong, Changwan; Amarasinghe, Saman
From FORTRAN to NumPy, tensors have revolutionized how we express computation. However, tensors in these and almost all other prominent systems can only handle dense rectilinear integer grids. Real-world tensors often contain underlying structure, such as sparsity, runs of repeated values, or symmetry. Support for structured data is fragmented and incomplete: existing frameworks limit the tensor structures and program control flow they support to simplify the problem.&#13;
&#13;
In this work, we propose a new programming language, Finch, which supports both flexible control flow and diverse data structures. Finch facilitates a programming model which resolves the challenges of computing over structured tensors by combining control flow and data structures into a common representation where they can be co-optimized. Finch automatically specializes control flow to data so that performance engineers can focus on experimenting with many algorithms. Finch supports a familiar programming language of loops, statements, ifs, breaks, etc., over a wide variety of tensor structures, such as sparsity, run-length-encoding, symmetry, triangles, padding, or blocks. Finch reliably utilizes the key properties of structure, such as structural zeros, repeated values, or clustered non-zeros. We show that this leads to dramatic speedups in operations such as SpMV and SpGEMM, image processing, and graph analytics.
</summary>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smooth, Integrated Proofs of Cryptographic Constant Time for Nondeterministic Programs and Compilers</title>
<link href="https://hdl.handle.net/1721.1/164688" rel="alternate"/>
<author>
<name>Conoly, Owen</name>
</author>
<author>
<name>Erbsen, Andres</name>
</author>
<author>
<name>Chlipala, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/164688</id>
<updated>2026-02-01T07:32:16Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Smooth, Integrated Proofs of Cryptographic Constant Time for Nondeterministic Programs and Compilers
Conoly, Owen; Erbsen, Andres; Chlipala, Adam
Formal verification of software and compilers has been used to rule out large classes of security-critical issues, but risk of unintentional information leakage has received much less consideration. It is a key requirement for formal specifications to leave some details of a system's behavior unspecified so that future implementation changes can be accommodated, and yet it is nonetheless expected that these choices would not be made based on confidential information the system handles. This paper formalizes that notion using omnisemantics and plain single-copy assertions, giving for the first time a specification of what it means for a nondeterministic program to be constant-time or more generally to avoid leaking (a part of) its inputs. We use this theory to prove data-leak-free execution of core cryptographic routines compiled from Bedrock2 C to RISC-V machine code, showing that the smooth specification and proof experience omnisemantics provides for nondeterminism extends to constant-time properties in the same setting. We also study variants of the key program-compiler contract, highlighting pitfalls of tempting simplifications and subtle consequences of how inputs to nondeterministic choices are constrained. Our results are backed by modular program-logic and compiler-correctness theorems, and they integrate into a neat end-to-end theorem in the Coq proof assistant.
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>NeuroChat: A Neuroadaptive AI Chatbot for Customizing Learning Experiences</title>
<link href="https://hdl.handle.net/1721.1/164687" rel="alternate"/>
<author>
<name>Baradari, Dünya</name>
</author>
<author>
<name>Kosmyna, Nataliya</name>
</author>
<author>
<name>Petrov, Oscar</name>
</author>
<author>
<name>Kaplun, Rebecah</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164687</id>
<updated>2026-02-01T07:32:11Z</updated>
<published>2025-07-08T00:00:00Z</published>
<summary type="text">NeuroChat: A Neuroadaptive AI Chatbot for Customizing Learning Experiences
Baradari, Dünya; Kosmyna, Nataliya; Petrov, Oscar; Kaplun, Rebecah; Maes, Pattie
Generative AI is reshaping education by enabling personalized, on-demand learning experiences. However, current AI systems lack awareness of the learner’s cognitive state, limiting their adaptability. In parallel, electroencephalography (EEG)-based neuroadaptive systems have shown promise in enhancing engagement through real-time physiological feedback. This paper introduces NeuroChat, a neuroadaptive AI tutor that integrates real-time EEG-based engagement tracking with a large language model to adapt its conversational responses. By continuously monitoring learners’ cognitive engagement, NeuroChat dynamically adjusts content complexity, tone, and response style in a closed-loop interaction. In a within-subjects study (n = 24), NeuroChat significantly increased both EEG-measured and self-reported engagement compared to a non-adaptive chatbot. However, no significant differences in short-term learning outcomes were observed. These findings demonstrate the feasibility of real-time brain–AI interaction for education and highlight opportunities for deeper personalization, longer-term adaptation, and richer learning assessment in future neuroadaptive systems.
CUI ’25, Waterloo, ON, Canada
</summary>
<dc:date>2025-07-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Synthetic to Human: The Gap Between AI-Predicted and Actual Pro-Environmental Behavior Change After Chatbot Persuasion</title>
<link href="https://hdl.handle.net/1721.1/164686" rel="alternate"/>
<author>
<name>Doudkin, Alexander</name>
</author>
<author>
<name>Pataranutaporn, Pat</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164686</id>
<updated>2026-02-01T07:32:08Z</updated>
<published>2025-07-07T00:00:00Z</published>
<summary type="text">From Synthetic to Human: The Gap Between AI-Predicted and Actual Pro-Environmental Behavior Change After Chatbot Persuasion
Doudkin, Alexander; Pataranutaporn, Pat; Maes, Pattie
Pro-environmental behavior (PEB) is vital to combat climate change, yet turning awareness into intention and action remains elusive. We explore large language models (LLMs) as tools to promote PEB, comparing their impact across 3,600 participants: real humans (n=1,200), simulated humans based on actual participant data (n=1,200), and fully synthetic personas (n=1,200). All three participant groups faced either personalized chatbots, standard chatbots, or static statements, employing four persuasion strategies (moral foundations, future self-continuity, action orientation, or “freestyle” chosen by the LLM). Results reveal a “synthetic persuasion paradox”: synthetic and simulated participants significantly change their post-intervention PEB stance, while human attitudes barely shift. Simulated participants better approximate human behavior but still overestimate effects. This disconnect underscores LLMs’ potential for pre-evaluating PEB interventions but warns of their limits in predicting human responses. We call for refined synthetic modeling and sustained, extended human trials to align conversational AI’s promise with tangible sustainability outcomes.
CUI ’25, Waterloo, ON, Canada
</summary>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Approximation Schemes for Matching Queues</title>
<link href="https://hdl.handle.net/1721.1/164685" rel="alternate"/>
<author>
<name>AmaniHamedani, Alireza</name>
</author>
<author>
<name>Aouad, Ali</name>
</author>
<author>
<name>Saberi, Amin</name>
</author>
<id>https://hdl.handle.net/1721.1/164685</id>
<updated>2026-02-01T07:32:06Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Adaptive Approximation Schemes for Matching Queues
AmaniHamedani, Alireza; Aouad, Ali; Saberi, Amin
We study a continuous-time, infinite-horizon dynamic bipartite matching problem. Suppliers arrive according to a Poisson process; while waiting, they may abandon the queue at a uniform rate. Customers, on the other hand, must be matched upon arrival. The objective is to minimize the expected long-term average cost subject to a throughput constraint on the total match rate.&#13;
Previous literature on dynamic matching focuses on “static” policies, where the matching decisions do not depend explicitly on the state of the supplier queues, achieving constant-factor approximations. By contrast, we design “adaptive” policies, which leverage queue length information, and obtain near-optimal polynomial-time algorithms for several classes of instances.&#13;
First, we develop a bi-criteria fully polynomial-time approximation scheme for dynamic matching on networks with a constant number of queues, which computes a (1−ε)-approximation of the optimal policy in time polynomial in both the input size and 1/ε. A key new technique is a hybrid LP relaxation, which combines static and state-dependent LP approximations of the queue dynamics, after a decomposition of the network. Networks with a constant number of queues are motivated by deceased organ donation schemes, where the supply types can be divided according to blood and tissue types.&#13;
The above algorithm, combined with a careful cell decomposition, gives a polynomial-time approximation scheme for dynamic matching on Euclidean networks of fixed dimension. The Euclidean case is of interest in ride-hailing and spatial service platforms, where the goal is to fulfill as many trips as possible while minimizing driving distances.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Output-Sensitive Approximate Counting via a Measure-Bounded Hyperedge Oracle, or: How Asymmetry Helps Estimate 𝑘-Clique Counts Faster</title>
<link href="https://hdl.handle.net/1721.1/164684" rel="alternate"/>
<author>
<name>Censor-Hillel, Keren</name>
</author>
<author>
<name>Even, Tomer</name>
</author>
<author>
<name>Vassilevska Williams, Virginia</name>
</author>
<id>https://hdl.handle.net/1721.1/164684</id>
<updated>2026-02-01T07:32:12Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Output-Sensitive Approximate Counting via a Measure-Bounded Hyperedge Oracle, or: How Asymmetry Helps Estimate &amp;#55349;&amp;#56408;-Clique Counts Faster
Censor-Hillel, Keren; Even, Tomer; Vassilevska Williams, Virginia
Dell, Lapinskas and Meeks [DLM SICOMP 2022] presented a general reduction from approximate counting to decision for a class of fine-grained problems that can be viewed as hyperedge counting or detection problems in an implicit hypergraph, thus obtaining tight equivalences between approximate counting and decision for many key problems such as k-clique, k-sum and more. Their result is a reduction from approximately counting the number of hyperedges in an implicit k-partite hypergraph to a polylogarithmic number of calls to a hyperedge oracle that returns whether a given subhypergraph contains an edge.&#13;
The main result of this paper is a generalization of the DLM result for output-sensitive approximate counting, where the running time of the desired counting algorithm is inversely proportional to the number of witnesses. Our theorem is a reduction from approximately counting the (unknown) number of hyperedges in an implicit k-partite hypergraph to a polylogarithmic number of calls to a hyperedge oracle called only on subhypergraphs with a small “measure”. If a subhypergraph has u_i nodes in the i-th node partition of the k-partite hypergraph, then its measure is ∏_i u_i.&#13;
Using the new general reduction and by efficiently implementing measure-bounded colorful independence oracles, we obtain new improved output-sensitive approximate counting algorithms for k-clique, k-dominating set and k-sum. In graphs with n^t k-cliques, for instance, our algorithm (1±ε)-approximates the k-clique count in time Õ_ε(n^{ω(k−t−1/3, k−t/3, k−t+2/3)} + n^2), where ω(a,b,c) is the exponent of n^a × n^b by n^b × n^c matrix multiplication. For large k and t &gt; 2, this is a substantial improvement over prior work, even if ω = 2.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lightweight and Locality-Aware Composition of Black-Box Subroutines</title>
<link href="https://hdl.handle.net/1721.1/164683" rel="alternate"/>
<author>
<name>Bansal, Manya</name>
</author>
<author>
<name>Sharlet, Dillon</name>
</author>
<author>
<name>Ragan-Kelley, Jonathan</name>
</author>
<author>
<name>Amarasinghe, Saman</name>
</author>
<id>https://hdl.handle.net/1721.1/164683</id>
<updated>2026-02-01T07:32:14Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Lightweight and Locality-Aware Composition of Black-Box Subroutines
Bansal, Manya; Sharlet, Dillon; Ragan-Kelley, Jonathan; Amarasinghe, Saman
Subroutines are essential building blocks in software design: users encapsulate common functionality in libraries and write applications by composing calls to subroutines. Unfortunately, performance may be lost at subroutine boundaries due to reduced locality and increased memory consumption. Operator fusion helps recover performance lost at composition boundaries. Previous solutions fuse operators by manually rewriting code into monolithic fused subroutines, or by relying on heavyweight compilers to generate code that performs fusion. Both approaches require a semantic understanding of the entire computation, breaking the decoupling necessary for modularity and reusability of subroutines.&#13;
&#13;
In this work, we attempt to identify the minimal ingredients required to fuse computations, enabling composition of subroutines without sacrificing performance or modularity. We find that, unlike previous approaches that require a semantic understanding of the computation, most opportunities for fusion require understanding only data production and consumption patterns. Exploiting this insight, we add fusion on top of black-box subroutines by proposing a lightweight enrichment of subroutine declarations to expose data-dependence patterns. We implement our approach in a system called Fern, and demonstrate Fern's benefits by showing that it is competitive with state-of-the-art, high-performance libraries with manually fused operators, can fuse across library and domain boundaries for unforeseen workloads, and can deliver speedups of up to 5× over unfused code.
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prolonged photostability in hexagonal boron nitride quantum emitters</title>
<link href="https://hdl.handle.net/1721.1/164682" rel="alternate"/>
<author>
<name>Li, Sylvia Xin</name>
</author>
<author>
<name>Ichihara, Takeo</name>
</author>
<author>
<name>Park, Hyoju</name>
</author>
<author>
<name>He, Guangwei</name>
</author>
<author>
<name>Kozawa, Daichi</name>
</author>
<author>
<name>Wen, Yi</name>
</author>
<author>
<name>Koman, Volodymyr B</name>
</author>
<author>
<name>Zeng, Yuwen</name>
</author>
<author>
<name>Kuehne, Matthias</name>
</author>
<author>
<name>Yuan, Zhe</name>
</author>
<author>
<name>Faucher, Samuel</name>
</author>
<author>
<name>Warner, Jamie H</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164682</id>
<updated>2026-02-01T07:32:52Z</updated>
<published>2023-03-06T00:00:00Z</published>
<summary type="text">Prolonged photostability in hexagonal boron nitride quantum emitters
Li, Sylvia Xin; Ichihara, Takeo; Park, Hyoju; He, Guangwei; Kozawa, Daichi; Wen, Yi; Koman, Volodymyr B; Zeng, Yuwen; Kuehne, Matthias; Yuan, Zhe; Faucher, Samuel; Warner, Jamie H; Strano, Michael S
Single-photon emitters are crucial building blocks for optical quantum technologies. Hexagonal boron nitride (hBN) is a promising two-dimensional material that hosts bright, room-temperature single-photon emitters. However, photo instability is a persistent challenge preventing practical applications of these properties. Here, we reveal the ubiquitous photobleaching of hBN vacancy emitters. Independent of the source or the number of hBN layers, we find that the photobleaching of a common emission at 1.98 ± 0.05 eV can be described by two consistent time constants, namely a first bleaching lifetime of 5 to 10 s, and a second bleaching lifetime in the range of 150 to 220 s. Only the former is environmentally sensitive and can be significantly mitigated by shielding O₂, whereas the latter could be the result of carbon-assisted defect migration. Annular dark-field scanning transmission electron microscopy of photobleached hBN allows for visualizing vacancy defects and carbon substitution at single atom resolution, supporting the migration mechanism along with X-ray photoelectron spectroscopy. Thermal annealing at 850 °C of liquid exfoliated hBN eliminates both bleaching processes, leading to persistent photostability. These results represent a significant advance to potentially engineer hBN vacancy emitters with the photostability requisite for quantum applications.
</summary>
<dc:date>2023-03-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discretized hexagonal boron nitride quantum emitters and their chemical interconversion</title>
<link href="https://hdl.handle.net/1721.1/164681" rel="alternate"/>
<author>
<name>Kozawa, Daichi</name>
</author>
<author>
<name>Li, Sylvia Xin</name>
</author>
<author>
<name>Ichihara, Takeo</name>
</author>
<author>
<name>Rajan, Ananth Govind</name>
</author>
<author>
<name>Gong, Xun</name>
</author>
<author>
<name>He, Guangwei</name>
</author>
<author>
<name>Koman, Volodymyr B</name>
</author>
<author>
<name>Zeng, Yuwen</name>
</author>
<author>
<name>Kuehne, Matthias</name>
</author>
<author>
<name>Silmore, Kevin S</name>
</author>
<author>
<name>Parviz, Dorsa</name>
</author>
<author>
<name>Liu, Pingwei</name>
</author>
<author>
<name>Liu, Albert Tianxiang</name>
</author>
<author>
<name>Faucher, Samuel</name>
</author>
<author>
<name>Yuan, Zhe</name>
</author>
<author>
<name>Warner, Jamie</name>
</author>
<author>
<name>Blankschtein, Daniel</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164681</id>
<updated>2026-02-01T07:32:50Z</updated>
<published>2023-01-03T00:00:00Z</published>
<summary type="text">Discretized hexagonal boron nitride quantum emitters and their chemical interconversion
Kozawa, Daichi; Li, Sylvia Xin; Ichihara, Takeo; Rajan, Ananth Govind; Gong, Xun; He, Guangwei; Koman, Volodymyr B; Zeng, Yuwen; Kuehne, Matthias; Silmore, Kevin S; Parviz, Dorsa; Liu, Pingwei; Liu, Albert Tianxiang; Faucher, Samuel; Yuan, Zhe; Warner, Jamie; Blankschtein, Daniel; Strano, Michael S
Quantum emitters in two-dimensional hexagonal boron nitride (hBN) are of significant interest because of their unique photophysical properties, such as single-photon emission at room temperature, and promising applications in quantum computing and communications. The photoemission from hBN defects covers a wide range of emission energies but identifying and modulating the properties of specific emitters remain challenging due to uncontrolled formation of hBN defects. In this study, more than 2000 spectra are collected consisting of single, isolated zero-phonon lines (ZPLs) between 1.59 and 2.25 eV from diverse sample types. Most of the ZPLs are organized into seven discretized emission energies. All emitters exhibit a range of lifetimes from 1 to 6 ns, and phonon sidebands offset by the dominant lattice phonon in hBN near 1370 cm⁻¹. Two chemical processing schemes are developed based on water and boric acid etching that generate or preferentially interconvert specific emitters, respectively. The identification and chemical interconversion of these discretized emitters should significantly advance the understanding of solid-state chemistry and photophysics of hBN quantum emission.
</summary>
<dc:date>2023-01-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rational Design and Efficacy of Glucose‐Responsive Insulin Therapeutics and Insulin Delivery Systems by Computation Using Connected Human and Rodent Models</title>
<link href="https://hdl.handle.net/1721.1/164680" rel="alternate"/>
<author>
<name>Yang, Sungyun</name>
</author>
<author>
<name>Yang, Jing Fan</name>
</author>
<author>
<name>Gong, Xun</name>
</author>
<author>
<name>Weiss, Michael A</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164680</id>
<updated>2026-02-01T07:32:49Z</updated>
<published>2023-06-15T00:00:00Z</published>
<summary type="text">Rational Design and Efficacy of Glucose‐Responsive Insulin Therapeutics and Insulin Delivery Systems by Computation Using Connected Human and Rodent Models
Yang, Sungyun; Yang, Jing Fan; Gong, Xun; Weiss, Michael A; Strano, Michael S
Glucose‐responsive insulins (GRIs) use plasma glucose levels in a diabetic patient to activate a specifically designed insulin analogue to a more potent state in real time. Alternatively, some GRI concepts use glucose‐mediated release or injection of insulin into the bloodstream. GRIs hold promise to exhibit much improved pharmacological control of the plasma glucose concentration, particularly for the problem of therapeutically induced hypoglycemia. Several innovative GRI schemes have been introduced in the literature, but there remains a dearth of quantitative analysis to aid the development and optimization of these constructs into effective therapeutics. This work evaluates several proposed classes of GRIs using a previously described pharmacokinetic model, PAMERAH, which simulates the glucoregulatory system of humans and rodents. GRI concepts are grouped into three mechanistic classes: 1) intrinsic GRIs, 2) glucose‐responsive particles, and 3) glucose‐responsive devices. Each class is analyzed for optimal designs that maintain glucose levels within the euglycemic range. These derived GRI parameter spaces are then compared between rodents and humans, providing the differences in clinical translation success for each candidate. This work demonstrates a computational framework to evaluate the potential clinical translatability of existing glucose‐responsive systems, providing a useful approach for future GRI development.
</summary>
<dc:date>2023-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wearable sensors for monitoring marine environments and their inhabitants</title>
<link href="https://hdl.handle.net/1721.1/164679" rel="alternate"/>
<author>
<name>Kaidarova, Altynay</name>
</author>
<author>
<name>Geraldi, Nathan R</name>
</author>
<author>
<name>Wilson, Rory P</name>
</author>
<author>
<name>Kosel, Jürgen</name>
</author>
<author>
<name>Meekan, Mark G</name>
</author>
<author>
<name>Eguíluz, Víctor M</name>
</author>
<author>
<name>Hussain, Muhammad Mustafa</name>
</author>
<author>
<name>Shamim, Atif</name>
</author>
<author>
<name>Liao, Hanguang</name>
</author>
<author>
<name>Srivastava, Mani</name>
</author>
<author>
<name>Saha, Swapnil Sayan</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<author>
<name>Zhang, Xiangliang</name>
</author>
<author>
<name>Ooi, Boon S</name>
</author>
<author>
<name>Holton, Mark</name>
</author>
<author>
<name>Hopkins, Lloyd W</name>
</author>
<author>
<name>Jin, Xiaojia</name>
</author>
<author>
<name>Gong, Xun</name>
</author>
<author>
<name>Quintana, Flavio</name>
</author>
<author>
<name>Tovasarov, Adylkhan</name>
</author>
<author>
<name>Tasmagambetova, Assel</name>
</author>
<author>
<name>Duarte, Carlos M</name>
</author>
<id>https://hdl.handle.net/1721.1/164679</id>
<updated>2026-02-01T07:32:54Z</updated>
<published>2023-06-26T00:00:00Z</published>
<summary type="text">Wearable sensors for monitoring marine environments and their inhabitants
Kaidarova, Altynay; Geraldi, Nathan R; Wilson, Rory P; Kosel, Jürgen; Meekan, Mark G; Eguíluz, Víctor M; Hussain, Muhammad Mustafa; Shamim, Atif; Liao, Hanguang; Srivastava, Mani; Saha, Swapnil Sayan; Strano, Michael S; Zhang, Xiangliang; Ooi, Boon S; Holton, Mark; Hopkins, Lloyd W; Jin, Xiaojia; Gong, Xun; Quintana, Flavio; Tovasarov, Adylkhan; Tasmagambetova, Assel; Duarte, Carlos M
Human societies depend on marine ecosystems, but their degradation continues. Toward mitigating this decline, new and more effective ways to precisely measure the status and condition of marine environments are needed alongside existing rebuilding strategies. Here, we provide an overview of how sensors and wearable technology developed for humans could be adapted to improve marine monitoring. We describe barriers that have slowed the transition of this technology from land to sea, update on the developments in sensors to advance ocean observation and advocate for more widespread use of wearables on marine organisms in the wild and in aquaculture. We propose that large-scale use of wearables could facilitate the concept of an ‘internet of marine life’ that might contribute to a more robust and effective observation system for the oceans and commercial aquaculture operations. These observations may aid in rationalizing strategies toward conservation and restoration of marine communities and habitats.
</summary>
<dc:date>2023-06-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operational Fuel Inefficiency in Cruise Flight: A Worldwide Geospatial Analysis</title>
<link href="https://hdl.handle.net/1721.1/164678" rel="alternate"/>
<author>
<name>Trávník, Marek</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/164678</id>
<updated>2026-02-01T03:01:19Z</updated>
<published>2026-01-30T00:00:00Z</published>
<summary type="text">Operational Fuel Inefficiency in Cruise Flight: A Worldwide Geospatial Analysis
Trávník, Marek; Hansman, R. John
</summary>
<dc:date>2026-01-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Deep Learning for Building Detection with Application to Informal Urban Planning</title>
<link href="https://hdl.handle.net/1721.1/164676" rel="alternate"/>
<author>
<name>González, Lucas</name>
</author>
<author>
<name>Toutouh, Jamal</name>
</author>
<author>
<name>Nesmachnow, Sergio</name>
</author>
<id>https://hdl.handle.net/1721.1/164676</id>
<updated>2026-03-08T03:39:48Z</updated>
<published>2026-01-09T00:00:00Z</published>
<summary type="text">Low-Cost Deep Learning for Building Detection with Application to Informal Urban Planning
González, Lucas; Toutouh, Jamal; Nesmachnow, Sergio
This article studies the application of deep neural networks for automatic building detection in aerial RGB images. Special focus is put on accuracy robustness in both well-structured and poorly planned urban scenarios, which pose significant challenges due to occlusions, irregular building layouts, and limited contextual cues. The applied methodology considers several CNNs using only RGB images as input, and both validation and transfer capabilities are studied. U-Net-based models achieve the highest single-model accuracy, with an Intersection over Union (IoU) of 0.9101. A soft-voting ensemble of the best U-Net models further increases performance, reaching a best ensemble IoU of 0.9665, improving over state-of-the-art building detection methods on standard benchmarks. The approach demonstrates strong generalization using only RGB imagery, supporting scalable, low-cost applications in urban planning and geospatial analysis.
</summary>
<dc:date>2026-01-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>When the Psyche and the 'Net Collide: Sources of and potential methods for preventing Bad Behavior Online</title>
<link href="https://hdl.handle.net/1721.1/164675" rel="alternate"/>
<author>
<name>Wedeman, Sara</name>
</author>
<author>
<name>Clark, David D</name>
</author>
<id>https://hdl.handle.net/1721.1/164675</id>
<updated>2026-02-11T15:35:02Z</updated>
<published>2026-01-29T00:00:00Z</published>
<summary type="text">When the Psyche and the 'Net Collide: Sources of and potential methods for preventing Bad Behavior Online
Wedeman, Sara; Clark, David D
With the emergence of social networking has come a range of harmful and malicious behaviors online, including disinformation, cyberbullying, and sextortion among others. These behaviors arise from a number of causes, including the incentives of the providers of the social networking platforms and the technical affordances of those platforms, which in some cases facilitate these abuses. This report sheds light on the causes and possible mitigations of these behaviors through the lens of behavioral psychology. Results from psychological research suggest that these abuses play on specific human attributes. To design effective mitigations, it is crucial that these human attributes be understood. This report draws on literature from psychology research to outline the important human behavioral attributes, relates these to some of the important affordances found in social networking applications, and suggests possible approaches that can damp the bad behavior we observe online.
This is the final report for an NSF-sponsored study of psychology literature related to the online experience, and the drivers and possible mitigations of bad behavior online.
</summary>
<dc:date>2026-01-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems of Visualization for Musical Futures</title>
<link href="https://hdl.handle.net/1721.1/164673" rel="alternate"/>
<author>
<name>Naseck, Perry</name>
</author>
<id>https://hdl.handle.net/1721.1/164673</id>
<updated>2026-01-30T03:24:59Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Systems of Visualization for Musical Futures
Naseck, Perry
This thesis investigates how large-scale visual systems can communicate the presence, agency, and foresight of improvising musical agents (human and AI) during live performance. We propose a framework for manifesting AI collaborators on stage through five principles: musical transparency, live improvisational reactivity, demonstrated virtuosity, communication for collaboration, and visual fit. Two public performances operationalize these ideas: an addressable-light sculpture that renders harmonic space, and a stage-sized kinetic sculpture built from novel, low-cost Generic Pan Tilt fixtures that visualize the AI’s planned “musical futures.” The latter combines a real-time, MIDI-conditioned, Transformer-based hand-motion model with deterministic, pattern-based mappings that signal states such as resting and regeneration. Audience surveys indicate that viewers perceived links between musical turns and kinetic gestures while requesting clearer explanatory cues. We document the open-source hardware, firmware, and control protocols of the Generic Pan Tilt platform and reflect on design tradeoffs for accessibility, reliability, and expressivity. Finally, we outline a real-time analysis toolchain (motif detection, parallelism, and continuous energy/tension estimators) that emits OSC triggers for lighting, media, kinetic, and spatial-audio systems, enabling reactive shows beyond timecode. Together, these systems advance performable visualizations of human-improvised and AI-driven musical futures.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Rules for LLM-Generated Code: A RealWorld Case Study</title>
<link href="https://hdl.handle.net/1721.1/164672" rel="alternate"/>
<author>
<name>Lawrence, Jennifer M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164672</id>
<updated>2026-01-30T03:24:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design Rules for LLM-Generated Code: A RealWorld Case Study
Lawrence, Jennifer M.
This thesis conducts a case study exploring the interaction between software design, extensibility, and LLM code generation. The central problem we investigate is whether LLMs violate software design principles in ways that introduce bugs and ultimately hinder extensibility. We examine several repositories belonging to the RealWorld collection, a project that demonstrates combinations of frameworks, databases, and programming languages for building full-stack web apps modeled on an existing social media application. We create a concept-based implementation of the RealWorld API. Concept Design defines software systems in terms of the abstract purposes and relationships of self-contained units of functionality. It enforces stringent design standards and aims to help humans better understand complex software behavior. To test code extensibility, we develop three phases of new functionality to be added to the RealWorld API. Each phase is intended to mimic real-world software development, adding functionality that is commonly found in social media platforms while increasing nuance and complexity. The code for these extensions is generated by an AI agent, then reviewed by a human coder who classifies and fixes any bugs. In this study, we examine how LLMs interact with software paradigms like Concept Design, the kinds of design violations they produce, and whether these violations correlate with bugs that impede extensibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cognify: An On-Device, AI-powered Learning Assistant</title>
<link href="https://hdl.handle.net/1721.1/164671" rel="alternate"/>
<author>
<name>Huang, Siyong</name>
</author>
<id>https://hdl.handle.net/1721.1/164671</id>
<updated>2026-01-30T03:24:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Cognify: An On-Device, AI-powered Learning Assistant
Huang, Siyong
Large Language Models (LLMs) have proven highly effective for a wide range of natural language processing tasks, but their size and compute requirements often restrict their use to powerful cloud-based infrastructures. In recent years, significant progress has been made in shrinking LLMs while maintaining performance levels comparable to much larger models. We are approaching the point where the capabilities of massive, multi-billion parameter models can be realistically replicated on consumer-grade devices. This thesis builds upon that foundation by developing an AI-powered note-taking application that runs entirely offline, using only the compute resources available on a personal laptop. The application is designed to listen to lectures alongside the student and provide support in real time through transcription, note generation, and context-aware search. Achieving this level of interactivity locally introduces challenges in reducing end-to-end latency, which this project addresses through both model-level optimizations and the design of efficient prompting and inference algorithms. A demo of the app can be found on YouTube.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Analysis of the Apple AMX Matrix Accelerator</title>
<link href="https://hdl.handle.net/1721.1/164670" rel="alternate"/>
<author>
<name>Zhou, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164670</id>
<updated>2026-01-30T03:24:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Performance Analysis of the Apple AMX Matrix Accelerator
Zhou, Jonathan
Apple Silicon integrates a dedicated Apple Matrix Coprocessor (AMX) that executes outer-product style computations with high throughput, but its public programming model remains largely hidden behind the Accelerate framework. This thesis turns AMX into a more predictable and practical target by combining (i) empirical throughput characterization, (ii) a case study on AMX specific matrix multiplication (GEMM) design, and (iii) an interpretable rule-based latency model that predicts cycle counts for short AMX instruction sequences. First, microbenchmarks quantify AMX load/store and compute limits across matrix and vector modes and data types. We analyze throughput in both GFLOPS and AMX instructions per cycle, and also observe output register based throughput limitations. Second, we develop an in-place GEMM that uses masked outer products and strategically overlapping tiles to avoid scratch buffers used by Accelerate, outperforming Accelerate while preserving simplicity. Third, we introduce a compact latency model that decomposes cycles into per-instruction BaseTime, symmetric SwitchLatency for instruction changes, and instruction FullLatency (data dependency) terms. Fitted with non-negative coordinate descent on length-2 loops and validated on length-3 sequences via a lightweight loop simulation, the model obtains reasonably high accuracy while remaining helpful for those trying to understand the architecture.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weak Identification and Network Measurement Error in Peer Effects Estimation</title>
<link href="https://hdl.handle.net/1721.1/164669" rel="alternate"/>
<author>
<name>Wang, William Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/164669</id>
<updated>2026-01-30T03:06:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Weak Identification and Network Measurement Error in Peer Effects Estimation
Wang, William Wei
The growing availability of social network data has enabled a surge of research on social interactions. In particular, peer effects, once considered unidentifiable, have now been shown to be identified given knowledge of the network structure. Despite this positive result, questions remain about the existence and nature of peer effects, due to concerns about identification strength and the reliability of network data. This work investigates two key threats to the estimation of peer effects: weak identification and network measurement error. We show that weak instrument problems arise in moderately dense networks due to rapid averaging, leading to slow convergence rates even when estimators remain consistent. On the measurement error side, we show that additive edge weight errors can be mitigated in such networks due to the same averaging phenomena, but the error remains a relevant threat to consistency in sparser networks. We further demonstrate that when both issues are present, the resulting estimators exhibit non-vanishing bias, suggesting that the combined effect of weak instruments and measurement error can be more severe than either problem in isolation. Overall, our results aim to clarify how these non-standard estimation challenges impact our ability to study peer effects using network data.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeing Beyond Limits with Physics-Informed Priors</title>
<link href="https://hdl.handle.net/1721.1/164668" rel="alternate"/>
<author>
<name>Liu, Yang</name>
</author>
<id>https://hdl.handle.net/1721.1/164668</id>
<updated>2026-01-30T03:06:34Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Seeing Beyond Limits with Physics-Informed Priors
Liu, Yang
Conventional imaging systems are limited by dimensionality and visibility: standard sensors capture only two-dimensional data, while light diffuses or scatters across surfaces and through complex media. This dissertation reformulates imaging as an interplay of optical encoding and neural decoding. It models forward physical processes and iteratively refines them using deep denoisers. By embedding physics-informed priors into this optimization, it aims to surpass conventional limits in dimensionality and visibility. First, I develop Privacy Dual Imaging using an ambient light sensor. This approach tackles both dimensionality and visibility challenges when imaging with a single-point, non-imaging component on smart devices. Inspired by 1984’s “Big Brother” telescreen, I demonstrate how subtle light intensity fluctuations can reveal unseen image information; however, the goal is to highlight privacy concerns, not exploit them. It addresses two visibility limits—pixel-less and lens-less imaging—by using the screen as a spatial modulator and exploiting involuntary motion to create a virtual pinhole effect. A quantized, physics-informed prior improves reconstruction from heavily quantized sensor measurements. Second, I propose Snapshot Compressive Imaging (SCI) augmented with deep plug-and-play physics-informed priors to overcome the dimensionality limit of 2D sensors. SCI compressively encodes multiple temporal, spectral, or angular frames into a single measurement. A deep plug-and-play prior algorithm introduces high-dimensional priors learned from images and videos into the iterative reconstruction process, improving fidelity, speed, and flexibility. Experiments show notable gains in reconstruction quality and efficiency across different SCI datasets, including large-format 4K UHD scenarios.
Third, I introduce Rank-Reduced physics-informed priors, showing that large pretrained AI models—especially diffusion models—can act as general visual priors across both dimensionality and visibility challenges. A relax-then-tighten strategy handles ill-conditioning by applying truncated singular value decomposition to reduce rank deficiencies, followed by a Stable Diffusion refiner (SDEdit) plug-and-play prior that constrains reconstructions to valid image spaces. Simulations and passive non-line-of-sight imaging experiments verify the approach’s stability and effectiveness. Physics-informed priors promise to extend the boundaries of imaging, enabling us to see beyond current dimensionality and visibility limits and to unlock new applications from macro-scale to micro-scale observations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Large Language Models from a Data Systems Perspective</title>
<link href="https://hdl.handle.net/1721.1/164667" rel="alternate"/>
<author>
<name>Chen, Peter Baile</name>
</author>
<id>https://hdl.handle.net/1721.1/164667</id>
<updated>2026-01-30T03:24:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Large Language Models from a Data Systems Perspective
Chen, Peter Baile
Strong retrieval and reasoning capabilities are essential for large language models (LLMs) to effectively handle a broad spectrum of downstream tasks, such as open-domain question answering and solving math or science problems. While current LLM-based frameworks achieve strong performance on complex retrieval and reasoning tasks, they do so at a high computational cost. Additionally, they often lack structured, systematic problem-solving strategies, leading to unexpected failures. In particular, these models typically operate in an iterative, online, and isolated fashion—failing to exploit relationships across data sources, opportunities for offline computation, and the benefits of reusability—resulting in less-than-optimal outcomes. In contrast, traditional data management systems are engineered for both efficiency and accuracy, with careful coordination across all stages of the query pipeline. Inspired by these principles, this work proposes novel approaches to improve LLM-based retrieval and reasoning by incorporating optimization techniques from data systems. Our evaluation across a range of knowledge- and reasoning-intensive datasets demonstrates significant gains in both accuracy and computational efficiency.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundational Abstractions for Quantum Programming</title>
<link href="https://hdl.handle.net/1721.1/164666" rel="alternate"/>
<author>
<name>Yuan, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/164666</id>
<updated>2026-01-30T03:06:30Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Foundational Abstractions for Quantum Programming
Yuan, Charles
Bringing the promise of quantum computation into reality requires not only building a quantum computer but also correctly programming it to run a quantum algorithm. To obtain asymptotic advantage over classical algorithms for applications including simulation, search, and optimization, quantum algorithms rely on the ability of data in quantum superposition to exhibit phenomena such as interference and entanglement. In turn, an implementation of the algorithm as a program must correctly orchestrate these phenomena in the states of qubits. Otherwise, it would yield incorrect outputs or lose quantum computational advantage.&#13;
&#13;
Given a quantum algorithm, what are the challenges and costs of realizing it as a program that can run on a physical quantum computer? In this thesis, I answer this question by showing how the basic abstractions of programming upon which many quantum algorithms rely – such as data structures and control flow – can fail to work correctly or efficiently on a quantum computer. I then demonstrate how we can leverage insights from research in programming languages to re-invent the software stack – including abstractions, libraries, and compilers – to meet the demands of quantum algorithms. This approach holds out a promise of expressive and efficient tools to program a quantum computer and thereby practically realize its computational advantage.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits</title>
<link href="https://hdl.handle.net/1721.1/164665" rel="alternate"/>
<author>
<name>Bui, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/164665</id>
<updated>2026-01-30T03:24:51Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits
Bui, Eric
The performance and scalability of superconducting quantum circuits depend critically on the microwave environment. Minimizing signal reflections and suppressing thermal noise are essential for achieving high-fidelity readout and preserving qubit coherence. A significant challenge arises from the use of conventional cryogenic components such as isolators and circulators, which exhibit nonideal out-of-band reflection characteristics. Reflections degrade impedance matching and limit the performance of broadband quantum-limited amplifiers. Superconducting implementations of reflectionless microwave filters offer a promising solution to mitigate these issues. The focus of this work is the fabrication and cryogenic characterization of reflectionless filters compatible with superconducting qubit fabrication flows. Devices were implemented on high-resistivity silicon substrates using aluminum ground planes, integrated nichrome resistors, and crossovers formed with SiO2 interlayer dielectric. Cryogenic measurements at 20 mK demonstrate high return loss, confirming the viability of these filters for co-fabrication with traveling-wave parametric amplifiers (TWPAs) and circuit quantum electrodynamics (cQED) architectures. The filters exhibit low insertion loss in the passband to maintain quantum measurement efficiency and provide broadband reflection suppression across frequencies relevant to superconducting qubits, offering a scalable way to manage microwave noise in superconducting quantum processors.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval</title>
<link href="https://hdl.handle.net/1721.1/164664" rel="alternate"/>
<author>
<name>Dongo Aguirre, Gyalpo Melchisedeck</name>
</author>
<id>https://hdl.handle.net/1721.1/164664</id>
<updated>2026-01-30T03:24:23Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval
Dongo Aguirre, Gyalpo Melchisedeck
Until now, state-of-the-art research into AI-driven clinical workflows has been confined to proprietary, closed-source systems from vendors like Epic and Oracle, or private experiments like Stanford’s ChatEHR, creating a critical barrier to academic innovation. This thesis introduces CONDOR, the first fully open-source and replicable research environment designed to simulate an agentic, conversational AI interacting with a high-fidelity Electronic Health Record (EHR). By integrating an open-source, FHIR-native EHR (Medplum) with a complex, realistic public clinical dataset (MIMIC-IV FHIR), CONDOR provides a foundational testbed that has been previously unavailable to the research community. The framework’s primary contribution is a novel alignment and evaluation methodology that adapts the principles of SelfCite to the clinical domain. We propose a “ClinicalConfidence” score to quantify the trustworthiness of generated statements and programmatically generate a high-quality preference dataset for alignment using Simple Preference Optimization (SimPO). We compare a standard vector-based Retrieval-Augmented Generation (RAG) baseline against a more advanced GraphRAG architecture that leverages a two-tiered knowledge graph of patient data and medical ontologies. Our results demonstrate that the full CONDOR system, combining GraphRAG with SimPO alignment, significantly improves citation quality and verifiability, establishing a new open-source benchmark for the development of safe and reliable clinical AI.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation</title>
<link href="https://hdl.handle.net/1721.1/164663" rel="alternate"/>
<author>
<name>Nair, Anushka Manchanda</name>
</author>
<id>https://hdl.handle.net/1721.1/164663</id>
<updated>2026-01-30T03:24:50Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation
Nair, Anushka Manchanda
As of 2025, social platforms have become a primary news source, magnifying the reach of misleading content [1]. Exposure to misinformation has been linked to shifts in public attitudes and behavior, including vaccine uptake [2] and voting behaviors [3]. Current misinformation detection approaches, however, often focus on a narrow definition of misinformation: factual claims that can be clearly judged as true or false. Recent research suggests the problem lies elsewhere: overt falsehoods (“vaccines contain microchips”) can carry little harm, while technically accurate but decontextualized narratives can be more influential. Allen et al. (2024) [4] found that factually accurate “vaccine-skeptical” content had a much greater impact on vaccine hesitancy than misinformation flagged by fact-checkers. These narratives work by omitting information, using misleading framing, or cherry-picking evidence, forms of manipulation that can elude traditional fact-checking. Though professional fact-checkers are often able to recognize these tactics and the broader context of information, they cannot keep pace with the volume of online content. This thesis designs a Large Language Model (LLM)-based pipeline meant to partner with, rather than replace, human fact-checkers. The system decomposes content into its explicit and implicit claims, rhetorical tactics, and the “missing context” questions it raises; retrieves evidence from fact-check databases and reliable sources; and synthesizes grounded explanations while assigning calibrated harm scores to guide triage. Evaluated on fact-checked tweets, the pipeline matched expert judgments in 92.6% of cases where experts agreed, and flagged posts on which experts disagreed for human review, a gray zone requiring human judgment.
The system’s explanations ranked higher than crowdsourced Community Notes in helpfulness, clarity, and trustworthiness when assessed by an LLM, and harm evaluations aligned with human reviewers in 87.5% of cases, enabling prioritization of content with greatest potential impact. Despite constraints of sample size and processing latency, the results demonstrate the feasibility of a human–AI workflow that treats disagreement as a signal and directs scarce attention towards high-impact misinformation that current automated systems can miss.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Simple Chemical Heuristics to Model and Discover Materials</title>
<link href="https://hdl.handle.net/1721.1/164662" rel="alternate"/>
<author>
<name>Ma, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/164662</id>
<updated>2026-01-30T03:07:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Learning Simple Chemical Heuristics to Model and Discover Materials
Ma, Andrew
Computational approaches have long played an important role in the field of materials science, driving both the scientific study of materials’ fundamental properties and the design of materials for technological applications. Currently, mainstream methods in computational materials science typically rely on either first-principles calculations or deep learning models. In this thesis, we take a different direction by developing remarkably simple data-driven models for predicting fundamental properties of materials, including electronic topology, metallicity, and band gap. These models take the form of highly interpretable chemical heuristics. A key finding of this work is the surprising result that electronic topology diagnosis – often regarded as a highly complex task – can, in fact, be performed heuristically using a simple and intuitive model. We further integrate this model into a workflow for discovering new topological materials. Altogether, this work revisits the classic idea of chemical heuristics through a modern data-driven lens, shedding new light on fundamental problems in materials science.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems</title>
<link href="https://hdl.handle.net/1721.1/164661" rel="alternate"/>
<author>
<name>Sneh, Tal</name>
</author>
<id>https://hdl.handle.net/1721.1/164661</id>
<updated>2026-01-30T03:24:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems
Sneh, Tal
Recent advances in silicon photonics have yielded impressive results in fields including biophotonic optical tweezers and trapped-ion quantum systems. However, the majority of these demonstrations, while offering advantages in size, cost, and dense integration, lag behind their bulk-optic counterparts, limited by a lack of critical advanced functionality such as spatial control of light in the near field or polarization control at visible wavelengths. This thesis addresses this gap by designing and experimentally demonstrating, to the best of our knowledge, the first cell experiments using single-beam integrated optical tweezers, the first chip-based 3D printer, and the first integrated polarization rotators and splitters at blue wavelengths. First, we demonstrate optical trapping and tweezing of microspheres using a near-field-focusing integrated optical phased array, at a standoff distance over two orders of magnitude larger than prior integrated demonstrations. We then use this system to perform the first cell experiments using single-beam integrated optical tweezers. Second, we use a tunable integrated optical phased array operating at red wavelengths to print designs in a visible-light-curing resin, demonstrating the first chip-based 3D printer. Third, we design and experimentally demonstrate the first integrated polarization rotators and splitters operating at blue wavelengths, enabling polarization control on chip for sophisticated integrated manipulation of trapped-ion and neutral-atom quantum systems. Finally, we develop key polarization-diverse integrated-photonics devices and utilize them to implement a variety of integrated-photonics-based polarization-gradient-cooling systems, culminating in the first demonstration of polarization-gradient cooling of a trapped ion by an integrated-photonics-based system.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Language Modeling from Visually Grounded Speech</title>
<link href="https://hdl.handle.net/1721.1/164660" rel="alternate"/>
<author>
<name>Lai, Cheng-I Jeff</name>
</author>
<id>https://hdl.handle.net/1721.1/164660</id>
<updated>2026-01-30T03:06:52Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Language Modeling from Visually Grounded Speech
Lai, Cheng-I Jeff
Recent advancements in spoken language processing have significantly reduced automatic speech recognition (ASR) error rates, driven by large-scale supervised training on paired speech–text data and, more recently, self-supervised pre-training on unpaired speech and audio. These methods have facilitated robust transfer learning across diverse speech and audio tasks. However, fully leveraging multimodal inputs, particularly visual context, remains underexplored. This thesis addresses this gap by developing novel language modeling techniques directly from visually grounded speech. We first introduce the Audio-Visual Neural Syntax Learner (AV-NSL), an unsupervised parser that recovers constituency trees directly from raw speech paired with images, demonstrating how visual context effectively bootstraps grammar induction without textual supervision. Next, we investigate Audio-Visual Word Discovery for Speech Translation, using the Fisher Spanish–English corpus to train a series of speech-to-speech translation models based on pseudo-word units discovered via audio-visual grounding. This study highlights that simplistic acoustic tokens and limited training data degrade re-synthesis and translation quality, underscoring two crucial missing ingredients: richer semantic tokens and large-scale training. Guided by these insights, we present Audio-Visual Gemma (AV-Gemma), a family of multimodal foundation models that condition jointly on images and learned semantic speech tokens. At scale, AV-Gemma generates visually coherent spoken captions and transfers robustly to tasks such as video-to-speech generation and spoken visual question answering, significantly advancing multimodal spoken-language processing.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ALPACA: An Algorithmic Pipeline for Automated Contour Annotation of Carnatic Music: A Dynamic Programming Framework for Pitch Segmentation and Note Transcription</title>
<link href="https://hdl.handle.net/1721.1/164659" rel="alternate"/>
<author>
<name>Parthasarathi, Sruthi</name>
</author>
<id>https://hdl.handle.net/1721.1/164659</id>
<updated>2026-01-30T03:24:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">ALPACA: An Algorithmic Pipeline for Automated Contour Annotation of Carnatic Music: A Dynamic Programming Framework for Pitch Segmentation and Note Transcription
Parthasarathi, Sruthi
In recent years, a wide range of computational techniques have been developed to extract information from recorded performances of Western music. However, these methods often achieve limited success when applied to non-Western musical traditions. Carnatic music, in particular, poses unique challenges due to the absence of a standardized notation system and the lack of a consistent mapping between frequency bands and note categories. This project introduces a dynamic programming–based transcription framework, incorporating novel methods for label estimation, contour segmentation, and related subtasks, and establishes the foundations for end-to-end automatic transcription of this art form.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Diverse Treatment Policies from Observational Health Data</title>
<link href="https://hdl.handle.net/1721.1/164658" rel="alternate"/>
<author>
<name>Ejilemele, Abe</name>
</author>
<id>https://hdl.handle.net/1721.1/164658</id>
<updated>2026-01-30T03:24:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modeling Diverse Treatment Policies from Observational Health Data
Ejilemele, Abe
Learning policies for real-world tasks often requires modeling human behavior, especially in domains like healthcare and driving. In these settings, skills are learned from expert human demonstrations, but such data are typically multimodal, violating the common single-expert assumption. We study sequential clinical treatment decision-making in the offline imitation learning setting, where environment interaction is prohibited, reflecting the challenges of experimentation in safety-critical domains. Existing methods for multi-expert offline imitation learning often restrict the latent space, underspecify its structure, or omit objective terms that prevent latent collapse and encourage behavior discovery. We propose a fully offline approach that addresses these shortcomings and improves learning from multi-expert demonstrations through modifications to the formulation of the latent approximate posterior and the model architecture. We suggest that our method is more robust to real-world settings where the true number of demonstrators may not be known. We also incorporate an occupancy matching term into our objective that injects awareness of the rollout distribution over trajectories into our behavior cloning objective. We evaluate our method against baselines on both simulated multi-expert demonstrations from an extended S-CVSim and real-world demonstrations from MIMIC. Our approach achieves consistently higher next-step action prediction and behavior discovery performance. While ground truth expert policies are unavailable for MIMIC, visual analysis shows our method uncovers clinically meaningful variations in expert strategies, reflecting treatment population diversity.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable Assembly of General Objects</title>
<link href="https://hdl.handle.net/1721.1/164657" rel="alternate"/>
<author>
<name>Tian, Yunsheng</name>
</author>
<id>https://hdl.handle.net/1721.1/164657</id>
<updated>2026-01-30T03:06:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Scalable Assembly of General Objects
Tian, Yunsheng
In this thesis, I present a scalable system towards fully automated and flexible robotic assembly that generalizes over diverse geometries and complex structures. Most real-world objects are assemblies composed of multiple parts. Assembly presents significant challenges for robots to execute long-horizon, contact-rich manipulation with both reliability and generalization. However, most manufacturing facilities today still rely heavily on manually programmed assembly lines, which require significant labor, time, and setup costs yet offer no flexibility to object variations. My proposed system synergizes global multi-step planning with local reactive learning-based control to enable generalizable and precise assembly. Such an integrated paradigm effectively leverages the best of both worlds, accomplishing results that neither planning nor learning could achieve alone. For planning, I leverage guidance from physical simulation and learned feasibility networks to efficiently search for part sequences, precise motions, and stable grasps for dual-arm robots over long horizons. For learning-based control, I train robust policies via reinforcement learning for submillimeter-level insertion across different part geometries, assembly paths, and grasp poses. I introduce and open-source the largest-scale assembly dataset to date and demonstrate my system’s generalization on thousands of simulated assemblies as well as through end-to-end real robot experiments. By integrating planning and learning, I showcase the first system to achieve complete and generalizable real-world multi-part assembly without domain knowledge or human demonstrations. Although the system plans and learns purely in simulation, it transfers zero-shot to the real world and achieves 80% successful steps.
Finally, I will share insights that further scale up robotic assembly and opportunities to extend to general manipulation, and discuss future directions to equip general-purpose robots with multi-step, precise manipulation capabilities.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Modular Superconducting Quantum Processor using Chiral Waveguide Quantum Electrodynamics</title>
<link href="https://hdl.handle.net/1721.1/164656" rel="alternate"/>
<author>
<name>Yankelevich, Beatriz</name>
</author>
<id>https://hdl.handle.net/1721.1/164656</id>
<updated>2026-01-30T03:24:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards a Modular Superconducting Quantum Processor using Chiral Waveguide Quantum Electrodynamics
Yankelevich, Beatriz
As the field of superconducting quantum computing advances, networking qubits within a single system becomes essential for building modular processors. Modularity allows the system to circumvent scalability constraints and enable architectures and computational schemes that exploit non-local connectivity to enhance processing capabilities. This work proposes non-local entanglement generation methods based on the theory of chiral waveguide quantum electrodynamics, which is the quantum-optical framework that describes systems of atoms coupled non-reciprocally to a continuum of modes. We leverage these effects to design a chiral communication module composed of multiple superconducting qubits, capable of both directional single-photon routing and the realization of chiral, driven-dissipative entanglement protocols.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fine-tuning Boltz for Antibody-Antigen Binding Prediction</title>
<link href="https://hdl.handle.net/1721.1/164655" rel="alternate"/>
<author>
<name>Kim, Ji Won</name>
</author>
<id>https://hdl.handle.net/1721.1/164655</id>
<updated>2026-01-30T03:24:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Fine-tuning Boltz for Antibody-Antigen Binding Prediction
Kim, Ji Won
Accurate prediction of antibody-antigen binding is a central challenge in computational immunology. Its direct implication for therapeutic antibody design and vaccine development has made it one of the most rapidly growing fields. Recent advances in protein language models and structure prediction have provided new tools for modeling, yet these approaches often fall short in capturing the fine-grained features that drive binding specificity in antibodies and antigens. This thesis evaluates multiple strategies for improving predictive performance. First, we investigate a custom multiple sequence alignment (MSA) experiment. Standard Boltz-2 training relies on MSAs from broad protein databases, which capture global diversity but under-represent lineage-specific constraints. To address this, we constructed antibody-specific MSAs to test whether restricting the search space to antibody repertoires improves model learning. Unfortunately, gains in downstream binding prediction were limited, suggesting that models may need to be trained on domain-specific databases in the first place. Our second line of investigation focused on fine-tuning Boltz-2, a generative structural foundation model, using curated antibody–antigen data. By leveraging Boltz-2’s internal sequence embeddings, we trained a predictive model for binding affinity. This approach yielded stronger ROC performance compared to baseline models, achieving a validation AUROC of 0.645, demonstrating the advantages of structural generative priors for antibody–antigen binding prediction.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deterministic Circuit Range Avoidance is (Likely) Intractable</title>
<link href="https://hdl.handle.net/1721.1/164654" rel="alternate"/>
<author>
<name>Ilango, Rahul</name>
</author>
<id>https://hdl.handle.net/1721.1/164654</id>
<updated>2026-01-30T03:24:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Deterministic Circuit Range Avoidance is (Likely) Intractable
Ilango, Rahul
Circuit Range Avoidance (denoted Avoid) is a computational problem where, given a Boolean circuit with more output bits than input bits, one must output a string outside of the range of the circuit. A simple counting argument implies that such a string must always exist and also guarantees that outputting a uniformly random string is correct with good probability. A natural question is whether this can be derandomized: does there exist an efficient deterministic algorithm for Avoid? We give the first evidence that deterministically solving Avoid is intractable. We show that there is no polynomial-time algorithm for Avoid under plausible assumptions in complexity theory and cryptography. Specifically, our assumptions are that NP ≠ coNP and that subexponentially-secure indistinguishability obfuscation exists.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Tackle Task Variations in Control - A Transportation Context</title>
<link href="https://hdl.handle.net/1721.1/164653" rel="alternate"/>
<author>
<name>Jayawardana, Vindula Muthushan</name>
</author>
<id>https://hdl.handle.net/1721.1/164653</id>
<updated>2026-01-30T03:06:23Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Learning to Tackle Task Variations in Control - A Transportation Context
Jayawardana, Vindula Muthushan
Real-world control tasks are messy and often exhibit task variations. Practical solutions to these problems must exhibit generalization across task variations. For example, in the task of controlling traffic signals, control strategies must adapt to different intersection topologies (the variations), each with distinct dynamics. In this thesis, we consider the challenge of coping with task variations in the context of transportation problems, specifically in roadway interventions where many such variations are both common and imperative to handle. We develop machine learning techniques to address three key challenges: 1) quantifying the impact of task variations in control, 2) modeling them to align with the real world, and 3) optimizing in their presence. To this end, we begin with a large-scale case study of cooperative eco-driving and illustrate how explicitly modeling task variations can surface otherwise overlooked insights. Building on this, we argue for the necessity of formally incorporating task variations into problem specifications, emphasizing that task underspecification due to loosely defined task variations can severely impair decision-making. We then introduce a contextual reinforcement learning algorithm capable of leveraging the structure of task variations to generalize effectively in cooperative eco-driving with autonomous vehicles. We also present IntersectionZoo, a benchmark designed to promote the development of learning algorithms that generalize by exploiting task variation structures, thus standardizing progress in the field. Finally, we explore task variation modeling through a generative modeling lens, using human driver behavior modeling as a case study. Overall, this thesis lays the groundwork for robust control methods by leveraging machine learning to tackle task variations, specifically in roadway intervention designs.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities</title>
<link href="https://hdl.handle.net/1721.1/164652" rel="alternate"/>
<author>
<name>Ranade, Esha</name>
</author>
<id>https://hdl.handle.net/1721.1/164652</id>
<updated>2026-01-30T03:24:42Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities
Ranade, Esha
Large Language Models (LLMs) have achieved remarkable success in natural language processing tasks and are increasingly being used for language generation. Significant advancements in this field have unlocked capabilities that enable their adoption in sophisticated roles, including acting as evaluators or "judges" of text for various attributes such as factuality, relevance, fluency, and reasoning quality. However, their understanding and ability to assess subjective attributes, such as the level of formality in a piece of writing, and produce content matching these subjective attributes remains unclear and underexplored. This research develops a methodology to study how LLMs evaluate subjective attributes. It has three primary contributions: (i) a reproducible user study to generate human-annotated labels for different attributes, (ii) an analysis of the extent to which different LLMs provide subjective labels aligned with human annotators, and (iii) an analysis of the extent to which LLMs generate content aligned with specified intended subjective labels, relative to humans. The user study and the analyses have been conducted both with and without a reference scale. The scale itself, the survey design, and the evaluation questions have all undergone multiple rounds of iteration informed by study tester feedback to improve clarity, consistency, and reliability for the final study. Comparisons between human-generated ratings and LLM-generated ratings for both human-generated content and LLM-generated content reveal the extent to which LLMs align with human judgment, providing insights into their capabilities and limitations. While humans typically do better in their roles, LLMs are able to attain reliably high levels of success in producing and judging text, despite tending to err on the more-formal side. Both groups’ performance increases significantly with the aid of a formalized reference scale. 
Across the suite of models tested, OpenAI’s GPT family leads overall performance, with Anthropic’s Claude and Meta’s LLaMA series showing notable strengths in specific formality ranges. Although this work focuses on the formality attribute of text, the methodology developed can be used to evaluate other subjective qualities of text, such as conciseness, usefulness, or persuasiveness. Ultimately, these findings may guide future efforts to fine-tune LLMs to produce text that more precisely matches the desired stylistic or ethical standards.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Burst Parallelism of SigmaOS processes with CRIU</title>
<link href="https://hdl.handle.net/1721.1/164651" rel="alternate"/>
<author>
<name>Tang, Frederick</name>
</author>
<id>https://hdl.handle.net/1721.1/164651</id>
<updated>2026-01-30T03:24:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Accelerating Burst Parallelism of SigmaOS processes with CRIU
Tang, Frederick
σOS is a multi-tenant cloud operating system designed to integrate the agility of serverless environments with the interactivity of microservices. A goal of achieving this integration is the ability to start new instances of server processes quickly. However, σOS only handles σcontainer initialization, and does not assist with runtime and app initialization costs. One approach to overcome this challenge is to checkpoint processes using Checkpoint/Restore in Userspace (CRIU). CRIU is a Linux toolset that can start new server instances by restoring them from a saved checkpointed state, avoiding the full cost of reinitialization and setup. This thesis introduces σCRIU, which adapts CRIU for burst-parallel spawning of microservices in σOS. σCRIU implements a number of optimizations: compressing checkpointed proc metadata to reduce network communication costs, implementing demand-paging using a lazy page service, and caching kernel metadata to reduce CRIU’s restore operation latency. These optimizations allow σCRIU to start new microservices on remote machines quickly while still making use of CRIU’s existing proven checkpoint and restore technology.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modern methods for causal inference and missing data</title>
<link href="https://hdl.handle.net/1721.1/164650" rel="alternate"/>
<author>
<name>Xia, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/164650</id>
<updated>2026-01-30T03:06:38Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modern methods for causal inference and missing data
Xia, Eric
The proliferation of data-driven approaches across a wide array of settings is one of the defining characteristics of the modern era. With this rise, there has been much focus on using data to answer causal questions, e.g., whether A causes a change in B. Furthermore, modern data collection practices have given rise to datasets that are often quite messy, sometimes missing important entries. Both problems are incredibly relevant to practitioners in a variety of disciplines, including policy-makers looking to make critical decisions that can influence the lives of many. On the surface these problems seem quite distinct, yet the literature has highlighted deep connections between the two settings. Indeed, many methods for addressing one question can often be repurposed to address the other. Both settings are quite classical, and many approaches to them remain so, but there has recently been great interest in developing techniques and algorithms that harness modern developments in statistics and machine learning. This thesis contributes to the literature by providing new methods as well as novel understandings of existing ones.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)</title>
<link href="https://hdl.handle.net/1721.1/164649" rel="alternate"/>
<author>
<name>Gosalia, Mehek</name>
</author>
<id>https://hdl.handle.net/1721.1/164649</id>
<updated>2026-01-30T03:24:39Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)
Gosalia, Mehek
This work introduces a novel pipeline for scene reconstruction that jointly prioritizes semantic accuracy and visual fidelity, addressing a gap in current approaches. Prior pipelines often emphasize either semantic analysis or photorealistic rendering, but rarely both. This method combines scene analysis, segmentation, and retexturing to yield reconstructions that preserve structural semantics, while convincingly reflecting the visual qualities of the original image. The motivation lies in the limitations of existing systems. Existing database-assisted approaches depend on proprietary datasets that restrict stylistic diversity or on in-the-wild assets. This constrains expressiveness and often produces results that are visually misaligned. Conversely, pipelines optimized for visual realism neglect semantic correctness, generating outputs that may appear plausible but lack categorical or structural grounding. Our framework addresses this by first enforcing semantic accuracy via selecting database assets, then editing those assets to be stylistically faithful to the reference, producing reconstructions that are both interpretable and expressive. We begin with database-assisted scene analysis, using an open-source asset database containing chairs, lamps, sofas, tables, and benches. Input images are depth-mapped, segmented, and parsed into object masks, which are matched to database assets based on semantic labels and visual correspondence. Each asset is broken into semantic segments and rescaled per-component using vision-language model predictions to better match the reference object. Finally, the asset is retextured based on the image mask of the reference object in the input image. Evaluation on six diverse scenes—both photographs and artworks—shows the pipeline produces semantically grounded, visually accurate reconstructions under non-research conditions.
Future work will focus on expanding the asset database, reducing reliance on proprietary texturing, and releasing an open-source implementation to broaden accessibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Planar Silicon Solar Cells for Singlet Fission Sensitization</title>
<link href="https://hdl.handle.net/1721.1/164648" rel="alternate"/>
<author>
<name>Wang, Janet Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/164648</id>
<updated>2026-01-30T03:24:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Designing Planar Silicon Solar Cells for Singlet Fission Sensitization
Wang, Janet Z.
Singlet fission (SF)-sensitized silicon (Si) solar cells offer a path towards surpassing the Shockley-Queisser efficiency limit for single-junction solar cells. However, realizing efficient charge transfer from the SF material to Si remains a significant challenge that requires careful interface engineering. Prior work showed that Si microwire cells sensitized with tetracene (Tc) and a zinc phthalocyanine (ZnPc) donor layer can boost photocurrent and external quantum efficiency (EQE). Planar devices are simpler to fabricate than microwire devices and reproduce the planar geometry of optical test samples to connect studies of the interface to device performance. This thesis integrates modeling and experimental approaches to guide the design of planar SF-sensitized Si solar cells. We developed a fabrication process for planar cells comparing varied oxide passivation layer growth conditions and surface treatments, Si(100) versus Si(111) orientation, and junctions formed by diffusion doping versus ion implantation. Complementary surface photovoltage (SPV) measurements on matching optical stacks show evidence of an illumination-induced transient positive charge density at the Tc/ZnPc/oxide/Si interface, consistent with increased field effect passivation. We find that SPV responses on AlOx/n-Si are dominated by substrate band bending; consequently, SiOx is the preferred passivation to suppress the background and isolate the SPV signals driven by the organics. A drift–diffusion model shows that the diffusion doping (exponential) emitters reduce surface recombination rates compared to ion implantation (Gaussian) emitters. We also show that a positive fixed charge density at the surface enhances short wavelength EQE, with the effect strongest for Gaussian emitters. Together, these results provide practical design rules for planar SF-sensitized Si cells and the study of charge transfer at organic-Si interfaces.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microsecond Time Synchronization for Computing Fiber Networks</title>
<link href="https://hdl.handle.net/1721.1/164647" rel="alternate"/>
<author>
<name>Li, Jenny Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/164647</id>
<updated>2026-01-30T03:24:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Microsecond Time Synchronization for Computing Fiber Networks
Li, Jenny Y.
We present a microsecond-accurate time synchronization method and time localization system for a sensor network of spatially-separated, low-power Bluetooth nodes, with the goal of integrating this system into thermally-drawn computing fibers. Each node consists of an nRF54L15 SoC paired with an ICS-43434 digital I2S microphone, enabling synchronized audio data collection. Our design leverages Bluetooth LE connection events to synchronize local clocks with sub-10 µs accuracy across a multi-peripheral topology; we trigger precise, CPU-independent hardware events to timestamp audio samples. We demonstrate that timestamped I2S data stored in external SPI flash can be correlated across devices to extract TDoA measurements for localizing sound sources. Cross-correlation techniques allow us to estimate direction and position, with localization errors reduced from 4.17 m to 0.39 m through clock synchronization. This prototype provides a roadmap for embedding synchronized sensing and computation within fibers and smart textiles, with implications for on-body audio perception and distributed sensing in flexible electronics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From String to Structure: Graph Threading for Physical Assembly</title>
<link href="https://hdl.handle.net/1721.1/164646" rel="alternate"/>
<author>
<name>Lin, Rebecca Y. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164646</id>
<updated>2026-01-30T03:24:37Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From String to Structure: Graph Threading for Physical Assembly
Lin, Rebecca Y. E.
Many artistic and engineering applications—from beadwork to deployable structures—create intricate, and sometimes dynamic, designs by threading cord through tubular components. We model the underlying design challenge—threading tubes so that they achieve a target connectivity when the string is pulled taut—as graph threading. In this formulation, tubes and their junctions correspond to edges and vertices of a graph, and the goal is to find a closed walk that induces a connected graph at every vertex while avoiding U-turns. We study two optimization objectives motivated by fabrication and deployment: minimizing length to reduce material cost and assembly time, and minimizing turn to reduce frictional resistance during deployment. For the length metric, we present a polynomial-time algorithm via reduction to minimum-weight perfect matching, prove tight worst-case bounds on optimal threadings, and identify special cases with faster algorithms. For the turn metric, we characterize the complexity landscape, proving NP-hardness for graphs of maximum degree 4, tractability for degree 3, and giving exact and approximation algorithms for restricted variants, including rectangular grid graphs. Finally, we turn from theory to fabrication, proposing multi-configuration threading—a new approach for achieving multiple predetermined configurations within a single system. As in earlier chapters, framing the problem in graph-theoretical terms provides access to powerful problem-solving techniques, guiding both algorithmic analysis and physical design.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steering Vision at Scale: From the Model Weights to Training Data</title>
<link href="https://hdl.handle.net/1721.1/164645" rel="alternate"/>
<author>
<name>Materzyńska, Joanna</name>
</author>
<id>https://hdl.handle.net/1721.1/164645</id>
<updated>2026-01-30T03:06:51Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Steering Vision at Scale: From the Model Weights to Training Data
Materzyńska, Joanna
We study the interpretability and controllability of multimodal and generative models, with a particular focus on text–image representation models and text-to-image diffusion systems. We begin by addressing limitations in CLIP’s multimodal embeddings, specifically the entanglement between visual and textual concepts within images. We demonstrate the consequences of this entanglement in both generative and discriminative tasks, and introduce a method for disentangling visual and textual representations. We showcase the utility of these disentangled embeddings in typographic attack resistance, improved image generation, and robust out-of-domain OCR detection. Building on this foundation, we explore methods to enhance the controllability of diffusion models. First, we tackle the challenge of unwanted concept generation. We introduce a technique to remove specific visual concepts using only their names, leveraging negative prompts and guidance to suppress target content without modifying training data or requiring model retraining. This approach enhances ethical alignment and enables greater user control in generative systems. We then turn to the complementary problem: incorporating new concepts. We present a few-shot motion customization technique for video generation models, which transfers motion patterns from a small set of examples to novel subjects. This method maintains the generalization capabilities of the base model while enabling consistent, subject-agnostic animation that preserves both identity and temporal coherence. To improve the fine-grained control of visual outputs, we propose a method for continuous manipulation of image attributes. This framework introduces smooth, intuitive controls that allow for dynamic, continuous steering of generated images. Unlike prompt engineering or token-level interventions, our approach offers real-time adjustment without sacrificing output realism.
Finally, we examine whether artistic styles in diffusion models require large-scale pretraining or can be learned in a lightweight, post-training manner. To this end, we train a base model on art-free data and introduce a compact adapter method that learns stylistic concepts from a small set of exemplar artworks. Our findings suggest that artistic domains can be integrated efficiently and ethically, without reliance on web-scale scraped datasets.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction</title>
<link href="https://hdl.handle.net/1721.1/164644" rel="alternate"/>
<author>
<name>Khoo, Ling Min Serena</name>
</author>
<id>https://hdl.handle.net/1721.1/164644</id>
<updated>2026-01-30T03:24:40Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction
Khoo, Ling Min Serena
Elucidating the structure of small molecules from complex mixtures using liquid chromatography tandem mass spectrometry (LC-MS/MS) is a challenging task with far-reaching implications in areas such as drug discovery, environmental science, and metabolism research. Despite its importance, and despite significant efforts to develop machine learning (ML) models for elucidating the molecular structures of unknown compounds from LC-MS/MS spectra, the performance of these models has been reported as insufficient for practical applications, warranting a deeper investigation into their limitations to advance ML-based molecular structure elucidation from LC-MS/MS and enable its utility in real-world settings. Here, we leverage data attribution methods to systematically identify and validate hypotheses about the sources of generalization challenges that hinder current model performance. Our goal is to automatically uncover insights into the failure modes of existing ML models for LC-MS/MS, thereby laying the foundation for developing more robust and accurate models.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Dynamic Objects in Scenes with Generative Particle Systems</title>
<link href="https://hdl.handle.net/1721.1/164643" rel="alternate"/>
<author>
<name>Li, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/164643</id>
<updated>2026-01-30T03:24:31Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modeling Dynamic Objects in Scenes with Generative Particle Systems
Li, Eric
Humans readily interpret the motion of deformable and rigid bodies, even when encountering unfamiliar objects with minimal shape or texture cues. In such cases, motion serves as a critical signal for recognition and understanding. Inspired by this ability, we propose a generative model that represents 3D matter as small Gaussians (“particles”) drawn from clusters capturing groups of coherently moving matter. We develop an efficient inference algorithm based on parallelized block Gibbs sampling to recover stable particle motion and rigid groupings. Our model provides a tractable, object-centric generalization of as-rigid-as-possible (ARAP) regularizers used in motion tracking. To assess alignment with human perceptual judgments, we test our approach on random dot kinematograms—sparse motion displays in which dot trajectories convey latent object structure, often used to probe visual understanding of motion and grouping. In this setting, our approach captures human-like responses, including graded patterns of uncertainty across ambiguous conditions. Applied to naturalistic RGB videos, it infers dense particle representations that track object motion and deformation over time. These results demonstrate that our model enables persistent latent scene structure suitable for object-level reasoning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Arm Qubit for Faster, Higher Fidelity Readout and Gates</title>
<link href="https://hdl.handle.net/1721.1/164642" rel="alternate"/>
<author>
<name>Kline, Jeremy B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164642</id>
<updated>2026-01-30T03:24:44Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Arm Qubit for Faster, Higher Fidelity Readout and Gates
Kline, Jeremy B.
Currently, superconducting qubit processors are bottlenecked by errors during two-qubit gates, readout, and idle time. All three error contributions could be reduced if we improved the speed of operations (without introducing additional leakage errors) compared to the qubit lifetime. Readout and two-qubit gates are multimode interactions and therefore are limited by the coupling strength between the modes. In this thesis, we introduce a two-mode superconducting qubit which uses one mode to facilitate strong coupling to other modes of the quantum processor and one mode to store data with high coherence. Simulations show that this architecture could enable order-of-magnitude reductions in error during readout and two-qubit gates.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering Algorithms for Component Placement in Printed Circuit Boards</title>
<link href="https://hdl.handle.net/1721.1/164641" rel="alternate"/>
<author>
<name>Petrusenko, Vlada</name>
</author>
<id>https://hdl.handle.net/1721.1/164641</id>
<updated>2026-01-30T03:24:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Clustering Algorithms for Component Placement in Printed Circuit Boards
Petrusenko, Vlada
In 2024, approximately 12 billion printed circuit boards (PCBs) were manufactured globally [1], a number that continues to grow, yet the majority of PCB layouts are still completed manually. The manual design process amounts to millions of hours of tedium that can be eased with automation. One of the biggest challenges is that complex PCB designs typically have hundreds, sometimes thousands, of components and even more net connections between them, which makes both manual and automated placement very time-consuming. To improve placement performance, in this thesis we constructed a custom weighted undirected graph representation of components and nets for any board that encodes physical and electrical constraints. Additionally, we integrated the Louvain and Leiden clustering algorithms for component clustering in PCB placement. We also showed comparative metrics against the spectral clustering algorithm applied to unweighted graph representations (the prior state of this project), which has no knowledge of the electrical and physical constraints associated with PCB designs and thus produces results that require more manual correction. This new clustering approach generated better clusterings, reduced average runtime by 51.05%, decreased the estimated routing length by 7.72%, and improved the component association score by 12.8%.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Data Drives ML Models Performance</title>
<link href="https://hdl.handle.net/1721.1/164640" rel="alternate"/>
<author>
<name>Khaddaj, Alaa</name>
</author>
<id>https://hdl.handle.net/1721.1/164640</id>
<updated>2026-01-30T03:06:27Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">How Data Drives ML Models Performance
Khaddaj, Alaa
Data has been playing an increasingly important role in the machine learning (ML) pipeline. This thesis deepens our understanding of the effect of data on model performance and reliability. First, we study how the choice of training data affects model performance. We consider a transfer learning setting and present a framework for selecting, from a large pool of data, a pretraining subset that improves model performance on downstream tasks. Our approach, however, requires training multiple target models, which becomes prohibitively expensive at large scale. To that end, we explore using smaller—and cheaper—proxy models to approximate large-model behavior and select the pretraining data using that cheaper model. We show the effectiveness of this approach in two dataset selection settings: language modeling and imitation learning. Second, we explore the role of data in model reliability and consider two threat models: backdoor attacks and malicious data editing. In the first threat model, an adversary injects a few doctored samples into the training set to control model predictions at inference time. We study the effect of these malicious samples on model behavior and then propose a framework for detecting and removing them from the training data. In the second threat model, an adversary leverages generative models, such as diffusion models, to maliciously modify personal data and generate harmful digital content. We focus on image editing and investigate how we can imperceptibly modify personal images to mitigate editing using diffusion models and raise the cost of harmful content generation. Overall, this thesis contributes to the understanding of the role of data in driving model behavior. Through these efforts, we aim to provide mechanisms for (i) training models that perform better and (ii) making models more reliable when deployed in the real world.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games</title>
<link href="https://hdl.handle.net/1721.1/164637" rel="alternate"/>
<author>
<name>Daskalakis, Constantinos</name>
</author>
<author>
<name>Farina, Gabriele</name>
</author>
<author>
<name>Fishelson, Maxwell</name>
</author>
<author>
<name>Pipis, Charilaos</name>
</author>
<author>
<name>Schneider, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/164637</id>
<updated>2026-01-27T04:50:46Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games
Daskalakis, Constantinos; Farina, Gabriele; Fishelson, Maxwell; Pipis, Charilaos; Schneider, Jon
We propose efficient no-regret learning dynamics and ellipsoid-based methods for computing linear correlated equilibria—a relaxation of correlated equilibria and a strengthening of coarse correlated equilibria—in general convex games. These are games where the number of pure strategies is potentially exponential in the natural representation of the game, such as extensive-form games. Our work identifies linear correlated equilibria as the tightest known notion of equilibrium that is computable in polynomial time and is efficiently learnable for general convex games. Our results are enabled by a generalization of the seminal framework of Gordon et al. for Φ-regret minimization, providing extensions to this framework that can be used even when the set of deviations Φ is intractable to separate/optimize over. Our polynomial-time algorithms are similarly enabled by extending the Ellipsoid-Against-Hope approach of Papadimitriou and Roughgarden and its generalization to games of non-polynomial type proposed by Farina and Pipis. We provide an extension to these approaches when we do not have access to the separation oracles required by these works for the dual player.
Constantinos Daskalakis, Gabriele Farina, Maxwell Fishelson, Charilaos Pipis, and Jon Schneider. 2025. Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC '25). Association for Computing Machinery, New York, NY, USA, 542–553.
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Connectivity Is Hard, Random Walks Are Easy with Non-determinism</title>
<link href="https://hdl.handle.net/1721.1/164636" rel="alternate"/>
<author>
<name>Doron, Dean</name>
</author>
<author>
<name>Pyne, Edward</name>
</author>
<author>
<name>Tell, Roei</name>
</author>
<author>
<name>Williams, R. Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/164636</id>
<updated>2026-01-27T04:50:34Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">When Connectivity Is Hard, Random Walks Are Easy with Non-determinism
Doron, Dean; Pyne, Edward; Tell, Roei; Williams, R. Ryan
Two fundamental problems on directed graphs are to decide s-t connectivity, and to estimate the behavior of random walks. Currently, there is no known algorithm for s-t connectivity running in polynomial time and n^o(1) space, and no known algorithm for estimating the n-step random walk matrix running in non-deterministic logspace.&#13;
We show that for every directed graph, at least one of these problems is solvable in time and space that significantly improve on the respective state-of-the-art. In particular, there is a pair of algorithms A1 and A2 such that for every graph G, either:&#13;
A1(G) outputs the transitive closure of G in polynomial time and polylogarithmic space. A2(G) outputs an approximation of the n-step random walk matrix of G in non-deterministic logspace.&#13;
As one application, we show surprisingly tight win-win results for space-bounded complexity. For example, for certain parameter regimes, either Savitch’s theorem can be non-trivially sped up, or randomized space can be almost completely derandomized.&#13;
We also apply our techniques to significantly weaken the assumptions required to derandomize space-bounded computation, and to make non-deterministic space-bounded computation unambiguous. Specifically, we deduce such conclusions from lower bounds against uniform circuits of polynomial size, which is an exponential improvement on the required hardness in previous works (Doron–Pyne–Tell STOC 2024, Li–Pyne–Tell FOCS 2024). We further show similar results for minimal-memory derandomization (Doron–Tell CCC 2024).&#13;
To prove these results, we substantially improve the array of technical tools introduced in recent years for studying hardness-vs.-randomness for bounded-space computation. In particular, we develop derandomized distinguish-to-predict transformations for new types of distinguishers (corresponding to compositions of PRGs with weak distinguishers), we construct a derandomized logspace reconstruction procedure for the Shaltiel–Umans generator (JACM 2005) that can compress hard truth-tables to polylogarithmic size, and we design a version of the Chen–Tell generator (FOCS 2021) that is particularly suitable for the space-bounded setting.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>QMA vs QCMA and Pseudorandomness</title>
<link href="https://hdl.handle.net/1721.1/164635" rel="alternate"/>
<author>
<name>Liu, Jiahui</name>
</author>
<author>
<name>Mutreja, Saachi</name>
</author>
<author>
<name>Yuen, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/164635</id>
<updated>2026-01-27T04:50:44Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">QMA vs QCMA and Pseudorandomness
Liu, Jiahui; Mutreja, Saachi; Yuen, Henry
We study a longstanding question of Aaronson and Kuperberg on whether there exists a classical oracle separating QMA from QCMA. Settling this question in either direction would yield insight into the power of quantum proofs over classical proofs. We show that such an oracle exists if a certain quantum pseudorandomness conjecture holds. Roughly speaking, the conjecture posits that quantum algorithms cannot, by making few queries, distinguish between the uniform distribution over permutations versus permutations drawn from so-called “dense” distributions.&#13;
Our result can be viewed as establishing a “win-win” scenario: either there is a classical oracle separation of QMA from QCMA, or there is quantum advantage in distinguishing pseudorandom distributions on permutations.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semantics of Integrating and Differentiating Singularities</title>
<link href="https://hdl.handle.net/1721.1/164634" rel="alternate"/>
<author>
<name>Michel, Jesse</name>
</author>
<author>
<name>Lee, Wonyeol</name>
</author>
<author>
<name>Yang, Hongseok</name>
</author>
<id>https://hdl.handle.net/1721.1/164634</id>
<updated>2026-01-27T04:50:47Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Semantics of Integrating and Differentiating Singularities
Michel, Jesse; Lee, Wonyeol; Yang, Hongseok
A singular function is a partial function such that at one or more points, the left and/or right limit diverge (e.g., the function 1/x). Since programming languages typically support division, programs may denote singular functions. Although on its own, a singularity may be considered a bug, introducing a division-by-zero error, singular integrals—a version of the integral that is well-defined when the integrand is a singular function and the domain of integration contains a singularity—arise in science and engineering, including in physics, aerodynamics, mechanical engineering, and computer graphics.&#13;
In this paper, we present the first semantics of a programming language for singular integration. Our differentiable programming language, SingularFlow, supports the evaluation and differentiation of singular integrals. We formally define the denotational semantics of SingularFlow, deriving all the necessary mathematical machinery so that this work is rigorous and self-contained. We then define an operational semantics for SingularFlow that estimates integrals and their derivatives using Monte Carlo samples, and show that the operational semantics is a well-behaved estimator for the denotational semantics.&#13;
We implement SingularFlow in JAX and evaluate the implementation on a suite of benchmarks that perform the finite Hilbert transform, an integral transform related to the Fourier transform, which arises in domains such as physics and electrical engineering. We then use SingularFlow to approximate the solutions to four singular integral equations—equations where the unknown function is in the integrand of a singular integral—arising in aerodynamics and mechanical engineering.
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>SoS Certificates for Sparse Singular Values and Their Applications: Robust Statistics, Subspace Distortion, and More</title>
<link href="https://hdl.handle.net/1721.1/164633" rel="alternate"/>
<author>
<name>Diakonikolas, Ilias</name>
</author>
<author>
<name>Hopkins, Samuel B.</name>
</author>
<author>
<name>Pensia, Ankit</name>
</author>
<author>
<name>Tiegel, Stefan</name>
</author>
<id>https://hdl.handle.net/1721.1/164633</id>
<updated>2026-01-27T04:50:50Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">SoS Certificates for Sparse Singular Values and Their Applications: Robust Statistics, Subspace Distortion, and More
Diakonikolas, Ilias; Hopkins, Samuel B.; Pensia, Ankit; Tiegel, Stefan
We study sparse singular value certificates for random rectangular matrices. If M is a d × n matrix with independent Gaussian entries, we give a new family of polynomial-time algorithms which can certify upper bounds on the maximum of ||M u||, where u is a unit vector with at most η n nonzero entries for a given η ∈ (0,1). This basic algorithmic primitive lies at the heart of a wide range of problems across algorithmic statistics and theoretical computer science, including robust mean and covariance estimation, certification of distortion of random subspaces of ℝ^n, certification of the 2 → p norm of a random matrix, and sparse principal component analysis.&#13;
Our algorithms certify a bound which is asymptotically smaller than the naive one, given by the maximum singular value of M, for nearly the widest-possible range of n,d, and η. Efficiently certifying such a bound for a range of n,d and η which is larger by any polynomial factor than what is achieved by our algorithm would violate lower bounds in the statistical query and low-degree polynomials models. Our certification algorithm makes essential use of the Sum-of-Squares hierarchy. To prove the correctness of our algorithm, we develop a new combinatorial connection between the graph matrix approach to analyze random matrices with dependent entries, and the Efron-Stein decomposition of functions of independent random variables.&#13;
As applications of our certification algorithm, we obtain new efficient algorithms for a wide range of well-studied algorithmic tasks. In algorithmic robust statistics, we obtain new algorithms for robust mean and covariance estimation with tradeoffs between breakdown point and sample complexity, which are nearly matched by statistical query and low-degree polynomial lower bounds (that we establish). We also obtain new polynomial-time guarantees for certification of ℓ1/ℓ2 distortion of random subspaces of ℝ^n (also with nearly matching lower bounds), sparse principal component analysis, and certification of the 2 → p norm of a random matrix.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weak Recovery, Hypothesis Testing, and Mutual Information in Stochastic Block Models and Planted Factor Graphs</title>
<link href="https://hdl.handle.net/1721.1/164632" rel="alternate"/>
<author>
<name>Mossel, Elchanan</name>
</author>
<author>
<name>Sly, Allan</name>
</author>
<author>
<name>Sohn, Youngtak</name>
</author>
<id>https://hdl.handle.net/1721.1/164632</id>
<updated>2026-01-27T04:51:10Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Weak Recovery, Hypothesis Testing, and Mutual Information in Stochastic Block Models and Planted Factor Graphs
Mossel, Elchanan; Sly, Allan; Sohn, Youngtak
The stochastic block model is a canonical model of communities in random graphs. It was introduced in the social sciences and statistics as a model of communities, and in theoretical computer science as an average case model for graph partitioning problems under the name of the “planted partition model.” Given a sparse stochastic block model, the two standard inference tasks are: (i) Weak recovery: can we estimate the communities with non-trivial overlap with the true communities? (ii) Detection/Hypothesis testing: can we distinguish if the sample was drawn from the block model or from a random graph with no community structure with probability tending to 1 as the graph size tends to infinity? In this work, we show that for sparse stochastic block models, the two inference tasks are equivalent except at a critical point. That is, weak recovery is information theoretically possible if and only if detection is possible. We thus find a strong connection between these two notions of inference for the model. We further prove that when detection is impossible, an explicit hypothesis test based on low-degree polynomials in the adjacency matrix of the observed graph achieves the optimal statistical power. This low-degree test is efficient as opposed to the likelihood ratio test, which is not known to be efficient. Moreover, we prove that the asymptotic mutual information between the observed network and the community structure exhibits a phase transition at the weak recovery threshold. Our results are proven in much broader settings including the hypergraph stochastic block models and general planted factor graphs. In these settings, we prove that the impossibility of weak recovery implies contiguity and provide a condition that guarantees the equivalence of weak recovery and detection.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations</title>
<link href="https://hdl.handle.net/1721.1/164631" rel="alternate"/>
<author>
<name>Bangachev, Kiril</name>
</author>
<author>
<name>Bresler, Guy</name>
</author>
<author>
<name>Tiegel, Stefan</name>
</author>
<author>
<name>Vaikuntanathan, Vinod</name>
</author>
<id>https://hdl.handle.net/1721.1/164631</id>
<updated>2026-01-27T04:50:49Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations
Bangachev, Kiril; Bresler, Guy; Tiegel, Stefan; Vaikuntanathan, Vinod
We present a polynomial-time reduction from solving noisy linear equations over ℤ/qℤ in dimension Θ(k log n/poly(log k, log q, log log n)) with a uniformly random coefficient matrix to noisy linear equations over ℤ/qℤ in dimension n where each row of the coefficient matrix has uniformly random support of size k. This allows us to deduce the hardness of sparse problems from their dense counterparts. In particular, we derive hardness results in the following canonical settings:&#13;
• Assuming the ℓ-dimensional (dense) learning with errors (LWE) problem over a polynomial-size field takes time 2^Ω(ℓ), k-sparse LWE in dimension n takes time n^Ω(k/(log k · (log k + log log n))).&#13;
• Assuming the ℓ-dimensional (dense) learning parity with noise (LPN) problem over ℤ/2ℤ takes time 2^Ω(ℓ/log ℓ), k-sparse LPN in dimension n takes time n^Ω(k/(log k · (log k + log log n)^2)).&#13;
These running time lower bounds are nearly tight, as both sparse problems can be solved in time n^O(k) given sufficiently many samples.&#13;
Our reduction allows us to derive several consequences in cryptography and the computational complexity of statistical problems. In addition, as a new application, we give a reduction from k-sparse LWE to noisy tensor completion. Concretely, composing the two reductions implies that order-k, rank-2^(k−1) noisy tensor completion in (ℝ^n)^⊗k takes time n^Ω(k/(log k · (log k + log log n))), assuming the exponential hardness of standard worst-case lattice problems.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Matching via In-n-Out Local Computation Algorithms</title>
<link href="https://hdl.handle.net/1721.1/164630" rel="alternate"/>
<author>
<name>Azarmehr, Amir</name>
</author>
<author>
<name>Behnezhad, Soheil</name>
</author>
<author>
<name>Ghafari, Alma</name>
</author>
<author>
<name>Rubinfeld, Ronitt</name>
</author>
<id>https://hdl.handle.net/1721.1/164630</id>
<updated>2026-01-27T04:50:43Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Stochastic Matching via In-n-Out Local Computation Algorithms
Azarmehr, Amir; Behnezhad, Soheil; Ghafari, Alma; Rubinfeld, Ronitt
Consider the following stochastic matching problem. We are given a known graph G=(V, E). An unknown subgraph Gp = (V, Ep) is realized where Ep includes every edge of E independently with some probability p ∈ (0, 1]. The goal is to query a sparse subgraph H of G, such that the realized edges in H include an approximate maximum matching of Gp.&#13;
This problem has been studied extensively over the last decade due to its applications in kidney exchange, online dating, and online labor markets. For any fixed є &gt; 0, [BDH STOC’20] showed that any graph G has a subgraph H with Õ(1/p) = (1/p) · poly(log(1/p)) maximum degree, achieving a (1−є)-approximation. A major open question is the best approximation achievable with O(1/p)-degree subgraphs. A long line of work has progressively improved the approximation in the O(1/p)-degree regime from .5 [BDH+ EC’15] to .501 [AKL EC’17], .656 [BHFR SODA’19], .666 [AB SOSA’19], .731 [BBD SODA’22] (bipartite graphs), and most recently to .68 [DS ’24].&#13;
In this work, we show that an O(1/p)-degree subgraph can obtain a (1−є)-approximation for any desirably small fixed є &gt; 0, achieving the best of both worlds.&#13;
Beyond its quantitative improvement, a key conceptual contribution of our work is to connect local computation algorithms (LCAs) to the stochastic matching problem for the first time.&#13;
While prior work on LCAs mainly focuses on their out-queries (the number of vertices probed to produce the output of a given vertex), our analysis also bounds the in-queries (the number of vertices that probe a given vertex). We prove that the outputs of LCAs with bounded in- and out-queries (in-n-out LCAs for short) have limited correlation, a property that our analysis crucially relies on and might find applications beyond stochastic matching.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Colloidal State Machines as Smart Tracers for Chemical Reactor Analysis</title>
<link href="https://hdl.handle.net/1721.1/164629" rel="alternate"/>
<author>
<name>Zhang, Ge</name>
</author>
<author>
<name>Yang, Jing Fan</name>
</author>
<author>
<name>Yang, Sungyun</name>
</author>
<author>
<name>Brooks, Allan M</name>
</author>
<author>
<name>Koman, Volodymyr B</name>
</author>
<author>
<name>Gong, Xun</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164629</id>
<updated>2026-01-24T03:10:54Z</updated>
<published>2023-06-29T00:00:00Z</published>
<summary type="text">Colloidal State Machines as Smart Tracers for Chemical Reactor Analysis
Zhang, Ge; Yang, Jing Fan; Yang, Sungyun; Brooks, Allan M; Koman, Volodymyr B; Gong, Xun; Strano, Michael S
A widely utilized tool in reactor analysis is the passive tracer, which reports the residence time distribution, allowing estimation of the conversion and other properties of the system. Recently, advances in microrobotics have introduced powered and functional entities with sizes comparable to some traditional tracers. This has motivated the concept of Smart Tracers that could record the local chemical concentrations, temperature, or other conditions as they progress through reactors. Herein, the design constraints and advantages of Smart Tracers are analyzed by simulating their operation in a laminar flow reactor model conducting chemical reactions of various orders. It is noted that far fewer particles are necessary to completely map even the most complex concentration gradients compared with their conventional counterparts. Design criteria explored herein include sampling frequency, memory storage capacity, and the ensemble number necessary to achieve the required accuracy to inform a reactor model. Cases of severe particle diffusion and sensor noise appear to bound the functional upper limit of such probes and require consideration in future designs. The results of the study provide a starting framework for applying the new technology of microrobotics to the broad and impactful set of problems classified as chemical reactor analysis.
</summary>
<dc:date>2023-06-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigation of ventilation air methane (VAM) using novel methanotrophic coating materials: a technical analysis</title>
<link href="https://hdl.handle.net/1721.1/164628" rel="alternate"/>
<author>
<name>Lundberg, Daniel James</name>
</author>
<author>
<name>Kim, Jimin</name>
</author>
<author>
<name>Parviz, Dorsa</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164628</id>
<updated>2026-01-24T03:11:06Z</updated>
<published>2023-10-20T00:00:00Z</published>
<summary type="text">Mitigation of ventilation air methane (VAM) using novel methanotrophic coating materials: a technical analysis
Lundberg, Daniel James; Kim, Jimin; Parviz, Dorsa; Strano, Michael S
Ventilation air methane (VAM) is a potent greenhouse gas source originating from geological wells, current and extinct mineshafts, and other terrestrial conduits venting methane to the atmosphere, contributing to global methane emissions and a disproportionate warming potential. Herein, we introduce the concept of the methanotrophic material as an engineering solution. Such materials should be capable of converting methane at ambient temperatures and pressures to a binder product, capturing and permanently sequestering the methane while simultaneously restricting its further emission. While such materials are currently under research development, this goal is supported and facilitated by the mathematical framework, introduced and used herein, to evaluate the ability to convert methane, using currently published activity data. We include a case study of the conversion of a characteristic stream of VAM (0.6% methane in air, 1.7 × 10⁸ l hr⁻¹, equivalent to 100 000 standard cubic feet per minute). We show that when appropriately designed, such systems require a surface coverage of less than 1000 m of mine tunnel length (equivalent to 20 000 m² areal coverage) in order to reduce the methane emission from this stream by over 99%. Finally, we highlight formaldehyde as a reactive intermediate of methane oxidation which may itself be incorporated into these coating materials. As a component of binders and polymers already used ubiquitously in commercial products, this intermediate ultimately allows these systems to sequester the carbon from methane in a stable and solid form. The results presented here are easily extended to the treatment of other methane streams, either more concentrated or dilute, and will guide the design and development of a new class of carbon-negative materials.
</summary>
<dc:date>2023-10-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synergistic multi-source ambient RF and thermal energy harvester for green IoT applications</title>
<link href="https://hdl.handle.net/1721.1/164627" rel="alternate"/>
<author>
<name>Bakytbekov, Azamat</name>
</author>
<author>
<name>Nguyen, Thang Q</name>
</author>
<author>
<name>Zhang, Ge</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<author>
<name>Salama, Khaled N</name>
</author>
<author>
<name>Shamim, Atif</name>
</author>
<id>https://hdl.handle.net/1721.1/164627</id>
<updated>2026-01-24T03:11:05Z</updated>
<published>2023-12-01T00:00:00Z</published>
<summary type="text">Synergistic multi-source ambient RF and thermal energy harvester for green IoT applications
Bakytbekov, Azamat; Nguyen, Thang Q; Zhang, Ge; Strano, Michael S; Salama, Khaled N; Shamim, Atif
In a future green Internet of Things (IoT) reality, billions of devices of the IoT infrastructure should be self-powered. Harvesting ambient energy to power IoT devices is an attractive solution that can extend battery life or can completely replace batteries. Considering the global applications of IoT, ubiquitous and continuous availability is an important requirement for ambient energy sources. Radio frequency (RF) energy from mobile phone towers and thermal energy from diurnal cycle temperature fluctuations are good candidates. In this study, we present a synergistic multi-source energy harvester (MSEH) comprising an RF energy harvester (RFEH) and a thermal energy harvester (TEH) integrated through a dual-function component, the heatsink antenna. Both harvesters collect ambient energy 24 h a day and are not location specific. The TEH, which is in the shape of a box, collects energy using heatsinks on its sidewalls. The same heatsinks are optimized to also serve as receiving antennas of the RFEH, which collects energy from the GSM900, GSM1800, and 3G bands. Due to the synergistic integration, the radiation efficiency of the antenna doubled from 40% to 80%, which resulted in a ∼10% increase in the power conversion efficiency of the RFEH. Similarly, the average power of the TEH without heatsinks (120 μW) is doubled to 240 μW with heatsinks. Field tests have shown that the outputs of the TEH and RFEH have increased 4 and 3 times compared to the independent TEH and RFEH, respectively. A temperature and humidity sensor based IoT node has been successfully powered through this energy harvesting system. Overall, the MSEH can collect 3680 μW h of energy per day, which is sufficient to obtain the sensor data at a time interval of 3.5 s.
</summary>
<dc:date>2023-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chromatic covalent organic frameworks enabling in-vivo chemical tomography</title>
<link href="https://hdl.handle.net/1721.1/164626" rel="alternate"/>
<author>
<name>Wang, Song</name>
</author>
<author>
<name>Han, Yangyang</name>
</author>
<author>
<name>Reddy, Vaishnavi Amarr</name>
</author>
<author>
<name>Ang, Mervin Chun-Yi</name>
</author>
<author>
<name>Sánchez-Velázquez, Gabriel</name>
</author>
<author>
<name>Saju, Jolly Madathiparambil</name>
</author>
<author>
<name>Cao, Yunteng</name>
</author>
<author>
<name>Khong, Duc Thinh</name>
</author>
<author>
<name>Jayapal, Praveen Kumar</name>
</author>
<author>
<name>Cheerlavancha, Raju</name>
</author>
<author>
<name>Loh, Suh In</name>
</author>
<author>
<name>Singh, Gajendra Pratap</name>
</author>
<author>
<name>Urano, Daisuke</name>
</author>
<author>
<name>Rajani, Sarojam</name>
</author>
<author>
<name>Marelli, Benedetto</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164626</id>
<updated>2026-01-24T03:11:03Z</updated>
<published>2024-10-28T00:00:00Z</published>
<summary type="text">Chromatic covalent organic frameworks enabling in-vivo chemical tomography
Wang, Song; Han, Yangyang; Reddy, Vaishnavi Amarr; Ang, Mervin Chun-Yi; Sánchez-Velázquez, Gabriel; Saju, Jolly Madathiparambil; Cao, Yunteng; Khong, Duc Thinh; Jayapal, Praveen Kumar; Cheerlavancha, Raju; Loh, Suh In; Singh, Gajendra Pratap; Urano, Daisuke; Rajani, Sarojam; Marelli, Benedetto; Strano, Michael S
Covalent organic frameworks designed as chromatic sensors offer opportunities to probe biological interfaces, particularly when combined with biocompatible matrices. Particularly compelling is the prospect of chemical tomography – or the 3D spatial mapping of chemical detail within the complex environment of living systems. Herein, we demonstrate a chromic Covalent Organic Framework (COF) integrated within silk fibroin (SF) microneedles that probe plant vasculature, sense the alkalization of vascular fluid as a biomarker for drought stress, and provide a 3D in-vivo mapping of chemical gradients using smartphone technology. A series of Schiff base COFs with tunable pKa ranging from 5.6 to 7.6 enable conical, optically transparent SF microneedles with COF coatings of 120 to 950 nm to probe vascular fluid and the surrounding tissues of tobacco and tomato plants. The conical design allows for 3D mapping of the chemical environment (such as pH) at standoff distances from the plant, enabling in-vivo chemical tomography. Chromatic COF sensors of this type will enable multidimensional chemical mapping of previously inaccessible and complex environments.
</summary>
<dc:date>2024-10-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decoding early stress signaling waves in living plants using nanosensor multiplexing</title>
<link href="https://hdl.handle.net/1721.1/164625" rel="alternate"/>
<author>
<name>Ang, Mervin Chun-Yi</name>
</author>
<author>
<name>Saju, Jolly Madathiparambil</name>
</author>
<author>
<name>Porter, Thomas K</name>
</author>
<author>
<name>Mohaideen, Sayyid</name>
</author>
<author>
<name>Sarangapani, Sreelatha</name>
</author>
<author>
<name>Khong, Duc Thinh</name>
</author>
<author>
<name>Wang, Song</name>
</author>
<author>
<name>Cui, Jianqiao</name>
</author>
<author>
<name>Loh, Suh In</name>
</author>
<author>
<name>Singh, Gajendra Pratap</name>
</author>
<author>
<name>Chua, Nam-Hai</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<author>
<name>Sarojam, Rajani</name>
</author>
<id>https://hdl.handle.net/1721.1/164625</id>
<updated>2026-01-24T03:11:01Z</updated>
<published>2024-01-01T00:00:00Z</published>
<summary type="text">Decoding early stress signaling waves in living plants using nanosensor multiplexing
Ang, Mervin Chun-Yi; Saju, Jolly Madathiparambil; Porter, Thomas K; Mohaideen, Sayyid; Sarangapani, Sreelatha; Khong, Duc Thinh; Wang, Song; Cui, Jianqiao; Loh, Suh In; Singh, Gajendra Pratap; Chua, Nam-Hai; Strano, Michael S; Sarojam, Rajani
Increased exposure to environmental stresses due to climate change has adversely affected plant growth and productivity. Upon stress, plants activate a signaling cascade, involving multiple molecules like H2O2 and plant hormones such as salicylic acid (SA), leading to resistance or stress adaptation. However, the temporal ordering and composition of the resulting cascade remain largely unknown. In this study we developed a nanosensor for SA and multiplexed it with an H2O2 nanosensor for simultaneous monitoring of stress-induced H2O2 and SA signals when Brassica rapa subsp. chinensis (Pak choi) plants were subjected to distinct stress treatments, namely light, heat, pathogen stress, and mechanical wounding. The nanosensors reported distinct dynamics and temporal wave characteristics of H2O2 and SA generation for each stress. Based on these temporal insights, we have formulated a biochemical kinetic model suggesting that the early H2O2 waveform encodes information specific to each stress type. These results demonstrate that sensor multiplexing can reveal stress signaling mechanisms in plants, aiding the development of climate-resilient crops and pre-symptomatic stress diagnoses.
</summary>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polymeric Nanocarriers Autonomously Cross the Plant Cell Wall and Enable Protein Delivery for Stress Sensing</title>
<link href="https://hdl.handle.net/1721.1/164624" rel="alternate"/>
<author>
<name>Zhang, Yilin</name>
</author>
<author>
<name>Cao, Yunteng</name>
</author>
<author>
<name>Jiang, Wenzhi</name>
</author>
<author>
<name>Ma, Qingquan</name>
</author>
<author>
<name>Shin, Jinwoo</name>
</author>
<author>
<name>Sun, Hui</name>
</author>
<author>
<name>Cui, Jianqiao</name>
</author>
<author>
<name>Chen, Yongsheng</name>
</author>
<author>
<name>Giraldo, Juan Pablo</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<author>
<name>Lowry, Gregory V</name>
</author>
<author>
<name>Sheen, Jen</name>
</author>
<author>
<name>Marelli, Benedetto</name>
</author>
<id>https://hdl.handle.net/1721.1/164624</id>
<updated>2026-01-24T03:10:59Z</updated>
<published>2024-08-16T00:00:00Z</published>
<summary type="text">Polymeric Nanocarriers Autonomously Cross the Plant Cell Wall and Enable Protein Delivery for Stress Sensing
Zhang, Yilin; Cao, Yunteng; Jiang, Wenzhi; Ma, Qingquan; Shin, Jinwoo; Sun, Hui; Cui, Jianqiao; Chen, Yongsheng; Giraldo, Juan Pablo; Strano, Michael S; Lowry, Gregory V; Sheen, Jen; Marelli, Benedetto
Delivery of proteins into plant cells can facilitate the design of desired functions by modulation of biological processes and plant traits but is currently limited by narrow host range, tissue damage, and poor scalability. Physical barriers in plants, including cell walls and membranes, limit protein delivery to desired plant tissues. Herein, a cationic, high-aspect-ratio polymeric nanocarrier (PNC) platform is developed to enable efficient protein delivery to plants. The cationic nature of the PNCs binds proteins through electrostatic interactions. The ability to precisely design the PNCs’ size and aspect ratio allowed us to find a cutoff of ≈14 nm in the cell wall, below which cationic PNCs can autonomously overcome the barrier and carry their cargo into plant cells. To exploit these findings, a reduction‐oxidation sensitive green fluorescent protein (roGFP) is deployed as a stress-sensor protein cargo in the model plant Nicotiana benthamiana and common crop plants, including tomato and maize. In vivo imaging of PNC‐roGFP enabled optical monitoring of plant responses to wounding, biotic, and heat stressors. These results show that PNCs can be precisely designed below the size exclusion limit of cell walls to overcome current limitations in protein delivery to plants and facilitate species‐independent plant engineering.
</summary>
<dc:date>2024-08-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Glucose Responsive Glucagon Therapeutics using Computational Models of the Glucoregulatory System</title>
<link href="https://hdl.handle.net/1721.1/164623" rel="alternate"/>
<author>
<name>Alizadehmojarad, Ali A</name>
</author>
<author>
<name>Yang, Sungyun</name>
</author>
<author>
<name>Gong, Xun</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164623</id>
<updated>2026-03-08T03:39:38Z</updated>
<published>2024-08-29T00:00:00Z</published>
<summary type="text">Analysis of Glucose Responsive Glucagon Therapeutics using Computational Models of the Glucoregulatory System
Alizadehmojarad, Ali A; Yang, Sungyun; Gong, Xun; Strano, Michael S
Glucose‐responsive glucagon (GRG) therapeutics are a promising technology for reducing the risk of severe hypoglycemia as a complication of diabetes mellitus. Herein, the performance of candidate GRGs in the literature is evaluated and projected by modeling the kinetics of activation and connecting them as input into physiological glucoregulatory models. Two distinct GRG designs are considered, based on experimental results reported in Wu et al. (GRG‐I) and Webber et al. (GRG‐II). Both are evaluated using a multi‐compartmental glucoregulatory model (IMPACT) and compared against in‐vivo experimental data of therapeutic performance in rats and mice. For GRG‐I and GRG‐II, the total integrated glucose material balances are overestimated by 41.5% ± 14% and underestimated by 24.8% ± 16% compared to in‐vivo time‐course data, respectively. These large differences are attributed to the relatively simple computational descriptions of glucagon dynamics in the model, which underscores the urgent need for improved glucagon models. Additionally, therapeutic insulin and glucagon infusion pumps are modeled for type 1 diabetes mellitus (T1DM) human subjects to extend the results to additional datasets. These observations suggest that both the representative physiological and non‐physiological models considered in this work require additional refinement to successfully describe clinical data that involve simultaneous, coupled insulin, glucose, and glucagon dynamics.
</summary>
<dc:date>2024-08-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Microrobotic Design for the Spontaneous Tracing of Isochemical Contours in the Environment</title>
<link href="https://hdl.handle.net/1721.1/164622" rel="alternate"/>
<author>
<name>Brooks, A Merritt</name>
</author>
<author>
<name>Yang, Sungyun</name>
</author>
<author>
<name>Kang, Byung Ha</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164622</id>
<updated>2026-03-08T03:39:42Z</updated>
<published>2024-09-09T00:00:00Z</published>
<summary type="text">A Microrobotic Design for the Spontaneous Tracing of Isochemical Contours in the Environment
Brooks, A Merritt; Yang, Sungyun; Kang, Byung Ha; Strano, Michael S
Microrobotic platforms hold significant potential to advance a variety of fields, from medicine to environmental sensing. Herein, minimally functional robotic entities modeled on readily achievable state-of-the-art features in a modern lab or cleanroom are computationally simulated. Inspired by Dou and Bishop (Phys Rev Res. 2019;1(3):1–5), it is shown that the simple combination of unidirectional steering connected to a single environmental (chemical) sensor along with constant propulsion gives rise to highly complex functions of significant utility. Such systems can trace the contours orthogonal to arbitrary chemical gradients in the environment. Also, pairs of such robots that are additionally capable of emitting the same chemical signal are shown to exhibit coupled relative motion. When the pair has unidirectional steering in opposite directions within the 2D plane (i.e., counter-rotating), they move in parallel trajectories to each other. Alternatively, when steering is in the same direction (corotation), the two move in the same epicyclical trajectory. In this way, the chirality of the unidirectional steering produces two distinct emergent phenomena. The behavior is understood as a ratchet mechanism that exploits the differential in the radii of curvature corresponding to different spatial locations. Applications to environmental detection, remediation, and monitoring are discussed.
</summary>
<dc:date>2024-09-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrokinetic Motion of Neurotransmitter Ions through a 1.01 nm Diameter Single-Walled Carbon Nanotube</title>
<link href="https://hdl.handle.net/1721.1/164621" rel="alternate"/>
<author>
<name>Ellison, Mark D</name>
</author>
<author>
<name>Allen, Jacqueline</name>
</author>
<author>
<name>Bonfiglio, Michael</name>
</author>
<author>
<name>Seeburger, Matthew</name>
</author>
<author>
<name>Setenet, Jean</name>
</author>
<author>
<name>DiGinto, Biagio</name>
</author>
<author>
<name>Bonanny, Harrison</name>
</author>
<author>
<name>Russell, Aaliyah</name>
</author>
<author>
<name>Baird, David</name>
</author>
<author>
<name>Davis, Liana</name>
</author>
<author>
<name>McCarthy, Ella</name>
</author>
<author>
<name>Manley, Alyson</name>
</author>
<author>
<name>Blatt, Sarah</name>
</author>
<author>
<name>Lippe, David</name>
</author>
<author>
<name>Ragone, Daniel</name>
</author>
<author>
<name>Dyer, Brock</name>
</author>
<author>
<name>Osgood, Jillian</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164621</id>
<updated>2026-03-08T03:39:40Z</updated>
<published>2025-03-11T00:00:00Z</published>
<summary type="text">Electrokinetic Motion of Neurotransmitter Ions through a 1.01 nm Diameter Single-Walled Carbon Nanotube
Ellison, Mark D; Allen, Jacqueline; Bonfiglio, Michael; Seeburger, Matthew; Setenet, Jean; DiGinto, Biagio; Bonanny, Harrison; Russell, Aaliyah; Baird, David; Davis, Liana; McCarthy, Ella; Manley, Alyson; Blatt, Sarah; Lippe, David; Ragone, Daniel; Dyer, Brock; Osgood, Jillian; Strano, Michael S
The transport of cations of the neurotransmitters acetylcholine, choline, and dopamine through a 1.01 nm-diameter, 1.1 mm-long single-walled carbon nanotube (SWNT) has been studied for the first time. As a comparison, sodium and aniline ion transport was also investigated. All of these ions exhibited significantly enhanced electrophoretic mobilities over bulk transport. The electrophoretic mobilities of acetylcholine, choline, and sodium were found to depend on pH, specifically increasing as pH decreases. This result is explained by hydrogen ions saturating the surface charges of the SWNT. Conversely, dopamine and aniline have mobilities that do not depend on pH. This difference is attributed to the benzene ring and the size of these ions. An analysis of the time required for an ion to traverse the nanotube shows that the ions adsorb to and desorb from the walls as they pass through the tube. Acetylcholine, choline, and sodium show desorption rate constants that decrease with increasing pH, whereas dopamine and aniline have rate constants that remain constant over different pH values. This is consistent with the relationship between adsorption and desorption rate constants and mobility from an adsorption/desorption kinetic model.
</summary>
<dc:date>2025-03-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancements in Plant Diagnostic and Sensing Technologies</title>
<link href="https://hdl.handle.net/1721.1/164620" rel="alternate"/>
<author>
<name>Krishnamoorthi, Shalini</name>
</author>
<author>
<name>Koh, Sally Shuxian</name>
</author>
<author>
<name>Ang, Mervin Chun‐Yi</name>
</author>
<author>
<name>Teo, Mark Ju Teng</name>
</author>
<author>
<name>Jie, Randall Ang</name>
</author>
<author>
<name>Dinish, US</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<author>
<name>Urano, Daisuke</name>
</author>
<id>https://hdl.handle.net/1721.1/164620</id>
<updated>2026-03-08T03:39:42Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Advancements in Plant Diagnostic and Sensing Technologies
Krishnamoorthi, Shalini; Koh, Sally Shuxian; Ang, Mervin Chun‐Yi; Teo, Mark Ju Teng; Jie, Randall Ang; Dinish, US; Strano, Michael S; Urano, Daisuke
Recent advancements in plant sensing technologies have significantly improved agricultural productivity while reducing resource inputs, resulting in higher yields by enabling early disease detection, precise diagnostics, and optimized fertilizer and pesticide applications. Each adopted technology offers unique advantages suitable for various farm operations, breeding programs, and laboratory research. This review article first summarizes key target traits, endogenous structures, and metabolites that serve as focal points for plant diagnostic and sensing technologies. Next, conventional plant sensing technologies based on light reflectance and fluorescence, which rely on foliar phytopigments and fluorophores such as chlorophylls, are discussed. These methods, along with advanced analytical strategies incorporating machine learning, enable accurate stress detection and classification beyond general assessments of plant health and stress status. Advanced optical techniques, such as Fourier transform infrared spectroscopy (FT‐IR) and Raman spectroscopy, which allow specific measurements of various plant metabolites and structural components, are then highlighted. Furthermore, the design and applications of nanotechnology chemical sensors capable of highly sensitive and selective detection of specific phytochemicals, including phytohormones and signaling second messengers, which regulate physiological and developmental processes at micro‐ to sub‐micromolar concentrations, are introduced. By selecting appropriate sensing methodologies, agricultural production and relevant research activities can be significantly improved.
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth</title>
<link href="https://hdl.handle.net/1721.1/164619" rel="alternate"/>
<author>
<name>Kahanamoku-Meyer, Gregory D.</name>
</author>
<author>
<name>Ragavan, Seyoon</name>
</author>
<author>
<name>Vaikuntanathan, Vinod</name>
</author>
<author>
<name>Van Kirk, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/164619</id>
<updated>2026-03-08T03:22:28Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">The Jacobi Factoring Circuit: Quantum Factoring with Near-Linear Gates and Sublinear Space and Depth
Kahanamoku-Meyer, Gregory D.; Ragavan, Seyoon; Vaikuntanathan, Vinod; Van Kirk, Katherine
We present a compact quantum circuit for factoring a large class of integers, including some whose classical hardness is expected to be equivalent to RSA (but not including RSA integers themselves). Most notably, we factor n-bit integers of the form P^2 Q with log Q = Θ(n^a) for a ∈ (2/3, 1) in space and depth sublinear in n (specifically, Õ(log Q)) using Õ(n) quantum gates; for these integers, no known classical algorithms exploit the relatively small size of Q to run asymptotically faster than general-purpose factoring algorithms. To our knowledge, this is the first polynomial-time circuit to achieve sublinear qubit count for a classically-hard factoring problem. We thus believe that factoring such numbers has potential to be the most concretely efficient classically-verifiable proof of quantumness currently known.&#13;
Our circuit builds on the quantum algorithm for squarefree decomposition discovered by Li, Peng, Du, and Suter (Nature Scientific Reports 2012), which relies on computing the Jacobi symbol in quantum superposition. The technical core of our contribution is a new space-efficient quantum algorithm to compute the Jacobi symbol of A mod B, in the regime where B is classical and much larger than A. Our circuit for computing the Jacobi symbol generalizes to related problems such as computing the greatest common divisor and modular inverses, and thus could be of independent interest.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classical Commitments to Quantum States</title>
<link href="https://hdl.handle.net/1721.1/164618" rel="alternate"/>
<author>
<name>Gunn, Sam</name>
</author>
<author>
<name>Tauman Kalai, Yael</name>
</author>
<author>
<name>Natarajan, Anand</name>
</author>
<author>
<name>Villányi, Ági</name>
</author>
<id>https://hdl.handle.net/1721.1/164618</id>
<updated>2026-03-08T03:22:25Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Classical Commitments to Quantum States
Gunn, Sam; Tauman Kalai, Yael; Natarajan, Anand; Villányi, Ági
We define the notion of a classical commitment scheme to quantum states, which allows a quantum prover to compute a classical commitment to a quantum state, and later open each qubit of the state in either the standard or the Hadamard basis. Our notion is a strengthening of the measurement protocol from Mahadev (STOC 2018). We construct such a commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and more generally from any noisy trapdoor claw-free function family that has the distributional strong adaptive hardcore bit property (a property that we define in this work).&#13;
Our scheme is succinct in the sense that the running time of the verifier in the commitment phase depends only on the security parameter (independent of the size of the committed state), and its running time in the opening phase grows only with the number of qubits that are being opened (and the security parameter). As a corollary we obtain a classical succinct argument system for QMA under the post-quantum LWE assumption. Previously, this was only known assuming post-quantum secure indistinguishability obfuscation. As an additional corollary we obtain a generic way of converting any X/Z quantum PCP into a succinct argument system under the quantum hardness of LWE.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Symmetric Perceptrons, Number Partitioning and Lattices</title>
<link href="https://hdl.handle.net/1721.1/164617" rel="alternate"/>
<author>
<name>Vafa, Neekon</name>
</author>
<author>
<name>Vaikuntanathan, Vinod</name>
</author>
<id>https://hdl.handle.net/1721.1/164617</id>
<updated>2026-03-08T03:22:22Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Symmetric Perceptrons, Number Partitioning and Lattices
Vafa, Neekon; Vaikuntanathan, Vinod
The symmetric binary perceptron (SBPκ) problem with parameter κ : ℝ≥1 → [0,1] is an average-case search problem defined as follows: given a random Gaussian matrix A ∼ N(0,1)^(n×m) as input where m ≥ n, output a vector x ∈ {−1,1}^m such that ||Ax||∞ ≤ κ(m/n) · √m.&#13;
The number partitioning problem (NPPκ) corresponds to the special case of setting n=1. There is considerable evidence that both problems exhibit large computational-statistical gaps.&#13;
In this work, we show (nearly) tight average-case hardness for these problems, assuming the worst-case hardness of standard approximate shortest vector problems on lattices.&#13;
• For SBPκ, statistically, solutions exist with κ(x) = 2^(−Θ(x)) (Aubin, Perkins and Zdeborová, Journal of Physics 2019). For large n, the best that efficient algorithms have been able to achieve is a far cry from the statistical bound, namely κ(x) = Θ(1/√x) (Bansal and Spencer, Random Structures and Algorithms 2020). The problem has been extensively studied in the TCS and statistics communities, and Gamarnik, Kızıldağ, Perkins and Xu (FOCS 2022) conjecture that Bansal-Spencer is tight: namely, κ(x) = Θ(1/√x) is the optimal value achieved by computationally efficient algorithms. We prove their conjecture assuming the worst-case hardness of approximating the shortest vector problem on lattices.&#13;
• For NPPκ, statistically, solutions exist with κ(m) = Θ(2^(−m)) (Karmarkar, Karp, Lueker and Odlyzko, Journal of Applied Probability 1986). Karmarkar and Karp’s classical differencing algorithm achieves κ(m) = 2^(−O(log^2 m)). We prove that Karmarkar-Karp is nearly tight: namely, no polynomial-time algorithm can achieve κ(m) = 2^(−Ω(log^3 m)), once again assuming the worst-case subexponential hardness of approximating the shortest vector problem on lattices to within a subexponential factor.&#13;
Our hardness results are versatile, and hold with respect to different distributions of the matrix A (e.g., i.i.d. uniform entries from [0,1]) and weaker requirements on the solution vector x.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>DNF Learning via Locally Mixing Random Walks</title>
<link href="https://hdl.handle.net/1721.1/164616" rel="alternate"/>
<author>
<name>Alman, Josh</name>
</author>
<author>
<name>Nadimpalli, Shivam</name>
</author>
<author>
<name>Patel, Shyamal</name>
</author>
<author>
<name>Servedio, Rocco A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164616</id>
<updated>2026-03-08T03:22:21Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">DNF Learning via Locally Mixing Random Walks
Alman, Josh; Nadimpalli, Shivam; Patel, Shyamal; Servedio, Rocco A.
We give two results on PAC learning DNF formulas using membership queries in the challenging “distribution-free” learning framework, where learning algorithms must succeed for an arbitrary and unknown distribution over {0,1}^n.&#13;
(1) We first give a quasi-polynomial time “list-decoding” algorithm for learning a single term of an unknown DNF formula. More precisely, for any target s-term DNF formula f = T_1 ∨ ⋯ ∨ T_s over {0,1}^n and any unknown distribution D over {0,1}^n, our algorithm, which uses membership queries and random examples from D, runs in quasipoly(n,s) time and outputs a list L of candidate terms such that with high probability some term T_i of f belongs to L.&#13;
(2) We then use result (1) to give a quasipoly(n,s)-time algorithm, in the distribution-free PAC learning model with membership queries, for learning the class of size-s DNFs in which all terms have the same size. Our algorithm learns using a DNF hypothesis.&#13;
The key tool used to establish result (1) is a new result on “locally mixing random walks,” which, roughly speaking, shows that a random walk on a graph that is covered by a small number of expanders has a non-negligible probability of mixing quickly in a subset of these expanders.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Near Optimal Constant Inapproximability under ETH for Fundamental Problems in Parameterized Complexity</title>
<link href="https://hdl.handle.net/1721.1/164615" rel="alternate"/>
<author>
<name>Bafna, Mitali</name>
</author>
<author>
<name>Karthik C. S.</name>
</author>
<author>
<name>Minzer, Dor</name>
</author>
<id>https://hdl.handle.net/1721.1/164615</id>
<updated>2026-03-08T03:22:48Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Near Optimal Constant Inapproximability under ETH for Fundamental Problems in Parameterized Complexity
Bafna, Mitali; Karthik C. S.; Minzer, Dor
We prove that under the Exponential Time Hypothesis (ETH), for every ε &gt; 0, there exists a constant C &gt; 0 such that no algorithm running in time n^(k / log^C k) can determine whether a given 2-CSP instance with k variables, O(k) constraints, and alphabet size n, is perfectly satisfiable or if every assignment satisfies at most an ε fraction of the constraints.&#13;
By known reductions in the literature, the above result implies near-optimal conditional lower bounds for approximating a host of parameterized problems, such as the k-Clique problem, k-Max-Coverage problem, k-Unique Set Cover problem, k-Median and k-Means problems, parameterized variants of the Nearest Codeword problem, Minimum Distance of a Code problem, Closest Vector problem, and Shortest Vector problem.&#13;
We also establish a densification theorem for the parameterized 2-CSP problem, showing that the aforementioned conditional lower bound for sparse 2-CSPs also holds when the constraint graph is a complete graph. From this densification, we conclude that assuming ETH, there is no algorithm running in time n^(√k / log^C k) that approximates the k-Directed Steiner Network problem and the k-Strongly Connected Steiner Subgraph problem to some constant factors.
Mitali Bafna, Karthik C. S., and Dor Minzer. 2025. Near Optimal Constant Inapproximability under ETH for Fundamental Problems in Parameterized Complexity. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC '25). Association for Computing Machinery, New York, NY, USA, 2118–2129.
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oblivious Defense in ML Models: Backdoor Removal without Detection</title>
<link href="https://hdl.handle.net/1721.1/164614" rel="alternate"/>
<author>
<name>Goldwasser, Shafi</name>
</author>
<author>
<name>Shafer, Jonathan</name>
</author>
<author>
<name>Vafa, Neekon</name>
</author>
<author>
<name>Vaikuntanathan, Vinod</name>
</author>
<id>https://hdl.handle.net/1721.1/164614</id>
<updated>2026-03-08T03:22:47Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Oblivious Defense in ML Models: Backdoor Removal without Detection
Goldwasser, Shafi; Shafer, Jonathan; Vafa, Neekon; Vaikuntanathan, Vinod
As society grows more reliant on machine learning, ensuring the security of machine learning systems against sophisticated attacks becomes a pressing concern. A recent result of&#13;
Goldwasser, Kim, Vaikuntanathan, and Zamir (FOCS ’22) shows that an adversary can plant undetectable backdoors in machine learning models, allowing the adversary to covertly control the model’s behavior. Backdoors can be planted in such a way that the backdoored machine learning model is computationally indistinguishable from an honest model without backdoors.&#13;
In this paper, we present strategies for defending against backdoors in ML models, even if they are undetectable. The key observation is that it is sometimes possible to provably mitigate or even remove backdoors without needing to detect them, using techniques inspired by the notion of random self-reducibility. This depends on properties of the ground-truth labels (chosen by nature), and not of the proposed ML model (which may be chosen by an attacker).&#13;
We give formal definitions for secure backdoor mitigation, and proceed to show two types of results. First, we show a “global mitigation” technique, which removes all backdoors from a machine learning model under the assumption that the ground-truth labels are close to a Fourier-heavy function. Second, we consider distributions where the ground-truth labels are close to a linear or polynomial function in ℝ^n. Here, we show “local mitigation” techniques, which remove backdoors with high probability for every input of interest, and are computationally cheaper than global mitigation. All of our constructions are black-box, so our techniques work without needing access to the model’s representation (i.e., its code or parameters). Along the way we prove a simple result for robust mean estimation.&#13;
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Faster Rates for No-Regret Learning in General Games via Cautious Optimism</title>
<link href="https://hdl.handle.net/1721.1/164613" rel="alternate"/>
<author>
<name>Soleymani, Ashkan</name>
</author>
<author>
<name>Piliouras, Georgios</name>
</author>
<author>
<name>Farina, Gabriele</name>
</author>
<id>https://hdl.handle.net/1721.1/164613</id>
<updated>2026-03-08T03:22:27Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Faster Rates for No-Regret Learning in General Games via Cautious Optimism
Soleymani, Ashkan; Piliouras, Georgios; Farina, Gabriele
We establish the first uncoupled learning algorithm that attains O(n log^2 d · log T) per-player regret in multi-player general-sum games, where n is the number of players, d is the number of actions available to each player, and T is the number of repetitions of the game. Our results exponentially improve the dependence on d compared to the O(n d log T) regret attainable by Log-Regularized Lifted Optimistic FTRL introduced by Farina, Anagnostides, Luo, Lee, Kroer, and Sandholm [2022], and also reduce the dependence on the number of iterations T from log^4 T to log T compared to Optimistic Hedge, the previously well-studied algorithm with O(n log d · log^4 T) regret shown by Daskalakis, Fishelson, and Golowich [2021]. Our algorithm is obtained by combining the classic Optimistic Multiplicative Weights Update (OMWU) with an adaptive, non-monotonic learning rate that paces the learning process of the players, making them more cautious when their regret becomes too negative.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explicit Two-Sided Vertex Expanders beyond the Spectral Barrier</title>
<link href="https://hdl.handle.net/1721.1/164612" rel="alternate"/>
<author>
<name>Hsieh, Jun-Ting</name>
</author>
<author>
<name>Lin, Ting-Chun</name>
</author>
<author>
<name>Mohanty, Sidhanth</name>
</author>
<author>
<name>O'Donnell, Ryan</name>
</author>
<author>
<name>Zhang, Rachel Yun</name>
</author>
<id>https://hdl.handle.net/1721.1/164612</id>
<updated>2026-03-08T03:22:27Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Explicit Two-Sided Vertex Expanders beyond the Spectral Barrier
Hsieh, Jun-Ting; Lin, Ting-Chun; Mohanty, Sidhanth; O'Donnell, Ryan; Zhang, Rachel Yun
We construct the first explicit two-sided vertex expanders that bypass the spectral barrier.&#13;
Previously, the strongest known explicit vertex expanders were given by d-regular Ramanujan graphs, whose spectral properties imply that every small subset of vertices S has at least 0.5d|S| distinct neighbors. However, it is possible to construct Ramanujan graphs containing a small set S with no more than 0.5d|S| neighbors. In fact, no explicit construction was known to break the 0.5d barrier.&#13;
In this work, we give an explicit construction of an infinite family of d-regular graphs (for large enough d) where every small set expands by a factor of ≈ 0.6d.&#13;
More generally, for large enough d1,d2, we give an infinite family of (d1,d2)-biregular graphs where small sets on the left expand by a factor of ≈ 0.6d1, and small sets on the right expand by a factor of ≈ 0.6d2. In fact, our construction satisfies an even stronger property: small sets on the left and right have unique-neighbor expansion 0.6d1 and 0.6d2 respectively.&#13;
Our construction follows the tripartite line product framework of Hsieh et al., and instantiates it using the face-vertex incidence of the 4-dimensional Ramanujan clique complex as its base component. As a key part of our analysis, we derive new bounds on the triangle density of small sets in the Ramanujan clique complex.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>All-Pairs Shortest Paths with Few Weights per Node</title>
<link href="https://hdl.handle.net/1721.1/164611" rel="alternate"/>
<author>
<name>Abboud, Amir</name>
</author>
<author>
<name>Fischer, Nick</name>
</author>
<author>
<name>Jin, Ce</name>
</author>
<author>
<name>Williams, Virginia Vassilevska</name>
</author>
<author>
<name>Xi, Zoe</name>
</author>
<id>https://hdl.handle.net/1721.1/164611</id>
<updated>2026-03-08T03:22:36Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">All-Pairs Shortest Paths with Few Weights per Node
Abboud, Amir; Fischer, Nick; Jin, Ce; Williams, Virginia Vassilevska; Xi, Zoe
We study the central All-Pairs Shortest Paths (APSP) problem under the restriction that there are at most d distinct weights on the outgoing edges from every node.&#13;
For d=n this is the classical (unrestricted) APSP problem that is hypothesized to require cubic time n^(3−o(1)), and at the other extreme, for d=1, it is equivalent to the Node-Weighted APSP problem.&#13;
We present new algorithms that achieve the following results:&#13;
* Node-Weighted APSP can be solved in time Õ(n^((3+ω)/2)) = Õ(n^2.686), improving on the 15-year-old subcubic bounds Õ(n^((9+ω)/4)) = Õ(n^2.843) [Chan; STOC ’07] and Õ(n^2.830) [Yuster; SODA ’09]. This positively resolves the question of whether Node-Weighted APSP is an “intermediate” problem in the sense of having complexity n^(2.5+o(1)) if ω=2, in which case it also matches an n^(2.5−o(1)) conditional lower bound.&#13;
* For up to d ≤ n^(3−ω−ε) distinct weights per node (where ε &gt; 0), the problem can be solved in subcubic time O(n^(3−f(ε))) (where f(ε) &gt; 0). In particular, assuming that ω = 2, we can tolerate any sublinear number of distinct weights per node d ≤ n^(1−ε), whereas previous work [Yuster; SODA ’09] could only handle d ≤ n^(1/2−ε) in subcubic time. This promotes our understanding of the APSP hypothesis, showing that the hardest instances must exhaust a linear number of weights per node. With the current bounds on ω, we achieve a subcubic algorithm for d ≤ n^0.628 whereas previously a subcubic running time could only be achieved for d ≤ n^0.384. Our result also applies to the All-Pairs Exact Triangle problem, thus generalizing a result of Chan and Lewenstein on “Clustered 3SUM” from arrays to matrices. Notably, our technique constitutes a rare application of additive combinatorics in graph algorithms.&#13;
We complement our algorithmic results with simple hardness reductions extending the n^(2.5−o(1)) conditional lower bound for Node-Weighted APSP to undirected graphs. Interestingly, under fine-grained assumptions, the complexity in the undirected case jumps from O(n^ω) for d=1 to n^(2.5−o(1)) for d ≥ 2.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weak Poincaré Inequalities, Simulated Annealing, and Sampling from Spherical Spin Glasses</title>
<link href="https://hdl.handle.net/1721.1/164610" rel="alternate"/>
<author>
<name>Huang, Brice</name>
</author>
<author>
<name>Mohanty, Sidhanth</name>
</author>
<author>
<name>Rajaraman, Amit</name>
</author>
<author>
<name>Wu, David X.</name>
</author>
<id>https://hdl.handle.net/1721.1/164610</id>
<updated>2026-03-08T03:22:30Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Weak Poincaré Inequalities, Simulated Annealing, and Sampling from Spherical Spin Glasses
Huang, Brice; Mohanty, Sidhanth; Rajaraman, Amit; Wu, David X.
There has been a recent surge of powerful tools to show rapid mixing of Markov chains, via functional inequalities such as Poincaré inequalities. In many situations, Markov chains fail to mix rapidly from a worst-case initialization, yet are expected to approximately sample from a random initialization. For example, this occurs if the target distribution has metastable states, small clusters accounting for a vanishing fraction of the mass that are essentially disconnected from the bulk of the measure. Under such conditions, a Poincaré inequality cannot hold, necessitating new tools to prove sampling guarantees.&#13;
We develop a framework to analyze simulated annealing, based on establishing so-called weak Poincaré inequalities. These inequalities imply mixing from a suitably warm start, and simulated annealing provides a way to chain such warm starts together into a sampling algorithm. We further identify a local-to-global principle to prove weak Poincaré inequalities, mirroring the spectral independence and localization schemes frameworks for analyzing mixing times of Markov chains.&#13;
As our main application, we prove that simulated annealing samples from the Gibbs measure of a spherical spin glass for inverse temperatures up to a natural threshold, matching recent algorithms based on algorithmic stochastic localization. This provides the first Markov chain sampling guarantee that holds beyond the uniqueness threshold for spherical spin glasses, where mixing from a worst-case initialization is provably slow due to the presence of metastable states. As an ingredient in our proof, we prove bounds on the operator norm of the covariance matrix of spherical spin glasses in the full replica-symmetric regime.&#13;
Additionally, we resolve a question related to sampling using data-based initializations.&#13;
The full version of this paper can be found on arXiv (arXiv ID: 2411.09075).
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bypassing the Noisy Parity Barrier: Learning Higher-Order Markov Random Fields from Dynamics</title>
<link href="https://hdl.handle.net/1721.1/164609" rel="alternate"/>
<author>
<name>Gaitonde, Jason</name>
</author>
<author>
<name>Moitra, Ankur</name>
</author>
<author>
<name>Mossel, Elchanan</name>
</author>
<id>https://hdl.handle.net/1721.1/164609</id>
<updated>2026-03-08T03:22:17Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Bypassing the Noisy Parity Barrier: Learning Higher-Order Markov Random Fields from Dynamics
Gaitonde, Jason; Moitra, Ankur; Mossel, Elchanan
We consider the problem of learning graphical models, also known as Markov random fields (MRFs), from temporally correlated samples. As in many traditional statistical settings, fundamental results in the area all assume independent samples from the distribution. However, these samples generally will not directly correspond to more realistic observations from nature, which instead evolve according to some stochastic process. From the computational lens, even generating a single sample from the true MRF distribution is intractable unless NP=RP, and moreover, any algorithm to learn from i.i.d. samples requires prohibitive runtime due to hardness reductions to the parity with noise problem. These computational barriers for sampling and learning in the i.i.d. setting severely lessen the utility of these breakthrough results for this important task; however, dropping this assumption typically only introduces further algorithmic and statistical complexities. In this work, we surprisingly demonstrate that direct trajectory data from a natural evolution of the MRF overcomes the fundamental computational lower bounds to efficient learning. In particular, we show that given a trajectory with O_k(n) site updates of an order-k MRF from the Glauber dynamics, a well-studied, natural stochastic process on graphical models, there is an algorithm that recovers the graph and the parameters in O_k(n^2) time. By contrast, all prior algorithms for learning order-k MRFs inherently suffer from n^(Θ(k)) runtime even in sparse instances due to the reductions to sparse parity with noise. Our results thus surprisingly show that this more realistic, but intuitively less tractable, model for MRFs actually leads to efficiency far beyond what is known and believed to be true in the traditional i.i.d. case.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Time with Square-Root Space</title>
<link href="https://hdl.handle.net/1721.1/164608" rel="alternate"/>
<author>
<name>Williams, R. Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/164608</id>
<updated>2026-03-08T03:22:41Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Simulating Time with Square-Root Space
Williams, R. Ryan
We show that for all functions t(n) ≥ n, every multitape Turing machine running in time t can be simulated in space only O(√(t log t)). This is a substantial improvement over Hopcroft, Paul, and Valiant’s simulation of time t in O(t/log t) space from 50 years ago [FOCS 1975, JACM 1977]. Among other results, our simulation implies that bounded fan-in circuits of size s can be evaluated on any input in only √s · poly(log s) space, and that there are explicit problems solvable in O(n) space which require at least n^(2−ε) time on every multitape Turing machine for all ε &gt; 0, thereby making a little progress on the P versus PSPACE problem.&#13;
Our simulation reduces the problem of simulating time-bounded multitape Turing machines to a series of implicitly-defined Tree Evaluation instances with nice parameters, leveraging the remarkable space-efficient algorithm for Tree Evaluation recently found by Cook and Mertz [STOC 2024].
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model Stealing for Any Low-Rank Language Model</title>
<link href="https://hdl.handle.net/1721.1/164607" rel="alternate"/>
<author>
<name>Liu, Allen</name>
</author>
<author>
<name>Moitra, Ankur</name>
</author>
<id>https://hdl.handle.net/1721.1/164607</id>
<updated>2026-03-08T03:39:26Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Model Stealing for Any Low-Rank Language Model
Liu, Allen; Moitra, Ankur
Model stealing, where a learner tries to recover an unknown model via carefully chosen queries, is a critical problem in machine learning, as it threatens the security of proprietary models and the privacy of data they are trained on. In recent years, there has been particular interest in stealing large language models (LLMs). In this paper, we aim to build a theoretical understanding of stealing language models by studying a simple and mathematically tractable setting. We study model stealing for Hidden Markov Models (HMMs), and more generally low-rank language models.&#13;
We assume that the learner works in the conditional query model, introduced by Kakade, Krishnamurthy, Mahajan and Zhang. Our main result is an efficient algorithm in the conditional query model for learning any low-rank distribution. In other words, our algorithm succeeds at stealing any language model whose output distribution is low-rank. This improves upon the previous result, which also requires the unknown distribution to have high “fidelity” – a property that holds only in restricted cases. There are two key insights behind our algorithm: First, we represent the conditional distributions at each timestep by constructing barycentric spanners among a collection of vectors of exponentially large dimension. Second, for sampling from our representation, we iteratively solve a sequence of convex optimization problems that involve projection in relative entropy to prevent compounding of errors over the length of the sequence. This is an interesting example where, at least theoretically, allowing a machine learning model to solve more complex problems at inference time can lead to drastic improvements in its performance.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maximum Circuit Lower Bounds for Exponential-Time Arthur Merlin</title>
<link href="https://hdl.handle.net/1721.1/164606" rel="alternate"/>
<author>
<name>Chen, Lijie</name>
</author>
<author>
<name>Li, Jiatu</name>
</author>
<author>
<name>Liang, Jingxun</name>
</author>
<id>https://hdl.handle.net/1721.1/164606</id>
<updated>2026-03-08T03:22:32Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Maximum Circuit Lower Bounds for Exponential-Time Arthur Merlin
Chen, Lijie; Li, Jiatu; Liang, Jingxun
We show that the complexity class of exponential-time Arthur-Merlin with sub-exponential advice (AMEXP/2^(n^ε)) requires circuit complexity at least 2^n/n. Previously, the best known such near-maximum lower bounds were for symmetric exponential time by Chen, Hirahara, and Ren (STOC’24) and Li (STOC’24), or randomized exponential time with an MCSP oracle and sub-exponential advice by Hirahara, Lu, and Ren (CCC’23).&#13;
Our result is proved by combining the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS’23) together with the uniform hardness-vs-randomness connection for Arthur-Merlin protocols by Shaltiel-Umans (STOC’07) and van Melkebeek-Sdroievski (CCC’23). We also provide a conceptually different proof using a novel “critical win-win” argument that extends a technique of Lu, Oliveira, and Santhanam (STOC’21).&#13;
Indeed, our circuit lower bound is a corollary of a new explicit construction for properties in coAM. We show that for every dense property P ∈ coAM, there is a quasi-polynomial-time Arthur-Merlin protocol with short advice such that the following holds for infinitely many n: There exists a canonical string w_n ∈ P ∩ {0,1}^n so that (1) there is a strategy of Merlin such that Arthur outputs w_n with probability 1 and (2) for any strategy of Merlin, with probability 2/3, Arthur outputs either w_n or a failure symbol ⊥. As a direct consequence of this new explicit construction, our circuit lower bound also generalizes to circuits with an AM ∩ coAM oracle. To our knowledge, this is the first unconditional lower bound against a strong non-uniform class using a hard language that is only “quantitatively harder”.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>List-Decoding Capacity Implies Capacity on the &#119902;-ary Symmetric Channel</title>
<link href="https://hdl.handle.net/1721.1/164605" rel="alternate"/>
<author>
<name>Pernice, Francisco</name>
</author>
<author>
<name>Sprumont, Oscar</name>
</author>
<author>
<name>Wootters, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/164605</id>
<updated>2026-03-08T03:22:33Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">List-Decoding Capacity Implies Capacity on the &#119902;-ary Symmetric Channel
Pernice, Francisco; Sprumont, Oscar; Wootters, Mary
It is known that the Shannon capacity of the q-ary symmetric channel (qSC) is the same as the list-decoding capacity of an adversarial channel, raising the question of whether there is a formal (and black-box) connection between the two. We show that there is: any linear code C ⊆ F_q^n that has superconstant minimum distance and achieves list-decoding capacity also achieves capacity on the qSC.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>A molecularly impermeable polymer from two-dimensional polyaramids</title>
<link href="https://hdl.handle.net/1721.1/164604" rel="alternate"/>
<author>
<name>Ritt, Cody L</name>
</author>
<author>
<name>Quien, Michelle</name>
</author>
<author>
<name>Wei, Zitang</name>
</author>
<author>
<name>Gress, Hagen</name>
</author>
<author>
<name>Dronadula, Mohan T</name>
</author>
<author>
<name>Altmisdort, Kaan</name>
</author>
<author>
<name>Nguyen, Huong Giang T</name>
</author>
<author>
<name>Zangmeister, Christopher D</name>
</author>
<author>
<name>Tu, Yu-Ming</name>
</author>
<author>
<name>Garimella, Sanjay S</name>
</author>
<author>
<name>Amirabadi, Shahab</name>
</author>
<author>
<name>Gadaloff, Michael</name>
</author>
<author>
<name>Hu, Weiguo</name>
</author>
<author>
<name>Aluru, Narayana R</name>
</author>
<author>
<name>Ekinci, Kamil L</name>
</author>
<author>
<name>Bunch, J Scott</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164604</id>
<updated>2026-03-08T03:39:23Z</updated>
<published>2025-11-12T00:00:00Z</published>
<summary type="text">A molecularly impermeable polymer from two-dimensional polyaramids
Ritt, Cody L; Quien, Michelle; Wei, Zitang; Gress, Hagen; Dronadula, Mohan T; Altmisdort, Kaan; Nguyen, Huong Giang T; Zangmeister, Christopher D; Tu, Yu-Ming; Garimella, Sanjay S; Amirabadi, Shahab; Gadaloff, Michael; Hu, Weiguo; Aluru, Narayana R; Ekinci, Kamil L; Bunch, J Scott; Strano, Michael S
All polymers exhibit gas permeability through the free volume of entangled polymer chains [1–3]. By contrast, two-dimensional (2D) materials including graphene stack densely and can exhibit molecular impermeability [4–6]. Solution-synthesized 2D polymers that exhibit the latter by polycondensation have been a longstanding goal. Herein, we demonstrate self-supporting, spin-coated 2D polyaramid nanofilms that exhibit nitrogen permeability below 3.1 × 10^(−9) Barrer, nearly four orders of magnitude lower than every class of existing polymer, with similarly low permeability for the other gases tested (helium, argon, oxygen, methane and sulfur hexafluoride). Optical interference during the pressurization of nanofilm-coated microwells allows measurement of mechanosensitive rim opening and sealing, creating gas-filled bulges that are stable for more than three years. This discovery enables 2D polymer resonators with high resonance frequencies (about 8 MHz) and quality factors up to 537, similar to graphene. A 60-nm coating on air-sensitive perovskites reduces the lattice degradation rate 14-fold with an oxygen permeability of 3.3 × 10^(−8) Barrer. Molecularly impermeable polymers promise the next generation of barriers that are synthetically processable, chemically amenable and maximize molecular rejection with minimal material, ultimately advancing sustainability goals.
</summary>
<dc:date>2025-11-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding and Overcoming Optimization Barriers in Non-convex and Non-smooth Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/164603" rel="alternate"/>
<author>
<name>Gatmiry, Khashayar</name>
</author>
<id>https://hdl.handle.net/1721.1/164603</id>
<updated>2026-01-21T03:25:03Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Understanding and Overcoming Optimization Barriers in Non-convex and Non-smooth Machine Learning
Gatmiry, Khashayar
At their core, our machine learning systems are trained by solving an optimization problem, where the goal is to minimize a predefined objective function by adjusting model parameters based on the data. Despite the wealth of structure and prior knowledge present in the data and feedback, our training methods remain relatively simple and independent of this structure. In spite of, or perhaps because of, this simplicity, these methods are often lacking in theoretical guarantees. To design machine learning algorithms that are less data-hungry while ensuring theoretical guarantees on both computational efficiency and output validity, it is essential to better understand and leverage the rich structure within the learning setup and the data distribution, e.g. by altering the geometry of the solution space or adjusting the objective function to induce a more effective learning procedure. This approach moves beyond classical algorithm design, which focuses primarily on handling worst-case instances. This thesis investigates the optimization landscape of central learning problems and develops geometric and analytic schemes adapted to their structure, leading to algorithms with superior computational and statistical performance. In addition, it seeks to advance our mathematical understanding of the principles underlying the success of deep learning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis</title>
<link href="https://hdl.handle.net/1721.1/164602" rel="alternate"/>
<author>
<name>McGreivy, James C.</name>
</author>
<id>https://hdl.handle.net/1721.1/164602</id>
<updated>2026-01-21T04:07:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis
McGreivy, James C.
Generative Large Language Models (LLMs) are a promising approach to structuring knowledge contained within otherwise unmanageable corpora of research literature produced by large-scale and long-running scientific collaborations. Within experimental particle physics, such structured knowledge bases could expedite methodological and editorial review. Complementarily, within the broader scientific community, generative LLM systems grounded in published work could make for reliable companions allowing non-experts to analyze open-access data. Techniques such as Retrieval Augmented Generation (RAG) rely on semantically matching localized text chunks, but struggle to maintain coherent context when relevant information spans multiple segments, leading to a fragmented representation devoid of global cross-document information. In this work, I utilize the hierarchical organization of experimental physics articles to build a tree representation of the corpus, and present the SciTreeRAG system, which leverages this structure to construct contexts that are more focused and contextually rich than those of standard RAG. Additionally, I develop methods for using LLMs to transform the unstructured corpus into a structured knowledge graph representation. I then implement SciGraphRAG, a retrieval system that leverages this knowledge graph to access global cross-document relationships eluding standard RAG, with the goal of encapsulating domain-specific connections and expertise. I demonstrate proof-of-concept implementations of both systems using the corpus of the LHCb experiment at CERN.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications</title>
<link href="https://hdl.handle.net/1721.1/164601" rel="alternate"/>
<author>
<name>Gower, Elizabeth Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/164601</id>
<updated>2026-01-21T04:07:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications
Gower, Elizabeth Ann
Anthropogenic activity has increased atmospheric carbon dioxide (CO₂) levels, disrupting the global carbon cycle and driving widespread environmental change. The ocean acts as a major sink. Accurate and scalable in situ monitoring of oceanic carbon chemistry is vital for understanding the impacts of climate change and informing marine carbon dioxide removal (mCDR) strategies. Many existing in situ instruments for marine applications are constrained by their size, cost, power requirements, or reliance on consumable reagents. Developing low-cost, compact, low-power, and accurate in situ sensors would significantly enhance the spatiotemporal resolution of oceanographic data and enable widespread monitoring of dissolved gases throughout the ocean. This, in turn, would deepen our understanding of how, where, and when changes are occurring within the marine carbon cycle. Two key variables essential for studying this cycle are the partial pressure of carbon dioxide (pCO₂) and dissolved inorganic carbon (DIC). This thesis presents the development of two sensors, one for in situ pCO₂ measurement and another for novel DIC quantification, both designed to be affordable, reliable, and scalable tools for advancing our understanding of ocean chemistry and the global carbon system. First, the development, calibration, and open-ocean deployment of a miniaturized Dissolved Multi-Gas Sensor (DMGS) that measures pCO₂ and partial pressure of oxygen (pO₂) is presented. The sensor was integrated into a custom-built surface drifter designed to entangle with Sargassum mats and send data autonomously. The drifter utilized commercial off-the-shelf (COTS) components and cost roughly $1000 to build. After lab testing, a drifter was deployed in the Great Atlantic Sargassum Belt (GASB) and collected data for 22 days. In addition to gas data, the drifter tracked temperature, light intensity, humidity, pressure, and location, sending measurements via an Iridium satellite. 
The resulting data captured dynamic changes in localized gas concentrations, temperature, and light levels that highlighted photosynthetic and respiratory activity within Sargassum patches. These drifters demonstrate the value of in situ data to investigate marine biogeochemical processes that contribute to the marine carbon cycle, especially in areas with high biological activity. Next, this thesis presents the iterative development of a novel DIC sensor with potential for future in situ applications. Initial prototypes tested the feasibility of using a COTS CO₂ sensor in both static and flow-through configurations; however, sensor saturation issues prompted a shift to a pressure-based detection method. Multiple test setups were evaluated for pressure stability and sensor sensitivity, culminating in a bottle-based flow system that demonstrated the potential for reagent-minimized, pressure-based DIC quantification. With the final setup, a COTS pressure sensor that sat behind a gas-permeable membrane was found to repeatably and accurately quantify DIC from acidified seawater. This approach of quantifying DIC via pressure change is novel in the field of gas sensing and maintains a low-cost, accessible design. Together, the sensors developed in this thesis expand the toolkit for marine carbon monitoring and provide a foundation for affordable, distributed sensing networks. These technologies enable higher-resolution insights into ocean biogeochemistry and support critical monitoring, reporting, and verification (MRV) frameworks needed to evaluate the effectiveness of mCDR techniques. Continued refinement of these low-cost platforms could play a key role in understanding and mitigating anthropogenic impacts on marine systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View</title>
<link href="https://hdl.handle.net/1721.1/164600" rel="alternate"/>
<author>
<name>Firouzian, Fardean</name>
</author>
<id>https://hdl.handle.net/1721.1/164600</id>
<updated>2026-01-21T04:07:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View
Firouzian, Fardean
This thesis applies Reference Class Forecasting (RCF) to multifamily real estate underwriting as a means of countering optimism bias, strategic misrepresentation, and other distortions embedded in the traditional “inside view.” Adapted from its proven application in infrastructure and corporate capital budgeting, RCF anchors projections in the actual performance distributions of comparable assets rather than in deal-specific narratives. The research centers on the development of the “Comp Warehouse,” a structured repository of property-level financials organized by market, asset class, vintage, and unit scale. By benchmarking assumptions against statistically valid reference classes, the approach enforces empirical discipline and highlights opportunities for “operational alpha”—the marginal increase in net operating income (NOI) achieved when underperforming assets converge on median peer performance. A South Florida case study demonstrates the method’s utility in an acquisition context. Analysis of 48 assets across Melbourne, Miami, Fort Lauderdale, and West Palm Beach shows that while rent levels cluster tightly around market medians, operating expenses vary widely, producing large dispersion in realized NOI. Applying the framework to a 191-unit Class A property in Fort Lauderdale illustrates how RCF can ground underwriting assumptions by distinguishing between defensible revenue-driven growth strategies and less plausible expense-reduction projections proposed in a bidding scenario. Recognizing constraints of both scale and frequency, this thesis also explores artificial intelligence as a tool for automating the ingestion and standardization of operating statements and rent rolls. Properly deployed in a human-in-the-loop framework, AI can reduce data friction, expand sample sizes, and sharpen forecasting precision. 
The contribution of this thesis is twofold: it demonstrates the feasibility of applying RCF to the multifamily sector—an asset class whose relative standardization, liquidity, and data availability make it especially conducive to outside-view benchmarking—and it situates the methodology within a technology-native architecture designed to scale empirical discipline, enhance underwriting rigor, and systematically capture operational alpha.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications</title>
<link href="https://hdl.handle.net/1721.1/164599" rel="alternate"/>
<author>
<name>He, Kaiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/164599</id>
<updated>2026-01-21T04:07:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications
He, Kaiwen
Homomorphic secret sharing (HSS) is a powerful cryptographic primitive that enables efficient, low-communication secure computation without the use of fully homomorphic encryption. Public-key HSS is a well-known variant that supports inputs from multiple parties, but all parties must agree on a joint public key before any party can encode their inputs, requiring extra rounds of communication in applications. Recently, Couteau et al. (EUROCRYPT 2025) constructed multi-key HSS (MKHSS)—a new primitive which allows parties to encode their inputs under independent keys—under the DCR assumption. MKHSS assumes only a reusable common reference string, without the need for prior interactions between parties or a public-key infrastructure. In this paper, we construct and implement the first concretely-efficient MKHSS scheme under the same assumptions used by Couteau et al. Using an algorithmic insight that reduces the largest modulus in Couteau et al. from N⁴ to N², our optimized implementation can homomorphically multiply inputs in 5.0 milliseconds—while an implementation of Couteau et al. requires 224.6 milliseconds—thereby achieving a 45× speedup. A powerful application of MKHSS is to realize attribute-based non-interactive key exchange (ANIKE), which generalizes password-based key exchange (PAKE) to arbitrary attribute policies. ANIKE is currently only known from MKHSS. We use our implementation to evaluate the first concretely-efficient ANIKE schemes for a range of practically useful policies. Using our implementation, two parties can perform a geolocation-based key exchange in 1.65 seconds and a fuzzy PAKE on an 8-word passphrase in 7.59 seconds for realistic parameters, on a single core. Compared to using Couteau et al., which requires 62.5 and 253 seconds, we achieve 38× and 33× speedups, respectively.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reciprocity and Normality in the Scattering Matrix of Disordered Media</title>
<link href="https://hdl.handle.net/1721.1/164598" rel="alternate"/>
<author>
<name>Bharadwaj, Shreyas K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164598</id>
<updated>2026-01-21T04:07:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reciprocity and Normality in the Scattering Matrix of Disordered Media
Bharadwaj, Shreyas K.
The scattering matrix formalism provides a practical characterization of wave transport in linear, source-free systems by relating a set of operationally defined input and output spatial channels. The matrix is structured as a block operator, with diagonal blocks encoding same-side reflection matrices (RMs) and off-diagonal blocks encoding transmission matrices (TMs) in opposing propagation directions. Under Helmholtz reciprocity, symmetry relations are imposed: RMs are symmetric, and forward and reverse TMs are mathematical transposes of each other. These relations were employed as constraints to correct system-induced aberrations in measured scattering matrices of complex optical media via a matrix-based gradient descent procedure. Resulting phase corrections corresponded closely with classical aberration modes without heuristic parameterizations, suggesting that these modes naturally arise to restore reciprocity-induced symmetry. Vectorial TMs were measured for single- and double-pass propagation through step-index multimode fibers (MMFs) and scattering samples, with corrected phase terms showing agreement across sample types. Furthermore, matrix normality was introduced as a descriptor of stable modal transport. Normal matrices admit unitary diagonalization, reflecting orthogonal eigenchannels and spectrally coherent propagation. Near-normal behavior was observed in fiber TMs, while RMs of scattering slabs remained strongly non-normal, as quantified by a normalized Henrici departure. Sufficient conditions for normality were identified in terms of the system Green’s function and its bi-compression onto the measurement basis. A complementary dispersion experiment investigated two regimes: nearly-normal MMFs, where the Wigner–Smith time-delay operator was jointly diagonalizable and supported accurate first-order spectral models; and mechanically compressed fibers, where loss of normality produced noncommuting operators and collapse of model fidelity. 
These results suggest that normality captures well-behaved modal transport, underpinning the validity of parametric models and other operator-based analyses of disordered media. Together, reciprocity and normality impose complementary constraints on wave transport: reciprocity governs global symmetry, while normality captures internal coherence of modal propagation. Relevance is noted for matrix-based imaging, inverse scattering theory, and non-Hermitian wave physics, where symmetry and modal stability remain central.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mesh Differentiable Rendering for Real-World Scenes</title>
<link href="https://hdl.handle.net/1721.1/164597" rel="alternate"/>
<author>
<name>Charatan, David</name>
</author>
<id>https://hdl.handle.net/1721.1/164597</id>
<updated>2026-01-21T04:07:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Mesh Differentiable Rendering for Real-World Scenes
Charatan, David
Differentiable rendering has established itself as an effective tool for 3D reconstruction and novel view synthesis. Most state-of-the-art differentiable rendering methods use purpose-built renderers to optimize specialized, nonstandard 3D representations. However, most downstream applications of differentiable rendering rely on 3D meshes, which are near-universally supported due to their suitability for a wide range of rendering, simulation, and 3D modeling workflows. While prior methods have explored using 3D meshes directly within gradient-based optimization, they have been limited to object-centric scenes and cannot reconstruct real-world, unbounded scenes. This work addresses this shortcoming via a differentiable rendering formulation that combines an off-the-shelf, non-differentiable triangle rasterizer with a 3D representation that consists of nested mesh shells. During every forward pass, these shells are extracted from an underlying signed distance field. Then, the shells are independently rasterized and the resulting images are alpha-composited using opacities derived from the shells' per-vertex signed distance values. Notably, the shells' vertex positions are updated only via the underlying signed distance field, not via backpropagation through the rasterizer itself. This makes our method compatible with off-the-shelf, non-differentiable triangle rasterizers. To the best of our knowledge, our method is the first differentiable mesh rendering method that scales to unbounded, real-world 3D scenes, where it produces high-quality novel view synthesis results whose quality approaches the quality of state-of-the-art, non-mesh-based methods. Our method's performance is also competitive with state-of-the-art surface rendering methods on object-centric scenes. Ultimately, our method suggests that it may be possible to solve the differentiable rendering problem using tools from the conventional graphics toolbox rather than relying on specialized renderers.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning</title>
<link href="https://hdl.handle.net/1721.1/164596" rel="alternate"/>
<author>
<name>Duguey, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/164596</id>
<updated>2026-01-21T04:07:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning
Duguey, Gabriel
As we plan tomorrow’s electricity system, we face fundamental questions: where should new power plants go, which technologies deserve investment, and how much transmission is enough? These decisions are the domain of Capacity Expansion Planning (CEP), a class of optimization models that guide long-term infrastructure investments in power systems. To be realistic, CEP models must capture fine-grained spatial and temporal variations because demand varies by city and climate, while wind and solar output depend on weather patterns that shift hour by hour and location by location. But representing the system with thousands of time steps and hundreds of nodes makes the optimization problem computationally too large to solve. &#13;
&#13;
This thesis addresses the core question: how can spatial and temporal aggregation in CEP models be designed to preserve planning-relevant patterns that drive investment decisions? Existing approaches often treat aggregation as a neutral preprocessing step, relying on heuristics like political boundaries or geographic proximity. In contrast, we propose a task-aware pipeline that treats aggregation as an integral modeling decision, explicitly aligned with planning objectives.&#13;
&#13;
The approach builds a composite similarity metric that blends diverse planning-relevant signals, including, but not limited to, duration curves, ramping behavior, and spatial correlation, and uses k-medoids clustering to define spatial zones. Temporal aggregation is then applied to daily system-wide profiles, selecting representative days that maintain cross-zonal interactions. The result is a reduced spatio-temporal dataset fed into a CEP model. The resulting investment decisions are then re-evaluated at full resolution to assess their feasibility and true cost.&#13;
&#13;
Experiments on a New England case study show the pipeline consistently outperforms common baselines like political boundaries, geographic proximity, or capacity factor statistics. Among 50 feature weightings, the best design reduces system cost by 13% compared to heuristics. Correlation-based features drive the best results, while raw amplitude and geographic location often degrade performance when used alone.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development</title>
<link href="https://hdl.handle.net/1721.1/164595" rel="alternate"/>
<author>
<name>McDonough, Kate</name>
</author>
<id>https://hdl.handle.net/1721.1/164595</id>
<updated>2026-01-21T04:07:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development
McDonough, Kate
Duddington Farm is a 312-acre site north of Baltimore, Maryland. A stream restoration project was completed at the location nearly a decade ago in concert with the State of Maryland, the Manor Conservancy, Ecotone, and landowners Harry and Tara McDonough. The project was conducted with some success; however, due to a lack of State oversight and long-term management provisions, the ecology has since declined. The following proposal outlines a new model for long-term land restoration and conservation, whereby land conservation and restoration are financed not solely through short-term grants and fragile easements, but through the thoughtful use of modest real estate interventions. A small cluster of homes is developed on one portion of the site. The act increases the value of the land, generates equity, and establishes a permanent conservation fund. The design protects habitat and invites people into a deeper relationship with the natural world. The plan offers scalability in taking the land value capture and applying it to future land conservation projects, compounding returns and projecting a model to preserve hundreds of thousands of acres of critical land across the United States. This model highlights Indigenous traditional ecological knowledge (TEK) and traditional practices of engaging with the land, highlighting a deeper understanding of how humans and nature can coexist in mutually healthy ways. The model is designed at a time when watersheds, national parks, and old-growth forests are faced with the greatest threat to global ecology. Duddington Farm is used as a retrospective case, but the broader goal is to create a regenerative framework for conservation-based development across critical watershed regions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Causal Effects of Mandatory Quarterly Earnings Guidance on Corporate Information Environment and Corporate Short-Termism</title>
<link href="https://hdl.handle.net/1721.1/164594" rel="alternate"/>
<author>
<name>Wang, Yuting</name>
</author>
<id>https://hdl.handle.net/1721.1/164594</id>
<updated>2026-01-21T03:24:57Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Causal Effects of Mandatory Quarterly Earnings Guidance on Corporate Information Environment and Corporate Short-Termism
Wang, Yuting
I examine the causal effects of mandatory quarterly earnings guidance using a regulatory mandate in China that required a subset of listed firms to issue bundled quarterly earnings guidance from 2007 to 2018. A difference-in-differences analysis shows that when these firms are no longer required to issue such guidance, their corporate information environment deteriorates, evidenced by reduced analyst coverage, fewer site visits, and lower price timeliness, meaning that stock prices incorporate less information about current and future earnings. However, these firms increase R&amp;D and SG&amp;A spending, consistent with alleviated managerial myopia as short-term market pressure eases. These findings highlight the double-edged nature of mandatory quarterly earnings guidance and offer insights for both practitioners and policymakers.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Finite Elements</title>
<link href="https://hdl.handle.net/1721.1/164593" rel="alternate"/>
<author>
<name>Collin, Teodoro Fields</name>
</author>
<id>https://hdl.handle.net/1721.1/164593</id>
<updated>2026-01-21T04:07:51Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Automated Finite Elements
Collin, Teodoro Fields
Finite element methods (FEMs) are a powerful and ubiquitous tool for solving engineering problems. Experimenting with different finite elements can improve the quality and efficiency of solutions. Furthermore, in some cases, the wrong (but nonetheless most common) choice of finite element will produce solutions which converge to the wrong answer regardless of mesh resolution. However, in practice, the choice of finite element is not explored due to the complexity of re-deriving and re-implementing finite element methods. Trying a new finite element is challenging because practitioners must manually deduce formulas to use these elements and they must implement these formulas within the context of a potentially complex system. We address this problem by introducing ElementForge, a finite element system that is parametric over the literate mathematical specification of a finite element in a domain-specific language (DSL). The ElementForge compiler reasons about tensor spaces, tensors, and tensor bases from first principles to derive implementations of finite elements. The ElementForge compiler is able to automatically derive implementations of finite elements previously only derived by hand. Further, ElementForge minimally couples several key mathematical concepts, mainly tensor fields, mesh topologies, sparse tensors, and assembled finite element operators, to produce a complete finite element system that is parametric over the choice of element. Consequently, the elements derived by the compiler can be applied parametrically to new meshes, PDEs, and boundary conditions. We evaluate our system by implementing several simulations with different finite elements, demonstrating that our system can explore tradeoffs in generality, accuracy, speed, and representational complexity. For example, we are able to implement the Morley, Bell, Argyris, and Hermite-like elements with less than 50 lines of code and use them all in a single simulation.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bespoke Threat Models: Achieving Realistic Privacy Guarantees for Deployed Protocols</title>
<link href="https://hdl.handle.net/1721.1/164592" rel="alternate"/>
<author>
<name>Hogan, Kyle</name>
</author>
<id>https://hdl.handle.net/1721.1/164592</id>
<updated>2026-01-21T03:24:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Bespoke Threat Models: Achieving Realistic Privacy Guarantees for Deployed Protocols
Hogan, Kyle
This thesis focuses on the question of what degree of privacy is achievable in the real world for long-running applications. We explore this question in two main settings: private advertising and anonymous communication. In doing so, we consider constraints each application may have in practice and what adversarial model is realistic for the context in which the application will be deployed. For real-world applications, achieving perfect privacy — especially against a worst-case adversary — can be impossible. That is, perfect privacy, while achievable in theory, may in practice require assumptions that conflict with usability, deployability, or utility requirements. This presents a challenge as privacy-preserving technologies can, necessarily, only provide privacy for the people who use them. Because of this, designing around user experience is critical, even if doing so requires compromises in the theoretical degree of privacy a system can provide or the strength of adversaries considered in its threat model. In the space of private advertising, we first propose a novel protocol, AdVeil, that eliminates leakage of user data beyond that revealed by the input/output of the ads ecosystem as a whole. We then provide a minimal modeling of the functionality of digital advertising which we use to prove that, even for systems like AdVeil with minimal leakage, the advertising metrics released at the end of the protocol are sufficient to leak information about end users to advertisers when combined with their audience targeting criteria. In the space of anonymous communication, we propose ShorTor, a new routing protocol for Tor that utilizes techniques popular with content distribution networks (CDNs) to reduce latency while maintaining Tor’s existing anonymity guarantees. We evaluate this protocol using a dataset of over 400,000 latency measurements we collected between the 1,000 most popular Tor relays.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Operationalizing Reliable Machine Learning: From Data Collection to Model Presentation</title>
<link href="https://hdl.handle.net/1721.1/164591" rel="alternate"/>
<author>
<name>Balagopalan, Aparna</name>
</author>
<id>https://hdl.handle.net/1721.1/164591</id>
<updated>2026-01-21T03:25:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Operationalizing Reliable Machine Learning: From Data Collection to Model Presentation
Balagopalan, Aparna
Automated systems driven by machine learning (ML) have made exciting progress across a spectrum of applications. Despite such progress, encoded biases and other failure modes may create barriers to the real-world utility and reliability of such systems. For example, nonrandom data missingness, biased algorithmic optimization objectives, or model presentation strategies that incorrectly impact user trust can all cause models to fail in practice. In this thesis, guided by such observations and prior work on pipeline-awareness in machine learning, we aim to operationalize reliable ML. Under this goal, we propose a framework consisting of the following three components: responsible data collection, robust algorithm development, and fair model presentation. We first conduct two case studies to advance responsible data collection. We investigate whether standard procedures for acquiring data can be repurposed when training models to mimic human judgments about norm violations. We also demonstrate patterns of delayed demographic data reporting within a longitudinal healthcare dataset and show that time-varying missingness due to such delays can distort disparity assessments. Second, we introduce two novel algorithms to improve reliability: a method that leverages representations from vision-language models to filter noisy training data, and a method to produce fair rankings that account for properties of search queries. Finally, since the presentation design of predictions impacts the trust of model consumers, we propose metrics to quantify the fairness of post-hoc explainability techniques. Thus, with this thesis, we re-evaluate measurements throughout the machine learning pipeline and contribute to the broader goal of reliable machine learning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Anti-phage defense as a driver of molecular innovation</title>
<link href="https://hdl.handle.net/1721.1/164590" rel="alternate"/>
<author>
<name>Doering, Christopher Ross</name>
</author>
<id>https://hdl.handle.net/1721.1/164590</id>
<updated>2026-01-21T03:24:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Anti-phage defense as a driver of molecular innovation
Doering, Christopher Ross
Bacteriophages, or phages for short, pose a near-constant threat to the bacteria they infect. Billions of years of conflict have been a catalyzing force for the creation of bacterial defense systems and corresponding phage evasion strategies. To counter phage predation, bacteria have developed a vast diversity of enzyme chemistries and molecular sensing mechanisms whose study has produced new biotechnological tools and insights into our own immune systems. In this work, I have investigated anti-phage defense mechanisms at multiple scales using a combination of genetic, biochemical, and bioinformatic approaches. First, I characterized the mechanism of action of the anti-phage defense system CmdTAC, a toxin-antitoxin-chaperone system that recognizes a viral structural protein to activate a novel mRNA ADP-ribosyltransferase, thereby halting infection. Next, I examined the diversity and distribution of anti-phage mechanisms encoded by E. coli lysogenic phages – phages capable of integrating into and lying dormant within their bacterial hosts. This analysis uncovered overlooked classes of lysogenic phages harboring novel candidate defense systems, including one newly validated system with no detectable homology to previously known mechanisms. Together, this work broadens our understanding of bacterial immune systems, expands the pool of known enzyme chemistries, and highlights areas where continued study can reveal additional mechanisms of anti-phage defense.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shaping Function Through Space: The Role of Spatial Organization in Microbial Communities</title>
<link href="https://hdl.handle.net/1721.1/164589" rel="alternate"/>
<author>
<name>Toneatti Vercelli, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/164589</id>
<updated>2026-01-21T03:24:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Shaping Function Through Space: The Role of Spatial Organization in Microbial Communities
Toneatti Vercelli, Gabriel
Spatial organization plays a critical role in microbial community function, influencing how cells exchange metabolites, coordinate behavior, and compete for resources. This thesis investigates the consequences of spatial structure in natural microbial systems and introduces a novel method to engineer these systems with high precision and scalability. First, we examine the colonization of chitin particles by marine bacteria, a model for particulate organic matter degradation. Using high-throughput phenotyping of natural isolates, we show that vitamin cross-feeding is essential for successful colonization of chitin particles by many auxotrophic strains. We then model two distinct vitamin cross-feeding mechanisms: lysis and secretion. Using a resource-explicit modeling approach, we leverage metabolic-flux and physiological measurements to predict the colonization success of auxotrophic cross-feeders in this spatially structured environment. Second, we introduce a new chemical method for engineering microbial cell surfaces that enables covalent attachment of molecules such as enzymes and DNA strands to the cell surface. We show that this surface functionalization procedure leads to the acquisition of new phenotypes like antibiotic resistance and programmable adhesion. Altogether, this work reinforces the importance of spatial organization for microbial community function and introduces a new technique to harness this community feature and turn it into a design principle for synthetic microbial systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture</title>
<link href="https://hdl.handle.net/1721.1/164588" rel="alternate"/>
<author>
<name>Cao, Biru</name>
</author>
<id>https://hdl.handle.net/1721.1/164588</id>
<updated>2026-01-21T04:07:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture
Cao, Biru
This thesis presents LumiModeling, a real-time visualization tool based on Gaussian Splatting (GS) that simulates the dynamic interplay between materiality and lighting in architectural environments. While conventional design workflows rely on geometric modeling and photorealistic rendering, they often abstract complex material behaviors and fall short in capturing light-material interactions. In contrast, GS enables the reconstruction of high-fidelity 3D models from 2D image sets, representing view-dependent effects such as reflection, transparency, and surface roughness. A comparative analysis using real-world data from the MIT Stata Center and the Met Warehouse demonstrates GS’s advantages over mesh-based photogrammetry, particularly in rendering reflective and transparent materials. This work extends existing GS capabilities by implementing a relightable pipeline based on the existing model Relightable3DGaussian (Gao et al., 2023), in which each Gaussian point is augmented with physical parameters, including BRDF, surface normals, and incident lighting. The Stata Center dataset is used to test the relighting of GS. A user study involving architecture professionals reveals that perceptual focus shifts from geometry to materiality and lighting as visual realism increases. The findings highlight the potential of relightable GS in architectural visualization and anticipate its integration into future design workflows.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation</title>
<link href="https://hdl.handle.net/1721.1/164587" rel="alternate"/>
<author>
<name>Kupershmidt, Adi</name>
</author>
<id>https://hdl.handle.net/1721.1/164587</id>
<updated>2026-01-21T04:07:52Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation
Kupershmidt, Adi
Urban planners face significant challenges in systematically and quantitatively evaluating past planning practices, stemming, among other reasons, from the scarcity of accessible structured data. The period from a plan’s initiation to implementation can span generations; recorded data from the planning processes are often deemed obsolete for addressing present concerns by the time of post-occupancy evaluation. This research examines whether generative AI can help bridge this gap and under what conditions, highlighting both challenges and opportunities, by introducing a system that responsively transforms qualitative zoning data into structured, queryable formats to support the quantitative analysis of planning practices. &#13;
A database of ~150 approved semi-structured urban plans under Tel Aviv municipality’s local jurisdiction supports this project's case study. The system relies on proprietary LLMs (ChatGPT, Claude), streamlining a natural language query input through three agentic tasks: (1) RAG (Retrieval-Augmented Generation)-based querying, generating free-text answers from all plans, (2) structuring the answers into valid JSON, and (3) visualizing the structured data. Key findings indicate a system precision of 85.45%, as evaluated through an end-to-end assessment of 11 representative queries, each validated against 40 manually labeled plans. The tool provides actionable insights, enabling queries such as trends in sheltered bicycle parking approvals or the status of affordable housing planning over the past decade.&#13;
This research underlines the significance of flexibly structuring non- and semi-structured data for urban science. It addresses the growing gap between static legacy data collection and real-time policymaking, democratizing access to planning information and fostering informed decision-making practices. Integrating cutting-edge AI-driven tools contributes to the current discourse on AI applications for city management and planning by providing a replicable model for more cities and planning datasets to build upon and improve.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing Methods for Enhanced Measurement of DNA Single-Strand Breaks and Somatic Variants</title>
<link href="https://hdl.handle.net/1721.1/164586" rel="alternate"/>
<author>
<name>Elacqua, Juniper J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164586</id>
<updated>2026-01-21T03:24:52Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Developing Methods for Enhanced Measurement of DNA Single-Strand Breaks and Somatic Variants
Elacqua, Juniper J.
Maintenance and repair of DNA are essential for proper cellular functioning and preventing the emergence of disease states. As cells divide, mutations accumulate in the genome which contributes to aging phenotypes and can result in genetic diseases such as cancer. The rate at which a cell develops mutations can be accelerated through exposure to genotoxic agents that introduce lesions which, if left unrepaired, prevent accurate replication of the genome. As such, it is crucial to understand the ways in which DNA becomes damaged, how cells respond to various types of damage, and how this damage contributes to mutagenesis and the development of genetic disease. These fields of study have been greatly advanced by improvements in DNA sequencing technologies, and here we present two sequencing-based methods that aim to enable deeper study of DNA damage, repair, and mutagenesis. First, we demonstrate DENT-seq, a method that identifies single-strand breaks with single-nucleotide resolution. Single-strand breaks are the most common form of DNA damage, occurring at rates of ~10,000 per cell per day, but have to date been understudied due to lack of an unbiased, high-resolution method for their detection. Second, we improve upon lineage sequencing, a previously reported method that uniquely measures somatic single nucleotide variants in dividing cells to achieve high specificity/sensitivity as well as the ability to temporally resolve variants and to relate sequenced genotypes to optically observed cellular phenotypes. Despite the high-quality data and unique capabilities offered by this method, it has so far been underused due to a need for complex, microfluidic-based cell collection. We demonstrate novel protocols for performing lineage sequencing that enable easy adoption of the method without the need for highly specialized equipment or expertise. 
In addition, we expand the repertoire of mutations measurable with the technique to include indels and variants that arise specifically in response to a genotoxic treatment. The methods we show can be applied to reveal novel findings regarding the causes and consequences of DNA damage and mutagenesis that underlie numerous genetic diseases.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reconfigurable and Interference-Tolerant Receivers for Next Generation Wireless Systems</title>
<link href="https://hdl.handle.net/1721.1/164585" rel="alternate"/>
<author>
<name>Araei, Soroush</name>
</author>
<id>https://hdl.handle.net/1721.1/164585</id>
<updated>2026-01-21T03:24:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reconfigurable and Interference-Tolerant Receivers for Next Generation Wireless Systems
Araei, Soroush
An “all-in-one” radio, programmable across the sub-7 GHz spectrum, offers significant hardware efficiency for 5G systems. However, addressing strong interferers in this wide and congested spectrum remains a major design challenge. N-path filters offer a promising solution for efficiently suppressing interference, thanks to their clock-controlled reconfigurability and excellent linearity against in-band and adjacent-channel blockers. While widely adopted in modern receiver architectures, these switched-capacitor circuits remain inherently vulnerable to blockers at clock harmonics, due to their hard-switching nature. These blockers, common in 5G bands, pose a key bottleneck, delaying the realization of fully integrated multi-band, multi-mode radios. This dissertation introduces fully passive topologies to address this challenge. The first design leverages simultaneous charge sharing and capacitor stacking to implement harmonic rejection filtering. It operates entirely without active circuitry and exhibits exceptionally low loss. A second-generation technique, termed “harmonic reset switching”, builds on this approach by rejecting harmonic blockers directly at the driving point of the N-path filter, achieving superior performance with reduced circuit complexity. As a result, existing reconfigurable receiver topologies can be seamlessly transformed into harmonic blocker–resilient architectures. For example, a taped-out mixer-first receiver adopting this technique achieves a 100× improvement in third-harmonic blocker tolerance compared to state-of-the-art broadband receivers. This dissertation also proposes a reconfigurable receiver for IoT-class radios that is tolerant to both close-in and far-out blockers. A scalable clock bootstrapping technique is introduced to enhance linearity while maintaining both power and cost efficiency. All designs are validated through prototypes fabricated in advanced 22-nm and 45-nm silicon-on-insulator (SOI) technologies. 
By addressing this long-standing challenge, this work paves the way for fully reconfigurable, interference-resilient radios for 5G and beyond.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Video as the Language of Embodied Intelligence</title>
<link href="https://hdl.handle.net/1721.1/164584" rel="alternate"/>
<author>
<name>Chen, Boyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/164584</id>
<updated>2026-01-21T03:24:38Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Video as the Language of Embodied Intelligence
Chen, Boyuan
Achieving general-purpose embodied intelligence remains a central challenge in artificial intelligence. While recent efforts have extended Large Language Models (LLMs) to robotics by incorporating additional modalities, these adaptations face critical limitations in perception, grounding, and control. For example, spatial reasoning—a simple yet indispensable capability for robots—reveals one such shortcoming clearly: multimodal LLMs often fail even basic spatial perception tasks like estimating distances. This thesis begins by examining these failures through SpatialVLM, a system that augments vision-language models with 3D spatial reasoning. Although more effective in spatial estimation, this work reveals a deeper issue: the fundamental expressive limitations of language-only outputs in capturing sensorimotor dynamics. Based on these findings, the thesis advocates for a ground-up methodology for robot foundation models, starting with identifying an appropriate “language” for embodied AI, then architecting models and training regimes accordingly. We investigate video as the foundational language, integrated with model-based planning for decision-making. This new paradigm is instantiated through two core contributions. The first is Diffusion Forcing, a hybrid modeling framework that combines causal next-token prediction with full-sequence diffusion. This approach supports stable, coherent rollouts far beyond the training horizon and allows guided generation for decision-making tasks, bridging predictive modeling and planning. Building on Diffusion Forcing, we introduce the Diffusion Forcing Transformer (DFoT), a natural architectural extension designed for flexible video generation conditioned on variable-length histories. To further support long-horizon world-modeling, we propose History Guidance, a set of techniques that enhance sample fidelity, temporal consistency, and compositional generalization. 
Together, these methods enable robust modeling of visual dynamics across extended timeframes. Finally, we present a preliminary yet promising video foundation model for zero-shot robot motion planning, highlighting the potential of video as the foundational language of embodied intelligence.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biologically Interpretable Representation Learning for Mechanistic Insights into Cancer Immunotherapy Resistance</title>
<link href="https://hdl.handle.net/1721.1/164583" rel="alternate"/>
<author>
<name>Tariq, Ifrah</name>
</author>
<id>https://hdl.handle.net/1721.1/164583</id>
<updated>2026-01-21T03:24:22Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Biologically Interpretable Representation Learning for Mechanistic Insights into Cancer Immunotherapy Resistance
Tariq, Ifrah
Resistance to immune checkpoint inhibitors (ICIs) remains a critical barrier to effective cancer therapy, driven by complex, multi-scale interactions that current biomarkers often fail to capture. This dissertation introduces the Biologically Disentangled Variational Autoencoder (BDVAE)—an interpretable deep learning framework designed to uncover mechanistic drivers of ICI resistance through multi-omic data integration. Using RNA-seq and whole-exome sequencing data from 366 patients across melanoma, renal cell, urothelial, and gastric cancers, BDVAE learns low-dimensional latent representations that are both predictive of response and biologically meaningful. The model reveals distinct latent dimensions aligned with immune regulation, tumor-intrinsic signaling, metabolism, and neuroimmune interactions. SHAP-based interpretation and pathway analysis highlight key resistance-associated programs, including immunosuppressive cytokine signaling, metabolic signaling, and neuroactive pathways such as calcium and cAMP signaling. Unsupervised clustering identifies three tumor subtypes—responder-dominant, non-responder-dominant, and an intermediate group—suggesting plastic or transitional immune states. Survival analyses confirm the clinical relevance of these clusters and expose heterogeneity within standard RECIST categories. Overall, this work presents a novel, interpretable framework for modeling ICI response, offering insights into resistance mechanisms and actionable paths for biomarker discovery, patient stratification, and therapeutic innovation in precision immuno-oncology.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometric interpretations of structural demand for the analysis and reduction of design complexity</title>
<link href="https://hdl.handle.net/1721.1/164582" rel="alternate"/>
<author>
<name>Lee, Keith Janghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/164582</id>
<updated>2026-01-21T03:24:19Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Geometric interpretations of structural demand for the analysis and reduction of design complexity
Lee, Keith Janghyun
This dissertation presents a computational framework to effectively interpret the distribution of structural demand that emerges from the design of large-scale structural systems, and develops methods for its quantification and manipulation. Structural demand is the required strength and geometry of individual building components that emerges from design as a result of global geometry, topology, and loading. Existing metrics of structural performance fail to consider how variations in demand at the component level can lead to designs that are theoretically efficient but difficult to construct. This has led to a rejection of low-carbon, high-performance design solutions in practice, or the need for extensive post-hoc rationalization, both under the presumption of untenable design complexity for conventional building practices. This dissertation argues that an explicit consideration of the distribution of induced structural demand can bridge this gap between design intent and construction feasibility.&#13;
&#13;
To achieve this, structural demand is interpreted as sets of geometric objects in n-dimensional feature spaces, where each dimension represents an independent component of demand, such as area, length, or stiffness. By directly visualizing the spatial distribution of demand, designers are presented with a richer context of non-physical structural design information, and can evaluate how decisions in structural form affect this distribution. Further, spatial interpretations of information allow for spatial metrics of similarity and variation to be defined, from which quantitative measures of design complexity are derived that account for the shape and distribution of demand. This framework, named Demand Space Analysis, is explored in depth and applied to a range of structural scales, from the demand of truss elements and their connections, to the relationship between demand and fixed sets of capacity. Advancements in structural optimization are also presented, enabling more efficient and direct minimization of modern structural performance metrics, from which the relationship between design performance and demand complexity can be explored. Through case studies in each chapter, this dissertation demonstrates how geometric analysis of structural demand information can inform the designer of the implications of decisions on the perceived complexity of design, and provides tools for its quantification and reduction.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Responses to Congestion Pricing in New York&#13;
City: Mode Shift, Preference Change, and Effect Persistence</title>
<link href="https://hdl.handle.net/1721.1/164581" rel="alternate"/>
<author>
<name>Shen, ChenAn</name>
</author>
<id>https://hdl.handle.net/1721.1/164581</id>
<updated>2026-01-21T04:07:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Behavioral Responses to Congestion Pricing in New York&#13;
City: Mode Shift, Preference Change, and Effect Persistence
Shen, ChenAn
This thesis examines the behavioral impacts of New York City’s congestion pricing policy on weekday peak-hour travel into the pricing zone. Using a two-stage Bayesian Multinomial Logit framework applied to monthly aggregate mobility data, the study disentangles underlying preference shifts from observed mode share changes in response to the toll. Stage 1 estimates population-level travel sensitivities to cost and time, while Stage 2 uses a hierarchical structure to capture heterogeneity across demographic segments defined by income, age, and gender. The analysis spans January–June 2025 and compares results to the same months in 2024 as a counterfactual scenario without pricing. Findings show that while the policy generated a sustained mode shift away from private automobiles toward public transit, preference adaptation varied by demographic group and evolved over time. Some cohorts reinforced the intended policy effects through reduced transit travel time sensitivity, while others exhibited partial reversal as cost sensitivity shifted. These dynamic patterns underscore the importance of evaluating both immediate and evolving behavioral responses when designing congestion pricing strategies and highlight the value of aggregate behavioral modeling for timely, data-driven policy assessment.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Inhabited Arctic: Architecture, Time, and the Making of the Past in the Bering Strait (1760–1980)</title>
<link href="https://hdl.handle.net/1721.1/164580" rel="alternate"/>
<author>
<name>Springstubb, Phoebe</name>
</author>
<id>https://hdl.handle.net/1721.1/164580</id>
<updated>2026-01-21T03:24:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Inhabited Arctic: Architecture, Time, and the Making of the Past in the Bering Strait (1760–1980)
Springstubb, Phoebe
Our view of antiquity is not objective. From the eighteenth century on, the same actors and institutions involved in colonizing the Arctic shaped understandings of its deep past. Commercial whalers erected outposts on the Arctic Ocean’s edges; miners stripped tundra; trading companies raised forts. The demands of these projects complicated the Western imperial fiction of an Arctic without a past. Grappling with Arctic terrain, foreigners were confronted by a landscape inhabited not only by people and animals but by time and temporal imaginations that long preceded European colonization. They encountered contemporary Indigenous settlements coexisting with ancestral houses, fossil animals, the ruins of earlier colonial ventures, and ancient routes of exchange. This dissertation, centered on the Bering Sea and its adjacent geographies of eastern Siberia and Arctic North America, tells the story of how imperial upheaval and the rooting of colonial projects in the ground sparked a deliberate historiographic project to write the Arctic’s deep past. At the heart of this project was a conflict of different cultural views of time. Who had the right to narrate history in these northernmost borderlands? In episodes spanning two centuries, from the Russian empire’s claim to the Bering Sea to the rise of modern decolonial movements, this dissertation traces the central role of diverse Native architectures and technologies. Iñupiaq houses built from great whale skeletons, Unangax watercraft hewn from circulating driftwood, and Chukchi ice cellars carved into permafrost were both prisms for temporal explanations and sites driving change. Russian colonial administrators, British geologists, US ethnographers, Orthodox priests, and Soviet engineers co-opted them to the lineal, geological, eschatological, and paleolithic time that scaffolded imperial projects. 
Simultaneously, these material practices were vital sites for reinvention and identity, where Native nations built futures out of rupture. Illuminating how the ecological and epistemic limits to empire-building spurred new theories of Arctic time, this project shows history-making to be a crucial tool different states adopted to justify and naturalize their possessions of Native lands. At stake was not static historical truth but how politically situated temporalities structured their present-day actions. The ethical dimensions of deep time, imagined from the Bering Strait’s modern lands and seas, empowered empire’s practical work. How the past was conceived in different intellectual traditions informed whether animals and plants were exploitable resources or ancestors giving their bodies to architecture. This project contends that how people understood themselves as being in time was a decisive fulcrum ordering collective beliefs in what was owed to a larger, nonhuman world. Taking time as an analytical lens, this dissertation identifies repeated efforts to cleave the Arctic’s human history from nature’s past. Used to justify a wide range of colonial hierarchies and violence in the long nineteenth century, it underlies a contemporary bias toward seeing the Arctic as a region of deep naturalism. Viewed as a place where an “extreme” climate dominates manifold other historicities, the past so circumscribed continues to shape future possibilities.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model</title>
<link href="https://hdl.handle.net/1721.1/164579" rel="alternate"/>
<author>
<name>Gamble IV, James Monroe</name>
</author>
<id>https://hdl.handle.net/1721.1/164579</id>
<updated>2026-01-21T04:07:33Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model
Gamble IV, James Monroe
This paper examines how asset limits in means-tested welfare programs shape household saving behavior. I exploit cross-state variation in Temporary Assistance for Needy Families (TANF) asset limits by linking these limits to individual-level data from the Survey of Income and Program Participation (SIPP) and estimating ordinary least squares (OLS) regressions with state and year fixed effects. I find that a $1 increase in the liquid asset limit corresponds to a $0.75 decrease in non-housing wealth among single mothers without a high school diploma. This suggests that less stringent asset tests reduce incentives to save, consistent with models in which more generous public insurance lowers the need for precautionary saving.&#13;
&#13;
To interpret these findings, I develop a dynamic life-cycle model of saving under income and medical expense risk, calibrated to key moments from the Hubbard, Skinner, and Zeldes framework. The model embeds Medicaid-style transfer rules and a guaranteed consumption floor. Simulations indicate that a $7,000 consumption floor can reduce median assets by up to 20% among low-education households, reflecting a decrease in self-insurance as public support increases. I then extend the model to include Achieving a Better Life Experience (ABLE) accounts, which are tax-advantaged savings vehicles for individuals with disabilities exempt from means testing. Simulations indicate that ABLE eligibility increases early-life consumption by approximately $10,000 and reduces retirement savings, with account holders shifting more spending into their working years. Together, these results yield a direct mapping from policy levers, including asset-limit generosity, earnings disregards, childcare subsidies, and ABLE exemption rules, to predicted shifts in median household assets. This offers policymakers a practical tool to balance public insurance and private precautionary savings.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building the 3D Genome from the Ground Up: Local Interactions Give Rise to Global Order</title>
<link href="https://hdl.handle.net/1721.1/164578" rel="alternate"/>
<author>
<name>Athreya, Advait</name>
</author>
<id>https://hdl.handle.net/1721.1/164578</id>
<updated>2026-01-21T03:23:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Building the 3D Genome from the Ground Up: Local Interactions Give Rise to Global Order
Athreya, Advait
The three-dimensional organization of the genome within the nucleus plays a central role in determining gene regulation and establishing cellular identity, but the mechanisms by which local molecular interactions give rise to global chromatin architecture remain an active area of study. Interactions between nucleosomes—modulated by histone tail post-translational modifications, histone sequence variants, and the DNA sequence itself—are thought to be a major driver of this emergent structure. In this thesis, I address the question of how these intrinsic physicochemical properties of nucleosomes drive the formation of large-scale structures such as chromatin compartments. I develop a theoretical framework based on Flory-Huggins solution theory to derive pairwise internucleosome contact energies from the results of condense-seq, a novel experimental technique that measures the phase separation likelihood of native nucleosomes. I then use these derived energies to parameterize coarse-grained molecular dynamics simulations of chromatin at various resolutions, ranging from 25 kb segments to simulate an entire chromosome, down to individual nucleosomes to simulate up to 10 Mb genomic regions. These simulations demonstrate that the intrinsic nucleosome properties alone can capture a significant degree of A/B compartment formation observed in Hi-C experiments, despite the deliberate exclusion of all other factors such as loop extrusion and transcription-factor-mediated phenomena. This finding establishes that local nucleosome properties play a fundamental role in genome organization. To capture more detailed chromatin physics, I develop an extended chromatin force-field that incorporates anisotropic nucleosome stacking interactions and linker DNA properties using a novel approach for simulating reversible bond formation in molecular dynamics.
This model reveals how nucleosome stacking strength, linker DNA geometry, and torsional stress collectively influence higher-order structures. Early results show that the linker-length-dependent DNA torsion contributes to nematic ordering of chromatin, consistent with experimental studies. Future development of this model will enable probing of discrete domain formation observed in imaging studies. Finally, I address a critical consideration for researchers in the chromatin organization field when analyzing Hi-C results. I compare two software tools — cooltools and dcHiC — highlighting the importance of careful parameter selection and analytical choices in designing workflows to ensure reproducible research. Taken together, this work establishes a quantitative, bottom-up modeling framework that directly links the local physicochemical properties of nucleosomes to the global principles governing three-dimensional genome organization. It provides a complementary approach to more data-driven top-down models that have made significant inroads but are challenging to interpret mechanistically. With further development, the work presented in this thesis will contribute towards predicting the structural consequences of specific epigenetic modifications and move us closer to understanding the molecular grammar of chromatin and its role in cellular function and disease.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Subannual Variability in the Abyssal Ocean</title>
<link href="https://hdl.handle.net/1721.1/164577" rel="alternate"/>
<author>
<name>Chen, Si Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/164577</id>
<updated>2026-01-21T03:23:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">On Subannual Variability in the Abyssal Ocean
Chen, Si Yuan
The abyssal ocean is a critical yet understudied component of the climate system and is of growing economic interest. This thesis combines field observations and numerical modeling to advance our understanding of subannual variability in the abyssal ocean and its broader implications.&#13;
&#13;
First, hydrographic measurements from the Clarion-Clipperton Zone of the tropical Northeastern Pacific are used to characterize the structure and variability of the bottom mixed layer (BML) in a region targeted for deep-sea mining. The observations reveal a spatially and temporally variable BML with a mean thickness of ~250 m that is influenced by interactions with mesoscale eddies and abyssal thermal fronts. A simplified model of sediment transport suggests that such variations in BML structure could significantly influence the dispersal of sediments resuspended by seabed mining activities.&#13;
&#13;
Second, idealized model experiments are conducted to explore the genesis of benthic storms – episodes of strong near-bottom flows and sediment entrainment – underneath an unstable, surface-intensified jet resembling the Gulf Stream east of Cape Hatteras. In these experiments, the baroclinic instability of the jet gives rise to deep cyclonic and anticyclonic eddies through eddy barotropization and to high levels of eddy kinetic energy at abyssal depths through the convergence of vertical eddy pressure fluxes. The near-bottom currents are comparable in magnitude to those observed during benthic storms, with vertical shears strong enough to produce BMLs up to O(100) m thick. Deep cyclonic eddies transport particles from near the bottom over the entire BML and could contribute to benthic nepheloid layers. The results suggest that the abyssal response to the intrinsic instability of surface-intensified currents could contribute significantly to subannual variability near the seafloor.&#13;
&#13;
Third, a model simulation of western North Atlantic circulation is performed to study the deep cyclones (DCs) observed beneath Gulf Stream meander troughs. The characteristics of the simulated DCs compare well with field observations. The negative pressure tendency during cyclogenesis arises from a small imbalance between the sea surface depression and the vertically integrated increase in seawater density. Vortex stretching is the primary source of cyclonic vorticity, while vortex tilting is a non-negligible sink. The deep pressure tendency, vorticity fluxes, and ageostrophic flows are diagnosed, and their similarities and differences with mid-latitude synoptic cyclones in the atmosphere are discussed. Near-bottom currents in DCs dominate the basin-scale bottom energy dissipation and transport fluid over ≥1000 km horizontally and O(100) m vertically within 3–4 months, suggesting that they provide an efficient mechanism for tracer and material transport in the abyssal interior.&#13;
&#13;
Collectively, this thesis highlights the importance of transient, mesoscale processes in contributing to subannual variability in the abyssal ocean, particularly near the seafloor. The findings have broader relevance for monitoring the environmental impacts of human activities, including deep-sea mining and carbon sequestration. While further questions remain for future investigation, this work underscores the need for sustained in-situ observations in the abyssal ocean and calls for the implementation of high vertical resolution in numerical ocean circulation models.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Democratizing High-Performance DSL Development with the BuildIt Framework</title>
<link href="https://hdl.handle.net/1721.1/164576" rel="alternate"/>
<author>
<name>Brahmakshatriya, Ajay</name>
</author>
<id>https://hdl.handle.net/1721.1/164576</id>
<updated>2026-01-21T03:23:34Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Democratizing High-Performance DSL Development with the BuildIt Framework
Brahmakshatriya, Ajay
Modern high-performance software from a variety of domains relies on hand-written and hand-optimized libraries to obtain the best performance. Besides general fine-grained operators that can be composed to write entire applications, these libraries also provide coarser-grained fused and hand-optimized operators that are much faster due to being optimized for a specific sequence of operations. However, as application needs keep growing, library writers are not able to keep up and must trade off either performance or generality. Domain-specific languages (DSLs) can break this tradeoff by automatically generating the best implementation for any arbitrary sequence of operations specified by the end user. However, DSL compilers face a bigger challenge: they require substantial compiler knowledge to implement parsers, IRs, analyses and transformations, and code generation, which lies outside the expertise of a typical domain expert. To make compiler technology and the benefits of code generation more accessible to domain experts, I propose the use of multi-stage programming to allow library writers to write library-like code while also combining it to generate the most efficient implementation for any whole program. In this thesis, I discuss the design of different multi-stage programming systems and their benefits and drawbacks. Next, I propose Re-Execution Based Multi-Staging (REMS), which addresses a critical flaw in many imperative multi-staging systems: the side-effect leak problem. I introduce BuildIt, an implementation of REMS in C++, one of the most popular languages for writing high-performance applications, realized in a type-based, lightweight way without changing the compiler. I describe the internals of BuildIt and how it implements the key features of REMS. Furthermore, I describe a set of extensions implemented on top of BuildIt that facilitate the development of high-performance DSLs with ease. 
I show the application of BuildIt to create three DSLs—EasyGraphit, NetBlocks, and BREeze—that target graph analytics, ad-hoc network protocol generation, and regex matching. These case studies show a 10-100x reduction in the effort required to implement DSLs that perform on par with or better than state-of-the-art compiler frameworks while targeting diverse architectures like CPUs and GPUs. Finally, I introduce D2X, a system designed to add extensible and contextual debugging support to DSL implementations without requiring any changes to off-the-shelf debuggers or dealing with complex debugging formats. I then show how applying D2X to the BuildIt system greatly improves the debugging experience for all DSLs written with BuildIt.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailoring Li₄Ti₅O₁₂ Thin Film Carrier Kinetics Through Solid Solution Doping for Battery and Memristor Applications</title>
<link href="https://hdl.handle.net/1721.1/164575" rel="alternate"/>
<author>
<name>Buzzell, Drew E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164575</id>
<updated>2026-01-21T03:23:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Tailoring Li₄Ti₅O₁₂ Thin Film Carrier Kinetics Through Solid Solution Doping for Battery and Memristor Applications
Buzzell, Drew E.
Lithium titanate, Li₄Ti₅O₁₂ (LTO4), is a promising anode material for solid-state battery (SSB) applications due to its zero-strain behavior during cycling, excellent chemical stability, and cyclability. As a thin film, its applications expand to integrated circuits, sensors, flexible batteries, IoT devices, and memristors. Across these, precise control of mixed Li⁺ ionic–electronic transport is vital. While dopants have been shown to improve electron conduction and Li⁺ diffusion in LTO4 powders, thin-film studies remain limited. To bridge this gap, we investigate solid solution dopants (Nb⁵⁺, V⁵⁺, Mg²⁺, Cu²⁺) and their effects on LTO4 thin-film kinetics and performance in batteries and memristors. Films doped with Mg, Cu, Nb, and V at a 0.2 M dopant concentration were deposited on Nb-doped SrTiO₃ substrates. Cyclic voltammetry and impedance spectroscopy show that Mg, Nb, and V improve kinetic metrics, while Cu reduces diffusivity but boosts electronic conductivity. Through galvanostatic cycling-based capacity, rate capability, and stability measurements, we found that while all dopants displayed enhanced rate performance, the capacity improved only with Mg, Nb, and V. Furthermore, the Mg-doped film exhibited an unstable capacity, leaving Nb- and V-doped thin films as the best overall performing battery anodes. For memristors, current–voltage cycling measurements revealed that devices doped with low concentrations (0.05 M) of Cu and Nb showed the largest improvements in cycle-to-cycle stability, switching ON-voltages, and ON-OFF current ratios, as well as the smallest loss in peak current with increasing scan rate. With increasing dopant concentration, however, device performance declined. In summary, the inclusion of dopants in LTO4 at the right concentration level improves both battery and memristor performance, enabling single-material, multi-functional systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease</title>
<link href="https://hdl.handle.net/1721.1/164574" rel="alternate"/>
<author>
<name>Burgos Robles, Emanuel Felipe</name>
</author>
<id>https://hdl.handle.net/1721.1/164574</id>
<updated>2026-01-21T04:08:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease
Burgos Robles, Emanuel Felipe
The gut microbiome plays a critical role in inflammatory bowel diseases (IBDs), yet current analyses treat bacterial species as functionally uniform, ignoring extensive strain-level diversity that may drive disease mechanisms. Here, we developed a strain-resolved metatranscriptomics framework to investigate how transcriptional activity varies across bacterial lineages and relates to IBD pathogenesis. Using paired metagenomics and metatranscriptomics data from 1,067 fecal samples (103 IBD and 335 non-IBD patients), we first constructed phylogenetic trees for over 250 bacterial species using the single nucleotide variants within essential housekeeping genes, enabling the identification of bacterial strains. Next, we devised a statistical approach to assign mRNA reads to these strains, leveraging the natural genetic variation that is present across them. Our analysis revealed that closely related bacterial strains exhibit dramatically different transcriptional programs, with some strains enriched in IBD patients showing upregulation of genes involved in stress response, sugar metabolism pathways, and antimicrobial resistance. Notably, we identified transcriptionally active but genomically low-abundance taxa, highlighting the importance of measuring the transcriptional activities of strains beyond species composition. Lineage-aware differential expression analysis uncovered strain-specific adaptations to inflammatory environments. This strain-resolved approach provides a powerful framework for understanding microbial functional heterogeneity and identifying specific bacterial lineages that may contribute to disease pathogenesis, potentially guiding more targeted microbiome-based therapeutic interventions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Role of CH–π Interactions in Protein-Carbohydrate Binding</title>
<link href="https://hdl.handle.net/1721.1/164573" rel="alternate"/>
<author>
<name>Keys, Allison M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164573</id>
<updated>2026-01-21T03:24:35Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Role of CH–π Interactions in Protein-Carbohydrate Binding
Keys, Allison M.
Protein-carbohydrate binding is essential for biological processes, including cellular recognition and immune signaling. Binding is driven by several types of non-covalent interactions: hydrogen bonding, metal ion coordination, and the less well-understood CH–π interactions. CH–π interactions are pervasive in protein-carbohydrate binding sites and have emerged as critical drivers of protein–carbohydrate recognition; however, the energetics of CH–π stacking interactions, their orientational landscapes, and their interplay with other non-covalent interactions have been unclear. &#13;
In this thesis, I identified carbohydrate-aromatic CH–π stacking interactions from crystallographic structures in the Protein Data Bank. I performed quantum mechanical calculations to quantify interaction energies and found that CH–π stacking interactions can be more favorable than hydrogen bonds. Using atomistic simulations, I also demonstrated that CH–π stacking interactions are necessary for human galectin-3 binding to lactose. To assess the orientational landscape of CH–π stacking interactions, I evaluated the orientations of CH–π stacking interactions formed by β-D-galactose and found that numerous orientations are highly favorable. I then identified carbon atom distances that define an orientational landscape for these interactions. To assess the interplay between non-covalent interactions in protein-carbohydrate binding sites, I used CH–π distance features to bias metadynamics simulations of a curated set of protein–β-D-galactoside complexes. From these simulations, I found that while bound carbohydrates sample many CH–π stacking orientations, the hydrogen bonds in the protein binding site drive the optimal orientation of each ligand. Longer carbohydrate ligands with more hydrogen bonding constraints have more specific orientational dependence, while ligands in binding sites with a reduced number of hydrogen bonds occupy a broader range of orientations. Unlike hydrogen bonds, CH–π stacking interactions confer orientational flexibility: enzymes can exploit multiple CH–π stacking interactions to facilitate the translocation of polysaccharide substrates. Extending this analysis to other carbohydrates, I showed that carbohydrate stereochemistry drives the orientational preferences of CH–π stacking interactions; however, there is also a tradeoff between the presence of hydrogen bonds to charged amino acids and the CH–π interaction strength for each carbohydrate. 
Overall, this thesis demonstrates that CH–π interactions are favorable and confer high orientational flexibility and that hydrogen bonds act in concert with CH–π interactions to stabilize protein-carbohydrate binding. Tuning the number and positions of these interactions through protein engineering should alter protein selectivity and ligand movement in protein binding sites.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Networking using Waveguide Quantum Electrodynamics</title>
<link href="https://hdl.handle.net/1721.1/164572" rel="alternate"/>
<author>
<name>Almanakly, Aziza</name>
</author>
<id>https://hdl.handle.net/1721.1/164572</id>
<updated>2026-01-21T03:23:30Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Quantum Networking using Waveguide Quantum Electrodynamics
Almanakly, Aziza
The architectural principle of modularity enables the construction of complex systems from simpler components, each responsible for a particular function. The quantum computer is an intricate system comprising fragile, error-prone parts known as qubits. Entanglement distribution across a network of non-local processing modules facilitates robust and extensible quantum computation. In modular quantum architectures, photons are natural quantum information carriers which propagate through interconnects between processing nodes. In this thesis, we engineer a quantum interconnect between superconducting modules underpinned by the physics of waveguide Quantum Electrodynamics (wQED). First, we realize a multi-qubit module that exploits quantum interference to emit microwave photons into a waveguide with a specified propagation direction. Next, we construct the quantum interconnect by coupling two modules to a common waveguide and demonstrate directional (chiral) photon emission and absorption. Finally, using this chiral quantum interconnect, we generate remote entanglement, establishing a key resource for distributed quantum computation in an all-to-all network architecture.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment</title>
<link href="https://hdl.handle.net/1721.1/164571" rel="alternate"/>
<author>
<name>Xu, Bangjie</name>
</author>
<id>https://hdl.handle.net/1721.1/164571</id>
<updated>2026-01-21T04:08:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment
Xu, Bangjie
This thesis presents an innovative methodology using Large Language Model-based methods to extract and quantify housing regulations from municipal zoning codes, making possible the most comprehensive examination of regulatory costs at the municipal level across California to date. A multi-staged extraction framework is devised that delivers 85-95% accuracy in the identification and standardization of complex regulatory requirements from legal documents. Applying this methodology to over twenty California cities over the period 2015-2025, it is estimated that regulatory constraints raise the cost of developing a housing unit by roughly 5% to 10% ($50,000 to $100,000+), with the most acute constraints in the state’s coastal metros. The method also shows that regulatory costs reduce housing supply elasticity from 1.24 in low-regulation jurisdictions to 0.08 in high-regulation areas. The LLM-based framework allows us to conduct analyses at an unprecedented scale and granularity and to reveal, for example, that regulatory relaxation through streamlining policies such as the Los Angeles Transit Oriented Communities program boosts housing production in eligible zoned areas by 43%. This study makes significant contributions to the restructuring of California’s housing regulation system in response to the affordability crisis, and its methodology presents a replicable tool for regulatory analysis in other policy domains.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Essays in Macro-Finance</title>
<link href="https://hdl.handle.net/1721.1/164570" rel="alternate"/>
<author>
<name>Batista, Quentin</name>
</author>
<id>https://hdl.handle.net/1721.1/164570</id>
<updated>2026-01-21T03:24:44Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Essays in Macro-Finance
Batista, Quentin
In Chapter 1 (joint with J.R. Scott), we revisit the high-frequency and narrative approaches to estimating the effects of monetary policy shocks. We find that state-of-the-art estimates using both approaches are biased: high-frequency estimates due to nonlinear predictability and narrative estimates due to regularization. To correct for the bias in these approaches, we propose a new estimation procedure called LP-DML that combines ideas from double/debiased machine learning with the local projections framework. We find that LP-DML results in significantly smaller effects of monetary policy on macroeconomic outcomes. In Chapter 2 (joint with Taisuke Nakata and Takeki Sunakawa), we study the following question: how can a central bank credibly implement a “lower-for-longer” strategy? To answer this question, we analyze a series of optimal sustainable policy problems—indexed by the duration of reputational loss—in a sticky-price model with an effective lower bound (ELB) constraint on nominal interest rates. We find that, even when it lacks commitment, the central bank can still credibly keep the policy rate at the ELB for an extended period—though not as extended as under the optimal commitment policy—and meaningfully mitigate the adverse effects of the ELB constraint on economic activity. In Chapter 3, I examine the impact of central bank real estate purchases on financial markets, focusing on the Bank of Japan’s (BoJ) intervention in the Real Estate Investment Trust (REIT) market. Using a regression discontinuity design that exploits a discontinuity in the BoJ’s policy rule, I find that a typical intervention — amounting to about 0.014% of market capitalization — leads to an increase of 0.1% to 0.2% in REIT prices in the hours following the intervention. However, at longer horizons, the interventions do not have a significant effect on REIT prices. 
These findings suggest that the BoJ did not achieve the program’s intended objective of significantly reducing the risk premium on real estate assets.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward an Integrative Study of Human-AI Interaction</title>
<link href="https://hdl.handle.net/1721.1/164569" rel="alternate"/>
<author>
<name>Alsobay, Mohammed</name>
</author>
<id>https://hdl.handle.net/1721.1/164569</id>
<updated>2026-01-21T03:24:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Toward an Integrative Study of Human-AI Interaction
Alsobay, Mohammed
As artificial intelligence (AI) systems are increasingly embedded in the workflows of individuals and groups, designers and researchers of human-AI interaction (HAI) navigate a vast design space of possible configurations, making decisions that span algorithmic parameters, interface choice, and interaction protocols. This thesis develops an integrative approach that examines how design factors combine and interact to determine the outcomes of human-AI collaboration. &#13;
&#13;
Chapter 1 synthesizes prior HAI research into a coherent design space framework encompassing algorithms, interfaces, users, and task settings, motivating a research program for systematic exploration of interdependencies between these factors. Chapters 2 and 3 turn to group-AI interaction through large-scale behavioral experiments. Chapter 2 investigates how social information---both direct conversation and peer behavior indicators---affects individual reliance on algorithmic decision support. The study reveals that while social information modulates the effects of performance feedback and model explanations on reliance, it does not improve predictive accuracy, illuminating critical tensions between social mechanisms and system design. Chapter 3 examines large language models as facilitators of group deliberation in hidden profile tasks. While LLM facilitation increased information sharing volume, density, and breadth, it did not improve decision quality, highlighting fundamental challenges in group-AI system design beyond information aggregation.&#13;
&#13;
Chapter 4 advances an integrative approach to HAI research, emphasizing shared design spaces, systematic exploration strategies, and predictive models that generalize across contexts. The chapter provides methodological guidance and a tractable roadmap for advancing this integrative research agenda, laying the foundation for a more context-aware science of human-AI collaboration.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging design to build with less: Evaluating the embodied carbon reduction potential of architectural design across scales</title>
<link href="https://hdl.handle.net/1721.1/164568" rel="alternate"/>
<author>
<name>Feickert, Kiley</name>
</author>
<id>https://hdl.handle.net/1721.1/164568</id>
<updated>2026-01-21T03:24:40Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Leveraging design to build with less: Evaluating the embodied carbon reduction potential of architectural design across scales
Feickert, Kiley
Reducing embodied carbon (EC) in structural systems -- the most significant contributor to EC in a building -- is urgent to address the simultaneous need to reduce global warming and increase urban density. Much of the policy and research to date to reduce EC has focused on material-scale interventions or substitutions. However, EC depends on both 1) the carbon intensity of the processes used to manufacture construction materials and 2) the volume of raw materials required. Architects have significant agency to reduce the volume of structural materials in a building (and the resulting emissions), since the required quantity depends on design decisions they make, including column spacing, structural typology, massing, etc. To date, most methods used to estimate EC during early-stage design do not: 1) integrate with architects’ existing design workflows, 2) evaluate multiple material systems simultaneously, and/or 3) include structural analysis to estimate material quantities. This functionality is critical so that designers can understand which decisions EC is sensitive to and evaluate design and EC tradeoffs before significant carbon is locked in.&#13;
&#13;
To address this problem, this dissertation presents a method for transparent estimation of structural material quantities, intended to inform architectural design, policy, and other emerging EC standards. The method is then used to analyze, at the building scale, the effectiveness of emerging U.S. EC policies that focus on different scales of intervention. These policies are evaluated in isolation and in combination with strategic design levers that take advantage of structural mechanics to reduce material quantities for various building configurations and material systems. It finds that the most prominent policy approach, “Buy Clean” materials, only reduces EC by ~9% and ~16% for steel and concrete systems, respectively, compared to strategic design choices that have the potential to yield savings of up to ~79%. This dissertation also identifies building massing as a key lever in the EC outcomes of structural systems and proposes a method to quantify the impact of massing using automated structural design and analysis. It finds that in some situations, cantilevered massing typologies can be materialized for no carbon penalty if efficient configurations are used. Conversely, if inefficient configurations are used, they can incur a significant carbon penalty (2.4x) compared to normative massing. The presented results highlight the potential of design to reduce demand-side EC across scales.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalizable Robot Manipulation through Unified Perception, Policy Learning, and Planning</title>
<link href="https://hdl.handle.net/1721.1/164567" rel="alternate"/>
<author>
<name>Fang, Xiaolin</name>
</author>
<id>https://hdl.handle.net/1721.1/164567</id>
<updated>2026-01-21T03:23:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Generalizable Robot Manipulation through Unified Perception, Policy Learning, and Planning
Fang, Xiaolin
Advancing robotic manipulation to achieve generalization across diverse goals, environments, and embodiments is a critical challenge in robotics research. While the availability of data and large-scale training has brought exciting progress in robotic manipulation, current methods often struggle with generalizing to unseen, unstructured environments and solving long-horizon tasks. In this thesis, I will present my work in robot learning and planning that enables multi-step manipulation in partially observable environments, towards general-purpose embodied agents. Specifically, I will talk about my work in 1) constructing a modular framework that combines learned perception models for affordance estimation with task-and-motion planning (TAMP) for object rearrangement in unstructured scenes, 2) learning generative diffusion models of robot skills, which can be composed to solve unseen combinations of environmental constraints through inference-time optimization, and 3) leveraging large vision-language models (VLMs) to build task-oriented visual abstractions, allowing skills to generalize across different environments with only 5 to 10 demonstrations. Together, these approaches contribute to the generality and scalability of embodied agents towards solving real-world manipulation in unstructured environments.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards High-Dimensional Generalization in Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/164566" rel="alternate"/>
<author>
<name>Boopathy, Akhilan</name>
</author>
<id>https://hdl.handle.net/1721.1/164566</id>
<updated>2026-01-21T03:24:21Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards High-Dimensional Generalization in Neural Networks
Boopathy, Akhilan
Neural networks excel in a wide range of applications due to their ability to generalize beyond training data. However, their performance degrades on high-dimensional tasks without large-scale data, a challenge known as the curse of dimensionality. This thesis addresses this limitation by pursuing three key objectives aimed at understanding and improving neural network generalization. 1. We aim to investigate the scaling laws underlying generalization in neural networks, including double descent, a phenomenon in which, as a model’s capacity or training data is increased, the test error temporarily increases at a certain point before continuing to decrease. In particular, we will have two goals: 1) a better understanding of when double descent can and cannot be empirically observed and 2) a better understanding of scaling laws with respect to training time. 2. Inductive bias refers to the set of assumptions a learning algorithm makes to predict outputs on inputs it has not encountered. We propose quantifying the amount of inductive bias required for a model to generalize well with a fixed amount of training data. By developing methods to measure inductive bias, we can assess how much information model designers need to incorporate into neural networks to improve their generalizability. This quantification can guide the design of harder tasks that better test a model’s generalization. 3. Finally, we aim to develop new methods to enhance neural network generalization, particularly focusing on reducing the exponential number of training samples required for high-dimensional tasks. This involves creating algorithms and architectures that can learn effectively from limited data by incorporating stronger inductive biases. In particular, we will focus on two inductive biases: 1) learning features of the training loss landscape correlated with generalization and 2) using modular neural network architectures.
We expect that these techniques can improve generalization, particularly in high-dimensional tasks. Together, these contributions aim to deepen our theoretical understanding and develop practical tools for enabling neural networks to generalize effectively from limited data.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Control: Art, Technology, and the Politics of Distance (1966-1972)</title>
<link href="https://hdl.handle.net/1721.1/164565" rel="alternate"/>
<author>
<name>Wexelblatt, Nina Rrose</name>
</author>
<id>https://hdl.handle.net/1721.1/164565</id>
<updated>2026-01-21T03:23:42Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Remote Control: Art, Technology, and the Politics of Distance (1966-1972)
Wexelblatt, Nina Rrose
Platforms carrying dancers across a stage, doors sliding open as if by magic, and simultaneous Happenings in Berlin and Buenos Aires: remote control promised thrills as postwar artists experimented with technologies of distance. Focused on the half-decade between 1966 and 1972, this thesis intervenes in the history of art and technology to argue that a desire to activate the supposedly empty space between artist, art object, and audience effected a new fixation on the nature of that distanced interval, leading artists to incorporate actual remote control technologies into their work. This impulse grew from an unorthodox reading of the work of modernist painters, particularly Jackson Pollock. Where a generation of critics had canonized “presentness” and medium specificity, a younger cohort read the work differently, finding in it permission to embrace remoteness, intermedia experimentation, and political messaging. &#13;
&#13;
Artists including Robert Rauschenberg, Allan Kaprow, Marta Minujín, Wolf Vostell, and Carolee Schneemann, among others, undertook radical experiments with remote systems, often in collaboration with engineers. Theirs was not a technocratically neutral position; this thesis demonstrates that these artists consciously cast the “remoteness” enabled by new technologies as a charged concept, just as controlled distance emerged to define military and industrial relations on domestic, urban, and geopolitical scales. Remote control enabled artists to incorporate, not reject, the expanding frames of reference taking place outside of the sanctioned spaces of the art studio or gallery, from automation to satellite communications to warfare. Artists’ uses of remote technologies intentionally surfaced questions about critical power relations, tying the stakes of their work to debates about the future of U.S. social and economic control and development. In doing so, it also crystallized a newly diffuse, participatory artistic subject: the controller.&#13;
&#13;
The introduction theorizes “remote control” in historical and historiographic context. A second chapter follows Automation House (1970-1972), a Manhattan art space that combined labor mediation and media art to experiment with the American postindustrial labor economy to come. A third chapter centers on Three Country Happening (1966), which took place in New York, Buenos Aires, and Berlin, supposedly mediated by satellite—foiled by the uneven development of the Cold War-era satellite system itself. A fourth chapter delves into Snows (1967), a multimedia performance in protest of the war in Vietnam, which incorporated audience-controlled feedback sensors. A concluding discussion traces the ongoing nature of remote control as it implicates artists and audiences alike in a network of shared responsibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston</title>
<link href="https://hdl.handle.net/1721.1/164564" rel="alternate"/>
<author>
<name>Murphy, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/164564</id>
<updated>2026-01-21T04:08:06Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston
Murphy, Ryan
Boston is in the midst of a severe housing crisis, driven by decades of underproduction, rising construction costs, restrictive zoning, and an inelastic real estate market that has resulted in persistent affordability challenges. This thesis explores the untapped potential of city-owned land as a powerful tool to increase housing supply and affordability in Boston. Using Boston’s 2022 Citywide Land Audit and detailed development assumptions, the analysis estimates that between 19,000 and 31,000 new housing units could be constructed across city-controlled parcels, including between 3,200 and 6,100 affordable units under the current Inclusionary Development Policy. The research draws on case studies from peer cities such as Chicago and Atlanta where municipal land has been successfully leveraged through transparent disposition processes, fast-tracked entitlements, and flexible affordability models. It argues for a policy shift in Boston toward a more streamlined, market-aware, and scalable land release strategy that prioritizes speed, cross-subsidization, and financial feasibility. Key recommendations include expanding the Welcome Home, Boston program to include mixed-income and rental housing, implementing predictable RFP cycles, offering tax abatements, and expediting the entitlement process.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular Zipping for Transformable and Dynamic Systems</title>
<link href="https://hdl.handle.net/1721.1/164563" rel="alternate"/>
<author>
<name>Hagemann, Niklas</name>
</author>
<id>https://hdl.handle.net/1721.1/164563</id>
<updated>2026-01-21T04:08:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modular Zipping for Transformable and Dynamic Systems
Hagemann, Niklas
There is a need for products, machines and environments that can change shape, transform and evolve according to their use. This thesis proposes the design of a simple, modular actuator based on reversible folding and interlocking (zipping) of flexible 3D printed strips. The proposed zipper design allows for continuous control states between a compact and fully deployed state. The modular actuators can be integrated into a variety of systems to enable compact, shape- and stiffness-changing structures, robots and other devices. Designs are presented for single- and double-zipper modules using the same basic zipper design. The modules can be used as modular components of compact robotic systems with the ability to expand and contract according to their environment, or used as adjustable structural components to create deployable, shape- and stiffness-changing objects. The zipper design points the way towards simplified mono-material components that embed transformation and reversibility into everyday devices, products and spaces, enabling objects that are as easy to transform, reconfigure and reverse as they are to manufacture.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embodied Representation of Time in Virtual Reality</title>
<link href="https://hdl.handle.net/1721.1/164562" rel="alternate"/>
<author>
<name>Kim, Suwan</name>
</author>
<id>https://hdl.handle.net/1721.1/164562</id>
<updated>2026-01-21T04:08:03Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Embodied Representation of Time in Virtual Reality
Kim, Suwan
Recent advancements in 3D graphics and AI-assisted generative techniques have accelerated the creation of realistic scenes for immersive technologies, including virtual reality, yet most systems continue to encode time as a linear parameter, relying on timeline-based playback. Mesh-based representations are typically constrained by fixed topologies and rely on predefined animations, which limit their capacity to encode temporal change as a spatial or perceptual phenomenon. In reality, human experience of time is embodied and dynamic, perceived through interaction and memory. Existing digital systems fail to capture this dimension, reducing time to a passive parameter. This thesis proposes a framework for representing time as an embodied and spatial dimension within virtual reality by embedding it directly into the geometry and interaction logic of point cloud data. The system consists of three parts: (1) processing 2D images into layered volumetric point clouds to enable structural fluidity and temporally responsive spatial form; (2) enabling perceptual and spatial modulation in response to user distance and contact, with color influencing the character of change and opacity shaping its perceptual reveal at both global and local scales; and (3) enabling real-time visualization of the modulated point cloud through a custom pipeline optimized for mobile virtual reality. By embedding temporal dynamics directly into geometry and interaction logic, this thesis contributes a novel representational approach to spatiotemporal modeling in immersive systems. In doing so, it creates new opportunities for architectural visualization, interactive simulations, game design, and reimagining how we perceive and construct digital spaces.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Essays on Behavioral Economics and Sophisticated Procrastination</title>
<link href="https://hdl.handle.net/1721.1/164561" rel="alternate"/>
<author>
<name>Chen, Xi</name>
</author>
<id>https://hdl.handle.net/1721.1/164561</id>
<updated>2026-02-20T03:14:30Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Essays on Behavioral Economics and Sophisticated Procrastination
Chen, Xi
Procrastination is a widespread yet complex behavior that resists simple explanation. This dissertation integrates theoretical modeling with experimental evidence to examine procrastination through the lens of sophisticated decision-making. It reframes procrastination not merely as a deviation from rationality, but as a behavior shaped by strategic trade-offs, self-awareness, and individual heterogeneity. The first essay develops a theoretical model of Perfectionistic Procrastination, proposing that individuals with high internal standards may delay tasks not as a simple lapse in self-control, but as a strategic response to the anticipated costs of sustained effort. In this framework, deadlines act as external constraints that help perfectionists limit open-ended striving and bring tasks to completion. An accompanying experiment tests the model’s prediction and finds that perfectionists are more likely to prefer deadlines. These results suggest that, in some cases, procrastination may reflect a structured strategy rather than a purely irrational failure of self-control. The second essay explores the phenomenon of Sophisticated Procrastination, challenging traditional models that attribute procrastination to naïveté. Instead, it proposes that even individuals who are aware of their tendency to delay may struggle to act on that awareness. Two experimental studies using a menu-choice framework examine how people choose task timings. In Study 1, participants preferred earlier deadlines when flexibility was available but shifted toward later options when required to commit, revealing a gap between intention and action. Study 2 identified diverse patterns of deadline preferences: while many participants actively avoided the latest possible deadline, their hesitation to commit to any specific deadline suggests a deeper tension rooted in uncertainty or discomfort with commitment. 
These findings provide early empirical support for Sophisticated Procrastination, indicating that self-awareness alone may not be sufficient to overcome procrastination. The third essay introduces the idea of Prosocial Procrastination, describing the tendency to delay tasks that benefit others, such as charitable activities, more than those with self-interested outcomes. Using two distinct experimental designs, one based on conjoint analysis and the other on single-attribute choice, the studies show that individuals are more likely to prefer longer deadlines when working for a charity than when working for themselves. These findings offer suggestive evidence for Prosocial Procrastination and contribute to the growing literature on the intersection of social preferences and time preferences.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Patent Visibility and the Diffusion of Trapped Knowledge:&#13;
Evidence from US Grants</title>
<link href="https://hdl.handle.net/1721.1/164560" rel="alternate"/>
<author>
<name>Yao, Randol H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164560</id>
<updated>2026-01-21T04:08:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Patent Visibility and the Diffusion of Trapped Knowledge:&#13;
Evidence from US Grants
Yao, Randol H.
Valuable knowledge developed in one part of the world may remain “trapped” locally due to frictions in how knowledge is recognized and shared globally. This paper examines how granting US patents to foreign-origin inventions—by elevating their visibility and credibility—untraps the knowledge and facilitates global diffusion. Using examiner leniency as an instrument, complemented by a difference-in-differences design, I find that US grants of home country patents significantly increase both the likelihood and intensity of forward citations, including marked increases from third countries. A novel measure of “trappedness” reveals that knowledge from historically more trapped countries and sectors sees larger diffusion benefits after the US grants. These findings highlight the central role of the US as a platform of global knowledge recognition and diffusion, particularly in turning overlooked ideas into globally relevant innovations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of the East China Sea Continental Shelf&#13;
Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon</title>
<link href="https://hdl.handle.net/1721.1/164559" rel="alternate"/>
<author>
<name>Rafferty, Lieutenant Commander Keefe</name>
</author>
<id>https://hdl.handle.net/1721.1/164559</id>
<updated>2026-01-21T04:07:59Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterization of the East China Sea Continental Shelf&#13;
Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon
Rafferty, Lieutenant Commander Keefe
Submarine canyons have a proven and direct influence on continental shelf circulation and flow dynamics, especially in relation to western boundary currents. There are two key circulation features northeast of Taiwan on the East China Sea continental shelf: (1) the cold dome, a cyclonic feature that appears primarily in summer and is associated with upwelling, and (2) Kuroshio intrusions onto the continental shelf in the vicinity of Mien-Hua Canyon. This paper is a descriptive physical oceanography study focused on characterizing the circulation patterns northeast of Taiwan surrounding Mien-Hua Canyon, closely correlating these patterns with the migration of the Kuroshio and its variability and intrusions onto the southern East China Sea continental shelf, leading to the formation of the cold dome. The Institute of Oceanography at the National Taiwan University and WHOI executed a joint international field survey at Mien-Hua Canyon aiming to improve the understanding of canyon flow dynamics between the East China Sea continental shelf northeast of Taiwan and the Kuroshio as the North Pacific Gyre western boundary current. This joint oceanographic expedition expands on previous joint US/Taiwan physical oceanographic and ocean acoustic studies in the China Seas dating back to ASIAEX in the South China Sea during 2000-2001 and QPE in the East China Sea during 2008-2009. The strengthening and weakening of Kuroshio transport and intensity northeast of Taiwan is closely correlated with the timescales of mesoscale westward-propagating eddies arriving at the East Taiwan Channel. When a canyon has a Rossby number ~1 or a Rossby radius equivalent to the width of the canyon in a region of left-bounded flow, induced cyclonic flow will experience an upwelling regime within the canyon system, with dominant upwelling located at the downstream canyon rim vertically constrained by Rossby Height.
Observational analysis of canyon bottom-moored ADCPs and vertical temperature arrays supports previous theory on submarine canyon dynamics on a continental shelf. Satellite sea surface temperature and absolute dynamic topography observations render the formation of a cold dome northeast of Taiwan coincident with this joint oceanographic survey.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities</title>
<link href="https://hdl.handle.net/1721.1/164558" rel="alternate"/>
<author>
<name>Roh, Soohyun</name>
</author>
<id>https://hdl.handle.net/1721.1/164558</id>
<updated>2026-01-21T04:08:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities
Roh, Soohyun
Pay differences between organizations are a key source of wage inequality. I propose a novel account of these differences by starting from the consumers that these businesses serve. Firms that serve high-income consumers specialize jobs into higher-paying and higher-skilled positions focused on quality, while those that serve lower-income consumers emphasize cost minimization by requiring workers to perform a wider range of general tasks. Matching consumer foot traffic data and establishment-level wage records, I find that establishments serving higher-income consumers pay their workers more. This effect holds comparing among establishments in the same neighborhoods and industries. Longitudinally, establishments increase wages when they shift toward higher-income customers. Analysis of online job postings further reveals that jobs at higher-income-serving firms involve a narrower set of tasks that command higher market value. These findings show how consumer markets shape firms’ internal job design and contribute to pay inequality across organizations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.</title>
<link href="https://hdl.handle.net/1721.1/164557" rel="alternate"/>
<author>
<name>Mulcahy, Robby L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164557</id>
<updated>2026-01-21T04:07:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.
Mulcahy, Robby L.
The United States federal government is the largest property owner in the country, with more than 370 million square feet of real estate under its control. Much of this portfolio is outdated, underutilized, and located in the urban cores of American cities. Nowhere is this more evident—or more consequential—than in Washington, D.C., where the federal government controls approximately 27% of the office market. As federal agencies adopt hybrid work models, and as the operational needs of government evolve, the existing real estate footprint has become increasingly inefficient, expensive, and misaligned with civic and market realities. This thesis investigates the opportunity to rethink federal land ownership and management as a catalyst for urban regeneration, civic stewardship, and housing production.&#13;
&#13;
Using the James V. Forrestal Building as a focal case study, the research examines the historical, policy, and spatial dynamics that have led to the current moment of reckoning. Located on Independence Avenue SW, straddling 10th Street between the National Mall and the Wharf, Forrestal is emblematic of the postwar federal design ethos: monumental, inward-facing, and hostile to street life. Once a symbol of bureaucratic permanence, the building now stands as a physical and symbolic barrier to urban connectivity and civic vitality. The case of Forrestal is used to explore broader questions: How can the federal government dispose of surplus property more effectively? What policy tools exist—or are needed—to unlock value and enable redevelopment? And what role should cities play in shaping the outcomes of federal land disposition?&#13;
&#13;
The thesis employs a mixed-methods approach that includes policy analysis, stakeholder interviews, precedent case studies, and spatial analysis of Southwest D.C. The work identifies a range of obstacles to effective disposition, including Title V of the McKinney-Vento Homeless Assistance Act, opaque OMB budget scoring rules, jurisdictional fragmentation, and the absence of a coordinating authority across federal agencies. It also identifies key lessons from successful projects such as The Yards, Walter Reed, and the Volpe Center, where thoughtful structuring and strong federal-local partnerships enabled transformative redevelopment of surplus land.&#13;
&#13;
The thesis concludes with ten detailed recommendations for reform, including reauthorization of the Federal Assets Sale and Transfer Act (FASTA), modernization of Title V and OMB scoring, the creation of Federal Redevelopment Zones, and the prioritization of housing, civic infrastructure, and design quality in disposition strategy. It argues that the federal government must shift from a passive landlord to an active steward of public land—one that collaborates with cities, integrates public benefit, and reflects democratic values through the built environment.&#13;
&#13;
In this moment of shifting federal needs, declining office demand, and urban transformation, the question is not whether federal real estate reform is needed—it is whether we will seize the opportunity. The fate of buildings like Forrestal will shape not only the skyline of Washington, D.C., but also the federal government’s legacy in America’s cities for generations to come.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyst Incentives</title>
<link href="https://hdl.handle.net/1721.1/164556" rel="alternate"/>
<author>
<name>Green, Brice</name>
</author>
<id>https://hdl.handle.net/1721.1/164556</id>
<updated>2026-01-21T04:08:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Analyst Incentives
Green, Brice
Analyst forecasts have been shown to reflect substantial behavioral biases and predict a number of macroeconomic phenomena. While we typically treat reported forecasts as statistical expectations, under uncertainty the reported point estimate will be sensitive to the payoff structure facing the forecaster. Using data on careers from LinkedIn, I describe the incentive structures faced by analysts, shedding light on the extent to which pay and career success are tied to performance. Further, I extend a causal estimator to identify credible counterfactual forecasts and provide tentative causal evidence of the relationship between forecast errors and promotions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Pena is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. Is his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?</title>
<link href="https://hdl.handle.net/1721.1/164555" rel="alternate"/>
<author>
<name>Chomik-Morales, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/164555</id>
<updated>2026-01-21T04:07:57Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Pena is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. Is his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?
Chomik-Morales, Jessica
This long-term narrative investigates the life and work of Dr. Eugenio Vargas-Peña, a neuropsychiatrist in Asunción, Paraguay who built a fully functional lab in his countryside home. Vargas-Peña conducts brain research independently, guided by decades of self-study, clinical practice, and an unwavering belief in the value of curiosity-driven inquiry. The piece interweaves historical context, character study, and personal narrative, using the author's own background in neuroscience and science communication to frame an inquiry into legitimacy, recognition, and alternative pathways in science. It asks: What defines a scientist today? Who gets to decide which ideas are taken seriously? And what are the consequences, creative or catastrophic, of working outside institutional boundaries? Through the lens of one man's eccentric yet earnest intellectual journey, this thesis invites broader reflection on the pressures shaping contemporary research and the enduring romance of unorthodox scholarship.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Systems for Unsupervised Time Series Anomaly&#13;
Detection</title>
<link href="https://hdl.handle.net/1721.1/164554" rel="alternate"/>
<author>
<name>Alnegheimish, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/164554</id>
<updated>2026-01-21T03:23:36Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Machine Learning Systems for Unsupervised Time Series Anomaly&#13;
Detection
Alnegheimish, Sarah
Modern assets – from launched satellites to electric vehicles – output dense, multivariate time series data that must be monitored for deviations from “normal” behavior. This monitoring task is referred to as time series anomaly detection. The current state of the industry still depends on fixed or heuristic thresholds that often drown operators in false alarms, and can miss the subtle, context-dependent faults that matter most. This thesis addresses unsupervised time series anomaly detection as an end-to-end problem, asking how we can learn, evaluate, and deploy models that judiciously flag anomalies while remaining intuitive to the end user.&#13;
This thesis provides contributions in the form of both algorithms and systems. First, it introduces three models that enlarge the design space of unsupervised time series anomaly detection: TadGAN, which leverages adversarial reconstruction; AER, which unifies predictive and reconstructive objectives in a single hybrid score; and MixedLSTM, which explicitly incorporates interdependencies to improve anomaly detection in multivariate time series. We propose two range-based evaluation metrics that quantify detection quality over temporal intervals. Second, it presents our system Orion, which abstracts anomaly detection pipelines as directed acyclic graphs of reusable primitives, providing user-friendly APIs and enabling interactive visual inspection. Building on this infrastructure, OrionBench performs periodic, fully reproducible benchmarks, producing leaderboards that align research innovations with the needs of end users. Third, the thesis explores a new paradigm – foundation models for unsupervised time series anomaly detection – by formulating SigLLM, which employs large language models and time series foundation models for zero-shot anomaly detection via prompting and forecasting. This paradigm indicates a promising path to developing scalable models for anomaly detection. Finally, beyond evaluating our systems on publicly available datasets, we provide extensive experiments on two industrial case studies that demonstrate improved detection accuracy and practical usability of our system.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techniques for Reliability and Robustness in Integrated Electronic and Photonic Systems</title>
<link href="https://hdl.handle.net/1721.1/164553" rel="alternate"/>
<author>
<name>Chakraborty, Uttara</name>
</author>
<id>https://hdl.handle.net/1721.1/164553</id>
<updated>2026-01-21T03:23:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Techniques for Reliability and Robustness in Integrated Electronic and Photonic Systems
Chakraborty, Uttara
Reliability and robustness are key concerns in the development of novel electronic and photonic materials, devices, and systems. This thesis presents statistical and machine learning techniques for reliability analysis of heterogeneously-integrated systems, extraction of variations from photonic test structure measurements, making smart decisions about test configurations in the face of time and resource constraints, and robust design of photonic components. To estimate reliability model parameters from lifetime datasets where multiple underlying failure mechanisms are present, a differential evolution framework and a bound-constrained expectation maximization algorithm are developed; both these approaches significantly outperform the gradient-based L-BFGS-B algorithm. New schemes for strategic failure analysis on a subset of the failed units are presented, both for detecting the presence of a second failure mechanism and for improving two-mechanism reliability models. A regression-based protocol is also presented for optimally selecting reliability test conditions to verify physical failure mechanism models. A maximum-likelihood-estimation-based approach is demonstrated for the simultaneous extraction of waveguide index and thickness variations using integrated photonic directional couplers and Mach-Zehnder interferometers. Schemes are proposed for optimal selection of cut-back test structures and for propagation loss estimation with a Bayesian prior distribution for fiber-coupling error. Finally, a robust Bayesian optimization algorithm using a new tunable acquisition function is presented for photonic component design. The methods developed in this thesis are expected to be broadly applicable to a wide variety of electronic and photonic devices and systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecularly Thin Polyaramid Nanomechanical Resonators</title>
<link href="https://hdl.handle.net/1721.1/164552" rel="alternate"/>
<author>
<name>Gress, Hagen</name>
</author>
<author>
<name>Ritt, Cody L</name>
</author>
<author>
<name>Shomakhov, Inal</name>
</author>
<author>
<name>Altmisdort, Kaan</name>
</author>
<author>
<name>Quien, Michelle</name>
</author>
<author>
<name>Wei, Zitang</name>
</author>
<author>
<name>Lawall, John R</name>
</author>
<author>
<name>Boddeti, Narasimha</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<author>
<name>Bunch, J Scott</name>
</author>
<author>
<name>Ekinci, Kamil L</name>
</author>
<id>https://hdl.handle.net/1721.1/164552</id>
<updated>2026-01-17T06:32:32Z</updated>
<published>2025-12-03T00:00:00Z</published>
<summary type="text">Molecularly Thin Polyaramid Nanomechanical Resonators
Gress, Hagen; Ritt, Cody L; Shomakhov, Inal; Altmisdort, Kaan; Quien, Michelle; Wei, Zitang; Lawall, John R; Boddeti, Narasimha; Strano, Michael S; Bunch, J Scott; Ekinci, Kamil L
Two-dimensional polyaramids exhibit strong hydrogen bonding to create molecularly thin nanosheets analogous to graphene. Here, we report the first nanomechanical resonators made out of a two-dimensional polyaramid, 2DPA-1, with thicknesses as small as 8 nm. To fabricate these molecular-scale resonators, we transferred nanofilms of 2DPA-1 onto chips with previously etched arrays of circular microwells. We then characterized the thermal resonances of these resonators under different conditions. When there is no residual gas inside the 2DPA-1-covered microwells, the eigenfrequencies are well-described by a tensioned plate theory, providing the Young's modulus and tension of the 2DPA-1 nanofilms. With gas present, the nanofilms bulge up and mechanical resonances are modified due to the adhesion, bulging and slack present in the system. The fabrication and mechanical characterization of these first 2DPA-1 nanomechanical resonators represent a convincing path toward molecular-scale polymeric NEMS with high mechanical strength, low density, and synthetic processability.
</summary>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interferometric Deflection Analysis of Suspended 2D Polyaramid Thin Films</title>
<link href="https://hdl.handle.net/1721.1/164551" rel="alternate"/>
<author>
<name>Quien, Michelle</name>
</author>
<author>
<name>Ritt, Cody L</name>
</author>
<author>
<name>Garimella, Sanjay S</name>
</author>
<author>
<name>Gress, Hagen</name>
</author>
<author>
<name>Ekinci, Kamil L</name>
</author>
<author>
<name>Bunch, Joseph Scott</name>
</author>
<author>
<name>Strano, Michael S</name>
</author>
<id>https://hdl.handle.net/1721.1/164551</id>
<updated>2026-01-17T06:32:30Z</updated>
<published>2025-12-05T00:00:00Z</published>
<summary type="text">Interferometric Deflection Analysis of Suspended 2D Polyaramid Thin Films
Quien, Michelle; Ritt, Cody L; Garimella, Sanjay S; Gress, Hagen; Ekinci, Kamil L; Bunch, Joseph Scott; Strano, Michael S
The 2D nanofilm bulge test, which uses an Atomic Force Microscope (AFM) to measure the deflection of a suspended film under various conditions, has emerged as an important measurement platform for understanding mechanical, barrier, and permeability properties of 2D materials as thickness approaches the angstrom scale. The problem considered in this work is the restriction such bulge analyses inherit from the AFM, which limits dynamic measurements under high pressure, high temperature, and chemically corrosive conditions. In this work, a technique is developed for measuring nanofilm deflection using only visible light interferometry. Both theoretical and semi-empirical models are applied to translate multicolor interference patterns from broadband excitation into estimates of nanofilm deflection, allowing nanoscale precision in most cases. The technique and algorithm advanced in this work allow the use of widespread optical microscopy to extend the study of these important 2D nanofilm systems to more relevant conditions.
</summary>
<dc:date>2025-12-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum One-Time Programs, Revisited</title>
<link href="https://hdl.handle.net/1721.1/164550" rel="alternate"/>
<author>
<name>Gupte, Aparna</name>
</author>
<author>
<name>Liu, Jiahui</name>
</author>
<author>
<name>Raizes, Justin</name>
</author>
<author>
<name>Roberts, Bhaskar</name>
</author>
<author>
<name>Vaikuntanathan, Vinod</name>
</author>
<id>https://hdl.handle.net/1721.1/164550</id>
<updated>2026-01-17T06:31:55Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Quantum One-Time Programs, Revisited
Gupte, Aparna; Liu, Jiahui; Raizes, Justin; Roberts, Bhaskar; Vaikuntanathan, Vinod
One-time programs (Goldwasser, Kalai and Rothblum, CRYPTO 2008) are programs that can be run on any single input of a user’s choice, but not on a second input. Classically, they are unachievable without trusted hardware, but the destructive nature of quantum measurements seems to provide an alternate path to constructing them. Unfortunately, Broadbent, Gutoski and Stebila (CRYPTO 2013) showed that even with quantum techniques, a strong notion of one-time programs, similar to ideal obfuscation, cannot be achieved for any non-trivial quantum function. On the positive side, Ben-David and Sattath (Quantum, 2023) showed how to construct a quantum one-time program for a certain (probabilistic) digital signature scheme, under a weaker notion of one-time program security. There is a vast gap between achievable and provably impossible notions of one-time program security, and it is unclear which functionalities are one-time programmable and which are not under the achievable notions of security.
In this work, we present new, meaningful, yet achievable definitions of one-time program security for probabilistic classical functions. We show how to construct one-time programs satisfying these definitions for all functions in the classical oracle model and for constrained pseudorandom functions in the plain model. Finally, we examine the limits of these notions: we show a class of functions which cannot be one-time programmed in the plain model, as well as a class of functions which appears highly random given a single query, but whose quantum one-time program leaks the entire function even in the oracle model.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning the Closest Product State</title>
<link href="https://hdl.handle.net/1721.1/164549" rel="alternate"/>
<author>
<name>Bakshi, Ainesh</name>
</author>
<author>
<name>Bostanci, John</name>
</author>
<author>
<name>Kretschmer, William</name>
</author>
<author>
<name>Landau, Zeph</name>
</author>
<author>
<name>Li, Jerry</name>
</author>
<author>
<name>Liu, Allen</name>
</author>
<author>
<name>O'Donnell, Ryan</name>
</author>
<author>
<name>Tang, Ewin</name>
</author>
<id>https://hdl.handle.net/1721.1/164549</id>
<updated>2026-01-17T06:31:54Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Learning the Closest Product State
Bakshi, Ainesh; Bostanci, John; Kretschmer, William; Landau, Zeph; Li, Jerry; Liu, Allen; O'Donnell, Ryan; Tang, Ewin
We study the problem of finding a product state with optimal fidelity to an unknown n-qubit quantum state ρ, given copies of ρ. This is a basic instance of a fundamental question in quantum learning: is it possible to efficiently learn a simple approximation to an arbitrary state? We give an algorithm which finds a product state with fidelity ε-close to optimal, using N = n^poly(1/ε) copies of ρ and poly(N) classical overhead. We further show that estimating the optimal fidelity is NP-hard for error ε = 1/poly(n), showing that the error dependence cannot be significantly improved. For our algorithm, we build a carefully-defined cover over candidate product states, qubit by qubit, and then demonstrate that extending the cover can be reduced to approximate constrained polynomial optimization. For our proof of hardness, we give a formal reduction from polynomial optimization to finding the closest product state. Together, these results demonstrate a fundamental connection between these two seemingly unrelated questions. Building on our general approach, we also develop more efficient algorithms in three simpler settings: when the optimal fidelity exceeds 5/6; when we restrict ourselves to a discrete class of product states; and when we are allowed to output a matrix product state.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the T^(2/3) Barrier for Sequential Calibration</title>
<link href="https://hdl.handle.net/1721.1/164548" rel="alternate"/>
<author>
<name>Dagan, Yuval</name>
</author>
<author>
<name>Daskalakis, Constantinos</name>
</author>
<author>
<name>Fishelson, Maxwell</name>
</author>
<author>
<name>Golowich, Noah</name>
</author>
<author>
<name>Kleinberg, Robert</name>
</author>
<author>
<name>Okoroafor, Princewill</name>
</author>
<id>https://hdl.handle.net/1721.1/164548</id>
<updated>2026-01-17T06:31:46Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Breaking the T^(2/3) Barrier for Sequential Calibration
Dagan, Yuval; Daskalakis, Constantinos; Fishelson, Maxwell; Golowich, Noah; Kleinberg, Robert; Okoroafor, Princewill
A set of probabilistic forecasts is calibrated if each prediction of the forecaster closely approximates the empirical distribution of outcomes on the subset of timesteps where that prediction was made. We study the fundamental problem of online calibrated forecasting of binary sequences, which was initially studied by Foster and Vohra. They derived an algorithm with O(T^(2/3)) calibration error after T time steps, and showed a lower bound of Ω(T^(1/2)). These bounds remained stagnant for two decades, until Qiao and Valiant improved the lower bound to Ω(T^(0.528)) by introducing a combinatorial game called sign preservation and showing that lower bounds for this game imply lower bounds for calibration.
In this paper, we give the first improvement to the O(T^(2/3)) upper bound on calibration error of Foster and Vohra.
We do this by introducing a variant of Qiao and Valiant’s game that we call sign preservation with reuse (SPR). We prove that the relationship between SPR and calibrated forecasting is bidirectional: not only do lower bounds for SPR translate into lower bounds for calibration, but algorithms for SPR also translate into new algorithms for calibrated forecasting. We then give an improved upper bound for the SPR game, which implies, via our equivalence, a forecasting algorithm with calibration error O(T^(2/3 − ε)) for some ε &gt; 0, improving Foster and Vohra’s upper bound for the first time. Using similar ideas, we then prove a slightly stronger lower bound than that of Qiao and Valiant, namely Ω(T^(0.54389)). Our lower bound is obtained by an oblivious adversary, marking the first ω(T^(1/2)) calibration lower bound for oblivious adversaries.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>PrEP attitudes, willingness, and preferences among men incarcerated in jail in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/164547" rel="alternate"/>
<author>
<name>Al Abosy, Jude</name>
</author>
<author>
<name>Kalavacherla, Sruthi</name>
</author>
<author>
<name>Koutoujian, Peter J.</name>
</author>
<author>
<name>Siddiqi, Kashif</name>
</author>
<author>
<name>Senst, Thomas</name>
</author>
<author>
<name>Caro, Jose</name>
</author>
<author>
<name>Grossman, Anna</name>
</author>
<author>
<name>Dong, Kimberly R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164547</id>
<updated>2026-01-17T06:32:28Z</updated>
<published>2025-11-26T00:00:00Z</published>
<summary type="text">PrEP attitudes, willingness, and preferences among men incarcerated in jail in Massachusetts
Al Abosy, Jude; Kalavacherla, Sruthi; Koutoujian, Peter J.; Siddiqi, Kashif; Senst, Thomas; Caro, Jose; Grossman, Anna; Dong, Kimberly R.
Background: People who inject drugs (PWID) are both disproportionately incarcerated and affected by HIV infection. Systemic inequities perpetuate the cyclic nature of injection drug use (IDU) and incarceration, and both IDU and incarceration are linked to higher rates of HIV infection. Pre-exposure prophylaxis (PrEP) is highly effective in HIV prevention and is currently available as a daily oral pill. Longer-acting PrEP options, such as injectables and implants, are also in development to improve accessibility and adherence. Despite these advancements, PrEP uptake remains low among PWID and individuals recently released from jail, and there is limited literature exploring the preferences for PrEP uptake within this population.
Methods: We conducted qualitative interviews using a semi-structured interview guide with 20 male participants (19 incarcerated in a Massachusetts jail and 1 recently released) to assess perceived HIV risk, knowledge of PrEP, barriers to PrEP uptake, and preferences for PrEP modality and frequency. The data were analyzed using a directed content analysis approach.
Results: Most participants were aware of their HIV risk but were largely unaware of PrEP and had never been educated about PrEP by a healthcare provider. Participants cited a lack of access to healthcare, stigma around HIV infection, and feasibility as barriers to uptake. While participants expressed interest in longer-acting PrEP, most preferred the oral pill due to distrust of the safety and efficacy of injectables and implants, countering the assumption that modality changes alone can improve low PrEP uptake.
Conclusions: Our findings underscore the urgent need for targeted education and interventions to improve HIV prevention in vulnerable populations impacted by incarceration.
While long-acting injectables have been touted as a way to address barriers to accessing healthcare among this population, skepticism about their efficacy may hinder these efforts. Further research into willingness to take up PrEP and modality preferences among this population is needed to meet their needs.
</summary>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for a new scalar resonance decaying to a Higgs boson and another new scalar particle in the final state with two bottom quarks and two photons in proton-proton collisions at $$\sqrt{s}=13$$ TeV</title>
<link href="https://hdl.handle.net/1721.1/164546" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Giordano, C.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Matthewman, M.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<id>https://hdl.handle.net/1721.1/164546</id>
<updated>2026-01-17T06:32:26Z</updated>
<published>2025-12-23T00:00:00Z</published>
<summary type="text">Search for a new scalar resonance decaying to a Higgs boson and another new scalar particle in the final state with two bottom quarks and two photons in proton-proton collisions at $$\sqrt{s}=13$$ TeV
Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Dragicevic, M.; Giordano, C.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Matthewman, M.; Mikulec, I.
A search is presented for a new scalar resonance, X, decaying to a standard model Higgs boson and another new scalar particle, Y, in the final state where the Higgs boson decays to a $$\text{b}\overline{\text{b}}$$ pair, while the Y particle decays to a pair of photons. The search is performed in the mass range 240–1000 GeV for the resonance X, and in the mass range 70–800 GeV for the particle Y, using proton-proton collision data collected by the CMS experiment at $$\sqrt{s}=13$$ TeV, corresponding to an integrated luminosity of 132 fb⁻¹. In general, the data are found to be compatible with the standard model expectation. Observed (expected) upper limits at 95% confidence level on the product of the production cross section and the relevant branching fraction are extracted for the X → YH process, and are found to be within the range of 0.05–2.69 (0.08–1.94) fb, depending on m_X and m_Y. The most significant deviation from the background-only hypothesis is observed for X and Y masses of 300 and 77 GeV, respectively, with a local (global) significance of 3.33 (0.65) standard deviations.
</summary>
<dc:date>2025-12-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Public Service Provision and the Virtuous Circle: Evidence from Malawi</title>
<link href="https://hdl.handle.net/1721.1/164545" rel="alternate"/>
<author>
<name>Chen, Nuole</name>
</author>
<author>
<name>Grady, Christopher</name>
</author>
<author>
<name>Dulani, Boniface</name>
</author>
<author>
<name>Masumbu, Mwayi</name>
</author>
<author>
<name>Chiona, Busta</name>
</author>
<author>
<name>Bowers, Jake</name>
</author>
<author>
<name>Winters, Matthew S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164545</id>
<updated>2026-01-17T06:32:24Z</updated>
<published>2025-12-29T00:00:00Z</published>
<summary type="text">Public Service Provision and the Virtuous Circle: Evidence from Malawi
Chen, Nuole; Grady, Christopher; Dulani, Boniface; Masumbu, Mwayi; Chiona, Busta; Bowers, Jake; Winters, Matthew S.
Many governments struggle to obtain the resources they need to govern effectively. In the virtuous circle model of state development, tax revenue allows governments to provide public goods and services to citizens, and citizens comply with taxation when governments provide sufficient levels of goods and services. The model, however, also suggests a vicious version of the circle, where citizens do not pay taxes, governments lack revenue to provide public goods and services, and citizens therefore continue to not pay taxes. Under this suboptimal equilibrium, governments cannot deliver on their governing and service provision mandates. We study whether a shock to public service provision in a major city in Malawi can induce citizens to pay taxes, thereby shifting the relationship between the city and its citizens from a vicious circle to a virtuous circle. With a difference-in-differences-style analysis, we show that households exposed to new government-provided waste collection expressed more trust in and better perceptions of the local government. Most importantly, these households were more likely to make tax payments. We find that this increase in tax payments largely came from people paying more of what they owed rather than from new taxpayers entering the rolls.
</summary>
<dc:date>2025-12-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>NonlinearSolve.jl: High-Performance and Robust Solvers for Systems of Nonlinear Equations in Julia</title>
<link href="https://hdl.handle.net/1721.1/164544" rel="alternate"/>
<author>
<name>Pal, Avik</name>
</author>
<author>
<name>Holtorf, Flemming</name>
</author>
<author>
<name>Larsson, Axel</name>
</author>
<author>
<name>Loman, Torkel</name>
</author>
<author>
<name>Rajput, Utkarsh</name>
</author>
<author>
<name>Schäfer, Frank</name>
</author>
<author>
<name>Qu, Qingyu</name>
</author>
<author>
<name>Edelman, Alan</name>
</author>
<author>
<name>Rackauckas, Chris</name>
</author>
<id>https://hdl.handle.net/1721.1/164544</id>
<updated>2026-01-17T06:32:22Z</updated>
<published>2025-12-01T00:00:00Z</published>
<summary type="text">NonlinearSolve.jl: High-Performance and Robust Solvers for Systems of Nonlinear Equations in Julia
Pal, Avik; Holtorf, Flemming; Larsson, Axel; Loman, Torkel; Rajput, Utkarsh; Schäfer, Frank; Qu, Qingyu; Edelman, Alan; Rackauckas, Chris
Efficiently solving nonlinear equations underpins numerous scientific and engineering disciplines, yet scaling these solutions to challenging system models remains difficult. This paper presents NonlinearSolve.jl, a suite of high-performance open-source nonlinear equation solvers implemented natively in the Julia programming language. NonlinearSolve.jl distinguishes itself by offering a unified API that accommodates a diverse range of solver specifications alongside features such as automatic algorithm selection based on runtime analysis, support for static array kernels for improved GPU computation on smaller problems, and the utilization of sparse automatic differentiation and Jacobian-free Krylov methods for large-scale problem-solving. Through rigorous comparison with established tools such as PETSc SNES, Sundials KINSOL, and MINPACK, NonlinearSolve.jl demonstrates robustness and efficiency, achieving significant advancements in solving nonlinear equations while being implemented in a high-level programming language. The capabilities of NonlinearSolve.jl unlock new potentials in modeling and simulation across various domains, making it a valuable addition to the computational toolkit of researchers and practitioners alike.
</summary>
<dc:date>2025-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Property Testing with Online Adversaries</title>
<link href="https://hdl.handle.net/1721.1/164543" rel="alternate"/>
<author>
<name>Ben Eliezer, Omri</name>
</author>
<author>
<name>Kelman, Esty</name>
</author>
<author>
<name>Meir, Uri</name>
</author>
<author>
<name>Raskhodnikova, Sofya</name>
</author>
<id>https://hdl.handle.net/1721.1/164543</id>
<updated>2026-01-17T06:32:20Z</updated>
<published>2025-12-02T00:00:00Z</published>
<summary type="text">Property Testing with Online Adversaries
Ben Eliezer, Omri; Kelman, Esty; Meir, Uri; Raskhodnikova, Sofya
The online manipulation-resilient testing model, proposed by Kalemaj, Raskhodnikova and Varma (Theory of Computing 2023), studies property testing in situations where access to the input degrades continuously and adversarially. Our main contributions are as follows:
- An extension of the model, introducing batch queries, where multiple queries are made and answered between each round of manipulation, and a fractional manipulation rate, where the adversary makes less than one manipulation per round.
- New optimal testers for linearity of Boolean functions in the original online and offline models.
- A new lower bound for low-degree testing of Boolean functions in the original model, which can be overcome by an algorithm using batch queries.
- Efficient testers for local properties of sequences when the manipulation rate is fractional. Specifically, for sortedness, we show a sharp transition from optimal query complexity to the impossibility of testing, depending on the manipulation rate.
</summary>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineered yeast tolerance enables efficient production from toxified lignocellulosic feedstocks</title>
<link href="https://hdl.handle.net/1721.1/164542" rel="alternate"/>
<author>
<name>Lam, Felix H</name>
</author>
<author>
<name>Turanlı-Yıldız, Burcu</name>
</author>
<author>
<name>Liu, Dany</name>
</author>
<author>
<name>Resch, Michael G</name>
</author>
<author>
<name>Fink, Gerald R</name>
</author>
<author>
<name>Stephanopoulos, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/164542</id>
<updated>2026-03-08T03:39:24Z</updated>
<published>2021-06-25T00:00:00Z</published>
<summary type="text">Engineered yeast tolerance enables efficient production from toxified lignocellulosic feedstocks
Lam, Felix H; Turanlı-Yıldız, Burcu; Liu, Dany; Resch, Michael G; Fink, Gerald R; Stephanopoulos, Gregory
Lignocellulosic biomass remains unharnessed for the production of renewable fuels and chemicals due to challenges in deconstruction and the toxicity its hydrolysates pose to fermentation microorganisms. Here, we show in Saccharomyces cerevisiae that engineered aldehyde reduction and elevated extracellular potassium and pH are sufficient to enable near-parity production between inhibitor-laden and inhibitor-free feedstocks. By specifically targeting the universal hydrolysate inhibitors, a single strain is enhanced to tolerate a broad diversity of highly toxified genuine feedstocks and consistently achieve industrial-scale titers (cellulosic ethanol of &gt;100 grams per liter when toxified). Furthermore, a functionally orthogonal, lightweight design enables seamless transferability to existing metabolically engineered chassis strains: We endow full, multifeedstock tolerance on a xylose-consuming strain and one producing the biodegradable plastics precursor lactic acid. The demonstration of “drop-in” hydrolysate competence enables the potential of cost-effective, at-scale biomass utilization for cellulosic fuel and nonfuel products alike.
</summary>
<dc:date>2021-06-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Removal of lycopene substrate inhibition enables high carotenoid productivity in Yarrowia lipolytica</title>
<link href="https://hdl.handle.net/1721.1/164541" rel="alternate"/>
<author>
<name>Ma, Yongshuo</name>
</author>
<author>
<name>Liu, Nian</name>
</author>
<author>
<name>Greisen, Per</name>
</author>
<author>
<name>Li, Jingbo</name>
</author>
<author>
<name>Qiao, Kangjian</name>
</author>
<author>
<name>Huang, Sanwen</name>
</author>
<author>
<name>Stephanopoulos, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/164541</id>
<updated>2026-03-08T03:39:23Z</updated>
<published>2022-01-31T00:00:00Z</published>
<summary type="text">Removal of lycopene substrate inhibition enables high carotenoid productivity in Yarrowia lipolytica
Ma, Yongshuo; Liu, Nian; Greisen, Per; Li, Jingbo; Qiao, Kangjian; Huang, Sanwen; Stephanopoulos, Gregory
Substrate inhibition of enzymes can be a major obstacle to the production of valuable chemicals in engineered microorganisms. Here, we show substrate inhibition of lycopene cyclase as the main limitation in carotenoid biosynthesis in Yarrowia lipolytica. To overcome this bottleneck, we exploit two independent approaches. Structure-guided protein engineering yields a variant, Y27R, characterized by complete loss of substrate inhibition without reduction of enzymatic activity. Alternatively, establishing a geranylgeranyl pyrophosphate synthase-mediated flux flow restrictor also prevents the onset of substrate inhibition by diverting metabolic flux away from the inhibitory metabolite while maintaining sufficient flux towards product formation. Both approaches result in high levels of near-exclusive β-carotene production. Ultimately, we construct strains capable of producing 39.5 g/L β-carotene at a productivity of 0.165 g/L/h in bioreactor fermentations (a 1441-fold improvement over the initial strain). Our findings provide effective approaches for removing substrate inhibition in engineering pathways for efficient synthesis of natural products.
</summary>
<dc:date>2022-01-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Isotope tracing in health and disease</title>
<link href="https://hdl.handle.net/1721.1/164540" rel="alternate"/>
<author>
<name>Dong, Wentao</name>
</author>
<author>
<name>Rawat, Eshaan S</name>
</author>
<author>
<name>Stephanopoulos, Gregory</name>
</author>
<author>
<name>Abu-Remaileh, Monther</name>
</author>
<id>https://hdl.handle.net/1721.1/164540</id>
<updated>2026-03-08T03:39:26Z</updated>
<published>2022-08-01T00:00:00Z</published>
<summary type="text">Isotope tracing in health and disease
Dong, Wentao; Rawat, Eshaan S; Stephanopoulos, Gregory; Abu-Remaileh, Monther
Biochemical characterization of metabolism provides molecular insights for understanding biology in health and disease. Over the past decades, metabolic perturbations have been implicated in cancer, neurodegeneration, and diabetes, among others. Isotope tracing is a technique that allows tracking of labeled atoms within metabolites through biochemical reactions. This technique has become an integral component of contemporary metabolic research. Isotope tracing measures substrate contribution to downstream metabolites and indicates its utilization in cellular metabolic networks. In addition, isotopic labeling data are necessary for quantitative metabolic flux analysis. Here, we review recent work utilizing isotope tracing to study health and disease, and highlight its application to interrogate subcellular, intercellular, and in vivo metabolism. We further discuss the current challenges and opportunities to expand the utility of isotope tracing to new research areas.
</summary>
<dc:date>2022-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering a universal and efficient platform for terpenoid synthesis in yeast</title>
<link href="https://hdl.handle.net/1721.1/164539" rel="alternate"/>
<author>
<name>Ma, Yongshuo</name>
</author>
<author>
<name>Zu, Yuexuan</name>
</author>
<author>
<name>Huang, Sanwen</name>
</author>
<author>
<name>Stephanopoulos, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/164539</id>
<updated>2026-03-08T03:39:25Z</updated>
<published>2022-12-28T00:00:00Z</published>
<summary type="text">Engineering a universal and efficient platform for terpenoid synthesis in yeast
Ma, Yongshuo; Zu, Yuexuan; Huang, Sanwen; Stephanopoulos, Gregory
Engineering microbes for the production of valuable natural products is often hindered by the regulation of native competing metabolic networks in the host. This is particularly evident in the case of terpenoid synthesis in yeast, where the canonical terpenoid precursors are tightly coupled to the biosynthesis of sterols essential for yeast viability. One way to circumvent this limitation is by engineering product pathways less connected to the host native metabolism. Here, we introduce a two-step isopentenol utilization pathway (IUP) in Saccharomyces cerevisiae to augment the native mevalonate pathway by providing a shortcut to the synthesis of the common terpenoid precursors, isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP). As such, the IUP was capable of elevating the IPP/DMAPP pool by 147-fold compared with the native pathway. We further demonstrate that cofeeding isoprenol and prenol enhances geranyl diphosphate (GPP) content for monoterpene biosynthesis. More importantly, we established a synthetic three-step route for efficient synthesis of the di- and tetraterpene precursor geranylgeranyl diphosphate (GGPP), circumventing the competition with farnesyl diphosphate (FPP) for sterol biosynthesis and elevating the GGPP level by 374-fold. We combine these IUP-supported precursor-forming platforms with downstream terpene synthases to harness their potential and improve the production of industrially relevant terpenoids by several fold. Our exploration provides a universal and effective platform for supporting terpenoid synthesis in yeast.
</summary>
<dc:date>2022-12-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oscillatory control of cortical space as a computational dimension</title>
<link href="https://hdl.handle.net/1721.1/164538" rel="alternate"/>
<author>
<name>Chen, Zhen</name>
</author>
<author>
<name>Brincat, Scott L.</name>
</author>
<author>
<name>Lundqvist, Mikael</name>
</author>
<author>
<name>Loonis, Roman F.</name>
</author>
<author>
<name>Warden, Melissa R.</name>
</author>
<author>
<name>Miller, Earl K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164538</id>
<updated>2026-01-16T03:07:45Z</updated>
<published>2025-12-22T00:00:00Z</published>
<summary type="text">Oscillatory control of cortical space as a computational dimension
Chen, Zhen; Brincat, Scott L.; Lundqvist, Mikael; Loonis, Roman F.; Warden, Melissa R.; Miller, Earl K.
Flexible cognition depends on the ability to represent and apply relevant information to the current task at hand. This allows the brain to interpret sensory input and guide behavior in a context-dependent manner. Recent work has proposed “spatial computing” as a mechanism for this flexibility, suggesting that task-related signals organize information processing through spatial patterns of oscillatory activity across the cortical surface. These patterns are proposed to act as “inhibitory stencils” that constrain where sensory-related information (the “content” of cognition) can be expressed in spiking activity. Here, we provide a comprehensive empirical test of spatial computing using multi-electrode recordings from the lateral prefrontal cortex in non-human primates performing a range of cognitive tasks (object working memory, sequence working memory, and categorization). We found that alpha/beta oscillations encoded task-related information, were organized into spatial patterns that changed with task conditions, and inversely correlated with the spatial expression of sensory-related spiking activity. Furthermore, we found that alpha/beta oscillations reflected misattributions of task conditions and correlated with subjects’ trial-by-trial decisions. These findings validate core predictions of spatial computing, suggesting that oscillatory dynamics not only gate information in time but also shape where in the cortex cognitive content is represented. This framework offers a unifying principle for understanding how the brain flexibly coordinates cognition through structured population dynamics.
</summary>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Working memory readout varies with frontal theta rhythms</title>
<link href="https://hdl.handle.net/1721.1/164537" rel="alternate"/>
<author>
<name>Han, Hio-Been</name>
</author>
<author>
<name>Brincat, Scott L.</name>
</author>
<author>
<name>Buschman, Timothy J.</name>
</author>
<author>
<name>Miller, Earl K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164537</id>
<updated>2026-01-16T03:07:46Z</updated>
<published>2026-01-07T00:00:00Z</published>
<summary type="text">Working memory readout varies with frontal theta rhythms
Han, Hio-Been; Brincat, Scott L.; Buschman, Timothy J.; Miller, Earl K.
Increasing evidence suggests that attention varies rhythmically, phase locked to ongoing cortical oscillations. Here, we report that the phase of theta oscillations (3–6 Hz) in the frontal eye field (FEF) is associated with the spatiotemporal variation of information readout from working memory (WM). Non-human primates were briefly shown a sample array of colored squares. A short time later, they viewed a test array and were rewarded for identifying which square changed color (the target). Behavioral performance varied systematically with theta phase at the time of test array onset, as well as with the target’s location. This is consistent with theta “scanning” across the FEF and thus visual space from top to bottom. Theta was coupled, on opposing phases, to both spiking and beta (12–20 Hz). These results could be explained by a wave of activity that moves across the FEF, modulating the readout of information from WM.
</summary>
<dc:date>2026-01-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear Ship Safety Handbook</title>
<link href="https://hdl.handle.net/1721.1/163117.2" rel="alternate"/>
<author>
<name>Valiaveedu, Anthony</name>
</author>
<author>
<name>Edmonds, Nat</name>
</author>
<author>
<name>Izurieta, Jose</name>
</author>
<id>https://hdl.handle.net/1721.1/163117.2</id>
<updated>2026-01-21T15:07:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Nuclear Ship Safety Handbook
Valiaveedu, Anthony; Edmonds, Nat; Izurieta, Jose
At present, no clear, unified public document addresses the incorporation of design safety for civilian nuclear ships. Historically, research was developed in this area as a result of political developments during the “Atoms for Peace” era. More recently, however, the only development has come from standards institutions related to Floating Nuclear Power Plants (commonly known as FLOPPS) and from the Russian Federation through its nuclear icebreaker program. This paper combines that research and those standards with operational experience from civilian maritime nuclear operations to provide unique insights into potential issues and their resolutions in the design of nuclear ships. The goal, therefore, is to provide a strong initial safety basis for the key areas that will require nuclear and maritime regulatory research and development in the coming years, in preparation for nuclear propulsion in the maritime industry. The paper is organized into chapters covering the areas in which engineers will encounter overlapping nuclear/maritime safety design decisions. Chapter 1 establishes the principles and philosophy behind the safety discussion for nuclear maritime operations and discusses key topics related to overall ship design. Chapter 2 provides design details for the reactor compartment and related considerations. Chapter 3 describes the various hazards the reactor plant should be resilient against and avenues for establishing resiliency. Chapter 4 discusses the propulsion system and key considerations when evaluating different propulsion designs. Chapter 5 provides emergency power considerations for design determinations. Chapter 6 provides an event tree analysis of the major initiating events when operating a nuclear ship. Chapter 7 outlines port operating procedures, including avenues for establishing porting requirements for nuclear ships.
Contact information: Anthony Valiaveedu (arv7@mit.edu); Nat Edmonds (edmondsn@mit.edu)
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Emotional Effects of Enhanced Interoception via Heartbeat-Synchronized Haptic Feedback</title>
<link href="https://hdl.handle.net/1721.1/164536" rel="alternate"/>
<author>
<name>Kim, Minsol</name>
</author>
<author>
<name>Whitmore, Nathan</name>
</author>
<author>
<name>Chua, Phoebe</name>
</author>
<author>
<name>Pei, Serena</name>
</author>
<author>
<name>Abdalla, Malak</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164536</id>
<updated>2026-03-08T03:38:53Z</updated>
<published>2025-12-02T00:00:00Z</published>
<summary type="text">Exploring the Emotional Effects of Enhanced Interoception via Heartbeat-Synchronized Haptic Feedback
Kim, Minsol; Whitmore, Nathan; Chua, Phoebe; Pei, Serena; Abdalla, Malak; Maes, Pattie
This study examines how amplifying real-time heartbeat feedback affects emotion regulation. Accurate heartbeat perception—a key facet of cardiac interoception—has been linked to emotional awareness and mental well-being, yet the causal role of interoceptive feedback in emotion regulation remains underexplored. We empirically tested whether making heart rate signals more perceptible through wearable haptic feedback could facilitate implicit emotion regulation during emotionally evocative experiences. Using a custom Fitbit-based system, thirty participants received real-time, sham, or no heartbeat-synchronized vibrations while viewing fear- and amusement-inducing film clips. Interoceptive accuracy, emotional disturbance, and the linguistic complexity of emotion descriptions were measured. Exploratory analyses showed that real-time feedback reduced emotional disturbance during fear stimuli, especially among individuals attentive to bodily sensations, though effects did not remain significant after multiple comparisons correction. Feedback primarily modulated arousal rather than valence and did not significantly affect heartbeat counting or linguistic complexity. As one of the first causal, empirical investigations of interoceptive feedback and emotion regulation, this work identifies boundary conditions for its effectiveness and offers insights for designing personalized, interoception-aware wearable technologies.
</summary>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>EcoLearn: Optimizing the Carbon Footprint of Federated Learning</title>
<link href="https://hdl.handle.net/1721.1/164535" rel="alternate"/>
<author>
<name>Mehboob, Talha</name>
</author>
<author>
<name>Bashir, Noman</name>
</author>
<author>
<name>Iglesias, Jesus Omaña</name>
</author>
<author>
<name>Zink, Michael</name>
</author>
<author>
<name>Irwin, David</name>
</author>
<id>https://hdl.handle.net/1721.1/164535</id>
<updated>2026-03-08T03:39:04Z</updated>
<published>2025-12-03T00:00:00Z</published>
<summary type="text">EcoLearn: Optimizing the Carbon Footprint of Federated Learning
Mehboob, Talha; Bashir, Noman; Iglesias, Jesus Omaña; Zink, Michael; Irwin, David
Federated Learning (FL) distributes machine learning (ML) training across edge devices to reduce data transfer overhead and protect data privacy. Since FL model training may span hundreds of devices and is thus resource- and energy-intensive, it has a significant carbon footprint. Importantly, since energy's carbon-intensity differs substantially (by up to 60×) across locations, training on the same device using the same amount of energy, but at different locations, can incur widely different carbon emissions. While prior work has focused on improving FL's resource- and energy-efficiency by optimizing time-to-accuracy, it implicitly assumes all energy has the same carbon intensity and thus does not optimize carbon efficiency, i.e., work done per unit of carbon emitted.&#13;
To address the problem, we design EcoLearn, which minimizes FL's carbon footprint without significantly affecting model accuracy or training time. EcoLearn achieves a favorable tradeoff by integrating carbon awareness into multiple aspects of FL training, including i) selecting clients with high data utility and low carbon, ii) provisioning more clients during the initial training rounds, and iii) mitigating stragglers by dynamically adjusting client over-provisioning based on carbon. We implement EcoLearn and its carbon-aware FL training policies in the Flower framework and show that it reduces the carbon footprint of training (by up to 10.8×) while maintaining model accuracy and training time (within ~1%) compared to state-of-the-art approaches.
SEC ’25, Arlington, VA, USA
</summary>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Democratizing Multi-Granularity Spatio-Temporal Intelligence with Multi-Agent Systems</title>
<link href="https://hdl.handle.net/1721.1/164534" rel="alternate"/>
<author>
<name>Wu, Che-Cheng</name>
</author>
<author>
<name>Huang, Syuan-Bo</name>
</author>
<author>
<name>Song, Yu-Lun</name>
</author>
<author>
<name>Lin, Po-Han</name>
</author>
<author>
<name>Lin, Michael</name>
</author>
<author>
<name>Lin, Yu-Ta</name>
</author>
<id>https://hdl.handle.net/1721.1/164534</id>
<updated>2026-03-08T03:39:00Z</updated>
<published>2025-11-02T00:00:00Z</published>
<summary type="text">Democratizing Multi-Granularity Spatio-Temporal Intelligence with Multi-Agent Systems
Wu, Che-Cheng; Huang, Syuan-Bo; Song, Yu-Lun; Lin, Po-Han; Lin, Michael; Lin, Yu-Ta
We propose a system that democratizes multi-granularity spatio-temporal analysis by integrating a Discrete Global Grid System (DGGS) data pipeline with a Multi-Agent System (MAS). Unlike existing single-agent spatial AI solutions that primarily target experts and lack support for heterogeneous data, persistent memory, and validation, our platform converts diverse datasets into standardized H3-indexed cells, enabling consistent analysis across scales. To enhance usability for non-experts, the system interactively guides users to refine queries, which are decomposed into sub-tasks managed by specialized agents for data retrieval, transformation, analysis, and visualization. Agents communicate through a decentralized framework with shared memory, supporting persistent reasoning and multi-turn dialogue. Reflection modules and human-in-the-loop validation further strengthen robustness. Demonstrated through real-world scenarios, such as analyzing the relationship between aging rate patterns and average income to inform social welfare policy in Taiwan, the system illustrates how natural language queries, combined with intuitive map- and chart-based visualizations, can support evidence-based decision-making.
GeoGenAgent ’25, Minneapolis, MN, USA
</summary>
<dc:date>2025-11-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>One String to Pull Them All: Fast Assembly of Curved Structures from Flat Auxetic Linkages</title>
<link href="https://hdl.handle.net/1721.1/164533" rel="alternate"/>
<author>
<name>Zaman, Akib</name>
</author>
<author>
<name>Aslarus, Jacqueline</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<author>
<name>Konakovic Lukovic, Mina</name>
</author>
<id>https://hdl.handle.net/1721.1/164533</id>
<updated>2026-03-08T03:39:20Z</updated>
<published>2025-12-04T00:00:00Z</published>
<summary type="text">One String to Pull Them All: Fast Assembly of Curved Structures from Flat Auxetic Linkages
Zaman, Akib; Aslarus, Jacqueline; Li, Jiaji; Mueller, Stefanie; Konakovic Lukovic, Mina
We present a computational approach for designing freeform structures that can be rapidly assembled from initially flat configurations by a single string pull. The target structures are decomposed into rigid, spatially varying quad tiles that are optimized to approximate the user-provided surface, forming a flat mechanical linkage. Our algorithm then uses a two-step method to find a physically realizable string path that controls only a subset of tiles to smoothly actuate the structure from flat to assembled configuration. We first compute the minimal subset of tiles that must be controlled by the string, considering the geometry of the structure and the interaction among the tiles. We then find a valid string path through these tiles that minimizes friction and assembles the flat linkage into the target 3D structure upon tightening a single string. The resulting designs can be easily manufactured in a flat configuration with computational fabrication techniques such as 3D printing, CNC milling, and molding; beyond simplifying manufacturing, the flat configuration facilitates storage and transportation. We validate our approach by developing a series of physical prototypes and showcasing various application case studies, ranging from medical devices and space shelters to architectural designs.
</summary>
<dc:date>2025-12-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering Folding Lines for Surface Compression</title>
<link href="https://hdl.handle.net/1721.1/164532" rel="alternate"/>
<author>
<name>Aoki, Toshiki</name>
</author>
<author>
<name>Tachi, Tomohiro</name>
</author>
<author>
<name>Konakovic Lukovic, Mina</name>
</author>
<id>https://hdl.handle.net/1721.1/164532</id>
<updated>2026-03-08T03:39:22Z</updated>
<published>2025-12-14T00:00:00Z</published>
<summary type="text">Discovering Folding Lines for Surface Compression
Aoki, Toshiki; Tachi, Tomohiro; Konakovic Lukovic, Mina
The miniaturization of shell structures presents a versatile and complex challenge, bridging geometry with diverse practical applications. In this paper, we introduce a novel approach for computing origami crease patterns to compress arbitrary 3D shell objects. First, we employ an adapted Material Point Method (MPM) to simulate the compression of a target surface and obtain an initial folded configuration. Since MPM produces overly smooth curved surfaces, their crease patterns are unsuitable for practical origami fabrication. We then propose a novel Folding Line Extraction (FLE) method that optimizes these smoothed surfaces to extract folding lines that achieve the target compression with minimal deformation and stretching outside the crease lines. This method produces smooth curved folding lines. Fabrication and experimental validation of the extracted patterns demonstrate their effectiveness and applicability in real-world scenarios.
SA Conference Papers ’25, Hong Kong, Hong Kong
</summary>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>3DPR: Single Image 3D Portrait Relighting with Generative Priors</title>
<link href="https://hdl.handle.net/1721.1/164531" rel="alternate"/>
<author>
<name>Rao, Pramod</name>
</author>
<author>
<name>Meka, Abhimitra</name>
</author>
<author>
<name>Zhou, Xilong</name>
</author>
<author>
<name>Fox, Gereon</name>
</author>
<author>
<name>B R, Mallikarjun</name>
</author>
<author>
<name>Zhan, Fangneng</name>
</author>
<author>
<name>Weyrich, Tim</name>
</author>
<author>
<name>Bickel, Bernd</name>
</author>
<author>
<name>Pfister, Hanspeter</name>
</author>
<author>
<name>Matusik, Wojciech</name>
</author>
<author>
<name>Beeler, Thabo</name>
</author>
<author>
<name>Elgharib, Mohamed</name>
</author>
<author>
<name>Habermann, Marc</name>
</author>
<author>
<name>Theobalt, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/164531</id>
<updated>2026-03-08T03:39:16Z</updated>
<published>2025-12-14T00:00:00Z</published>
<summary type="text">3DPR: Single Image 3D Portrait Relighting with Generative Priors
Rao, Pramod; Meka, Abhimitra; Zhou, Xilong; Fox, Gereon; B R, Mallikarjun; Zhan, Fangneng; Weyrich, Tim; Bickel, Bernd; Pfister, Hanspeter; Matusik, Wojciech; Beeler, Thabo; Elgharib, Mohamed; Habermann, Marc; Theobalt, Christian
Rendering novel, relit views of a human head, given a monocular portrait image as input, is an inherently underconstrained problem. The traditional graphics solution is to explicitly decompose the input image into geometry, material and lighting via differentiable rendering; but this is constrained by the multiple assumptions and approximations of the underlying models and parameterizations of these scene components. We propose 3DPR, an image-based relighting model that leverages generative priors learnt from multi-view One-Light-at-A-Time (OLAT) images captured in a light stage. We introduce a new diverse and large-scale multi-view 4K OLAT dataset of 139 subjects to learn a high-quality prior over the distribution of high-frequency face reflectance. We leverage the latent space of a pre-trained generative head model that provides a rich prior over face geometry learnt from in-the-wild image datasets. The input portrait is first embedded in the latent manifold of such a model through an encoder-based inversion process. Then a novel triplane-based reflectance network trained on our lightstage data is used to synthesize high-fidelity OLAT images to enable image-based relighting. Our reflectance network operates in the latent space of the generative head model, crucially enabling a relatively small number of lightstage images to train the reflectance model. Combining the generated OLATs according to a given HDRI environment map yields physically accurate environmental relighting results. Through quantitative and qualitative evaluations, we demonstrate that 3DPR outperforms previous methods, particularly in preserving identity and in capturing lighting effects such as specularities, self-shadows, and subsurface scattering.
SA Conference Papers ’25, December 15–18, 2025, Hong Kong, Hong Kong
</summary>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shoot-Bounce-3D: Single-Shot Occlusion-Aware 3D from Lidar by Decomposing Two-Bounce Light</title>
<link href="https://hdl.handle.net/1721.1/164530" rel="alternate"/>
<author>
<name>Klinghoffer, Tzofi</name>
</author>
<author>
<name>Somasundaram, Siddharth</name>
</author>
<author>
<name>Xiang, Xiaoyu</name>
</author>
<author>
<name>Fan, Yuchen</name>
</author>
<author>
<name>Richardt, Christian</name>
</author>
<author>
<name>Dave, Akshat</name>
</author>
<author>
<name>Raskar, Ramesh</name>
</author>
<author>
<name>Ranjan, Rakesh</name>
</author>
<id>https://hdl.handle.net/1721.1/164530</id>
<updated>2026-03-08T03:39:03Z</updated>
<published>2025-12-14T00:00:00Z</published>
<summary type="text">Shoot-Bounce-3D: Single-Shot Occlusion-Aware 3D from Lidar by Decomposing Two-Bounce Light
Klinghoffer, Tzofi; Somasundaram, Siddharth; Xiang, Xiaoyu; Fan, Yuchen; Richardt, Christian; Dave, Akshat; Raskar, Ramesh; Ranjan, Rakesh
3D scene reconstruction from a single measurement is challenging, especially in the presence of occluded regions and specular materials, such as mirrors. We address these challenges by leveraging single-photon lidars. These lidars estimate depth from light that is emitted into the scene and reflected directly back to the sensor. However, they can also measure light that bounces multiple times in the scene before reaching the sensor. This multi-bounce light contains additional information that can be used to recover dense depth, occluded geometry, and material properties. Prior work with single-photon lidar, however, has only demonstrated these use cases when a laser sequentially illuminates one scene point at a time. We instead focus on the more practical – and challenging – scenario of illuminating multiple scene points simultaneously. The complexity of light transport due to the combined effects of multiplexed illumination, two-bounce light, shadows, and specular reflections is challenging to invert analytically. Instead, we propose a data-driven method to invert light transport in single-photon lidar. To enable this approach, we create the first large-scale simulated dataset of ~100k lidar transients for indoor scenes. We use this dataset to learn a prior on complex light transport, enabling measured two-bounce light to be decomposed into the constituent contributions from each laser spot. Finally, we experimentally demonstrate how this decomposed light can be used to infer 3D geometry in scenes with occlusions and mirrors from a single measurement. Our code and dataset are released on our project webpage.
SA Conference Papers ’25, Hong Kong, Hong Kong
</summary>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>PhysiOpt: Physics-Driven Shape Optimization for 3D Generative Models</title>
<link href="https://hdl.handle.net/1721.1/164529" rel="alternate"/>
<author>
<name>Zhan, Xiao</name>
</author>
<author>
<name>Jambon, Clément</name>
</author>
<author>
<name>Thompson, Evan</name>
</author>
<author>
<name>Ng, Kenney</name>
</author>
<author>
<name>Konaković Luković, Mina</name>
</author>
<id>https://hdl.handle.net/1721.1/164529</id>
<updated>2026-03-08T03:39:00Z</updated>
<published>2025-12-14T00:00:00Z</published>
<summary type="text">PhysiOpt: Physics-Driven Shape Optimization for 3D Generative Models
Zhan, Xiao; Jambon, Clément; Thompson, Evan; Ng, Kenney; Konaković Luković, Mina
Generative models have recently demonstrated impressive capabilities in producing high-quality 3D shapes from a variety of user inputs (e.g., text or images). However, generated objects often lack physical integrity. We introduce PhysiOpt, a differentiable physics optimizer designed to improve the physical behavior of 3D generative outputs, enabling them to transition from virtual designs to physically plausible, real-world objects. While most generative models represent geometry as continuous implicit fields, physics-based approaches often rely on the finite element method (FEM), requiring ad hoc mesh extraction to perform shape optimization. In addition, these methods are typically slow, limiting their integration in fast, iterative generative design workflows. Instead, we bridge the representation gap and propose a fast and effective differentiable simulation pipeline that optimizes shapes directly in the latent space of generative models using an intuitive and easy-to-implement differentiable mapping. This approach enables fast optimization while preserving semantic structure, unlike traditional methods relying on local mesh-based adjustments. We demonstrate the versatility of our optimizer across a range of shape priors, from global and part-based latent models to a state-of-the-art large-scale 3D generator, and compare it to a traditional mesh-based shape optimizer. Our method preserves the native representation and capabilities of the underlying generative model while supporting user-specified materials, loads, and boundary conditions. The resulting designs exhibit improved physical behavior, remain faithful to the learned priors, and are suitable for fabrication. We demonstrate the effectiveness of our approach on both virtual and fabricated objects.
SA Conference Papers ’25, Hong Kong, Hong Kong
</summary>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Rank Adaptation of Neural Fields</title>
<link href="https://hdl.handle.net/1721.1/164528" rel="alternate"/>
<author>
<name>Truong, Anh</name>
</author>
<author>
<name>Mahmoud, Ahmed</name>
</author>
<author>
<name>Konaković Luković, Mina</name>
</author>
<author>
<name>Solomon, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/164528</id>
<updated>2026-03-08T03:39:13Z</updated>
<published>2025-12-14T00:00:00Z</published>
<summary type="text">Low-Rank Adaptation of Neural Fields
Truong, Anh; Mahmoud, Ahmed; Konaković Luković, Mina; Solomon, Justin
Processing visual data often involves small adjustments or sequences of changes, e.g., image filtering, surface smoothing, and animation. While established graphics techniques like normal mapping and video compression exploit redundancy to encode such small changes efficiently, the problem of encoding small changes to neural fields—neural network parameterizations of visual or physical functions—has received less attention. We propose a parameter-efficient strategy for updating neural fields using low-rank adaptations (LoRA). LoRA, a method from the parameter-efficient fine-tuning LLM community, encodes small updates to pre-trained models with minimal computational overhead. We adapt LoRA for instance-specific neural fields, avoiding the need for large pre-trained models and yielding lightweight updates. We validate our approach with experiments in image filtering, geometry editing, video compression, and energy-based editing, demonstrating its effectiveness and versatility for representing neural field updates.
Anh Truong, Ahmed H. Mahmoud, Mina Konaković Luković, and Justin Solomon. 2025. Low-Rank Adaptation of Neural Fields. In Proceedings of the SIGGRAPH Asia 2025 Conference Papers (SA Conference Papers '25). Association for Computing Machinery, New York, NY, USA, Article 86, 1–12.
</summary>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Participatory Evolution of Artificial Life Systems via Semantic Feedback</title>
<link href="https://hdl.handle.net/1721.1/164527" rel="alternate"/>
<author>
<name>Li, Shuowen</name>
</author>
<author>
<name>Wang, Kexin</name>
</author>
<author>
<name>Fang, Minglu</name>
</author>
<author>
<name>Huang, Danqi</name>
</author>
<author>
<name>Asadipour, Ali</name>
</author>
<author>
<name>Mi, Haipeng</name>
</author>
<author>
<name>Sun, Yitong</name>
</author>
<id>https://hdl.handle.net/1721.1/164527</id>
<updated>2026-03-08T03:39:34Z</updated>
<published>2025-12-14T00:00:00Z</published>
<summary type="text">Participatory Evolution of Artificial Life Systems via Semantic Feedback
Li, Shuowen; Wang, Kexin; Fang, Minglu; Huang, Danqi; Asadipour, Ali; Mi, Haipeng; Sun, Yitong
We present a semantic-feedback framework that treats natural language as a regulatory signal for evolving artificial-life systems. Instead of using prompts to select finished images, text in our system shapes the dynamics of an interactive ecosystem, allowing audiences to cultivate behaviors over time. The framework couples a learned mapping from prompts to simulation parameters with evolutionary search and vision–language evaluation, so user intent modulates both visible outcomes and the underlying generative rules. It supports iterative prompt refinement, multi-agent interaction, and the synthesis of new collective rules from community input. In a user study, participants achieved higher semantic alignment and reported a greater sense of control than with manual tuning, while behaviors remained diverse across generations. As an art-led contribution, the work reframes authoring as participatory cultivation and advances open-ended evolution as a socially distributed, not solely algorithmic, process; as a tool contribution, it offers a practical platform for co-creative generative design.
SA Art Papers ’25, Hong Kong, Hong Kong
</summary>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physical Manifestation of Generative AI Music Systems for Live Performance</title>
<link href="https://hdl.handle.net/1721.1/164526" rel="alternate"/>
<author>
<name>Naseck, Perry</name>
</author>
<author>
<name>Blanchard, Lancelot</name>
</author>
<author>
<name>Lavakare, Madhav</name>
</author>
<author>
<name>Lecamwasam, Kimaya</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/164526</id>
<updated>2026-03-08T03:39:33Z</updated>
<published>2025-12-14T00:00:00Z</published>
<summary type="text">Physical Manifestation of Generative AI Music Systems for Live Performance
Naseck, Perry; Blanchard, Lancelot; Lavakare, Madhav; Lecamwasam, Kimaya; Paradiso, Joseph
This paper explores the physical manifestation of generative AI music systems for live performance, focusing on bridging the expressive gap between AI-generated music and audience perception. Through a year-long collaboration with a human performer, we constructed a kinetic sculpture that visualizes the outputs of an AI jam_bot during concerts. The sculpture, powered by ML-based and pattern-driven mapping methodologies, interprets real-time AI musical decisions as expressive movements. Audience feedback indicates increased engagement and curiosity, although interpretability remains a challenge. Our work highlights the potential of embodied visualization to establish communicative presence for AI performers and suggests avenues for future research.
SA Art Papers ’25, Hong Kong, Hong Kong
</summary>
<dc:date>2025-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performant Unified GPU Kernels for Portable Singular Value Computation Across Hardware and Precision</title>
<link href="https://hdl.handle.net/1721.1/164525" rel="alternate"/>
<author>
<name>Ringoot, Evelyne</name>
</author>
<author>
<name>Alomairy, Rabab</name>
</author>
<author>
<name>Churavy, Valentin</name>
</author>
<author>
<name>Edelman, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/164525</id>
<updated>2026-03-08T03:39:31Z</updated>
<published>2025-12-20T00:00:00Z</published>
<summary type="text">Performant Unified GPU Kernels for Portable Singular Value Computation Across Hardware and Precision
Ringoot, Evelyne; Alomairy, Rabab; Churavy, Valentin; Edelman, Alan
This paper presents a portable, GPU-accelerated implementation of a QR-based singular value computation algorithm in Julia. The singular value decomposition (SVD) is a fundamental numerical tool in scientific computing and machine learning, providing optimal low-rank matrix approximations. Its importance has increased even more in large-scale machine learning pipelines, including large language models (LLMs), where it enables low-rank adaptation (LoRA). The implemented algorithm is based on the classic two-stage QR reduction, consisting of successive matrix reduction to band form and bidiagonal form. Our implementation leverages Julia’s multiple dispatch and metaprogramming capabilities, integrating with the GPUArrays and KernelAbstractions frameworks to provide a unified type and hardware-agnostic function. It supports diverse GPU architectures and data types, and is, to our knowledge, the first GPU-accelerated singular value implementation to support Apple Metal GPUs and half precision. Performance results on multiple GPU backends and data types demonstrate that portability does not require sacrificing performance: the unified function outperforms most linear algebra libraries (MAGMA, SLATE, rocSOLVER, oneMKL) for matrix sizes larger than 1024 × 1024, and achieves 80%-90% of the performance of cuSOLVER for large matrices.
ICPP ’25, San Diego, CA, USA
</summary>
<dc:date>2025-12-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>UQGNN: Uncertainty Quantification of Graph Neural Networks for Multivariate Spatiotemporal Prediction</title>
<link href="https://hdl.handle.net/1721.1/164524" rel="alternate"/>
<author>
<name>Yu, Dahai</name>
</author>
<author>
<name>Zhuang, Dingyi</name>
</author>
<author>
<name>Jiang, Lin</name>
</author>
<author>
<name>Xu, Rongchao</name>
</author>
<author>
<name>Ye, Xinyue</name>
</author>
<author>
<name>Bu, Yuheng</name>
</author>
<author>
<name>Wang, Shenhao</name>
</author>
<author>
<name>Wang, Guang</name>
</author>
<id>https://hdl.handle.net/1721.1/164524</id>
<updated>2026-03-08T03:39:29Z</updated>
<published>2025-12-12T00:00:00Z</published>
<summary type="text">UQGNN: Uncertainty Quantification of Graph Neural Networks for Multivariate Spatiotemporal Prediction
Yu, Dahai; Zhuang, Dingyi; Jiang, Lin; Xu, Rongchao; Ye, Xinyue; Bu, Yuheng; Wang, Shenhao; Wang, Guang
Spatiotemporal prediction plays a critical role in numerous real-world applications such as urban planning, transportation optimization, disaster response, and pandemic control. In recent years, researchers have made significant progress by developing advanced deep learning models for spatiotemporal prediction. However, most existing models are deterministic, i.e., predicting only the expected mean values without quantifying uncertainty, leading to potentially unreliable and inaccurate outcomes. While recent studies have introduced probabilistic models to quantify uncertainty, they typically focus on a single phenomenon (e.g., taxi, bike, crime, or traffic crashes), thereby neglecting the inherent correlations among heterogeneous urban phenomena. To address the research gap, we propose a novel Graph Neural Network with Uncertainty Quantification, termed UQGNN for multivariate spatiotemporal prediction. UQGNN introduces two key innovations: (i) an Interaction-aware Spatiotemporal Embedding Module that integrates a multivariate diffusion graph convolutional network and an interaction-aware temporal convolutional network to effectively capture complex spatial and temporal interaction patterns, and (ii) a multivariate probabilistic prediction module designed to estimate both expected mean values and associated uncertainties. Extensive experiments on four real-world multivariate spatiotemporal datasets from Shenzhen, New York City, and Chicago demonstrate that UQGNN consistently outperforms state-of-the-art baselines in both prediction accuracy and uncertainty quantification. For example, on the Shenzhen dataset, UQGNN achieves a 5% improvement in both prediction accuracy and uncertainty quantification.
SIGSPATIAL ’25, Minneapolis, MN, USA
</summary>
<dc:date>2025-12-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>SONAR Web: A Platform-Agnostic Framework for Real-Time Decentralized Learning Across Heterogeneous Edge Clients</title>
<link href="https://hdl.handle.net/1721.1/164523" rel="alternate"/>
<author>
<name>Yuan, Joyce</name>
</author>
<author>
<name>Le, Brian</name>
</author>
<author>
<name>Le, Kathryn</name>
</author>
<author>
<name>Shi, Yichuan</name>
</author>
<author>
<name>Singh, Abhishek</name>
</author>
<author>
<name>Sharma, Rishi</name>
</author>
<author>
<name>Patricio, Angel</name>
</author>
<author>
<name>Raskar, Ramesh</name>
</author>
<id>https://hdl.handle.net/1721.1/164523</id>
<updated>2026-03-08T03:39:37Z</updated>
<published>2025-12-02T00:00:00Z</published>
<summary type="text">SONAR Web: A Platform-Agnostic Framework for Real-Time Decentralized Learning Across Heterogeneous Edge Clients
Yuan, Joyce; Le, Brian; Le, Kathryn; Shi, Yichuan; Singh, Abhishek; Sharma, Rishi; Patricio, Angel; Raskar, Ramesh
Most federated learning (FL) frameworks assume reliable networks and homogeneous devices, limiting their applicability in mobile and edge environments where connectivity is intermittent and devices are highly heterogeneous. We introduce SONAR Web, an open-source framework for fully decentralized, cross-platform collaborative learning between browsers, servers, tablets, and smartphones. SONAR Web decouples the learning protocol from the underlying client platform through a platform-agnostic configuration interface—enabling Python, JavaScript, and mobile clients to seamlessly interoperate in real time. By combining peer-to-peer RTC protocols with communication-efficient techniques from FL, SONAR Web supports privacy-preserving training without centralized orchestration. We demonstrate SONAR Web's robustness through deployments on real-world devices and networks, showing resilience under heterogeneous network conditions and resource variability. SONAR Web provides a unified, language-agnostic interface for decentralized learning, enabling seamless collaboration across heterogeneous devices and runtimes—advancing scalable, inclusive, and real-time model training at the mobile and edge frontier.
FLEdge-AI ’25, November 4-8, 2025, Hong Kong, China
</summary>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ferrozuit: Ferromagnetic Electronic Textile System for Zero-Gravity Spatial Anchoring</title>
<link href="https://hdl.handle.net/1721.1/164522" rel="alternate"/>
<author>
<name>Honnet, Cedric</name>
</author>
<author>
<name>Freire, Rachel</name>
</author>
<author>
<name>Cherston, Juliana</name>
</author>
<author>
<name>Guenther, Maximilian</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<author>
<name>Wicaksono, Irmandy</name>
</author>
<id>https://hdl.handle.net/1721.1/164522</id>
<updated>2026-03-08T03:39:36Z</updated>
<published>2025-12-29T00:00:00Z</published>
<summary type="text">Ferrozuit: Ferromagnetic Electronic Textile System for Zero-Gravity Spatial Anchoring
Honnet, Cedric; Freire, Rachel; Cherston, Juliana; Guenther, Maximilian; Paradiso, Joseph; Wicaksono, Irmandy
Long-duration human space missions introduce persistent physical, physiological, and psychological challenges stemming from the absence of gravity. Beyond major concerns like bone deterioration, cardiovascular deconditioning, and muscle atrophy, astronauts frequently experience spatial disorientation, discomfort during routine tasks, and difficulty maintaining stable body positioning. These subtle yet pervasive issues impact daily functioning, underscoring the need for lightweight, unobtrusive solutions that support orientation, comfort, and stability in microgravity environments. Ferrozuit introduces a solution to address these challenges in microgravity. It is a prototype crafted from custom ferromagnetic thread, woven and tailored to interact with programmable (electro)permanent magnets embedded within the microgravity environment. This system aims to provide an anchoring force intended to improve stability during tasks, enhance comfort during rest, and create a sense of orientation. This paper details the design rationale, the fabrication of the ferromagnetic textile, the magnetic docking system, initial technical evaluations, and potential applications. Ferrozuit reimagines spatial anchoring as an embedded, textile-driven experience, blending textile craft with advanced materials for adaptive wearable anchoring in microgravity environments.
UbiComp Companion ’25, Espoo, Finland
</summary>
<dc:date>2025-12-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intelligent Soft Wearables</title>
<link href="https://hdl.handle.net/1721.1/164521" rel="alternate"/>
<author>
<name>Yu, Tianhong</name>
</author>
<author>
<name>Honnet, Cedric</name>
</author>
<author>
<name>Cheng, Tingyu</name>
</author>
<author>
<name>Takahashi, Ryo</name>
</author>
<author>
<name>Zhou, Bo</name>
</author>
<author>
<name>Zhang, Cheng</name>
</author>
<author>
<name>Lukowicz, Paul</name>
</author>
<author>
<name>Kawahara, Yoshihiro</name>
</author>
<author>
<name>Hester, Josiah</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<author>
<name>Luo, Yiyue</name>
</author>
<author>
<name>Wicaksono, Irmandy</name>
</author>
<id>https://hdl.handle.net/1721.1/164521</id>
<updated>2026-03-08T03:39:35Z</updated>
<published>2025-12-29T00:00:00Z</published>
<summary type="text">Intelligent Soft Wearables
Yu, Tianhong; Honnet, Cedric; Cheng, Tingyu; Takahashi, Ryo; Zhou, Bo; Zhang, Cheng; Lukowicz, Paul; Kawahara, Yoshihiro; Hester, Josiah; Paradiso, Joseph; Luo, Yiyue; Wicaksono, Irmandy
Human bodies are almost always in contact with soft materials like clothing, for warmth, protection, self-expression, etc. Recent advancements in intelligent soft wearables have augmented these on-body soft objects with computational functions and intelligence with little compromise on the softness and comfort of wearables, allowing prolonged wear. These innovations, which combine advanced soft sensor design, fabrication, and computational power, offer unprecedented opportunities to improve our health, productivity, and overall well-being with monitoring and assistive capabilities. However, the inherent physical properties of soft materials present unique challenges in achieving practical interactions. The complexity of intelligent soft wearables, multiplexing intricate designs, soft materials, flexible electronics, advanced signal processing algorithms, and machine learning models, necessitates collaborative efforts from experts across diverse domains. This workshop aims to bring together interested researchers and practitioners across relevant domains to discuss the challenges and opportunities of intelligent soft wearables.
</summary>
<dc:date>2025-12-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum dots: A journey from fundamental discovery to technological impacts</title>
<link href="https://hdl.handle.net/1721.1/164520" rel="alternate"/>
<author>
<name>Hassan, Abeera</name>
</author>
<author>
<name>Kaur, Jaspreet</name>
</author>
<author>
<name>Chen, Ou</name>
</author>
<author>
<name>Bawendi, Moungi G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164520</id>
<updated>2026-01-13T04:55:00Z</updated>
<published>2025-11-13T00:00:00Z</published>
<summary type="text">Quantum dots: A journey from fundamental discovery to technological impacts
Hassan, Abeera; Kaur, Jaspreet; Chen, Ou; Bawendi, Moungi G.
This article traces the evolution of quantum dots (QDs) from their initial discovery to growing technological impacts. We highlight the key breakthroughs in the development of colloidal QDs that have enabled precise control over their unique optical and optoelectronic properties. We also discuss a range of QD-based applications and address commercialization efforts. Finally, we examine ongoing challenges and emerging opportunities that are set to shape the future of QD research and technological advancement.
</summary>
<dc:date>2025-11-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity-labeling proteomics reveals remodeled interactomes and altered localization of pathogenic SHP2 variants</title>
<link href="https://hdl.handle.net/1721.1/164519" rel="alternate"/>
<author>
<name>van Vlimmeren, Anne E.</name>
</author>
<author>
<name>Tang, Lauren C.</name>
</author>
<author>
<name>Jiang, Ziyuan</name>
</author>
<author>
<name>Iyer, Abhishek</name>
</author>
<author>
<name>Voleti, Rashmi</name>
</author>
<author>
<name>Krismer, Konstantin</name>
</author>
<author>
<name>Gaublomme, Jellert T.</name>
</author>
<author>
<name>Jovanovic, Marko</name>
</author>
<author>
<name>Shah, Neel H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164519</id>
<updated>2026-03-08T03:39:31Z</updated>
<published>2025-12-22T00:00:00Z</published>
<summary type="text">Proximity-labeling proteomics reveals remodeled interactomes and altered localization of pathogenic SHP2 variants
van Vlimmeren, Anne E.; Tang, Lauren C.; Jiang, Ziyuan; Iyer, Abhishek; Voleti, Rashmi; Krismer, Konstantin; Gaublomme, Jellert T.; Jovanovic, Marko; Shah, Neel H.
Missense mutations in PTPN11, which encodes the protein tyrosine phosphatase SHP2, are common in several developmental disorders and cancers. While many mutations disrupt auto-inhibition and hyperactivate SHP2, several do not enhance catalytic activity. Both activating and non-activating mutations could potentially drive pathogenic signaling by altering SHP2 interactions or localization. We employed proximity-labeling proteomics to map the interaction networks of wild-type SHP2, ten clinically relevant mutants, and SHP2 bound to an inhibitor that stabilizes its auto-inhibited state. Our analyses reveal mutation- and inhibitor-dependent alterations in the SHP2 interactome, with several mutations also changing localization. Some mutants show increased mitochondrial localization and impact mitochondrial function. This study provides a resource for exploring SHP2 signaling and offers new insights into the molecular basis of SHP2-driven diseases. Furthermore, this work highlights the capacity for proximity-labeling proteomics to detect missense-mutation-dependent changes in protein interactions and localization.
</summary>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perspective on patient and non-academic partner engagement for the responsible integration of large language models in health chatbots</title>
<link href="https://hdl.handle.net/1721.1/164518" rel="alternate"/>
<author>
<name>Jaiswal, Nikhil</name>
</author>
<author>
<name>Ma, Yuanchao</name>
</author>
<author>
<name>Lebouché, Bertrand</name>
</author>
<author>
<name>Poenaru, Dan</name>
</author>
<author>
<name>Pomey, Marie-Pascale</name>
</author>
<author>
<name>Achiche, Sofiane</name>
</author>
<author>
<name>Lessard, David</name>
</author>
<author>
<name>Engler, Kim</name>
</author>
<author>
<name>Montiel, Zully</name>
</author>
<author>
<name>Acevedo, Hector</name>
</author>
<author>
<name>Gameiro, Rodrigo R.</name>
</author>
<author>
<name>Celi, Leo A.</name>
</author>
<author>
<name>Osmanlliu, Esli</name>
</author>
<id>https://hdl.handle.net/1721.1/164518</id>
<updated>2026-03-08T03:39:30Z</updated>
<published>2025-12-22T00:00:00Z</published>
<summary type="text">Perspective on patient and non-academic partner engagement for the responsible integration of large language models in health chatbots
Jaiswal, Nikhil; Ma, Yuanchao; Lebouché, Bertrand; Poenaru, Dan; Pomey, Marie-Pascale; Achiche, Sofiane; Lessard, David; Engler, Kim; Montiel, Zully; Acevedo, Hector; Gameiro, Rodrigo R.; Celi, Leo A.; Osmanlliu, Esli
Uses of large language models (LLMs) in health chatbots are expanding into high-stakes clinical contexts, heightening the need for tools that are evidence-based, accountable, accurate, and patient-centred. This conceptual, practice-informed Perspective reflects on engaging patients and non-academic partners for the responsible integration of LLMs, grounded in the co-construction of MARVIN (for people living with HIV) and in an emerging collaboration with MIT Critical Data. Organised by the Software Development Life Cycle, we describe: conception/needs assessment with patient partners to identify use cases, acceptable trade-offs, and privacy expectations; development that prioritises grounding via vetted sources, structured human feedback, and data-validation committees including patient partners; testing and evaluation using patient-reported outcome measures (PROMs) and patient-reported experience measures (PREMs) chosen in collaboration with patients to capture usability, acceptability, trust, and perceived safety, alongside task performance and harmful-output monitoring; and implementation via diverse governance boards, knowledge-mobilisation materials to set expectations, and risk-management pathways for potentially unsafe outputs. Based on our experience with MARVIN, we recommend early and continuous engagement of patients and non-academic partners, fair compensation, shared decision-making power, transparent decision logging, and inclusive, adaptable governance that can evolve with changing models and standards. These lessons highlight how patient partnership can directly shape chatbot design and oversight, helping teams align LLM-enabled tools with patient-centred goals while building accountable, safe, and equitable systems.
</summary>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of charm mixing and CP violation with D0 → K±π∓π±π∓ decays</title>
<link href="https://hdl.handle.net/1721.1/164517" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164517</id>
<updated>2026-03-08T03:39:28Z</updated>
<published>2025-12-19T00:00:00Z</published>
<summary type="text">Study of charm mixing and CP violation with D0 → K±π∓π±π∓ decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A study of charm mixing and CP violation in D0 → K±π∓π±π∓ decays is performed using data collected by the LHCb experiment in proton-proton collisions from 2015 to 2018, corresponding to an integrated luminosity of 6 fb−1. The ratio of promptly produced D0 → K+π−π+π− to D0 → K−π+π−π+ decay rates is measured as a function of D0 decay time, both inclusive over phase space and in bins of phase space. Taking external inputs for the D0–D̄0 mixing parameters x and y allows constraints to be obtained on the hadronic parameters of the charm decay. When combined with previous measurements from charm-threshold experiments and at LHCb, improved knowledge is obtained for these parameters, which is valuable for studies of the angle γ of the Unitarity Triangle. An alternative analysis is also performed, in which external inputs are taken for the hadronic parameters, and the mixing parameters are determined, including ∆x and ∆y, which are nonzero in the presence of CP violation. It is found that x = (0.85 +0.15/−0.24)%, y = (0.21 +0.29/−0.27)%, ∆x = (−0.02 ± 0.04)%, and ∆y = (0.02 +0.04/−0.03)%. These results are consistent with previous measurements and the hypothesis of CP conservation.
</summary>
<dc:date>2025-12-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy-energy correlator at hadron colliders: celestial blocks and singularities</title>
<link href="https://hdl.handle.net/1721.1/164516" rel="alternate"/>
<author>
<name>Chen, Hao</name>
</author>
<author>
<name>Ruan, Hongyi</name>
</author>
<author>
<name>Zhu, Hua X.</name>
</author>
<id>https://hdl.handle.net/1721.1/164516</id>
<updated>2026-03-08T03:39:01Z</updated>
<published>2025-12-22T00:00:00Z</published>
<summary type="text">Energy-energy correlator at hadron colliders: celestial blocks and singularities
Chen, Hao; Ruan, Hongyi; Zhu, Hua X.
The energy-energy correlator (EEC) is an event-shape observable that characterizes the distribution of energy flux in collision events. We initiate the study of full-range EEC at hadron colliders, generalizing the extensively studied EEC in e+e− collisions as well as the transverse EEC in hadron collisions. We derive celestial blocks from Lorentz symmetry to perform partial wave decomposition of the EEC at hadron colliders. These celestial blocks are essentially conformal blocks on the 2d celestial sphere, which have additional dependence on the collinear spin of the “light-ray transition matrix” along the collision axis. In this work, we perform the leading-order (LO) analytic calculation of this observable in pure Yang-Mills theory and use it as an example to illustrate the block decomposition. Numerically, the block expansion demonstrates superior accuracy in the collinear limit compared to conventional power series expansion. Analytically, we observe in this example that the block coefficients exhibit analyticity in both collinear and transverse spin. In addition, we analyze several kinematic limits at LO — collinear, back-to-back, opposite coplanar and Regge limit. While the first three limits naturally generalize their e+e− collision counterparts or transverse EEC and are governed by soft-collinear dynamics, the Regge limit requires complete angular dependence and reveals BFKL physics. Phenomenologically, we propose a realistic experimental setup and briefly discuss how the convolution of the parton distribution function modifies the perturbative EEC result. Our work suggests that the full-range EEC at hadron colliders is an elegant observable which probes a broader kinematic space and connects various regimes of different QCD dynamics through a single measurement.
</summary>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis and Applications of Large-Area Monolayer Graphene</title>
<link href="https://hdl.handle.net/1721.1/164515" rel="alternate"/>
<author>
<name>Wang, Zhien (Abigail)</name>
</author>
<id>https://hdl.handle.net/1721.1/164515</id>
<updated>2026-01-13T03:36:27Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Synthesis and Applications of Large-Area Monolayer Graphene
Wang, Zhien (Abigail)
Graphene, renowned for its exceptional electrical, mechanical, and chemical properties, is a promising candidate for next-generation electronics, photonics, and biosensing. However, realizing its full potential depends critically on the ability to synthesize high-quality monolayer graphene. In this thesis, we present a robust chemical vapor deposition (CVD) approach for synthesizing large-area, adlayer-free, single-orientation graphene on Cu(111) foil and Cu(111) film/sapphire. A comparative analysis between these two substrates reveals critical differences in wrinkle density, grain size, and strain — offering insights for optimizing graphene growth.
We further identify and characterize defective merging behavior in single-orientation graphene domains. Contrary to conventional assumptions, these merging regions contain permeable defects, revealing previously unrecognized limitations in using single-orientation stitched graphene as an impermeable barrier. To scale up production while reducing human error, we also develop an autonomous CVD platform with automated sample handling, growth, and post-growth oxidation. This system enables high-throughput and reproducible graphene synthesis with minimal supervision.
Building on these synthesis advances, we explore multiple applications of large-area monolayer graphene. We discover that graphene can promote interfacial oxidation of metals like aluminum and titanium during deposition, whereas metals such as nickel remain stable — a finding that informs the engineering of metal-graphene interfaces for electronic devices. In parallel, we explore diverse applications of graphene, including its role as a transparent, flexible electrode in organic solar cells, along with several collaborative efforts demonstrating its use as a sensor for cardiac microtissues, and as a tunable microheater in mid-infrared devices.
Altogether, this work advances both the fundamental understanding and technological scalability of monolayer graphene, positioning it as a versatile platform for future applications across electronics, optoelectronics, and biointerfaces.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping signaling networks and rapidly evolving genes in the developing Arabidopsis seed at single-nucleus resolution</title>
<link href="https://hdl.handle.net/1721.1/164514" rel="alternate"/>
<author>
<name>Martin, Caroline A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164514</id>
<updated>2026-01-13T03:36:58Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Mapping signaling networks and rapidly evolving genes in the developing Arabidopsis seed at single-nucleus resolution
Martin, Caroline A.
Seeds are an exceptional evolutionary innovation that enables the conditional allocation of maternal resources to successfully fertilized ovules. During early development, seeds accumulate nutrients that are utilized either by the embryo or by humans who harvest seed crops for food, biofuel, and livestock feed. Moreover, the grains of maize, rice, and wheat provide approximately 60% of the calories consumed worldwide. Although seeds are a cornerstone for ecosystems and modern agriculture, fundamental aspects of their development are incompletely understood. In this thesis, I develop a transcriptional atlas of seed development using the model plant Arabidopsis thaliana to clarify the functional compartmentalization, diversity, and developmental dynamics of cell types in the seed. I focus my analyses on how seed cell types communicate with one another to ensure successful propagation, and how genetic conflicts in the seed may drive rapid evolution in specific cell types. After characterizing the extent of short, secreted peptide expression in specific seed cell types, I perform in silico screens to match potential peptide hormones with their receptors. In total, I show that the seed coat shows functional compartmentalization around the gateway for maternal resources into seeds, that seed genes differentially expressed in a maternal resource transfer structure are rapidly evolving, and that genes underlying brassinosteroid biosynthesis and response are expressed in adjacent tissues, among other findings. This thesis illuminates potentially new mechanisms for inter-tissue coordination and provides a transcriptional reference for future seed studies.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization in Deep Learning: Structured, Realistic and Interpretable Learning for Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/164513" rel="alternate"/>
<author>
<name>Tsiourvas, Asterios</name>
</author>
<id>https://hdl.handle.net/1721.1/164513</id>
<updated>2026-01-13T03:36:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimization in Deep Learning: Structured, Realistic and Interpretable Learning for Decision-Making
Tsiourvas, Asterios
In recent years, deep learning has emerged as a powerful tool for data-driven decision-making. However, its adoption in high-stakes applications is often constrained by challenges related to interpretability, fairness, and generalization in structured or complex environments. This thesis develops new optimization methodologies to enhance the realism, structure-awareness, and interpretability of deep learning models in decision-making tasks. We begin, in Chapter 2, by addressing the challenge of optimizing trained neural networks for data-driven decision-making. Although neural networks can encode rich representations of preferences or outcomes, directly optimizing their outputs can be computationally intractable and often may produce unrealistic prescriptions. We introduce scalable algorithms that leverage the piecewise-linear structure of ReLU networks, reducing the original hard-to-solve mixed-integer program to tractable linear programs. To ensure realism, we introduce constraints that restrict decisions to lie on the data manifold. We then extend this framework to any differentiable neural network or MIP-expressible model and show that it scales for networks with millions of parameters. In Chapter 3, we focus on decision-making under observational data. First, we study personalized treatment recommendations under discrete treatments. We introduce the Prescriptive ReLU (P-ReLU) network, a piecewise-linear model that partitions the input space into polyhedral regions, assigning treatments uniformly within each, and that can be translated into an equivalent interpretable decision tree. We demonstrate that P-ReLU achieves strong prescriptive accuracy and accommodates structural/prescriptive constraints with ease. Next, we consider the problem of large language model (LLM) routing, where a query must be dynamically routed to the best model under competing metrics like accuracy and cost.
We develop a causal, end-to-end approach that learns routing policies directly from logged observational data, directly minimizing decision-making regret. Finally, we tackle the problem of generating realistic, manifold-aligned counterfactual explanations. To address this problem, we present a MIP formulation where we explicitly enforce manifold alignment by reformulating the highly nonlinear Local Outlier Factor (LOF) metric as a set of mixed-integer constraints. To address the computational challenge, we leverage the geometry of the network and propose an efficient decomposition scheme that reduces the initial hard-to-solve problem into a series of significantly smaller, easier-to-solve problems. We further extend this framework to any differentiable neural network or MIP-expressible machine learning model. In Chapter 4, we focus on structured machine learning. We first address the problem of hierarchical time series forecasting, where predictions must be both accurate and consistent with the aggregation structure of the hierarchy. While prior methods rely on fixed projection matrices, we propose learning the optimal oblique projection directly from data. The proposed end-to-end approach jointly trains the forecasting model and projection layer, significantly improving accuracy and coherence. Next, we study the problem of creating a highly expressive, interpretable, and fair machine learning model. We propose Neural-Informed Decision Trees (NIDTs), a model that combines the predictive power of neural networks with the inherent interpretability of decision trees. NIDTs use axis-aligned splits on dataset features to form transparent decision paths, and at each leaf, apply a linear predictor based on both the original features and neural embeddings from a task-specific network to capture non-linearities.
To generate NIDTs, we develop a decomposition training scheme that supports direct integration of fairness constraints via a constrained convex optimization problem solved at each leaf. Finally, in Chapter 5, we address fairness and efficiency in emergency department (ED) operations, where prolonged length of stay (LOS) has been linked to adverse outcomes such as increased mortality and higher risk of hospital-acquired infections. We focus on the patient prioritization and placement aspects of ED operations to improve throughput and reduce wait times. We propose a novel MIP predictive-prescriptive framework that decomposes predicted LOS into actionable components, enabling a more granular and operationally meaningful model of ED dynamics. Fairness considerations are explicitly incorporated into the formulation. To address uncertainty, we introduce a sampling-based solution approach. Our method increases ED throughput by 50–100% and reduces average wait time by 50–75%, depending on current utilization levels, while achieving near-optimal performance compared to a clairvoyant oracle. This work was conducted in collaboration with a major U.S. academic medical center. To facilitate practical implementation, we also design an interpretable metamodel that approximates the predictive-prescriptive algorithm with high fidelity. Together, these contributions provide a unified perspective on deep learning for reliable decision-making, grounded in optimization and encompassing interpretability, structure-awareness, and causal reasoning, well-suited for high-stakes operational environments.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reverberation Mapping of Supermassive Black Holes using Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/164512" rel="alternate"/>
<author>
<name>Lewin, Collin</name>
</author>
<id>https://hdl.handle.net/1721.1/164512</id>
<updated>2026-01-13T03:36:38Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reverberation Mapping of Supermassive Black Holes using Machine Learning
Lewin, Collin
Accreting supermassive black holes at the centers of galaxies, known as active galactic nuclei (AGN), offer a unique window into the physics of accretion and feedback that shape galactic evolution. Yet, the small spatial scales of these regions remain inaccessible to direct imaging. Reverberation mapping circumvents this limitation by using time delays between correlated emission at different wavelengths to infer physical size scales. While X-ray reverberation probes the innermost accretion flow, continuum reverberation in the UV, optical, and infrared (UVOIR) traces reprocessing by the accretion disk and broad-line region (BLR). In this thesis, I develop and apply frequency-domain timing techniques based on Gaussian Process (GP) regression to study AGN reverberation across X-ray and UVOIR regimes. By modeling the empirical variability of AGN light curves with GPs, I interpolate onto an evenly sampled time grid, enabling robust estimation of Fourier-resolved time lags despite irregular sampling or large time gaps. I apply this method to NuSTAR observations of the Narrow-line Seyfert 1 galaxy Ark 564, introducing a multi-task GP model that jointly learns kernel hyperparameters across light curves. This enables the first simultaneous modeling of lag and flux spectra from both NuSTAR and XMM-Newton using a relativistic reverberation model to constrain black hole mass and disk properties. Recent reverberation campaigns with the Neil Gehrels Swift Observatory and ground-based telescopes have revealed significant discrepancies between observed inter-band lags and standard accretion disk theory. These include unexpectedly large lag amplitudes (the “accretion disk size problem”) and weak correlations between X-ray and UV/optical light curves. To investigate further, I analyze recent Swift campaigns of Mrk 335 and Mrk 817 using GP-based frequency-resolved lag analysis. 
In both sources, standard disk lags appear only on short timescales (high frequencies), while longer-than-expected lags dominate at low frequencies. These lag excesses are consistent with reprocessing at larger radii, similar to the BLR. Mrk 817 offers a rare opportunity to connect the inner and outer accretion flow: I obtain the first simultaneous measurement of X-ray and UVOIR lags, effectively mapping the full disk. These lags vary significantly over the campaign, with longer delays during periods of stronger X-ray obscuration. This suggests that a disk wind may modulate the observed lags by introducing additional reprocessing and/or blocking ionizing flux from reaching more-distant material. To test this obscuration effect across a population, I conduct the first statistical study of UV/optical lag excess versus physical parameters across the Swift campaigns. The results show that the lag excess is driven entirely by obscured AGN, while the lags of unobscured sources are, on average, consistent with thin-disk theory. Regression analysis reveals that X-ray column density explains over 80% of the variance in lag excess. As for the X-ray/UV connection, obscured AGN also tend to show weaker correlations and more variable lags, suggesting that line-of-sight absorption not only contributes additional reprocessed emission that extends the UV/optical lags, but may also decouple or delay the X-ray and UV variability. To make GP-based time series analysis accessible to the community, I developed the STELA Toolkit, a fully documented Python package for computing frequency-domain data products using GPs. I also benchmark GP performance against other interpolation methods, including state-of-the-art transformers, paving the way for scalable, ML-enabled timing analysis in the era of time-domain surveys like those of the Vera C. Rubin Observatory.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Nonlinear Dynamics: Methods and Applications</title>
<link href="https://hdl.handle.net/1721.1/164511" rel="alternate"/>
<author>
<name>Rossi, Baptiste T.</name>
</author>
<id>https://hdl.handle.net/1721.1/164511</id>
<updated>2026-01-13T03:36:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Learning Nonlinear Dynamics: Methods and Applications
Rossi, Baptiste T.
Accurate modeling of dynamical systems through differential equations is essential for scientific prediction and prescriptive control. Traditional model development, which relies on expert knowledge, parameter fitting and validation, is often iterative, time-consuming, and complicated by real-world data complexities such as noise and missing observations. This thesis addresses these challenges by developing robust, scalable, and interpretable methods for learning nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs) directly from data, with a particular emphasis on applications in fluid dynamics.&#13;
&#13;
In Chapter 2, we introduce a novel methodology for learning arbitrary nonlinear ODEs using collocation methods combined with interpolation. This approach demonstrates enhanced robustness to noise and significant computational speed-ups compared to classical system identification techniques, including the popular SINDy framework. It also provides a constructive method for reconstructing unobserved system components, making it applicable to partially observed systems, and offers theoretical guarantees on accuracy traditionally absent in strong-form identification.&#13;
&#13;
In Chapter 3, we combine the approach from Chapter 2 with sparse regression to derive sparse ODEs from data, demonstrating enhanced robustness to observational noise. Our method shows improved performance in recovering the true structures and coefficients on canonical benchmark tests under significant noise, while the performance of traditional surrogate methods deteriorates even with minimal noise.&#13;
&#13;
In Chapter 4, we extend this methodology to Partial Differential Equations (PDEs) using the method of lines, addressing issues related to data scale and interpolation ill-posedness. With a focus on Computational Fluid Dynamics (CFD), we show that our method goes beyond recovering complex nonlinear PDEs, such as the Navier-Stokes equations, from simulation data. The method can also be used as an a posteriori indicator of simulation quality, providing insights into the effective PDEs represented by a given simulation, and pinpointing error-generating areas to inform adaptive mesh techniques.&#13;
&#13;
Lastly, in Chapter 5, we introduce a novel data-driven framework for modeling turbulent phenomena, a long-standing challenge in aerospace and climate science. Our approach addresses the Reynolds-Averaged Navier-Stokes (RANS) closure problem, which involves modeling the unobserved eddy viscosity field. We tackle two interconnected inverse problems: reconstructing the eddy viscosity from flow data and discovering its governing partial differential equations (PDEs), thereby proposing a new pathway to uncover new or refined RANS closure models directly from high-fidelity simulations. This chapter establishes a tractable baseline using a composite loss function, which we evaluate on canonical turbulent flows. Our results demonstrate that while the approach can recover governing equations when the ground truth eddy viscosity is known, significant challenges remain due to noise and numerical errors. We conclude that a more advanced reconstruction methodology is essential for robustly discovering these models, underscoring the potential of this data-driven approach and identifying critical areas for future research.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Electronic Compressibility of Rhombohedral Graphene Multilayers</title>
<link href="https://hdl.handle.net/1721.1/164510" rel="alternate"/>
<author>
<name>Aronson, Samuel H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164510</id>
<updated>2026-01-13T03:37:38Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Electronic Compressibility of Rhombohedral Graphene Multilayers
Aronson, Samuel H.
In condensed matter systems, energy bands with narrow dispersion frequently host correlated electronic phases that arise from strong Coulomb interactions. When these bands also have concentrated Berry curvature, the correlated phases may be topologically non-trivial. The low-energy bands of rhombohedral graphene multilayers possess both of these ingredients, making this a promising class of materials in which to search for correlated topological electronic ground states. This thesis describes our electronic compressibility measurements on rhombohedral graphene multilayers, with a particular focus on the pentalayer system (R5G). We utilize a planar capacitance technique that probes the thermodynamic density of states and enables us to extract energy gaps of incompressible phases. We observe a variety of correlated electronic phenomena including half and quarter metals, layer antiferromagnetism, correlation-driven Chern insulators, and thermodynamic signatures of potential Wigner crystallization. We also study the electronic compressibility of R5G aligned to a hexagonal boron nitride (hBN) substrate to form a moiré superlattice. Motivated by the recent discovery of the fractional quantum anomalous Hall effect in this system when the electrons are pushed away from the moiré interface by an external electric displacement field, we study the opposite moiré-proximal limit, in which the superlattice potential is considerably stronger. We observe integer and fractional Chern insulator states that persist down to low magnetic fields in addition to numerous trivial and topological charge density waves. We map out a phase diagram that is highly sensitive to both displacement and magnetic fields, establishing the R5G-hBN superlattice as a highly-tunable system for studying the interplay between intrinsic band topology and strong lattice effects.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hadronic Structure with Classical and Quantum Computing</title>
<link href="https://hdl.handle.net/1721.1/164509" rel="alternate"/>
<author>
<name>Avkhadiev, Artur</name>
</author>
<id>https://hdl.handle.net/1721.1/164509</id>
<updated>2026-01-13T03:36:35Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Hadronic Structure with Classical and Quantum Computing
Avkhadiev, Artur
Calculations in lattice quantum chromodynamics (QCD) — presently the only known systematically improvable approach to describe the strong nuclear force in the nonperturbative regime from first principles — are playing an increasingly important role in revealing how hadrons emerge from the interactions of the underlying degrees of freedom: quarks and gluons. With computational and theoretical advances, more fruitful connections have emerged between lattice QCD and phenomenology, and the field is now ripe for deriving tighter constraints on hadronic structure through joint analyses of numerical lattice QCD results with experimental data.&#13;
 This thesis summarizes lattice QCD calculations of the Collins-Soper (CS) kernel: a nonperturbative function whose inclusion in joint analyses has the potential to advance the study of multidimensional hadronic structure. The CS kernel is an anomalous dimension of transverse-momentum-dependent (TMD) distributions describing the three-dimensional structure of ultrarelativistic hadrons as a function of quark-gluon momenta collinear with and transverse to the hadron's motion. Constraints on the CS kernel at nonperturbative transverse-momentum scales are instrumental for relating TMDs across scales and processes. The kernel differs for quark and gluon TMDs, but is otherwise universal. This thesis presents the first lattice QCD determination of the quark CS kernel with systematic control over operator mixing, quark mass, and lattice discretization, and a proof-of-principle lattice calculation of the gluon CS kernel providing the first nonperturbative constraints on this quantity.&#13;
 Additionally, this thesis summarizes exploratory studies on how Hamiltonian calculations — realized with quantum-computer simulations and tensor networks — may be combined with conventional Monte Carlo calculations based on Lagrangian formulations in Euclidean space. These studies examine how constructions of interpolating operators, used in conventional calculations to map between the vacuum and a ground state of interest, may be optimized in Hamiltonian calculations to increase overlap with the target state. Results, limited to the Schwinger model, support further investigations of this approach in theories more closely resembling QCD as quantum-computing and tensor-network technologies continue to mature.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution</title>
<link href="https://hdl.handle.net/1721.1/164508" rel="alternate"/>
<author>
<name>Elsabbagh, Fares</name>
</author>
<id>https://hdl.handle.net/1721.1/164508</id>
<updated>2026-01-13T04:08:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution
Elsabbagh, Fares
Fast simulation of digital circuits is crucial to build modern chips. Current processors and SoCs integrate hundreds of complex components, including cores, accelerators, and memory hierarchies. Simulating these systems is necessary to verify correctness and explore the design space. Simulation can happen at different levels of abstraction. In this work we focus on Register-Transfer-Level (RTL) simulation. While RTL simulators are frequently used in development due to their quick compilation times, their runtime performance is slow: as designs scale up, multicore communication and scheduling overheads limit performance and scalability.&#13;
&#13;
We present ASH, a parallel architecture tailored to RTL simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. ASH hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs that represent different types of architectures. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task Scheduling Techniques to Accelerate RTL Simulation</title>
<link href="https://hdl.handle.net/1721.1/164507" rel="alternate"/>
<author>
<name>Sheikhha, Shabnam</name>
</author>
<id>https://hdl.handle.net/1721.1/164507</id>
<updated>2026-01-13T04:08:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Task Scheduling Techniques to Accelerate RTL Simulation
Sheikhha, Shabnam
Fast simulation of digital circuits is crucial to build modern chips. Slow simulation lengthens chip design time and makes bugs more frequent. While simulation can happen at different levels of abstraction, Register-Transfer-Level (RTL) simulation is the usual bottleneck in chip design, as it is needed for ongoing debugging and evaluation. Current simulators scale poorly across CPU cores, because they are unable to exploit the fine-grained parallelism inherent in simulation workloads.&#13;
&#13;
We present ASH, a parallel architecture tailored to simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Dataflow execution exposes abundant parallelism, as each task can run as soon as its inputs are available. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. Selective execution introduces dynamic data dependences since skipped tasks do not communicate data. ASH employs speculative execution to handle these dependences. ASH’s hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware. The key compiler techniques include a novel partitioning for minimizing data communication while maintaining load balance, and a strategic coarsening mechanism to reduce the overheads of fine-grained tasks.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility</title>
<link href="https://hdl.handle.net/1721.1/164506" rel="alternate"/>
<author>
<name>Baum, Amelia Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/164506</id>
<updated>2026-01-13T04:08:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility
Baum, Amelia Rose
Public transit agencies face significant and growing challenges related to workforce shortages, absenteeism, and employee retention, which threaten service reliability. Reports found that 90% of U.S. transit agencies are experiencing a workforce shortage, with 84% claiming that the shortage affects their ability to provide scheduled service. Industry-wide, operator absence is a significant contributor to missed work and has, in many cases, delayed the full reinstatement of service following the COVID-19 pandemic. The quality of bus operators' work is significantly impacted by inflexible crew scheduling constraints. However, most studies focus on pay, benefits, and infrastructure, neglecting the importance of scheduling. This thesis aims to fill this gap by examining the potential for crew scheduling improvements to enhance the quality of life for bus operators through a three-part case study at the Chicago Transit Authority. Part 1 analyzes the historical work preferences of CTA bus operators, providing actionable insights for scheduling improvements. Part 2 presents a high-fidelity proof of concept in HASTUS, using block schedules (10-hour-a-day runs that are intended to be run by an operator 4 days a week) and rostering to reduce negative work traits and increase consecutive and weekend days off for most operators, while maintaining schedules for the top 20% of senior operators. Part 3 evaluates the new 10-hour, 4-day-per-week packaged schedules via an LLM-based paired alternatives survey of operators at one CTA garage, measuring the desirability of the proof of concept and collecting qualitative feedback. Overall, the new schedules substantially improve the quality of work for operators by guaranteeing at least one weekend day off, at least two consecutive days off, and increasing day-to-day schedule consistency and overnight rest time, while maintaining constant vehicle requirements and total pay hours.
The survey results show that 72% of operators at the 74th Street garage support the new schedule paradigm, demonstrating strong support for its potential adoption and encouraging future exploration of a block schedule hybrid rostering paradigm at the CTA and other transit agencies.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Charcuterie Platter of QCD Matter</title>
<link href="https://hdl.handle.net/1721.1/164505" rel="alternate"/>
<author>
<name>Sun, Zhiquan</name>
</author>
<id>https://hdl.handle.net/1721.1/164505</id>
<updated>2026-01-13T03:37:36Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Charcuterie Platter of QCD Matter
Sun, Zhiquan
One of the greatest current challenges in theoretical high energy physics is to understand the dynamics of Quantum Chromodynamics (QCD). In this thesis, I address a variety of questions in QCD using Effective Field Theory (EFT). The first question deals directly with the observed phenomenology of QCD: How can we use EFT to disentangle the complicated three-dimensional dynamics of how quarks and gluons, the fundamental degrees of freedom of QCD, combine to form the observed bound states in nature called hadrons? I initiate a new formalism using Heavy Quark Effective Theory to study this dynamical process known as hadronization. I shed new light on the transverse momentum-dependent fragmentation process of heavy (charm and bottom) quarks by making use of the fact that heavy quarks with masses much larger than the strong interaction scale decouple from the rest of the hadronization cascade. I also present exciting heavy quark phenomenology at existing colliders and the upcoming Electron-Ion Collider. The second question investigates the field theory structure of QCD: What can we learn about the nonperturbative structure of the quantum field theory through the abstruse emergent phenomenon in QCD called “confinement”, which traps quarks and gluons inside hadrons? I study a class of cleverly constructed observables known as energy correlators by using field-theory-based methods to determine the leading nonperturbative contribution, and examine the universality of the nonperturbative matrix element that gives rise to this contribution. I also show that including the nonperturbative contribution has a significant impact on the extraction of the strong coupling constant, a fundamental parameter of the Standard Model, using tools such as factorization and resummation from EFT.
Last but not least, the final question explores the underlying symmetry properties of QCD and its potential completions: How robust is the axion solution to the strong CP (Charge-Parity) problem, and what are some of its implications beyond the realm of QCD? I examine the axion quality problem in post-inflationary QCD axion models with different symmetry properties and identify a new tension with standard cosmology. I further show that the axion string-domain wall dynamics are more complicated than commonly expected, undermining the reliability of a unique mass prediction for axion dark matter in post-inflationary models. I showcase the importance of considering both high-energy extensions and the EFT at low energy, and uncover new complexity of the axion solution to the strong CP problem.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from pre-pandemic data to design and test future-proof therapeutics</title>
<link href="https://hdl.handle.net/1721.1/164504" rel="alternate"/>
<author>
<name>Gurev, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/164504</id>
<updated>2026-01-13T03:37:25Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Learning from pre-pandemic data to design and test future-proof therapeutics
Gurev, Sarah
Effective pandemic preparedness relies on predicting immune-evasive viral mutations to enable early detection of variants of concern and design vaccines and therapeutics that are resilient to future viral evolution. However, current strategies for viral evolution prediction are not available early in a pandemic and have limited predictive power – experimental approaches require host polyclonal antibodies and existing computational methods draw heavily from current strain prevalence. In addition, vaccines and therapeutics have been designed with an eye towards past or circulating variants, not towards future evolution. To address these challenges, we developed EVEscape, a generalizable framework that integrates fitness predictions from a deep generative model of evolutionary sequences with biophysical and structural information. EVEscape quantifies the immune escape potential of viral strains at scale and is applicable before surveillance sequencing, experimental scans, or 3D structures of antibody complexes are available. We demonstrate that EVEscape, trained on sequences available prior to 2020, performs as accurately as high-throughput experimental scans at anticipating pandemic variation for SARS-CoV-2 and is generalizable to other viruses including Influenza A virus, HIV, and understudied viruses with pandemic potential such as Lassa and Nipah. We investigate both alignment-based and protein language models to explore the best model of mutation effects across pandemic-threat viral families. 
We demonstrate the utility of EVEscape in three critical applications: (1) surveillance efforts flagging high-escape SARS-CoV-2 variants from their first appearance; (2) design of panels of viral antigens that mimic future viral variants for early, proactive evaluation of the future protection of vaccines and therapeutics; and (3) design of a pan-sarbecovirus nanoparticle-based vaccine capable of eliciting broad, long-lasting protection against sarbecoviruses, including future variants. This three-pronged approach represents a paradigm shift in pandemic preparedness, offering a novel strategy to preemptively address viral families with pandemic potential and significantly bolster global prevention efforts.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topics in quantum information theory and quantum many-body physics</title>
<link href="https://hdl.handle.net/1721.1/164503" rel="alternate"/>
<author>
<name>Balasubramanian, Shankar</name>
</author>
<id>https://hdl.handle.net/1721.1/164503</id>
<updated>2026-01-13T03:37:33Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Topics in quantum information theory and quantum many-body physics
Balasubramanian, Shankar
In this thesis we present two results relating to the intersection between quantum information theory and quantum many-body physics. The first pertains to quantum algorithms, where few computational problems are believed to exhibit exponential separation between quantum and classical performance. For those that do, natural generalizations remain elusive. One speedup that has especially resisted generalization is the use of quantum walks to traverse the welded tree graph, due to Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman. We show how to generalize this to a large class of hierarchical graphs in which the vertices are grouped into “supervertices” that are arranged according to a d-dimensional lattice. Supervertices can have different sizes, and edges between supervertices correspond to random connections between their constituent vertices. The traversal time of quantum walks on these graphs is related to (a) the existence of small subspaces within which the quantum walk evolves and (b) the localization properties of the quantum walk within these subspaces. We find examples of hierarchical graphs that yield provable speedups over classical algorithms ranging from superpolynomial to exponential, depending on the underlying dimension and the random graph model. We also discuss how to relax criterion (a) to the existence of a small and approximate subspace by using techniques from graph sparsification. The second result pertains to fault-tolerant quantum memories. Storing a qubit in a noisy environment is crucial for developing full-scale quantum computers. While constructions of fault-tolerant quantum memories exist, they often assume that quantum operations are not local and that the assisting classical computation operates instantaneously and noiselessly. In particular, constructing a topological quantum memory below four dimensions with local quantum and classical operations that is fault-tolerant under both quantum and classical noise is an open problem.
We construct a local quantum memory for the 2D toric code using ideas from the classical cellular automata of Tsirelson and Gács. Our memory preserves a logical state for exponential time in the presence of both classical and quantum noise below a constant noise rate. While our 2D quantum memory is built from operations that depend on space and time, we construct a fault-tolerant quantum memory in 3D using stacks of 2D toric codes that can be built with time-independent operations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Matter in the Era of Generalized Symmetries</title>
<link href="https://hdl.handle.net/1721.1/164502" rel="alternate"/>
<author>
<name>Chatterjee, Arkya</name>
</author>
<id>https://hdl.handle.net/1721.1/164502</id>
<updated>2026-01-13T03:37:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Quantum Matter in the Era of Generalized Symmetries
Chatterjee, Arkya
The discovery of generalized symmetries has led to powerful new insights into quantum matter. They have been used to classify new families of quantum phases, place constraints on phases realizable in a given physical system, and conceptually unify seemingly disparate phenomena. In many ways, they prove just as powerful as traditional symmetries at organizing and constraining the theories that describe quantum matter. In this thesis, we attempt a unification of such constraints by developing a holographic correspondence between (generalized) symmetries and topological orders, called the Sym/TO correspondence. For any (finite internal) symmetry of a quantum system in d (spatial) dimensions, we associate with it a unique topological order in d + 1 dimensions, called its Symmetry Topological Order (SymTO). We devise an operator algebraic recipe to compute the SymTO data for any lattice spin model, demonstrating it in a number of examples. We then use the SymTO to classify possible quantum phases allowed by the symmetry—we call this a generalized Landau paradigm. Besides classifying phases, we also identify constraints on the phase transitions between them using a SymTO-resolved modular bootstrap. We test this framework in a quantum spin chain with non-invertible symmetries. We discover a new Kramers-Wannier-like duality and a rich phase diagram including a non-invertible symmetry-enriched incommensurate phase. The translation symmetry of the spin chain has a nontrivial interplay with the lattice Kramers-Wannier duality, which matches the anomaly of the corresponding non-invertible symmetry in the low-energy effective field theory. Finally, we explore such unusual anomaly-matching mechanisms in more detail in the context of the chiral anomaly of a single massless Dirac fermion, demonstrating a novel lattice realization of chiral symmetries and their anomaly.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Materials Design of Ordered Nanocomposite Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164501" rel="alternate"/>
<author>
<name>Thrasher, Carl James</name>
</author>
<id>https://hdl.handle.net/1721.1/164501</id>
<updated>2026-01-13T03:37:31Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Systems Materials Design of Ordered Nanocomposite Assemblies
Thrasher, Carl James
The ability to precisely organize matter across multiple length scales is a central challenge in modern materials science. In this dissertation, I develop a systems materials design approach to engineer hierarchically structured nanocomposite assemblies, integrating molecular recognition, supramolecular chemistry, colloidal assembly, and bulk processing into unified material platforms. At the molecular and nanoscale, I investigate how multivalent supramolecular interactions can be rationally programmed by controlling the architecture of polymer binders grafted to nanoparticle surfaces. Through systematic variations in polymer topology, recognition group density, and scaffold geometry, I demonstrate how polymer design dictates the thermodynamic strength and multivalency of nanoparticle superlattice assembly, enabling precise control of thermal stability,&#13;
crystallographic symmetry, and collective bonding behaviors in massively multivalent systems. Building on these design principles, I develop a colloidal metallurgy framework to process self-assembled nanoparticle superlattices into dense macroscopic polycrystalline solids while preserving nanoscale order. By systematically studying the interplay of temperature, pressure, and time during colloidal sintering, I elucidate mechanisms of densification, defect evolution, and grain growth unique to colloidal systems, establishing processing–structure relationships that parallel but fundamentally diverge from atomic sintering. Finally, I extend these concepts to create stretchable nanocomposite supercrystals, embedding supramolecularly assembled superlattices into elastomeric matrices via co-engineered polymer chemistries that enable hierarchical strain&#13;
transduction. These materials combine the nanoscale precision of supercrystals with mechanical robustness, reconfigurability, and stimuli-responsive optical properties, illustrating a scalable pathway to multifunctional metamaterials. Collectively, this work demonstrates how a systems-level integration of molecular design, colloidal assembly, and bulk processing enables new&#13;
paradigms for the synthesis of hierarchically ordered, functional nanocomposites.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The novel roles of BCL6 and BATF3 in regulating human&#13;
CD8⁺ T cell dysfunction</title>
<link href="https://hdl.handle.net/1721.1/164500" rel="alternate"/>
<author>
<name>Traunbauer, Anna Katharina</name>
</author>
<id>https://hdl.handle.net/1721.1/164500</id>
<updated>2026-01-13T03:36:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The novel roles of BCL6 and BATF3 in regulating human&#13;
CD8⁺ T cell dysfunction
Traunbauer, Anna Katharina
Reduced effector function and elevated inhibitory receptor expression are hallmarks of exhausted CD8⁺ T cells, yet the underlying molecular and epigenetic drivers remain incompletely defined. Here, we developed an in vitro repeated stimulation model to recapitulate features of human CD8⁺ T cell dysfunction and delineate transcriptional and epigenetic landscapes. Our analyses revealed that BCL6 and BATF3 are robustly upregulated in dysfunctional CD8⁺ T cells, with ATAC-seq demonstrating enhanced chromatin accessibility at their gene loci. Transcription factor footprinting showed increased BATF3 motif occupancy in chronically stimulated cells, and integrative multi-omic analysis combining footprints, open chromatin regions, RNA-seq, and ChIP-seq data revealed that putative BATF3 target genes may include master regulators of exhaustion. Moreover, overexpression of BCL6 or BATF3 markedly upregulated TIM-3 expression and suppressed cytokine release, establishing their capacity to induce T cell dysfunction. We further validated these findings ex vivo in antigen-specific CD8⁺ T cells from patients with advanced melanoma, as well as HCV and HIV infections, where cells were enriched for BCL6^high and BATF3^high subsets co-expressing canonical exhaustion markers such as PD-1, TIM-3, and CD39. Notably, single-cell RNA sequencing of HIV-specific CD8⁺ T cells identified a distinct BCL6^high PD1⁻ progenitor population that gives rise to two distinct subsets via divergent differentiation trajectories: one branch generates effector-like BCL6^high PD1⁺ cells, whereas the other produces BCL6^high PD1⁺ cells that retain an exhaustion gene signature alongside partial memory-like features. Collectively, these findings identify BCL6 and BATF3 as key mediators of human CD8⁺ T cell dysfunction and illuminate novel transcriptional and epigenetic pathways that may be leveraged for therapeutic intervention in cancer and chronic viral infections.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aspects of Nonperturbative Heavy Quark Physics</title>
<link href="https://hdl.handle.net/1721.1/164499" rel="alternate"/>
<author>
<name>Lin, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/164499</id>
<updated>2026-01-13T03:37:27Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Aspects of Nonperturbative Heavy Quark Physics
Lin, Joshua
The properties of charm and bottom quarks are an interesting corner of Quantum Chromodynamics (QCD) because their masses are much larger than the typical QCD interaction energy ΛQCD. Due to this scale separation, it is possible to describe these heavy quarks by Effective Field Theories (EFTs) that simplify their equations of motion, make explicit additional symmetries that appear only for heavier quark masses, and simplify the theoretical calculations required for predictions. By discretising these EFTs in a lattice regularisation, nonperturbative calculations of observables of interest become possible. This thesis presents progress towards systematically controlled calculations of two such observables: the Spectator Effect contributions to the inclusive decay rates of b-hadrons, and the real-time dynamics of fermions propagating in a thermal medium. Standard EFT calculations in Lattice-QCD proceed by expressing observables as sums over perturbatively computed Wilson coefficients and nonperturbative matrix elements that can be calculated by path-integral Monte Carlo methods. Though it is possible to carry out this procedure within a regulator-independent renormalization scheme, in practice almost all such decompositions are computed in the modified minimal subtraction scheme MS, which is only defined for the dimensional regulator (DR), due to its simplicity. Computing such observables therefore requires a matching between lattice-regularised operators and operators renormalized in MS. In Chapter 2, both the dimensional regulator (DR) and the lattice regulator are reviewed, with a particular emphasis on techniques needed for calculations carried out in later sections. An interesting subtlety in DR is the need to introduce d-dimensional counterparts of the Dirac γ-matrices, which a priori are only well defined for an integer number of dimensions. This analytic continuation is of practical importance as it introduces additional Evanescent Operators (Sec. 
2.1.4) that have physical consequences. In Sec. 2.1.5, traces of d-dimensional γ-matrices were related to Tutte polynomial evaluations [4], presenting a new graph-theoretic interpretation of the dimensionally regulated γ-matrices. One strategy for renormalizing lattice-regulated operators into MS involves first renormalizing into a regulator-independent scheme, before perturbatively matching between the regulator-independent scheme and MS. In Chapter 3, regulator-independent position-space (X-space) schemes for renormalizing operators defined in leading-order Heavy Quark Effective Theory (HQET) are proposed [3]. Compared to other regulator-independent renormalization schemes such as RI-xMOM, X-space schemes have the benefit that they are gauge invariant. The next-to-leading-order matching calculations between X-space and MS are presented for heavy-light and heavy-light-light multiplicatively renormalizable operators, as well as ∆Q = 0 and ∆Q = 2 four-quark operators relevant for heavy hadron decays and mixing, where Q refers to the static quark number. Due to their heavy masses, hadrons containing heavy quarks decay via the weak nuclear force. Experimental measurements of these lifetimes provide precision determinations of the fundamental parameters of the Standard Model. The Heavy Quark Expansion expresses the inclusive lifetimes of heavy hadrons in terms of matrix elements of HQET operators of increasing dimension. The Spectator Effects are contributions due to the ∆Q = 0 four-quark operators, where the light quark degrees of freedom within a heavy hadron participate in the decay. In Chapter 4, a Lattice-QCD determination of the static decay constant f_B^HQET and the isospin-nonsinglet portion of the Spectator Effect matrix elements for heavy-light mesons is presented. Fits of bare matrix elements were performed for three different lattice spacings, and renormalized with the schemes proposed in Chapter 3 before a continuum limit is taken.
Due to the heavy masses mQ of the heavy quarks, it is possible to find temperatures T approximately satisfying the hierarchy ΛQCD ≪ T ≪ mQ. At these temperatures, QCD undergoes a deconfinement transition into the Quark-Gluon-Plasma (QGP) phase, where the light degrees of freedom are no longer confined and instead screen the long-range colour forces. The heavy quarks, however, are not thermalised, and act as probes of the QGP. Further understanding of the QGP requires first-principles simulations of the heavy quark dynamics at finite temperature; however, such calculations are difficult due to the enormous size of the Hilbert space. Variational approximations of the Hilbert space encode wavefunctions within a few parameters, and provide a practical method to simulate many-particle systems. As a test case, the variational approach was applied for the first time to simulate fermions at finite temperature in a simple QFT: the 1+1d U(1) gauge theory known as the massive Schwinger model. Both the real-time dynamics of string-like states and the properties of the thermal state were studied, and such variational methods are shown to be promising approaches to the more difficult case of a heavy quark effective theory in QCD.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing the Nonperturbative Physics of QCD with&#13;
Normalizing Flows and a moderate number of Pions</title>
<link href="https://hdl.handle.net/1721.1/164498" rel="alternate"/>
<author>
<name>Abbott, Ryan William</name>
</author>
<id>https://hdl.handle.net/1721.1/164498</id>
<updated>2026-01-13T03:36:19Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Probing the Nonperturbative Physics of QCD with&#13;
Normalizing Flows and a moderate number of Pions
Abbott, Ryan William
Quantum Chromodynamics (QCD) is a cornerstone of the standard model of particle physics, and the best known theory of strong nuclear interactions. The only known systematically improvable ab-initio method for accessing the nonperturbative physics of QCD is Lattice QCD, and this thesis presents two advances in our understanding of QCD using lattice-based methods. The first is a calculation using many-pion systems to map out the entire zero-temperature, nonzero-isospin-density region of the QCD phase diagram. The calculation uses novel methods for working with many-pion systems that enable working with thousands of pions, and furthermore provides rigorous constraints on the baryon-dense region of the QCD phase diagram. The second is an application of methods from machine learning (namely normalizing flows) to accelerate sampling. This approach has the promise of eliminating issues such as critical slowing down, as well as introducing novel tools and methods that enable calculations that would not otherwise be possible.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limits of QCD</title>
<link href="https://hdl.handle.net/1721.1/164497" rel="alternate"/>
<author>
<name>Gao, Anjie</name>
</author>
<id>https://hdl.handle.net/1721.1/164497</id>
<updated>2026-01-13T03:37:21Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Limits of QCD
Gao, Anjie
This thesis explores the fundamental kinematic limits of Quantum Chromodynamics (QCD), including the soft, collinear, and Regge limits, using soft-collinear effective theory (SCET). We begin by studying transverse momentum dependent (TMD) physics in semi-inclusive deep inelastic scattering (SIDIS), which probes the small transverse momentum regime arising from the soft and collinear limits of QCD. We derive all-order factorization theorems for azimuthal asymmetries in SIDIS at next-to-leading power (NLP). We also propose a new angular observable, q_∗, for probing TMD dynamics at the future Electron-Ion Collider (EIC), which enables an order-of-magnitude improvement in experimental resolution while retaining sensitivity to TMD distributions. Next, we apply the TMD formalism to a class of observables known as energy correlators. We study the transverse energy-energy correlator (TEEC) in the back-to-back limit, a dijet observable at hadron colliders, and the three-point energy correlator (EEEC) in the coplanar limit, a trijet observable at lepton colliders. For both observables, we derive all-order factorization theorems and resum large logarithms to next-to-next-to-next-to-leading logarithmic (N3LL) accuracy. Finally, we analyze the Regge limit of 2 → 2 QCD amplitudes. By factorizing these amplitudes into collinear jet and soft functions and studying their rapidity evolution, we define Regge-like anomalous dimensions in a gauge-invariant manner. At the level of the exchange of two Glauber gluons in the t-channel, we recover the BFKL equation from a purely collinear perspective. Extending to three-Glauber exchange, we derive the first closed-form renormalization group equations for Regge cut contributions in several nontrivial t-channel color representations, providing a systematic method for organizing non-planar QCD amplitudes at high energy.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determining the Molecular Underpinnings of Iron Homeostasis in Human Cells</title>
<link href="https://hdl.handle.net/1721.1/164496" rel="alternate"/>
<author>
<name>Lee, April</name>
</author>
<id>https://hdl.handle.net/1721.1/164496</id>
<updated>2026-01-13T03:35:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Determining the Molecular Underpinnings of Iron Homeostasis in Human Cells
Lee, April
Precise regulation of nutrient availability is crucial for cellular function and survival. Iron, in particular, is tightly regulated as it serves as an essential cofactor for numerous enzymes but can catalyze the formation of toxic radicals at elevated levels. To maintain the necessary cytoplasmic iron concentration, cells store excess iron in large proteinaceous cages called ferritin and, when available iron levels fall, they degrade these cages, liberating the stored iron for use. This thesis focuses on the molecular mechanisms underlying cellular iron sensing, as well as the molecular interactions supporting regulated ferritin degradation and subsequent iron release. Specifically, this work interrogates the protein interactions involved in ferritinophagy, a form of selective autophagy that leads to the lysosomal degradation of ferritin. Extending prior work that identified key components supporting ferritinophagy, including the selective autophagy receptor protein NCOA4 and its cognate autophagosomal receptor GATE16, experiments described here uncover the molecular contacts between these proteins. I found that NCOA4 bears two short linear motifs that each bind to GATE16 with weak affinity. However, these binding motifs are highly avid and, in concert, support high-affinity binding of NCOA4 to oligomerized GATE16. I further describe that ferritin degradation in cultured human cells relies on the contacts I identified biochemically. Moreover, I found that iron decreases NCOA4’s affinity for GATE16, providing a plausible mechanism for iron-dependent regulation of ferritinophagy. Taken together, this work suggests a general mechanism by which selective autophagy receptors can distinguish between inactive monomeric GATE16 and the active oligomerized forms that primarily drive autophagy. 
In related studies, I have biochemically probed the NCOA4•ferritin interface, with these experiments suggesting a novel function of NCOA4 in modulating ferritin cage structure – either through cage dismantling or through the formation of higher order structures. Taken together, these studies further define the molecular mechanisms by which NCOA4 aids cells in maintaining iron homeostasis, and they provide the requisite reagents for future work aimed at building a unified model for how mammalian cells regulate this vital but toxic metal.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sampling Methods for Fast and Versatile GNN Training</title>
<link href="https://hdl.handle.net/1721.1/164495" rel="alternate"/>
<author>
<name>Alkhatib, Obada</name>
</author>
<id>https://hdl.handle.net/1721.1/164495</id>
<updated>2026-01-13T04:08:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Sampling Methods for Fast and Versatile GNN Training
Alkhatib, Obada
Graph neural networks (GNNs) have become a commonly used class of machine learning models that achieve state-of-the-art performance in various applications. A prevalent and effective approach for applying GNNs on large datasets involves mini-batch training with sampled neighborhoods. Numerous sampling algorithms have emerged, some tailored for specific GNN applications. In this thesis, I explore ways to improve the efficiency and expressivity of existing and emerging sampling schemes. &#13;
&#13;
First, I explore system solutions to facilitate the development of fast implementations of different sampling methods. I introduce FlexSample, a system for efficiently incorporating custom sampling algorithms into GNN training. FlexSample leverages the types of performance optimizations found in SALIENT, a state-of-the-art system for fast training of GNNs with node-wise sampling. In experiments with four GNN models that use layer-wise and subgraph sampling, FlexSample achieves up to 1.3× speed-up for end-to-end training over PyTorch Geometric with the same sampling code. Furthermore, FlexSample extends SALIENT with highly optimized C++ implementations of FastGCN and LADIES layer-wise sampling, which achieve 2×–5× speed-up over their respective Python implementations.&#13;
&#13;
Second, I introduce a novel framework for learning neighbor sampling distributions as part of GNN training. Key components of this framework, which I name PertinenceSample, are: (i) a differentiable approximation of node-wise sampling for GNNs; and (ii) a parametrization of node sampling distributions as node- or edge-wise weights of attention-like GNN layers. I present an initial exploration of the potential of PertinenceSample for improving node classification accuracy in the presence of noisy edges. Specifically, in two synthetic experiments where roughly half of a node’s neighbors may have similar features but different labels, I demonstrate that extending a GraphSAGE model with a 2-layer perceptron for learning the PertinenceSample weights can improve classification accuracy from 50%–75% to (nearly) 100%.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Electrocatalysts for the Production and Oxidation of Liquid Fuels</title>
<link href="https://hdl.handle.net/1721.1/164494" rel="alternate"/>
<author>
<name>Zheng, Daniel J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164494</id>
<updated>2026-01-13T03:35:35Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Designing Electrocatalysts for the Production and Oxidation of Liquid Fuels
Zheng, Daniel J.
With the ever-rising CO₂ levels in the atmosphere, it is paramount to cease reliance on fossil fuels to meet global energy demands. While the cost of electricity from renewable sources, such as solar and wind, continues to decrease and has even fallen below that of fossil fuels since 2014, these renewable energy sources suffer from intermittency, potentially causing shortages at peak demand. Thus, methods to store or economically use excess renewable energy are needed for full decarbonization. One promising avenue is to store the excess generated electrical energy in chemical bonds, creating molecules and materials with industrial or energy storage utility. In this proposed scheme, the renewable electricity would be used to electrochemically convert earth-abundant molecules into value-added chemicals or fuels. These generated products could then be utilized as feedstocks in industrial applications or as a fuel source to generate electricity when needed by transforming back into their earth-abundant forms.&#13;
&#13;
Central to transforming earth-abundant molecules into value-added chemicals or fuels is the oxygen evolution reaction (OER), which is found in nearly every process. The plentiful nature of OER’s main reactant, water, and moderate thermodynamic potential of 1.23 V vs. the reversible hydrogen electrode, make OER an ideal reaction to pair with other transformations. However, the slow kinetics of OER significantly hinder the efficiency of these processes. As such, discovering new OER catalysts with high activity and stability would have widespread impacts. On the other hand, one of the most promising renewable fuel sources is methanol, which boasts about 3 times the energy density of hydrogen and can be used as an alternative to hydrogen in proton exchange membrane fuel cells. However, the sluggish kinetics of the methanol oxidation reaction (MOR), even with current state-of-the-art noble metal catalysts, cause direct methanol fuel cells to reach an efficiency of &lt;40%, limiting their practical usage. While significant research has been invested in discovering new MOR electrocatalysts, PtRu has reigned for five decades, highlighting the need for a true breakthrough. &#13;
&#13;
In this thesis, electrocatalysts for OER and MOR are examined in depth. For OER, metal-hydroxide organic frameworks (MHOFs), a promising new class of hybrid organic-inorganic materials with potential to mimic the superior functionality of enzymes, are studied. Operando vibrational and absorption spectroscopy methods are used to characterize the degradation mechanisms and lattice oxygen exchange capacity as a function of the linkers. Using such knowledge, defects are engineered into the MHOF that increase both the activity and stability compared to the pristine material. Furthermore, the traditionally reported MOR mechanism is studied using isotope-labeled reactants and operando mass spectrometry. These experiments revealed that, in contradiction to typically accepted mechanisms, the C-O bond in methanol can be cleaved during MOR, with the resulting CO₂ molecule containing two water-derived oxygen atoms, opening a new paradigm for MOR catalyst design. Driven by the need to discover new materials at scale, a fluorescence-based OER catalyst screening method is developed that can screen an entire composition space simultaneously. In addition, an AI-driven, automated platform for screening a high-dimensional multimetallic space for MOR is presented.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gap: From Artificial Intelligence and Optimization Theory to Action</title>
<link href="https://hdl.handle.net/1721.1/164493" rel="alternate"/>
<author>
<name>Petridis, Periklis S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164493</id>
<updated>2026-01-13T03:37:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Bridging the Gap: From Artificial Intelligence and Optimization Theory to Action
Petridis, Periklis S.
Despite significant theoretical advances in Operations Research (OR) and Artificial Intelligence (AI), a persistent gap remains between these developments and their practical implementation in real-world settings. Many OR and ML approaches struggle to scale to realistic problem sizes, lack robustness to uncertainty, or fail to address implementation constraints faced by practitioners in industry. Through four distinct works conducted in collaboration with industry partners, this research demonstrates how methodological advancements can bridge this theory-practice divide while maintaining rigorous theoretical foundations and guarantees. In the first part, we focus on optimization methodologies that scale traditional OR approaches to handle real-world problem sizes and uncertainty. In Chapter 2, we develop a stochastic Benders decomposition scheme for large-scale network design problems, a class of problems ubiquitous in logistics, transportation, and energy sectors. By incorporating sampling techniques within the decomposition framework, we achieve deterministic optimality guarantees while reducing computational costs, enabling solutions for networks with 700 nodes—an order of magnitude larger than previously tractable instances—while achieving optimality gaps of 5-7% compared to 16-27% for traditional deterministic approaches. In Chapter 3, we present a holistic framework for industrial decarbonization, developed with a major phosphate producer planning to quadruple energy consumption while transitioning to renewable sources. Our robust optimization approach combines strategic capacity expansion planning over a 25-year horizon with adaptive operational models, providing 95% reliability guarantees while balancing solar and wind integration with battery storage to meet a projected 12 TWh annual demand. 
In the second part, we shift our focus to developing AI systems that address the unique challenges of medical data abstraction and clinical decision support. In Chapter 4, we address the challenge of automating clinical data abstraction from electronic health records, collaborating with the Society of Thoracic Surgeons to populate their Adult Cardiac Surgery Database. Our AI pipeline combines 31 models per target variable with a two-tiered quality control system, achieving over 99% accuracy while automatically extracting 43-50% of registry variables, demonstrating how AI can dramatically reduce manual abstraction burden while maintaining clinical standards. In Chapter 5, we extend this healthcare AI focus by developing xHAIM (Explainable Holistic AI in Medicine), which addresses the limitations of current clinical AI systems in handling extensive patient records, providing interpretability, and incorporating medical knowledge. Through semantic similarity techniques and generative AI, xHAIM improves predictive performance while generating clinically grounded explanations that enhance trust and adoption by healthcare practitioners.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-species genome-wide CRISPR screens identify conserved suppressors of cold-induced cell death</title>
<link href="https://hdl.handle.net/1721.1/164492" rel="alternate"/>
<author>
<name>Lam, Breanna</name>
</author>
<id>https://hdl.handle.net/1721.1/164492</id>
<updated>2026-01-13T03:36:20Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Multi-species genome-wide CRISPR screens identify conserved suppressors of cold-induced cell death
Lam, Breanna
During hibernation of Syrian hamsters, the core body temperature shows a remarkable decrease, going from 37°C to 4°C. Although this ability to survive at low temperatures could in principle be due to systemic factors that occur during hibernation, we and others have seen that cells from hibernating rodents cultured in vitro maintain this ability. Although others have studied characteristics of cells from hibernating and non-hibernating organisms, the genes and pathways that are involved in cold-induced cell death have not been systematically explored. &#13;
In this thesis, we conduct two genome-wide CRISPR-Cas9 screens in both a cold-sensitive (K562) and cold-resistant (BHK-21) cell line, and uncover GPX4 and related selenocysteine incorporation genes as important for protection against cold-induced cell death. Using genetic knockdowns, along with overexpression of GPX4, we confirm our findings and demonstrate that levels of GPX4 may be limiting in K562 cells, contributing to their cold sensitivity. Additionally, pharmacological validation using inhibitors of GPX4 reveals that the catalytic activity of GPX4 is dependent on the selenocysteine in the active site. Our findings are extended across multiple cell lines and cell types from six species. Taken together, our results suggest that GPX4 may be a powerful and conserved suppressor of cold-induced cell death. &#13;
Building on our initial findings, we go on to show that cold exposure leads to increases in membrane permeability. This membrane permeability is transient, as rewarming of the cells reduces permeability to baseline levels. We also test the role of lipid peroxidation in contributing to membrane permeability and find that although it contributes in some cell lines, it is not the sole contributor as ferroptosis inhibitors do not fully mitigate membrane permeability. We go on to test different membrane channels and do not see decreases in membrane permeability, potentially indicating pathway-independent effects of temperature on membrane permeability. Altogether, this work provides a foundation for understanding how cold exposure influences mammalian cells.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Why Landfills Endure: Quantifying economic barriers to material and energy recovery from municipal solid waste in the United States</title>
<link href="https://hdl.handle.net/1721.1/164491" rel="alternate"/>
<author>
<name>Baidoo, Jacqueline E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164491</id>
<updated>2026-01-13T03:37:18Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Why Landfills Endure: Quantifying economic barriers to material and energy recovery from municipal solid waste in the United States
Baidoo, Jacqueline E.
Municipal solid waste (MSW) is a heterogeneous mixture of materials discarded by residential and nonresidential generators at end-of-life processing facilities for treatment and disposal. Conventional treatment methods reduce waste volumes through recycling via material recovery facilities, energy recovery via municipal solid waste incinerators, and biochemical conversion via composting. Even so, nearly 50% of total MSW generated in the United States was sent to landfills for final disposal in 2018 and almost half of all landfills currently in operation are expected to reach capacity by 2050. Waste planners seek to use developing resource recovery technologies like dry anaerobic digestion, gasification, and pyrolysis to narrow the gaps in end-of-life processing. Such technologies are posited to improve materials circularity and advance zero-waste landfill diversion goals by transforming residuals into electricity, fuels, and precursors to chemicals and fertilizers. However, despite demonstrated improvements to technical inefficiencies in waste valorization, numerous projects built on these technologies have failed to break through to commercial success. We investigate the contribution of regional and economic factors to the success of resource recovery projects through the lens of why landfills remain the predominant method of waste disposal. We build cost models of conventional and select developing treatment methods and use discounted cash flow analysis to estimate financial feasibility by local MSW compositions as reported in regional waste characterization studies.&#13;
&#13;
Findings indicate that the most critical factor to sustainable operation is consistent supply of waste materials at the quality and scale that maximize production efficiency, which is not achievable without rigorous data monitoring of MSW composition. Conversely, dependence on waste volume rather than composition makes land disposal a uniquely flexible pathway capable of subsidizing the costs of resource recovery. Progress towards landfill diversion is economically linked to the opportunity cost of avoiding landfill utilization. Unless municipalities are able to introduce subsidies elsewhere in the waste management ecosystem through gate fees and credits, projects will fail where marginal net costs of diversion exceed the revenues lost from avoided landfilling. Targeted processing of organic wastes can facilitate an average diversion of 24% for the compositions surveyed and was found to be viable for composting and dry anaerobic digestion projects at low to negligible financial losses compared to landfilling.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enlightening Artificial Intelligence with Science</title>
<link href="https://hdl.handle.net/1721.1/164490" rel="alternate"/>
<author>
<name>Liu, Ziming</name>
</author>
<id>https://hdl.handle.net/1721.1/164490</id>
<updated>2026-01-13T03:37:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Enlightening Artificial Intelligence with Science
Liu, Ziming
Today’s artificial intelligence (AI) systems, while remarkably capable, are largely black boxes. This black-box nature raises concerns for those who build AI – “How can we construct and understand AI in scientifically grounded ways?” – and for those who use AI – “How can we trust systems we do not understand?”. This thesis takes a humble step towards addressing the black-box problem. Building white boxes with science (Science for AI): The prevailing paradigm in AI today – “scaling is all you need” – focuses on scaling up existing models. However, this approach often yields systems that are neither interpretable nor efficient. I argue that scientific principles offer fresh perspectives for designing more transparent and effective AI systems. This is demonstrated through Kolmogorov-Arnold Networks (KANs) inspired by mathematics, Poisson Flow Generative Models (PFGM) rooted in physical intuition, and brain-inspired modular training (BIMT) drawing insights from neuroscience. Opening black boxes (Science of AI): Modern AI models exhibit a range of puzzling behaviors – such as grokking, neural scaling laws, and emergent representation learning – whose underlying mechanisms remain poorly understood. I employed simplified “spherical cow” models to investigate these phenomena from the perspective of phase transitions. I show that grokking is a special phase in the hyperparameter space, which can be controlled and eliminated. The learned algorithms after grokking also display distinct phases, called clock or pizza algorithms. AI for Science: With greater interpretability, AI systems can begin to function as “AI Scientists” capable of (re)discovering deep scientific structures from data. These include conservation laws, hidden symmetries, integrable systems, Lagrangian and Hamiltonian formulations, modular structures, and high-precision solutions. I believe my research contributes to the emerging interdisciplinary field that unites AI and Science. 
Building upon the foundation laid in this thesis, I envision a future in which science guides AI out of its current era of alchemy and into a true era of scientific understanding.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative modeling of 5' splice site subclass regulation and evolution</title>
<link href="https://hdl.handle.net/1721.1/164489" rel="alternate"/>
<author>
<name>Kenny, Connor Jens</name>
</author>
<id>https://hdl.handle.net/1721.1/164489</id>
<updated>2026-01-13T03:36:17Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Quantitative modeling of 5' splice site subclass regulation and evolution
Kenny, Connor Jens
Pre-mRNA splicing is an essential molecular process required for eukaryotic gene expression. In this thesis, I present a previously unknown mechanism of splicing regulation in which a family of splicing factors, the LUC7 family, compete to differentially impact 5' splice site (5' SS) selection in a sequence-dependent manner. I quantitatively characterize two major subclasses of 5' SS in eukaryotes and outline distinctive features of 5' SS in exons affected by the three human LUC7 paralogs: LUC7L2 and LUC7L enhance splicing of “right-handed” 5' SS that exhibit stronger consensus matching on the intron side of the nearly invariant /GU, while LUC7L3 boosts splicing of “left-handed” 5' SS with stronger consensus matching upstream of the /GU. Using a range of experimental systems, from human cells to mutant plants, I show that LUC7 paralogs have opposing effects on these two 5' SS subclasses and that this regulatory mechanism likely originated in the last common ancestor of animals and plants over 1.5 billion years ago. I further evaluate a competing model of 5' SS subclass regulation involving METTL16-mediated U6 snRNA modification and reconcile both models by devising computational tools that identify sequence features predictive of splicing dysregulation in transcriptome-wide datasets. Finally, I examine the evolutionary dynamics of left- and right-handed 5' SS and propose a model of intron evolution in which codon and intron phase constraints in protein-coding genes shape both minor-to-major intron conversion and transitions between left- and right-handed 5' SS subclasses.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Compiler-Hardware Co-Design for Pervasive Parallelization</title>
<link href="https://hdl.handle.net/1721.1/164488" rel="alternate"/>
<author>
<name>Ying, Victor A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164488</id>
<updated>2026-01-13T03:37:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Compiler-Hardware Co-Design for Pervasive Parallelization
Ying, Victor A.
Modern computer systems have hundreds of processor cores, so highly parallel programs are critical to achieve high performance. But parallel programming remains difficult on current systems, so many programs are still sequential. This dissertation presents new compilers and hardware architectures that can parallelize complex programs while retaining the simplicity of sequential code. Our new systems allow real-world programs to use hundreds of cores without burdening programmers with concurrency, deadlock, or data races. &#13;
 &#13;
This dissertation follows a novel approach that eliminates the burden of explicit parallel programming to make parallel execution pervasive. This approach relies on four guiding principles. First, exploiting implicit parallelism preserves the simplicity of sequential execution. Second, dividing computation into tiny tasks, as short as tens of instructions each, unlocks plentiful fine-grained parallelism in challenging programs. Hardware-compiler co-design techniques can create many tasks in parallel and reduce per-task overheads to make tiny tasks scale to many cores. Third, new hardware and software mechanisms can compose parallelism across entire programs, removing serializing barriers to overlap executions of nested parallel subroutines. Finally, exploiting static and dynamic information for data locality reduces data movement costs while maintaining load balance on large multicore systems. &#13;
 &#13;
This dissertation presents three systems that embody these four principles. First, T4 introduces automatic program transformations that exploit a novel hardware architecture to parallelize sequential programs. As a result, T4 scales hard-to-parallelize real-world programs to tens of cores, resulting in order-of-magnitude speedups. Second, S5 builds on T4 with novel transformations to remove needless serialization in a broad class of challenging data structures. Thus, S5 scales complex real-world programs to hundreds of cores, delivers additional order-of-magnitude speedups over T4, and outperforms manually parallelized code tuned by experts. Finally, ASH is an accelerator that demonstrates the same approach can be applied with simpler mechanisms tailored for digital circuit simulation. A small ASH implementation is 32x faster than a large multicore CPU running a state-of-the-art parallel simulator.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting</title>
<link href="https://hdl.handle.net/1721.1/164487" rel="alternate"/>
<author>
<name>Murzynowski, Philip</name>
</author>
<id>https://hdl.handle.net/1721.1/164487</id>
<updated>2026-01-13T04:08:27Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting
Murzynowski, Philip
Graph neural networks (GNNs) are an important class of methods for leveraging the information present in graph structures to perform various learning tasks. Distributed GNNs can improve the performance of GNN execution by dividing computation among multiple machines and scale to large graphs by partitioning graph features and the graph structure. Although distributed GNNs are able to achieve self-relative speedup, they are often slower than well-optimized code running on a single machine. For example, evaluation of the prevalent Distributed DGL system on graphs in the Open Graph Benchmark shows that Distributed DGL can achieve a speedup of over 2× when moving from one to four nodes, but execution of Distributed DGL on 4 nodes is 2× slower than a well-optimized GNN system, such as the SALIENT system, on a single machine.&#13;
&#13;
In my thesis, I argue that it is possible for a distributed GNN system to be both fast and scalable. Specifically, I show that it is possible to match the performance of well-optimized, non-distributed codes for GNN training and also achieve good scalability when running in the distributed setting. I present a system called Distributed SALIENT and motivate its design through profiling and identifying bottlenecks that arise in the distributed setting. Key components of Distributed SALIENT include the use of well-optimized code for local computations, pipelining of inter-machine communication, and a careful trade-off between data partitioning and partial replication.&#13;
&#13;
I evaluate Distributed SALIENT on the Open Graph Benchmark (OGB) and show that Distributed SALIENT achieves good speedup compared to SALIENT’s well-optimized single-node code while only using replication factors of roughly 5%. In fact, in experiments with training a 3-layer GraphSAGE model on the large OGB papers100M data set, Distributed SALIENT on 8 nodes is 8.6x faster than SALIENT on 1 node.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atomistic Insights into Alloy Solidification using&#13;
Machine-Learning Potentials</title>
<link href="https://hdl.handle.net/1721.1/164486" rel="alternate"/>
<author>
<name>Cao, Yifan</name>
</author>
<id>https://hdl.handle.net/1721.1/164486</id>
<updated>2026-01-13T03:35:39Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Atomistic Insights into Alloy Solidification using&#13;
Machine-Learning Potentials
Cao, Yifan
Alloy solidification is a critical process in materials design and manufacturing, as it governs the formation of microstructures that determine the mechanical, thermal, and chemical properties of materials. However, direct in situ observation remains extremely challenging due to the need for high spatial and temporal resolution at elevated temperatures. On the theory side, solidification is a complex phenomenon often studied using phase-field simulations, which rely on empirically fitted parameters and simplified assumptions about interfacial kinetics, limiting their predictive capability. Capturing this process at the atomistic level can yield more fundamental insights, but is hindered by the need for interatomic models that are both accurate and computationally efficient across relevant timescales and length scales. To overcome these challenges, this thesis develops and applies machine-learning interatomic potentials (MLPs) that capture the chemical complexity of metallic alloys, providing a physically accurate and computationally efficient backbone for large-scale atomistic simulations of complex alloy solidification. We first address a foundational challenge in deploying MLPs: the systematic construction of robust and transferable training datasets. Using CrCoNi as a model system, we evaluate various strategies for training MLPs to capture chemical short-range order (SRO), a critical feature in high-entropy alloys, and its effects on material properties of relevance for mechanical behavior, such as stacking-fault energy and phase stability. We demonstrate that energy accuracy on test sets often does not correlate with accuracy in capturing material properties, an insight that is fundamental to enabling large-scale atomistic simulations of metallic alloys with high physical fidelity. Based on this analysis, we systematically derive design principles for the rational construction of MLPs that capture SRO in the crystal and liquid phases of alloys. 
The resulting MLPs are validated against experimental measurements of key thermophysical properties, including melting points, heat capacities, thermal expansion coefficients, and the enthalpy of SRO formation, confirming their suitability for predictive simulations. With these validated potentials, we then investigate the evolution of SRO during rapid solidification processes. Our simulations reveal that alloy processing can lead to nonequilibrium steady states of SRO that differ qualitatively from any equilibrium configuration. We attribute this behavior to an inherent ordering bias introduced by nonequilibrium dynamics during solidification. These findings suggest that conventional manufacturing processes offer new opportunities to tailor alloy properties by accessing a broader spectrum of nonequilibrium SRO states, expanding the alloy design space beyond the equilibrium spectrum. Finally, we conduct predictive solidification simulations of chemically complex alloys across experimentally relevant growth rates (0.15–2 m/s), alloy compositions, interface orientations, and undercooling levels. These simulations capture the dynamic buildup of solute partitioning at the solid-liquid interface and reveal kinetics-dependent segregation patterns that deviate markedly from equilibrium predictions. The developed framework enables direct evaluation of key kinetic properties under realistic growth conditions, including interface mobility, liquid diffusivity, and solute trapping. Altogether, this thesis develops machine-learning potentials capable of capturing the chemical complexity of metallic alloys with near-DFT-level accuracy, and establishes a framework for extracting key kinetic properties through predictive simulations of alloy solidification. 
When combined with emerging advances in continuum-scale modeling, these results lay the groundwork for truly multiscale investigations of alloy solidification, enabling DFT-level predictive capabilities at scales directly comparable to experimental alloy design and additive manufacturing processes.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sequential Resource Allocation and Applications in Revenue Management</title>
<link href="https://hdl.handle.net/1721.1/164485" rel="alternate"/>
<author>
<name>Zhou, Zijie</name>
</author>
<id>https://hdl.handle.net/1721.1/164485</id>
<updated>2026-01-13T03:36:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Sequential Resource Allocation and Applications in Revenue Management
Zhou, Zijie
Sequential resource allocation is a fundamental problem in operations research, encompassing a wide range of applications where decisions must be made dynamically under uncertainty. This thesis develops new theoretical foundations, explores practical applications, and establishes evaluation methodologies for sequential resource allocation, with a focus on revenue management, robustness and fairness, and experiment design. On the theoretical side, this thesis advances the study of classical network revenue management, a long-standing challenge in dynamic resource allocation. We introduce the first LP-free algorithm, improving the regret bound from O(T^(1/2)) to O(T^(3/8)), a significant step toward closing the gap between existing algorithms and the theoretical lower bound of O(1). Additionally, we enhance robustness in sequential resource allocation by developing algorithms that incorporate machine-learned advice, striking a balance between overly conservative worst-case models and overly optimistic stochastic assumptions. Furthermore, we integrate individual fairness into sequential decision-making, ensuring equitable resource allocation without compromising competitive performance. On the application side, we demonstrate the impact of sequential resource allocation in the hospitality management domain. In collaboration with Oracle Lab, we design an online upgrading mechanism that enables hotels to dynamically determine when, and at what price, to offer room upgrades. Additionally, we propose near-optimal, fast approximation algorithms for this mechanism, achieving a regret bound of O(log T), which is close to the natural lower bound of O(1). We also apply our upgrading algorithm to a hotel dataset, improving revenue by more than 20% in 2022. Finally, we introduce new methodologies for evaluating sequential decision-making policies, with a focus on online experiment design. 
Traditional A/B testing methods struggle with dynamically arriving data, leading to biased or inefficient experimental results. Our pigeonhole experimental design effectively reduces bias and outperforms several well-known experimental design policies, including matched pair design and completely randomized design, making it a more reliable approach for evaluating sequential decision-making strategies. By unifying theoretical insights, real-world applications, and online evaluation frameworks, this thesis contributes to the broader field of sequential resource allocation, providing fundamental advancements with practical implications across revenue management and experimental design.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aspects of Moiré Quantum Matter</title>
<link href="https://hdl.handle.net/1721.1/164484" rel="alternate"/>
<author>
<name>Paul, Nisarga</name>
</author>
<id>https://hdl.handle.net/1721.1/164484</id>
<updated>2026-01-13T03:36:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Aspects of Moiré Quantum Matter
Paul, Nisarga
The advent of moiré quantum matter has newly unified disparate themes in modern condensed matter physics, chief among them band theory, correlations, and topology. This thesis investigates how the interplay between these foundational elements leads to novel electronic phenomena uniquely enabled by moiré superlattices. We focus on modulated Landau levels, which are among the simplest settings featuring all three of band dispersion, correlations, and topology, yet are rich enough to capture much of the interesting phenomenology of moiré quantum matter. We characterize emergent quantum phases that are newly unlocked by the moiré regime. Specifically, we discuss directional localization, the formation of Hall crystals with tunable Chern numbers, and novel fractional Chern insulator collective-mode physics in the context of modulated Landau levels. We also show that a class of models comprising itinerant electrons strongly coupled to skyrmion-like magnetic textures, closely connected with moiré transition metal dichalcogenides in which the fractional quantum anomalous Hall effect was observed, can host flat Chern bands, emergent Landau levels, and zero-field non-Abelian topological order. This thesis provides a framework for the study of the essential features of moiré quantum matter and demonstrates how moiré systems provide unprecedented opportunities to explore, design, and manipulate strongly correlated topological quantum matter.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Optimized Bayesian Analysis Framework for the KATRIN Experiment</title>
<link href="https://hdl.handle.net/1721.1/164483" rel="alternate"/>
<author>
<name>Xu, Weiran</name>
</author>
<id>https://hdl.handle.net/1721.1/164483</id>
<updated>2026-01-13T03:35:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">An Optimized Bayesian Analysis Framework for the KATRIN Experiment
Xu, Weiran
Neutrinos, which were originally predicted to be massless within the Standard Model of particle physics, have been confirmed to possess non-zero masses through the discovery of neutrino flavor oscillations. These oscillations precisely measure mass-squared splittings between neutrino mass eigenstates, establishing lower limits for the effective electron-neutrino mass at 0.009 eV for normal mass ordering and 0.050 eV for inverted mass ordering. However, the absolute neutrino mass scale remains a fundamental open question in both particle physics and cosmology.&#13;
&#13;
Precise spectroscopy of the beta-decay spectrum provides a model-independent probe of the absolute neutrino mass via decay kinematics. The KArlsruhe TRItium Neutrino (KATRIN) experiment, utilizing a Magnetic Adiabatic Collimation and Electrostatic (MAC-E) filter spectrometer, sets the world's tightest upper limit of m_v &lt; 0.45 eV (90% C.L.) based on its first five measurement campaigns. KATRIN is scheduled to complete its 1,000-day data-taking period by the end of 2025, targeting a final sensitivity of m_v &lt; 0.3 eV. Future improvements in neutrino mass measurements will depend on advances in differential detection techniques and the development of atomic tritium sources.&#13;
&#13;
This thesis presents an optimized modeling of the KATRIN beta spectrum and a comprehensive analysis of the first five measurement campaigns. An improved framework for computing the theoretical beta spectrum and the KATRIN response function is developed to address the complexities arising from the asymmetric field configurations in the main spectrometer. Benefiting from a computational speedup of four orders of magnitude and improved numerical stability, frequentist best-fit values for individual campaigns are reported, together with an upper limit on neutrino mass using the Lokhov-Tkachov confidence belt construction method.&#13;
&#13;
Parallel Bayesian analyses are conducted on the same dataset, yielding an independent and complementary statistical interpretation of the experimental results. Posterior distributions for the squared neutrino mass are sampled for each campaign under a flat prior on m²ᵥ using the parallel Stretch-Move algorithm, and are subsequently combined with a novel approach developed in this work to enhance computational efficiency. Convergence of each Markov chain is assessed through autocorrelation time analysis, and the robustness of the results is validated through cross-team comparison and consistency checks with profile likelihood. The Bayesian results reported here enable straightforward integration with constraints from oscillation measurements and cosmological observations, and the methodologies developed in this work are directly applicable to the final KATRIN dataset, providing a foundation for future neutrino mass analyses and searches for physics beyond the Standard Model.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redox-Mediated Processes Toward Modular Electrochemical Systems</title>
<link href="https://hdl.handle.net/1721.1/164482" rel="alternate"/>
<author>
<name>Mallia, Christopher T.</name>
</author>
<id>https://hdl.handle.net/1721.1/164482</id>
<updated>2026-01-13T03:36:30Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Redox-Mediated Processes Toward Modular Electrochemical Systems
Mallia, Christopher T.
Electrochemical technologies offer an attractive path toward a sustainable future in which conventional methods of storing energy or producing critical materials are increasingly coupled to renewable electricity generation. To enable such a future, it is imperative that we have a strong foundational understanding of the electrochemical reactions that are useful to our needs. Redox flow batteries (RFBs) have emerged as a promising architecture for large-scale storage of electricity to bridge the gap when renewable generation is unavailable. These devices operate by storing charge in the form of redox-active species that are dissolved into an electrolyte and subsequently passed through an electrochemical cell to either store or release electrical energy. An extension of the RFB concept toward more general applications is to use the dissolved redox-active species to drive a reaction with another material, either to increase the energy storage density through an electrochemically active charge-dense material, or to drive a useful chemical reaction. This extension is termed a redox-mediated (RM) process, and it inherits many of the complexities and intricacies of conventional electrochemical technologies, specifically those of RFB-type devices. The subject of this thesis is the development of knowledge and techniques for studying RM processes toward practical embodiments. While technical implementations of this concept are still nascent, many promising early results have been obtained in devices that use redox-mediated reactions to store electricity. Despite this, progress is frequently hindered by a lack of foundational knowledge from which to ideate better systems, and of techniques to experimentally determine the underlying physics. First, I trace the development of the RM concept in recent years, which has proceeded primarily through proof-of-concept electrochemical reactors that mimic RFBs. 
Second, I establish that the underlying nature of some RM reactions can be quantified and understood through corrosion principles, which guide our intuition for selecting chemistries and operating conditions. Third, I demonstrate that the behavior of many desirable RM chemistries is intrinsically coupled to passivation phenomena, and that this must be accounted for in reaction design. Fourth and finally, I provide experimental and practical guidance for researchers in this field, coupled with the design of some apparatus and techniques useful for characterizing RM reactions in particular and electrochemical processes in general. This body of work is broadly intended to advance understanding of electrochemically active interfaces and enable technology concepts that promote a sustainable future.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Modeling of Chemical Reactivity for Sustainability</title>
<link href="https://hdl.handle.net/1721.1/164481" rel="alternate"/>
<author>
<name>Singhal, Avni Priya</name>
</author>
<id>https://hdl.handle.net/1721.1/164481</id>
<updated>2026-01-13T03:35:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Predictive Modeling of Chemical Reactivity for Sustainability
Singhal, Avni Priya
Predicting and controlling chemical reactivity is key to sustainable material and process design. However, modeling reactivity at scale remains challenging due to the computational demands of quantum chemical methods and the complexity of reaction mechanisms. This thesis explores how high-throughput computational approaches, rooted in quantum chemistry and enabled by automation, can be used to interrogate reactivity across large chemical spaces. We focus on two domains where reactivity governs process efficiency and sustainability: solvent-based carbon capture and polymer, specifically thermoset, manufacturing.&#13;
&#13;
We first investigate pi-conjugated heterocyclic nucleophiles as alternative carbon capture solvents to address the high regeneration energy and degradation rates of conventional amine-based systems. We combine synthetic template-based library enumeration, density functional theory (DFT), and machine learning models to evaluate binding energies, capture capacity, regeneration thermodynamics, and oxidative stability. Structure–property analysis reveals design strategies to enhance capture strength while balancing tradeoffs with desorption temperature and degradation resistance.&#13;
&#13;
We next focus on designing monomers for frontal ring-opening metathesis polymerization (FROMP), a polymerization mode that enables rapid, energy-efficient manufacturing of polymers. This self-propagating process harnesses exothermic reactions to sustain a polymerization front without continuous external heating, but it requires monomers with a finely tuned balance of thermodynamic and kinetic parameters. We develop a multi-level screening pipeline that integrates DFT-calculated properties with a reaction-diffusion model to predict front behavior directly from the atomistic structure of the monomer. We experimentally validate a preliminary pipeline, identifying a new class of FROMP-capable furan-benzyne monomers, and uncover additional candidates from unexplored chemical spaces that overcome limitations of known systems. &#13;
&#13;
Together, these studies demonstrate how high-throughput, mechanism-informed modeling can guide the discovery of molecules and materials that meet complex reactivity and performance criteria.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park</title>
<link href="https://hdl.handle.net/1721.1/164480" rel="alternate"/>
<author>
<name>Zhao, Celina</name>
</author>
<id>https://hdl.handle.net/1721.1/164480</id>
<updated>2026-01-13T04:08:22Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park
Zhao, Celina
In December 2016, China launched the Giant Panda National Park (GPNP). A massive ecological initiative aimed at safeguarding its beloved national symbol and international icon of conservation, the park marked an unequivocal win for giant pandas. But for the 100,000 people already living in and around its borders, the outcome was not as clear. &#13;
The GPNP seeks to establish a harmonious balance between biodiversity protection and human development. But the vast amount of land covered by the park means not all places are equally primed to achieve that goal. A handful of communities have been designated as exclusive entrance communities, with lavish funding to become the face of the national park. In others, a persistent question simmers: Are pandas more important than people? &#13;
Central to this story is how individuals are adapting to and reimagining their futures. Rather than a binary of winners and losers, the GPNP has sparked a wide range of human responses, showing that the path to a sustainable future between people and pandas is far from black and white.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Sequence Landscape of Bacterial Genes is Shaped by Long-Range mRNA Folding</title>
<link href="https://hdl.handle.net/1721.1/164479" rel="alternate"/>
<author>
<name>Gill, Manraj Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/164479</id>
<updated>2026-01-13T03:36:07Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Sequence Landscape of Bacterial Genes is Shaped by Long-Range mRNA Folding
Gill, Manraj Singh
An evolutionary selection for optimal expression of genes in regulatory networks has led to discernable sequence patterns in bacterial genomes observed in nature. Such patterns result from gene regulatory strategies that leverage sequence-dependent interactions with key cellular machineries and regulatory molecules. While numerous regulatory strategies that shape bacterial gene sequence have been characterized, predicting functional consequences from sequence alone remains challenging due to the sheer vastness of the possible sequence space. Moreover, the primary gene sequence encodes information on secondary and tertiary topologies that the molecules of the central dogma can fold into. Specifically, though local messenger RNA (mRNA) structures are known to regulate bacterial gene expression, the role of long-range mRNA folding remains unclear despite the predicted prevalence of such interactions across mRNAs. In bacteria, a major regulator of mRNA decay and translation rates is accessibility of the ribosome binding site (RBS) to the ribosome. Sequences in the mRNA’s 5´ untranslated region (UTR) complementary to the RBS can decrease gene expression by base pairing and occluding ribosomes from binding. To determine whether such antagonistic sequences are also the primary determinants of sequence choice along the rest of the mRNA transcript, we measured the effect of all possible 8-nucleotide substitutions (65,536 variants) on mRNA levels when placed in multiple positions along a bacterial transcript. We find that, while the vast majority of substitutions in the middle of genes negligibly affect RNA level, 8mers with complementarity to parts of the RBS exhibit the strongest effects by increasing RNA degradation rates up to 4-fold. RBS-complementary sequences also decrease translation initiation rates when placed in a coding sequence, and are able to occlude ribosome binding even when they are located hundreds of nucleotides away from the start codon. 
The inhibitory effect of such secondary structures on gene expression likely explains a strong selection against sequences complementary to conserved parts of RBSs throughout coding sequences of genes from diverse bacterial genomes, which we uncover through computational analysis. Together, this thesis reveals the widespread impact of RNA intramolecular interactions in vivo on both mRNA stability and translation and uncovers a key constraint on gene sequences.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Materials for Non-Compressible Torso Hemorrhage and Internal Bleeding</title>
<link href="https://hdl.handle.net/1721.1/164478" rel="alternate"/>
<author>
<name>Hong, Celestine Jia Huey</name>
</author>
<id>https://hdl.handle.net/1721.1/164478</id>
<updated>2026-01-13T03:35:37Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Engineering Materials for Non-Compressible Torso Hemorrhage and Internal Bleeding
Hong, Celestine Jia Huey
Non-compressible torso hemorrhage (NCTH) and internal bleeding result in a significant number of preventable casualties worldwide among civilians and in the field. In particular, internal bleeding can only be diagnosed through changes in vital signs and then through imaging modalities that may only be available in a hospital setting. Over the past few decades, researchers in the field have sought to address these needs by developing hemostats that can rapidly expand, bind, or seal an exposed wound, or interact with wound-specific components when delivered intravenously to enhance preexisting hemostatic processes.&#13;
&#13;
The first part of this thesis investigates the effect of hemostatic nanoparticle size on their interactions with platelets. Small nanoparticles were observed to result in an increased percentage of specifically-bound single platelets under flow and intermediate nanoparticles were observed to result in the greatest degree of platelet recruitment to a platelet-collagen surface. Large nanoparticles were observed to result in the most nanoparticle mass bound to a surface, the shortest circulation time and retention, and the highest pulmonary accumulation. Ultimately, intermediate nanoparticles were shown to result in the most significant increase in survival relative to the saline control in a lethal inferior vena cava (IVC) injury model (84.6% vs 26.7%), as well as the greatest accumulation at the injured IVC relative to uninjured vessel controls. &#13;
&#13;
Subsequently, the intermediate nanoparticles from the prior study were functionalized with bio-orthogonal click-crosslinkable azide groups to achieve targeted crosslinking behavior. Commercial multiarm PEG functionalized with the corresponding clickable moiety, dibenzocyclooctyne (DBCO), and DBCO-PEG-b-PLGA nanoparticles were delivered as the second part of this two-component system. This system was demonstrated to increase platelet recruitment and decrease fibrin loss during plasminolysis in vitro. When challenged in a mouse liver resection model, the two-component system resulted in significantly increased survival relative to the nanoparticle-only system and higher accumulation in the remnant liver. &#13;
&#13;
Finally, a charge-inverting polymer was synthesized through controlled radical polymerization. The material was demonstrated to undergo rapid charge inversion when exposed to physiological pH, resulting in near-complete lift-off of a layer-by-layer drug film into the dermis within a minute when coated on microneedles. This versatile release platform could be coated on wound dressings to facilitate the release of therapeutics to aid in healing, or used in other applications involving charged films. &#13;
&#13;
In sum, this thesis has investigated several new materials and assays for the treatment of traumatic hemorrhage, opening potential avenues for the development of more effective hemostats.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advances in Nonconvex and Robust Optimization</title>
<link href="https://hdl.handle.net/1721.1/164477" rel="alternate"/>
<author>
<name>Koukouvinos, Theodoros</name>
</author>
<id>https://hdl.handle.net/1721.1/164477</id>
<updated>2026-01-13T03:35:24Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Advances in Nonconvex and Robust Optimization
Koukouvinos, Theodoros
Nonconvex optimization presents significant challenges, as identifying the global optimum is often difficult. This thesis introduces novel algorithms to find the exact solution of a broad class of nonconvex optimization problems. The thesis is structured into four parts. In Chapter 2, we propose a novel method for solving nonconvex optimization problems, in which the nonconvex components are sums of linear times convex (SLC) functions. We introduce a new technique, called the Reformulation-Perspectification Technique (RPT), to obtain a convex approximation of the original nonconvex optimization problem. We then incorporate RPT within branch and bound to obtain the global optimal solution of the nonconvex optimization problem. By using the RPT, we obtain a convex relaxation by forming the perspective of each convex function and linearizing all product terms with newly introduced variables. To further tighten the approximation, we pairwise multiply constraints. Therefore, in Chapter 3, we analyze all possibilities of multiplying conic constraints, a very wide class of constraints. Further, we delineate methods for deriving new, valid linear and second-order cone inequalities for pairwise constraint multiplications involving the power cone and exponential cone, thereby enhancing the strength of the approximation. In Chapter 4, we address nonconvex optimization problems that involve polynomials. We derive valid SLC decompositions for polynomials, in which the linear functions are inequalities of the feasible region and the convex functions are quadratics. We prove the existence of such SLC decompositions for arbitrary degree polynomials. Further, out of the many possible SLC decompositions, we obtain the one that results in the tightest lower bound. Finally, in the numerical experiments we show that our method often outperforms state-of-the-art approaches for polynomial optimization. 
In Chapter 5, we propose a robust optimization framework that immunizes some of the central linear algebra problems against data uncertainty. Namely, we formulate linear systems, matrix inversion, eigenvalue-eigenvector computation, and matrix factorization under uncertainty as robust optimization problems using appropriate descriptions of uncertainty. We show that, for both linear systems and matrix inversion, the robust approach leads to more accurate solutions than the nominal one in the case of nearly singular matrices.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabricating and Tailoring Halide Perovskites for Photovoltaic Applications</title>
<link href="https://hdl.handle.net/1721.1/164476" rel="alternate"/>
<author>
<name>Kadosh Zhitomirsky, Tamar</name>
</author>
<id>https://hdl.handle.net/1721.1/164476</id>
<updated>2026-01-13T03:36:12Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Fabricating and Tailoring Halide Perovskites for Photovoltaic Applications
Kadosh Zhitomirsky, Tamar
Green energy is a contemporary global concern, and research on materials for solar energy harvesting is at the heart of potential solutions to the energy crisis. Halide perovskites are leading candidates to replace silicon in next-generation solar cells. This thesis focuses on halide perovskite materials, aiming to understand their structure, electronic and ionic properties, and photo-activity, and to re-direct their fabrication techniques to address global market needs and requirements. In this work we developed alternative, vapor-based fabrication techniques, based on manufacturing-compatible, safe, rapid, and scalable processes, that have the potential to improve material stability and efficiency.&#13;
Vapor Transport Deposition (VTD) is investigated as a promising fabrication method for thin film halide perovskites and beyond. We explored the deposition parameter space and elucidated relationships and trends regarding composition, structure and deposition rate. We examined the morphology, crystal phase formation, optical and electrical properties, and finally the performance of the deposited films when incorporated into solar cells.&#13;
We begin by exemplifying the viability of vapor transport co-deposition in fabricating active perovskite films, utilizing methylammonium lead iodide (MAPbI3) as a simplified model system. We then design an improved version of the vapor transport deposition system and transition to the more technologically attractive perovskite composition formamidinium lead iodide (FAPbI3). Learning from previous attempts to fabricate this material, we developed a novel technique that we call Hybrid two-step vapor-solution deposition in which we use VTD to deposit the inorganic&#13;
precursor, which is not readily dissolved in industry-acceptable solvents, and then react it with a solution of the organic precursors dissolved in a benign solvent. This technique allowed us to fabricate functioning FAPbI3-based solar cell devices in a safe, fast-paced, scalable, and manufacturing-compatible fashion. The deposition rate is significantly influenced by chamber pressure and source temperature, and by controlling all deposition parameters, we systematically reached rates of up to 1200 nm/min, orders of magnitude faster than current comparable techniques. We found the technique to be reproducible, yielding 13% efficient devices, with champion efficiencies of up to 15.3%. We believe this novel fabrication process offers an avenue for further improvement in solar cell stability and efficiency.&#13;
CsPbBr3, a fully inorganic halide perovskite, also shows great promise as a photodetector and gamma-ray detector and, like the other halide perovskites, is known to support halide ion conductivity that contributes to device instability and reduced sensitivity to irradiation. We choose this as a model system to apply concepts from defect chemistry and demonstrate the ability to measure and manipulate the ionic conductivity of the material through stoichiometry control and doping.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Membrane protein conformational dynamics and ligand-binding interactions in bacterial glycoconjugate biosynthesis</title>
<link href="https://hdl.handle.net/1721.1/164475" rel="alternate"/>
<author>
<name>Higinbotham, Hugh</name>
</author>
<id>https://hdl.handle.net/1721.1/164475</id>
<updated>2026-01-13T03:35:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Membrane protein conformational dynamics and ligand-binding interactions in bacterial glycoconjugate biosynthesis
Higinbotham, Hugh
Membrane-associated proteins are an essential component of the complex biochemistry carried out at the membrane interface and perform essential functions for cellular life. Biophysical characterization of protein structure-function relationships faces a unique set of challenges due to the constraints of phospholipid bilayer chemistry and geometry. Advances in X-ray crystallography and cryo-electron microscopy have made progress in this regard, but dynamic structural features remain difficult to study. Small membrane proteins, such as those responsible for bacterial glycosylation, remain challenging to structurally characterize at all. Bacterial glycan synthesis pathways are essential for cell function yet highly variable between strains, making them promising systems for targeted antibiotic development. Many pathways have initiating small monotopic phosphoglycosyl transferases (SmPGTs) that show remarkable specificity for minute changes in glycan chemistry despite being small enough to streamline many computational methods, which makes them ideal model systems for developing multidisciplinary strategies to study membrane protein dynamics. This thesis presents a strategy that employs structural bioinformatics in Chapter 2, molecular dynamics simulation (MD) in Chapter 3, and single-molecule FRET microscopy (smFRET) in Chapter 4 to observe the ligand-dependent conformational dynamics of integral membrane proteins in situ. It focuses on representative members of the SmPGT superfamily, which catalyze transfer of a phosphosugar from a soluble nucleotide-sugar donor to a membrane-embedded polyprenol phosphate acceptor in the initiating step of glycoconjugate biosynthesis in prokaryotes. The pipeline is employed to confirm the role of SmPGT conformational dynamics in substrate binding and informs the design of non-hydrolyzable substrate-mimetic inhibitors.
Chapter 5 further sets the stage for the use of structural bioinformatics and molecular simulation to characterize subsequent glycosyl transferase (GT) enzymes downstream in the pathway and presents initial results characterizing inter-protein cooperative interactions. The integrated approach incorporating computational and experimental characterization methods has significantly contributed to the understanding of SmPGT structure-function relationships and opened up new directions of inquiry into specific PGT-ligand interactions, the development of new inhibitory compounds, and the role of inter-protein interactions in bacterial glycan synthesis.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Functional Genomic and Image-Based Screening Approaches for Probing Host-Pathogen Interactions</title>
<link href="https://hdl.handle.net/1721.1/164474" rel="alternate"/>
<author>
<name>Carlson, Rebecca J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164474</id>
<updated>2026-01-13T03:35:32Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Functional Genomic and Image-Based Screening Approaches for Probing Host-Pathogen Interactions
Carlson, Rebecca J.
Host-pathogen interactions represent a complex interplay that can evolve over millions of years. Interactions between bacteria or viruses and human cells, and the resulting evolved antipathogenic signaling pathways, are processes responsible for pathologies ranging from infectious diseases to autoimmune conditions and cancer. In addition, engineered designs inspired by pathogen interactions with hosts are increasingly being used to both treat and diagnose many pathologies that need not originate from infection with a pathogen. It is therefore critical to build and deploy scalable tools to better understand host-pathogen dynamics, both to better treat conditions where pathogens or antipathogenic signaling contribute directly to disease pathology and to engineer new treatments addressing a broader range of disease states.&#13;
&#13;
In this thesis, I describe approaches to leverage functional genomics and image-based screening to perturb and profile host-pathogen interactions, including responses to two RNA viruses, Sendai virus and Ebola virus. These provide case studies highlighting the utility of high-content image-based screening for revealing new genes regulating predefined phenotypes of interest as well as for generating single-cell imaging profiles that can be used to infer new genetic functions and phenotypic states directly from screening data without a priori specification. I also highlight an example of a genetic screen that revealed a robust negative result, leading to the hypothesis and validation of a novel function of the STING protein as a proton channel.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the B0 → ρ(770)0γ branching fraction</title>
<link href="https://hdl.handle.net/1721.1/164473" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164473</id>
<updated>2026-03-08T03:39:33Z</updated>
<published>2025-12-19T00:00:00Z</published>
<summary type="text">Measurement of the B0 → ρ(770)0γ branching fraction
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The ratio between the branching fractions of the B0 → ρ(770)0γ and B0 → K*(892)0γ decays is measured with proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7, 8, and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. The measured value is B(B0 → ρ(770)0γ)/B(B0 → K*(892)0γ) = 0.0189 ± 0.0007 ± 0.0005, where the first uncertainty is statistical and the second systematic. The branching fraction for B0 → ρ(770)0γ decays is hence obtained as B(B0 → ρ(770)0γ) = (7.9 ± 0.3 ± 0.2 ± 0.2) × 10−7, where the last uncertainty is due to the branching fraction of the normalisation mode. This result assumes that both the ρ(770)0 and K*(892)0 decays saturate the dihadron mass spectra considered in the analysis. It is consistent with the current world-average value and is by far the most precise measurement to date.
</summary>
<dc:date>2025-12-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Beamdump facility at Jefferson Lab</title>
<link href="https://hdl.handle.net/1721.1/164472" rel="alternate"/>
<author>
<name>Achenbach, Patrick</name>
</author>
<author>
<name>Afanasev, Andrei</name>
</author>
<author>
<name>Ambrozewicz, Pawel</name>
</author>
<author>
<name>Ashkenazi, Adi</name>
</author>
<author>
<name>Banerjee, Dipanwita</name>
</author>
<author>
<name>Battaglieri, Marco</name>
</author>
<author>
<name>Benesch, Jay</name>
</author>
<author>
<name>Bondí, Mariangela</name>
</author>
<author>
<name>Brindza, Paul</name>
</author>
<author>
<name>Camsonne, Alexandre</name>
</author>
<author>
<name>Christy, Eric M.</name>
</author>
<author>
<name>Cline, Ethan W.</name>
</author>
<author>
<name>Cuevas, Chris</name>
</author>
<author>
<name>Dilling, Jens</name>
</author>
<author>
<name>Doria, Luca</name>
</author>
<author>
<name>Fegan, Stuart</name>
</author>
<author>
<name>Filippini, Marco</name>
</author>
<author>
<name>Fulci, Antonino</name>
</author>
<author>
<name>Giovannella, Simona</name>
</author>
<author>
<name>Grazzi, Stefano</name>
</author>
<id>https://hdl.handle.net/1721.1/164472</id>
<updated>2026-03-08T03:39:32Z</updated>
<published>2025-12-24T00:00:00Z</published>
<summary type="text">A Beamdump facility at Jefferson Lab
Achenbach, Patrick; Afanasev, Andrei; Ambrozewicz, Pawel; Ashkenazi, Adi; Banerjee, Dipanwita; Battaglieri, Marco; Benesch, Jay; Bondí, Mariangela; Brindza, Paul; Camsonne, Alexandre; Christy, Eric M.; Cline, Ethan W.; Cuevas, Chris; Dilling, Jens; Doria, Luca; Fegan, Stuart; Filippini, Marco; Fulci, Antonino; Giovannella, Simona; Grazzi, Stefano
The potential of the intense secondary muon, neutrino, and (hypothetical) light dark matter beams at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) is explored. These are produced in the high-power dumps with high-current electron beams. Light dark matter searches with the approved Beam Dump eXperiment (BDX) are driving the realization of a new underground vault behind Hall A that could be extended to a Beamdump Facility with few additional installations. Uniquely, the high-energy muons are created via the Bethe–Heitler process rather than through the more common pion production and decay channels. Several possible muon physics applications are highlighted. Neutrino detector technologies and experiments suitable for a beamdump facility are outlined.
</summary>
<dc:date>2025-12-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Linking Chemical Phase and Mechanical Properties to Evaluate the Use of Millimeter-Wave Induced Vitrified Basalt in Enhanced Geothermal Systems</title>
<link href="https://hdl.handle.net/1721.1/164471" rel="alternate"/>
<author>
<name>Meltzer, Eve R.</name>
</author>
<author>
<name>Stefaniuk, Damian</name>
</author>
<author>
<name>Einstein, Herbert H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164471</id>
<updated>2026-01-10T03:08:21Z</updated>
<published>2025-12-22T00:00:00Z</published>
<summary type="text">Linking Chemical Phase and Mechanical Properties to Evaluate the Use of Millimeter-Wave Induced Vitrified Basalt in Enhanced Geothermal Systems
Meltzer, Eve R.; Stefaniuk, Damian; Einstein, Herbert H.
Extraction of geothermal energy from Earth’s heat could significantly contribute to long-term energy needs, yet the current geothermal drilling process faces significant technical limitations. A promising advancement in enhanced geothermal systems is the use of a millimeter-wave (MMW) gyrotron, which enables faster and more efficient drilling. The MMW drilling process offers two key advantages over traditional methods: (1) rock is melted rather than mechanically drilled, leading to faster wellbore advancement, and (2) the molten rock solidifies into a vitrified wall, eliminating the need for additional casing materials. This integrated drilling and casing method has the potential to save costs, time, and materials. This paper examines the strength, structural integrity, and microscale mechanical and chemical properties of the vitrified material formed during the MMW process, focusing on basalt as the test material. By employing a suite of experimental and analytical characterization techniques, this study aims to provide a comprehensive comparison of the structural, mechanical, and chemical changes in the rock before and after melting, offering insights into the effectiveness and implications of MMW drilling for enhanced geothermal systems. Highlights: There is a clear change of phase between the basalt, the transition zone, and the melt, due to MMW exposure. The region exposed to MMWs is completely vitrified, while there is partial melting of minerals within the zone just outside of the MMW beam. The transition zone created by MMWs poses a high risk to wellbore stability due to its variable mechanical strength and chemical composition. A better understanding of this new material can be achieved by overlaying a series of chemical and mechanical characterization data.
</summary>
<dc:date>2025-12-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining Commercial Health Insurance in India: A System-Dynamics Approach to Complex Stakeholder Incentives and Policy Outcomes</title>
<link href="https://hdl.handle.net/1721.1/164470" rel="alternate"/>
<author>
<name>Mor, Nachiket</name>
</author>
<author>
<name>Gupta, Aakriti</name>
</author>
<author>
<name>Roy, Rahul</name>
</author>
<id>https://hdl.handle.net/1721.1/164470</id>
<updated>2026-01-10T03:08:27Z</updated>
<published>2025-12-08T00:00:00Z</published>
<summary type="text">Reimagining Commercial Health Insurance in India: A System-Dynamics Approach to Complex Stakeholder Incentives and Policy Outcomes
Mor, Nachiket; Gupta, Aakriti; Roy, Rahul
The governments of most low- and middle-income countries are unwilling or unable to adequately fund their health systems using tax resources. Despite this route’s popularity in public discourse, it is neither a feasible nor a desirable way to finance Universal Health Coverage (UHC), given competing public finance priorities and limited citizen demand, among other challenges. It thus becomes essential to study the underlying mechanisms of commercial health insurance and offer citizens the best possible product, one that ensures they not only receive a high degree of protection from health and financial risk on a sustained basis but also find reasonable access and support to improve their health outcomes. In this paper, we build a system-dynamics model that simulates the aggregate behavior of the Indian health-insurance industry, with interacting feedbacks linking decisions by stakeholders such as the insurer, healthy and chronically ill populations, and the regulator to outcomes such as insurance penetration among segments, overall coverage, long-run health status, a market-discovered premium mechanism, and the financial viability of the private insurer. We then investigate policy choices and scenarios to explore the contrast between design choices and ideal or targeted states of this market, such as a market with 100% enrollment, risk selection by insurers, group insurance models, and managed care, and study the impact on our outcomes of interest, i.e., insurance penetration and pricing, the financial sustainability of the insurers, and the population’s health outcomes. The simulations show that even while insurers and the different population segments optimize for their respective near-term objectives, the best outcomes for all come from the managed-care policy option, which yields greater insurance penetration, lower premiums, higher profitability for insurers, and better long-term health outcomes. 
All other choices and scenarios yield suboptimal, imbalanced systemic outcomes. We thus recommend managed care as a desirable policy alternative for low-income countries intending to improve UHC by leveraging commercial health insurance.
</summary>
<dc:date>2025-12-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Symbiotic Digital Environment Framework for Industry 4.0 and 5.0: Enhancing Lifecycle Circularity</title>
<link href="https://hdl.handle.net/1721.1/164469" rel="alternate"/>
<author>
<name>Ponce, Pedro</name>
</author>
<author>
<name>Maldonado-Romo, Javier</name>
</author>
<author>
<name>Anthony, Brian W.</name>
</author>
<author>
<name>Bradley, Russel</name>
</author>
<author>
<name>Montesinos, Luis</name>
</author>
<id>https://hdl.handle.net/1721.1/164469</id>
<updated>2026-01-10T03:08:29Z</updated>
<published>2025-12-06T00:00:00Z</published>
<summary type="text">A Symbiotic Digital Environment Framework for Industry 4.0 and 5.0: Enhancing Lifecycle Circularity
Ponce, Pedro; Maldonado-Romo, Javier; Anthony, Brian W.; Bradley, Russel; Montesinos, Luis
This paper introduces a Symbiotic Digital Environment Framework (SDEF) that integrates Human Digital Twins (HDTs) and Machine Digital Twins (MDTs) to advance lifecycle circularity across all stages of the CADMID model (i.e., Concept, Assessment, Design, Manufacture, In-Service, and Disposal). Unlike existing frameworks that address either digital twins or sustainability in isolation, SDEF establishes a bidirectional adaptive system where human, machine, and environmental digital entities continuously interact to co-optimize performance, resource efficiency, and well-being. The framework’s novelty lies in unifying human-centric adaptability (via HDTs) with circular economy principles to enable real-time symbiosis between industrial processes and their operators. Predictive analytics, immersive simulation, and continuous feedback loops dynamically adjust production parameters based on operator states and environmental conditions, extending asset lifespan while minimizing waste. Two simulation-based scenarios in VR using synthetic data demonstrate the framework’s capacity to integrate circularity metrics (material throughput, energy efficiency, remanufacturability index) with human-machine interaction variables in virtual manufacturing environments. SDEF bridges Industry 4.0’s automation capabilities and Industry 5.0’s human-centric vision, offering a scalable pathway toward sustainable and resilient industrial ecosystems by closing the loop between physical and digital realms.
</summary>
<dc:date>2025-12-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Test Bed to Investigate Wetting Behaviours of High-Temperature Heavy Liquid Metals for Advanced Nuclear Applications</title>
<link href="https://hdl.handle.net/1721.1/164468" rel="alternate"/>
<author>
<name>Saraswat, Abhishek</name>
</author>
<author>
<name>Bhattacharyay, Rajendraprasad</name>
</author>
<author>
<name>Chaudhuri, Paritosh</name>
</author>
<author>
<name>Gedupudi, Sateesh</name>
</author>
<id>https://hdl.handle.net/1721.1/164468</id>
<updated>2026-01-10T03:08:39Z</updated>
<published>2025-11-26T00:00:00Z</published>
<summary type="text">Development of a Test Bed to Investigate Wetting Behaviours of High-Temperature Heavy Liquid Metals for Advanced Nuclear Applications
Saraswat, Abhishek; Bhattacharyay, Rajendraprasad; Chaudhuri, Paritosh; Gedupudi, Sateesh
Specifically engineered heavy liquid metals are proposed as candidate coolants and tritium breeders for advanced nuclear applications. Understanding the wetting behaviours of these liquids on relevant substrate configurations is crucial to tackle the challenges associated with corrosion protection and flow diagnostics development. However, detailed investigations are scarce in the literature. In this experimental study, an apparatus is designed to measure contact angles of different liquid metals over a mirror-polished horizontal SS-304 substrate. This paper presents design aspects of the developed test facility, as well as initial results obtained using direct imaging and the Low-Bond Axisymmetric Drop Shape Analysis algorithm-based image processing technique. Methodological validation is achieved through surrogate liquids/liquid metals (H2O, Hg, Ga, GaInSn), prior to taking measurements from molten lead (Pb) droplets at 425 °C. Estimated contact angles obtained using the two techniques lie within ±10% deviation. Towards the end, the paper lays out plans for future upgrades for studies of wetting behaviours of molten Pb/Pb alloys on substrates with relevant surface properties, including bare P-91 and reduced-activation ferritic–martensitic steels, along with Al2O3/Er2O3-coated versions of these materials, to generate a database for Gen-IV fission reactors and fusion power plants.
</summary>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future Circular Collider Feasibility Study Report</title>
<link href="https://hdl.handle.net/1721.1/164467" rel="alternate"/>
<author>
<name>Benedikt, M.</name>
</author>
<author>
<name>Zimmermann, F.</name>
</author>
<author>
<name>Auchmann, B.</name>
</author>
<author>
<name>Bartmann, W.</name>
</author>
<author>
<name>Burnet, J. P.</name>
</author>
<author>
<name>Carli, C.</name>
</author>
<author>
<name>Chancé, A.</name>
</author>
<author>
<name>Craievich, P.</name>
</author>
<author>
<name>Giovannozzi, M.</name>
</author>
<author>
<name>Grojean, C.</name>
</author>
<author>
<name>Gutleber, J.</name>
</author>
<author>
<name>Hanke, K.</name>
</author>
<author>
<name>Henriques, André</name>
</author>
<author>
<name>Janot, P.</name>
</author>
<author>
<name>Lourenço, C.</name>
</author>
<author>
<name>Mangano, M.</name>
</author>
<author>
<name>Otto, T.</name>
</author>
<author>
<name>Poole, J.</name>
</author>
<author>
<name>Rajagopalan, S.</name>
</author>
<author>
<name>Raubenheimer, T.</name>
</author>
<id>https://hdl.handle.net/1721.1/164467</id>
<updated>2026-01-10T03:08:37Z</updated>
<published>2025-12-24T00:00:00Z</published>
<summary type="text">Future Circular Collider Feasibility Study Report
Benedikt, M.; Zimmermann, F.; Auchmann, B.; Bartmann, W.; Burnet, J. P.; Carli, C.; Chancé, A.; Craievich, P.; Giovannozzi, M.; Grojean, C.; Gutleber, J.; Hanke, K.; Henriques, André; Janot, P.; Lourenço, C.; Mangano, M.; Otto, T.; Poole, J.; Rajagopalan, S.; Raubenheimer, T.
Volume 1 of the FCC Feasibility Report presents an overview of the physics case, experimental programme, and detector concepts for the Future Circular Collider (FCC). This volume outlines how FCC would address some of the most profound open questions in particle physics, from precision studies of the Higgs and EW bosons and of the top quark, to the exploration of physics beyond the Standard Model. The report reviews the experimental opportunities offered by the staged implementation of FCC, beginning with an electron-positron collider (FCC-ee), operating at several centre-of-mass energies, followed by a hadron collider (FCC-hh). Benchmark examples are given of the expected physics performance, in terms of precision and sensitivity to new phenomena, of each collider stage. Detector requirements and conceptual designs for FCC-ee experiments are discussed, as are the specific demands that the physics programme imposes on the accelerator in the domains of the calibration of the collision energy, and the interface region between the accelerator and the detector. The report also highlights advances in detector, software and computing technologies, as well as the theoretical tools/reconstruction techniques that will enable the precision measurements and discovery potential of the FCC experimental programme. The content and structure of this report are guided by the scope and priorities defined in the mandate of the FCC Feasibility Study. It is therefore not intended to serve as an exhaustive review of the full physics potential of FCC. Several topics, already covered in earlier reports such as the FCC CDR, are not reiterated here or are addressed only briefly, in alignment with the study’s focus. 
This volume reflects the outcome of a global collaborative effort involving hundreds of scientists and institutions, aided by a dedicated community-building coordination, and provides a targeted assessment of the scientific opportunities and experimental foundations of the FCC programme.
</summary>
<dc:date>2025-12-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unregulated Vertical Urban Growth Alters Microclimate: Coupling Building-Scale Digital Surface Models with High-Resolution Microclimate Simulations</title>
<link href="https://hdl.handle.net/1721.1/164466" rel="alternate"/>
<author>
<name>Falcão, Jonatas Goulart Marinho</name>
</author>
<author>
<name>Furtado, Luiz Felipe de Almeida</name>
</author>
<author>
<name>Barbosa, Gisele Silva</name>
</author>
<author>
<name>Teixeira Coelho, Luiz Carlos</name>
</author>
<id>https://hdl.handle.net/1721.1/164466</id>
<updated>2026-01-10T03:08:51Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">Unregulated Vertical Urban Growth Alters Microclimate: Coupling Building-Scale Digital Surface Models with High-Resolution Microclimate Simulations
Falcão, Jonatas Goulart Marinho; Furtado, Luiz Felipe de Almeida; Barbosa, Gisele Silva; Teixeira Coelho, Luiz Carlos
Rio de Janeiro’s favelas house over 20% of the city’s population in just 5% of its territory, with Rio das Pedras emerging as a critical case study: ranking as Brazil’s fifth most populous favela and its most vertically intensified. This study quantifies how uncontrolled vertical growth in informal settlements disrupts microclimate dynamics, directly impacting thermal comfort. Using high-resolution geospatial analytics, we integrated digital surface models (DSMs) derived from LiDAR and photogrammetric data (2013, 2019, and 2024) with microclimatic simulations to assess urban morphology changes and their thermal effects. A spatiotemporal cadastral analysis tracked vertical expansion (new floors) and demolition patterns, while ENVI-met simulations mapped air temperature anomalies across decadal scenarios. Results reveal two key findings: (1) rapid, unregulated construction has significantly altered local airflow and surface energy balance, exacerbating the urban heat island (UHI) effect; (2) microclimatic simulations consistently recorded elevated temperatures, with the most pronounced impacts in densely built zones. These findings underscore the need for public policies to mitigate such negative effects observed in informal settlement areas.
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Symbolic Bridge: A Monograph on Niela Miller Life’s Work</title>
<link href="https://hdl.handle.net/1721.1/164465" rel="alternate"/>
<author>
<name>Labrune, Jean-Baptiste</name>
</author>
<id>https://hdl.handle.net/1721.1/164465</id>
<updated>2026-01-10T03:05:14Z</updated>
<published>2026-01-09T00:00:00Z</published>
<summary type="text">The Symbolic Bridge: A Monograph on Niela Miller Life’s Work
Labrune, Jean-Baptiste
This monograph examines the interdisciplinary contributions of Niela Miller, specifically her development of Symbolic Modeling (SymMod) and its role in bridging humanistic psychology with technological innovation. Situated within the MIT Media Lab’s framework of unconventional synthesis, the study explores how Miller’s focus on tacit, pre-verbal, and intuitive knowledge complements data-driven paradigms. The research archives her transition from bodily-based psychological practices to pioneering work in virtual learning environments and "metaliteracy." By analyzing Miller’s methodology for unlocking human potential through symbolic expression, this document provides a formal architecture for integrating and extending human consciousness into the design of future technologies.
In an era dominated by code and explicit data, the work of Niela Miller serves as a vital reminder that human innovation is rooted in the intuitive and the symbolic. This document offers an immersive look into Miller’s lifelong exploration of the "inner landscape," tracing her journey from foundational humanistic psychology to her visionary use of virtual spaces as laboratories for authentic interaction.
Through the lens of the MIT Media Lab, we explore her Symbolic Modeling methodology—a replicable system designed to translate deep, pre-verbal insights into tangible creation. Whether she is utilizing bodily performance to map the psyche or defining new frontiers of digital literacy, Miller’s work challenges the boundary between the human experience and technological advancement. This is more than an archive; it is a celebration of the belief that our most profound breakthroughs come from what we can symbolically express but not always logically articulate.
</summary>
<dc:date>2026-01-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Striking a Pose: DIY Computer Vision Sensor Kit to Measure Public Life Using Pose Estimation Enhanced Action Recognition Model</title>
<link href="https://hdl.handle.net/1721.1/164463" rel="alternate"/>
<author>
<name>Williams, Sarah</name>
</author>
<author>
<name>Kang, Minwook</name>
</author>
<id>https://hdl.handle.net/1721.1/164463</id>
<updated>2026-01-09T06:25:43Z</updated>
<published>2025-11-01T00:00:00Z</published>
<summary type="text">Striking a Pose: DIY Computer Vision Sensor Kit to Measure Public Life Using Pose Estimation Enhanced Action Recognition Model
Williams, Sarah; Kang, Minwook
Observing and measuring public life is essential for designing inclusive, vibrant, and climate-resilient public spaces. While urban planners have traditionally relied on manual observation, recent advances in open-source Computer Vision (CV) now enable automated analysis. However, most CV sensors in urban studies focus on transportation analysis, offering limited insight into nuanced human behaviors such as sitting or socializing. This limitation stems in part from the challenges CV algorithms face in detecting subtle activities within public spaces. This study introduces the Public Life Sensor Kit (PLSK), an open-source, do-it-yourself system that integrates a GoPro camera with an NVIDIA Jetson edge device, and evaluates whether pose estimation-enhanced CV models can improve the detection of fine-grained public life behaviors, such as sitting and social interaction. The PLSK was deployed during a public space intervention project in Sydney, Australia. The resulting data were measured against data collected from the Vivacity sensor, a commercial transportation-focused CV system, and traditional human observation. The results show that the PLSK outperforms the commercial sensor in detecting and classifying key public life activities, including pedestrian traffic, sitting, and socializing. These findings highlight the potential of the PLSK to support ethically collected and behavior-rich public space analysis and advocate for its adoption in next-generation urban sensing technologies.
</summary>
<dc:date>2025-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pretreatment of Mice with 830 nm Light Enhances Endurance During Acute Exercise</title>
<link href="https://hdl.handle.net/1721.1/164462" rel="alternate"/>
<author>
<name>Cheema, Nashwa</name>
</author>
<author>
<name>Ghag, Namrata</name>
</author>
<author>
<name>Pham, Linh</name>
</author>
<author>
<name>Wise, Emma</name>
</author>
<author>
<name>Fuchs, Christiane</name>
</author>
<author>
<name>Anderson, Rox</name>
</author>
<author>
<name>Tam, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/164462</id>
<updated>2026-01-09T06:25:47Z</updated>
<published>2025-10-23T00:00:00Z</published>
<summary type="text">Pretreatment of Mice with 830 nm Light Enhances Endurance During Acute Exercise
Cheema, Nashwa; Ghag, Namrata; Pham, Linh; Wise, Emma; Fuchs, Christiane; Anderson, Rox; Tam, Joshua
Light therapy has been shown to produce several beneficial physiological effects in a wide range of tissues. The musculoskeletal system can be irradiated with deeply penetrating wavelengths in near infrared (NIR) regions. Photobiomodulation therapy (PBMT) reduces pain and inflammation and enhances physical performance. However, the mechanism(s) of cellular responses to PBMT in muscle is not clearly understood. Therefore, the goal of this study is to improve our understanding of the mechanism(s) of action of PBMT effects in exercised and sedentary muscle. In sedentary mice, PBMT using a wavelength of 830 nm increased the gene expression for muscle tissue development, including cFos, which is critical for activating interstitial and satellite cells that repair muscle. Immunostaining for cFOS expression confirmed an increase in the number of activated cells in PBMT-treated muscle. We observed that PBMT-treated mice showed increased performance on the treadmill, reduced muscle fiber damage, and altered mitochondrial structure. RNA sequencing from fatigued TA tissue suggested that PBMT treatment increased the expression of genes associated with tissue regeneration and remodeling, suggesting tissue adaptation and muscle repair after exercise with PBMT. In conclusion, our study suggests that the 830 nm wavelength may have altered the muscle by activating regenerative genes that protect the tissue from exercise-induced cellular stress.
</summary>
<dc:date>2025-10-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Physiologic Assessment into Virtual Reality-Based Pediatric Pain Intervention: A Feasibility Study</title>
<link href="https://hdl.handle.net/1721.1/164461" rel="alternate"/>
<author>
<name>Marwah, Harsheen</name>
</author>
<author>
<name>Moldovanu, Stefania R.</name>
</author>
<author>
<name>Reks, Talis</name>
</author>
<author>
<name>Anthony, Brian</name>
</author>
<author>
<name>Logan, Deirdre E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164461</id>
<updated>2026-01-09T06:25:49Z</updated>
<published>2025-10-22T00:00:00Z</published>
<summary type="text">Integrating Physiologic Assessment into Virtual Reality-Based Pediatric Pain Intervention: A Feasibility Study
Marwah, Harsheen; Moldovanu, Stefania R.; Reks, Talis; Anthony, Brian; Logan, Deirdre E.
This feasibility study explored the integration of physiological monitoring into a virtual reality (VR) intervention for pediatric pain management. The goal of this study is to identify a feasible strategy for collecting physiologic data in the context of a VR intervention currently being developed for youth with chronic pain. We assess the potential of Cognitive Load (CL)—derived from heart rate and pupillometry/eye-tracking data—as a marker of arousal and user engagement in a VR simulation to promote school functioning in youth with chronic pain. The HP Reverb G2 Omnicept headset and Polar H10 heart-rate sensor were utilized. The Child Presence Questionnaire (CPQ) assessed participants’ self-reported immersion and engagement. Data collection focused on the feasibility and utility of physiologic data in assessing arousal and correlations with self-reported experience. Nine participants engaged in the simulation, with eight yielding complete data. The simulation and headset were well tolerated. The CPQ Transportation subscale showed a trend-level correlation with mean CL. Due to the small sample and feasibility focus, individual-level results were examined. Combining multiple physiologic markers into a construct like CL is intriguing, but data interpretability was limited. Pupillometry and related metrics show promise as feasible markers of engagement and arousal for VR-based intervention but require appropriate expertise to fully interpret. The study found that integration of physiologic monitoring is feasible, but further work is needed to standardize metrics and identify the most useful and user-friendly markers.
</summary>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two Decades of CARICOMP Mangrove Monitoring (1992–2013) Reveal Variability in Tree Structure and Productivity of Rhizophora mangle Across the Wider Caribbean</title>
<link href="https://hdl.handle.net/1721.1/164460" rel="alternate"/>
<author>
<name>Kjerfve, Björn</name>
</author>
<author>
<name>Oxenford, Hazel A.</name>
</author>
<author>
<name>Collin, Rachel</name>
</author>
<author>
<name>Pestana, Inácio Abreu</name>
</author>
<author>
<name>Samper-Villarreal, Jimena</name>
</author>
<author>
<name>Medina-Gómez, Israel</name>
</author>
<author>
<name>Cortés, Jorge</name>
</author>
<author>
<name>Smith, Struan R.</name>
</author>
<author>
<name>Koltes, Karen</name>
</author>
<author>
<name>Feller, Ilka C.</name>
</author>
<author>
<name>Bastidas, Carolina</name>
</author>
<author>
<name>Juman, Rahanna</name>
</author>
<author>
<name>Geraldes, Francisco X.</name>
</author>
<author>
<name>Filippo, Alessandro</name>
</author>
<author>
<name>Varela, Ramon</name>
</author>
<author>
<name>McCoy, Croy</name>
</author>
<author>
<name>Garzón-Ferreira, Jaime</name>
</author>
<author>
<name>Polanía, Jaime</name>
</author>
<author>
<name>Capelo, Juan C.</name>
</author>
<author>
<name>Ogden, John</name>
</author>
<id>https://hdl.handle.net/1721.1/164460</id>
<updated>2026-01-09T06:25:50Z</updated>
<published>2025-12-01T00:00:00Z</published>
<summary type="text">Two Decades of CARICOMP Mangrove Monitoring (1992–2013) Reveal Variability in Tree Structure and Productivity of Rhizophora mangle Across the Wider Caribbean
Kjerfve, Björn; Oxenford, Hazel A.; Collin, Rachel; Pestana, Inácio Abreu; Samper-Villarreal, Jimena; Medina-Gómez, Israel; Cortés, Jorge; Smith, Struan R.; Koltes, Karen; Feller, Ilka C.; Bastidas, Carolina; Juman, Rahanna; Geraldes, Francisco X.; Filippo, Alessandro; Varela, Ramon; McCoy, Croy; Garzón-Ferreira, Jaime; Polanía, Jaime; Capelo, Juan C.; Ogden, John
The Caribbean Coastal Marine Productivity (CARICOMP) program was conceptualized in 1985 to monitor coral reefs, seagrass beds, and mangrove forests at multiple sites across the wider Caribbean. Mangrove monitoring was focused on the dominant Caribbean species, red mangrove (Rhizophora mangle). Forest structure and productivity were monitored at 21 sites (18 countries) across different geomorphological settings, from tropical to subtropical mainland and island systems. Here, we provide the key findings from the CARICOMP mangrove data collected, mostly from 1992 to 2013, to assess spatial and temporal variability across the region. Red mangrove above-ground biomass averaged 190 t ha−1 (far higher than previously reported) but ranged widely across sites from 33 to 590 t ha−1, equating to an average above-ground ‘blue carbon’ of 84 t ha−1 (range 15–260 t ha−1). Tree density averaged 3237 trees ha−1, tree basal area averaged 19.7 m2 ha−1, tree height averaged 6.1 ± 2.8 m, and seedling density varied from 1.2 to 74 seedlings m−2 across the sites. Among the environmental factors that influence mangroves, local temperature and rainfall explained 48% of the variability in measured tree structure parameters. Annual litterfall, as a proxy for productivity, measured on average 1.24 ± 0.70 kg m−2 yr−1, with 60% of the total litterfall composed of leaves. Litterfall varied seasonally by 42%. No relationship was apparent between litterfall and seasonal ocean–atmosphere climate indices (ONI and AMM). With the exception of the three most southwesterly CARICOMP sites, hurricanes and tropical storms impacted the mangrove sites repeatedly, resulting in considerable damage.
A direct strike by a category-4 hurricane in 1998 in the Dominican Republic killed 67% of the red mangrove trees, lowered above-ground biomass by 91%, basal area by 89%, litterfall by 63%, and resulted in the subsequent growth of many tall and thin saplings, totally changing the structure of the forest ecosystem in the first few years after the hurricane. In comparing mangrove systems, major differences may be explained by time elapsed since the last destructive event (hurricane) affecting each site. This highlights the fact that despite an increasing focus on preserving these valuable ecosystems, they are still highly vulnerable to natural hazards and likely to face a poor outcome under ongoing climate change.
</summary>
<dc:date>2025-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thallium(I) Uptake and Accumulation by Wheat and Rice Plants</title>
<link href="https://hdl.handle.net/1721.1/164459" rel="alternate"/>
<author>
<name>Yang, Puu-Tai</name>
</author>
<author>
<name>Chang, Hsin-Fang</name>
</author>
<author>
<name>Huang, Liang-Sin</name>
</author>
<author>
<name>Chuang, Tsung-Ju</name>
</author>
<author>
<name>Wang, Shan-Li</name>
</author>
<id>https://hdl.handle.net/1721.1/164459</id>
<updated>2026-01-09T06:25:52Z</updated>
<published>2025-12-17T00:00:00Z</published>
<summary type="text">Thallium(I) Uptake and Accumulation by Wheat and Rice Plants
Yang, Puu-Tai; Chang, Hsin-Fang; Huang, Liang-Sin; Chuang, Tsung-Ju; Wang, Shan-Li
Thallium (Tl) is a highly toxic trace metal of increasing concern in agricultural soils. This study investigated the uptake, accumulation, and tissue-level distribution of Tl(I) in rice (Oryza sativa L.) and wheat (Triticum aestivum L.) grown in three agricultural soils differing in soil pH and texture. In the seedling pot experiment (0–100 mg kg−1 soil Tl), plant Tl concentrations increased dose-dependently, and were at least an order of magnitude lower in the alkaline soil than in the acidic soils. Bioaccumulation factors of roots and shoots generally exceeded unity and declined with increasing Tl dose in acidic soils, consistent with uptake saturation and physiological stress at high exposure. To elucidate how soil Tl speciation and pH regulate Tl availability, X-ray absorption spectroscopy (XAS) was used; it showed that Tl(I) sorbed on illite was the predominant species in all soils (89–95%), with a minor fraction (5–11%) associated with non-specific adsorption. In maturity pots (5 mg kg−1 soil Tl), both crops grown in the moderately acidic, coarse-textured soil translocated a small fraction of absorbed Tl to grains, with wheat and rice containing 0.24 and 0.10 mg kg−1 Tl, respectively. Comparatively, plants in the more acidic soil failed to reach maturity, and grain Tl was not detected in the alkaline soil. LA-ICP-MS mapping revealed Tl enrichment in the bran and embryo of rice and in the crease, bran, and embryo of wheat, indicating that unpolished grains may pose higher dietary exposure risks than polished products. Overall, these findings demonstrate the key roles of soil pH and mineral composition in governing soil Tl availability and plant Tl uptake, whereas plant transport processes regulate grain Tl loading. In the absence of food-safety standards for Tl, the results of this study underscore the need to better understand and mitigate Tl transfer from contaminated soils into human food chains via cereal crops.
</summary>
<dc:date>2025-12-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the field control operation of railway motors : a thesis</title>
<link href="https://hdl.handle.net/1721.1/164458" rel="alternate"/>
<author>
<name>Davis, Stanley W. (Stanley Whitcomb)</name>
</author>
<id>https://hdl.handle.net/1721.1/164458</id>
<updated>2026-01-07T03:24:36Z</updated>
<published>1925-01-01T00:00:00Z</published>
<summary type="text">A study of the field control operation of railway motors : a thesis
Davis, Stanley W. (Stanley Whitcomb)
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1925; Includes bibliographical references (leaf 91).
</summary>
<dc:date>1925-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reviewing I.S. : how to handle legacy systems?</title>
<link href="https://hdl.handle.net/1721.1/164457" rel="alternate"/>
<author>
<name>Orlando, Ricardo, 1966-</name>
</author>
<id>https://hdl.handle.net/1721.1/164457</id>
<updated>2026-01-07T03:23:47Z</updated>
<published>1999-01-01T00:00:00Z</published>
<summary type="text">Reviewing I.S. : how to handle legacy systems?
Orlando, Ricardo, 1966-
Thesis: S.M.M.O.T., Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 1999; Includes bibliographical references (leaves 100-106).
</summary>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors</title>
<link href="https://hdl.handle.net/1721.1/164456" rel="alternate"/>
<author>
<name>Trapp, Donald L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164456</id>
<updated>2026-01-07T03:23:33Z</updated>
<published>1962-01-01T00:00:00Z</published>
<summary type="text">The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors
Trapp, Donald L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1962; Appendix contains numerous pamphlets.; Includes bibliographical references (leaves 135-136).
</summary>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The politics of metropolitan transportation.</title>
<link href="https://hdl.handle.net/1721.1/164455" rel="alternate"/>
<author>
<name>Colcord, Frank Carlton.</name>
</author>
<id>https://hdl.handle.net/1721.1/164455</id>
<updated>2026-01-07T03:04:18Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The politics of metropolitan transportation.
Colcord, Frank Carlton.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics and Social Science, 1964
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design of a control system for the terminal phase of a satellite rendezvous</title>
<link href="https://hdl.handle.net/1721.1/164454" rel="alternate"/>
<author>
<name>Hollister, Walter M., 1930-</name>
</author>
<id>https://hdl.handle.net/1721.1/164454</id>
<updated>2026-01-07T03:23:50Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">The design of a control system for the terminal phase of a satellite rendezvous
Hollister, Walter M., 1930-
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 47).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Compliance in a gyroscope gimbal</title>
<link href="https://hdl.handle.net/1721.1/164453" rel="alternate"/>
<author>
<name>Graham, James William.</name>
</author>
<id>https://hdl.handle.net/1721.1/164453</id>
<updated>2026-01-07T03:24:32Z</updated>
<published>1958-01-01T00:00:00Z</published>
<summary type="text">Compliance in a gyroscope gimbal
Graham, James William.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1958
</summary>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On hypergraphs and hypergeometries.</title>
<link href="https://hdl.handle.net/1721.1/164452" rel="alternate"/>
<author>
<name>Helgason, Thorkell.</name>
</author>
<id>https://hdl.handle.net/1721.1/164452</id>
<updated>2026-01-07T03:04:03Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">On hypergraphs and hypergeometries.
Helgason, Thorkell.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1971; Vita.; Bibliography: leaves 158-159.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Noise analysis of circuit models representing maser operation.</title>
<link href="https://hdl.handle.net/1721.1/164451" rel="alternate"/>
<author>
<name>Hempstead, Robert Douglas.</name>
</author>
<id>https://hdl.handle.net/1721.1/164451</id>
<updated>2026-01-07T03:23:54Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Noise analysis of circuit models representing maser operation.
Hempstead, Robert Douglas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1965; Bibliography: leaves 106-108.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cocoa in the Ghanaian economy.</title>
<link href="https://hdl.handle.net/1721.1/164450" rel="alternate"/>
<author>
<name>Bateman, Merril Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/164450</id>
<updated>2026-01-07T03:03:55Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Cocoa in the Ghanaian economy.
Bateman, Merril Joseph.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1965
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Even denominator quantum numbers and termination of the fractional series in the fractional quantum hall effect</title>
<link href="https://hdl.handle.net/1721.1/164449" rel="alternate"/>
<author>
<name>Willett, Robert L. (Robert Lee)</name>
</author>
<id>https://hdl.handle.net/1721.1/164449</id>
<updated>2026-01-07T03:04:22Z</updated>
<published>1989-01-01T00:00:00Z</published>
<summary type="text">Even denominator quantum numbers and termination of the fractional series in the fractional quantum hall effect
Willett, Robert L. (Robert Lee)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1989; Includes bibliographical references (leaves 6-7).
</summary>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard</title>
<link href="https://hdl.handle.net/1721.1/164448" rel="alternate"/>
<author>
<name>Ferguson, William Lloyd.</name>
</author>
<id>https://hdl.handle.net/1721.1/164448</id>
<updated>2026-01-07T03:23:43Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard
Ferguson, William Lloyd.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1979; Bibliography: leaves 194-195.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigations in the theory of quantum corrections to classical solutions of the Yang-Mills equations</title>
<link href="https://hdl.handle.net/1721.1/164447" rel="alternate"/>
<author>
<name>Callias, Constantine John.</name>
</author>
<id>https://hdl.handle.net/1721.1/164447</id>
<updated>2026-01-07T03:04:06Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Investigations in the theory of quantum corrections to classical solutions of the Yang-Mills equations
Callias, Constantine John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wave equations, particles and chronometric geometry.</title>
<link href="https://hdl.handle.net/1721.1/164446" rel="alternate"/>
<author>
<name>Orsted, Bent.</name>
</author>
<id>https://hdl.handle.net/1721.1/164446</id>
<updated>2026-01-07T03:03:58Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Wave equations, particles and chronometric geometry.
Orsted, Bent.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planteez(tm) : business plan and preliminary research</title>
<link href="https://hdl.handle.net/1721.1/164445" rel="alternate"/>
<author>
<name>Sanchez, Manuel A. (Manuel Andres), 1979-</name>
</author>
<id>https://hdl.handle.net/1721.1/164445</id>
<updated>2026-01-07T03:24:18Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">Planteez(tm) : business plan and preliminary research
Sanchez, Manuel A. (Manuel Andres), 1979-
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2001; Includes bibliographical references (p. 15).
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Refutability Gap: Challenges in Validating Reasoning by Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/164444" rel="alternate"/>
<author>
<name>Mossel, Elchanan</name>
</author>
<id>https://hdl.handle.net/1721.1/164444</id>
<updated>2026-01-07T03:01:39Z</updated>
<published>2026-01-06T00:00:00Z</published>
<summary type="text">The Refutability Gap: Challenges in Validating Reasoning by Large Language Models
Mossel, Elchanan
Recent reports claim that Large Language Models (LLMs) have achieved the ability to derive new science and exhibit human-level general intelligence. We argue that such claims are not rigorous scientific claims, as they do not satisfy Popper’s refutability principle (often termed falsifiability), which requires that scientific statements be capable of being disproven. We identify several methodological pitfalls in current AI research on reasoning, including the inability to verify the novelty of findings due to opaque and non-searchable training data, the lack of reproducibility caused by continuous model updates, and the omission of human-interaction transcripts, which obscures the true source of scientific discovery. Additionally, the absence of counterfactuals and data on failed attempts creates a selection bias that may exaggerate LLM capabilities. To address these challenges, we propose guidelines for scientific transparency and reproducibility for research on reasoning by LLMs. Establishing such guidelines is crucial for both scientific integrity and the ongoing societal debates regarding fair data usage.
</summary>
<dc:date>2026-01-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity Loses: Real-Time Resolution of Ambiguous Wh-Questions in Japanese</title>
<link href="https://hdl.handle.net/1721.1/164441" rel="alternate"/>
<author>
<name>Nakamura, Chie</name>
</author>
<author>
<name>Flynn, Suzanne</name>
</author>
<author>
<name>Miyamoto, Yoichi</name>
</author>
<author>
<name>Yusa, Noriaki</name>
</author>
<id>https://hdl.handle.net/1721.1/164441</id>
<updated>2026-03-08T03:39:35Z</updated>
<published>2025-11-25T00:00:00Z</published>
<summary type="text">Proximity Loses: Real-Time Resolution of Ambiguous Wh-Questions in Japanese
Nakamura, Chie; Flynn, Suzanne; Miyamoto, Yoichi; Yusa, Noriaki
This study investigated how Japanese speakers interpret structurally ambiguous wh-questions, testing whether filler–gap resolution is guided by syntactic resolution based on hierarchical structure or linear locality based on surface word order. We combined behavioral key-press responses with fine-grained eye-tracking data and applied cluster-based permutation analysis to capture the moment-by-moment time course of syntactic interpretation as sentences were processed in real time. Key-press responses revealed a preference for resolving the dependency at the main clause (MC) gap position. Eye-tracking data showed early predictive fixations to the MC picture, followed by shifts to the embedded clause (EC) picture as the embedded event was described. These shifts occurred prior to the appearance of syntactic cues that signal the presence of an EC structure, such as the complementizer -to, and were therefore most likely guided by referential alignment with the linguistic input rather than by syntactic reanalysis. A subsequent return of the gaze to the MC picture occurred when the clause-final question particle -ka became available, confirming the interrogative use of the wh-phrase. Both key-press and eye-tracking data showed that participants did not commit to the first grammatically available EC interpretation but instead waited until clause-final particle information confirmed the interrogative use of the wh-phrase, ultimately favoring the MC interpretation. This pattern supports the view that filler–gap resolution is guided by structural locality rather than linear locality. By using high-resolution temporal data and statistically robust analytic techniques, this study demonstrates that Japanese comprehenders engage in predictive yet structurally cautious parsing. 
These findings challenge earlier claims that filler–gap resolution in Japanese is primarily driven by linear locality and instead showed a preference for resolving dependencies at the structurally higher MC position, consistent with parsing biases previously observed in English, despite typological differences in word order between the two languages. This preference also reflects sensitivity to language-specific morpho-syntactic cues in Japanese, such as clause-final particles.
</summary>
<dc:date>2025-11-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering peroxisomal biosynthetic pathways for maximization of triterpene production in Yarrowia lipolytica</title>
<link href="https://hdl.handle.net/1721.1/164440" rel="alternate"/>
<author>
<name>Ma, Yongshuo</name>
</author>
<author>
<name>Shang, Yi</name>
</author>
<author>
<name>Stephanopoulos, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/164440</id>
<updated>2026-03-08T03:39:27Z</updated>
<published>2024-01-23T00:00:00Z</published>
<summary type="text">Engineering peroxisomal biosynthetic pathways for maximization of triterpene production in Yarrowia lipolytica
Ma, Yongshuo; Shang, Yi; Stephanopoulos, Gregory
Constructing efficient cell factories for product synthesis is frequently hampered by competing pathways and/or insufficient precursor supply. This is particularly evident in the case of triterpenoid biosynthesis in Yarrowia lipolytica, where squalene biosynthesis is tightly coupled to cytosolic biosynthesis of sterols essential for cell viability. Here, we addressed this problem by reconstructing the complete squalene biosynthetic pathway, starting from acetyl-CoA, in the peroxisome, thus harnessing the peroxisomal acetyl-CoA pool and sequestering squalene synthesis in this organelle away from competing cytosolic reactions. This strategy increased squalene levels 1,300-fold relative to native cytosolic synthesis. Subsequent enhancement of the peroxisomal acetyl-CoA supply by two independent approaches, 1) converting the cellular lipid pool to peroxisomal acetyl-CoA and 2) establishing an orthogonal acetyl-CoA shortcut from CO2-derived acetate in the peroxisome, further significantly improved local squalene accumulation. Using these approaches, we constructed squalene-producing strains capable of yielding 32.8 g/L from glucose and 31.6 g/L from acetate by employing a cofeeding strategy, in bioreactor fermentations. Our findings provide a feasible strategy for protecting intermediate metabolites that can be claimed by multiple reactions by engineering peroxisomes in Y. lipolytica as microfactories for the production of such intermediates, in particular acetyl-CoA-derived metabolites.
</summary>
<dc:date>2024-01-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drug screening in human physiologic medium identifies uric acid as an inhibitor of rigosertib efficacy</title>
<link href="https://hdl.handle.net/1721.1/164439" rel="alternate"/>
<author>
<name>Rawat, Vipin</name>
</author>
<author>
<name>DeLear, Patrick</name>
</author>
<author>
<name>Prashanth, Prarthana</name>
</author>
<author>
<name>Ozgurses, Mete Emir</name>
</author>
<author>
<name>Tebeje, Anteneh</name>
</author>
<author>
<name>Burns, Philippa A</name>
</author>
<author>
<name>Conger, Kelly O</name>
</author>
<author>
<name>Solís, Christopher</name>
</author>
<author>
<name>Hasnain, Yasir</name>
</author>
<author>
<name>Novikova, Anna</name>
</author>
<author>
<name>Endress, Jennifer E</name>
</author>
<author>
<name>González-Sánchez, Paloma</name>
</author>
<author>
<name>Dong, Wentao</name>
</author>
<author>
<name>Stephanopoulos, Greg</name>
</author>
<author>
<name>DeNicola, Gina M</name>
</author>
<author>
<name>Harris, Isaac S</name>
</author>
<author>
<name>Sept, David</name>
</author>
<author>
<name>Mason, Frank M</name>
</author>
<author>
<name>Coloff, Jonathan L</name>
</author>
<id>https://hdl.handle.net/1721.1/164439</id>
<updated>2026-03-08T03:39:27Z</updated>
<published>2024-05-30T00:00:00Z</published>
<summary type="text">Drug screening in human physiologic medium identifies uric acid as an inhibitor of rigosertib efficacy
Rawat, Vipin; DeLear, Patrick; Prashanth, Prarthana; Ozgurses, Mete Emir; Tebeje, Anteneh; Burns, Philippa A; Conger, Kelly O; Solís, Christopher; Hasnain, Yasir; Novikova, Anna; Endress, Jennifer E; González-Sánchez, Paloma; Dong, Wentao; Stephanopoulos, Greg; DeNicola, Gina M; Harris, Isaac S; Sept, David; Mason, Frank M; Coloff, Jonathan L
The nonphysiological nutrient levels found in traditional culture media have been shown to affect numerous aspects of cancer cell physiology, including how cells respond to certain therapeutic agents. Here, we comprehensively evaluated how physiological nutrient levels affect therapeutic response by performing drug screening in human plasma-like medium. We observed dramatic nutrient-dependent changes in sensitivity to a variety of FDA-approved and clinically trialed compounds, including rigosertib, an experimental cancer therapeutic that recently failed in phase III clinical trials. Mechanistically, we found that the ability of rigosertib to destabilize microtubules is strongly inhibited by the purine metabolism end product uric acid, which is uniquely abundant in humans relative to traditional in vitro and in vivo cancer models. These results demonstrate the broad and dramatic effects nutrient levels can have on drug response and how incorporation of human-specific physiological nutrient medium might help identify compounds whose efficacy could be influenced in humans.
</summary>
<dc:date>2024-05-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metabolic Engineering of E. coli for Enhanced Diols Production from Acetate</title>
<link href="https://hdl.handle.net/1721.1/164438" rel="alternate"/>
<author>
<name>Ricci, Luca</name>
</author>
<author>
<name>Cen, Xuecong</name>
</author>
<author>
<name>Zu, Yuexuan</name>
</author>
<author>
<name>Antonicelli, Giacomo</name>
</author>
<author>
<name>Chen, Zhen</name>
</author>
<author>
<name>Fino, Debora</name>
</author>
<author>
<name>Pirri, Fabrizio C</name>
</author>
<author>
<name>Stephanopoulos, Gregory</name>
</author>
<author>
<name>Woolston, Benjamin M</name>
</author>
<author>
<name>Re, Angela</name>
</author>
<id>https://hdl.handle.net/1721.1/164438</id>
<updated>2026-03-08T03:38:53Z</updated>
<published>2025-04-18T00:00:00Z</published>
<summary type="text">Metabolic Engineering of E. coli for Enhanced Diols Production from Acetate
Ricci, Luca; Cen, Xuecong; Zu, Yuexuan; Antonicelli, Giacomo; Chen, Zhen; Fino, Debora; Pirri, Fabrizio C; Stephanopoulos, Gregory; Woolston, Benjamin M; Re, Angela
Effective utilization of renewable carbon sources is essential for developing sustainable biobased manufacturing. Here, we developed Escherichia coli strains to produce 2,3-butanediol and acetoin (collectively referred to as diols) using acetate as the sole carbon source by stepwise metabolic engineering. When tested in fed-batch experiments, the strain overexpressing the entire acetate utilization pathway consumed acetate at a 15% faster rate (0.78 ± 0.05 g/g/h) and produced a 35% higher diol titer (1.16 ± 0.01 g/L) than the baseline diol-producing strain. Moreover, singly overexpressing the genes encoding alternative acetate uptake pathways, as well as alternative isoforms of genes in the malate-to-pyruvate pathway, revealed that leveraging ackA-pta and maeA is more effective in enhancing acetate consumption and diol production than acs and maeB. Finally, the increased substrate consumption rate and diol production obtained in flask-based experiments were confirmed in bench-scale bioreactors operated in fed-batch mode. The highest titer achieved in this configuration, 1.56 g/L, represents an increase of over 30% compared with the only other similar effort reported to date.
</summary>
<dc:date>2025-04-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Constant Degree Networks for Almost-Everywhere Reliable Transmission</title>
<link href="https://hdl.handle.net/1721.1/164437" rel="alternate"/>
<author>
<name>Bafna, Mitali</name>
</author>
<author>
<name>Minzer, Dor</name>
</author>
<id>https://hdl.handle.net/1721.1/164437</id>
<updated>2026-03-08T03:22:36Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Constant Degree Networks for Almost-Everywhere Reliable Transmission
Bafna, Mitali; Minzer, Dor
In the almost-everywhere reliable message transmission problem, introduced by [Dwork, Pippenger, Peleg, Upfal ’86], the goal is to design a sparse communication network G that supports efficient, fault-tolerant protocols for interactions between all node pairs. By fault-tolerant, we mean that even if an adversary corrupts a small fraction of vertices in G, all but a small fraction of vertices can still communicate perfectly via the constructed protocols. Success in doing so allows one to simulate, on a sparse graph, any fault-tolerant distributed computing task and secure multi-party computation protocols built for a complete network, with only minimal overhead in efficiency. Previous works on this problem achieved either constant-degree networks tolerating o(1) faults, constant-degree networks tolerating a constant fraction of faults via inefficient protocols (exponential work complexity), or poly-logarithmic degree networks tolerating a constant fraction of faults. We show a construction of constant-degree networks with efficient protocols (i.e., with polylogarithmic work complexity) that can tolerate a constant fraction of adversarial faults, thus solving the main open problem of Dwork et al. Our main contribution is a composition technique for communication networks, based on graph products. Our technique combines two networks tolerant to adversarial edge faults to construct a network with a smaller degree while maintaining efficiency and fault-tolerance. We apply this composition result multiple times, first combining the polylogarithmic-degree edge-fault-tolerant networks constructed in a recent work of [Bafna, Minzer, Vyas ’24] (which are based on high-dimensional expanders) with themselves, and then with the constant-degree networks (albeit with inefficient protocols) of [Upfal ’92].
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quasi-Linear Size PCPs with Small Soundness from HDX</title>
<link href="https://hdl.handle.net/1721.1/164436" rel="alternate"/>
<author>
<name>Bafna, Mitali</name>
</author>
<author>
<name>Minzer, Dor</name>
</author>
<author>
<name>Vyas, Nikhil</name>
</author>
<author>
<name>Yun, Zhiwei</name>
</author>
<id>https://hdl.handle.net/1721.1/164436</id>
<updated>2026-03-08T03:22:29Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Quasi-Linear Size PCPs with Small Soundness from HDX
Bafna, Mitali; Minzer, Dor; Vyas, Nikhil; Yun, Zhiwei
We construct 2-query, quasi-linear size probabilistically checkable proofs (PCPs) with arbitrarily small constant soundness, improving upon Dinur’s 2-query quasi-linear size PCPs with soundness 1 − Ω(1). As an immediate corollary, we get that under the exponential time hypothesis, for all ε > 0 no approximation algorithm for 3-SAT can obtain an approximation ratio of 7/8 + ε in time 2^(n/log^C n), where C is a constant depending on ε. Our result builds on a recent line of independent works by Bafna, Lifshitz and Minzer, and Dikstein, Dinur and Lubotzky, that showed the existence of linear size direct product testers with small soundness.
The main new ingredient in our proof is a technique that embeds a given 2-CSP into a 2-CSP on a prescribed graph, provided that the latter is a graph underlying a sufficiently good high-dimensional expander (HDX). We achieve this by establishing a novel connection between PCPs and fault-tolerant distributed computing, more precisely, to the almost-everywhere reliable transmission problem introduced by Dwork, Peleg, Pippenger and Upfal (1986). We instantiate this connection by showing that graphs underlying HDXs admit routing protocols that are tolerant to adversarial edge corruptions, also improving upon the state-of-the-art constructions of sparse edge-fault-tolerant networks in the process.
Our PCP construction requires variants of the aforementioned direct product testers with poly-logarithmic degree. The existence and constructability of these variants is shown in the full version.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximately Counting and Sampling Hamiltonian Motifs in Sublinear Time</title>
<link href="https://hdl.handle.net/1721.1/164435" rel="alternate"/>
<author>
<name>Eden, Talya</name>
</author>
<author>
<name>Levi, Reut</name>
</author>
<author>
<name>Ron, Dana</name>
</author>
<author>
<name>Rubinfeld, Ronitt</name>
</author>
<id>https://hdl.handle.net/1721.1/164435</id>
<updated>2026-03-08T03:22:49Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Approximately Counting and Sampling Hamiltonian Motifs in Sublinear Time
Eden, Talya; Levi, Reut; Ron, Dana; Rubinfeld, Ronitt
Counting small subgraphs, referred to as motifs, in large graphs is a fundamental task in graph analysis, extensively studied across various contexts and computational models. In the sublinear-time regime, the relaxed problem of approximate counting has been explored within two prominent query frameworks: the standard model, which permits degree, neighbor, and pair queries, and the strictly more powerful augmented model, which additionally allows for uniform edge sampling. Currently, in the standard model, (optimal) results have been established only for approximately counting edges, stars, and cliques, all of which have a radius of one. This contrasts sharply with the state of affairs in the augmented model, where algorithmic results (some of which are optimal) are known for any input motif, leading to a disparity which we term the “scope gap” between the two models.
In this work, we make significant progress in bridging this gap. Our approach draws inspiration from recent advancements in the augmented model and utilizes a framework centered on counting by uniform sampling, thus allowing us to establish new results in the standard model and simplify previous results.
In particular, our first, and main, contribution is a new algorithm in the standard model for approximately counting any Hamiltonian motif in sublinear time, where the complexity of the algorithm is the sum of two terms. One term equals the complexity of the known algorithms by Assadi, Kapralov, and Khanna (ITCS 2019) and Fichtenberger and Peng (ICALP 2020) in the (strictly stronger) augmented model, and the other is an additional, necessary, additive overhead.
Our second contribution is a variant of our algorithm that enables nearly uniform sampling of these motifs, a capability previously limited in the standard model to edges and cliques. Our third contribution is to introduce even simpler algorithms for stars and cliques by exploiting their radius-one property. As a result, we simplify all previously known algorithms in the standard model for stars (Gonen, Ron, Shavitt (SODA 2010)), triangles (Eden, Levi, Ron, Seshadhri (FOCS 2015)), and cliques (Eden, Ron, Seshadhri (STOC 2018)).
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sandwiching Random Geometric Graphs and Erdos-Renyi with Applications: Sharp Thresholds, Robust Testing, and Enumeration</title>
<link href="https://hdl.handle.net/1721.1/164434" rel="alternate"/>
<author>
<name>Bangachev, Kiril</name>
</author>
<author>
<name>Bresler, Guy</name>
</author>
<id>https://hdl.handle.net/1721.1/164434</id>
<updated>2026-03-08T03:39:03Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Sandwiching Random Geometric Graphs and Erdos-Renyi with Applications: Sharp Thresholds, Robust Testing, and Enumeration
Bangachev, Kiril; Bresler, Guy
The distribution RGG(n, S^(d−1), p) is formed by sampling independent vectors {V_i}_{i=1}^n uniformly on S^(d−1) and placing an edge between pairs of vertices i and j for which ⟨V_i, V_j⟩ ≥ τ_p^d, where τ_p^d is such that the expected density is p. Our main result is a poly-time implementable coupling between Erdős-Rényi and RGG such that G(n, p(1 − O(√(np/d)))) ⊆ RGG(n, S^(d−1), p) ⊆ G(n, p(1 + O(√(np/d)))) edgewise with high probability when d ≫ np. We apply the result to: 1) Sharp Thresholds: We show that for any monotone property having a sharp threshold with respect to the Erdős-Rényi distribution and critical probability p_c^n, random geometric graphs also exhibit a sharp threshold when d ≫ n p_c^n, thus partially answering a question of Perkins. 2) Robust Testing: The coupling shows that testing between G(n, p) and RGG(n, S^(d−1), p) with ε n²p adversarially corrupted edges for any constant ε &gt; 0 is information-theoretically impossible when d ≫ np. We match this lower bound with an efficient (constant degree SoS) spectral refutation algorithm when d ≪ np. 3) Enumeration: We show that the number of geometric graphs in dimension d is at least exp(d n log^(−7) n), recovering (up to the log factors) the sharp result of Sauermann.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sample-Optimal Private Regression in Polynomial Time</title>
<link href="https://hdl.handle.net/1721.1/164433" rel="alternate"/>
<author>
<name>Anderson, Prashanti</name>
</author>
<author>
<name>Bakshi, Ainesh</name>
</author>
<author>
<name>Majid, Mahbod</name>
</author>
<author>
<name>Tiegel, Stefan</name>
</author>
<id>https://hdl.handle.net/1721.1/164433</id>
<updated>2026-03-08T03:22:47Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Sample-Optimal Private Regression in Polynomial Time
Anderson, Prashanti; Bakshi, Ainesh; Majid, Mahbod; Tiegel, Stefan
We consider the task of privately obtaining prediction error guarantees in ordinary least-squares regression problems with Gaussian covariates (with unknown covariance structure). We provide the first sample-optimal polynomial time algorithm for this task under both pure and approximate differential privacy. We show that any improvement to the sample complexity of our algorithm would violate either statistical-query or information-theoretic lower bounds. Additionally, our algorithm is robust to a small fraction of arbitrary outliers and achieves optimal error rates as a function of the fraction of outliers. In contrast, all prior efficient algorithms either incurred sample complexities with sub-optimal dimension dependence, scaling with the condition number of the covariates, or obtained a polynomially worse dependence on the privacy parameters.
Our technical contributions are two-fold: first, we leverage resilience guarantees of Gaussians within the sum-of-squares framework. As a consequence, we obtain efficient sum-of-squares algorithms for regression with optimal robustness rates and sample complexity. Second, we generalize the recent robustness-to-privacy framework of Hopkins, Kamath, Majid, and Narayanan to account for the geometry induced by the covariance of the input samples. This framework crucially relies on the robust estimators being sum-of-squares algorithms, and combining the two steps yields a sample-optimal private regression algorithm. We believe our techniques are of independent interest, and we demonstrate this by obtaining an efficient algorithm for covariance-aware mean estimation, with an optimal dependence on the privacy parameters.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Faster Weighted and Unweighted Tree Edit Distance and APSP Equivalence</title>
<link href="https://hdl.handle.net/1721.1/164432" rel="alternate"/>
<author>
<name>Nogler, Jakob</name>
</author>
<author>
<name>Polak, Adam</name>
</author>
<author>
<name>Saha, Barna</name>
</author>
<author>
<name>Vassilevska Williams, Virginia</name>
</author>
<author>
<name>Xu, Yinzhan</name>
</author>
<author>
<name>Ye, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/164432</id>
<updated>2026-03-08T03:22:38Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Faster Weighted and Unweighted Tree Edit Distance and APSP Equivalence
Nogler, Jakob; Polak, Adam; Saha, Barna; Vassilevska Williams, Virginia; Xu, Yinzhan; Ye, Christopher
The tree edit distance (TED) between two rooted ordered trees with n nodes labeled from an alphabet Σ is the minimum cost of transforming one tree into the other by a sequence of valid operations consisting of insertions, deletions and relabeling of nodes. The tree edit distance is a well-known generalization of string edit distance and has been studied since the 1970s. Its running time has seen steady improvements starting with an O(n^6) algorithm [Tai, J.ACM 1979], improved to O(n^4) [Shasha, Zhang, SICOMP 1989] and to O(n^3 log n) [Klein, ESA 1998], and culminating in an O(n^3) algorithm [Demaine, Mozes, Rossman, Weimann, ACM TALG 2010]. The latter is known to be optimal for any dynamic programming based algorithm that falls under a certain decomposition framework that captures all known sub-n^4-time algorithms. Fine-grained complexity casts further light onto this hardness, showing that a truly subcubic time algorithm for TED implies a truly subcubic time algorithm for All-Pairs Shortest Paths (APSP) [Bringmann, Gawrychowski, Mozes, Weimann, ACM TALG 2020]. Therefore, under the popular APSP hypothesis, a truly subcubic time algorithm for TED cannot exist. However, unlike many problems in fine-grained complexity for which conditional hardness based on APSP also comes with equivalence to APSP, whether TED can be reduced to APSP has remained unknown.
In this paper, we resolve this. Not only do we show that TED is fine-grained equivalent to APSP, our reduction is tight enough that, combined with the fastest APSP algorithm to date [Williams, SICOMP 2018], it gives the first ever subcubic time algorithm for TED, running in n^3/2^(Ω(√(log n))) time.
We also consider the unweighted tree edit distance problem, in which the cost of each edit (insertion, deletion, and relabeling) is one. For unweighted TED, a truly subcubic algorithm is known due to Mao [Mao, FOCS 2022], and was later improved slightly by Dürr [Dürr, IPL 2023] to run in O(n^2.9148) time. Since their algorithm uses bounded monotone min-plus product as a crucial subroutine, and the best running time for this product is Õ(n^((3+ω)/2)) ≤ O(n^2.6857) (where ω is the exponent of fast matrix multiplication), the much higher running time of unweighted TED remained unsatisfactory. In this work, we close this gap and give an algorithm for unweighted TED that runs in Õ(n^((3+ω)/2)) time.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Structure of Catalytic Space: Capturing Randomness and Time via Compression</title>
<link href="https://hdl.handle.net/1721.1/164431" rel="alternate"/>
<author>
<name>Cook, James</name>
</author>
<author>
<name>Li, Jiatu</name>
</author>
<author>
<name>Mertz, Ian</name>
</author>
<author>
<name>Pyne, Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/164431</id>
<updated>2026-03-08T03:22:44Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">The Structure of Catalytic Space: Capturing Randomness and Time via Compression
Cook, James; Li, Jiatu; Mertz, Ian; Pyne, Edward
In the catalytic logspace (CL) model of (Buhrman et al., STOC 2013), we are given a small work tape and a larger catalytic tape that has an arbitrary initial configuration. We may edit this tape, but it must be exactly restored to its initial configuration at the completion of the computation. This model is of interest from a complexity-theoretic perspective as it gains surprising power over traditional space. However, many fundamental structural questions remain open.
We substantially advance the understanding of the structure of CL, addressing several questions raised in prior work. Our main results are as follows.
1: We unconditionally derandomize catalytic logspace: CBPL = CL.
2: We show time and catalytic space bounds can be achieved separately if and only if they can be achieved simultaneously: any problem in both CL and P can be solved in polynomial time-bounded CL.
3: We characterize deterministic catalytic space by the intersection of randomness and time: CL is equivalent to polytime-bounded, zero-error randomized CL.
Our results center around the compress-or-random framework. For the second result, we introduce a simple yet novel compress-or-compute algorithm which, for any catalytic tape, either compresses the tape or quickly and successfully computes the function at hand. For our first result, we further introduce a compress-or-compress-or-random algorithm that combines runtime compression with a second compress-or-random algorithm, building on recent work on distinguish-to-predict transformations and pseudorandom generators with small-space deterministic reconstruction.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rounding Large Independent Sets on Expanders</title>
<link href="https://hdl.handle.net/1721.1/164430" rel="alternate"/>
<author>
<name>Bafna, Mitali</name>
</author>
<author>
<name>Hsieh, Jun-Ting</name>
</author>
<author>
<name>Kothari, Pravesh K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164430</id>
<updated>2026-03-08T03:22:48Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Rounding Large Independent Sets on Expanders
Bafna, Mitali; Hsieh, Jun-Ting; Kothari, Pravesh K.
We develop a new approach for approximating large independent sets when the input graph is a one-sided spectral expander, that is, the uniform random walk matrix of the graph has its second eigenvalue bounded away from 1. Consequently, we obtain a polynomial time algorithm to find linear-sized independent sets in one-sided expanders that are almost 3-colorable or are promised to contain an independent set of size (1/2 − ε)n. Our second result above can be refined to require only a weaker vertex expansion property with an efficient certificate. In a surprising contrast to our algorithmic result, we observe that the analogous task of finding a linear-sized independent set in almost 4-colorable one-sided expanders (even when the second eigenvalue is o_n(1)) is NP-hard, assuming the Unique Games Conjecture.
All prior algorithms that beat the worst-case guarantees for this problem rely on bottom eigenspace enumeration techniques (following the classical spectral methods of Alon and Kahale) and require two-sided expansion, meaning a bounded number of negative eigenvalues of magnitude Ω(1). Such techniques naturally extend to almost k-colorable graphs for any constant k, in contrast to analogous guarantees on one-sided expanders, which are Unique Games-hard to achieve for k ≥ 4.
Our rounding scheme builds on the method of simulating multiple samples from a pseudo-distribution, introduced in Bafna et al. for rounding Unique Games instances. The key to our analysis is a new clustering property of large independent sets in expanding graphs: every large independent set has a larger-than-expected intersection with some member of a small list. We formalize this property in the low-degree sum-of-squares proof system.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Universal SNARGs for NP from Proofs of Correctness</title>
<link href="https://hdl.handle.net/1721.1/164429" rel="alternate"/>
<author>
<name>Jin, Zhengzhong</name>
</author>
<author>
<name>Kalai, Yael Tauman</name>
</author>
<author>
<name>Lombardi, Alex</name>
</author>
<author>
<name>Mathialagan, Surya</name>
</author>
<id>https://hdl.handle.net/1721.1/164429</id>
<updated>2026-03-08T03:22:24Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Universal SNARGs for NP from Proofs of Correctness
Jin, Zhengzhong; Kalai, Yael Tauman; Lombardi, Alex; Mathialagan, Surya
We give new constructions of succinct non-interactive arguments (SNARGs) for NP in the settings of both non-adaptive and adaptive soundness.&#13;
Our construction of non-adaptive SNARG is universal assuming the security of a (leveled or unleveled) fully homomorphic encryption (FHE) scheme as well as a batch argument (BARG) scheme. Specifically, for any choice of parameters ℓ and L, we construct a candidate SNARG scheme for any NP language L with the following properties: (i) the proof length is ℓ· poly(λ), (ii) the common reference string crs has length L· poly(λ), and (iii) the setup is transparent (no private randomness).&#13;
We prove that this SNARG has non-adaptive soundness assuming the existence of any SNARG where the proof size is ℓ, the crs size is L, and there is a size L Extended Frege (EF) proof of completeness for the SNARG.&#13;
Moreover, we can relax the underlying SNARG to be any 2-message privately verifiable argument where the first message is of length L and the second message is of length ℓ. This yields new SNARG constructions based on any “EF-friendly” designated-verifier SNARG or witness encryption scheme. We emphasize that our SNARG is universal in the sense that it does not depend on the argument system.&#13;
We show several new implications of this construction that do not reference proof complexity: (1) a non-adaptive SNARG for NP with transparent crs from LWE under the evasive LWE heuristic. This gives a candidate lattice-based SNARG for NP. (2) a non-adaptive SNARG for NP with transparent crs assuming the (non-explicit) existence of any iO and LWE. (3) a non-adaptive SNARG for NP with a short and transparent (i.e., uniform) crs assuming LWE, FHE and the (non-explicit) existence of any hash function that makes Micali’s SNARG construction sound. (4) a non-adaptive SNARG for languages such as QR and DCR assuming only LWE.&#13;
In the setting of adaptive soundness, we show how to convert any designated-verifier SNARG into a publicly verifiable SNARG, assuming the underlying designated-verifier SNARG has an EF proof of completeness. As a corollary, we construct an adaptive SNARG for UP with a transparent crs assuming subexponential LWE under the evasive LWE heuristic.&#13;
We prove our results by extending the encrypt-hash-and-BARG paradigm of [Jin-Kalai-Lombardi-Vaikuntanathan, STOC ’24].
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs</title>
<link href="https://hdl.handle.net/1721.1/164428" rel="alternate"/>
<author>
<name>Gourabathina, Abinitha</name>
</author>
<author>
<name>Gerych, Walter</name>
</author>
<author>
<name>Pan, Eileen</name>
</author>
<author>
<name>Ghassemi, Marzyeh</name>
</author>
<id>https://hdl.handle.net/1721.1/164428</id>
<updated>2026-03-08T03:22:18Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">The Medium is the Message: How Non-Clinical Information Shapes Clinical Decisions in LLMs
Gourabathina, Abinitha; Gerych, Walter; Pan, Eileen; Ghassemi, Marzyeh
The integration of large language models (LLMs) into clinical diagnostics necessitates a careful understanding of how clinically irrelevant aspects of user inputs directly influence generated treatment recommendations and, consequently, clinical outcomes for end-users. Building on prior research that examines the impact of demographic attributes on clinical LLM reasoning, this study explores how non-clinically relevant attributes shape clinical decision-making by LLMs. Through the perturbation of patient messages, we evaluate whether LLM behavior remains consistent, accurate, and unbiased when non-clinical information is altered. These perturbations assess the brittleness of clinical LLM reasoning by replicating structural errors that may occur when electronic systems process patient questions, and by simulating patient-AI interactions across diverse, vulnerable patient groups. Our findings reveal notable inconsistencies in LLM treatment recommendations and significant degradation of clinical accuracy in ways that reduce care allocation to patients. Additionally, there are significant disparities in treatment recommendations between gender subgroups as well as between model-inferred gender subgroups. We also apply our perturbation framework to a conversational clinical dataset to find that even in conversation, LLM clinical accuracy decreases post-perturbation, and disparities exist in how perturbations impact gender subgroups. By analyzing LLM outputs in response to realistic yet modified clinical contexts, our work deepens understanding of the sensitivity, inaccuracy, and biases inherent in medical LLMs, offering critical insights for the deployment of patient-AI systems.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Performance Mixed-Precision Matrix Multiplication via Tile-Centric Design on Modern Architectures</title>
<link href="https://hdl.handle.net/1721.1/164427" rel="alternate"/>
<author>
<name>Zhang, Qiao</name>
</author>
<author>
<name>Alomairy, Rabab</name>
</author>
<author>
<name>Wang, Dali</name>
</author>
<author>
<name>Gu, Zhuowei</name>
</author>
<author>
<name>Cao, Qinglei</name>
</author>
<id>https://hdl.handle.net/1721.1/164427</id>
<updated>2025-12-23T03:10:24Z</updated>
<published>2025-12-20T00:00:00Z</published>
<summary type="text">High-Performance Mixed-Precision Matrix Multiplication via Tile-Centric Design on Modern Architectures
Zhang, Qiao; Alomairy, Rabab; Wang, Dali; Gu, Zhuowei; Cao, Qinglei
General Matrix Multiplication (GEMM) is a critical operation underpinning a wide range of applications in high-performance computing (HPC) and artificial intelligence (AI). The emergence of hardware optimized for low-precision arithmetic necessitates a reevaluation of numerical algorithms to leverage mixed-precision computations, achieving improved performance and energy efficiency. This research presents an adaptive mixed-precision GEMM framework that enables support for various precision formats at fine-grained tile and block levels, offering a reliable foundation for trustworthy mixed-precision computations. Furthermore, we leverage the PaRSEC runtime system to effectively balance workloads across diverse architectures. The performance exhibits strong scalability across both homogeneous platforms (Intel CPU-based systems and the ARM CPU-based Fugaku supercomputer) and heterogeneous systems (Nvidia V100, A100, and H100 GPU-based platforms, as well as the AMD GPU-based Frontier supercomputer). This work aims to improve computational efficiency and accuracy by bridging algorithmic innovations with hardware capabilities, fostering transformative advancements across a wide range of applications.
</summary>
<dc:date>2025-12-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for t-channel scalar and vector leptoquark exchange in the high-mass dimuon and dielectron spectra in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/164426" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164426</id>
<updated>2026-03-08T03:38:51Z</updated>
<published>2025-12-09T00:00:00Z</published>
<summary type="text">Search for t-channel scalar and vector leptoquark exchange in the high-mass dimuon and dielectron spectra in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
A search for t-channel exchange of leptoquarks (LQs) is performed in dimuon and dielectron spectra using proton-proton collision data collected at √s = 13 TeV with the CMS detector at the CERN LHC. The data correspond to an integrated luminosity of 138 fb⁻¹. Eight scenarios are considered, in which up or down quarks couple to muons or electrons via a scalar or vector LQ exchange, for dilepton invariant masses above 500 GeV. The LQ masses are probed up to 5 TeV, beyond a regime probed by previous pair-production and single-production searches. The differential distributions of dilepton events are fit to templates that model the nonresonant LQ exchange and various standard model background processes. Limits are set on LQ-fermion coupling strengths for scalar and vector LQ masses in the 1–5 TeV range at 95% confidence level, establishing stringent limits on first- and second-generation LQs.
</summary>
<dc:date>2025-12-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for charged-lepton flavour violation in top quark interactions with an up-type quark, a muon, and a τ lepton in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/164425" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164425</id>
<updated>2026-03-08T03:38:51Z</updated>
<published>2025-12-10T00:00:00Z</published>
<summary type="text">Search for charged-lepton flavour violation in top quark interactions with an up-type quark, a muon, and a τ lepton in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.
A search for charged-lepton flavour violation (CLFV) in top quark (t) production and decay is presented. The search uses proton-proton collision data corresponding to 138 fb⁻¹ collected with the CMS experiment at √s = 13 TeV. The signal consists of the production of a single top quark via a CLFV interaction or top quark pair production followed by a CLFV decay. The analysis selects events containing a hadronically decaying τ lepton and a muon of opposite electric charge, as well as at least three jets, one of which is identified as originating from the fragmentation of a bottom quark. Machine learning classification techniques are used to distinguish signal from standard model background events. The results of this search are consistent with the standard model expectations. The upper limits at 95% confidence level on the branching fraction B for CLFV top quark decays to a muon, a τ lepton, and an up or a charm quark are set at B(t → µτu) &lt; (0.04, 0.08, and 0.12) × 10⁻⁶ and B(t → µτc) &lt; (0.81, 1.71, and 2.05) × 10⁻⁶ for scalar, vector, and tensor-like operators, respectively.
</summary>
<dc:date>2025-12-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Anyon delocalization transitions out of a disordered fractional quantum anomalous Hall insulator</title>
<link href="https://hdl.handle.net/1721.1/164424" rel="alternate"/>
<author>
<name>Shi, Zhengyan Darius</name>
</author>
<author>
<name>Todadri, Senthil</name>
</author>
<id>https://hdl.handle.net/1721.1/164424</id>
<updated>2026-03-08T03:39:17Z</updated>
<published>2025-12-19T00:00:00Z</published>
<summary type="text">Anyon delocalization transitions out of a disordered fractional quantum anomalous Hall insulator
Shi, Zhengyan Darius; Todadri, Senthil
Motivated by the experimental discovery of the fractional quantum anomalous Hall effect, we develop a theory of doping-induced transitions out of the ν = 2/3 lattice Jain state in the presence of quenched disorder. We show that disorder strongly affects the evolution into the conducting phases described in our previous work. The delocalization of charge 2/3 anyons leads to a chiral superconductor through a direct second-order transition for a smooth random potential with long-wavelength modulations. The longitudinal resistance has a universal peak at the associated quantum critical point. Close to the transition, we show that the superconducting ground state is an “Anomalous Vortex Glass” stabilized in the absence of an external magnetic field. For short-wavelength disorder, this transition generically splits into three distinct ones with intermediate insulating topological phases. If instead the charge 1/3 anyon delocalizes, then at low doping the resulting phase is a Reentrant Integer Quantum Hall state with σ_xy = h/e². At higher doping this undergoes a second transition to a Fermi liquid metal. We show that this framework provides a plausible explanation for the complex phase diagram recently observed in twisted MoTe2 near ν = 2/3 and discuss future experiments that can test our theory in more detail.
</summary>
<dc:date>2025-12-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Data to Transformative Change: Designing Interactive Systems for Citizen Science Empowerment</title>
<link href="https://hdl.handle.net/1721.1/164423" rel="alternate"/>
<author>
<name>Prandi, Catia</name>
</author>
<author>
<name>Herodotou, Christothea</name>
</author>
<author>
<name>Dionisio, Mara</name>
</author>
<author>
<name>Reeves, Neal</name>
</author>
<author>
<name>Reitsma, Lizette</name>
</author>
<author>
<name>Mora, Simone</name>
</author>
<id>https://hdl.handle.net/1721.1/164423</id>
<updated>2025-12-20T03:09:51Z</updated>
<published>2025-07-05T00:00:00Z</published>
<summary type="text">From Data to Transformative Change: Designing Interactive Systems for Citizen Science Empowerment
Prandi, Catia; Herodotou, Christothea; Dionisio, Mara; Reeves, Neal; Reitsma, Lizette; Mora, Simone
Citizen Science (CS) is a research approach in which scientists and everyday people collaborate to address a research problem. Advancements in digital technologies have significantly expanded the reach of Citizen Science by enabling large-scale data collection and collaboration. In addition to its scientific benefits, citizen science enhances participants’ science literacy, fosters public engagement, and promotes collaborative problem-solving. Nevertheless, we believe that the full potential of CS as a collaborative practice for transformative change has not yet been explored. With this in mind, we planned a one-day workshop as a forum for critical discussions and reflections on the role of HCI researchers, designers, and practitioners in designing CS-empowered interactive systems for increasing awareness about social good and societal issues and promoting concrete actions and behavioural change, from data to sustainable futures. Participants will have the opportunity to reflect on and discuss the main open challenges still affecting the design of CS-empowered interactive systems, and to prototype, using data physicalization and co-design, solutions that focus on a specific real-world challenge as presented by experts from Madeira Island, which offers a unique ecosystem to spark reflections on the interplay between sustainability, technology, and CS.
DIS ’25 Companion, Funchal, Portugal
</summary>
<dc:date>2025-07-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lost in Transplantation: Characterizing Racial Gaps in Physician Organ Offer Acceptance</title>
<link href="https://hdl.handle.net/1721.1/164422" rel="alternate"/>
<author>
<name>Adam, Hammaad</name>
</author>
<author>
<name>Bermea, Rene</name>
</author>
<author>
<name>Yang, Ming Ying</name>
</author>
<author>
<name>Celi, Leo Anthony</name>
</author>
<author>
<name>Ghassemi, Marzyeh</name>
</author>
<id>https://hdl.handle.net/1721.1/164422</id>
<updated>2025-12-20T03:09:52Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Lost in Transplantation: Characterizing Racial Gaps in Physician Organ Offer Acceptance
Adam, Hammaad; Bermea, Rene; Yang, Ming Ying; Celi, Leo Anthony; Ghassemi, Marzyeh
There are known racial disparities in the organ transplant allocation system in the United States. While recent research has focused on designing scores and matching algorithms for organ allocation, prior work has yet to study how transplant center physician decisions on offer acceptance—the final step in the allocation process—contribute to the observed disparities. In this paper, we use data from the Scientific Registry of Transplant Recipients to examine the role of candidate race in the acceptance of heart, liver, and lung transplant offers. We find that Black race was associated with significantly lower odds of offer acceptance for livers and lungs. Further, existing allocation scores such as MELD and LAS did not account for clinical factors that made Black patients harder to match. Our analysis also revealed that a donor-candidate race match was associated with significantly higher odds of offer acceptance for hearts, livers, and lungs. Finally, we found that rejecting an offer was associated with lower survival times for all three organs. Our findings demonstrate the additional barriers that Black patients face in accessing organ transplants and the consequences of these barriers on patient survival. Overall, our work highlights the limitations of technical solutions to socio-technical problems; new allocation scores and other algorithmic updates will not improve equity if they do not explicitly account for gaps in the ensuing human decisions.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coboundary Expansion of Coset Complexes</title>
<link href="https://hdl.handle.net/1721.1/164421" rel="alternate"/>
<author>
<name>Kaufman, Tali</name>
</author>
<author>
<name>Oppenheim, Izhar</name>
</author>
<author>
<name>Weinberger, Shmuel</name>
</author>
<id>https://hdl.handle.net/1721.1/164421</id>
<updated>2025-12-20T03:09:54Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">Coboundary Expansion of Coset Complexes
Kaufman, Tali; Oppenheim, Izhar; Weinberger, Shmuel
Coboundary expansion is a high dimensional generalization of the Cheeger constant to simplicial complexes. Originally, this notion was motivated by the fact that it implies topological expansion, but nowadays a significant part of the motivation stems from its deep connection to problems in theoretical computer science such as list agreement expansion and agreement expansion in the low soundness regime. In this paper, we prove coboundary expansion with non-Abelian coefficients for the coset complex construction of Kaufman and Oppenheim. Our proof uses a novel global argument, as opposed to the local-to-global arguments that are used to prove cosystolic expansion.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disclosure without Engagement: An Empirical Review of Positionality Statements at FAccT</title>
<link href="https://hdl.handle.net/1721.1/164420" rel="alternate"/>
<author>
<name>Schroeder, Hope</name>
</author>
<author>
<name>Pareek, Akshansh</name>
</author>
<author>
<name>Barocas, Solon</name>
</author>
<id>https://hdl.handle.net/1721.1/164420</id>
<updated>2025-12-20T03:09:55Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Disclosure without Engagement: An Empirical Review of Positionality Statements at FAccT
Schroeder, Hope; Pareek, Akshansh; Barocas, Solon
Positionality statements have become more common in engineering fields in recent years, despite ongoing debates across many fields about the merits of the practice. In 2024, the Program Chairs of FAccT recommended that authors include positionality statements with their paper submissions, dramatically increasing their use at the conference. In this paper, we analyze all positionality statements at FAccT from 2018 to 2024, highlighting the different aspects of identity commonly disclosed by authors and the degree to which authors explore the potential impact of these aspects of their positionality on their research. While we encountered and highlight a number of thoughtful positionality statements, we also identified and describe several concerning trends, including patterns of identity disclosure without discussion of corresponding impacts, a notable lack of reflection on the potential impacts of industry affiliation, and cases where identity is invoked to excuse what are really methodological choices, among others. We raise particular concerns about the possibility that disclosure without engagement may cause readers to rely on stereotypes to make guesses about the perspectives that individuals from certain groups bring to their work. We conclude by considering potential mechanisms for encouraging reflexivity in the FAccT community, with a focus on setting policies that protect researchers from risks, supporting researchers from backgrounds without existing traditions of reflexive practice, and empirically evaluating the efficacy of interventions designed to foster reflexivity.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>LEAD: Towards Learning-Based Equity-Aware Decarbonization in Ridesharing Platforms</title>
<link href="https://hdl.handle.net/1721.1/164419" rel="alternate"/>
<author>
<name>Sahebdel, Mahsa</name>
</author>
<author>
<name>Zeynali, Ali</name>
</author>
<author>
<name>Bashir, Noman</name>
</author>
<author>
<name>Shenoy, Prashant</name>
</author>
<author>
<name>Hajiesmaili, Mohammad</name>
</author>
<id>https://hdl.handle.net/1721.1/164419</id>
<updated>2025-12-20T03:09:43Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">LEAD: Towards Learning-Based Equity-Aware Decarbonization in Ridesharing Platforms
Sahebdel, Mahsa; Zeynali, Ali; Bashir, Noman; Shenoy, Prashant; Hajiesmaili, Mohammad
Ridesharing platforms such as Uber, Lyft, and DiDi have grown in popularity due to their on-demand availability, ease of use, and commute cost reductions, among other benefits. However, not all ridesharing promises have panned out. Recent studies demonstrate that the expected drop in traffic congestion and reduction in greenhouse gas (GHG) emissions have not materialized. This is primarily due to the substantial distances traveled by the ridesharing vehicles without passengers between rides, known as deadhead miles. Recent work has focused on reducing the impact of deadhead miles while considering additional metrics such as rider waiting time, GHG emissions from deadhead miles, or driver earnings. However, most prior studies consider these environmental and equity-based metrics individually despite them being interrelated. In this paper, we propose a Learning-based Equity-Aware Decarbonization approach, LEAD, for ridesharing platforms. LEAD targets minimizing emissions while ensuring that the driver’s utility, defined as the difference between the trip distance and the deadhead miles, is fairly distributed. LEAD uses reinforcement learning to match riders with drivers based on the expected future utility of drivers and the expected carbon emissions of the platform without increasing the rider waiting times. Extensive experiments based on a real-world ridesharing dataset show that LEAD improves the defined notion of fairness by 150% when compared to emission-aware ride-assignment and reduces emissions by 14.6% while ensuring fairness within 28–52% of the fairness-focused baseline. It also reduces rider wait time by at least 32.1% compared to the fairness-focused baseline.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>LuciEntry: Towards Understanding the Design of Lucid Dream Induction</title>
<link href="https://hdl.handle.net/1721.1/164418" rel="alternate"/>
<author>
<name>Wang, Po-Yao (Cosmos)</name>
</author>
<author>
<name>Fang, Xiao Zoe</name>
</author>
<author>
<name>Ducos, Gabriel</name>
</author>
<author>
<name>Lee, Nathaniel Yung Xiang</name>
</author>
<author>
<name>Loose, Antony</name>
</author>
<author>
<name>Rajesh, Rohit</name>
</author>
<author>
<name>Botheju, Nethmini</name>
</author>
<author>
<name>Chen, Eric</name>
</author>
<author>
<name>Montoya, Maria</name>
</author>
<author>
<name>Kitson, Alexandra</name>
</author>
<author>
<name>Konkoly, Karen</name>
</author>
<author>
<name>Sagi, Rohan</name>
</author>
<author>
<name>Patibanda, Rakesh</name>
</author>
<author>
<name>Whitmore, Nathan</name>
</author>
<author>
<name>Jafarzadeh Esfahani, Mahdad</name>
</author>
<author>
<name>Deng, Jialin</name>
</author>
<author>
<name>Bu, Jiajun</name>
</author>
<author>
<name>Dresler, Martin</name>
</author>
<author>
<name>Elvitigala, Don Samitha</name>
</author>
<author>
<name>Semertzidis, Nathan</name>
</author>
<author>
<name>Mueller, Florian</name>
</author>
<id>https://hdl.handle.net/1721.1/164418</id>
<updated>2025-12-19T05:31:15Z</updated>
<published>2025-07-04T00:00:00Z</published>
<summary type="text">LuciEntry: Towards Understanding the Design of Lucid Dream Induction
Wang, Po-Yao (Cosmos); Fang, Xiao Zoe; Ducos, Gabriel; Lee, Nathaniel Yung Xiang; Loose, Antony; Rajesh, Rohit; Botheju, Nethmini; Chen, Eric; Montoya, Maria; Kitson, Alexandra; Konkoly, Karen; Sagi, Rohan; Patibanda, Rakesh; Whitmore, Nathan; Jafarzadeh Esfahani, Mahdad; Deng, Jialin; Bu, Jiajun; Dresler, Martin; Elvitigala, Don Samitha; Semertzidis, Nathan; Mueller, Florian
Lucid dreaming, a state in which people become aware that they are dreaming, is known for its many mental and physical health benefits. However, most lucid dream induction techniques, such as reality testing, require significant time and effort to master, creating a barrier for people seeking these experiences. We designed LuciEntry, a portable interactive prototype aimed at helping people induce lucid dreaming through well-timed visual and auditory cues. We conducted a lab and a field study to understand LuciEntry’s user experience. The interview data allowed us to identify three themes. Building on these findings and our design practice, we derived seven considerations to guide the design of future lucid dream systems. Ultimately, this work aims to inspire further research into interactive technologies for altered states of consciousness.
DIS ’25, Funchal, Portugal
</summary>
<dc:date>2025-07-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Reality of AI and Biorisk</title>
<link href="https://hdl.handle.net/1721.1/164417" rel="alternate"/>
<author>
<name>Peppin, Aidan</name>
</author>
<author>
<name>Reuel, Anka</name>
</author>
<author>
<name>Casper, Stephen</name>
</author>
<author>
<name>Jones, Elliot</name>
</author>
<author>
<name>Strait, Andrew</name>
</author>
<author>
<name>Anwar, Usman</name>
</author>
<author>
<name>Agrawal, Anurag</name>
</author>
<author>
<name>Kapoor, Sayash</name>
</author>
<author>
<name>Koyejo, Sanmi</name>
</author>
<author>
<name>Pellat, Marie</name>
</author>
<author>
<name>Bommasani, Rishi</name>
</author>
<author>
<name>Frosst, Nick</name>
</author>
<author>
<name>Hooker, Sara</name>
</author>
<id>https://hdl.handle.net/1721.1/164417</id>
<updated>2025-12-19T05:31:08Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">The Reality of AI and Biorisk
Peppin, Aidan; Reuel, Anka; Casper, Stephen; Jones, Elliot; Strait, Andrew; Anwar, Usman; Agrawal, Anurag; Kapoor, Sayash; Koyejo, Sanmi; Pellat, Marie; Bommasani, Rishi; Frosst, Nick; Hooker, Sara
To accurately and confidently answer the question “could an AI model or system increase biorisk”, it is necessary to have both a sound theoretical threat model for how AI models or systems could increase biorisk and a robust method for testing that threat model. This paper provides an analysis of existing available research surrounding two AI and biorisk threat models: 1) access to information and planning via large language models (LLMs), and 2) the use of AI-enabled biological tools (BTs) in synthesizing novel biological artifacts. We find that existing studies around AI-related biorisk are nascent, often speculative in nature, or limited in terms of their methodological maturity and transparency. The available literature suggests that current LLMs and BTs do not pose an immediate risk, and more work is needed to develop rigorous approaches to understanding how future models could increase biorisks. We end with recommendations about how empirical work can be expanded to more precisely target biorisk and ensure rigor and validity of findings.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>SoS Certifiability of Subgaussian Distributions and Its Algorithmic Applications</title>
<link href="https://hdl.handle.net/1721.1/164416" rel="alternate"/>
<author>
<name>Diakonikolas, Ilias</name>
</author>
<author>
<name>Hopkins, Samuel</name>
</author>
<author>
<name>Pensia, Ankit</name>
</author>
<author>
<name>Tiegel, Stefan</name>
</author>
<id>https://hdl.handle.net/1721.1/164416</id>
<updated>2025-12-19T05:31:17Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">SoS Certifiability of Subgaussian Distributions and Its Algorithmic Applications
Diakonikolas, Ilias; Hopkins, Samuel; Pensia, Ankit; Tiegel, Stefan
We prove that there is a universal constant C &gt; 0 so that for every d ∈ ℕ, every centered subgaussian distribution D on ℝᵈ, and every even p ∈ ℕ, the d-variate polynomial (Cp)^(p/2) · ‖v‖₂^p − E_{X∼D} ⟨v,X⟩^p is a sum of squares of polynomials. This establishes that every subgaussian distribution is SoS-certifiably subgaussian—a condition that yields efficient learning algorithms for a wide variety of high-dimensional statistical tasks. As a direct corollary, we obtain computationally efficient algorithms with near-optimal guarantees for the following tasks, when given samples from an arbitrary subgaussian distribution: robust mean estimation, list-decodable mean estimation, clustering mean-separated mixture models, robust covariance-aware mean estimation, robust covariance estimation, and robust linear regression. Our proof makes essential use of Talagrand’s generic chaining/majorizing measures theorem.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using collective dialogues and AI to find common ground between Israeli and Palestinian peacebuilders</title>
<link href="https://hdl.handle.net/1721.1/164415" rel="alternate"/>
<author>
<name>Konya, Andrew</name>
</author>
<author>
<name>Thorburn, Luke</name>
</author>
<author>
<name>Almasri, Wasim</name>
</author>
<author>
<name>Leshem, Oded Adomi</name>
</author>
<author>
<name>Procaccia, Ariel</name>
</author>
<author>
<name>Schirch, Lisa</name>
</author>
<author>
<name>Bakker, Michiel</name>
</author>
<id>https://hdl.handle.net/1721.1/164415</id>
<updated>2025-12-19T05:31:06Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Using collective dialogues and AI to find common ground between Israeli and Palestinian peacebuilders
Konya, Andrew; Thorburn, Luke; Almasri, Wasim; Leshem, Oded Adomi; Procaccia, Ariel; Schirch, Lisa; Bakker, Michiel
A growing body of work has shown that AI-assisted methods — leveraging large language models, social choice methods, and collective dialogues — can help navigate polarization and surface common ground in controlled lab settings. But what can these approaches contribute in real-world contexts? We present a case study applying these techniques to find common ground between Israeli and Palestinian peacebuilders in the period following October 7th, 2023. From April to July 2024 an iterative deliberative process combining LLMs, bridging-based ranking, and collective dialogues was conducted in partnership with the Alliance for Middle East Peace. Around 138 civil society peacebuilders participated, including Israeli Jews, Palestinian citizens of Israel, and Palestinians from the West Bank and Gaza. The process resulted in a set of collective statements, including demands to world leaders, with at least 84% agreement from participants on each side. In this paper, we document the process, results, challenges, and important open questions.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recourse, Repair, Reparation, &amp; Prevention: A Stakeholder Analysis of AI Supply Chains</title>
<link href="https://hdl.handle.net/1721.1/164414" rel="alternate"/>
<author>
<name>Hopkins, Aspen</name>
</author>
<author>
<name>Struckman, Isabella</name>
</author>
<author>
<name>Klyman, Kevin</name>
</author>
<author>
<name>Silbey, Susan S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164414</id>
<updated>2025-12-19T05:31:11Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Recourse, Repair, Reparation, &amp; Prevention: A Stakeholder Analysis of AI Supply Chains
Hopkins, Aspen; Struckman, Isabella; Klyman, Kevin; Silbey, Susan S.
The AI industry is exploding in popularity, with increasing attention to potential harms and unwanted consequences. In the current digital ecosystem, AI deployments are often the product of AI supply chains (AISC): networks of outsourced models, data, and tooling through which multiple entities contribute to AI development and distribution. AI supply chains lack the modularity, redundancies, or conventional supply chain practices that enable identification, isolation, and easy correction of failures, exacerbating the already difficult processes of responding to ML-generated harms. As the stakeholders participating in and impacted by AISCs have scaled and diversified, so too have the risks they face. In this stakeholder analysis of AI supply chains, we consider who participates in AISCs, what harms they face, where sources of harm lie, and how market dynamics and power differentials inform the type and probability of remedies. Because AI supply chains are purposely invented and implemented, they may be designed to account for, rather than ignore, the complexities, consequences, and risks of deploying AI systems. To enable responsible design and management of AISCs, we offer a typology of responses to AISC-induced harms: recourse, repair, reparation or prevention. We apply this typology to stakeholders participating in a health-care AISC across three stylized markets—vertical integration, horizontal integration, free market—to illustrate how stakeholder positioning and power within an AISC may shape responses to an experienced harm.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>When to Ask a Question: Understanding Communication Strategies in Generative AI Tools</title>
<link href="https://hdl.handle.net/1721.1/164413" rel="alternate"/>
<author>
<name>Park, Charlotte</name>
</author>
<author>
<name>Donahue, Kate</name>
</author>
<author>
<name>Raghavan, Manish</name>
</author>
<id>https://hdl.handle.net/1721.1/164413</id>
<updated>2025-12-19T05:31:04Z</updated>
<published>2025-06-12T00:00:00Z</published>
<summary type="text">When to Ask a Question: Understanding Communication Strategies in Generative AI Tools
Park, Charlotte; Donahue, Kate; Raghavan, Manish
Generative AI tools (GAITs) fundamentally differ from traditional machine learning tools in that they allow users to provide as much or as little information as they choose in their inputs. This flexibility often leads users to omit certain details, relying on the GAIT to infer and fill in less critical information based on distributional knowledge of user preferences. Inferences about preferences lead to natural questions about fairness, since a GAIT’s “best guess” may skew towards the preferences of larger groups at the expense of smaller ones. Unlike more traditional recommender systems, GAITs can acquire additional information about a user’s preferences through feedback or by explicitly soliciting it. This creates an interesting communication challenge: the user is aware of their specific preference, while the GAIT has knowledge of the overall distribution of preferences, and both parties can only exchange a limited amount of information. In this work, we present a mathematical model to describe human-AI co-creation of content under information asymmetry. Our results suggest that GAITs can use distributional information about overall preferences to determine the “right” questions to ask to maximize both welfare and fairness, opening up a rich design space in human-AI collaboration.
UMAP Adjunct ’25, New York City, NY, USA
</summary>
<dc:date>2025-06-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Cloud Next Door: Investigating the Environmental and Socioeconomic Strain of Datacenters on Local Communities</title>
<link href="https://hdl.handle.net/1721.1/164412" rel="alternate"/>
<author>
<name>Ngata, Wacuka M</name>
</author>
<author>
<name>Bashir, Noman</name>
</author>
<author>
<name>Westerlaken, Michelle</name>
</author>
<author>
<name>Liote, Laurent</name>
</author>
<author>
<name>Chandio, Yasra</name>
</author>
<author>
<name>Olivetti, Elsa</name>
</author>
<id>https://hdl.handle.net/1721.1/164412</id>
<updated>2025-12-19T05:31:54Z</updated>
<published>2025-07-21T00:00:00Z</published>
<summary type="text">The Cloud Next Door: Investigating the Environmental and Socioeconomic Strain of Datacenters on Local Communities
Ngata, Wacuka M; Bashir, Noman; Westerlaken, Michelle; Liote, Laurent; Chandio, Yasra; Olivetti, Elsa
Datacenters have become the backbone of modern digital infrastructure, powering the rapid rise of artificial intelligence and promising economic growth and technological progress. However, this expansion has brought growing tensions in the local communities where datacenters are already situated or being proposed. While the mainstream discourse often focuses on energy usage and carbon footprint of the computing sector at a global scale, the local socio-environmental consequences—such as health impacts, water usage, noise pollution, infrastructural strain, and economic burden—remain largely underexplored and poorly addressed. In this work, we surface these community-level consequences through a mixed-methods study that combines quantitative data with qualitative insights. Focusing on Northern Virginia’s “Data Center Alley,” we highlight how datacenter growth reshapes local environments and everyday life, and examine the power dynamics that determine who benefits and who bears the costs. Our goal is to bring visibility to these impacts and prompt more equitable and informed decisions about the future of digital infrastructure.
COMPASS ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-07-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>SuperSONIC: Cloud-Native Infrastructure for ML Inferencing</title>
<link href="https://hdl.handle.net/1721.1/164411" rel="alternate"/>
<author>
<name>Kondratyev, Dmitry</name>
</author>
<author>
<name>Riedel, Benedikt</name>
</author>
<author>
<name>Chou, Yuan-Tang</name>
</author>
<author>
<name>Cochran-Branson, Miles</name>
</author>
<author>
<name>Paladino, Noah</name>
</author>
<author>
<name>Schultz, David</name>
</author>
<author>
<name>Liu, Mia</name>
</author>
<author>
<name>Duarte, Javier</name>
</author>
<author>
<name>Harris, Philip</name>
</author>
<author>
<name>Hsu, Shih-Chieh</name>
</author>
<id>https://hdl.handle.net/1721.1/164411</id>
<updated>2025-12-19T05:31:03Z</updated>
<published>2025-07-18T00:00:00Z</published>
<summary type="text">SuperSONIC: Cloud-Native Infrastructure for ML Inferencing
Kondratyev, Dmitry; Riedel, Benedikt; Chou, Yuan-Tang; Cochran-Branson, Miles; Paladino, Noah; Schultz, David; Liu, Mia; Duarte, Javier; Harris, Philip; Hsu, Shih-Chieh
The increasing computational demand from growing data rates and complex machine learning (ML) algorithms in large-scale scientific experiments has driven the adoption of the Services for Optimized Network Inference on Coprocessors (SONIC) approach. SONIC accelerates ML inference by offloading it to local or remote coprocessors to optimize resource utilization. Leveraging its portability to different types of coprocessors, SONIC enhances data processing and model deployment efficiency for cutting-edge research in high energy physics (HEP) and multi-messenger astrophysics (MMA). We developed the SuperSONIC project, a scalable server infrastructure for SONIC, enabling the deployment of computationally intensive tasks to Kubernetes clusters equipped with graphics processing units (GPUs). Using NVIDIA Triton Inference Server, SuperSONIC decouples client workflows from server infrastructure, standardizing communication, optimizing throughput, load balancing, and monitoring. SuperSONIC has been successfully deployed for the CMS and ATLAS experiments at the CERN Large Hadron Collider (LHC), the IceCube Neutrino Observatory (IceCube), and the Laser Interferometer Gravitational-Wave Observatory (LIGO) and tested on Kubernetes clusters at Purdue University, the National Research Platform (NRP), and the University of Chicago. SuperSONIC addresses the challenges of the Cloud-native era by providing a reusable, configurable framework that enhances the efficiency of accelerator-based inference deployment across diverse scientific domains and industries.
PEARC ’25, Columbus, OH, USA
</summary>
<dc:date>2025-07-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic engineering of controlled, localized oligonucleotide delivery systems for wound angiogenesis</title>
<link href="https://hdl.handle.net/1721.1/164410" rel="alternate"/>
<author>
<name>Berger, Adam G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164410</id>
<updated>2025-12-19T04:11:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Systematic engineering of controlled, localized oligonucleotide delivery systems for wound angiogenesis
Berger, Adam G.
The standard of care for diabetic wounds has remained relatively unchanged for decades, resulting in patients with wounds that do not heal on meaningful time scales, referred to as ulcers, and high rates of recurrence for patients whose wounds do heal. This common complication of diabetes decreases quality of life, increases mortality, and raises health care costs. New paradigms to treat these wounds remain a formidable but critical challenge.&#13;
&#13;
Addressing diabetic ulcers at the molecular level may decrease healing time and prevent recurrence. Impaired blood vessel formation, or angiogenesis, in diabetic ulcers is an important target pathway. Angiogenesis is needed to bring oxygen, nutrients, signaling cues, and cells to newly formed tissue while removing waste. Nucleic acid oligonucleotide therapies, such as small interfering RNAs (siRNAs) or microRNA inhibitors (anti-miRs), that regulate gene expression at the post-transcriptional level, hold particular promise for promoting angiogenesis and wound healing; however, the large size and negative charge of these therapies require drug carriers to mediate their biological effect.&#13;
&#13;
In this thesis, we leverage sequential electrostatic adsorption of oligonucleotide therapy and polyelectrolytes into thin film coatings on commercial wound dressings through the layer-by-layer (LbL) process. These dressings package oligonucleotide, enhance its transfection efficacy, and control its temporal release locally to the wound bed. After initial validation experiments, we sought to systematically understand our drug carrier system and use this insight to engineer better wound dressings. First, we developed a proof-of-concept anti-miR-coated dressing and showed its efficacy in promoting both wound closure and sex-dependent angiogenesis. We found that therapy released from coated dressings had a preferential association with different wound cell types, particularly endothelial cells. We then sought to uncover how changes in the oligonucleotide structure itself may alter interactions with transfection polymers in thin film coatings. We found that binding with certain polyelectrolytes differed based on whether the therapy was a flexible single stranded anti-miR or a more rigid double stranded helix siRNA. We also showed how chemically modified nucleotides, such as locked nucleic acid and 2’-O-methyl RNA, can modulate affinity to polyelectrolytes and ultimately impact transfection efficacy. We also elucidated how physicochemical properties of the hydrolysable transfection-enhancing poly(β-aminoester) polymer mediate its efficiency in transfecting oligonucleotide therapy. We demonstrated that a more hydrophobic polymer enhanced transfection efficacy through its ability to facilitate permeation of biological barriers. Finally, we identified how modulation of the anionic excipients contained in these thin film coatings can be leveraged to vary the release kinetics from coated wound dressings. We engineered formulations that released on a fast or slow time scale. 
We observed that while both release time scales promoted efficacy in wound closure, they did so through potentially different mechanisms despite the same putative pro-angiogenic anti-miR therapy.&#13;
&#13;
In sum, this thesis elucidates how physicochemical properties and formulation of coated wound dressings alter their interfacial effects with biological systems. We use this knowledge to rationally design better drug carriers that can deliver pro-angiogenic oligonucleotide therapeutics to the wound bed. The findings have broad applications in the delivery of nucleic acid therapies for a wide host of diseases where local delivery to the injured tissue could prove beneficial. Ultimately, we also advance our pro-angiogenic coated wound dressing strategy towards clinical translation. Our strategy has the potential to provide a new, targeted therapeutic paradigm to help those suffering from diabetic ulcers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What I Don’t Get About AI . . .</title>
<link href="https://hdl.handle.net/1721.1/164409" rel="alternate"/>
<author>
<name>Wright, Randall S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164409</id>
<updated>2025-12-19T05:31:43Z</updated>
<published>2024-11-05T00:00:00Z</published>
<summary type="text">What I Don’t Get About AI . . .
Wright, Randall S.
In a recent MIT News article titled “Explained: Generative AI,” Adam Zewe (2023) writes&#13;
&#13;
But what do people really mean when they say ‘generative AI?’&#13;
&#13;
Before the generative AI boom of the past few years, when people talked about AI, typically they were talking about machine-learning models that can learn to make a prediction based on data. For instance, such models are trained, using millions of examples, to predict whether a certain X-ray shows signs of a tumor or if a particular borrower is likely to default on a loan.&#13;
&#13;
Generative AI can be thought of as a machine-learning model that is trained to create new data, rather than making a prediction about a specific dataset. A generative AI system is one that learns to generate more objects that look like the data it was trained on.
</summary>
<dc:date>2024-11-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bringing a Working-Class Archive Online: Multimodal Storytelling in a Post-Industrial City</title>
<link href="https://hdl.handle.net/1721.1/164408" rel="alternate"/>
<author>
<name>Walley, Christine</name>
</author>
<id>https://hdl.handle.net/1721.1/164408</id>
<updated>2025-12-19T05:31:50Z</updated>
<published>2024-05-03T00:00:00Z</published>
<summary type="text">Bringing a Working-Class Archive Online: Multimodal Storytelling in a Post-Industrial City
Walley, Christine
We find it familiar to consider objects as useful or aesthetic, as necessities or vain indulgences. We are on less familiar ground when we consider objects as companions to our emotional lives or as provocations to thought. The notion of evocative objects brings together these two less familiar ideas, underscoring the inseparability of thought and feeling in our relationship to things. We think with the objects we love; we love the objects we think with.
</summary>
<dc:date>2024-05-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>What is a Right?</title>
<link href="https://hdl.handle.net/1721.1/164407" rel="alternate"/>
<author>
<name>Setiya, Kieran</name>
</author>
<id>https://hdl.handle.net/1721.1/164407</id>
<updated>2025-12-19T05:31:47Z</updated>
<published>2025-01-02T00:00:00Z</published>
<summary type="text">What is a Right?
Setiya, Kieran
This paper argues for a theory of natural rights on which they are explained in terms of reasons supplied by rational consent. When B has a claim-right against A that A φ, A’s non-consent is not a reason for B not to simply make A φ. This theory solves a puzzle that defeats alternative views, including standard will and interest theories, the demand theory of rights, and the view that rights are irreducible or primitive.
</summary>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Afterlife of Energy: Post-carbon and Feminist Post-work Politics</title>
<link href="https://hdl.handle.net/1721.1/164406" rel="alternate"/>
<author>
<name>Ghosn, Rania</name>
</author>
<author>
<name>Vronskaya, Alla</name>
</author>
<author>
<name>Jia, Ruo</name>
</author>
<author>
<name>Pohl, Ethel Baraona</name>
</author>
<author>
<name>Dharia, Namita Vijay</name>
</author>
<author>
<name>Aidoo, Fallon Samuels</name>
</author>
<author>
<name>Wolff, Ilze</name>
</author>
<id>https://hdl.handle.net/1721.1/164406</id>
<updated>2025-12-19T05:31:45Z</updated>
<published>2024-07-02T00:00:00Z</published>
<summary type="text">The Afterlife of Energy: Post-carbon and Feminist Post-work Politics
Ghosn, Rania; Vronskaya, Alla; Jia, Ruo; Pohl, Ethel Baraona; Dharia, Namita Vijay; Aidoo, Fallon Samuels; Wolff, Ilze
In the conclusion to her book The Birth of Energy: Fossil Fuels, Thermodynamics, and the Politics of Work, political scientist Cara Daggett considers “A Post-Work Energy Politics” in which she examines the historical coupling of energy and work—meaning human, waged work—in an invitation to disassociate their values and futures. The exponential power of fossil fuels animated the pipedream that powerful, inorganic slaves could substitute unfree human labor, ideas that have driven European imperialism. Fossil fuel systems did not lead, however, to a world beyond work. Rather, today’s “patriarchal slave states” continue to manage the project of putting the world to work through the maximization of productivity, and the subordination of racialized, immigrant, and gendered bodies—who would work for lower, or for no, wages. “The project of work,” Daggett argues, “is in tension with the project of life.” 1 And the rise of “work–life balance” is a mere tactic of governance in which the enemy is fatigue, exhaustion, and burn-out. She suggests, in turn, an alliance between post-carbon and feminist post-work politics and asks: what might it mean for energy politics to refer to the politics of ensuring public vitality? In order to advance a feminist revaluation of work, Daggett draws on Kathi Weeks’s The Problem with Work to outline a project that makes two utopian demands. One demand articulates a paradoxical relationship between the pragmatism of (present) demands and the speculative seeds of possibility; a second demand outlines a utopian form for such politics: partial, fragmented kin to the genre of the manifesto. Daggett concludes with an invitation that “a radical planet politics, if it seeks to contest ecomodernist claims, needs its own politics of pleasure.” 2 In an echo to Daggett’s invitation, the authors of this Educators’ Roundtable were invited to contribute a short text that picks up on the possibilities of a post-carbon, post-work politics.
</summary>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>‘May Our Egos Die So That the World May Live’</title>
<link href="https://hdl.handle.net/1721.1/164405" rel="alternate"/>
<author>
<name>Gupta, Huma</name>
</author>
<id>https://hdl.handle.net/1721.1/164405</id>
<updated>2025-12-19T05:31:51Z</updated>
<published>2024-07-02T00:00:00Z</published>
<summary type="text">‘May Our Egos Die So That the World May Live’
Gupta, Huma
This image-based essay reflects upon the author’s experience of running an experimental filmmaking workshop titled Climate Futures, Cities Past in the spring of 2023 at MIT’s School of Architecture featuring stills from four student films set in Greece, Italy, Pakistan, and Syria. It explores how architectural pedagogy can intersect with filmmaking to offer a critical space outside the studio or seminar paper. Engaging eco-critical and narrative approaches of Stefanie K. Dunning, Jennifer Fay, Ursula K. Le Guin, Donna Haraway, Saidiya Hartman, Adrian J. Ivakhiv, and Ousmane Sembène, it explores how ‘cinema might teach us to die’ or rather, embrace a different eschatological paradigm that moves beyond individual authorship, accomplishment, and post-mortem legacy towards more mutualist, collectivist, and anarchic models of existence. It argues that filmmaking as inquiry can offer a way to collect different kinds of stories that help facilitate the messy, uncomfortable, and wildly creative processes of unworlding and reworlding.
</summary>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Oil to Information: Caudill, Rowlett, Scott and Architectures of the Energy Crisis</title>
<link href="https://hdl.handle.net/1721.1/164404" rel="alternate"/>
<author>
<name>Hanly, B. Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/164404</id>
<updated>2025-12-19T05:31:53Z</updated>
<published>2024-07-02T00:00:00Z</published>
<summary type="text">From Oil to Information: Caudill, Rowlett, Scott and Architectures of the Energy Crisis
Hanly, B. Jack
This paper traces the history of architecture-engineering firm Caudill Rowlett Scott (CRS), roughly 1948–1983, in the context of the postwar oil economy and the 1973 energy crisis. The paper examines CRS’s transformation from a design firm into an energy conglomerate over the course of three decades, as it both concretized the fossil economy between Houston and Saudi Arabia and modeled its own corporate structure after its oil clientele. Analyzing numerous CRS projects designed and built for the oil industry, from corporate office towers to industrial training colleges, the paper looks at a moment in which energy systems and the architectural profession were coproduced through the discourses, practices, and institutions of oil at its most vulnerable historical inflection points. CRS thereby epitomized an energy transition from oil as a substance to oil as information, where a growing postindustrial society would leverage the immaterial dimensions of energy as a foundation for building.
</summary>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Demonstrating Xstrings: 3D Printing Cable-driven Mechanism for Actuation, Deformation, and Manipulation</title>
<link href="https://hdl.handle.net/1721.1/164403" rel="alternate"/>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Feng, Shuyue</name>
</author>
<author>
<name>Perroni-Scharf, Maxine</name>
</author>
<author>
<name>Liu, Yujia</name>
</author>
<author>
<name>Guan, Emily</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/164403</id>
<updated>2025-12-19T05:30:56Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Demonstrating Xstrings: 3D Printing Cable-driven Mechanism for Actuation, Deformation, and Manipulation
Li, Jiaji; Feng, Shuyue; Perroni-Scharf, Maxine; Liu, Yujia; Guan, Emily; Mueller, Stefanie
In this Demo, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we developed a design tool that allows users to embed cable-driven mechanisms into the object geometry based on the desired interaction by automatically placing joints and cables at the respective locations. The application potential of Xstrings is demonstrated through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>My CXL Pool Obviates Your PCIe Switch</title>
<link href="https://hdl.handle.net/1721.1/164402" rel="alternate"/>
<author>
<name>Zhong, Yuhong</name>
</author>
<author>
<name>Berger, Daniel</name>
</author>
<author>
<name>Zardoshti, Pantea</name>
</author>
<author>
<name>Saurez, Enrique</name>
</author>
<author>
<name>Nelson, Jacob</name>
</author>
<author>
<name>Psistakis, Antonis</name>
</author>
<author>
<name>Fried, Joshua</name>
</author>
<author>
<name>Cidon, Asaf</name>
</author>
<id>https://hdl.handle.net/1721.1/164402</id>
<updated>2025-12-19T05:31:09Z</updated>
<published>2025-06-06T00:00:00Z</published>
<summary type="text">My CXL Pool Obviates Your PCIe Switch
Zhong, Yuhong; Berger, Daniel; Zardoshti, Pantea; Saurez, Enrique; Nelson, Jacob; Psistakis, Antonis; Fried, Joshua; Cidon, Asaf
Pooling PCIe devices across multiple hosts offers a promising solution to mitigate stranded I/O resources, enhance device utilization, address device failures, and reduce total cost of ownership. The only viable options today are PCIe switches, which decouple PCIe devices from hosts by connecting them through a hardware switch. However, the high cost and limited flexibility of PCIe switches hinder their widespread adoption beyond specialized datacenter use cases.&#13;
This paper argues that PCIe device pooling can be effectively implemented in software using CXL memory pools. CXL memory pools improve memory utilization and already have positive return on investment. We find that, once CXL pools are in place, they can serve as a building block for pooling any kind of PCIe device. We demonstrate that PCIe devices can directly use CXL memory as I/O buffers without device modifications, which enables routing PCIe traffic through CXL pool memory. This software-based approach is deployable on today's hardware and is more flexible than hardware PCIe switches. In particular, we explore how disaggregating devices such as NICs can transform datacenter infrastructure.
HOTOS 25, May 14–16, 2025, Banff, AB, Canada
</summary>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>PolyMOF nanoparticles constructed from intrinsically microporous polymer ligand towards scalable composite membranes for CO2 separation</title>
<link href="https://hdl.handle.net/1721.1/164401" rel="alternate"/>
<author>
<name>Lee, Tae Hoon</name>
</author>
<author>
<name>Lee, Byung Kwan</name>
</author>
<author>
<name>Yoo, Seung Yeon</name>
</author>
<author>
<name>Lee, Hyunhee</name>
</author>
<author>
<name>Wu, Wan-Ni</name>
</author>
<author>
<name>Smith, Zachary P</name>
</author>
<author>
<name>Park, Ho Bum</name>
</author>
<id>https://hdl.handle.net/1721.1/164401</id>
<updated>2025-12-18T05:49:51Z</updated>
<published>2023-12-14T00:00:00Z</published>
<summary type="text">PolyMOF nanoparticles constructed from intrinsically microporous polymer ligand towards scalable composite membranes for CO2 separation
Lee, Tae Hoon; Lee, Byung Kwan; Yoo, Seung Yeon; Lee, Hyunhee; Wu, Wan-Ni; Smith, Zachary P; Park, Ho Bum
Integrating different modification strategies into a single step to achieve the desired properties of metal–organic frameworks (MOFs) has been very synthetically challenging, especially in developing advanced MOF/polymer mixed matrix membranes (MMMs). Herein, we report a polymer–MOF (polyMOF) system constructed from a carboxylated polymer with intrinsic microporosity (cPIM-1) ligand. This intrinsically microporous ligand could coordinate with metals, leading to ~100 nm-sized polyMOF nanoparticles. Compared to control MOFs, these polyMOFs exhibit enhanced ultramicroporosity for efficient molecular sieving, and they have better dispersion properties in casting solutions to prepare MMMs. Ultimately, integrating coordination chemistries through the cPIM-1 and polymer-based functionality into porous materials results in polyMOF/PIM-1 MMMs that display excellent CO2 separation performance (surpassing the CO2/N2 and CO2/CH4 upper bounds). In addition to exploring the physicochemical and transport properties of this polyMOF system, scalability has been demonstrated by converting the developed MMM material into large-area (400 cm2) thin-film nanocomposite (TFN) membranes.
</summary>
<dc:date>2023-12-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single Layer Silk and Cotton Woven Fabrics for Acoustic Emission and Active Sound Suppression</title>
<link href="https://hdl.handle.net/1721.1/164400" rel="alternate"/>
<author>
<name>Yang, Grace H</name>
</author>
<author>
<name>Lin, Jinuan</name>
</author>
<author>
<name>Cheung, Henry</name>
</author>
<author>
<name>Rui, Guanchun</name>
</author>
<author>
<name>Zhao, Yongyi</name>
</author>
<author>
<name>Balachander, Latika</name>
</author>
<author>
<name>Joo, Taigyu</name>
</author>
<author>
<name>Lee, Hyunhee</name>
</author>
<author>
<name>Smith, Zachary P</name>
</author>
<author>
<name>Zhu, Lei</name>
</author>
<author>
<name>Ma, Chu</name>
</author>
<author>
<name>Fink, Yoel</name>
</author>
<id>https://hdl.handle.net/1721.1/164400</id>
<updated>2025-12-18T05:49:53Z</updated>
<published>2024-04-01T00:00:00Z</published>
<summary type="text">Single Layer Silk and Cotton Woven Fabrics for Acoustic Emission and Active Sound Suppression
Yang, Grace H; Lin, Jinuan; Cheung, Henry; Rui, Guanchun; Zhao, Yongyi; Balachander, Latika; Joo, Taigyu; Lee, Hyunhee; Smith, Zachary P; Zhu, Lei; Ma, Chu; Fink, Yoel
Whether intentionally generating acoustic waves or attempting to mitigate unwanted noise, sound control is an area of challenge and opportunity. This study investigates traditional fabrics as emitters and suppressors of sound. When attached to a single strand of a piezoelectric fiber actuator, a silk fabric emits up to 70 dB of sound. Despite the complex fabric structure, vibrometer measurements reveal behavior reminiscent of a classical thin plate. Fabric pore size relative to the viscous boundary layer thickness is found—through comparative fabric analysis—to influence acoustic‐emission efficiency. Sound suppression is demonstrated using two distinct mechanisms. In the first, direct acoustic interference is shown to reduce sound by up to 37 dB. The second relies on pacifying the fabric vibrations by the piezoelectric fiber, reducing the amplitude of vibration waves by 95% and attenuating the transmitted sound by up to 75%. Interestingly, this vibration‐mediated suppression in principle reduces sound in an unlimited volume. It also allows the acoustic reflectivity of the fabric to be dynamically controlled, increasing by up to 68%. The sound emission and suppression efficiency of a 130 µm silk fabric presents opportunities for sound control in a variety of applications ranging from apparel to transportation to architecture.
</summary>
<dc:date>2024-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implications of changing the base raw material – the case of license plate manufacturing</title>
<link href="https://hdl.handle.net/1721.1/164399" rel="alternate"/>
<author>
<name>Uygun, Yilmaz</name>
</author>
<author>
<name>Mohammadian, Noushin</name>
</author>
<author>
<name>Un Nisa, Mehr</name>
</author>
<id>https://hdl.handle.net/1721.1/164399</id>
<updated>2025-12-18T05:49:47Z</updated>
<published>2024-12-31T00:00:00Z</published>
<summary type="text">Implications of changing the base raw material – the case of license plate manufacturing
Uygun, Yilmaz; Mohammadian, Noushin; Un Nisa, Mehr
License plates to uniquely identify vehicles mainly use aluminum as the base material. Currently, no distinction is made between different use cases of license plates, such as short-term usage for test drives and transportation purposes, which do not require such long-lasting materials from either a cost or a sustainability perspective. This paper presents a methodology to select the best material for different use cases under the holistic consideration of specifically defined criteria as to material properties, sustainability aspects, and supply chain implications. We show that there are several candidate materials for different use cases that stand out as the importance of these numerous criteria is varied. In addition, the paper delves deeper into the sustainability aspect by means of a comprehensive System Dynamics model. We show that a scenario in which the company picks up used license plates and relies on a logistics service provider to deliver them to an external recycling service provider yields the best results.
</summary>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metalite, a new class of composite laminates with unique properties</title>
<link href="https://hdl.handle.net/1721.1/164398" rel="alternate"/>
<author>
<name>Miravete, Antonio</name>
</author>
<id>https://hdl.handle.net/1721.1/164398</id>
<updated>2025-12-18T05:49:40Z</updated>
<published>2024-07-21T00:00:00Z</published>
<summary type="text">Metalite, a new class of composite laminates with unique properties
Miravete, Antonio
Metalite is a new class of antisymmetric composite laminates composed of angle-plies, 0-degree, and 90-degree plies, presenting unique properties. These include extremely thin laminates suitable for minimum gauge applications, remarkable weight savings compared to conventional quads, adjustable zero and negative coefficients of thermal expansion (CTE), ease of manufacturing, excellent ability to adjust mode frequency, change sound radiation characteristics, and high tunability. In this study, Metalite laminates ranging from 3 to 8 plies are described using their feasible spaces and compared with quads, detailing the weight savings achieved for hard, soft, and neutral laminates. Through an experimental study, the CTE value of a hybrid Metalite is correlated with theory, demonstrating how to tune zero and negative CTE values. The proposed work offers significant benefits through practical solutions for designing and manufacturing lightweight composite laminates with unique properties.
</summary>
<dc:date>2024-07-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tertiary-Amine-Functional Poly(arylene ether)s for Acid-Gas Separations</title>
<link href="https://hdl.handle.net/1721.1/164397" rel="alternate"/>
<author>
<name>Dean, Pablo A</name>
</author>
<author>
<name>Wu, Yifan</name>
</author>
<author>
<name>Guo, Sheng</name>
</author>
<author>
<name>Swager, Timothy M</name>
</author>
<author>
<name>Smith, Zachary P</name>
</author>
<id>https://hdl.handle.net/1721.1/164397</id>
<updated>2025-12-18T05:49:50Z</updated>
<published>2024-10-02T00:00:00Z</published>
<summary type="text">Tertiary-Amine-Functional Poly(arylene ether)s for Acid-Gas Separations
Dean, Pablo A; Wu, Yifan; Guo, Sheng; Swager, Timothy M; Smith, Zachary P
Competitive sorption enables the emergent phenomenon of enhanced CO&lt;sub&gt;2&lt;/sub&gt;-based selectivities for gas separation membranes when using microporous polymers with primary amines. However, strong secondary forces in these polymers through hydrogen bonding result in low solvent solubility, precluding standard solution processing approaches to form these polymers into membrane films. Herein, we circumvent these manufacturing constraints while maintaining competitive-sorption enhancements by synthesizing eight representative microporous poly(arylene ether)s (PAEs) with tertiary amines. High-pressure H&lt;sub&gt;2&lt;/sub&gt;S, CO&lt;sub&gt;2&lt;/sub&gt;, and CH&lt;sub&gt;4&lt;/sub&gt; sorption isotherms were collected for these samples to demonstrate enhanced affinity for acid gases relative to the unfunctionalized control polymer. Although competitive sorption was observed for all samples, improvements were less pronounced than for primary-amine-functional analogs. For H&lt;sub&gt;2&lt;/sub&gt;S-based separations, the benefits of competitive sorption offset decreases in selectivity due to plasticization. This detailed study helps to elucidate the role of tertiary amines for acid gas separations in solution-processable microporous PAEs.
</summary>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weathering the storm: examining how organisations navigate the sea of cybersecurity regulations</title>
<link href="https://hdl.handle.net/1721.1/164396" rel="alternate"/>
<author>
<name>Proudfoot, Jeffrey G</name>
</author>
<author>
<name>Cram, W Alec</name>
</author>
<author>
<name>Madnick, Stuart</name>
</author>
<id>https://hdl.handle.net/1721.1/164396</id>
<updated>2025-12-18T05:49:49Z</updated>
<published>2025-05-04T00:00:00Z</published>
<summary type="text">Weathering the storm: examining how organisations navigate the sea of cybersecurity regulations
Proudfoot, Jeffrey G; Cram, W Alec; Madnick, Stuart
Governments around the world routinely regulate the activities of private enterprises to guide the behaviour of individuals and organisations towards acceptable norms. This holds true in a cybersecurity context. However, practitioners report that cybersecurity regulations are often out of date and compliance is confusing, expensive, and time consuming. As a result, organisational leaders are often uncertain about the practicalities of adopting and implementing the various rules, which can lead to trickle-down effects on the robustness of lower-level cybersecurity controls and compliance activities. In this research, we aim to clarify how cybersecurity regulations are operationalised in organisations, as well as reveal the compliance and performance consequences of cybersecurity regulations. To do so, we interviewed 22 senior leaders with expertise in cybersecurity regulations. Our analysis reveals 7 distinct themes (i.e., concept groupings) that are ordered within four phases (i.e., temporal stages), which we use to create the Institutional Cybersecurity Regulations Model (ICRM). The results provide a holistic view of the cybersecurity regulations process in organisations that can serve to clarify current theory relationships and inform future research. As well, the ICRM can provide a practical roadmap for managers to navigate regulatory cybersecurity challenges in their own companies.
</summary>
<dc:date>2025-05-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Technoeconomic Opportunities in Automation for Nuclear Microreactors</title>
<link href="https://hdl.handle.net/1721.1/164395" rel="alternate"/>
<author>
<name>Naranjo de Candido, Isabel</name>
</author>
<author>
<name>Al Rashdan, Ahmad</name>
</author>
<author>
<name>Abou Jaoude, Abdalla</name>
</author>
<author>
<name>Buongiorno, Jacopo</name>
</author>
<id>https://hdl.handle.net/1721.1/164395</id>
<updated>2025-12-18T05:49:45Z</updated>
<published>2024-07-24T00:00:00Z</published>
<summary type="text">Assessment of Technoeconomic Opportunities in Automation for Nuclear Microreactors
Naranjo de Candido, Isabel; Al Rashdan, Ahmad; Abou Jaoude, Abdalla; Buongiorno, Jacopo
Achieving full decarbonization of all economic sectors remains a challenge, especially in niche markets. For example, remote communities and industrial or mining activities detached from the main electric grid heavily rely on fossil fuels, similar to urban and industrial microgrids with combined heat and power needs. A combination of renewables and energy storage is often not suitable due to cost, reliability, intermittency, and large storage requirements. Small nuclear reactors with a flexible purpose could serve these applications. Microreactors (MR) are a class of reactors that are compact, factory manufactured, transportable, and self-regulating. Typically, they generate much less power than their large reactor counterparts. The main advantages of microreactors include the versatile nature of the energy produced, the reliability of supply, and freedom from having to transport and store large quantities of fuels on-site, coupled with the absence of dependence on an electrical grid. A strong business case is needed to move from the microreactor prototype to the commercialization phase. In fact, fossil fuels are still relatively inexpensive, and in the near term, carbon credits will be available to virtually compensate for emissions. For microreactors, one of the main costs in operation and maintenance (O&amp;M) is their staffing levels. In this study, we investigate how to optimize the number (and thus the cost) of workers, moving from a traditional, fully manned, on-site personnel approach to an unmanned, remote personnel approach. We examine four different staffing models that can be implemented as the technology matures and evolves. We estimate the staffing needs of each model and build a business case to justify the substitution of on-site personnel with adequate technologies. To do so, we propose a cost model to quantify potential cost reductions from automating O&amp;M activities. 
The model accounts for both the reduction in cost derived from the reduced number of full-time-equivalent (FTE) employees and the increase in cost derived from the need to buy new control hardware as needed. Applying the cost model that we created to different scenarios, an on-site O&amp;M cost reduction exceeding 80% can be expected. Additionally, we found that it is more impactful to focus on automating routine O&amp;M tasks rather than attempting to automate transient management (shutdowns, restarts, monitoring condition deviations). In fact, transients typically account for less than 1% of the total FTE time spent on the reactors.
</summary>
<dc:date>2024-07-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing acid–gas separations using free volume manipulation for microporous poly(arylene ether)s</title>
<link href="https://hdl.handle.net/1721.1/164394" rel="alternate"/>
<author>
<name>Joo, Taigyu</name>
</author>
<author>
<name>Wu, Yifan</name>
</author>
<author>
<name>Lee, Tae Hoon</name>
</author>
<author>
<name>Dean, Pablo A</name>
</author>
<author>
<name>Wu, Wan-Ni</name>
</author>
<author>
<name>Swager, Timothy M</name>
</author>
<author>
<name>Smith, Zachary P</name>
</author>
<id>https://hdl.handle.net/1721.1/164394</id>
<updated>2025-12-18T05:49:37Z</updated>
<published>2025-01-27T00:00:00Z</published>
<summary type="text">Enhancing acid–gas separations using free volume manipulation for microporous poly(arylene ether)s
Joo, Taigyu; Wu, Yifan; Lee, Tae Hoon; Dean, Pablo A; Wu, Wan-Ni; Swager, Timothy M; Smith, Zachary P
To address global energy needs, traditional and renewable natural gas will likely be key energy sources for years to come. However, raw feeds require removal of impurities like hydrogen sulfide (H2S) and carbon dioxide (CO2) before use. In this study, we illustrate the key challenges of using traditional post-synthetic modification approaches to simultaneously enhance H2S/CH4 and CO2/CH4 selectivities in microporous polymer membranes, while also demonstrating how free volume manipulation (FVM) can overcome some of these challenges. By integrating tert-butoxycarbonyl-protected piperazinyl (PIP-tBOC) groups into a microporous poly(arylene ether) (PAE-1) and applying thermal treatment with oxygen to degrade the incorporated units in solid-state films, we successfully increased sorption capacity and diffusion selectivity. This modification enhanced the mixed-gas selectivity of H2S/CH4 and CO2/CH4 by 88% and 114%, respectively, compared to the original PAE-1 films. Consequently, the films achieved a combined acid gas (CAG) selectivity of 48, which approached the CAG upper bound for glassy polymers. The FVM process not only improved the selectivity of these membrane films but also markedly increased their resistance to plasticization, making them more suitable for industrial applications in acid–gas separation. This post-synthetic modification strategy, applicable to any glassy polymer containing a nucleophilic aromatic unit, provides a means to leverage the competitive sorption of H2S molecules and the molecular sieving properties of the polymer.
</summary>
<dc:date>2025-01-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knowledge and ignorance in forensic identification: the origins of a contested human rights fact</title>
<link href="https://hdl.handle.net/1721.1/164393" rel="alternate"/>
<author>
<name>Medina, Eden</name>
</author>
<id>https://hdl.handle.net/1721.1/164393</id>
<updated>2025-12-18T05:49:39Z</updated>
<published>2024-12-31T00:00:00Z</published>
<summary type="text">Knowledge and ignorance in forensic identification: the origins of a contested human rights fact
Medina, Eden
In 2006, DNA testing revealed that the Chilean Medical Legal Service had misidentified at least half of the 96 human rights victims whose remains had been exhumed in 1991 from a lot in the Santiago General Cemetery known as Patio 29. Years earlier the government had returned those remains to the victims' families. This examination of the history of that forensic misidentification uncovers the role played by the shifting relations of knowledge and ignorance in establishing the legal facts of those identities. Building on the growing literature in agnotology, the article demonstrates the ways in which the context of dictatorship created varied and overlapping forms of ignorance that continued to shape the outcome of the forensic work even after Chile returned to democracy. By detailing different examples of ignorance production by the state, a human rights organization, and a university department under military surveillance, the article illuminates the diverse ways that the civil–military dictatorship worked against knowledge production in the domains of science and human rights.
</summary>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tweeting during the Pandemic in New York City: Unveiling the Evolving Sentiment Landscape of NYC through a Spatiotemporal Analysis of Geolocated Tweets</title>
<link href="https://hdl.handle.net/1721.1/164392" rel="alternate"/>
<author>
<name>Ignaccolo, Carmelo</name>
</author>
<author>
<name>Wibisono, Kevin</name>
</author>
<author>
<name>Sutto, Maria Paola</name>
</author>
<author>
<name>Plunz, Richard A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164392</id>
<updated>2025-12-18T05:50:00Z</updated>
<published>2024-05-26T00:00:00Z</published>
<summary type="text">Tweeting during the Pandemic in New York City: Unveiling the Evolving Sentiment Landscape of NYC through a Spatiotemporal Analysis of Geolocated Tweets
Ignaccolo, Carmelo; Wibisono, Kevin; Sutto, Maria Paola; Plunz, Richard A.
This article explores the relationship between spatial factors, socioeconomic conditions, and Twitter (now called X) sentiment in New York City (NYC) during the COVID-19 pandemic. Using Twitter data, the study investigates how sentiment varied across different geographies. It examines whether sentiment scores, unemployment rates, and COVID-19 hospitalization rates in NYC zip codes revealed spatial associations. The research employs sentiment analysis, a natural language processing technique used to algorithmically determine the emotional tone of a text, on a database of geo-located tweets spanning January to December 2020. The findings reveal a shift towards more negative sentiment during the initial year of the pandemic. Moreover, the study uncovers variations in sentiment trends across boroughs and zip codes. Additionally, a zip code-level fixed-effects model demonstrates a statistically significant relationship between sentiment scores and unemployment rates. In summary, this article makes a two-fold contribution: firstly, it adds a spatial lens to the scholarly debate regarding the use of Twitter data as an indicator of publicly expressed sentiment; secondly, it provides empirical evidence on the spatial interconnectedness of sentiment, health (hospitalization), and socioeconomic factors (unemployment). Overall, this research sheds light on the nuanced relationship between sentiment and space during the COVID-19 pandemic in NYC.
</summary>
<dc:date>2024-05-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solution‐Processable, Ladder‐Branched Polyimides of Intrinsic Microporosity by [4+4] Cycloaddition for Membrane Gas Separation</title>
<link href="https://hdl.handle.net/1721.1/164391" rel="alternate"/>
<author>
<name>Lee, Tae Hoon</name>
</author>
<author>
<name>Dean, Pablo A</name>
</author>
<author>
<name>Yeo, Jing Ying</name>
</author>
<author>
<name>Smith, Zachary P</name>
</author>
<id>https://hdl.handle.net/1721.1/164391</id>
<updated>2025-12-18T05:49:54Z</updated>
<published>2025-10-15T00:00:00Z</published>
<summary type="text">Solution‐Processable, Ladder‐Branched Polyimides of Intrinsic Microporosity by [4+4] Cycloaddition for Membrane Gas Separation
Lee, Tae Hoon; Dean, Pablo A; Yeo, Jing Ying; Smith, Zachary P
Advancements in membrane-based gas separation have the potential to address global challenges related to energy and the environment. However, new membrane materials must have excellent separation performance, stability, and processability, and simultaneously achieving all three metrics is extremely challenging. To circumvent these issues, a post-synthetic modification of polyimides of intrinsic microporosity (PIM-PIs) synthesized with a UV light (UV)-reactive anthracene co-monomer is reported. UV irradiation on the PIM-PI solution converts the anthracene units into dianthracene linkages by [4+4] cycloaddition, while the resultant PIM-PI is still solution-processable due to the branched structure. The ladder-like dianthracene moieties significantly increased both microporosity (&lt;20 Å) and ultramicroporosity (&lt;7 Å) of the precursor PIM-PI. Notably, the UV-treated PIM-PI membrane exhibits a large boost in pure-gas CO2 permeability by up to 260%, reaching 376 barrer, while maintaining CO2/CH4 ideal selectivity of 35 at 1 bar. Moreover, the developed membrane material has enhanced stability against physical aging and plasticization and showcases excellent CO2/CH4 mixed-gas selectivity (&gt;30 up to 31 bar feed pressure), which surpasses the 2018 mixed-gas upper bound.
</summary>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive infill topology optimisation guided by user drawn patterns</title>
<link href="https://hdl.handle.net/1721.1/164390" rel="alternate"/>
<author>
<name>Schiffer, Gillian</name>
</author>
<author>
<name>Schmidt, Martin-Pierre</name>
</author>
<author>
<name>Pedersen, Claus BW</name>
</author>
<author>
<name>Carstensen, Josephine V</name>
</author>
<id>https://hdl.handle.net/1721.1/164390</id>
<updated>2025-12-18T05:50:02Z</updated>
<published>2024-12-31T00:00:00Z</published>
<summary type="text">Interactive infill topology optimisation guided by user drawn patterns
Schiffer, Gillian; Schmidt, Martin-Pierre; Pedersen, Claus BW; Carstensen, Josephine V
Widespread use of topology optimisation as a design tool for additive manufacturing faces major inhibiting obstacles, such as high computational costs and complexity, concern for other failure modes, and manufacturability. Interactive infill topology optimisation presents an alternative approach to circumvent some of these barriers. The novel contribution of the present work prompts the user to draw a tailored infill pattern, specify regions of interest to locate the infill, and control how strictly the pattern is replicated in the material layout of the design using appearance constraints. This approach improves engineering metrics not directly included in the optimisation formulation by incorporating the user’s engineering experience, thereby avoiding increased computational costs, parameter tuning, and numerical artifacts associated with complex objective functions and constraints. Two 2D benchmark examples increase the linear buckling resistance and energy absorption, respectively, and a 2.5D example minimises compliance while reducing the quantity of overhang supports for additive manufacturing.
</summary>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimum Bucket and Car Battery Problems</title>
<link href="https://hdl.handle.net/1721.1/164389" rel="alternate"/>
<author>
<name>Feng, Raymond</name>
</author>
<id>https://hdl.handle.net/1721.1/164389</id>
<updated>2025-12-18T05:49:43Z</updated>
<published>2024-06-11T00:00:00Z</published>
<summary type="text">Minimum Bucket and Car Battery Problems
Feng, Raymond
A solar car needs 5 fully charged batteries to run, and it depletes those batteries in 5 hours. The batteries are rechargeable, and solar panels on the car are able to charge 3 batteries simultaneously. It takes 3 hours for the solar panels to finish charging 3 batteries. Furthermore, batteries cannot be charging and in use at the same time. If the car always starts running as soon as 5 full batteries are available, and the solar panels can only operate if 3 empty batteries are available, how many batteries are needed so that the car can eventually run without stopping? We investigate this resource optimization problem and its different variations.
</summary>
<dc:date>2024-06-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developers Grappling with Flood Risks: Evaluating Boston’s Climate Resiliency Checklist</title>
<link href="https://hdl.handle.net/1721.1/164388" rel="alternate"/>
<author>
<name>Loescher-Montal, Angela</name>
</author>
<author>
<name>Mazereeuw, Miho</name>
</author>
<author>
<name>Shen, Kairos</name>
</author>
<id>https://hdl.handle.net/1721.1/164388</id>
<updated>2025-12-18T05:49:56Z</updated>
<published>2024-01-02T00:00:00Z</published>
<summary type="text">Developers Grappling with Flood Risks: Evaluating Boston’s Climate Resiliency Checklist
Loescher-Montal, Angela; Mazereeuw, Miho; Shen, Kairos
Ongoing waterfront development in risky areas across the globe raises the continued paradox between resilience initiatives and broader market mechanisms. Even as flood risk increases, existing development patterns often do not adequately account for future flood risk. This research examines the use of resiliency checklists as a growing regulatory tool to improve predevelopment flood resilience standards. The research employs mixed quantitative and qualitative methods to evaluate how four large-scale developments interacted with Boston’s Climate Resiliency Checklist in the last decade and how its current design criteria influenced design decisions. The checklist’s format, design, and time horizon considerations are evaluated, and improvements to the format as well as smaller-scale tools are considered.
</summary>
<dc:date>2024-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>The “content” of intergroup contact: lessons from the Denton Women’s Interracial Fellowship</title>
<link href="https://hdl.handle.net/1721.1/164387" rel="alternate"/>
<author>
<name>English, Jasmine</name>
</author>
<id>https://hdl.handle.net/1721.1/164387</id>
<updated>2025-12-18T05:49:42Z</updated>
<published>2024-04-14T00:00:00Z</published>
<summary type="text">The “content” of intergroup contact: lessons from the Denton Women’s Interracial Fellowship
English, Jasmine
Does the content of intergroup contact matter? Despite extensive research on the benefits of contact for intergroup relations, we know little about what happens during contact-based programs and interventions. This article addresses this gap by inductively building theory about the desired content of contact. My analysis draws on oral history interviews and archival data from the Denton Women’s Interracial Fellowship: a real-world case of intergroup contact that emerged to ease the process of school desegregation in Denton, Texas. My analysis of these data moves beyond the scope conditions suggested by (Allport, Gordon W. 1954. The Nature of Prejudice. 25th ed. Cambridge, MA: Perseus Books) to highlight the role of conversations about outgroup experiences. I illuminate how these conversations produce positive impacts on intergroup relations and draw out the implications for research on intergroup contact: namely, that forms of intergroup contact that incorporate these conversations are more likely to improve intergroup relations, and that intergroup contact interventions should explicitly encourage or incorporate these kinds of conversations.
</summary>
<dc:date>2024-04-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Roadmap of Graphite Moderator and Graphite-Matrix TRISO Fuel Management Options</title>
<link href="https://hdl.handle.net/1721.1/164386" rel="alternate"/>
<author>
<name>Forsberg, CW</name>
</author>
<id>https://hdl.handle.net/1721.1/164386</id>
<updated>2025-12-18T05:50:03Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Roadmap of Graphite Moderator and Graphite-Matrix TRISO Fuel Management Options
Forsberg, CW
Most high-temperature reactors use graphite as a moderator and structural material. This includes high-temperature gas-cooled reactors with helium cooling and TRi-structural ISOtropic (TRISO) fuel particles embedded in graphite; fluoride salt–cooled high-temperature reactors with clean salt coolant and TRISO fuel particles embedded in graphite; and thermal spectrum molten salt reactors with a graphite moderator and fuel dissolved in the salt. The largest volume radioactive waste stream from these reactors is the irradiated graphite. We describe herein a roadmap for management of these graphite wastes that contain radioactive 14C, tritium, and other radionuclides. There may be some graphite wastes with radioactivity levels sufficiently low that they can be treated as nonradioactive waste and managed like other graphite waste. Management options for the graphite include (1) direct disposal, (2) recycling back to the reactor or other nuclear applications, and (3) oxidizing the graphite with release as an effluent or underground sequestration of the carbon dioxide. Cosequestration of this carbon dioxide with carbon dioxide from industrial, biological, and cement production processes can isotopically dilute the 14C before sequestration to eliminate the possibility of exceeding individual radiation exposure limits. We also describe options for processing graphite-matrix TRISO fuel, including separating the bulk graphite to reduce the volumes of used fuel for disposal or processing to recover fissile materials. The inventories of radioactive isotopes in different carbon wastes vary by many orders of magnitude; thus, there is no single economic option for the management of all graphite waste.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Model Reduction of High-Order Solutions of Compressible Flows via Optimal Transport</title>
<link href="https://hdl.handle.net/1721.1/164385" rel="alternate"/>
<author>
<name>Van Heyningen, Robert Loek</name>
</author>
<author>
<name>Nguyen, Ngoc Cuong</name>
</author>
<author>
<name>Blonigan, Patrick</name>
</author>
<author>
<name>Peraire, Jaime</name>
</author>
<id>https://hdl.handle.net/1721.1/164385</id>
<updated>2025-12-18T05:49:58Z</updated>
<published>2024-04-28T00:00:00Z</published>
<summary type="text">Adaptive Model Reduction of High-Order Solutions of Compressible Flows via Optimal Transport
Van Heyningen, Robert Loek; Nguyen, Ngoc Cuong; Blonigan, Patrick; Peraire, Jaime
The solution of conservation laws with parametrised shock waves presents challenges for both high-order numerical methods and model reduction techniques. We introduce an r-adaptivity scheme based on optimal transport and apply it to develop reduced order models for compressible flows. The optimal transport theory allows us to compute high-order r-adaptive meshes from a starting reference mesh by solving the Monge–Ampère equation. A high-order discretization of the conservation laws enables high-order solutions to be computed on the resulting r-adaptive meshes. Furthermore, the Monge–Ampère solutions contain mappings that are used to reduce the spatial locality of the resulting solutions and make them more amenable to model reduction. We use a non-intrusive model reduction method to construct reduced order models of both the mesh and the solution. The procedure is demonstrated on three supersonic and hypersonic test cases, with the hybridisable discontinuous Galerkin method being used as the full order model.
</summary>
<dc:date>2024-04-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Demonstrating Thermochromorph: Dynamic Relief Printing with Thermochromic Inks</title>
<link href="https://hdl.handle.net/1721.1/164384" rel="alternate"/>
<author>
<name>Sethapakdi, Ticha</name>
</author>
<author>
<name>Myers, Paris</name>
</author>
<author>
<name>Yu, Tianyu</name>
</author>
<author>
<name>Covarrubias, Juliana</name>
</author>
<author>
<name>Leake, Mackenzie</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/164384</id>
<updated>2025-12-18T05:49:35Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Demonstrating Thermochromorph: Dynamic Relief Printing with Thermochromic Inks
Sethapakdi, Ticha; Myers, Paris; Yu, Tianyu; Covarrubias, Juliana; Leake, Mackenzie; Mueller, Stefanie
We demonstrate Thermochromorph, a novel relief printing technique that produces multicolored images that transition into each other through changes in temperature. Our process utilizes two sets of CMYK thermochromic inks that exhibit complementary color-changing behaviors: one shifting from color to transparency, the other from transparency to color at the same activation temperature. We describe our printmaking workflow, provide an open-source software toolkit, showcase prints made with our system, and explore how our system can be used in creative practice through an artist workshop. By incorporating new materials and technology with the rich history of printmaking, our work extends the expressive capabilities of relief printing as the medium continues to evolve.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform</title>
<link href="https://hdl.handle.net/1721.1/164383" rel="alternate"/>
<author>
<name>Röddiger, Tobias</name>
</author>
<author>
<name>Zitz, Valeria</name>
</author>
<author>
<name>Hummel, Jonas</name>
</author>
<author>
<name>Küttner, Michael</name>
</author>
<author>
<name>Lepold, Philipp</name>
</author>
<author>
<name>King, Tobias</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<author>
<name>Clarke, Christopher</name>
</author>
<author>
<name>Beigl, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164383</id>
<updated>2025-12-18T05:49:04Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Demonstrating OpenEarable 2.0: An AI-Powered Ear Sensing Platform
Röddiger, Tobias; Zitz, Valeria; Hummel, Jonas; Küttner, Michael; Lepold, Philipp; King, Tobias; Paradiso, Joseph; Clarke, Christopher; Beigl, Michael
In this demo, we present OpenEarable 2.0, an open-source earphone platform designed to provide an interactive exploration of physiological ear sensing and the development of AI applications. Attendees will have the opportunity to explore real-time sensor data and understand the capabilities of OpenEarable 2.0’s sensing components. OpenEarable 2.0 integrates a rich set of sensors, including two ultrasound-capable microphones (inward/outward), a 3-axis ear canal accelerometer/bone conduction microphone, a 9-axis head inertial measurement unit, a pulse oximeter, an optical temperature sensor, an ear canal pressure sensor, a microSD slot, and a microcontroller. Participants will be able to try out the web-based dashboard and mobile app for real-time control and data visualization. Furthermore, the demo will show different applications and real-time data based on OpenEarable 2.0 across physiological sensing and health monitoring, movement and activity tracking, and human-computer interaction.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conductive Ceramics: Embedding Electronics in Everyday Ceramic Objects</title>
<link href="https://hdl.handle.net/1721.1/164382" rel="alternate"/>
<author>
<name>Chin, Sam</name>
</author>
<author>
<name>Kim, Keunwook</name>
</author>
<author>
<name>An, Audrey</name>
</author>
<author>
<name>Kuang, Quincy</name>
</author>
<author>
<name>Zhang, Kai</name>
</author>
<id>https://hdl.handle.net/1721.1/164382</id>
<updated>2025-12-18T05:49:02Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Conductive Ceramics: Embedding Electronics in Everyday Ceramic Objects
Chin, Sam; Kim, Keunwook; An, Audrey; Kuang, Quincy; Zhang, Kai
We present a method for integrating conductive traces into ceramic objects using a silver-based glaze compatible with traditional firing processes. Our glaze combines silver powder with a glass former and xanthan gum, enabling application through standard ceramic techniques while maintaining the durability of conventional ceramics. Through a material-driven experimentation approach, we characterized how glaze composition and post-processing methods affect conductivity and surface quality. We demonstrate this technique through functional prototypes including a temperature-responsive heating vessel, a touch-sensitive musical controller utilizing kintsugi repair, and an interactive marble machine. This work bridges traditional ceramic craft with interactive technology, offering ceramicists a way to incorporate electronic functionality while preserving traditional methods.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Resource-Efficient Compound AI Systems</title>
<link href="https://hdl.handle.net/1721.1/164381" rel="alternate"/>
<author>
<name>Chaudhry, Gohar Irfan</name>
</author>
<author>
<name>Choukse, Esha</name>
</author>
<author>
<name>Goiri, Íñigo</name>
</author>
<author>
<name>Fonseca, Rodrigo</name>
</author>
<author>
<name>Belay, Adam</name>
</author>
<author>
<name>Bianchini, Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/164381</id>
<updated>2025-12-18T05:48:25Z</updated>
<published>2025-06-06T00:00:00Z</published>
<summary type="text">Towards Resource-Efficient Compound AI Systems
Chaudhry, Gohar Irfan; Choukse, Esha; Goiri, Íñigo; Fonseca, Rodrigo; Belay, Adam; Bianchini, Ricardo
Compound AI Systems, integrating multiple interacting components like models, retrievers, and external tools, have emerged as essential for addressing complex AI tasks. However, current implementations suffer from inefficient resource utilization due to tight coupling between application logic and execution details, a disconnect between orchestration and resource management layers, and the perceived exclusiveness between efficiency and quality.&#13;
We propose a vision for resource-efficient Compound AI Systems through a declarative workflow programming model and an adaptive runtime system for dynamic scheduling and resource-aware decision-making. Decoupling application logic from low-level details exposes levers for the runtime to flexibly configure the execution environment and resources, without compromising on quality. Enabling collaboration between the workflow orchestration and cluster manager enables higher efficiency through better scheduling and resource management.&#13;
We are building a prototype system, called Murakkab, to realize this vision. Our preliminary evaluation demonstrates speedups up to ~3.4× in workflow completion times while delivering ~4.5× higher energy efficiency, showing promise in optimizing resources and advancing AI system design.
HOTOS 25, May 14–16, 2025, Banff, AB, Canada
</summary>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fits like a Flex-Glove: Automatic Design of Personalized FPCB-Based Tactile Sensing Gloves</title>
<link href="https://hdl.handle.net/1721.1/164380" rel="alternate"/>
<author>
<name>Murphy, Devin</name>
</author>
<author>
<name>Li, Yichen</name>
</author>
<author>
<name>Owens, Crystal</name>
</author>
<author>
<name>Stanton, Layla</name>
</author>
<author>
<name>Liang, Paul Pu</name>
</author>
<author>
<name>Luo, Yiyue</name>
</author>
<author>
<name>Torralba, Antonio</name>
</author>
<author>
<name>Matusik, Wojciech</name>
</author>
<id>https://hdl.handle.net/1721.1/164380</id>
<updated>2025-12-18T05:48:58Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Fits like a Flex-Glove: Automatic Design of Personalized FPCB-Based Tactile Sensing Gloves
Murphy, Devin; Li, Yichen; Owens, Crystal; Stanton, Layla; Liang, Paul Pu; Luo, Yiyue; Torralba, Antonio; Matusik, Wojciech
Resistive tactile sensing gloves have captured the interest of researchers spanning diverse domains, such as robotics, healthcare, and human-computer interaction. However, existing fabrication methods often require labor-intensive assembly or costly equipment, limiting accessibility. Leveraging flexible printed circuit board (FPCB) technology, we present an automated pipeline for generating resistive tactile sensing glove design files solely from a simple hand photo on legal-size paper, which can be readily supplied to commercial board houses for manufacturing. Our method enables cost-effective, accessible production at under $130 per glove with sensor assembly times under 15 minutes. Sensor performance was characterized under varying pressure loads, and a preliminary user evaluation showcases four unique automatically manufactured designs, evaluated for their reliability and comfort.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Randomness, Not Representation: The Unreliability of Evaluating Cultural Alignment in LLMs</title>
<link href="https://hdl.handle.net/1721.1/164379" rel="alternate"/>
<author>
<name>Khan, Ariba</name>
</author>
<author>
<name>Casper, Stephen</name>
</author>
<author>
<name>Hadfield-Menell, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/164379</id>
<updated>2025-12-18T05:49:05Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Randomness, Not Representation: The Unreliability of Evaluating Cultural Alignment in LLMs
Khan, Ariba; Casper, Stephen; Hadfield-Menell, Dylan
Research on the ‘cultural alignment’ of Large Language Models (LLMs) has emerged in response to growing interest in understanding representation across diverse stakeholders. Current approaches to evaluating cultural alignment through survey-based assessments that borrow from social science methodologies often overlook systematic robustness checks. We identify and test three assumptions behind current survey-based evaluation methods: (1) Stability: that cultural alignment is a property of LLMs rather than an artifact of evaluation design, (2) Extrapolability: that alignment with one culture on a narrow set of issues predicts alignment with that culture on others, and (3) Steerability: that LLMs can be reliably prompted to represent specific cultural perspectives. Through experiments examining both explicit and implicit preferences of leading LLMs, we find a high level of instability across presentation formats, incoherence between evaluated versus held-out cultural dimensions, and erratic behavior under prompt steering. We show that these inconsistencies can cause the results of an evaluation to be very sensitive to minor variations in methodology. Finally, we demonstrate in a case study on evaluation design that narrow experiments and a selective assessment of evidence can be used to paint an incomplete picture of LLMs’ cultural alignment properties. Overall, these results highlight significant limitations of current survey-based approaches to evaluating the cultural alignment of LLMs and highlight a need for systematic robustness checks and red-teaming for evaluation results. Data and code are available at https://doi.org/akhan02/cultural-dimension-cover-letters and https://doi.org/ariba-k/llm-cultural-alignment-evaluation, respectively.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aptly: Making Mobile Apps from Natural Language</title>
<link href="https://hdl.handle.net/1721.1/164378" rel="alternate"/>
<author>
<name>Patton, Evan</name>
</author>
<author>
<name>Kim, David</name>
</author>
<author>
<name>Granquist, Ashley</name>
</author>
<author>
<name>Liu, Robin</name>
</author>
<author>
<name>Scott, Arianna</name>
</author>
<author>
<name>Zamanova, Jennet</name>
</author>
<author>
<name>Abelson, Harold</name>
</author>
<id>https://hdl.handle.net/1721.1/164378</id>
<updated>2025-12-18T05:48:46Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Aptly: Making Mobile Apps from Natural Language
Patton, Evan; Kim, David; Granquist, Ashley; Liu, Robin; Scott, Arianna; Zamanova, Jennet; Abelson, Harold
This paper introduces Aptly, a platform designed to democratize mobile app development, particularly for young learners. Aptly integrates a Large Language Model (LLM) with App Inventor, enabling users to create apps using natural language. The user’s description is translated into a programming language that corresponds to App Inventor’s visual blocks. A preliminary study with high school students demonstrated the usability and potential of the platform. Prior programming experience influenced how users interacted with Aptly. Participants identified areas for improvement and expressed a shift in perspective regarding programming accessibility and AI’s role in creative endeavors.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>AcceloPrint: Fabricating Customizable Accelerometers with Multi-Material 3D Printing</title>
<link href="https://hdl.handle.net/1721.1/164377" rel="alternate"/>
<author>
<name>Ozbek, Doga</name>
</author>
<author>
<name>AlAlawi, Marwa</name>
</author>
<author>
<name>Wessely, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164377</id>
<updated>2025-12-18T05:48:43Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">AcceloPrint: Fabricating Customizable Accelerometers with Multi-Material 3D Printing
Ozbek, Doga; AlAlawi, Marwa; Wessely, Michael
We introduce AcceloPrint, 3D-printed acceleration sensors that can be fabricated in one pass alongside a 3D object and report on its angular orientation or acceleration. AcceloPrint utilizes capacitive sensing to track the deflection of a 3D printed cantilever beam to a sensor patch. Our AcceloPrint tool integrated into a 3D editor generates a sensor with a user-defined sensing range generated by our computational model. We also propose a novel sensor design with an adjustable sensing range post-fabrication. Our technical evaluation shows our sensor can detect acceleration up to 50 m/s2, with a root mean squared error of 0.35 m/s2 (%3.57) in the range up to 10 m/s2. We demonstrate AcceloPrint with three application examples on sports performance tracking and tangible tools.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>"How can we learn and use AI at the same time?": Participatory Design of GenAI with High School Students</title>
<link href="https://hdl.handle.net/1721.1/164376" rel="alternate"/>
<author>
<name>Pu, Isabella</name>
</author>
<author>
<name>Ravi, Prerna</name>
</author>
<author>
<name>Dinh, Linh</name>
</author>
<author>
<name>Joe, Chelsea</name>
</author>
<author>
<name>Ogoe, Caitlin</name>
</author>
<author>
<name>Li, Zixuan</name>
</author>
<author>
<name>Breazeal, Cynthia</name>
</author>
<author>
<name>Ostrowski, Anastasia</name>
</author>
<id>https://hdl.handle.net/1721.1/164376</id>
<updated>2025-12-18T05:48:40Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">"How can we learn and use AI at the same time?": Participatory Design of GenAI with High School Students
Pu, Isabella; Ravi, Prerna; Dinh, Linh; Joe, Chelsea; Ogoe, Caitlin; Li, Zixuan; Breazeal, Cynthia; Ostrowski, Anastasia
As generative AI (GenAI) emerges as a transformative force, clear understanding of high school students’ perspectives is essential for GenAI’s meaningful integration in high school environments. In this work, we draw insights from a participatory design workshop where we engaged 17 high school students—a group rarely involved in prior research in this area—through the design of novel GenAI tools and school policies addressing their key concerns. Students identified challenges and developed solutions outlining their ideal features in GenAI tools, appropriate school use, and regulations. These centered around the problem spaces of combating bias &amp; misinformation, tackling crime &amp; plagiarism, preventing over-reliance on AI, and handling false accusations of academic dishonesty. Building on our participants’ underrepresented perspectives, we propose new guidelines targeted at educational technology designers for development of GenAI technologies in high schools. We also argue for further incorporation of student voices in development of AI policies in their schools.
IDC ’25, Reykjavik, Iceland
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bird: A Point Cursor for Virtual Immersive Environments</title>
<link href="https://hdl.handle.net/1721.1/164375" rel="alternate"/>
<author>
<name>Simonson, Aubrey</name>
</author>
<author>
<name>Gretton, Dana</name>
</author>
<author>
<name>Harteveld, Casper</name>
</author>
<id>https://hdl.handle.net/1721.1/164375</id>
<updated>2025-12-18T05:48:42Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Bird: A Point Cursor for Virtual Immersive Environments
Simonson, Aubrey; Gretton, Dana; Harteveld, Casper
This paper introduces the Bird, a novel point cursor for immersive virtual environments (IVEs) that enables precise, one-handed control over a point in 3D space beyond arm’s reach. Interaction techniques commonly used in VR today lack this functionality. While direct manipulation allows for control of the position of an object in 3D space, it is limited to arm’s reach. Ray-casting enables interaction at a distance but specifies a line rather than a point, making it impossible to move objects closer or farther without additional mechanics. The Bird overcomes these limitations by allowing users to select any visible object and place it anywhere within view, with one hand and without requiring a controller. We explore a range of use cases that highlight the Bird’s potential to expand the design space for spatial computing.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Approximability of Satisfiable &#119896;-CSPs: V</title>
<link href="https://hdl.handle.net/1721.1/164374" rel="alternate"/>
<author>
<name>Bhangale, Amey</name>
</author>
<author>
<name>Khot, Subhash</name>
</author>
<author>
<name>Minzer, Dor</name>
</author>
<id>https://hdl.handle.net/1721.1/164374</id>
<updated>2025-12-18T05:48:59Z</updated>
<published>2025-06-15T00:00:00Z</published>
<summary type="text">On Approximability of Satisfiable &#119896;-CSPs: V
Bhangale, Amey; Khot, Subhash; Minzer, Dor
We propose a framework of algorithm vs. hardness for all Max-CSPs and demonstrate it for a large class of predicates. This framework extends the work of Raghavendra [STOC, 2008], who showed a similar result for almost satisfiable Max-CSPs. Our framework is based on a new hybrid approximation algorithm, which uses a combination of the Gaussian elimination technique (i.e., solving a system of linear equations over an Abelian group) and the semidefinite programming relaxation. We complement our algorithm with a matching dictator vs. quasirandom test that has perfect completeness. The analysis of our dictator vs. quasirandom test is based on a novel invariance principle, which we call the mixed invariance principle. Our mixed invariance principle is an extension of the invariance principle of Mossel, O’Donnell and Oleszkiewicz [Annals of Mathematics, 2010] which plays a crucial role in Raghavendra’s work. The mixed invariance principle allows one to relate 3-wise correlations over discrete probability spaces with expectations over spaces that are a mixture of Gaussian spaces and Abelian groups, and may be of independent interest.
STOC ’25, Prague, Czechia
</summary>
<dc:date>2025-06-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans</title>
<link href="https://hdl.handle.net/1721.1/164373" rel="alternate"/>
<author>
<name>Suh, Ashley</name>
</author>
<author>
<name>Hurley, Isabelle</name>
</author>
<author>
<name>Smith, Nora</name>
</author>
<author>
<name>Siu, Ho Chit</name>
</author>
<id>https://hdl.handle.net/1721.1/164373</id>
<updated>2025-12-18T05:48:45Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Fewer Than 1% of Explainable AI Papers Validate Explainability with Humans
Suh, Ashley; Hurley, Isabelle; Smith, Nora; Siu, Ho Chit
This late-breaking work presents a large-scale analysis of explainable AI (XAI) literature to evaluate claims of human explainability. We collaborated with a professional librarian to identify 18,254 papers containing keywords related to explainability and interpretability. Of these, we find that only 253 papers included terms suggesting human involvement in evaluating an XAI technique, and just 128 of those conducted some form of a human study. In other words, fewer than 1% of XAI papers (0.7%) provide empirical evidence of human explainability when compared to the broader body of XAI literature. Our findings underscore a critical gap between claims of human explainability and evidence-based validation, raising concerns about the rigor of XAI research. We call for increased emphasis on human evaluations in XAI studies and provide our literature search methodology to enable both reproducibility and further investigation into this widespread issue.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characteristics of Driver Peripheral Vision: How Drivers Respond to Ubiquitous Information on Wide-Area In-Vehicle Displays</title>
<link href="https://hdl.handle.net/1721.1/164372" rel="alternate"/>
<author>
<name>Huang, Hongwei</name>
</author>
<author>
<name>Li, Jiateng</name>
</author>
<author>
<name>Feng, Xuejing</name>
</author>
<author>
<name>Ma, Jun</name>
</author>
<author>
<name>Mehler, Bruce</name>
</author>
<id>https://hdl.handle.net/1721.1/164372</id>
<updated>2025-12-18T05:48:47Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Characteristics of Driver Peripheral Vision: How Drivers Respond to Ubiquitous Information on Wide-Area In-Vehicle Displays
Huang, Hongwei; Li, Jiateng; Feng, Xuejing; Ma, Jun; Mehler, Bruce
Despite advancements in In-Vehicle Information Systems (IVIS) and extensive research on screen layouts, the influence of drivers’ peripheral vision on interactions with evolving multi-screen and large display technologies remains poorly understood. This study examines drivers’ responses to in-vehicle interactive information through peripheral vision, aiming to optimize visual interaction efficiency and enhance driving safety. Analyzing data from 216 participants in a driving simulator, we explored how horizontal eccentricity, screen type, cognitive load, visual crowding, and stimulus type affect perception rates and reaction times. Our findings highlight the significance of these factors and the need for driver-centered design. The results suggest designing IVIS that align with natural visual tendencies to improve interaction efficiency and driving safety.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators</title>
<link href="https://hdl.handle.net/1721.1/164371" rel="alternate"/>
<author>
<name>Jorgensen, Steven</name>
</author>
<author>
<name>Hemberg, Erik</name>
</author>
<author>
<name>Toutouh, Jamal</name>
</author>
<author>
<name>O'Reilly, Una-May</name>
</author>
<id>https://hdl.handle.net/1721.1/164371</id>
<updated>2025-12-18T05:48:34Z</updated>
<published>2025-07-13T00:00:00Z</published>
<summary type="text">Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators
Jorgensen, Steven; Hemberg, Erik; Toutouh, Jamal; O'Reilly, Una-May
This study explores a novel approach to neural network pruning using evolutionary computation, focusing on simultaneously pruning the encoder and decoder of an autoencoder. We introduce two new mutation operators that use layer activations to guide weight pruning. Our findings reveal that one of these activation-informed operators outperforms random pruning, resulting in more efficient autoencoders with performance comparable to canonically trained models. Prior work has established that autoencoder training is effective and scalable with a spatial coevolutionary algorithm that cooperatively coevolves a population of encoders with a population of decoders, rather than one autoencoder. We evaluate how the same activity-guided mutation operators transfer to this context. We find that, in the coevolutionary setting, random pruning outperforms guided pruning. This suggests that activation-based guidance is more effective in low-dimensional pruning environments, where constrained sample spaces can lead to deviations from true uniformity in randomization. Conversely, population-driven strategies enhance robustness by expanding the total pruning dimensionality, achieving statistically uniform randomness that better preserves system dynamics. We experiment with pruning according to different schedules and present the best combinations of operator and schedule for the canonical and coevolving-populations cases.
GECCO ’25, July 14–18, 2025, Malaga, Spain
</summary>
<dc:date>2025-07-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mirai: A Wearable Proactive AI "Inner-Voice" for Contextual Nudging</title>
<link href="https://hdl.handle.net/1721.1/164370" rel="alternate"/>
<author>
<name>Fang, Cathy Mengying</name>
</author>
<author>
<name>Samaradivakara, Yasith</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<author>
<name>Nanayakkara, Suranga</name>
</author>
<id>https://hdl.handle.net/1721.1/164370</id>
<updated>2025-12-18T05:48:37Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Mirai: A Wearable Proactive AI "Inner-Voice" for Contextual Nudging
Fang, Cathy Mengying; Samaradivakara, Yasith; Maes, Pattie; Nanayakkara, Suranga
People often find it difficult to turn their intentions into real actions—a challenge that affects both personal growth and mental well-being. While established methods like cognitive-behavioral therapy and mindfulness training help people become more aware of their behaviors and set clear goals, these approaches cannot provide immediate guidance when people fall into automatic reactions or habits. We introduce Mirai, a novel wearable AI system with an integrated camera, real-time speech processing, and personalized voice-cloning to provide proactive and contextual nudges for positive behavior change. Mirai continuously monitors and analyzes the user’s environment to anticipate their intentions, generating contextually-appropriate responses delivered in the user’s own cloned voice. We demonstrate the application of Mirai through three scenarios focusing on dietary choices, work productivity, and communication skills. We also discuss future work on improving the proactive agent via human feedback and the need for a longitudinal study in naturalistic settings.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>9.301J Neural Plasticity in Learning and Development, Spring 2002</title>
<link href="https://hdl.handle.net/1721.1/164369" rel="alternate"/>
<author>
<name>Miller, Earl K</name>
</author>
<author>
<name>Liu, Guosong</name>
</author>
<author>
<name>Wilson, Matthew A.</name>
</author>
<author>
<name>Tonegawa, Susumu</name>
</author>
<author>
<name>Quinn, William</name>
</author>
<id>https://hdl.handle.net/1721.1/164369</id>
<updated>2025-12-17T00:09:23Z</updated>
<published>2002-01-01T00:00:00Z</published>
<summary type="text">9.301J Neural Plasticity in Learning and Development, Spring 2002
Miller, Earl K; Liu, Guosong; Wilson, Matthew A.; Tonegawa, Susumu; Quinn, William
Roles of neural plasticity in learning and memory and in development of invertebrates and mammals. An in-depth critical analysis of current literature of molecular, cellular, genetic, electrophysiological, and behavioral studies. Discussion of original papers supplemented by introductory lectures.
</summary>
<dc:date>2002-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>9.110J Neurology, Neuropsychology, and Neurobiology of Aging. Spring 2005</title>
<link href="https://hdl.handle.net/1721.1/164368" rel="alternate"/>
<author>
<name>Corkin, Suzanne Hammond</name>
</author>
<author>
<name>Ingram, Vernon M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164368</id>
<updated>2025-12-17T00:04:31Z</updated>
<published>2005-01-01T00:00:00Z</published>
<summary type="text">9.110J Neurology, Neuropsychology, and Neurobiology of Aging. Spring 2005
Corkin, Suzanne Hammond; Ingram, Vernon M.
Lectures and discussions in this course cover the clinical, behavioral, and molecular aspects of the brain aging processes in humans. Topics include the loss of memory and other cognitive abilities in normal aging, as well as neurodegenerative conditions such as Parkinson’s and Alzheimer’s diseases. Discussions based on readings taken from primary literature explore the current research in this field.
</summary>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>9.18 Developmental Neurobiology, Spring 2005</title>
<link href="https://hdl.handle.net/1721.1/164367" rel="alternate"/>
<author>
<name>Nedivi, Elly</name>
</author>
<id>https://hdl.handle.net/1721.1/164367</id>
<updated>2025-12-17T00:00:42Z</updated>
<published>2005-01-01T00:00:00Z</published>
<summary type="text">9.18 Developmental Neurobiology, Spring 2005
Nedivi, Elly
This course considers molecular control of neural specification, formation of neuronal connections, construction of neural systems, and the contributions of experience to shaping brain structure and function. Topics include: neural induction and pattern formation, cell lineage and fate determination, neuronal migration, axon guidance, synapse formation and stabilization, activity-dependent development and critical periods, development of behavior.
</summary>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>9.013J Cell and Molecular Neurobiology, Spring 2008</title>
<link href="https://hdl.handle.net/1721.1/164366" rel="alternate"/>
<author>
<name>Constantine-Paton, Martha</name>
</author>
<author>
<name>Sheng, Morgan Hwa-Tze</name>
</author>
<author>
<name>Quinn, William</name>
</author>
<id>https://hdl.handle.net/1721.1/164366</id>
<updated>2025-12-17T17:54:04Z</updated>
<published>2008-01-01T00:00:00Z</published>
<summary type="text">9.013J Cell and Molecular Neurobiology, Spring 2008
Constantine-Paton, Martha; Sheng, Morgan Hwa-Tze; Quinn, William
This course explores the major areas of cellular and molecular neurobiology, including excitable cells and membranes, ion channels and receptors, synaptic transmission, cell-type determination, axon guidance, neuronal cell biology, neurotrophin signaling and cell survival, synapse formation and neural plasticity. Material includes lectures and exams, and involves presentation and discussion of primary literature. It focuses on major concepts and recent advances in experimental neuroscience.
</summary>
<dc:date>2008-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>9.322J Genetic Neurobiology, Fall 2005</title>
<link href="https://hdl.handle.net/1721.1/164365" rel="alternate"/>
<author>
<name>Littleton, J. Troy</name>
</author>
<author>
<name>Quinn, William</name>
</author>
<id>https://hdl.handle.net/1721.1/164365</id>
<updated>2025-12-16T23:50:34Z</updated>
<published>2005-01-01T00:00:00Z</published>
<summary type="text">9.322J Genetic Neurobiology, Fall 2005
Littleton, J. Troy; Quinn, William
This course deals with the specific functions of neurons, the interactions of neurons in development, and the organization of neuronal ensembles to produce behavior. Topics covered include the analysis of mutations, and molecular analysis of the genes required for nervous system function. In particular, this course focuses on research work done with nematodes, fruit flies, mice, and humans.
</summary>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>9.19J Cognitive &amp; Behavioral Genetics, Spring 2001</title>
<link href="https://hdl.handle.net/1721.1/164364" rel="alternate"/>
<author>
<name>Nedivi, Elly</name>
</author>
<author>
<name>Pinker, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/164364</id>
<updated>2025-12-16T23:46:07Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">9.19J Cognitive &amp; Behavioral Genetics, Spring 2001
Nedivi, Elly; Pinker, Steven
How genetics can add to our understanding of cognition, language, emotion, personality, and behavior. Use of gene mapping to estimate risk factors for psychological disorders and variation in behavioral and personality traits. Mendelian genetics, genetic mapping techniques, and statistical analysis of large populations and their application to particular studies in behavioral genetics. Topics also include environmental influence on genetic programs, evolutionary genetics, and the larger scientific, social, ethical, and philosophical implications.
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>5.95J Teaching College-Level Science and Engineering, Spring 2009</title>
<link href="https://hdl.handle.net/1721.1/164363" rel="alternate"/>
<author>
<name>Mahajan, Sanjoy</name>
</author>
<id>https://hdl.handle.net/1721.1/164363</id>
<updated>2025-12-16T23:41:01Z</updated>
<published>2009-01-01T00:00:00Z</published>
<summary type="text">5.95J Teaching College-Level Science and Engineering, Spring 2009
Mahajan, Sanjoy
This participatory seminar focuses on the knowledge and skills necessary for teaching science and engineering in higher education. This course is designed for graduate students interested in an academic career, and anyone else interested in teaching. Readings and discussions include: teaching equations for understanding, designing exam and homework questions, incorporating histories of science, creating absorbing lectures, teaching for transfer, the evils of PowerPoint, and planning a course. The subject is appropriate for both novices and those with teaching experience.
</summary>
<dc:date>2009-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.51 Biochemistry, Fall 2001</title>
<link href="https://hdl.handle.net/1721.1/164362" rel="alternate"/>
<author>
<name>Sauer, Robert T</name>
</author>
<author>
<name>Solomon, Frank</name>
</author>
<author>
<name>Baker, Tania</name>
</author>
<id>https://hdl.handle.net/1721.1/164362</id>
<updated>2025-12-17T17:55:19Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">7.51 Biochemistry, Fall 2001
Sauer, Robert T; Solomon, Frank; Baker, Tania
The tools and analytical methods that biochemists use to dissect biological problems. Analysis of the mode of action and structure of regulatory, binding, and catalytic proteins.
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.391 Concept-Centered Teaching, Fall 2005</title>
<link href="https://hdl.handle.net/1721.1/164361" rel="alternate"/>
<author>
<name>Khodor, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/164361</id>
<updated>2025-12-16T23:19:27Z</updated>
<published>2005-01-01T00:00:00Z</published>
<summary type="text">7.391 Concept-Centered Teaching, Fall 2005
Khodor, Julia
Do you like teaching, but find yourself frustrated by how little students seem to learn? Would you like to try teaching, but are nervous about whether you will be any good at it? Are you interested in new research on science education? Research in science education shows that the greatest obstacle to student learning is the failure to identify and confront the misconceptions with which the students enter the class or those that they acquire during their studies. This weekly seminar course focuses on developing the participants’ ability to uncover and confront student misconceptions and to foster student understanding and retention of key concepts. Participants read primary literature on science education, uncover basic concepts often overlooked when teaching biology, and lead a small weekly discussion session for students currently enrolled in introductory biology classes.&#13;
&#13;
The instructor for this course, Dr. Julia Khodor, is a member of the HHMI Education Group.
</summary>
<dc:date>2005-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.391 Concept-Centered Teaching, Spring 2006</title>
<link href="https://hdl.handle.net/1721.1/164360" rel="alternate"/>
<author>
<name>Kosinski-Collins, Melissa</name>
</author>
<id>https://hdl.handle.net/1721.1/164360</id>
<updated>2025-12-16T23:14:32Z</updated>
<published>2006-01-01T00:00:00Z</published>
<summary type="text">7.391 Concept-Centered Teaching, Spring 2006
Kosinski-Collins, Melissa
Do you like teaching, but find yourself frustrated by how little students seem to learn? Would you like to try teaching, but are nervous about whether you will be any good at it? Are you interested in new research on science education? Research in science education shows that the greatest obstacle to student learning is the failure to identify and confront the misconceptions with which the students enter the class or those that they acquire during their studies. This weekly seminar course focuses on developing the participants’ ability to uncover and confront student misconceptions and to foster student understanding and retention of key concepts. Participants read primary literature on science education, uncover basic concepts often overlooked when teaching biology, and lead a small weekly discussion session for students currently enrolled in introductory biology classes.&#13;
&#13;
The instructor for this course, Dr. Kosinski-Collins, is a member of the HHMI Education Group.
</summary>
<dc:date>2006-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>20.010J Introduction to Bioengineering (BE.010J), Spring 2006</title>
<link href="https://hdl.handle.net/1721.1/164359" rel="alternate"/>
<author>
<name>Lauffenburger, Douglas A</name>
</author>
<author>
<name>Matsudaira, Paul T.</name>
</author>
<author>
<name>Belcher, Angela M</name>
</author>
<id>https://hdl.handle.net/1721.1/164359</id>
<updated>2025-12-16T23:10:25Z</updated>
<published>2006-01-01T00:00:00Z</published>
<summary type="text">20.010J Introduction to Bioengineering (BE.010J), Spring 2006
Lauffenburger, Douglas A; Matsudaira, Paul T.; Belcher, Angela M
Bioengineering at MIT is represented by the diverse curricula offered by most Departments in the School of Engineering. This course samples the wide variety of bioengineering options for students who plan to major in one of the undergraduate Engineering degree programs. The beginning lectures describe the science basis for bioengineering with particular emphasis on molecular cell biology and systems biology. Bioengineering faculty will then describe the bioengineering options in a particular engineering course as well as the type of research conducted by faculty in the department.
</summary>
<dc:date>2006-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.347 Epigenetic Regulation of Stem Cells, Spring 2014</title>
<link href="https://hdl.handle.net/1721.1/164358" rel="alternate"/>
<author>
<name>Williams, Eric O</name>
</author>
<author>
<name>Subramanian, Vidya</name>
</author>
<id>https://hdl.handle.net/1721.1/164358</id>
<updated>2025-12-16T22:53:33Z</updated>
<published>2014-01-01T00:00:00Z</published>
<summary type="text">7.347 Epigenetic Regulation of Stem Cells, Spring 2014
Williams, Eric O; Subramanian, Vidya
During development a single totipotent cell gives rise to the vast array of cell types present in the adult human body, yet each cell has essentially the same DNA sequence. As cells differentiate, distinct sets of genes must be coordinately activated and repressed, ultimately leading to a cell-type specific pattern of gene expression and a particular cell fate. In eukaryotic organisms, DNA is packaged in a complex protein super structure known as chromatin. Modification and reorganization of chromatin play a critical role in coordinating the cell-type specific gene expression programs that are required as a cell transitions from a pluripotent stem cell to a fully differentiated cell type. Epigenetics refers to such heritable changes that occur in chromatin without altering the primary DNA sequence. This class will focus on the role of epigenetic regulation with respect to developmental fate and also consider the fact that the epigenetic mechanisms discussed have broad implications, including how seemingly normal cells can be transformed into cancerous cells.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2014-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Tuning of Self‐Conductive Polymer as Gas Diffusion Layer for Electrocatalytic Reactions at High Current</title>
<link href="https://hdl.handle.net/1721.1/164357" rel="alternate"/>
<author>
<name>Noh, Hwiyoon</name>
</author>
<author>
<name>Lee, Tae Hoon</name>
</author>
<author>
<name>Ahn, Sang Hyun</name>
</author>
<author>
<name>Davis, Jonathan T</name>
</author>
<author>
<name>Jeong, Daecheol</name>
</author>
<author>
<name>Gounder, Rajamani</name>
</author>
<author>
<name>Smith, Zachary P</name>
</author>
<author>
<name>Boudouris, Bryan W</name>
</author>
<author>
<name>Tackett, Brian M</name>
</author>
<id>https://hdl.handle.net/1721.1/164357</id>
<updated>2026-03-08T03:38:50Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">Structural Tuning of Self‐Conductive Polymer as Gas Diffusion Layer for Electrocatalytic Reactions at High Current
Noh, Hwiyoon; Lee, Tae Hoon; Ahn, Sang Hyun; Davis, Jonathan T; Jeong, Daecheol; Gounder, Rajamani; Smith, Zachary P; Boudouris, Bryan W; Tackett, Brian M
Electrocatalytic conversions offer a promising route for sustainable chemical production using renewable energy. Gas diffusion layers (GDLs) enable selective product formation at high current densities but suffer from electrolyte flooding, and polytetrafluoroethylene (PTFE)-based GDLs typically require metal conductive layers, which constrain catalyst development. A recently developed GDL configuration, electropolymerized poly(3,4-ethylenedioxythiophene) (PEDOT)-coated PTFE, demonstrates notable flooding resistance, but suffers from gas diffusion limitations at elevated currents due to limited gas diffusion through the PEDOT layer. Here, different dopants in PEDOT are exploited to modify the physical properties and enhance gas transport. ClO4−-doped PEDOT exhibits superior performance due to optimized physical structure, leading to increased gas permeance and faradaic efficiency (FE) for CO production during electrocatalytic CO2 reduction. Further optimization of coverage and thickness achieved by adjusting charge density led to an optimal configuration at 33 mC cm−2. This GDL supports various metal electrocatalysts and demonstrates an FE for CO of &gt;90% for over 150 h at −200 mA cm−2 using a commercial silver electrocatalyst. This work highlights the importance of GDL engineering in enhancing performance and durability for long-term electrocatalytic processes.
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Emotions Through Art</title>
<link href="https://hdl.handle.net/1721.1/164356" rel="alternate"/>
<author>
<name>Wu, Christine</name>
</author>
<author>
<name>Kumar, Ila</name>
</author>
<author>
<name>Picard, Rosalind</name>
</author>
<id>https://hdl.handle.net/1721.1/164356</id>
<updated>2026-03-08T03:22:26Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Navigating Emotions Through Art
Wu, Christine; Kumar, Ila; Picard, Rosalind
In this study, we design and deploy a novel system to examine the safety and efficacy of using a chatbot to conduct aspects of art therapy with youth who have experienced developmental trauma, focusing on supporting emotion identification, processing, and expression. This publication describes phase one, gathering feedback on the system from practicing art therapists (n = 17) and making recommendations for how to evolve such work in beneficial ways to meet the needs of trauma-impacted youth. Our findings highlight the potential value of chatbots for trauma-impacted youth as well as important reflection questions these chatbots should ask. Additionally, the study discusses the risk of harm associated with chatbot interventions, particularly if the conversation brings up negative emotions that the chatbot fails to help process. Finally, we end by presenting a set of practitioner-driven recommendations for chatbot designers who are interested in helping trauma-impacted youth understand and cope with their emotions, leveraging art therapy techniques.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging Tradition and Technology: Human-AI Interface for Exploration and Co-Creation of Classical Dance Heritage</title>
<link href="https://hdl.handle.net/1721.1/164355" rel="alternate"/>
<author>
<name>Pataranutaporn, Pat</name>
</author>
<author>
<name>Archiwaranguprok, Chayapatr</name>
</author>
<author>
<name>Bhongse-tong, Piyaporn</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<author>
<name>Klunchun, Pichet</name>
</author>
<id>https://hdl.handle.net/1721.1/164355</id>
<updated>2026-03-08T03:22:20Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Bridging Tradition and Technology: Human-AI Interface for Exploration and Co-Creation of Classical Dance Heritage
Pataranutaporn, Pat; Archiwaranguprok, Chayapatr; Bhongse-tong, Piyaporn; Maes, Pattie; Klunchun, Pichet
This paper introduces Text2Tradition, a system designed to bridge the epistemological gap between modern language processing and traditional dance knowledge by translating user-generated prompts into Thai classical dance repertoire. Our system interprets user prompts through the lens of Mae Bot Yai—the 59 foundational movements constituting the vocabulary of traditional Thai dance—and incorporates six choreographic elements that encode centuries of cultural knowledge. This research explores the fertile tension between two knowledge systems: the embodied, culturally-specific wisdom of traditional dance and the data-driven, statistically-derived, and often Western-centric intelligence of LLMs. By mediating between these epistemologies, we highlight the potential of AI-mediated systems not only to preserve traditional forms but also to foster new cultural co-creations, suggesting that these tensions can be harnessed to stimulate cultural dialogue and innovation.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Write-Optimized Distributed B+Tree Index on Disaggregated Memory</title>
<link href="https://hdl.handle.net/1721.1/164354" rel="alternate"/>
<author>
<name>Kraska, Tim</name>
</author>
<id>https://hdl.handle.net/1721.1/164354</id>
<updated>2026-03-08T03:21:59Z</updated>
<published>2025-04-15T00:00:00Z</published>
<summary type="text">A Write-Optimized Distributed B+Tree Index on Disaggregated Memory
Kraska, Tim
If it were possible to scale memory independently from compute, it would be feasible to dynamically adjust the amount of memory based on the workload, further enabling better resource utilization. Consider a workload that is dynamic in the number of queries but has very strict response-time requirements that can only be met if data is kept in-memory. In this case, the separation of compute and memory would make it possible to scale compute with the number of queries while keeping all the data constantly in-memory. This design principle is already used by services such as Google, which keeps the entire web index in-memory.
</summary>
<dc:date>2025-04-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven AI Avatars for Valuation in Dating Scenarios</title>
<link href="https://hdl.handle.net/1721.1/164353" rel="alternate"/>
<author>
<name>Baradari, Dünya</name>
</author>
<author>
<name>Polimetla, Tejaswi</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164353</id>
<updated>2026-03-08T03:22:10Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Data-Driven AI Avatars for Valuation in Dating Scenarios
Baradari, Dünya; Polimetla, Tejaswi; Maes, Pattie
Dating applications facilitate partner selection by presenting curated information about potential matches. However, traditional dating profiles often fail to convey the depth of a person’s personality, communication style, and lived experience, leading to inefficiencies in the match-finding process. This work-in-progress study introduces and evaluates two novel, data-driven dating interfaces: (1) a Data Dashboard, which aggregates and visualizes insights from a user’s digital footprint, and (2) an AI Avatar, an interactive, voice-enabled model using personal data to simulate real-world interactions. A user study with nine participants comparing these interfaces against traditional dating profiles reveals that the Data Dashboard enables more accurate personality assessments but imposes a high cognitive load. Meanwhile, the AI Avatar enhances engagement and enjoyability but raises concerns about trust and emotional investment. Our findings highlight the challenge of maintaining authenticity in AI-mediated interactions and bridging the gap between digital and real-life personas.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cultivating a Supportive Sphere: Designing Technology to Increase Social Support for Foster-Involved Youth</title>
<link href="https://hdl.handle.net/1721.1/164352" rel="alternate"/>
<author>
<name>Kumar, Ila</name>
</author>
<author>
<name>Ferguson, Craig</name>
</author>
<author>
<name>Wu, Jiayi</name>
</author>
<author>
<name>Picard, Rosalind</name>
</author>
<id>https://hdl.handle.net/1721.1/164352</id>
<updated>2026-03-08T03:22:15Z</updated>
<published>2025-05-02T00:00:00Z</published>
<summary type="text">Cultivating a Supportive Sphere: Designing Technology to Increase Social Support for Foster-Involved Youth
Kumar, Ila; Ferguson, Craig; Wu, Jiayi; Picard, Rosalind
Approximately 400,000 youth in the US are living in foster care due to experiences with abuse or neglect at&#13;
home [17]. For multiple reasons, these youth often don’t receive adequate social support from those around&#13;
them. Despite technology’s potential, very little work has explored how these tools can provide more support&#13;
to foster-involved youth. To begin to fill this gap, we worked with current and former foster-involved youth&#13;
to develop the first digital tool that aims to increase social support for this population, creating a novel system&#13;
in which users complete reflective check-ins in an online community setting. We then conducted a pilot study&#13;
with 15 current and former foster-involved youth, comparing the effect of using the app for two weeks to&#13;
two weeks of no intervention. We collected qualitative and quantitative data, which demonstrated that this&#13;
type of interface can provide youth with types of social support that are often not provided by foster care&#13;
services and other digital interventions. The paper details the motivation behind the app, the trauma-informed&#13;
design process, and insights gained from this initial evaluation study. Finally, the paper concludes with&#13;
recommendations for designing digital tools that effectively provide social support to foster-involved youth.
</summary>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Allocation Multiplicity: Evaluating the Promises of the Rashomon Set</title>
<link href="https://hdl.handle.net/1721.1/164351" rel="alternate"/>
<author>
<name>Jain, Shomik</name>
</author>
<author>
<name>Wang, Margaret</name>
</author>
<author>
<name>Creel, Kathleen</name>
</author>
<author>
<name>Wilson, Ashia</name>
</author>
<id>https://hdl.handle.net/1721.1/164351</id>
<updated>2026-03-08T03:22:46Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Allocation Multiplicity: Evaluating the Promises of the Rashomon Set
Jain, Shomik; Wang, Margaret; Creel, Kathleen; Wilson, Ashia
The Rashomon set of equally-good models promises less discriminatory algorithms, reduced outcome homogenization, and fairer decisions through model ensembles or reconciliation. However, we argue from the perspective of allocation multiplicity that these promises may remain unfulfilled. When there are more qualified candidates than resources available, many different allocations of scarce resources can achieve the same utility. This space of equal-utility allocations may not be faithfully reflected by the Rashomon set, as we show in a case study of healthcare allocations. We attribute these unfulfilled promises to several factors: limitations in empirical methods for sampling from the Rashomon set, the standard practice of deterministically selecting individuals with the lowest risk, and structural biases that cause all equally-good models to view some qualified individuals as inherently risky.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive Sketchpad: A Multimodal Tutoring System for Collaborative, Visual Problem-Solving</title>
<link href="https://hdl.handle.net/1721.1/164350" rel="alternate"/>
<author>
<name>Lee, Jimin</name>
</author>
<author>
<name>Chen, Steven-Shine</name>
</author>
<author>
<name>Liang, Paul Pu</name>
</author>
<id>https://hdl.handle.net/1721.1/164350</id>
<updated>2026-03-08T03:22:21Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Interactive Sketchpad: A Multimodal Tutoring System for Collaborative, Visual Problem-Solving
Lee, Jimin; Chen, Steven-Shine; Liang, Paul Pu
Humans have long relied on visual aids like sketches and diagrams to support reasoning and problem-solving. Visual tools, like auxiliary lines in geometry or graphs in calculus, are essential for understanding complex ideas. However, many tutoring systems remain text-based, providing feedback only through natural language. Leveraging recent advances in Large Multimodal Models (LMMs), this paper introduces Interactive Sketchpad, a tutoring system that combines language-based explanations with interactive visualizations to enhance learning. Built on a pre-trained LMM, Interactive Sketchpad is fine-tuned to provide step-by-step guidance in both text and visuals, enabling natural multimodal interaction with the student. Accurate and robust diagrams are generated by incorporating code execution into the reasoning process. User studies conducted on math problems such as geometry, calculus, and trigonometry demonstrate that Interactive Sketchpad leads to improved task comprehension, problem-solving accuracy, and engagement levels, highlighting its potential for transforming educational technologies. All code is available at: https://stevenshinechen.github.io/interactivesketchpad/.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meeting at Crossroads: An exploration of playful listening through a co-creative AI game</title>
<link href="https://hdl.handle.net/1721.1/164349" rel="alternate"/>
<author>
<name>Lee, Cassandra</name>
</author>
<author>
<name>Dimitrakopoulou, Dimitra</name>
</author>
<author>
<name>Roy, Deb</name>
</author>
<id>https://hdl.handle.net/1721.1/164349</id>
<updated>2026-03-08T03:22:07Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Meeting at Crossroads: An exploration of playful listening through a co-creative AI game
Lee, Cassandra; Dimitrakopoulou, Dimitra; Roy, Deb
Active listening is a well-established cornerstone of empathetic communication and a hallmark of “civic competence”, but is a challenging and energy-consuming skill. Games offer a provocative lens to consider how active listening could be explored playfully. In this paper, we present Crossroads, an interactive social game which makes active listening fun by inviting players to co-create images about one another’s personal experiences. Deployed through a tablet-mobile web app, players take turns acting in ‘listener roles’ to generate AI images, and eventually uncover a collective picture along a “crossroad” shaped map. An initial mixed-method evaluation with 36 users demonstrates that players find the experience highly engaging and feel especially heard during in-game conversations. This work contributes a novel game which uses AI to mediate empathetic dialogue, and surfaces questions about the trade-offs of gamifying listening.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atmospheric Impacts of Hydrogen as an Aviation Fuel</title>
<link href="https://hdl.handle.net/1721.1/164348" rel="alternate"/>
<author>
<name>Gibney, Evan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164348</id>
<updated>2025-12-17T03:06:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Atmospheric Impacts of Hydrogen as an Aviation Fuel
Gibney, Evan M.
Hydrogen is being investigated as a promising zero-carbon aviation fuel, offering the potential to eliminate direct CO₂ emissions while being produced with low lifecycle greenhouse gas emissions. Despite these benefits, there are additional indirect climate and air quality costs associated with direct hydrogen emissions which are often overlooked. We quantify the perturbation in the atmospheric composition associated with the introduction of hydrogen-fueled aircraft, broadening the current understanding of the non-CO₂ effects of these fleets. We use the GEOS-Chem High Performance (GCHP) global chemistry-transport model to conduct a spatially discretized, multi-year impact assessment of the atmospheric impacts of hydrogen-fueled aviation. We implement a flux surface boundary condition for hydrogen to provide an improved representation of the soil sink, relative to the default fixed boundary condition. This results in a net surface exchange of −16.7 Tg H₂ per year. Two hydrogen scenarios are evaluated using the updated GCHP implementation, which are representative of high and low mitigation scenarios for direct hydrogen emission rates. For the two scenarios, respectively, we observe increases in the mean atmospheric methane mixing ratio of 3.34 ppbv and 10.7 ppbv, corresponding to increases in methane lifetime of 0.24% and 0.77%. The increased methane lifetime as well as in-situ oxidation of stratospheric hydrogen results in an increased stratospheric water vapor burden of 0.42 Tg and 2.3 Tg (or 0.052% and 0.28%) for the high and low mitigation scenarios, respectively. Additionally, we show the perturbation to tropospheric ozone levels to be between −0.047% and +0.30%, where the decreased ozone results from the removal of NOₓ emissions associated with fuel cells and low hydrogen emission rates. This analysis provides the foundation for understanding the implications of potential future hydrogen-based aviation fleets on climate and air quality.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet</title>
<link href="https://hdl.handle.net/1721.1/164347" rel="alternate"/>
<author>
<name>Ocharoenchai, Nanticha</name>
</author>
<id>https://hdl.handle.net/1721.1/164347</id>
<updated>2025-12-17T03:06:34Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet
Ocharoenchai, Nanticha
Discussions about climate change and carbon sequestration have largely revolved around plant structures we can easily see, like leaves that absorb CO₂ for photosynthesis and woody trunks that store carbon as biomass. Carbon credits that companies and consumers buy to compensate for emissions they’ve produced are primarily calculated based on these parts, as are models that predict climate change impacts. But researchers are now beginning to understand that what we see aboveground is only part of the equation. The other part lies beneath our feet in an intricate, expansive, covert realm where plant roots, microbial communities and soil dynamics interact. These belowground systems are crucial for cycling carbon through the Earth and regulating the climate, but relatively little is known about them compared to aboveground systems. This is especially true in tropical regions, where one-third of the world’s terrestrial carbon storage lies. However, these systems are evolving quickly with climate change, contradicting what models have previously projected. With so many global decisions based on such models, these uncertainties hold planetary significance for our future. A group of scientists is fighting an uphill battle, racing against time to understand this understudied field.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Winter</title>
<link href="https://hdl.handle.net/1721.1/164346" rel="alternate"/>
<author>
<name>White, Mackenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/164346</id>
<updated>2025-12-17T03:06:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Engineering Winter
White, Mackenzie
As winters warm and snowfall becomes less reliable, ski resorts worldwide increasingly depend on artificial snow to stay open. Snowmaking, once a stopgap, has become the backbone of entire seasons in a sprawling choreography of pumps and pressurized mist designed to hold trails together. At resorts like Vermont’s Bromley Mountain, snowmakers work through the night, drawing millions of gallons from limited reservoirs and operating within narrowing windows of cold air. What emerges is a portrait of winter in transition: less predictable, more expensive, increasingly manufactured. The efforts to preserve winter recreation carry growing costs in energy, water, and equitable access. Many smaller, independent ski areas struggle to meet the demands of climate adaptation, while larger resorts expand their operations, widening the divide in who can afford to sustain operations. In the American West, where rivers depend heavily on snowpack melt, the spread of snowmaking ties winter recreation to a water system already under immense strain. As artificial snow becomes the norm, winter is increasingly a season bought, built, and rationed, raising the question of whether attempts to keep the season alive are accelerating the changes that threaten to erase it.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Influence of Electronic Structure and Lattice Dynamics on Oxygen Ion Transport in Solid-State Ionic Conductors</title>
<link href="https://hdl.handle.net/1721.1/164345" rel="alternate"/>
<author>
<name>Vivona, Daniele</name>
</author>
<id>https://hdl.handle.net/1721.1/164345</id>
<updated>2025-12-17T03:03:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Influence of Electronic Structure and Lattice Dynamics on Oxygen Ion Transport in Solid-State Ionic Conductors
Vivona, Daniele
Solid-state oxygen ion conductors are crucial for electrochemical devices such as separation membranes, solid-oxide electrolyzers, fuel cells, and sensors, serving as a technological link between renewable energy generation and consumption. Currently, these conductors are limited by slow transport rates and high operational temperatures, which pose challenges and increase costs. Developing faster conductors that operate at lower temperatures requires reducing activation energy and enhancing the pre-exponential factor in the Arrhenius equation of conductivity. However, our understanding of the fundamental processes in oxygen ion transport and methods to improve oxygen ion conductivity remain limited. This thesis focuses on understanding the fundamental mechanisms that regulate oxygen ion transport. First, the migration energy barrier in perovskite oxides is linked to an electronic energy penalty from local charge screening near the hopping ion. The energy of local electronic states is identified as a fundamental descriptor of the migration barrier. Next, migration entropy and phonon density of states (DOS) are highlighted as the main factors regulating the pre-exponential factor of oxygen ion conductivity across different materials. The phonons of oxygen ions near the hopping ion significantly contribute to migration entropy, suggesting that migration entropy can be tuned by designing the phonon dynamics of these atoms. These results imply that a widely observed correlation between increasing pre-exponential factors and activation energy arises from coupling local electronic energy states and phonons. The results are extended to the formation of oxygen vacancies and interstitials in perovskite and Ruddlesden-Popper oxides. We find that defect formation energy rises with defect formation entropy, which is linked to electronic energy states interacting with phonons.
In perovskite oxides, lower vacancy formation entropy is correlated with increasing oxygen phonon band center and shortening bond lengths with oxygen vacancy formation. In Ruddlesden-Popper oxides, lower interstitial formation entropy is associated with reduced octahedral tilting and local phonon changes. This thesis establishes a theoretical foundation for treating migration entropy and defect formation entropy as design variables in next-generation ionic conductors. By highlighting the impact of electronic structure and lattice dynamics on energy barriers and entropic drivers, the findings suggest new pathways for material design through the strategic separation of these factors and the intelligent design of lattice moieties in oxygen ion transport environments.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.344 Antibiotics, Toxins, and Protein Engineering, Spring 2007</title>
<link href="https://hdl.handle.net/1721.1/164344" rel="alternate"/>
<author>
<name>Koehrer, Caroline</name>
</author>
<author>
<name>Sassanfar, Mandana</name>
</author>
<id>https://hdl.handle.net/1721.1/164344</id>
<updated>2025-12-16T22:59:01Z</updated>
<published>2007-01-01T00:00:00Z</published>
<summary type="text">7.344 Antibiotics, Toxins, and Protein Engineering, Spring 2007
Koehrer, Caroline; Sassanfar, Mandana
The lethal poison Ricin (best known as a weapon of bioterrorism), Diphtheria toxin (the causative agent of a highly contagious bacterial disease), and the widely used antibiotic tetracycline have one thing in common: They specifically target the cell’s translational apparatus and disrupt protein synthesis.&#13;
&#13;
In this course, we will explore the mechanisms of action of toxins and antibiotics, their roles in everyday medicine, and the emergence and spread of drug resistance. We will also discuss the identification of new drug targets and how we can manipulate the protein synthesis machinery to provide powerful tools for protein engineering and potential new treatments for patients with devastating diseases, such as cystic fibrosis and muscular dystrophy.&#13;
&#13;
This course is one of many Advanced Undergraduate Seminars offered by the Biology Department at MIT. These seminars are tailored for students with an interest in using primary research literature to discuss and learn about current biological research in a highly interactive setting. Many instructors of the Advanced Undergraduate Seminars are postdoctoral scientists with a strong interest in teaching.
</summary>
<dc:date>2007-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>OpenMC Interpretation of FNS SINBAD Shielding Benchmark Experiments</title>
<link href="https://hdl.handle.net/1721.1/164343" rel="alternate"/>
<author>
<name>Ebiwonjumi, Bamidele</name>
</author>
<author>
<name>Segantin, Stefano</name>
</author>
<author>
<name>Peterson, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/164343</id>
<updated>2026-03-08T03:38:47Z</updated>
<published>2025-01-02T00:00:00Z</published>
<summary type="text">OpenMC Interpretation of FNS SINBAD Shielding Benchmark Experiments
Ebiwonjumi, Bamidele; Segantin, Stefano; Peterson, Ethan
The Fusion Neutron Source (FNS) clean benchmark experiments on tungsten, vanadium, and beryllium assemblies from the SINBAD (Shielding Integral Benchmark Archive and Database) are analyzed to experimentally validate OpenMC (version 0.14.1-dev) fusion neutronics capabilities. The assemblies were irradiated with a 14-MeV deuterium-tritium neutron source. Neutron spectra, photon spectra, reaction rates, gamma heating rates (GHRs), and tritium production rates (TPRs) are compared to measured data in the experimental assemblies and MCNP-6.2 results. In the tungsten case, slight overestimations of the experimental data were observed in the neutron spectra, and the photon spectra agreed well with the experiments. Most of the GHRs agreed with the measured data within the range of experimental uncertainty in the tungsten and vanadium assemblies. In the vanadium assembly, the calculated neutron spectra underestimated the experiments in the low energy region while the photon spectra were well calculated when compared to experiments. The most noticeable discrepancies with experimental data in the gamma heating were observed at detector positions closest to the source. For the reaction rates, notable discrepancies with experimental data were seen at the front and rear of the assemblies. Compared to experiments, the OpenMC neutron spectra were well predicted in the beryllium assembly, whereas the calculated fission reaction rate and TPRs overestimated the experiments, an observation similar to that which has been reported by other authors. The average, overall calculation-to-experiment ratio (C/E) over nine TPR and seven GHR measurements were 1.03 ± 0.20 and 0.95 ± 0.14, respectively. In the case of verification, the OpenMC results of the benchmark calculations indicated comparable accuracy to MCNP-6.2. In general, the validation exercise showed that OpenMC can be used to analyze the fusion neutronics shielding benchmark problems.
</summary>
<dc:date>2025-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>SoK: Acoustic Side Channels</title>
<link href="https://hdl.handle.net/1721.1/164342" rel="alternate"/>
<author>
<name>Wang, Ping</name>
</author>
<author>
<name>Nagaraja, Shishir</name>
</author>
<author>
<name>Bourquard, Aurélien</name>
</author>
<author>
<name>Gao, Haichang</name>
</author>
<author>
<name>Yan, Jeff</name>
</author>
<id>https://hdl.handle.net/1721.1/164342</id>
<updated>2026-03-08T03:31:57Z</updated>
<published>2025-11-26T00:00:00Z</published>
<summary type="text">SoK: Acoustic Side Channels
Wang, Ping; Nagaraja, Shishir; Bourquard, Aurélien; Gao, Haichang; Yan, Jeff
Acoustic side channels (ASCs) have been known for several decades, highlighting the tangible security risks posed by unintended sound emissions from computing and electronic systems. Their existence has drawn considerable attention from researchers, driving rapid progress in both attack methodologies and defense mechanisms across a wide range of scenarios. In this paper, we provide a state-of-the-art analysis of ASCs, covering all the significant academic research in the area. First, we clarify existing ambiguities and conceptual confusion, proposing a clear definition of ASC. Second, we analyse the characteristics of known ASCs, discuss their security implications, and propose the first taxonomy. Next, we summarize attack techniques, discuss countermeasures, and identify areas for future research. We also link side channels and inverse problems, two fields that appear to be completely isolated from each other but have deep connections.
</summary>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Umm Kamel’s Affair: How Infidelity Liberated the Night Sky in Jabal ‘Amil</title>
<link href="https://hdl.handle.net/1721.1/164341" rel="alternate"/>
<author>
<name>Nahleh, Mohamad</name>
</author>
<id>https://hdl.handle.net/1721.1/164341</id>
<updated>2026-03-08T03:38:46Z</updated>
<published>2024-01-02T00:00:00Z</published>
<summary type="text">Umm Kamel’s Affair: How Infidelity Liberated the Night Sky in Jabal ‘Amil
Nahleh, Mohamad
Weakened by the expansion of several imperial and colonial projects, the inhabitants of Jabal ‘Amil survived as second-class citizens, severed from the urban expression of Lebanese nationalism, and having to formulate their identity amid countless transgressions on their scholarship and literary production. It is thus in the spectacles of the universe and the mysteries of the cosmos that they inscribed fragments of their oral legacy, turning the night sky into an archive that no empire could burn or colonize. And yet it is light pollution, leaking from the same cities they were once forced to nourish, that quickly established itself as the main transgressor, clearing the faintest stories in their celestial library. Although distant manifestations of Islamic cosmology could no longer animate their rural nights, new alterations in the sky after dark, no matter how violent, have proven worthy carri-ers of their modern myths and legends. And it is onto the loudest object in their polluted sky, the Israeli reconnaissance drone IAI Searcher MK, that they grafted the tale of their legendary matriarch Umm Kamel. I argue that Umm Kamel’s physical and symbolic ascent into the sky was orchestrated by a modern generation of ‘Amilis whose infidelity to the celestial stories authored by their ancestors fortified their ability to transform the combined pressures of pollution and colonization. United by their efforts to forge new imaginaries around a starless night, they invite reflection on the possibility (and responsibility) of confronting the sky we have together inherited rather than lamenting the one we have lost. In tracing Umm Kamel’s transformation from figure to constellation, I contend that their cosmic interventions set the stage for new alliances between design and darkness, and ultimately, for a more expanded imagination of night design, particularly within the context of the climate crisis.
</summary>
<dc:date>2024-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ophthalmology Optical Coherence Tomography Databases for Artificial Intelligence Algorithm: A Review</title>
<link href="https://hdl.handle.net/1721.1/164340" rel="alternate"/>
<author>
<name>Restrepo, David</name>
</author>
<author>
<name>Quion, Justin Michael</name>
</author>
<author>
<name>Do Carmo Novaes, Frederico</name>
</author>
<author>
<name>Azevedo Costa, Iago Diogenes</name>
</author>
<author>
<name>Vasquez, Constanza</name>
</author>
<author>
<name>Bautista, Alyssa Nicole</name>
</author>
<author>
<name>Quiminiano, Ellaine</name>
</author>
<author>
<name>Lim, Patricia Abigail</name>
</author>
<author>
<name>Mwavu, Roger</name>
</author>
<author>
<name>Celi, Leo Anthony</name>
</author>
<author>
<name>Nakayama, Luis Filipe</name>
</author>
<id>https://hdl.handle.net/1721.1/164340</id>
<updated>2026-03-08T03:38:47Z</updated>
<published>2024-04-02T00:00:00Z</published>
<summary type="text">Ophthalmology Optical Coherence Tomography Databases for Artificial Intelligence Algorithm: A Review
Restrepo, David; Quion, Justin Michael; Do Carmo Novaes, Frederico; Azevedo Costa, Iago Diogenes; Vasquez, Constanza; Bautista, Alyssa Nicole; Quiminiano, Ellaine; Lim, Patricia Abigail; Mwavu, Roger; Celi, Leo Anthony; Nakayama, Luis Filipe
BACKGROUND: Imaging plays a pivotal role in eye assessment. With the introduction of advanced machine learning and artificial intelligence (AI), the focus has shifted to imaging datasets in ophthalmology. While disparities and health inequalities hidden within data are well-documented, the ophthalmology field faces specific challenges to the creation and maintenance of datasets. Optical Coherence Tomography (OCT) is useful for the diagnosis and monitoring of retinal pathologies, making it valuable for AI applications. This review aims to identify and compare the landscape of publicly available optical coherence tomography databases for AI applications.&#13;
METHODS: We conducted a literature review on OCT and AI articles with publicly accessible datasets, using PubMed, Scopus, and Web of Science databases. The review retrieved 183 articles, and after full-text analysis, 50 articles were included. From the included articles, 8 publicly available OCT datasets were identified, focusing on patient demographics and clinical details for thorough assessment and comparison.&#13;
RESULTS: The resulting datasets encompass 154,313 images collected from Spectralis, Cirrus HD, Topcon 3D, and Bioptigen devices. These datasets included normal exams, age-related macular degeneration, and diabetic maculopathy, among others. Comprehensive demographic information is available in one dataset and the USA is the most represented population.&#13;
DISCUSSION: Current publicly available OCT databases for AI applications exhibit limitations, stemming from their non-representative nature and the lack of comprehensive demographic information. Limited datasets hamper research and equitable AI development. To promote equitable AI algorithmic development in ophthalmology, there is a need for the creation and dissemination of more representative datasets.
</summary>
<dc:date>2024-04-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Market Design for Capacity Sharing in Networks</title>
<link href="https://hdl.handle.net/1721.1/164339" rel="alternate"/>
<author>
<name>Amin, Saurabh</name>
</author>
<author>
<name>Jaillet, Patrick</name>
</author>
<author>
<name>Pulyassary, Haripriya</name>
</author>
<author>
<name>Wu, Manxi</name>
</author>
<id>https://hdl.handle.net/1721.1/164339</id>
<updated>2026-03-08T03:31:54Z</updated>
<published>2025-11-21T00:00:00Z</published>
<summary type="text">Market Design for Capacity Sharing in Networks
Amin, Saurabh; Jaillet, Patrick; Pulyassary, Haripriya; Wu, Manxi
We study a market mechanism that sets edge prices to incentivize strategic agents to efficiently share limited network capacity. In this market, agents form coalitions, with each coalition sharing a unit capacity of a selected route and making payments to cover edge prices. Our focus is on the existence and computation of market equilibrium, where challenges arise from the interdependence between coalition formation among strategic agents with heterogeneous preferences and route selection that induces a network flow under integral capacity constraints. To address this interplay between coalition formation and network capacity utilization, we introduce a novel approach based on combinatorial auction theory and network flow theory. We establish sufficient conditions on the network topology and agents' preferences that guarantee both the existence and polynomial-time computation of a market equilibrium. Additionally, we identify a particular market equilibrium that maximizes utilities for all agents and is equivalent to the classical Vickrey-Clarke-Groves mechanism. Furthermore, we extend our results to multi-period settings and general networks, showing that when the sufficient conditions are not met, an equilibrium may still exist but requires more complex, path-based pricing mechanisms that set differentiated prices based on agents' preference parameters.
</summary>
<dc:date>2025-11-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forage: Understanding RAG-based Sensemaking for Community Conversations</title>
<link href="https://hdl.handle.net/1721.1/164338" rel="alternate"/>
<author>
<name>Schroeder, Hope</name>
</author>
<author>
<name>Beeferman, Doug</name>
</author>
<author>
<name>Detwiller, Maya</name>
</author>
<author>
<name>Dimitrakopoulou, Dimitra</name>
</author>
<author>
<name>Roy, Deb</name>
</author>
<id>https://hdl.handle.net/1721.1/164338</id>
<updated>2026-03-08T03:22:24Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Forage: Understanding RAG-based Sensemaking for Community Conversations
Schroeder, Hope; Beeferman, Doug; Detwiller, Maya; Dimitrakopoulou, Dimitra; Roy, Deb
We introduce Forage, a RAG-based and LLM-augmented search engine, which we apply to the problem of sensemaking for community conversation data. We report on formative user studies introducing Forage to two distinct user groups: NPR journalists and municipal staff in the city of Durham, North Carolina. We taxonomize the query types users make with the tool, including use cases such as synthesizing insights across conversations and finding content about a particular subject. We find that users tend to gravitate towards using the system for synthesis more than for pure search. We report on challenges and opportunities surfaced by performing sensemaking with an open-ended interface like Forage, such as the benefits of finding content quickly, but also the challenges users face interacting with a system in natural language. Insights from this formative study confirm the usefulness of Forage for sensemaking, but also make follow-up work, such as systematically evaluating system performance and developing appropriate design, urgent.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative artificial intelligence in supply chain and operations management: a capability-based framework for analysis and implementation</title>
<link href="https://hdl.handle.net/1721.1/164337" rel="alternate"/>
<author>
<name>Jackson, Ilya</name>
</author>
<author>
<name>Ivanov, Dmitry</name>
</author>
<author>
<name>Dolgui, Alexandre</name>
</author>
<author>
<name>Namdar, Jafar</name>
</author>
<id>https://hdl.handle.net/1721.1/164337</id>
<updated>2026-03-08T03:38:45Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Generative artificial intelligence in supply chain and operations management: a capability-based framework for analysis and implementation
Jackson, Ilya; Ivanov, Dmitry; Dolgui, Alexandre; Namdar, Jafar
This research examines the transformative potential of artificial intelligence (AI) in general and Generative AI (GAI) in particular in supply chain and operations management (SCOM). Through the lens of the resource-based view and based on key AI capabilities such as learning, perception, prediction, interaction, adaptation, and reasoning, we explore how AI and GAI can impact 13 distinct SCOM decision-making areas. These areas include but are not limited to demand forecasting, inventory management, supply chain design, and risk management. With its outcomes, this study provides a comprehensive understanding of AI and GAI's functionality and applications in the SCOM context, offering a practical framework for both practitioners and researchers. The proposed framework systematically identifies where and how AI and GAI can be applied in SCOM, focussing on decision-making enhancement, process optimisation, investment prioritisation, and skills development. Managers can use it as a guidance to evaluate their operational processes and identify areas where AI and GAI can deliver improved efficiency, accuracy, resilience, and overall effectiveness. The research underscores that AI and GAI, with their multifaceted capabilities and applications, open a revolutionary potential and substantial implications for future SCOM practices, innovations, and research.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Direct Code Execution</title>
<link href="https://hdl.handle.net/1721.1/164336" rel="alternate"/>
<author>
<name>Huang, Yibo</name>
</author>
<author>
<name>Qiu, Yiming</name>
</author>
<author>
<name>Ding, Daqian</name>
</author>
<author>
<name>Kon, Patrick Tser Jern</name>
</author>
<author>
<name>Zhang, Yiwen</name>
</author>
<author>
<name>Mao, Yuzhou</name>
</author>
<author>
<name>Bhatnagar, Archit</name>
</author>
<author>
<name>Chowdhury, Mosharaf</name>
</author>
<author>
<name>Devadas, Srinivas</name>
</author>
<author>
<name>Xing, Jiarong</name>
</author>
<author>
<name>Chen, Ang</name>
</author>
<id>https://hdl.handle.net/1721.1/164336</id>
<updated>2026-03-08T03:31:55Z</updated>
<published>2025-11-17T00:00:00Z</published>
<summary type="text">Remote Direct Code Execution
Huang, Yibo; Qiu, Yiming; Ding, Daqian; Kon, Patrick Tser Jern; Zhang, Yiwen; Mao, Yuzhou; Bhatnagar, Archit; Chowdhury, Mosharaf; Devadas, Srinivas; Xing, Jiarong; Chen, Ang
We propose remote direct code execution (RDX), which elevates the power of RDMA from memory access to code execution. We target runtime extension frameworks such as Wasm filters, BPF programs, and UDF functions, where RDX enables an agentless architecture that unlocks capabilities such as fast extension injection, update consistency guarantees, and minimal resource contention. We outline the roadmap for RDX around a new CodeFlow abstraction, encompassing programming remote extensions, exposing management stubs, remotely validating and JIT compiling code, seamlessly linking code to local context, managing remote extension state, and synchronizing code to targets. The case studies and initial results demonstrate the feasibility of RDX and its potential to spark the next wave of RDMA innovations.
HotNets ’25, College Park, MD, USA
</summary>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>User Adoption of Intelligent Environments: A Review of Technology Adoption Models, Challenges, and Prospects</title>
<link href="https://hdl.handle.net/1721.1/164335" rel="alternate"/>
<author>
<name>FakhrHosseini, Shabnam</name>
</author>
<author>
<name>Chan, Kathryn</name>
</author>
<author>
<name>Lee, Chaiwoo</name>
</author>
<author>
<name>Jeon, Myounghoon</name>
</author>
<author>
<name>Son, Heesuk</name>
</author>
<author>
<name>Rudnik, John</name>
</author>
<author>
<name>Coughlin, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/164335</id>
<updated>2026-03-08T03:38:44Z</updated>
<published>2024-02-16T00:00:00Z</published>
<summary type="text">User Adoption of Intelligent Environments: A Review of Technology Adoption Models, Challenges, and Prospects
FakhrHosseini, Shabnam; Chan, Kathryn; Lee, Chaiwoo; Jeon, Myounghoon; Son, Heesuk; Rudnik, John; Coughlin, Joseph
Recent technological advancements have enabled the development of smarter (more automated) and more intelligent (adaptable) environments. To understand what factors lead users to reject or adopt Intelligent Environments (IEs), we reviewed nine prominent technology adoption theories. We conducted a literature review to investigate the acceptance and adoption of different types of IEs. We found that perceived usefulness, ease of use, perceived control or self-efficacy, affect and enjoyment, and perceived risks are the common factors across the studies explaining the adoption of IEs. However, shortcomings in the design and methods of the reviewed studies present major concerns in the generalizability and application of existing theories to emerging IEs. We identify eight lacunae in the existing literature and propose a new conceptual model for explaining the adoption of IEs. Through this study, we contribute to the formulation of the theoretical background for the successful introduction of IEs and their integration into users’ everyday life.
</summary>
<dc:date>2024-02-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe and Secure Control of Connected and Automated Vehicles: An Event-Triggered Control Approach using Trust-Aware Robust Control Barrier Functions</title>
<link href="https://hdl.handle.net/1721.1/164334" rel="alternate"/>
<author>
<name>Ahmad, H M Sabbir</name>
</author>
<author>
<name>Sabouni, Ehsan</name>
</author>
<author>
<name>Xiao, Wei</name>
</author>
<author>
<name>Cassandras, Christos</name>
</author>
<author>
<name>Li, Wenchao</name>
</author>
<id>https://hdl.handle.net/1721.1/164334</id>
<updated>2026-03-08T03:31:56Z</updated>
<published>2025-11-18T00:00:00Z</published>
<summary type="text">Safe and Secure Control of Connected and Automated Vehicles: An Event-Triggered Control Approach using Trust-Aware Robust Control Barrier Functions
Ahmad, H M Sabbir; Sabouni, Ehsan; Xiao, Wei; Cassandras, Christos; Li, Wenchao
We address the security of a network of Connected and Automated Vehicles (CAVs) cooperating to safely navigate through a conflict area (e.g., traffic intersections, merging roadways, roundabouts). Previous studies have shown that such a network can be targeted by adversarial attacks causing traffic jams or safety violations resulting in collisions. We focus on attacks targeting the V2X communication network used to share vehicle data, and also consider uncertainties due to noise in sensor measurements and communication channels. To combat these, motivated by recent work on the safe control of CAVs, we propose a trust-aware robust event-triggered decentralized control and coordination framework that can provably guarantee safety. We maintain a trust metric for each vehicle in the network, computed based on its behavior and used to balance the tradeoff between conservativeness (deeming every vehicle untrustworthy) and guaranteed safety and performance. It is important to highlight that our framework is invariant to the specific choice of the trust framework. Based on this framework, we propose an attack detection and mitigation scheme with twofold benefits: (i) the trust framework is immune to false positives, and (ii) it provably guarantees safety against false positive cases which may arise from a poor choice of trust framework. We use extensive simulations in SUMO and CARLA to validate the theoretical guarantees and demonstrate the efficacy of our proposed scheme to detect and mitigate adversarial attacks. The code for the simulated scenarios can be found at https://github.com/SabbirAhmad26/Trust_based_CBF.
</summary>
<dc:date>2025-11-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Three-dimensional, soft magnetic-cored solenoids via multi-material extrusion</title>
<link href="https://hdl.handle.net/1721.1/164333" rel="alternate"/>
<author>
<name>Cañada, Jorge</name>
</author>
<author>
<name>Kim, Hyeonseok</name>
</author>
<author>
<name>Velásquez-García, Luis Fernando</name>
</author>
<id>https://hdl.handle.net/1721.1/164333</id>
<updated>2026-03-08T03:38:45Z</updated>
<published>2024-02-20T00:00:00Z</published>
<summary type="text">Three-dimensional, soft magnetic-cored solenoids via multi-material extrusion
Cañada, Jorge; Kim, Hyeonseok; Velásquez-García, Luis Fernando
This study reports fully 3D-printed, three-dimensional, soft magnetic-cored solenoids that generate three times the largest magnetic fields previously reported from 3D-printed solenoids. The devices are fabricated on a customised, multi-material 3D printer that can extrude both filaments and pellets. Three different kinds of materials are employed to manufacture the reported soft magnetic-cored solenoids: pure PLA (dielectric portions), PLA doped with copper particles (electrically conductive structures), and nylon or PLA doped with metallic particles (soft magnetic cores). Via manufacturing optimisation, the reported devices are 33% smaller and can withstand about twice the current, generating three times more magnetic field. The 3D-printed solenoids generate Gauss-level magnetic fields while drawing tens-of-milliamps currents and can be readily used to implement fully 3D-printed induction sensors. The results of this work extend the state of the art in 3D-printed electronics, enabling the creation of more complex and capable solenoids for in-situ manufactured and in-space manufactured electromagnetic systems.
</summary>
<dc:date>2024-02-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Churns and Turns of HCI: Which CHI Papers Make the Most Impact in an Ever-growing Sea of HCI Publications</title>
<link href="https://hdl.handle.net/1721.1/164331" rel="alternate"/>
<author>
<name>Kaltenhauser, Annika</name>
</author>
<author>
<name>Schöning, Johannes</name>
</author>
<author>
<name>Churchill, Elizabeth</name>
</author>
<author>
<name>Ishii, Hiroshi</name>
</author>
<author>
<name>Mekler, Elisa</name>
</author>
<author>
<name>Shneiderman, Ben</name>
</author>
<id>https://hdl.handle.net/1721.1/164331</id>
<updated>2026-03-08T03:22:23Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">The Churns and Turns of HCI: Which CHI Papers Make the Most Impact in an Ever-growing Sea of HCI Publications
Kaltenhauser, Annika; Schöning, Johannes; Churchill, Elizabeth; Ishii, Hiroshi; Mekler, Elisa; Shneiderman, Ben
The ACM Conference on Human Factors in Computing Systems (CHI) is the premier venue for research in Human-Computer Interaction (HCI). 11,290 full papers have been published and collectively cited almost one million times. Highly cited papers undoubtedly represent influential work, affecting the creation of review standards and conference submission and acceptance practices within and beyond CHI. However, the factors contributing to high citation counts and what constitutes a highly cited CHI paper remain largely unclear. In this panel discussion, we will engage the CHI community in exploring the relationship between paper characteristics, citation numbers, and effective impact on HCI as a discipline, and on HCI as an influential endeavour in technology design and development. To ground this discussion, we present findings from a literature review of the 100 most cited CHI full papers, looking at past and present fields and subfields of influence. We will also share insights from HCI experts. Our goals are to shed light on the meaning of impactful work at CHI and in HCI more broadly, to reflect on key trends in HCI over the years, and to discuss themes that have driven pivotal shifts in HCI research. We will lead the conversation toward a deeper understanding of citation practices, the role of citations in focusing and driving HCI research, and the implications of citation when it comes to shaping what is considered impactful HCI.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safeguards and Security for High-Burnup TRISO Pebble Bed Spent Fuel and Reactors</title>
<link href="https://hdl.handle.net/1721.1/164332" rel="alternate"/>
<author>
<name>Forsberg, Charles</name>
</author>
<author>
<name>Kadak, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/164332</id>
<updated>2026-03-08T03:38:49Z</updated>
<published>2024-08-02T00:00:00Z</published>
<summary type="text">Safeguards and Security for High-Burnup TRISO Pebble Bed Spent Fuel and Reactors
Forsberg, Charles; Kadak, Andrew
Several high-temperature thermal neutron–spectrum pebble bed reactors are being commercialized. China has started up two helium-cooled pebble bed high-temperature reactors. In the United States, the X-Energy helium-cooled and the Kairos Power salt-cooled pebble bed high-temperature reactors will produce spent nuclear fuel (SNF) with burnups exceeding 150 000 MWd per tonne. The reactor fuel in each case consists of small spherical graphite pebbles (4 to 6 cm in diameter) containing thousands of small TRISO (microspheric tri-structural isotropic) fuel particles embedded in the fuel zone of these pebbles. The unique isotopic, chemical, and physical characteristics of this high-burnup SNF create a technical case to eliminate safeguards based on the low risk for use in nuclear weapons, while maintaining safeguards in terms of risk for use in radiological weapons. These safeguards could be reduced to the simple counting and monitoring of pebbles in storage. Alternatively, there is the option to create a special category with reduced requirements for this SNF in storage, transport, and disposal. No safeguards would be required for a repository with only this type of SNF. Reactor safeguards are required for fresh fuel and partly burnt fuel, and to identify unconventional pebbles with depleted uranium or other materials that might be used to create weapons-useable materials.
</summary>
<dc:date>2024-08-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Constrained Tabular Diffusion for Finance</title>
<link href="https://hdl.handle.net/1721.1/164330" rel="alternate"/>
<author>
<name>Cardei, Michael</name>
</author>
<author>
<name>Munoz, Jose</name>
</author>
<author>
<name>Barrera, Oscar</name>
</author>
<author>
<name>Chandrahas, Shreyas</name>
</author>
<author>
<name>Saha, Partha</name>
</author>
<id>https://hdl.handle.net/1721.1/164330</id>
<updated>2026-03-08T03:31:54Z</updated>
<published>2025-11-14T00:00:00Z</published>
<summary type="text">Constrained Tabular Diffusion for Finance
Cardei, Michael; Munoz, Jose; Barrera, Oscar; Chandrahas, Shreyas; Saha, Partha
Generative models in finance face the dual challenge of producing realistic data while satisfying strict regulatory and economic objectives, a requirement that standard tabular diffusion models cannot provide. To address this difficulty, we introduce Constrained Tabular Diffusion for Finance (CTDF), a novel integration of sampling-time feasibility operations with mixed-type tabular diffusion in financial applications. By incorporating a training-free feasibility operator into the reverse‑diffusion sampling loop, CTDF enforces hard constraints for applications such as simulation, legal compliance, and extrapolation. Extensive experiments on large-scale financial datasets demonstrate zero constraint violations and improvement in scarce data utility. CTDF establishes a robust method for generating trustworthy and compliant synthetic data, opening new avenues for rigorous generative modeling and analysis in the financial domain.
6th ACM International Conference on AI in Finance (ICAIF ’25), November 15–18, 2025,&#13;
Singapore, Singapore
</summary>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Bayesian sampling framework for constrained optimisation of build layouts in additive manufacturing</title>
<link href="https://hdl.handle.net/1721.1/164329" rel="alternate"/>
<author>
<name>Kim, Suh In</name>
</author>
<author>
<name>Gee, Kaitlyn</name>
</author>
<author>
<name>Hart, A John</name>
</author>
<id>https://hdl.handle.net/1721.1/164329</id>
<updated>2026-03-08T03:38:48Z</updated>
<published>2024-08-17T00:00:00Z</published>
<summary type="text">A Bayesian sampling framework for constrained optimisation of build layouts in additive manufacturing
Kim, Suh In; Gee, Kaitlyn; Hart, A John
In additive manufacturing processes such as laser powder bed fusion, the build orientation and packing of components affect the required support structures, the number of parts in each build, and the surface roughness of the printed parts, among other factors. Maximising the packing density while minimising the build height can increase effective machine utilisation and decrease per-part cost. Yet, the build layout optimisation problem is highly nonlinear and difficult to solve using human intuition, so a systematic algorithm approach is required. Here, we present and demonstrate a voxel-based analysis method with Bayesian optimisation for determining component build orientation in additive manufacturing. We introduce selected case studies incorporating exemplary process attributes of laser powder bed fusion, including the determination of orientation and packing configurations based on support removal and tool-accessibility constraints.
</summary>
<dc:date>2024-08-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>18.05 Introduction to Probability and Statistics, Spring 2014</title>
<link href="https://hdl.handle.net/1721.1/153490.2" rel="alternate"/>
<author>
<name>Orloff, Jeremy</name>
</author>
<author>
<name>Bloom, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/153490.2</id>
<updated>2025-12-16T19:16:26Z</updated>
<published>2014-06-01T00:00:00Z</published>
<summary type="text">18.05 Introduction to Probability and Statistics, Spring 2014
Orloff, Jeremy; Bloom, Jonathan
This course provides an elementary introduction to probability and statistics with applications. Topics include: basic combinatorics, random variables, probability distributions, Bayesian inference, hypothesis testing, confidence intervals, and linear regression. The Spring 2014 version of this subject employed the residential MITx system, which enables on-campus subjects to provide MIT students with learning and assessment tools such as online problem sets, lecture videos, reading questions, pre-lecture questions, problem set assistance, tutorial videos, exam review content, and even online exams.
</summary>
<dc:date>2014-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>RES.STR-001 Geographic Information System (GIS) Tutorial, January IAP 2016</title>
<link href="https://hdl.handle.net/1721.1/151195.2" rel="alternate"/>
<author>
<name>MIT GIS Services Group</name>
</author>
<id>https://hdl.handle.net/1721.1/151195.2</id>
<updated>2025-12-16T05:02:38Z</updated>
<published>2016-01-01T00:00:00Z</published>
<summary type="text">RES.STR-001 Geographic Information System (GIS) Tutorial, January IAP 2016
MIT GIS Services Group
The MIT GIS Services Group at the MIT Libraries hosts a number of tutorial workshops throughout the year. This resource gathers together some of those introductory workshop materials designed to accustom GIS novices to the various available software packages and introduce them to some of the many features included in GIS systems. Topics include an introduction to two GIS applications, spatial data analysis, and spatial statistics.
</summary>
<dc:date>2016-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>RES.18-001 Calculus Online Textbook, Spring 2005</title>
<link href="https://hdl.handle.net/1721.1/153487.2" rel="alternate"/>
<author>
<name>Strang, Gilbert</name>
</author>
<id>https://hdl.handle.net/1721.1/153487.2</id>
<updated>2025-12-16T05:03:40Z</updated>
<published>2005-06-01T00:00:00Z</published>
<summary type="text">RES.18-001 Calculus Online Textbook, Spring 2005
Strang, Gilbert
Published in 1991 by Wellesley-Cambridge Press, the book is a useful resource for educators and self-learners alike. It is well organized, covers single variable and multivariable calculus in depth, and is rich with applications. In addition to the Textbook, there is also an online Instructor's Manual and a student Study Guide. Prof. Strang has also developed a related series of videos, Highlights of Calculus, on the basic ideas of calculus. The 2010 second edition of the Calculus textbook includes a new chapter on "Highlights of Calculus" that connects to the video series of the same name. The new chapter has summaries and practice questions for all of the videos. It also introduces The Exponential Function (e^x) as presented in Prof. Strang's video on this topic.
</summary>
<dc:date>2005-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CMS.930 / 21G.034 Media, Education, and the Marketplace, Fall 2001</title>
<link href="https://hdl.handle.net/1721.1/150949.2" rel="alternate"/>
<author>
<name>Miyagawa, Shigeru</name>
</author>
<id>https://hdl.handle.net/1721.1/150949.2</id>
<updated>2025-12-16T03:37:19Z</updated>
<published>2001-12-01T00:00:00Z</published>
<summary type="text">CMS.930 / 21G.034 Media, Education, and the Marketplace, Fall 2001
Miyagawa, Shigeru
How can we harness the emerging forms of interactive media to enhance the learning process? Professor Miyagawa and prominent guest speakers will explore a broad range of issues on new media and learning - technical, social, and business. Concrete examples of use of media will be presented as case studies. One major theme, though not the only one, is that today's youth, influenced by video games and other emerging interactive media forms, are acquiring a fundamentally different attitude towards media. Media is, for them, not merely something to be consumed, but also something to be created. This has broad consequences for how we design media, how the young are taught in schools, and how mass media markets will need to adjust.
</summary>
<dc:date>2001-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>21W.765J / 21L.489J / CMS.845J Interactive and Non-Linear Narrative: Theory and Practice, Spring 2006</title>
<link href="https://hdl.handle.net/1721.1/150854.2" rel="alternate"/>
<author>
<name>Coleman, Beth</name>
</author>
<id>https://hdl.handle.net/1721.1/150854.2</id>
<updated>2025-12-16T01:03:38Z</updated>
<published>2006-06-01T00:00:00Z</published>
<summary type="text">21W.765J / 21L.489J / CMS.845J Interactive and Non-Linear Narrative: Theory and Practice, Spring 2006
Coleman, Beth
This course covers techniques of creating narratives that take advantage of the flexibility of form offered by the computer. The course studies the structural properties of book-based narratives that experiment with digression, multiple points of view, disruptions of time and of storyline. The class analyzes the structure and evaluates the literary qualities of computer-based narratives including hypertexts, adventure games, and classic artificial intelligence programs like Eliza. With this base, students use authoring systems to model a variety of narrative techniques and to create their own fictions. Knowledge of programming is helpful but not necessary.
</summary>
<dc:date>2006-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CMS.610 / CMS.922 Media Industries and Systems, Spring 2006</title>
<link href="https://hdl.handle.net/1721.1/151161.2" rel="alternate"/>
<author>
<name>Weaver, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/151161.2</id>
<updated>2025-12-16T00:51:34Z</updated>
<published>2006-06-01T00:00:00Z</published>
<summary type="text">CMS.610 / CMS.922 Media Industries and Systems, Spring 2006
Weaver, Christopher
This course examines the interplay of art, science, and commerce shaping the production, marketing, distribution, and consumption of contemporary media. It combines perspectives on media industries and systems with an awareness of the creative process, the audience, and trends shaping content. There will be invited discussions with industry experts in various subject areas. Class projects will encourage students to think through the challenges of producing media in an industry context. CMS.610 is for undergraduate credit, whereas CMS.922 is for graduate credit. Though the requirements for graduates are more stringent, the course is intended for both undergraduate and graduate students.
</summary>
<dc:date>2006-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A rapid simple point-of-care assay for the detection of SARS-CoV-2 neutralizing antibodies</title>
<link href="https://hdl.handle.net/1721.1/164328" rel="alternate"/>
<author>
<name>Kongsuphol, Patthara</name>
</author>
<author>
<name>Jia, Huan</name>
</author>
<author>
<name>Cheng, Hoi Lok</name>
</author>
<author>
<name>Gu, Yue</name>
</author>
<author>
<name>Shunmuganathan, Bhuvaneshwari DO</name>
</author>
<author>
<name>Chen, Ming Wei</name>
</author>
<author>
<name>Lim, Sing Mei</name>
</author>
<author>
<name>Ng, Say Yong</name>
</author>
<author>
<name>Tambyah, Paul Ananth</name>
</author>
<author>
<name>Nasir, Haziq</name>
</author>
<author>
<name>Gao, Xiaohong</name>
</author>
<author>
<name>Tay, Dousabel</name>
</author>
<author>
<name>Kim, Seunghyeon</name>
</author>
<author>
<name>Gupta, Rashi</name>
</author>
<author>
<name>Qian, Xinlei</name>
</author>
<author>
<name>Kozma, Mary M</name>
</author>
<author>
<name>Purushotorman, Kiren</name>
</author>
<author>
<name>McBee, Megan E</name>
</author>
<author>
<name>MacAry, Paul A</name>
</author>
<author>
<name>Sikes, Hadley D</name>
</author>
<author>
<name>Preiser, Peter R</name>
</author>
<id>https://hdl.handle.net/1721.1/164328</id>
<updated>2026-03-08T03:34:07Z</updated>
<published>2021-11-11T00:00:00Z</published>
<summary type="text">A rapid simple point-of-care assay for the detection of SARS-CoV-2 neutralizing antibodies
Kongsuphol, Patthara; Jia, Huan; Cheng, Hoi Lok; Gu, Yue; Shunmuganathan, Bhuvaneshwari DO; Chen, Ming Wei; Lim, Sing Mei; Ng, Say Yong; Tambyah, Paul Ananth; Nasir, Haziq; Gao, Xiaohong; Tay, Dousabel; Kim, Seunghyeon; Gupta, Rashi; Qian, Xinlei; Kozma, Mary M; Purushotorman, Kiren; McBee, Megan E; MacAry, Paul A; Sikes, Hadley D; Preiser, Peter R
Background: Neutralizing antibodies (NAbs) prevent pathogens from infecting host cells. Detection of SARS-CoV-2 NAbs is critical to evaluate herd immunity and monitor vaccine efficacy against SARS-CoV-2, the virus that causes COVID-19. All currently available NAb tests are lab-based and time-intensive.
Method: We develop a 10 min cellulose pull-down test to detect NAbs against SARS-CoV-2 from human plasma. The test evaluates the ability of antibodies to disrupt ACE2 receptor–RBD complex formation. The simple, portable, and rapid testing process relies on two key technologies: (i) the vertical-flow paper-based assay format and (ii) the rapid interaction of the cellulose binding domain with cellulose paper.
Results: Here we show the construction of a cellulose-based vertical-flow test. The developed test gives above 80% sensitivity and specificity and up to 93% accuracy as compared to two current lab-based methods using COVID-19 convalescent plasma.
Conclusions: A rapid 10 min cellulose-based test has been developed for detection of NAbs against SARS-CoV-2. The test demonstrates comparable performance to the lab-based tests and can be used at point-of-care. Importantly, the approach used for this test can be easily extended to test RBD variants or to evaluate NAbs against other pathogens.
</summary>
<dc:date>2021-11-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a SARS-CoV-2 Antigen Test Using Engineered Affinity Proteins</title>
<link href="https://hdl.handle.net/1721.1/164327" rel="alternate"/>
<author>
<name>Kim, Seunghyeon</name>
</author>
<author>
<name>Yee, Emma</name>
</author>
<author>
<name>Miller, Eric A</name>
</author>
<author>
<name>Hao, Yining</name>
</author>
<author>
<name>Tay, Dousabel MY</name>
</author>
<author>
<name>Sung, Ki-Joo</name>
</author>
<author>
<name>Jia, Huan</name>
</author>
<author>
<name>Johnson, Joseph M</name>
</author>
<author>
<name>Saeed, Mohsan</name>
</author>
<author>
<name>Mace, Charles R</name>
</author>
<author>
<name>Yüksel Yurt, Deniz</name>
</author>
<author>
<name>Sikes, Hadley D</name>
</author>
<id>https://hdl.handle.net/1721.1/164327</id>
<updated>2026-03-08T03:34:03Z</updated>
<published>2021-08-11T00:00:00Z</published>
<summary type="text">Developing a SARS-CoV-2 Antigen Test Using Engineered Affinity Proteins
Kim, Seunghyeon; Yee, Emma; Miller, Eric A; Hao, Yining; Tay, Dousabel MY; Sung, Ki-Joo; Jia, Huan; Johnson, Joseph M; Saeed, Mohsan; Mace, Charles R; Yüksel Yurt, Deniz; Sikes, Hadley D
The ongoing COVID-19 pandemic has clearly established how vital rapid, widely accessible diagnostic tests are in controlling infectious diseases and how difficult and slow it is to scale existing technologies. Here, we demonstrate the use of the rapid affinity pair identification via directed selection (RAPIDS) method to discover multiple affinity pairs for SARS-CoV-2 nucleocapsid protein (N-protein), a biomarker of COVID-19, from in vitro libraries in 10 weeks. The pair with the highest biomarker sensitivity was then integrated into a 10 min, vertical-flow cellulose paper test. Notably, the as-identified affinity proteins were compatible with a roll-to-roll printing process for large-scale manufacturing of tests. The test achieved 40 and 80 pM limits of detection in 1× phosphate-buffered saline (mock swab) and saliva matrices spiked with cell-culture-generated SARS-CoV-2 viruses and is also capable of detection of N-protein from characterized clinical swab samples. Hence, this work paves the way toward the mass production of cellulose paper-based assays which can address the shortages faced due to dependence on nitrocellulose and current manufacturing techniques. Further, the results reported herein indicate the promise of RAPIDS and engineered binder proteins for the timely and flexible development of clinically relevant diagnostic tests in response to emerging infectious diseases.
</summary>
<dc:date>2021-08-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Archaeology of Self: Reflexivity in Data Activism to Address Systemic Injustices</title>
<link href="https://hdl.handle.net/1721.1/164326" rel="alternate"/>
<author>
<name>Walker, Raechel</name>
</author>
<author>
<name>Cruse, Brady</name>
</author>
<author>
<name>Cora, Aisha</name>
</author>
<author>
<name>Rogers, Kantwon</name>
</author>
<author>
<name>D'Ignazio, Catherine</name>
</author>
<author>
<name>Brion-Meisels, Gretchen</name>
</author>
<author>
<name>Breazeal, Cynthia</name>
</author>
<id>https://hdl.handle.net/1721.1/164326</id>
<updated>2026-03-08T03:31:59Z</updated>
<published>2025-11-04T00:00:00Z</published>
<summary type="text">Archaeology of Self: Reflexivity in Data Activism to Address Systemic Injustices
Walker, Raechel; Cruse, Brady; Cora, Aisha; Rogers, Kantwon; D'Ignazio, Catherine; Brion-Meisels, Gretchen; Breazeal, Cynthia
Traditional data science education often neglects the importance of identity and sociopolitical context—especially for African American students whose lived experiences and cultural insights are essential for building justice-centered technologies. This paper presents findings from the Data Activism Program, which integrated Dr. Yolanda Sealey-Ruiz’s Archaeology of Self™ framework to foster critical self-reflection and racial identity development among African American high school and college students. Through technical training in data science, art-based learning, and partnerships with social justice organizations, students engaged in reflexive practices that positioned them as active agents in challenging systemic oppression. Interviews reveal that the Archaeology of Self™ deepened students’ reflexivity skills and strengthened their sound racial identity, enabling them to interrogate bias within themselves and the data science process. We argue that embedding frameworks such as the Archaeology of Self™ into algorithmic design offers a concrete, transferable method for operationalizing reflexivity in data science and AI. This study contributes to the AI and data science community by offering actionable strategies to center identity and power in AI development.
EAAMO ’25, Pittsburgh, PA, USA
</summary>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advances in Financial AI: Innovations, Risk, and Responsibility in the Era of LLMs</title>
<link href="https://hdl.handle.net/1721.1/164325" rel="alternate"/>
<author>
<name>Lee, Yongjae</name>
</author>
<author>
<name>Mehrasa, Nazanin</name>
</author>
<author>
<name>Choi, Chanyeol</name>
</author>
<author>
<name>Chen, Chung-Chi</name>
</author>
<author>
<name>Mehta, Dhagash</name>
</author>
<author>
<name>Zohren, Stefan</name>
</author>
<author>
<name>Kim, Yoon</name>
</author>
<author>
<name>Lee, Chulheum</name>
</author>
<author>
<name>Lee, Yeonhee</name>
</author>
<author>
<name>Oh, Eunsook</name>
</author>
<id>https://hdl.handle.net/1721.1/164325</id>
<updated>2026-03-08T03:31:53Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">Advances in Financial AI: Innovations, Risk, and Responsibility in the Era of LLMs
Lee, Yongjae; Mehrasa, Nazanin; Choi, Chanyeol; Chen, Chung-Chi; Mehta, Dhagash; Zohren, Stefan; Kim, Yoon; Lee, Chulheum; Lee, Yeonhee; Oh, Eunsook
The finance sector is seeing a rapid increase in the application of machine learning and AI, with Large Language Models (LLMs), ESG (Environmental, Social, and Governance) investing, and AI Safety significantly reshaping the field. This workshop focuses on how these advancements intersect with core financial AI applications. We will foster interdisciplinary discussion on applying LLMs to finance, addressing challenges in multilingual and non-English markets like Korea. The event will also highlight the integration of ESG signals into algorithmic decision-making and explore AI Safety, emphasizing reliability, fairness, and explainability for AI systems in regulated financial environments. By bringing together experts from academia, industry, and regulatory bodies, the workshop aims to stimulate discussions on practical issues, ethical dilemmas, and cutting-edge research shaping financial AI's future. We welcome submissions that combine technical rigor with societal relevance in AI-driven financial decisions.
CIKM ’25, Seoul, Republic of Korea
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Fabrication of Hybrid Functional Identities for Mechanical Elements</title>
<link href="https://hdl.handle.net/1721.1/164324" rel="alternate"/>
<author>
<name>AlAlawi, Marwa</name>
</author>
<id>https://hdl.handle.net/1721.1/164324</id>
<updated>2026-03-08T03:22:13Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Design and Fabrication of Hybrid Functional Identities for Mechanical Elements
AlAlawi, Marwa
My PhD research explores the simultaneous integration of mechanical and electrical functionalities in mechanical components such as gears, linkages, and springs, which I define as "hybrid functional identities." The focus is on transforming these components into non-intrusive sensors and active elements that maintain structural integrity while providing electrical capabilities like sensing, energy harvesting, and communication. I establish a framework for hybrid functional identities by examining common mechanical elements and their associated motions—rotational, linear, and reciprocal—along with force-based interactions like stretching, compression, and torsion. This analysis identifies essential electrical functionalities that complement these mechanical behaviors. Building on this foundation, I investigate modular mechanical building blocks that support diverse mechanical and electrical interaction primitives using a unified geometric structure. Ultimately, I aim to create an interconnected system where hybrid mechanical-electrical components function autonomously and communicate through an embedded wireless network.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Connecting through Comics: Design and Evaluation of Cube, an Arts-Based Digital Platform for Trauma-Impacted Youth</title>
<link href="https://hdl.handle.net/1721.1/164323" rel="alternate"/>
<author>
<name>Kumar, Ila</name>
</author>
<author>
<name>Shen, Jocelyn</name>
</author>
<author>
<name>Ferguson, Craig</name>
</author>
<author>
<name>Picard, Rosalind</name>
</author>
<id>https://hdl.handle.net/1721.1/164323</id>
<updated>2026-03-08T03:22:13Z</updated>
<published>2025-05-02T00:00:00Z</published>
<summary type="text">Connecting through Comics: Design and Evaluation of Cube, an Arts-Based Digital Platform for Trauma-Impacted Youth
Kumar, Ila; Shen, Jocelyn; Ferguson, Craig; Picard, Rosalind
This paper explores the design, development and evaluation of a digital platform that aims to assist young people who have experienced trauma in understanding and expressing their emotions and fostering social connections. Integrating principles from expressive arts and narrative-based therapies, we collaborate with lived experts to iteratively design a novel, user-centered digital tool for young people to create and share comics that represent their experiences. Specifically, we conduct a series of nine workshops with N=54 trauma-impacted youth and young adults to test and refine our tool, beginning with three workshops using low-fidelity prototypes, followed by six workshops with Cube, a web version of the tool. A qualitative analysis of workshop feedback and empathic relations analysis of artifacts provides valuable insights into the usability and potential impact of the tool, as well as the specific needs of young people who have experienced trauma. Our findings suggest that the integration of expressive and narrative therapy principles into Cube can offer a unique avenue for trauma-impacted young people to process their experiences, more easily communicate their emotions, and connect with supportive communities. We end by presenting implications for the design of social technologies that aim to support the emotional well-being and social integration of youth and young adults who have faced trauma.
</summary>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Structure of Cross-National Collaboration in Open-Source Software Development</title>
<link href="https://hdl.handle.net/1721.1/164322" rel="alternate"/>
<author>
<name>Xu, Henry</name>
</author>
<author>
<name>Yu, Katy</name>
</author>
<author>
<name>He, Hao</name>
</author>
<author>
<name>Fang, Hongbo</name>
</author>
<author>
<name>Vasilescu, Bogdan</name>
</author>
<author>
<name>Park, Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/164322</id>
<updated>2026-03-08T03:31:58Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">The Structure of Cross-National Collaboration in Open-Source Software Development
Xu, Henry; Yu, Katy; He, Hao; Fang, Hongbo; Vasilescu, Bogdan; Park, Patrick
Open-source software (OSS) development platforms, such as GitHub, expand the potential for cross-national collaboration among developers by lowering the geographic, temporal, and coordination barriers that limited software innovation in the past. However, research has shown that the technological affordances that facilitate cross-national collaboration do not uniformly benefit all countries. Using the GitHub Innovation Graph dataset, which aggregates the complete cross-country collaborations among the entire population of GitHub developers, we present quantitative evidence of deep-seated religious and cultural affinities, shared colonial histories, and geopolitical factors structuring the collaborations between non-U.S. country pairs that become visible when the overarching dominance of the U.S. is removed from the data. This study highlights the opportunities to develop decentralizing strategies to facilitate new collaborations between developers in non-U.S. countries, thereby fostering the development of novel, innovative solutions. More generally, this study also underscores the importance of contextualizing user behavior and knowledge management in information systems with long-term, macro-social conditions in which these systems are inextricably embedded.
CIKM ’25, Seoul, Republic of Korea
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Partition–diffusion–reaction bounds for thin-film membrane formation kinetics</title>
<link href="https://hdl.handle.net/1721.1/164321" rel="alternate"/>
<author>
<name>Deshmukh, Akshay</name>
</author>
<author>
<name>Elimelech, Menachem</name>
</author>
<author>
<name>Lienhard, John H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164321</id>
<updated>2026-01-05T17:12:17Z</updated>
<published>2025-11-15T00:00:00Z</published>
<summary type="text">Partition–diffusion–reaction bounds for thin-film membrane formation kinetics
Deshmukh, Akshay; Elimelech, Menachem; Lienhard, John H.
New membrane chemistries and structures have rapidly developed over the last ten years, driven by applications ranging from critical metals separations and carbon capture to highly chlorine-resistant reverse-osmosis membranes. The thin selective layer at the heart of reverse osmosis and nanofiltration membranes is typically fabricated using interfacial synthesis, with multifunctional aqueous-phase monomers and organic-phase monomers. Here, we develop a physics-based model of partition, diffusion, and reaction dynamics during the early stages of interfacial synthesis. These processes critically impact membrane structure and performance. By solving the resulting partial differential equations numerically and with analytical approximations, we demonstrate that the planar reaction rate is initially limited by the partitioning and diffusion of the aqueous-phase reactant into the organic phase. Later, finite reactant availability and aqueous-phase diffusion become limiting. Through a combination of nondimensionalization, parameter mapping, and property prediction, we develop a framework that spans a wide parameter space in reactant chemistry, solvent and support layer choice, and initial reactant concentrations. We demonstrate that the planar reaction rate and dynamics are strongly affected by the partition coefficient of the aqueous reactant, which varies rapidly with changes in reactant and solvent chemistry. The influence of diffusion variations is more limited. This tractable, physics-based model enables the rapid quantification of monomer and solvent impact on interfacial synthesis, which is essential for the rational development of new high-performance thin-film composite membranes.
</summary>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finger stick blood test to assess postvaccination SARS-CoV-2 neutralizing antibody response against variants</title>
<link href="https://hdl.handle.net/1721.1/164320" rel="alternate"/>
<author>
<name>Lim, Sing Mei</name>
</author>
<author>
<name>Cheng, Hoi Lok</name>
</author>
<author>
<name>Jia, Huan</name>
</author>
<author>
<name>Kongsuphol, Patthara</name>
</author>
<author>
<name>D/O Shunmuganathan, Bhuvaneshwari</name>
</author>
<author>
<name>Chen, Ming Wei</name>
</author>
<author>
<name>Ng, Say Yong</name>
</author>
<author>
<name>Gao, Xiaohong</name>
</author>
<author>
<name>Turaga, Shuvan Prashant</name>
</author>
<author>
<name>Heussler, Sascha P</name>
</author>
<author>
<name>Somani, Jyoti</name>
</author>
<author>
<name>Sengupta, Sharmila</name>
</author>
<author>
<name>Tay, Dousabel MY</name>
</author>
<author>
<name>McBee, Megan E</name>
</author>
<author>
<name>Young, Barnaby E</name>
</author>
<author>
<name>MacAry, Paul A</name>
</author>
<author>
<name>Sikes, Hadley D</name>
</author>
<author>
<name>Preiser, Peter R</name>
</author>
<id>https://hdl.handle.net/1721.1/164320</id>
<updated>2025-12-13T03:10:37Z</updated>
<published>2022-01-22T00:00:00Z</published>
<summary type="text">Finger stick blood test to assess postvaccination SARS-CoV-2 neutralizing antibody response against variants
Lim, Sing Mei; Cheng, Hoi Lok; Jia, Huan; Kongsuphol, Patthara; D/O Shunmuganathan, Bhuvaneshwari; Chen, Ming Wei; Ng, Say Yong; Gao, Xiaohong; Turaga, Shuvan Prashant; Heussler, Sascha P; Somani, Jyoti; Sengupta, Sharmila; Tay, Dousabel MY; McBee, Megan E; Young, Barnaby E; MacAry, Paul A; Sikes, Hadley D; Preiser, Peter R
There is a clinical need for a quantifiable point-of-care (PoC) SARS-CoV-2 neutralizing antibody (nAb) test that is adaptable to the pandemic's changing landscape. Here, we present a rapid and semi-quantitative nAb test that uses finger stick or venous blood to assess the nAb response of a vaccinated population against wild-type (WT), alpha, beta, gamma, and delta variant RBDs. It captures a clinically relevant range of nAb levels and effectively differentiates prevaccination, post-first-dose, and post-second-dose vaccination samples within 10 min. The data observed against alpha, beta, gamma, and delta variants agree with published results evaluated in established serology tests. Finally, our test revealed a substantial reduction in nAb level for beta, gamma, and delta variants between an early BNT162b2 vaccination group (within 3 months) and a later vaccination group (post 3 months). This test is highly suited for PoC settings and provides insight into the nAb response of a postvaccinated population.
</summary>
<dc:date>2022-01-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rapid Evaluation of Vaccine Booster Effectiveness against SARS-CoV-2 Variants</title>
<link href="https://hdl.handle.net/1721.1/164319" rel="alternate"/>
<author>
<name>Cheng, Hoi Lok</name>
</author>
<author>
<name>Lim, Sing Mei</name>
</author>
<author>
<name>Jia, Huan</name>
</author>
<author>
<name>Chen, Ming Wei</name>
</author>
<author>
<name>Ng, Say Yong</name>
</author>
<author>
<name>Gao, Xiaohong</name>
</author>
<author>
<name>Somani, Jyoti</name>
</author>
<author>
<name>Sengupta, Sharmila</name>
</author>
<author>
<name>Tay, Dousabel MY</name>
</author>
<author>
<name>Chua, Patrina WL</name>
</author>
<author>
<name>R., Abirami</name>
</author>
<author>
<name>Ling, Sharon YH</name>
</author>
<author>
<name>McBee, Megan E</name>
</author>
<author>
<name>Young, Barnaby E</name>
</author>
<author>
<name>Sikes, Hadley D</name>
</author>
<author>
<name>Preiser, Peter R</name>
</author>
<id>https://hdl.handle.net/1721.1/164319</id>
<updated>2025-12-13T03:10:36Z</updated>
<published>2022-09-07T00:00:00Z</published>
<summary type="text">Rapid Evaluation of Vaccine Booster Effectiveness against SARS-CoV-2 Variants
Cheng, Hoi Lok; Lim, Sing Mei; Jia, Huan; Chen, Ming Wei; Ng, Say Yong; Gao, Xiaohong; Somani, Jyoti; Sengupta, Sharmila; Tay, Dousabel MY; Chua, Patrina WL; R., Abirami; Ling, Sharon YH; McBee, Megan E; Young, Barnaby E; Sikes, Hadley D; Preiser, Peter R
As the COVID-19 pandemic continues, countries around the world are switching toward vaccinations and boosters to combat the pandemic. However, waning immunity against SARS-CoV-2 wild-type (WT) and variants has been widely reported. Booster vaccinations have been shown to increase immunological protection against new variants; however, the protection observed appears to decrease quickly over time, suggesting a second booster shot may be appropriate. Moreover, heterogeneity and waning of the immune response at the individual level was observed, suggesting a more personalized vaccination approach should be considered. To evaluate such a personalized strategy, it is important to be able to rapidly evaluate the level of neutralizing antibody (nAb) response against variants at the individual level, ideally in a point-of-care setting. Here, we applied the recently developed cellulose pulled-down virus neutralization test (cpVNT) to rapidly assess individual nAb levels against WT and variants of concern in response to booster vaccination. Our findings confirmed significant heterogeneity of nAb responses against a panel of SARS-CoV-2 variants and indicated a strong increase in nAb response against variants of concern (VOCs) upon booster vaccination. For instance, the nAb response against the currently predominant omicron variant was observed with medians of 88.1% (n = 6, 95% CI = 73.2% to 96.2%) within 1 month postbooster and 70.7% (n = 22, 95% CI = 66.4% to 81.8%) 3 months postbooster. Our data show that a point-of-care (POC) test focusing on nAb response levels against VOCs can guide decisions on the potential need for booster vaccinations at the individual level. Importantly, it also suggests that current booster vaccines may give only a transient protective response against some VOCs, and new, more targeted booster vaccine formulations against specific VOCs may need to be developed in the future.
IMPORTANCE Vaccination against SARS-CoV-2 induces protection through the production of neutralizing antibodies (nAbs). The level of nAbs is a major indicator of immunity against SARS-CoV-2 infection. We developed a rapid point-of-care test that can monitor the nAb level from a drop of finger stick blood. Here, we have implemented the test to monitor individual nAb levels against wild-type and variant SARS-CoV-2 at various time points of vaccination, including post-second-dose vaccination and postbooster vaccination. A huge diversity of nAb levels was observed among individuals, as well as an increase in nAb levels, especially against the Omicron variant, after booster vaccination. This study evaluated the performance of this point-of-care test for personalized nAb response tracking. It verifies the potential of using a rapid nAb test to guide future vaccination regimens at both the individual and population level.
</summary>
<dc:date>2022-09-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tumor-localized catalases can fail to alter tumor growth and transcriptional profiles in subcutaneous syngeneic mouse tumor models</title>
<link href="https://hdl.handle.net/1721.1/164318" rel="alternate"/>
<author>
<name>Sheen, Allison</name>
</author>
<author>
<name>Agarwal, Yash</name>
</author>
<author>
<name>Cheah, Keith M</name>
</author>
<author>
<name>Cowles, Sarah C</name>
</author>
<author>
<name>Stinson, Jordan A</name>
</author>
<author>
<name>Palmeri, Joseph R</name>
</author>
<author>
<name>Sikes, Hadley D</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<id>https://hdl.handle.net/1721.1/164318</id>
<updated>2025-12-13T03:10:51Z</updated>
<published>2023-08-01T00:00:00Z</published>
<summary type="text">Tumor-localized catalases can fail to alter tumor growth and transcriptional profiles in subcutaneous syngeneic mouse tumor models
Sheen, Allison; Agarwal, Yash; Cheah, Keith M; Cowles, Sarah C; Stinson, Jordan A; Palmeri, Joseph R; Sikes, Hadley D; Wittrup, K Dane
Catalase is an antioxidant enzyme that catalyzes the rapid conversion of hydrogen peroxide to water and oxygen. Use of catalase as a cancer therapeutic has been proposed to reduce oxidative stress and hypoxia in the tumor microenvironment, both of which are hypothesized to reduce tumor growth. Furthermore, exposing murine tumors to exogenous catalase was previously reported to have therapeutic benefit. We studied the therapeutic effect of tumor-localized catalases with the aim of further elucidating the mechanism of action. To do this, we engineered two approaches to maximize intratumoral catalase exposure: (1) an injected extracellular catalase with enhanced tumor retention, and (2) tumor cell lines that over-express intracellular catalase. Both approaches were characterized for functionality and tested for therapeutic efficacy and mechanism in 4T1 and CT26 murine syngeneic tumor models. The injected catalase was confirmed to have enzyme activity &gt;30,000 U/mg and was retained at the injection site for more than one week in vivo. The engineered cell lines exhibited increased catalase activity and antioxidant capacity, with catalase over-expression maintained for at least one week after gene expression was induced in vivo. We did not observe a significant difference in tumor growth or survival between catalase-treated and untreated mice with either approach. Finally, bulk RNA sequencing of tumors was performed, comparing the gene expression of catalase-treated and untreated tumors. Gene expression analysis revealed very few differentially expressed genes as a result of exposure to catalase; notably, we did not observe changes consistent with an altered state of hypoxia or oxidative stress.
In conclusion, we observe that sustained intratumoral catalase neither has therapeutic benefit nor triggers significant differential expression of genes associated with the anticipated therapeutic mechanism in the subcutaneous syngeneic tumor models used. Given the lack of effect observed, we propose that further development of catalase as a cancer therapeutic should take these findings into consideration.
</summary>
<dc:date>2023-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical Detection of Interleukin-6 Using Liquid Janus Emulsions Using Hyperthermophilic Affinity Proteins</title>
<link href="https://hdl.handle.net/1721.1/164317" rel="alternate"/>
<author>
<name>Chen, Michelle</name>
</author>
<author>
<name>Corless, Elliot I</name>
</author>
<author>
<name>Engelward, Bevin P</name>
</author>
<author>
<name>Swager, Timothy M</name>
</author>
<author>
<name>Sikes, Hadley D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164317</id>
<updated>2025-12-13T03:10:31Z</updated>
<published>2024-08-22T00:00:00Z</published>
<summary type="text">Optical Detection of Interleukin-6 Using Liquid Janus Emulsions Using Hyperthermophilic Affinity Proteins
Chen, Michelle; Corless, Elliot I; Engelward, Bevin P; Swager, Timothy M; Sikes, Hadley D.
When equal volumes of two immiscible liquids are mixed (e.g., a hydrocarbon and a fluorocarbon), Janus droplets can form in an aqueous solution. In a gravity-aligned Janus droplet, the boundary between the two phases is flat and thus optically transparent when viewed from above. When tipped due to interactions with an analyte (i.e., agglutination), the resulting change in refraction and reflection yields an optical signal that can be detected and quantified. This study reports the detection and quantitation of interleukin-6 (IL-6) using emulsions functionalized at the hydrocarbon:aqueous interface with engineered proteins that bind IL-6 with high affinity and specificity. Hyperthermophilic affinity proteins (rcSso7d) are derived from thermophiles, giving them excellent thermal stability. Two rcSso7d affinity protein variants were synthesized with a noncanonical azide-functionalized amino acid to enable click chemistry to novel polymeric anchors embedded in the hydrocarbon phase. The two binding proteins recognize different epitopes, enabling the detection of both monomeric and dimeric IL-6 via agglutination. It is noteworthy that the rcSso7d protein variants, in addition to having superior thermal stability and facile recombinant synthesis in &lt;i&gt;E. coli&lt;/i&gt;, show superior performance when compared to commercial antibodies for IL-6.
</summary>
<dc:date>2024-08-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Point-of-need diagnostics in a post-Covid world: an opportunity for paper-based microfluidics to serve during syndemics</title>
<link href="https://hdl.handle.net/1721.1/164316" rel="alternate"/>
<author>
<name>Tsaloglou, Maria-Nefeli</name>
</author>
<author>
<name>Christodouleas, Dionysios C</name>
</author>
<author>
<name>Milette, Jonathan</name>
</author>
<author>
<name>Milkey, Kendall</name>
</author>
<author>
<name>Romine, Isabelle C</name>
</author>
<author>
<name>Im, Judy</name>
</author>
<author>
<name>Lathwal, Shefali</name>
</author>
<author>
<name>Selvam, Duraipandian Thava</name>
</author>
<author>
<name>Sikes, Hadley D</name>
</author>
<author>
<name>Whitesides, George M</name>
</author>
<id>https://hdl.handle.net/1721.1/164316</id>
<updated>2025-12-13T03:10:44Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Point-of-need diagnostics in a post-Covid world: an opportunity for paper-based microfluidics to serve during syndemics
Tsaloglou, Maria-Nefeli; Christodouleas, Dionysios C; Milette, Jonathan; Milkey, Kendall; Romine, Isabelle C; Im, Judy; Lathwal, Shefali; Selvam, Duraipandian Thava; Sikes, Hadley D; Whitesides, George M
Zoonotic outbreaks present unpredictable threats to human health, food production, biodiversity, and national security, and disrupt the global economy. The COVID-19 pandemic—caused by the zoonotic coronavirus SARS-CoV-2—is the most recent upsurge in an increasing trend of outbreaks over the past 100 years. This year, the emergence of avian influenza (H5N1) is a stark reminder of the need for national and international pandemic preparedness. Tools for threat reduction include consistent practices in reporting pandemics and widespread availability of accurate detection technologies. Wars and extreme climate events redouble the need for fast, adaptable, and affordable diagnostics at the point of need. During the recent pandemic, rapid home tests for SARS-CoV-2 proved to be a viable functional model that leverages simplicity. In this perspective, we introduce the concept of syndemicity in the context of infectious diseases and point-of-need healthcare diagnostics. We also provide a brief state of the art for paper-based microfluidics. We illustrate our arguments with a case study on detecting brucellosis in cows. Finally, we conclude with lessons learned, challenges, and opportunities for paper-based microfluidics to serve point-of-need healthcare diagnostics during syndemics.
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Faster search for tensor decomposition over finite fields</title>
<link href="https://hdl.handle.net/1721.1/164315" rel="alternate"/>
<author>
<name>Yang, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/164315</id>
<updated>2025-12-13T03:10:04Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">Faster search for tensor decomposition over finite fields
Yang, Jason
We present an O*(|F|^(min{R, Σ_{d≥2} n_d} + (R − n_0)(Σ_{d≠0} n_d)))-time algorithm for determining whether the rank of a concise tensor T ∈ F^(n_0 × ··· × n_{D−1}) is ≤ R, assuming n_0 ≥ ··· ≥ n_{D−1} and R ≥ n_0. For 3-dimensional tensors, we have a second algorithm running in O*(|F|^(n_0 + n_2 + (R − n_0 + 1 − r_*)(n_1 + n_2) + r_*²)) time, where r_* := ⌊R/n_0⌋ + 1. Both algorithms use polynomial space and improve on our previous work, which achieved running time O*(|F|^(n_0 + (R − n_0)(Σ_d n_d))).
ISSAC ’25, Guanajuato, Mexico
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-AI Interaction for Augmented Reasoning: Improving Human Reflective and Critical Thinking with Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/164314" rel="alternate"/>
<author>
<name>Danry, Valdemar</name>
</author>
<author>
<name>Pataranutaporn, Pat</name>
</author>
<author>
<name>Cui, Christopher</name>
</author>
<author>
<name>Hung, Jui-Tse</name>
</author>
<author>
<name>Blanchard, Lancelot</name>
</author>
<author>
<name>Buçinca, Zana</name>
</author>
<author>
<name>Tan, Chenhao</name>
</author>
<author>
<name>Starner, Thad</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164314</id>
<updated>2025-12-13T03:09:46Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Human-AI Interaction for Augmented Reasoning: Improving Human Reflective and Critical Thinking with Artificial Intelligence
Danry, Valdemar; Pataranutaporn, Pat; Cui, Christopher; Hung, Jui-Tse; Blanchard, Lancelot; Buçinca, Zana; Tan, Chenhao; Starner, Thad; Maes, Pattie
AI-augmented reasoning systems are cognitive assistants that support human reasoning by providing AI-based feedback that can help users improve their critical reasoning skills. Made possible by new techniques such as argumentation mining, fact-checking, crowdsourcing, attention nudging, and large language models, AI-augmented reasoning systems can provide real-time feedback on logical reasoning, help users identify and avoid flawed arguments and misinformation, suggest counter-arguments, provide evidence-based explanations, and foster deeper reflection. The goal of this workshop is to bring together researchers from AI, HCI, and the cognitive and social sciences to discuss recent advances in AI-augmented reasoning, to identify open problems in this area, and to cultivate an emerging community on this important topic.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly</title>
<link href="https://hdl.handle.net/1721.1/164313" rel="alternate"/>
<author>
<name>Kyaw, Alexander Htet</name>
</author>
<author>
<name>Smith, Miana</name>
</author>
<author>
<name>Jeon, Se Hwan</name>
</author>
<author>
<name>Gershenfeld, Neil</name>
</author>
<id>https://hdl.handle.net/1721.1/164313</id>
<updated>2025-12-13T03:10:08Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly
Kyaw, Alexander Htet; Smith, Miana; Jeon, Se Hwan; Gershenfeld, Neil
We present a system that transforms speech into physical objects using 3D generative AI and discrete robotic assembly. By leveraging natural language, the system makes design and manufacturing more accessible to people without expertise in 3D modeling or robotic programming. While generative AI models can produce a wide range of 3D meshes, AI-generated meshes are not directly suitable for robotic assembly or account for fabrication constraints. To address this, we contribute a workflow that integrates natural language, 3D generative AI, geometric processing, and discrete robotic assembly. The system discretizes the AI-generated geometry and modifies it to meet fabrication constraints such as component count, overhangs, and connectivity to ensure feasible physical assembly. The results are demonstrated through the assembly of various objects, ranging from chairs to shelves, which are prompted via speech and realized within 5 minutes using a robotic arm.
SCF ’25, Cambridge, MA, USA
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>MechStyle: Augmenting Generative AI with Mechanical Simulation to Create Stylized and Structurally Viable 3D Models</title>
<link href="https://hdl.handle.net/1721.1/164312" rel="alternate"/>
<author>
<name>Faruqi, Faraz</name>
</author>
<author>
<name>Abdel-Rahman, Amira</name>
</author>
<author>
<name>Tejedor, Leandra</name>
</author>
<author>
<name>Nisser, Martin</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Phadnis, Vrushank</name>
</author>
<author>
<name>Jampani, Varun</name>
</author>
<author>
<name>Gershenfeld, Neil</name>
</author>
<author>
<name>Hofmann, Megan</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/164312</id>
<updated>2025-12-13T03:10:14Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">MechStyle: Augmenting Generative AI with Mechanical Simulation to Create Stylized and Structurally Viable 3D Models
Faruqi, Faraz; Abdel-Rahman, Amira; Tejedor, Leandra; Nisser, Martin; Li, Jiaji; Phadnis, Vrushank; Jampani, Varun; Gershenfeld, Neil; Hofmann, Megan; Mueller, Stefanie
Recent developments in Generative AI enable creators to stylize 3D models based on text prompts. These methods change the 3D model geometry, which can compromise the model’s structural integrity once fabricated. We present MechStyle, a system that enables creators to stylize 3D printable models while preserving their structural integrity. MechStyle accomplishes this by augmenting the Generative AI-based stylization process with feedback from a Finite Element Analysis (FEA) simulation. As the stylization process modifies the geometry to approximate the desired style, feedback from the FEA simulation reduces modifications to regions with increased stress. We evaluate the effectiveness of FEA simulation feedback in the augmented stylization process by comparing three stylization control strategies. We also investigate the time efficiency of our approach by comparing three adaptive scheduling strategies. Finally, we demonstrate MechStyle’s user interface that allows users to generate stylized and structurally viable 3D models and provide five example applications.
SCF ’25, Cambridge, MA, USA
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technology-Policy Handbook for Trans-Atlantic Nuclear Maritime Corridors: Ports, Infrastructure, and Safety</title>
<link href="https://hdl.handle.net/1721.1/164311" rel="alternate"/>
<author>
<name>Valiaveedu, Anthony</name>
</author>
<author>
<name>Edmonds, Nat</name>
</author>
<id>https://hdl.handle.net/1721.1/164311</id>
<updated>2026-01-21T15:08:23Z</updated>
<published>2025-12-12T00:00:00Z</published>
<summary type="text">Technology-Policy Handbook for Trans-Atlantic Nuclear Maritime Corridors: Ports, Infrastructure, and Safety
Valiaveedu, Anthony; Edmonds, Nat
On September 18, 2025, the United States and the United Kingdom published a Memorandum of Understanding (MoU) on scientific and technological advancement. This new partnership focuses on understanding and deploying disruptive technologies in Artificial Intelligence, quantum, and civil nuclear energy. Less highlighted was a single sentence within the MoU outlining efforts to "explore opportunities" for establishing a "maritime shipping corridor" between the US and UK. So far, research on civilian nuclear ships has generally prioritized ship design and operation analysis. This paper will instead analyze port, regulatory, and infrastructural issues within this space and provide a path forward for technology policy solutions supporting systems safety.
Advised by Jacopo Buongiorno, Eric Forrest, Fotini Christia, Koroush Shirvan, and Themis Sapsis.; Contact information: Anthony Valiaveedu (arv7@mit.edu); Nat Edmonds (edmondsn@mit.edu)
</summary>
<dc:date>2025-12-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-assisted sensemaking: Human-AI collaboration for the analysis and interpretation of recorded facilitated conversations</title>
<link href="https://hdl.handle.net/1721.1/164310" rel="alternate"/>
<author>
<name>Kabbara, Jad</name>
</author>
<author>
<name>Phan, Thanh-Mai</name>
</author>
<author>
<name>Rakhilin, Marina</name>
</author>
<author>
<name>Detwiller, Maya</name>
</author>
<author>
<name>Dimitrakopoulou, Dimitra</name>
</author>
<author>
<name>Roy, Deb</name>
</author>
<id>https://hdl.handle.net/1721.1/164310</id>
<updated>2025-12-13T03:09:44Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">AI-assisted sensemaking: Human-AI collaboration for the analysis and interpretation of recorded facilitated conversations
Kabbara, Jad; Phan, Thanh-Mai; Rakhilin, Marina; Detwiller, Maya; Dimitrakopoulou, Dimitra; Roy, Deb
In light of growing toxic polarization and societal fragmentation often fueled by social media, we are designing alternative communication spaces we refer to as dialogue networks—networks of people engaged in recorded small-group prompted dialogue. We introduce the dialogue network framework and our use of tools powered by large language models that assist humans in the analysis and interpretation of themes and patterns across conversations which we refer to as sensemaking. We pilot case studies in collaboration with community partners using a prototype AI-assisted sensemaking tool. Insights from these pilots can inform the use of AI for human-led community engagement processes.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>$HealthGenie:$ A Knowledge-Driven LLM Framework for Tailored Dietary Guidance</title>
<link href="https://hdl.handle.net/1721.1/164309" rel="alternate"/>
<author>
<name>Gao, Fan</name>
</author>
<author>
<name>Zhao, Xinjie</name>
</author>
<author>
<name>Xia, Ding</name>
</author>
<author>
<name>Zhou, Zhongyi</name>
</author>
<author>
<name>Yang, Rui</name>
</author>
<author>
<name>Lu, Jinghui</name>
</author>
<author>
<name>Jiang, Hang</name>
</author>
<author>
<name>Park, Chanjun</name>
</author>
<author>
<name>Li, Irene</name>
</author>
<id>https://hdl.handle.net/1721.1/164309</id>
<updated>2025-12-13T03:10:10Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">$HealthGenie:$ A Knowledge-Driven LLM Framework for Tailored Dietary Guidance
Gao, Fan; Zhao, Xinjie; Xia, Ding; Zhou, Zhongyi; Yang, Rui; Lu, Jinghui; Jiang, Hang; Park, Chanjun; Li, Irene
Seeking dietary guidance often requires navigating complex nutritional knowledge while considering individual health needs. To address this, we present HealthGenie, an interactive platform that leverages the interpretability of knowledge graphs (KGs) and the conversational power of large language models (LLMs) to deliver tailored dietary recommendations alongside integrated nutritional visualizations for fast, intuitive insights. Upon receiving a user query, HealthGenie performs intent refinement and maps the user's needs to a curated nutritional knowledge graph. The system then retrieves and visualizes relevant subgraphs, while offering detailed, explainable recommendations. Users can interactively adjust preferences to further tailor results. A within-subject study and quantitative analysis show that HealthGenie reduces cognitive load and interaction effort while supporting personalized, health-aware decision-making.
CIKM ’25, Seoul, Republic of Korea
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>TinkerXR: In-Situ, Reality-Aware CAD and 3D Printing Interface for Novices</title>
<link href="https://hdl.handle.net/1721.1/164308" rel="alternate"/>
<author>
<name>Arslan, Oğuz</name>
</author>
<author>
<name>Akdoğan, Artun</name>
</author>
<author>
<name>Dogan, Mustafa Doga</name>
</author>
<id>https://hdl.handle.net/1721.1/164308</id>
<updated>2025-12-13T03:10:17Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">TinkerXR: In-Situ, Reality-Aware CAD and 3D Printing Interface for Novices
Arslan, Oğuz; Akdoğan, Artun; Dogan, Mustafa Doga
Despite the growing accessibility of augmented reality (AR) for visualization, existing computer-aided design (CAD) systems remain confined to traditional screens or require complex setups or predefined parameters, limiting immersion and accessibility for novices. We present TinkerXR, an open-source AR interface enabling in-situ design and fabrication through Constructive Solid Geometry (CSG) modeling. TinkerXR operates solely with a headset and 3D printer, allowing users to design directly in and for their physical environments. By leveraging spatial awareness, depth occlusion, recognition of physical constraints, reference objects, and hand movement controls, TinkerXR enhances realism, precision, and ease of use. Its AR-based workflow integrates design and 3D printing with a drag-and-drop interface for printers’ virtual twins.&#13;
A user study comparing TinkerXR with Tinkercad shows that TinkerXR offers novices higher accessibility, engagement, and ease of use. Participants highlighted how designing directly in physical space made the process more intuitive. By bridging the gap between digital creation and physical output, TinkerXR aims to transform everyday spaces into expressive creative studios. We release TinkerXR as open source to encourage further exploration of accessible, spatially grounded CAD tools.
SCF ’25, Cambridge, MA, USA
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hierarchical Discrete Lattice Assembly: An Approach for the Digital Fabrication of Scalable Macroscale Structures</title>
<link href="https://hdl.handle.net/1721.1/164307" rel="alternate"/>
<author>
<name>Smith, Miana</name>
</author>
<author>
<name>Richard, Paul</name>
</author>
<author>
<name>Kyaw, Alexander</name>
</author>
<author>
<name>Gershenfeld, Neil</name>
</author>
<id>https://hdl.handle.net/1721.1/164307</id>
<updated>2025-12-13T03:10:19Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">Hierarchical Discrete Lattice Assembly: An Approach for the Digital Fabrication of Scalable Macroscale Structures
Smith, Miana; Richard, Paul; Kyaw, Alexander; Gershenfeld, Neil
Although digital fabrication processes at the desktop scale have become proficient and prolific, systems aimed at producing larger-scale structures are still typically complex, expensive, and unreliable. In this work, we present an approach for the fabrication of scalable macroscale structures using simple robots and interlocking lattice building blocks. A target structure is first voxelized so that it can be populated with an architected lattice. These voxels are then grouped into larger interconnected blocks, which are produced using standard digital fabrication processes, leveraging their capability to produce highly complex geometries at a small scale. These blocks, on the size scale of tens of centimeters, are then fed to mobile relative robots that are able to traverse over the structure and place new blocks to form structures on the meter scale. To facilitate the assembly of large structures, we introduce a live digital twin simulation tool for controlling and coordinating assembly robots that enables both global planning for a target structure and live user design, interaction, or intervention. To improve assembly throughput, we introduce a new modular assembly robot, designed for hierarchical voxel handling. We validate this system by demonstrating the voxelization, hierarchical blocking, path planning, and robotic fabrication of a set of meter-scale objects.
SCF ’25, Cambridge, MA, USA
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Are Crypto Ecosystems (De)centralizing? A Framework for Longitudinal Analysis</title>
<link href="https://hdl.handle.net/1721.1/164306" rel="alternate"/>
<author>
<name>Ju, Harang</name>
</author>
<author>
<name>Valavi, Eshan</name>
</author>
<author>
<name>Kumar, Madhav</name>
</author>
<author>
<name>Aral, Sinan</name>
</author>
<id>https://hdl.handle.net/1721.1/164306</id>
<updated>2025-12-13T03:10:12Z</updated>
<published>2025-11-24T00:00:00Z</published>
<summary type="text">Are Crypto Ecosystems (De)centralizing? A Framework for Longitudinal Analysis
Ju, Harang; Valavi, Eshan; Kumar, Madhav; Aral, Sinan
Blockchain technology relies on decentralization to resist faults and attacks while operating without trusted intermediaries. Although industry experts have touted decentralization as central to their promise and disruptive potential, it is still unclear whether the crypto ecosystems built around blockchains are becoming more or less decentralized over time. As crypto plays an increasing role in facilitating economic transactions and peer-to-peer interactions, measuring their decentralization becomes even more essential. We thus propose a systematic framework for measuring the decentralization of crypto ecosystems over time and compare commonly used decentralization metrics. We applied this framework to seven prominent blockchains, across five distinct subsystems and across their lifetime for over 15 years. Our analysis revealed that while crypto has largely become more decentralized over time, recent trends show a shift toward centralization in the consensus layer, NFT marketplaces, and developers. Our framework and results inform researchers, policymakers, and practitioners about the design, regulation, and implementation of crypto ecosystems and provide a systematic, replicable foundation for future studies.
</summary>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>MIT hosts 11th Undergraduate Research Technology Conference</title>
<link href="https://hdl.handle.net/1721.1/164305" rel="alternate"/>
<author>
<name>Beyah, Malakhi</name>
</author>
<author>
<name>Placides, Jojo</name>
</author>
<id>https://hdl.handle.net/1721.1/164305</id>
<updated>2025-12-13T03:12:15Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">MIT hosts 11th Undergraduate Research Technology Conference
Beyah, Malakhi; Placides, Jojo
From Oct. 10 to Oct. 12, the Stata Center was abuzz with bright minds and fresh faces as the Institute geared up for its 11th annual Undergraduate Research Technology Conference (URTC), where high school and undergraduate students from across the country came to present their latest research to experts and industry leaders.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal AI for Human Sensing and Interaction</title>
<link href="https://hdl.handle.net/1721.1/164304" rel="alternate"/>
<author>
<name>Liang, Paul Pu</name>
</author>
<author>
<name>Ahuja, Karan</name>
</author>
<author>
<name>Luo, Yiyue</name>
</author>
<id>https://hdl.handle.net/1721.1/164304</id>
<updated>2025-12-13T03:09:38Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Multimodal AI for Human Sensing and Interaction
Liang, Paul Pu; Ahuja, Karan; Luo, Yiyue
A significant body of HCI research today focuses on applying AI to sense, learn, and interact with humans through a wide range of wearable and ubiquitous sensors. These methods typically involve learning features from multimodal sensory data using AI methods. To aid HCI researchers who want to apply AI to their sensing problems, this course will cover the fundamental challenges and approaches in multimodal AI for human sensing and interaction. It is planned for 3 parts, one given by each organizer. The first covers the foundations of multimodal AI, studying how AI systems can represent, combine, and learn information from many interconnected sensory inputs. The second part discusses the practice of multimodal AI for human sensing, covering the latest methods for cross-modal learning across diverse sensors, human-centered application domains, and real-world concerns around their usage. The final part covers the hardware, fabrication, and data collection challenges that must be tackled to deploy these multimodal AI systems in the real world. By the end of this course, attendees should understand the fundamental principles and challenges of multimodal AI, identify the right AI approaches for their problems, prototype basic hardware systems for efficient and robust sensing, be aware of real-world concerns around ethics, interpretability, and privacy, and appreciate the range of human-centered applications enabled by multimodal AI and sensing.
CHI EA ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep-Time Architecture: Building as Material-Event</title>
<link href="https://hdl.handle.net/1721.1/164303" rel="alternate"/>
<author>
<name>Alonso, Cristina Parreño</name>
</author>
<id>https://hdl.handle.net/1721.1/164303</id>
<updated>2025-12-13T03:10:47Z</updated>
<published>2021-01-02T00:00:00Z</published>
<summary type="text">Deep-Time Architecture: Building as Material-Event
Alonso, Cristina Parreño
Despite our tendency to conceive, perceive, and represent buildings as static objects, buildings are, in their abundant reality, matter and energy in flux. As Heraclitus famously remarked in his panta rhei (πάντα ῥεῖ): “everything flows.”1 Buildings are no different, and they need to be better thought through as entities in motion. In architectural literature, many voices have challenged the prevailing notion of the building as a static object. Bruno Latour, for instance, claims that a building is rather “a moving project, and that even once it has been built, it ages, it is transformed by its users, modified by all of what happens inside and outside, and that it will pass or be renovated, adulterated and transformed beyond recognition.”2 Another attempt to express architecture’s fluidity is Bernard Tschumi’s triad, “space, event and movement,” with which he aimed to expand what constitutes building beyond a static object and form: “There is no space without event, no architecture without movement.”3 And here we must add that there is no movement without time—and further, that given enough time, even a solid-like material (think of a building here) flows.
</summary>
<dc:date>2021-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Involuntary vs. voluntary flexible work: insights for scholars and stakeholders</title>
<link href="https://hdl.handle.net/1721.1/164302" rel="alternate"/>
<author>
<name>Kaduk, Anne</name>
</author>
<author>
<name>Genadek, Katie</name>
</author>
<author>
<name>Kelly, Erin L</name>
</author>
<author>
<name>Moen, Phyllis</name>
</author>
<id>https://hdl.handle.net/1721.1/164302</id>
<updated>2025-12-13T03:10:26Z</updated>
<published>2019-08-08T00:00:00Z</published>
<summary type="text">Involuntary vs. voluntary flexible work: insights for scholars and stakeholders
Kaduk, Anne; Genadek, Katie; Kelly, Erin L; Moen, Phyllis
Building on insights from the early stages of our research partnership with a U.S. Fortune 500 organization, we came to differentiate between voluntary and involuntary schedule variability and remote work. This differentiation underscores the complexity behind flexible schedules and remote work, especially among white-collar, salaried professionals. We collected survey data among the partner firm's information technology (IT) workforce to evaluate whether these forms of flexibility had different implications for workers, as part of the larger Work, Family, and Health Network Study. We find that a significant minority of these employees report working variable schedules and working at home involuntarily. Involuntary variable schedules are associated with greater work-to-family conflict, stress, burnout, turnover intentions, and lower job satisfaction in models that adjust for personal characteristics, job, work hours, family demands, and other factors. Voluntary remote work, in contrast, is protective and more common in this professional sample. Employees working at least 20% of their hours at home and reporting moderate or high choice over where they work have lower stress and intentions to leave the firm. These findings point to the importance of both stakeholders and scholars distinguishing between voluntary and involuntary forms of flexibility, even in a relatively advantaged workforce.
</summary>
<dc:date>2019-08-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine learning demand forecasting and supply chain performance</title>
<link href="https://hdl.handle.net/1721.1/164301" rel="alternate"/>
<author>
<name>Feizabadi, Javad</name>
</author>
<id>https://hdl.handle.net/1721.1/164301</id>
<updated>2025-12-13T03:10:33Z</updated>
<published>2020-08-04T00:00:00Z</published>
<summary type="text">Machine learning demand forecasting and supply chain performance
Feizabadi, Javad
In many supply chains, firms positioned upstream suffer from variance amplification arising from demand information distortion across the multiple stages of the chain and, consequently, from operational inefficiency. Prior research suggests that employing advanced demand forecasting, such as machine learning, could mitigate this effect and improve performance; however, less is known about the extent and magnitude of the savings as tangible supply chain performance outcomes. In this research, a hybrid demand forecasting method grounded in machine learning (ARIMAX and neural networks) is developed. Both time series and explanatory factors are fed into the developed method. The method was applied and evaluated in the context of a functional product at a steel manufacturer. Statistically significant differences in supply chain performance improvement were found between traditional and ML-based demand forecasting methods. The implications for theory and practice are also presented.
</summary>
<dc:date>2020-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>The future of sperm: a biovariability framework for understanding global sperm count trends</title>
<link href="https://hdl.handle.net/1721.1/164300" rel="alternate"/>
<author>
<name>Boulicault, Marion</name>
</author>
<author>
<name>Perret, Meg</name>
</author>
<author>
<name>Galka, Jonathan</name>
</author>
<author>
<name>Borsa, Alex</name>
</author>
<author>
<name>Gompers, Annika</name>
</author>
<author>
<name>Reiches, Meredith</name>
</author>
<author>
<name>Richardson, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/164300</id>
<updated>2025-12-13T03:10:46Z</updated>
<published>2021-05-10T00:00:00Z</published>
<summary type="text">The future of sperm: a biovariability framework for understanding global sperm count trends
Boulicault, Marion; Perret, Meg; Galka, Jonathan; Borsa, Alex; Gompers, Annika; Reiches, Meredith; Richardson, Sarah
The past 50 years have seen heated debate in the reproductive sciences about global trends in human sperm count. In 2017, Levine and colleagues published the largest and most methodologically rigorous meta-regression analysis to date and reported that average total sperm concentration among men from ‘Western’ countries has decreased by 59.3% since 1973, with no sign of halting. These results reverberated in the scientific community and in public discussions about men and masculinity in the modern world, in part because of scientists’ public-facing claims about the societal implications of the decline of male fertility. We find that existing research follows a set of implicit and explicit assumptions about how to measure and interpret sperm counts, which collectively form what we term the Sperm Count Decline hypothesis (SCD). Using the study by Levine and colleagues, we identify weaknesses and inconsistencies in the SCD, and propose an alternative framework to guide research on sperm count trends: the Sperm Count Biovariability hypothesis (SCB). SCB asserts that sperm count varies within a wide range, much of which can be considered non-pathological and species-typical. Knowledge about the relationship between individual and population sperm count and life-historical and ecological factors is critical to interpreting trends in average sperm counts and their relationships to health and fertility.
</summary>
<dc:date>2021-05-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Pedestrian Flows on Street Networks</title>
<link href="https://hdl.handle.net/1721.1/164299" rel="alternate"/>
<author>
<name>Sevtsuk, Andres</name>
</author>
<id>https://hdl.handle.net/1721.1/164299</id>
<updated>2025-12-13T03:10:41Z</updated>
<published>2021-10-02T00:00:00Z</published>
<summary type="text">Estimating Pedestrian Flows on Street Networks
Sevtsuk, Andres
City governments and planners alike commonly seek to increase pedestrian activity on city streets as part of broader sustainability, community building, and economic development strategies. Though walkability has received ample attention in planning literature, most planners still lack practical methods for predicting how development proposals could affect pedestrian activity on specific streets or public spaces at different times of the day. Cities typically require traffic impact assessments (TIAs) but not pedestrian impact assessments. In this study I present a methodology for estimating pedestrian trip generation and distribution between detailed origins and destinations in both existing and proposed built environments. Using the betweenness index from network analysis, I introduce a number of methodological improvements that allow the index to model pedestrian trips with parameters and constraints to account for pedestrian behavior in different settings. I demonstrate its application in the Kendall Square area of Cambridge (MA), where estimated foot traffic is compared during lunch and evening peak periods with observed pedestrian counts. The proposed approach can be particularly useful for TIAs, neighborhood plans, and large-scale development projects, where pedestrian flow estimates can be used to guide pedestrian infrastructure and safety improvements and public space investments or for locating pedestrian priority streets during the COVID-19 pandemic.
</summary>
<dc:date>2021-10-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding individuals with spinal cord injury’s self-care practices: a technology probe study to promote pressure relief adherence</title>
<link href="https://hdl.handle.net/1721.1/164298" rel="alternate"/>
<author>
<name>Oh, Hannah Hye Yeon</name>
</author>
<author>
<name>Pontis, Sheila</name>
</author>
<id>https://hdl.handle.net/1721.1/164298</id>
<updated>2025-12-13T03:10:23Z</updated>
<published>2024-10-02T00:00:00Z</published>
<summary type="text">Understanding individuals with spinal cord injury’s self-care practices: a technology probe study to promote pressure relief adherence
Oh, Hannah Hye Yeon; Pontis, Sheila
Pressure reliefs (PRs) are self-care practices essential for individuals with spinal cord injury (SCI) to prevent life-threatening pressure injuries (PIs). Despite the benefits, individuals often do not do these exercises at home, leading to increased patient morbidity and mortality. To examine how digital technology could improve this population's adherence to PR exercises, we conducted a technology probe study with five individuals with SCI over ten consecutive business days. A chat-based intervention was created to send user-scheduled PR reminders, which were personalized with visual elements and progress trackers. Participants were interviewed before and after interacting with the probe to better understand their experiences with PIs and PR practices. Results shed light on specific factors that may impact individuals with SCI's behaviours towards PRs and four considerations to design a customisable reminder intervention: (1) easy to use and friendly technology, (2) design-your-own- schedule feature, (3) communication style feature, and (4) dialogue support features. Personalisation supported with gamified visual progress tracking and motivational messages emerged as a strong strategy to increase PR adherence. Both sets of findings expand upon the human-computer interaction (HCI) literature for mobile health tools that encourage self-care practices; in particular, to the specific needs of individuals with SCI and the use of visual elements to increase engagement.
</summary>
<dc:date>2024-10-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Health and toxicity in content moderation: the discursive work of justification</title>
<link href="https://hdl.handle.net/1721.1/164297" rel="alternate"/>
<author>
<name>Gibson, Anna D.</name>
</author>
<author>
<name>Docherty, Niall</name>
</author>
<author>
<name>Gillespie, Tarleton</name>
</author>
<id>https://hdl.handle.net/1721.1/164297</id>
<updated>2025-12-13T03:10:25Z</updated>
<published>2023-12-12T00:00:00Z</published>
<summary type="text">Health and toxicity in content moderation: the discursive work of justification
Gibson, Anna D.; Docherty, Niall; Gillespie, Tarleton
Within academia, industry, and government, the terms ‘health’ and ‘toxicity’ are widely used to describe and justify decisions around online content and its removal. However, the meanings of these terms are assumed to be self-evident and therefore are rarely examined. This article turns a critical eye to the health and toxicity metaphor to unpack its hidden political work. We trace the metaphor through three different discourses: the historical political economy of the term, the usage by cultural elites in the last two decades, and finally through its contemporary instrumental usage by volunteer content moderators on Facebook. By linking these discourses together, we argue that the metaphor of health and toxicity serves as a means for justification and legitimacy under contemporary neoliberalized orders that typically chafe at modes of public intervention and the language of democratic statecraft. Rather than elucidating the challenges of online content, we find that the metaphor often serves to obfuscate or sidestep the hardest problems in democratic governance. This analysis therefore has practical significance for researchers, policymakers, journalists, and other speakers that publicly traffic in this discourse at large.
</summary>
<dc:date>2023-12-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Covariates in Randomized Experiments with the Gram–Schmidt Walk Design</title>
<link href="https://hdl.handle.net/1721.1/164296" rel="alternate"/>
<author>
<name>Harshaw, Christopher</name>
</author>
<author>
<name>Sävje, Fredrik</name>
</author>
<author>
<name>Spielman, Daniel A</name>
</author>
<author>
<name>Zhang, Peng</name>
</author>
<id>https://hdl.handle.net/1721.1/164296</id>
<updated>2025-12-13T03:10:28Z</updated>
<published>2024-10-01T00:00:00Z</published>
<summary type="text">Balancing Covariates in Randomized Experiments with the Gram–Schmidt Walk Design
Harshaw, Christopher; Sävje, Fredrik; Spielman, Daniel A; Zhang, Peng
The design of experiments involves a compromise between covariate balance and robustness. This article provides a formalization of this tradeoff and describes an experimental design that allows experimenters to navigate it. The design is specified by a robustness parameter that bounds the worst-case mean squared error of an estimator of the average treatment effect. Subject to the experimenter’s desired level of robustness, the design aims to simultaneously balance all linear functions of potentially many covariates. Less robustness allows for more balance. We show that the mean squared error of the estimator is bounded in finite samples by the minimum of the loss function of an implicit ridge regression of the potential outcomes on the covariates. Asymptotically, the design perfectly balances all linear functions of a growing number of covariates with a diminishing reduction in robustness, effectively allowing experimenters to escape the compromise between balance and robustness in large samples. Finally, we describe conditions that ensure asymptotic normality and provide a conservative variance estimator, which facilitate the construction of asymptotically valid confidence intervals. Supplementary materials for this article are available online.
</summary>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From natural language to simulations: applying AI to automate simulation modelling of logistics systems</title>
<link href="https://hdl.handle.net/1721.1/164295" rel="alternate"/>
<author>
<name>Jackson, Ilya</name>
</author>
<author>
<name>Jesus Saenz, Maria</name>
</author>
<author>
<name>Ivanov, Dmitry</name>
</author>
<id>https://hdl.handle.net/1721.1/164295</id>
<updated>2025-12-13T03:10:49Z</updated>
<published>2024-02-16T00:00:00Z</published>
<summary type="text">From natural language to simulations: applying AI to automate simulation modelling of logistics systems
Jackson, Ilya; Jesus Saenz, Maria; Ivanov, Dmitry
Our research strives to examine how simulation models of logistics systems can be produced automatically from verbal descriptions in natural language and how human experts and artificial intelligence (AI)-based systems can collaborate in the domain of simulation modelling. We demonstrate that a framework constructed upon the refined GPT-3 Codex is capable of generating functionally valid simulations for queuing and inventory management systems when provided with a verbal explanation. As a result, the language model could produce simulation models for inventory and process control. These results, along with the rapid improvement of language models, enable a significant simplification of simulation model development. Our study offers guidelines and a design of a natural language processing-based framework on how to build simulation models of logistics systems automatically, given the verbal description. In generalised terms, our work offers a technological underpinning of human-AI collaboration for the development of simulation models.
</summary>
<dc:date>2024-02-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>HiTop 2.0: combining topology optimisation with multiple feature size controls and human preferences</title>
<link href="https://hdl.handle.net/1721.1/164294" rel="alternate"/>
<author>
<name>Schiffer, Gillian</name>
</author>
<author>
<name>Ha, Dat Quoc</name>
</author>
<author>
<name>Carstensen, Josephine V</name>
</author>
<id>https://hdl.handle.net/1721.1/164294</id>
<updated>2025-12-13T03:10:43Z</updated>
<published>2023-12-31T00:00:00Z</published>
<summary type="text">HiTop 2.0: combining topology optimisation with multiple feature size controls and human preferences
Schiffer, Gillian; Ha, Dat Quoc; Carstensen, Josephine V
Topology optimisation is a computational design approach that generates high-performing, efficient structures uniquely suited to a design engineer’s goal. However, there exist two major obstacles to the accessibility, or ease of use, of topology optimisation: expensive computational costs and users’ binary decision between personal intuition and the algorithm’s result. Human-informed topology optimisation, or HiTop, presents an alternative approach to topology optimisation when a user lacks access to a high-performance computer or knowledge of code parameters. HiTop 2.0 prompts users to interactively identify a region of interest in the preliminary design and modify the size of the solid and/or void features. The novel contribution of this paper implements multi-phase minimum and maximum solid feature size controls in HiTop 2.0, and demonstrates 2D and 3D benchmark examples, including test cases that show how the user can interactively address issues related to eigenvalues, stress, and energy absorption, while solving the minimum compliance problem.
</summary>
<dc:date>2023-12-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated urban heat sinks for low-carbon neighbourhoods: dissipating heat to the ground and sky through building structures</title>
<link href="https://hdl.handle.net/1721.1/164293" rel="alternate"/>
<author>
<name>Gascón Alvarez, Eduardo</name>
</author>
<author>
<name>Feickert, Kiley</name>
</author>
<author>
<name>Ismail, Mohamed A</name>
</author>
<author>
<name>Mueller, Caitlin T</name>
</author>
<author>
<name>Norford, Leslie K</name>
</author>
<id>https://hdl.handle.net/1721.1/164293</id>
<updated>2025-12-13T03:10:39Z</updated>
<published>2025-05-04T00:00:00Z</published>
<summary type="text">Integrated urban heat sinks for low-carbon neighbourhoods: dissipating heat to the ground and sky through building structures
Gascón Alvarez, Eduardo; Feickert, Kiley; Ismail, Mohamed A; Mueller, Caitlin T; Norford, Leslie K
In a global context of simultaneous urbanization and rising ambient temperatures, it is imperative to design heat-resilient and material-efficient neighbourhoods that respond to the pressing demand for housing with minimal environmental impact. With this goal in mind, the work presented here focuses on the integration of heat dissipation systems within structural building components, introducing a novel framework for their systems-level simulation and design. Two well-studied, low-cost systems (shallow geothermal and night-sky cooling) are modelled within a parametric design workflow that combines bottom-up structural embodied carbon calculations with annual building energy simulations that account for heat sink availability. The proposed method results in a fast and reliable early-stage design tool that allows urban planners, policymakers, and designers to evaluate the suitability of available heat dissipation technologies across climates and urban morphologies. This paper analyzes specifically the multi-domain performance of a hypothetical urban geometry within three different cooling-dominated locations (Algiers, Cairo, and Bangkok).
</summary>
<dc:date>2025-05-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>China’s Potential Lessons from Ukraine for Conflict over Taiwan</title>
<link href="https://hdl.handle.net/1721.1/164292" rel="alternate"/>
<author>
<name>Taylor Fravel, M</name>
</author>
<id>https://hdl.handle.net/1721.1/164292</id>
<updated>2025-12-13T03:10:34Z</updated>
<published>2023-07-03T00:00:00Z</published>
<summary type="text">China’s Potential Lessons from Ukraine for Conflict over Taiwan
Taylor Fravel, M
What lessons for a conflict over Taiwan might China be learning from Russia’s invasion of Ukraine and the global responses to the war? And what are the strategic implications of these lessons? To answer these questions, I examine how the war in Ukraine may be shaping China’s assessments of the political, military and economic costs of military action against Taiwan, and how these assessments may influence China’s decision to use force against Taiwan.
</summary>
<dc:date>2023-07-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>BoundarEase: Fostering Constructive Community Engagement to Inform More Equitable Student Assignment Policies</title>
<link href="https://hdl.handle.net/1721.1/164291" rel="alternate"/>
<author>
<name>Overney, Cassandra</name>
</author>
<author>
<name>Moe, Cassandra</name>
</author>
<author>
<name>Chang, Alvin</name>
</author>
<author>
<name>Gillani, Nabeel</name>
</author>
<id>https://hdl.handle.net/1721.1/164291</id>
<updated>2025-12-12T05:25:02Z</updated>
<published>2025-05-02T00:00:00Z</published>
<summary type="text">BoundarEase: Fostering Constructive Community Engagement to Inform More Equitable Student Assignment Policies
Overney, Cassandra; Moe, Cassandra; Chang, Alvin; Gillani, Nabeel
Public school districts across the United States (US) play a pivotal role in shaping access to quality education through their student assignment policies—most prominently, school attendance boundaries. Community engagement processes for changing such policies, however, are often opaque, cumbersome, and highly polarizing—hampering equitable access to quality schools in ways that can perpetuate disparities in achievement and future life outcomes. In this paper, we describe a collaboration with a large US public school district serving nearly 150,000 students to design and evaluate a new sociotechnical system, “BoundarEase”, for fostering more constructive community engagement around changing school attendance boundaries. Through a formative study with 16 community members, we first identify several frictions in existing community engagement processes during boundary planning, like individualistic over collective thinking; a failure to understand and empathize with different community members when considering policy impacts; and challenges in accessing and understanding the impacts of boundary changes. We then use these frictions to inspire the design and development of BoundarEase, a web platform that allows community members to explore and offer feedback on potential boundaries based on their preferences. A user study with 12 community members reveals that BoundarEase prompts reflection among community members on how policies might impact families beyond their own, and increases transparency around the details of policy proposals. Our paper offers education researchers insights into the challenges and opportunities involved in community engagement for designing student assignment policies; human-computer interaction researchers a case study of how new sociotechnical systems might help mitigate polarization in local policymaking; and school districts a practical tool they might use to facilitate community engagement to foster more equitable student assignment policies.
Cassandra Overney, Cassandra Moe, Alvin Chang, and Nabeel Gillani. 2025. BoundarEase: Fostering Constructive Community Engagement to Inform More Equitable Student Assignment Policies. Proc. ACM Hum.-Comput. Interact. 9, 2, Article CSCW040 (May 2025), 37 pages.
</summary>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-objective Evolutionary Learning for Near Pareto-Optimal Optimization of Solar Deployment</title>
<link href="https://hdl.handle.net/1721.1/164290" rel="alternate"/>
<author>
<name>Sigrist, Cooper</name>
</author>
<author>
<name>Li, Archimedes</name>
</author>
<author>
<name>Zhang, Alice</name>
</author>
<author>
<name>Lechowicz, Adam</name>
</author>
<author>
<name>Bashir, Noman</name>
</author>
<author>
<name>Lertsaroj, Pichsinee</name>
</author>
<author>
<name>Bahlous-Boldi, Ryan</name>
</author>
<author>
<name>Hajiesmaili, Mohammad</name>
</author>
<id>https://hdl.handle.net/1721.1/164290</id>
<updated>2025-12-12T05:25:52Z</updated>
<published>2025-11-11T00:00:00Z</published>
<summary type="text">Multi-objective Evolutionary Learning for Near Pareto-Optimal Optimization of Solar Deployment
Sigrist, Cooper; Li, Archimedes; Zhang, Alice; Lechowicz, Adam; Bashir, Noman; Lertsaroj, Pichsinee; Bahlous-Boldi, Ryan; Hajiesmaili, Mohammad
Existing residential rooftop photovoltaic (PV) installations in the United States are inequitable, as they are concentrated in high-income neighborhoods, and carbon-inefficient because they are often not located in electric grids dominated by fossil-fuel generators. Prior work, however, shows that prioritizing socioeconomic equity can also significantly increase the carbon efficiency of new installations. In this paper, we formalize the problem of site selection for rooftop PV installations as a multi-objective optimization problem, with metrics including energy generation, carbon offsetting, and demographic equity. We introduce a novel method called Evolutionary Value Assignment (EVA) that uses a neural network trained via evolutionary learning to select ideal sites for deployment. We evaluate our proposed approach in a case study using a dataset of U.S. solar generation and demographic information. Compared to projections of current installation trends, our method improves Carbon Efficiency by 43%, Income Equity by 41%, and Racial Equity by 24%, while increasing Energy Generation Potential by up to 10%. Therefore, our optimized placement can achieve the estimated carbon offset needed for net-zero emissions from electricity generation earlier than current deployment trends.
BUILDSYS ’25, Golden, CO, USA
</summary>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust and expert-agnostic digital twin calibration via ensemble learning and Bayesian optimization</title>
<link href="https://hdl.handle.net/1721.1/164289" rel="alternate"/>
<author>
<name>Zhan, Sicheng</name>
</author>
<author>
<name>Cui, Bosen</name>
</author>
<id>https://hdl.handle.net/1721.1/164289</id>
<updated>2025-12-12T05:25:38Z</updated>
<published>2025-11-11T00:00:00Z</published>
<summary type="text">Robust and expert-agnostic digital twin calibration via ensemble learning and Bayesian optimization
Zhan, Sicheng; Cui, Bosen
Digital twins have emerged as a critical tool in tackling climate change. Considering the data scarcity of complex systems, a promising approach to developing digital twins involves combining physics-based models with data assimilation. However, model calibration remains challenging due to uncertainties in both the physical models and observational data, and the reliance on domain knowledge. In this study, we develop an ensemble learning-based approach that aggregates sub-models with diversified calibration configurations. The proposed method streamlines calibration without expert-driven parameter screening and improves the digital twin's extrapolation capability, enabling more robust predictive applications. We demonstrate the effectiveness of our approach by calibrating the energy model of an office building, significantly reducing the extrapolation error and the associated risks. To the best of our knowledge, this is the first study to facilitate the calibration of physics-based models using ensemble learning, especially in the parameter space.
BUILDSYS ’25, Golden, CO, USA
</summary>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Talk to the Hand: an LLM-powered Chatbot with Visual Pointer as Proactive Companion for On-Screen Tasks</title>
<link href="https://hdl.handle.net/1721.1/164288" rel="alternate"/>
<author>
<name>Prasongpongchai, Thanawit</name>
</author>
<author>
<name>Pataranutaporn, Pat</name>
</author>
<author>
<name>Lertsutthiwong, Monchai</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164288</id>
<updated>2025-12-12T05:21:10Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Talk to the Hand: an LLM-powered Chatbot with Visual Pointer as Proactive Companion for On-Screen Tasks
Prasongpongchai, Thanawit; Pataranutaporn, Pat; Lertsutthiwong, Monchai; Maes, Pattie
This paper presents Pointer Assistant, a novel human-AI interaction technique for on-screen tasks. The design features a chatbot displayed as an extra mouse pointer, alongside the user’s, which proactively gives feedback on user actions while directing them to relevant areas on the screen and responding to the user’s direct chat messages. The effectiveness of the design’s key characteristics, pointer form and proactivity, was investigated in a study involving 220 participants in a financial budget planning task. Results demonstrated that the pointer design and interaction reduced task load while improving satisfaction with the experience, and increased the number of budget categories ideated during the task compared to the traditional passive chat log design. Participants viewed Pointer Assistant as a fun, innovative, and helpful visual guide while noting that its assertiveness can be improved. Future developments could offer even further enhancements to the user experience of human-AI collaboration and task outcomes.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>WireBend-kit: A Computational Design and Fabrication Toolkit for Wirebending Custom 3D Wireframe Structures</title>
<link href="https://hdl.handle.net/1721.1/164287" rel="alternate"/>
<author>
<name>Faruqi, Faraz</name>
</author>
<author>
<name>Paonaskar, Josha</name>
</author>
<author>
<name>Schuler, Riley</name>
</author>
<author>
<name>Prevey, Aiden</name>
</author>
<author>
<name>Taylor, Carson</name>
</author>
<author>
<name>Tak, Anika</name>
</author>
<author>
<name>Guinto, Anthony</name>
</author>
<author>
<name>Shilamkar, Eeshani</name>
</author>
<author>
<name>Cheenaruenthong, Natarith</name>
</author>
<author>
<name>Nisser, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/164287</id>
<updated>2025-12-12T05:25:40Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">WireBend-kit: A Computational Design and Fabrication Toolkit for Wirebending Custom 3D Wireframe Structures
Faruqi, Faraz; Paonaskar, Josha; Schuler, Riley; Prevey, Aiden; Taylor, Carson; Tak, Anika; Guinto, Anthony; Shilamkar, Eeshani; Cheenaruenthong, Natarith; Nisser, Martin
This paper introduces WireBend-kit, a desktop wirebending machine and computational design tool for creating 3D wireframe structures. Combined, they allow users to rapidly and inexpensively create custom 3D wireframe structures from aluminum wire. Our design tool is implemented in freely available software and allows users to generate virtual wireframe designs and assess their fabricability. A path-planning procedure automatically converts the wireframe design into fabrication instructions for our machine while accounting for material elasticity and kinematic error sources. The custom machine costs $293 in parts and can form aluminum wire into 3D wireframe structures through an ordered sequence of feed, bend, and rotate instructions. Our technical evaluation reveals our system’s ability to overcome odometrically accumulating errors inherent to wirebending in order to produce accurate 3D structures from inexpensive hardware. Finally, we provide application examples demonstrating the design space enabled by WireBend-kit.
SCF ’25, Cambridge, MA, USA
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments</title>
<link href="https://hdl.handle.net/1721.1/164286" rel="alternate"/>
<author>
<name>Gosciak, Jennah</name>
</author>
<author>
<name>Balagopalan, Aparna</name>
</author>
<author>
<name>Ouyang, Derek</name>
</author>
<author>
<name>Koenecke, Allison</name>
</author>
<author>
<name>Ghassemi, Marzyeh</name>
</author>
<author>
<name>Ho, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/164286</id>
<updated>2025-12-12T05:25:05Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments
Gosciak, Jennah; Balagopalan, Aparna; Ouyang, Derek; Koenecke, Allison; Ghassemi, Marzyeh; Ho, Daniel
Prior work has documented widespread racial and ethnic inequities across sectors, such as healthcare, finance, and technology. Across all of these domains, conducting disparity assessments at regular time intervals is critical for surfacing potential biases in decision-making and improving outcomes across demographic groups. Because disparity assessments fundamentally depend on the availability of demographic information, their efficacy is limited by the availability and consistency of demographic identifiers. While prior work has considered the impact of missing data on fairness, little attention has been paid to the role of delayed demographic data. Delayed data, while eventually observed, might be missing at the critical point of monitoring and action – and delays may be unequally distributed across groups in ways that distort disparity assessments. We characterize such impacts in healthcare, using electronic health records of over 5M patients across primary care practices in all 50 states. Our contributions are threefold. First, we document the high rate of race and ethnicity reporting delays in a healthcare setting and demonstrate widespread variation in rates at which demographics are reported across different groups. Second, through a set of retrospective analyses using real data, we find that such delays impact disparity assessments and hence conclusions made across a range of consequential healthcare outcomes, particularly at more granular levels of state-level and practice-level assessments. Third, we find that conventional methods that impute missing race have limited ability to mitigate the effects of reporting delays on the accuracy of timely disparity assessments. Our insights and methods generalize to many domains of algorithmic fairness where delays in the availability of sensitive information may confound audits, thus deserving closer attention within a pipeline-aware machine learning framework.
FAccT ’25, Athens, Greece
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Securing Cryptographic Software via Typed Assembly Language</title>
<link href="https://hdl.handle.net/1721.1/164285" rel="alternate"/>
<author>
<name>Song, Shixin</name>
</author>
<author>
<name>Dong, Tingzhen</name>
</author>
<author>
<name>Nwabueze, Kosi</name>
</author>
<author>
<name>Zanders, Julian</name>
</author>
<author>
<name>Erbsen, Andres</name>
</author>
<author>
<name>Chlipala, Adam</name>
</author>
<author>
<name>Yan, Mengjia</name>
</author>
<id>https://hdl.handle.net/1721.1/164285</id>
<updated>2025-12-12T05:25:37Z</updated>
<published>2025-11-22T00:00:00Z</published>
<summary type="text">Securing Cryptographic Software via Typed Assembly Language
Song, Shixin; Dong, Tingzhen; Nwabueze, Kosi; Zanders, Julian; Erbsen, Andres; Chlipala, Adam; Yan, Mengjia
Authors of cryptographic software are well aware that their code should not leak secrets through its timing behavior, and, until 2018, they believed that following industry-standard constant-time coding guidelines was sufficient. However, the revelation of the Spectre family of speculative execution attacks injected new complexities.&#13;
To block speculative attacks, prior work has proposed annotating the program's source code to mark secret data, with hardware using this information to decide when to speculate (i.e., when only public values are involved) or not (when secrets are in play). While these solutions are able to track secret information stored on the heap, they suffer from limitations that prevent them from correctly tracking secrets on the stack, at a cost in performance.&#13;
This paper introduces SecSep, a transformation framework that rewrites assembly programs so that they partition secret and public data on the stack. By moving from the source-code level to assembly rewriting, SecSep is able to address limitations of prior work. The key challenge in performing this assembly rewriting stems from the loss of semantic information through the lengthy compilation process. The key innovation of our methodology is a new variant of typed assembly language (TAL), Octal, which allows us to address this challenge. Assembly rewriting is driven by compile-time inference within Octal. We apply our technique to cryptographic programs and demonstrate that it enables secure speculation efficiently, incurring a low average overhead of 1.2%.
CCS ’25, Taipei
</summary>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study on LLMs for Promptagator-Style Dense Retriever Training</title>
<link href="https://hdl.handle.net/1721.1/164284" rel="alternate"/>
<author>
<name>Gwon, Daniel</name>
</author>
<author>
<name>Jedidi, Nour</name>
</author>
<author>
<name>Lin, Jimmy</name>
</author>
<id>https://hdl.handle.net/1721.1/164284</id>
<updated>2025-12-12T05:25:42Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">Study on LLMs for Promptagator-Style Dense Retriever Training
Gwon, Daniel; Jedidi, Nour; Lin, Jimmy
Promptagator demonstrated that Large Language Models (LLMs) with few-shot prompts can be used as task-specific query generators for fine-tuning domain-specialized dense retrieval models. However, the original Promptagator approach relied on proprietary and large-scale LLMs which users may not have access to or may be prohibited from using with sensitive data. In this work, we study the impact of open-source LLMs at accessible scales (≤14B parameters) as an alternative. Our results demonstrate that open-source LLMs as small as 3B parameters can serve as effective Promptagator-style query generators. We hope our work will provide practitioners with reliable alternatives for synthetic data generation and give insights to maximize fine-tuning results for domain-specific applications. Our code is available at https://www.github.com/mitll/promptodile
CIKM ’25, Seoul, Republic of Korea
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>One-Sided Bounded Noise: Theory, Optimization Algorithms and Applications</title>
<link href="https://hdl.handle.net/1721.1/164283" rel="alternate"/>
<author>
<name>Xiao, Hanshen</name>
</author>
<author>
<name>Wan, Jun</name>
</author>
<author>
<name>Shi, Elaine</name>
</author>
<author>
<name>Devadas, Srinivas</name>
</author>
<id>https://hdl.handle.net/1721.1/164283</id>
<updated>2025-12-12T05:25:34Z</updated>
<published>2025-11-22T00:00:00Z</published>
<summary type="text">One-Sided Bounded Noise: Theory, Optimization Algorithms and Applications
Xiao, Hanshen; Wan, Jun; Shi, Elaine; Devadas, Srinivas
We investigate the optimal trade-off between utility and privacy using one-sided perturbation. Unlike conventional privacy-preserving statistical releases, randomization for obfuscating side-channel information is often constrained by infrastructure limitations. In practical scenarios, these constraints may only allow positive and bounded perturbations. For example, extending processing time or sending and storing dummy messages/data is typically feasible. However, implementing modifications in the opposite direction is challenging due to restrictions imposed by hardware capacity, communication protocols, and data management systems. In this paper, we establish the foundation of the positive noise mechanism within three semantic privacy frameworks: Differential Privacy (DP), Maximal Leakage (MaxL), and Probably Approximately Correct (PAC) Privacy. We then present a series of results that characterize or approximate the optimal one-sided noise distribution, subject to a second-moment budget and a bounded maximal magnitude. Building on this theoretical foundation, we develop efficient tools to solve the underlying optimization problems. Through experiments conducted in various scenarios, we demonstrate that existing techniques, such as Truncated Biased Laplace noise, are often suboptimal and result in excessive performance degradation. For instance, in an anonymous communication system with a 250K message budget, our optimized DP noise mechanism achieves a 21× reduction in dummy messages and an 18× reduction in dummy message latency overhead compared to traditional methods.
CCS ’25, Taipei, Taiwan
</summary>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction</title>
<link href="https://hdl.handle.net/1721.1/164282" rel="alternate"/>
<author>
<name>Wang, Guanyun</name>
</author>
<author>
<name>Chen, Chuang</name>
</author>
<author>
<name>Jin, Xiao</name>
</author>
<author>
<name>Chen, Yulu</name>
</author>
<author>
<name>Zheng, Yangweizhe</name>
</author>
<author>
<name>Zhen, Qianzi</name>
</author>
<author>
<name>Zhang, Yang</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Yang, Yue</name>
</author>
<author>
<name>Tao, Ye</name>
</author>
<author>
<name>Luo, Shijian</name>
</author>
<author>
<name>Sun, Lingyun</name>
</author>
<id>https://hdl.handle.net/1721.1/164282</id>
<updated>2025-12-12T05:24:58Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction
Wang, Guanyun; Chen, Chuang; Jin, Xiao; Chen, Yulu; Zheng, Yangweizhe; Zhen, Qianzi; Zhang, Yang; Li, Jiaji; Yang, Yue; Tao, Ye; Luo, Shijian; Sun, Lingyun
Wood has become increasingly applied in shape-changing interfaces for its eco-friendly and smart responsive properties, but its applications face challenges because actuation remains primarily driven by humidity. We propose TH-Wood, a biodegradable actuator system composed of wood veneer and microbial polymers, driven by both temperature and humidity, and capable of functioning in complex outdoor environments. This dual-factor-driven approach enhances the sensing and response channels, allowing for more sophisticated coordinating control methods. To assist in designing and utilizing the system more effectively, we developed a structure library inspired by dynamic plant forms, conducted extensive technical evaluations, created an educational platform accessible to users, and provided a design tool for deformation adjustments and behavior previews. Finally, several ecological applications demonstrate the potential of TH-Wood to significantly enhance human interaction with natural environments and expand the boundaries of human-nature relationships.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>From blades to tracks: a case study in structural reuse of curved surfaces for circular design</title>
<link href="https://hdl.handle.net/1721.1/164281" rel="alternate"/>
<author>
<name>Pupping, Jesse</name>
</author>
<author>
<name>Riso, Marzia</name>
</author>
<author>
<name>Popescu, Mariana</name>
</author>
<author>
<name>Bousseau, Adrien</name>
</author>
<author>
<name>Joustra, Jelle</name>
</author>
<id>https://hdl.handle.net/1721.1/164281</id>
<updated>2025-12-12T05:25:49Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">From blades to tracks: a case study in structural reuse of curved surfaces for circular design
Pupping, Jesse; Riso, Marzia; Popescu, Mariana; Bousseau, Adrien; Joustra, Jelle
We explore the fabrication of curved surfaces by reusing panels extracted from decommissioned wind turbine blades, using cycling pumptracks as a case study. We first present real-world prototypes of pumptrack modules that we manufactured to evaluate the practicality of this reuse scenario and to define the boundary conditions for harvesting blade panels and assembling a track. We then propose an algorithm to optimize the segmentation of a wind turbine blade into quadrilateral panels whose sides fall within a small set of compatible boundaries. These panels form a library of modules that designers can connect side by side to create pumptracks of various lengths and curvatures. Together, these contributions provide a proof-of-concept of how computer-aided design and manufacturing can support circular design through the reuse of curved surfaces.
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fintech Innovation in China</title>
<link href="https://hdl.handle.net/1721.1/164280" rel="alternate"/>
<author>
<name>Cusumano, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164280</id>
<updated>2025-12-12T05:25:15Z</updated>
<published>2025-09-23T00:00:00Z</published>
<summary type="text">Fintech Innovation in China
Cusumano, Michael
This column discusses innovation in payment platforms in China and what Western central banks and governments might learn. Private Chinese companies led in the introduction of the mobile payment systems Alipay and WeChat Pay, using QR codes, and most transactions in the country are now digital. China has also banned private cryptocurrencies and stablecoins and introduced a public digital currency and payment system using crypto technology. However, it has been very difficult to get users to switch to the new central bank digital currency, despite aggressive promotions, subsidies, and mandates. China's experience suggests that other central banks around the world will have difficulty introducing their own digital currencies and competing with private stablecoins and cryptocurrencies as well as other private digital payment platforms.
</summary>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Stable Marriage Problem and Sudoku</title>
<link href="https://hdl.handle.net/1721.1/164279" rel="alternate"/>
<author>
<name>Borodin, Matvey</name>
</author>
<author>
<name>Chen, Eric</name>
</author>
<author>
<name>Duncan, Aidan</name>
</author>
<author>
<name>Khovanova, Tanya</name>
</author>
<author>
<name>Litchev, Boyan</name>
</author>
<author>
<name>Liu, Jiahe</name>
</author>
<author>
<name>Moroz, Veronika</name>
</author>
<author>
<name>Qian, Matthew</name>
</author>
<author>
<name>Raghavan, Rohith</name>
</author>
<author>
<name>Rastogi, Garima</name>
</author>
<author>
<name>Voigt, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164279</id>
<updated>2025-12-11T03:12:42Z</updated>
<published>2024-08-07T00:00:00Z</published>
<summary type="text">The Stable Marriage Problem and Sudoku
Borodin, Matvey; Chen, Eric; Duncan, Aidan; Khovanova, Tanya; Litchev, Boyan; Liu, Jiahe; Moroz, Veronika; Qian, Matthew; Raghavan, Rohith; Rastogi, Garima; Voigt, Michael
Are you having trouble getting married? These days, there are lots of products on the market for dating, from apps to websites and matchmakers, but we know a simpler way! That’s right—your path to coupled life isn’t through Tinder; it’s through Sudoku! Read our fabulous paper, where we explore the Stable Marriage Problem to help you find happiness and stability in marriage through math. As a bonus, you get two Sudoku puzzles with a new flavor.
</summary>
<dc:date>2024-08-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Medical Technologies Materialize Oppression</title>
<link href="https://hdl.handle.net/1721.1/164278" rel="alternate"/>
<author>
<name>Boulicault, Marion</name>
</author>
<id>https://hdl.handle.net/1721.1/164278</id>
<updated>2025-12-11T03:12:52Z</updated>
<published>2023-04-03T00:00:00Z</published>
<summary type="text">How Medical Technologies Materialize Oppression
Boulicault, Marion
Biomedical practice can encode and perpetuate oppressive ideologies. This encoding and perpetuation, scholars like Liao and Carbonell (Citation2023) convincingly argue, can occur not only via social practices, but also through medical technologies themselves. In other words, medical technologies can “materialize oppression”: they can be biased in a way that systematically “reflects and perpetuates unjust power relations” (Liao and Carbonell Citation2023, 9).&#13;
&#13;
In this paper, I examine how medical technologies materialize oppression, offering a preliminary, non-exhaustive taxonomy of the mechanisms of this materialization. While scholars like Liao and Carbonell focus primarily on physical medical instruments, I offer new examples that illustrate these mechanisms at work, focusing on medical data classification technologies and infrastructures. A clearer view of how these mechanisms operate suggests possibilities for building technologies that liberate rather than oppress.
</summary>
<dc:date>2023-04-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wild Wood Gridshells: Mixed-Reality Construction of Nonstandard Wood</title>
<link href="https://hdl.handle.net/1721.1/164277" rel="alternate"/>
<author>
<name>Cousin, Tim</name>
</author>
<author>
<name>Alkhayat, Latifa</name>
</author>
<author>
<name>Pearl, Natalie</name>
</author>
<author>
<name>Dewart, Christopher B</name>
</author>
<author>
<name>Mueller, Caitlin</name>
</author>
<id>https://hdl.handle.net/1721.1/164277</id>
<updated>2025-12-11T03:12:47Z</updated>
<published>2023-07-03T00:00:00Z</published>
<summary type="text">Wild Wood Gridshells: Mixed-Reality Construction of Nonstandard Wood
Cousin, Tim; Alkhayat, Latifa; Pearl, Natalie; Dewart, Christopher B; Mueller, Caitlin
Irregular wood is often downcycled despite having significant embedded strength. Reintegrating this wood into structural assemblies can improve material efficiency in the built environment. This work implemented material logic in a design-to-fabrication workflow for building structures using bifurcated tree branches to leverage this potential (Figure 1). This process is demonstrated through the design and construction of a prototype. A user-oriented computational interface is proposed that manages irregular geometries, matching and optimization algorithms, and structural simulation for design iteration. The demonstrated workflow, which concludes with augmented reality (AR) assisted fabrication, facilitates designing with varying materials, enabling upcycling a wide range of nonstandard building elements. At scale, this methodology can significantly reduce the environmental impact of construction.
</summary>
<dc:date>2023-07-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>MiNav: Autonomous Drone Navigation Indoors using Millimeter-Waves</title>
<link href="https://hdl.handle.net/1721.1/164276" rel="alternate"/>
<author>
<name>Lam, Maisy</name>
</author>
<author>
<name>Herrera, Joshua</name>
</author>
<author>
<name>Afzal, Sayed Saad</name>
</author>
<author>
<name>Zhou, Kaichen</name>
</author>
<author>
<name>Adib, Fadel</name>
</author>
<id>https://hdl.handle.net/1721.1/164276</id>
<updated>2025-12-11T03:12:23Z</updated>
<published>2025-09-03T00:00:00Z</published>
<summary type="text">MiNav: Autonomous Drone Navigation Indoors using Millimeter-Waves
Lam, Maisy; Herrera, Joshua; Afzal, Sayed Saad; Zhou, Kaichen; Adib, Fadel
We present the design, implementation, and evaluation of MiNav, a system capable of accurate, efficient, and fully autonomous drone navigation in challenging indoor environments, including those where vision-based systems fail. MiNav builds on recent literature in millimeter-wave (mmWave) backscatter localization and makes the leap to full end-to-end autonomous mmWave-based navigation.&#13;
&#13;
MiNav leverages a mmWave radar mounted on a drone and one or more mmWave backscatter tags deployed in the environment. To enable autonomous navigation, our design introduces key innovations. First, MiNav derives a novel Joint DOP-SNR formulation to probabilistically model uncertainty in localization, and uses this uncertainty to generate an RF-Navigation Map that maximizes the accuracy and reliability of mmWave backscatter localization throughout an environment. It then applies an RF-aware Autonomous Path Planning technique that jointly optimizes for navigation efficiency and localization performance.&#13;
&#13;
We built an end-to-end real-time implementation of MiNav consisting of a custom-built drone and mmWave backscatter tags. We tested it in practical indoor environments. We run over 165 successful autonomous missions across different tag deployments and demonstrate a median 3D navigation error of 9.1 cm. Our results also show that, in comparison to baseline implementations that rely on more classical uncertainty metrics, MiNav achieves a 20% increase in navigation reliability and nearly 3x improvement in self-tracking in millimeter-wave backscatter localization. Finally, we demonstrate first-of-its-kind capabilities, such as fully autonomous, end-to-end mmWave-based drone navigation and path planning in featureless and dark environments. Demo video: http://y2u.be/EpnWibRcxBI
</summary>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Civics Lesson for Corporations Seeking to Join a University Community of Innovation</title>
<link href="https://hdl.handle.net/1721.1/164275" rel="alternate"/>
<author>
<name>Wright, Randall S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164275</id>
<updated>2025-12-11T03:12:51Z</updated>
<published>2023-10-30T00:00:00Z</published>
<summary type="text">A Civics Lesson for Corporations Seeking to Join a University Community of Innovation
Wright, Randall S.
Civics, according to Merriam-Webster (2023), is “a social science dealing with the rights and duties of citizens.” We’ve reached an inflection point. The headline of the July 2023 edition of University-Industry Engagement Advisor (Lewis 2023) reads “Before signing off on strategic partnerships, experts stress value of solid due diligence process.”
</summary>
<dc:date>2023-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Artificial intelligence for telemedicine diabetic retinopathy screening: a review</title>
<link href="https://hdl.handle.net/1721.1/164274" rel="alternate"/>
<author>
<name>Nakayama, Luis Filipe</name>
</author>
<author>
<name>Zago Ribeiro, Lucas</name>
</author>
<author>
<name>Novaes, Frederico</name>
</author>
<author>
<name>Miyawaki, Isabele Ayumi</name>
</author>
<author>
<name>Miyawaki, Andresa Emy</name>
</author>
<author>
<name>de Oliveira, Juliana Angélica Estevão</name>
</author>
<author>
<name>Oliveira, Talita</name>
</author>
<author>
<name>Malerbi, Fernando Korn</name>
</author>
<author>
<name>Regatieri, Caio Vinicius Saito</name>
</author>
<author>
<name>Celi, Leo Anthony</name>
</author>
<author>
<name>Silva, Paolo S</name>
</author>
<id>https://hdl.handle.net/1721.1/164274</id>
<updated>2025-12-11T03:12:49Z</updated>
<published>2023-12-12T00:00:00Z</published>
<summary type="text">Artificial intelligence for telemedicine diabetic retinopathy screening: a review
Nakayama, Luis Filipe; Zago Ribeiro, Lucas; Novaes, Frederico; Miyawaki, Isabele Ayumi; Miyawaki, Andresa Emy; de Oliveira, Juliana Angélica Estevão; Oliveira, Talita; Malerbi, Fernando Korn; Regatieri, Caio Vinicius Saito; Celi, Leo Anthony; Silva, Paolo S
PURPOSE: This study aims to compare artificial intelligence (AI) systems applied in diabetic retinopathy (DR) teleophthalmology screening, currently deployed systems, fairness initiatives and the challenges for implementation.&#13;
METHODS: The review included articles retrieved from PubMed/Medline/EMBASE literature search strategy regarding telemedicine, DR and AI. The screening criteria included human articles in English, Portuguese or Spanish and related to telemedicine and AI for DR screening. The author's affiliations and the study's population income group were classified according to the World Bank Country and Lending Groups.&#13;
RESULTS: The literature search yielded a total of 132 articles, and nine were included after full-text assessment. The selected articles were published between 2004 and 2020 and were grouped as telemedicine systems, algorithms, economic analysis and image quality assessment. Four telemedicine systems that perform a quality assessment, image preprocessing and pathological screening were reviewed. A data and post-deployment bias assessment are not performed in any of the algorithms, and none of the studies evaluate the social impact implementations. There is a lack of representativeness in the reviewed articles, with most authors and target populations from high-income countries and no low-income country representation.&#13;
CONCLUSIONS: Telemedicine and AI hold great promise for augmenting decision-making in medical care, expanding patient access and enhancing cost-effectiveness. Economic studies and social science analysis are crucial to support the implementation of AI in teleophthalmology screening programs. Promoting fairness and generalizability in automated systems combined with telemedicine screening programs is not straightforward. Improving data representativeness, reducing biases and promoting equity in deployment and post-deployment studies are all critical steps in model development.
</summary>
<dc:date>2023-12-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Removal Chain &amp; Sentient Life Cycles</title>
<link href="https://hdl.handle.net/1721.1/164273" rel="alternate"/>
<author>
<name>Schrage, Leonard</name>
</author>
<author>
<name>Duarte, Fábio</name>
</author>
<author>
<name>Ratti, Carlo</name>
</author>
<id>https://hdl.handle.net/1721.1/164273</id>
<updated>2025-12-11T03:12:41Z</updated>
<published>2023-07-03T00:00:00Z</published>
<summary type="text">The Removal Chain &amp; Sentient Life Cycles
Schrage, Leonard; Duarte, Fábio; Ratti, Carlo
As our cities are growing, managing waste is becoming increasingly challenging. Global plastic waste is set to almost triple by 2060 (OECD Citation2020) while recycling rates are staying below expectations.&#13;
&#13;
At the same time, landfills are being relocated away from cities, reaching their maximum capacities, or forced to shut down due to contamination with hazardous materials. As waste management infrastructure is increasingly removed from urban areas, we are becoming further disconnected from its ubiquitous, indispensable, yet invisible life of its own.&#13;
&#13;
In recent years, supply chain issues have been an omnipresent reflection of our consumerist reality. For example, when the Ever Given—one of the largest container ships in the world—got stuck in the Suez Canal in 2021 (Chellel et al. Citation2021), we were reminded that our globalized goods travel a long way around the world before they arrive at our doorstep. Still, we tend to forget that there is a life after the supply. On a planet with finite resources and growing piles of (hazardous) trash, we need to look further than the obvious. We urgently need to embrace a circular economy to combat the climate crisis. And to do so, we need to mind both the supply and removal chains.
</summary>
<dc:date>2023-07-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>A two-level machine learning approach for predicting thermal striping in T-junctions with upstream elbow</title>
<link href="https://hdl.handle.net/1721.1/164272" rel="alternate"/>
<author>
<name>Wang, Yu-Jou</name>
</author>
<author>
<name>Baglietto, Emilio</name>
</author>
<author>
<name>Shirvan, Koroush</name>
</author>
<id>https://hdl.handle.net/1721.1/164272</id>
<updated>2025-12-11T03:12:54Z</updated>
<published>2024-06-02T00:00:00Z</published>
<summary type="text">A two-level machine learning approach for predicting thermal striping in T-junctions with upstream elbow
Wang, Yu-Jou; Baglietto, Emilio; Shirvan, Koroush
Thermal striping is a phenomenon characterized by oscillatory mixing of non-isothermal streams, which is commonly seen in industrial processes such as nuclear coolant piping, petrochemical plants and liquefied natural gas transportation. The oscillatory mixing of hot and cold fluid can produce thermal field fluctuations and pose a potential risk of high-cycle thermal fatigue failures. Predicting and evaluating spatiotemporal fluctuations in thermal striping often requires high resolution and massive computational power. Although there have been extensive studies using machine learning algorithms on surrogate modeling, research focused on spatiotemporal fluctuation predictions is very limited. Due to the high dimensionality, it often requires complex algorithms with a large amount of high-fidelity training data, which limits the adoption of such methods for industrial applications. In this research, a two-level machine learning framework based on turbulence coherent structures is proposed and its application to a practical problem is demonstrated. The two-level design leverages vortex identification and local bias correction techniques, efficiently reducing the number of full-order simulations required for training. In the first level, well-organized coherent structures are extracted by performing Proper Orthogonal Decomposition on local parameters and then a tree-based machine-learning model is used to down-select the reference structures for the field reconstruction. In the second level, a parameterized convolutional neural network is trained to predict the bias introduced by the reference-structure approximation. The demonstration of the methodology shows that the method can accurately capture the fluctuation frequencies and amplitudes of the spatiotemporal fields in a highly variational setting. Based on the vortex identification method, the methodology is expected to be applicable to general phenomena driven by large coherent structures.
</summary>
<dc:date>2024-06-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>IP Networks Over Heterogeneous Embedded Serial Links</title>
<link href="https://hdl.handle.net/1721.1/164271" rel="alternate"/>
<author>
<name>Perry, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164271</id>
<updated>2025-12-11T03:08:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">IP Networks Over Heterogeneous Embedded Serial Links
Perry, Nathan
The Internet Protocol (IP) provides a number of key benefits to networked devices: it serves as a "narrow waist" enabling functional modularity by decoupling lower-layer devices from application behavior, it provides a notion of transitive connectivity and a number of standardized methods to achieve it, and most importantly, it is ubiquitous, enabling almost all networked applications to mutually communicate.&#13;
&#13;
Many embedded microcontrollers cannot take advantage of the benefits of IP because they lack the dedicated networking hardware which, as a practical matter, is required to interact with nontrivial networks. I observe that multihop point-to-point IP networks can in principle be constructed over the communication media that microcontrollers commonly do have, such as UARTs, I2C, SPI, and CAN bus, but software support is lacking to make this networking approach accessible.&#13;
&#13;
Therefore, this thesis develops and evaluates interstice, a platform-independent, open-source software library designed to enable the flexible implementation of modular packet forwarders in userspace. It can be used to interconnect devices and their IP stacks across a variety of conventional and unconventional links. Interstice exposes a reprogrammable, dynamically-updatable packet-forwarding strategy, enabling forwarder nodes in principle to act as hubs, bridges, full routers, or implement firewalls or NAT, as application requirements and platform constraints permit.&#13;
&#13;
This approach enables benefits for modular, networked systems of microcontrollers which need to talk to the outside world: using IP enables internal microcontrollers to communicate with external devices such as PCs and smartphones without the need for application gateways. Further, to the extent that such networks are runtime-reconfigurable, features of IP such as address assignment, dynamic routing, and link-agnosticity can be incredibly beneficial.&#13;
&#13;
Interstice is evaluated here primarily against networks of various types of serial links (UART, I2C, CAN) speaking PPP, selected to demonstrate the utility of the approach for connecting embedded devices lacking dedicated networking peripherals, and further that link drivers can be specialized to take advantage of the specific characteristics of each link. The approach is showcased in application scenarios including a networked milling machine, and is analyzed for a number of performance metrics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives</title>
<link href="https://hdl.handle.net/1721.1/164270" rel="alternate"/>
<author>
<name>Li, Yuqing Lucy</name>
</author>
<id>https://hdl.handle.net/1721.1/164270</id>
<updated>2025-12-11T03:08:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives
Li, Yuqing Lucy
Imagination is the origin of reality. Cultivating new infrastructural and ecological imaginaries is crucial to addressing the climate crisis. Where is the space to prototype new social and technological relations? Transient electronics is an emerging field in advanced materials focused on making electronics that don’t last. Devices are designed to be transient for biomedical, environmental monitoring, or energy storage applications. It is a fascinating and unconventional direction that advances the area of biocompatibility, redefining waste and time-programmable decay {Making electronics that, 2022}. However, in a manufacturing system that fundamentally favors the inert and invariant, transient properties can be precisely the qualities that make adaptation most challenging, often failing at the very stage of imagination. Taking inspiration from transient electronics, this thesis consists of a set of novel biomaterials, a workflow, and three fictional stories to enrich our imagination and instill agency amidst entangled humanitarian, ecological, and technological crises. BioLIG is a material for prototyping accessible and compostable electronics. It uses laser-induced graphene as an organic, bio-derived conductor and affordable biomaterials as the substrate. Three sheets and two inks make up a toolkit to create biocomposites with different properties, colors, and textures specifically designed for prototyping sensors and circuits with transient behaviours. Through a series of characterisations, BioLIG is evaluated and shown to achieve, with a single material, electrical performance on par with that of synthetic substrates. However, the goal is not to create a replacement material but to prototype new social and technological relations to transient materials. Through a questionnaire, I collected stories, ideas, and questions from makers, designers, and artists for BioLIG and used those as the basis for imagination.
In a speculative house, on three floors, three stories unfold of a hoarder, a city forester, and a family living in a time with a leap in our relationship to fabrication, to electronics, and to decay.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Being. Creative. Together. Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI</title>
<link href="https://hdl.handle.net/1721.1/164269" rel="alternate"/>
<author>
<name>Dhariwal, Manuj</name>
</author>
<id>https://hdl.handle.net/1721.1/164269</id>
<updated>2025-12-11T03:06:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Being. Creative. Together. Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI
Dhariwal, Manuj
As Artificial Intelligence (AI) becomes increasingly interwoven into our creative, social, and learning experiences, we must ask: Will these technologies deepen our connection to the timeless human experiences of Being, Being Together, and Being Creative Together—or will they pull us apart, leaving us more anxious and isolated? In an era where AI systems are increasingly framed as our “co-creators” and “companions,” enabling hyper-personalized yet hyper-isolated interactions, this dissertation reclaims the prefix ‘co-’ as fundamentally interhuman—introducing a set of new paradigms that center human connection, co-creativity, and calm in the design of technologies.&#13;
&#13;
Central to this work, we’ve developed CoCo (coco.build), a general-purpose, real-time co-creative learning platform that empowers young people to engage in a wide variety of safe, shared creative experiences with their peers—spanning creative computing, AI education, digital art, writing, and more. Through the platform, we showcase how digital environments can move beyond isolated modes of learning and creating to support multiple ways of being creative together with others—introducing a new paradigm for real-time digital collaboration. We further illuminate how CoCo has been envisioned as a “self-less” social platform that de-emphasizes comparison-based, self-centric metrics (profiles, likes, followers) prevalent in most online systems for young people. &#13;
&#13;
We weave these interconnected ideas into the unifying theme of “Being. Creative. Together.”— values we believe are both timeless and especially timely in the AI era. We supplement the broader design, technical, practical, and pedagogical contributions of this work by sharing insights and feedback from pilots with over 2,000 young people and educators across diverse settings. Ultimately, we see this dissertation as both a contribution and a call—to preserve the human essence of co-, to distinguish it from the useful, powerful, but instrumental AI interactions, and to shape digital environments that nurture our capacity to co-imagine, co-create, co-learn, co-exist, and co-evolve—with and through one another.&#13;
&#13;
Note: This work has been co-developed with Shruti Dhariwal. See https://coco.build/thesis for suggested citation and updates on this work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward the computational transformation of legal theory and practice</title>
<link href="https://hdl.handle.net/1721.1/164268" rel="alternate"/>
<author>
<name>Mahari, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/164268</id>
<updated>2025-12-11T03:05:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward the computational transformation of legal theory and practice
Mahari, Robert
This doctoral thesis seeks to advance the formalization of computational law as a distinct research discipline. It explores three interwoven key themes: the empirical understanding of legal systems through advanced computational methods; the development of computational tools to augment the capabilities of legal practitioners, thereby expanding access to justice; and the identification of novel, computationally enabled regulatory interventions. This research directly confronts the global access-to-justice crisis and the shortcomings of conventional legal services that frequently leave businesses and individuals without adequate support. Furthermore, the thesis investigates innovative regulatory strategies for emerging technologies, aiming to synchronize legal frameworks with contemporary technological progress by exploring adaptive and forward-looking governance approaches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular Development Platforms and Creative Ecosystems: Design &amp; Deployment for Wide Impact Across Fields</title>
<link href="https://hdl.handle.net/1721.1/164267" rel="alternate"/>
<author>
<name>Shtarbanov, Ali</name>
</author>
<id>https://hdl.handle.net/1721.1/164267</id>
<updated>2025-12-11T03:06:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modular Development Platforms and Creative Ecosystems: Design &amp; Deployment for Wide Impact Across Fields
Shtarbanov, Ali
Physical, digital, and conceptual tools and building blocks are fundamental enablers and accelerators of humanity’s progress in technology, science, medicine, art, and even in abstract fields like mathematics, philosophy, and social sciences. Hardware development platforms present a special class of tools and building blocks, facilitating and accelerating innovation, prototyping, and research. They drastically reduce prototyping time and complexity, improve efficiency for experts, democratize access to innovation, and even inspire entirely new ideas. This research investigates how to design, develop, and deploy development platforms in ways that maximize their real-world impact potential. It focuses not only on the technical and engineering aspects, but also on the complete ecosystem a platform needs in order to have impact, including community building, engagement with users and volunteers, content strategy, online presence, publicity, deployment, feedback loops, modularity, financial viability, and symbiotic relationships. A comprehensive Design &amp; Deployment Framework is introduced as a conceptual tool for creating high-impact platforms and creative ecosystems, recognizing and fostering the positive feedback loops that sustain them and that shape their evolution and growth. This framework is applied in the development and deployment of multiple novel platform and ecosystem projects, including FlowIO, SleeveIO, and ModiStrap, as well as the ecosystem SoftRobotics.IO. These works have benefited thousands of people around the world, providing researchers, designers, and engineers with powerful, reconfigurable, modular enabling artifacts that streamline prototyping, accelerate research, and lower barriers in fields like soft robotics, haptics, assistive technology, shape-changing interfaces, interactive arts, and more.
A multitude of research, art, and engineering projects made possible by FlowIO and SoftRobotics.IO are presented, as well as over a dozen case studies showcasing how other users across disciplines have adopted, utilized, and extended these systems to advance their own creative, educational, and technical endeavors. Additionally, this thesis investigates various deployment models for hardware and introduces a new deployment model for equitable access to expensive hardware that may otherwise be financially out of reach for many users, as well as an “earned open-source” model, which preserves the essence of the traditional open-source model while eliminating many of its pitfalls.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches</title>
<link href="https://hdl.handle.net/1721.1/164266" rel="alternate"/>
<author>
<name>Justen, Lennart J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164266</id>
<updated>2025-12-11T03:08:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches
Justen, Lennart J.
Civilization confronts a growing challenge: advancing transformative biological science while safeguarding against catastrophic misuse, a tension amplified by the rapid convergence between biology and artificial intelligence. The COVID-19 pandemic starkly revealed our vulnerabilities to self-replicating, exponential biological phenomena, yet current defenses remain dangerously inadequate—often blind to novel pathogens until too late and lacking barriers against rapid airborne transmission. This thesis argues that robust biosecurity enables, rather than hinders, progress, and advances three key defensive capabilities. First, it evaluates blood metagenomics for pathogen-agnostic surveillance, reanalyzing public datasets to quantify viral signatures and guide the implementation of much-needed early-warning systems sensitive to novel pathogens. Second, it advances far-UVC, ultraviolet light with wavelengths between 200 and 235 nm, for continuous indoor air disinfection, critically assessing its safety profile through an international expert review and establishing research priorities essential for deploying this vital physical defense against airborne threats. Third, it develops rigorous methodologies for evaluating AI's rapidly evolving biological capabilities, benchmarking frontier models across diverse tasks to track progress, reveal limitations in current assessments, and guide responsible innovation in this powerful dual-use technology. Collectively, these contributions help accelerate technologies to mitigate biological risks, thereby helping secure the conditions for continued, beneficial advancement of biology in the age of AI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164265" rel="alternate"/>
<author>
<name>Poole-Dayan, Elinor</name>
</author>
<id>https://hdl.handle.net/1721.1/164265</id>
<updated>2025-12-11T03:08:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies
Poole-Dayan, Elinor
Deliberative assemblies—representative samples of citizens engaged in collective decision-making through facilitated learning and deliberation—are increasingly recognized as powerful tools for revitalizing democratic governance. Yet, core aspects of how deliberation shapes which ideas advance, how perspectives evolve, and why certain recommendations succeed remain opaque and underexamined. This thesis addresses these gaps by investigating: (1) How might we trace the evolution and distillation of ideas into concrete recommendations within deliberative assemblies? and (2) How does the deliberative process shape delegate perspectives and influence voting dynamics over the course of the assembly?&#13;
&#13;
To answer these questions, I develop LLM-based methodologies for empirically analyzing transcripts from a tech-enhanced student deliberative assembly. The first framework identifies and visualizes the space of expressed suggestions, revealing that seemingly large gaps between ideas and final recommendations often reflect productive deliberative filtering—while also surfacing overlooked viable ideas.&#13;
A second analysis integrates post-assembly survey data with transcript-grounded voting patterns to uncover the primary drivers of vote change: edits to recommendations, evolving opinions, and strategic shifts in response to updated priorities. Building on this, I introduce a framework for reconstructing each delegate’s evolving stance across the assembly, linking shifts in perspective to specific deliberative moments and justifications.&#13;
&#13;
Together, these methods contribute novel empirical insight into deliberative processes and demonstrate how LLMs can surface high-resolution dynamics otherwise invisible in traditional assembly outputs. The findings lay groundwork for new tools that support facilitators and delegates during live assemblies, improve transparency for decision-makers, and elevate ideas that may otherwise be missed.&#13;
&#13;
Looking ahead, this work opens pathways for comparative research across assemblies and highlights the potential for human-centered AI to meaningfully enhance deliberative democratic practice. As societies seek new modes of participatory governance amid growing polarization and institutional mistrust, tools that strengthen deliberation without compromising its core human character are urgently needed.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private, Verifiable, and Auditable AI Systems</title>
<link href="https://hdl.handle.net/1721.1/164264" rel="alternate"/>
<author>
<name>South, Tobin</name>
</author>
<id>https://hdl.handle.net/1721.1/164264</id>
<updated>2025-12-11T03:06:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Private, Verifiable, and Auditable AI Systems
South, Tobin
The growing societal reliance on artificial intelligence necessitates robust frameworks for ensuring its security, accountability, and trustworthiness. This thesis addresses the complex interplay between privacy, verifiability, and auditability in modern AI, particularly in foundation models. It argues that technical solutions that integrate these elements are critical for responsible AI innovation. Drawing from international policy contributions and technical research to identify key risks in the AI pipeline, this work introduces novel technical solutions for critical privacy and verifiability challenges.  Specifically, the research introduces techniques for enabling verifiable and auditable claims about AI systems using zero-knowledge cryptography; utilizing secure multi-party computation and trusted execution environments for auditable, confidential deployment of large language models and information retrieval; and implementing enhanced delegation mechanisms, credentialing systems, and access controls to secure interactions with autonomous and multi-agent AI systems. Synthesizing these technical advancements, this dissertation presents a cohesive perspective on balancing privacy, verifiability, and auditability in foundation model-based AI systems, offering practical blueprints for system designers and informing policy discussions on AI safety and governance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Language Models as Mirrors and Bridges for Intergroup Communication</title>
<link href="https://hdl.handle.net/1721.1/164263" rel="alternate"/>
<author>
<name>Jiang, Hang</name>
</author>
<id>https://hdl.handle.net/1721.1/164263</id>
<updated>2025-12-11T03:06:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Language Models as Mirrors and Bridges for Intergroup Communication
Jiang, Hang
This dissertation explores how large language models (LLMs) can serve dual roles in intergroup communication: as mirrors that reflect intergroup differences, and as bridges that facilitate communication across group boundaries. Intergroup communication refers to interactions between individuals from different social groups, such as political, cultural, or professional communities, where divergent perspectives often lead to misunderstandings, unequal access to information, and social fragmentation.&#13;
&#13;
The first part of the dissertation presents LLMs as mirrors that reveal intergroup differences. We first introduce CommunityLM, a novel framework for probing public opinion by fine-tuning LLMs on social media posts from specific communities. Our case study comparing Republican and Democratic groups reveals that model predictions align well with human survey responses, substantially outperforming established baselines. Building on this foundation, we develop PersonaLLM to investigate whether prompt-based LLM agents can generate content aligned with assigned personas, which has emerged as a popular approach for modeling the behaviors of social groups. Through automated and human evaluations, we demonstrate that these agents can complete personality tests and write stories that reflect the distinctive behavioral patterns of specific personality profiles. Together, these complementary projects illustrate how LLMs can effectively capture and simulate the unique perspectives and behaviors that characterize diverse social groups.&#13;
&#13;
The second part of the dissertation presents LLMs as bridges that facilitate communication across group boundaries. First, we introduce Bridging Dictionary, an interactive tool that uses retrieval-augmented generation (RAG) techniques with LLMs to identify polarized language and suggest more inclusive alternatives. In collaboration with PBS Frontline, we demonstrate the potential of LLMs to reduce misunderstanding in journalism and political communication. Second, we present Legal Storytelling, a human-LLM collaboration framework that generates accessible narratives to explain complex legal concepts to non-experts. Through randomized controlled trials (RCTs), we find that LLM-generated narratives can improve legal literacy and help bridge communication gaps between experts and laypeople, particularly among non-native English speakers. Third, we develop FaciliTrain, a voice-based, LLM-powered system that enables facilitators to learn and practice intergroup dialogue skills with multiple LLM agents representing diverse social backgrounds and personas in a small-group setting. User studies with campus participants show encouraging early results, suggesting that LLMs can effectively support the development of communication skills essential for constructive intergroup dialogue. Together, these projects illustrate how LLMs can actively foster mutual understanding across social divides by promoting more inclusive, accessible, and constructive communication.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164262" rel="alternate"/>
<author>
<name>Wong, Wing Cheung Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164262</id>
<updated>2025-12-11T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies
Wong, Wing Cheung Michael
With trust in traditional democratic institutions waning, it is increasingly important to examine how potential new institutions could be created and bolstered, with particular emphasis on restoring trust and empowering the public. One potential solution, the citizens' or deliberative assembly, can serve to bridge the governance and legitimacy gap between real-world policy decision-making processes and citizen-driven impact by leveraging random sortition and a well-designed deliberation process. In this thesis, I explore how AI-driven sensemaking via GPT-4o mini, a large language model (LLM), synthesized with custom-built visualization tools, can potentially reveal the dynamics within citizen deliberative assemblies where representative, randomly selected citizens navigate public interest issues through facilitated deliberation, and how such tools can serve to amplify transparency within both the assembly process itself and the issues they explore. Through building three different prototype visualization frameworks and the development of an AI-powered topic identification process called backcasting, I analyze novel datasets from two tech-enhanced assemblies: fully recorded discussions from both an on-the-ground citizens' assembly in Deschutes County, Oregon, and an MIT student assembly on sustainability. In backcasting, assembly outcomes are linked to transcriptions of assembly discussions via LLM tagging, uncovering what participants deliberate about, and when, where, and by whom, for topics that eventually become proposals, recommendations, or outcomes. Furthermore, I analyze the sentiment with which an assembly delegate presented their view on a certain recommendation (agreement, disagreement, etc.) in addition to the supporting reasoning patterns this delegate used to express their view, if any (e.g. whether they draw from personal experience, reference outside expertise, etc.).
To evaluate the final prototype tool, I interview subject-matter and assembly experts, assembly organizers and facilitators, and assembly delegates to assess the potential and drawbacks of this visualization tool and AI sensemaking backbone. Positive feedback from these user studies includes the clear potential for research, narrative building, and facilitation improvement, in addition to greater perceived transparency into the workings of an assembly process. Further work is still needed, however, to address significant lingering issues, such as adjusting presentation to better serve specific use cases and to reduce complexity and confusion, the most referenced drawback of Delibrary. Overall, my thesis aims to build transparent insights into the human-led structures of assemblies, enabling relevant stakeholders, from delegates and policy makers to the general public, to achieve a better understanding of the assembly process and to engender a perception of legitimacy by illustrating that delegates drawn from all walks of life have a meaningful voice in an impactful process. By helping to promote this understanding and perception of legitimacy of an effective and respectful deliberation process, I strive to ultimately help scaffold healthier democratic decision-making.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Facilitating Creative Learning: Engaging in a Practice of Care</title>
<link href="https://hdl.handle.net/1721.1/164261" rel="alternate"/>
<author>
<name>Presicce, Carmelo</name>
</author>
<id>https://hdl.handle.net/1721.1/164261</id>
<updated>2025-12-11T03:06:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Facilitating Creative Learning: Engaging in a Practice of Care
Presicce, Carmelo
Creative learning is shaped not only by tools and activities, but by relationships. This dissertation explores facilitation in creative learning environments as a relational practice centered on care—not as a set of techniques, but as a deeply human way of being with others, a commitment to creating spaces where people feel supported enough to explore, connected enough to share, and valued enough to express themselves. Grounded in constructionist, socioconstructivist, and humanistic pedagogies, the research draws from my multi-year engagement with Learning Creative Learning (LCL)—an online course and global community for educators—and WeScratch, a series of hands-on, collaborative online workshops introducing educators to creative coding. Through qualitative analysis of small-group facilitation during WeScratch workshops, I explore how volunteer facilitators experience and reflect on their practice. Drawing from three case studies, I examine how care takes shape in the situated, relational work of creative learning facilitation. In particular, I identify three interrelated forms of care: epistemic care, which focuses on what and how people learn; affirming care, which supports what learners value and who they are; and convivial care, which attends to how learners feel and relate to one another in a group. After introducing these three forms of care through the work of individual facilitators, I show how epistemic, affirming, and convivial care are deeply interwoven in practice—at times reinforcing one another, at times pulling in different directions. Facilitators must navigate these tensions in the moment, making situated judgments about when to step in, when to hold back, and how to respond to the evolving needs of individuals and groups. By centering care, this research highlights facilitation as deeply human, relational work that sustains the conditions for creative learning, contributing to the broader and evolving discourse on constructionism. 
It also makes the case for seeing facilitation as an ethical and political practice. In a time when educational discourse is increasingly shaped by ideals of efficiency and optimization—and the world faces rising authoritarianism and dehumanization—choosing to care is not only pedagogically meaningful, but also politically urgent.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Earth Abundant Catalytic Materials for Abatement of Atmospheric Methane Sources, and Evaluation of Agricultural Deployment Environments</title>
<link href="https://hdl.handle.net/1721.1/164260" rel="alternate"/>
<author>
<name>Brenneis, Rebecca J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164260</id>
<updated>2025-12-11T03:06:10Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Novel Earth Abundant Catalytic Materials for Abatement of Atmospheric Methane Sources, and Evaluation of Agricultural Deployment Environments
Brenneis, Rebecca J.
Annual global average temperatures in the past year have already exceeded the international target limit of 1.5°C, and the window to prevent that rise from extending is rapidly closing. The high global warming potential (GWP) and short atmospheric residence time (half-life of around 12 years) of methane make it a critical target for action to slow the pace of climate change in this decade. Yet technological solutions for methane abatement are challenged by methane’s inertness, dilute atmospheric concentrations, and diffuse, variable emissions sources. In this thesis, I propose the use of bio-inspired, earth-abundant, heterogeneous catalysts as a novel tool for atmospheric and emissions-based methane abatement. Copper zeolites were characterized for their ability to convert low levels of methane, continuously, at low temperatures, for moderate durations, and in the presence of a variety of gaseous mixture influents, designed to mimic atmospheric air at standard temperatures and pressures. Catalytic performance was tested under conditions designed to mimic those found at two of the primary sources of low-level, anthropogenic emissions: ventilation air methane (VAM) and industrial dairy. Laboratory-synthesized catalysts were shown to completely oxidize methane at concentrations ranging from atmospheric to 1%, covering the range of subflarable levels. Conversion was demonstrated at temperatures as low as 270°C, with complete conversion achievable as low as 350°C, in the presence of 20% oxygen. While the presence of water vapor, nitric oxide, and hydrogen sulfide was shown to partially reduce catalytic efficiency, conversion efficiency was restored with increased temperature. The presence of carbon dioxide, alkanes, ammonia, and hydrogen, at industrially relevant concentrations, had no effect on catalytic performance. Finally, atmospheric samples were collected at six industrial-scale dairy barns across the Midwest and compared with the simulated laboratory conditions.
Dairy samples fell within the ranges tested at the bench scale, showing no evidence of any impediment to copper zeolites as a potential abatement tool. Methane concentrations at dairies were shown to be on the order of atmospheric to the low tens of ppmv, making copper zeolites the only currently identified abatement strategy to address methane emissions at these locations. While it remains to be shown that these zeolites can provide a net greenhouse gas benefit under the conditions required, copper zeolites are a strong option on a short list of technologies to address methane at any subflarable concentration, sources that comprise 80% of global emissions, and they show great promise as a climate technology breakthrough.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>To Co- Is Human: Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI</title>
<link href="https://hdl.handle.net/1721.1/164259" rel="alternate"/>
<author>
<name>Dhariwal, Shruti</name>
</author>
<id>https://hdl.handle.net/1721.1/164259</id>
<updated>2025-12-11T03:06:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">To Co- Is Human: Designing Technologies That Center Human Connection, Co-creativity, and Calm in the Era of AI
Dhariwal, Shruti
In an era where Artificial Intelligence (AI) systems are increasingly framed as our “companions” and “co-creators,” this dissertation reclaims “co-” as a fundamental marker of shared human experience—using it as a foundation to reimagine and build technologies that consciously center interhuman connection and co-creativity. Central to this work, we’ve developed CoCo (coco.build)—a general-purpose, real-time co-creative learning platform that empowers young people to engage in a wide variety of safe, shared creative experiences with their peers, spanning creative computing, AI education, digital art, writing, and more. Through the platform, we showcase how digital environments can move beyond isolated modes of learning and creating to support multiple ways of being creative together with others—introducing a new paradigm for real-time digital collaboration. We further illuminate how CoCo has been envisioned as a “self-less” social platform that de-emphasizes comparison-based, self-centric metrics (profiles, likes, followers) prevalent in most online systems for youth. We anchor these interconnected ideas in a unifying theme of “Being. Creative. Together.”—reflecting timeless values that have become especially timely in an era when AI tools can further accentuate individualized digital experiences for young people. We supplement the broader design, technical, practical, and pedagogical contributions of this work by sharing insights and feedback from pilots with over 2,000 young people and educators across diverse settings. Ultimately, we see this dissertation as both a contribution and a call—to preserve the human essence of co-, to distinguish it from the useful, powerful, but instrumental AI interactions, and to shape digital environments that nurture young people’s capacity to co-imagine, co-create, co-learn, co-exist, and co-evolve—with and through one another. &#13;
&#13;
Note: This work has been co-developed with Manuj Dhariwal. See https://coco.build/thesis for suggested citation and updates on this work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Dual-Branch Coupled Fourier Neural Operator for High-Resolution Multi-Phase Flow Modeling in Porous Media</title>
<link href="https://hdl.handle.net/1721.1/164258" rel="alternate"/>
<author>
<name>Al Hashim, Hassan</name>
</author>
<author>
<name>Elyas, Odai</name>
</author>
<author>
<name>Williams, John</name>
</author>
<id>https://hdl.handle.net/1721.1/164258</id>
<updated>2025-12-11T03:12:56Z</updated>
<published>2025-11-22T00:00:00Z</published>
<summary type="text">A Dual-Branch Coupled Fourier Neural Operator for High-Resolution Multi-Phase Flow Modeling in Porous Media
Al Hashim, Hassan; Elyas, Odai; Williams, John
This paper investigates a physics-informed surrogate modeling framework for multi-phase flow in porous media based on the Fourier Neural Operator. Traditional numerical simulators, though accurate, suffer from severe computational bottlenecks due to fine-grid discretizations and the iterative solution of highly nonlinear partial differential equations. By parameterizing the kernel integral directly in Fourier space, the operator provides a discretization-invariant mapping between function spaces, enabling efficient spectral convolutions. We introduce a Dual-Branch Adaptive Fourier Neural Operator with a shared Fourier encoder and two decoders: a saturation branch that uses an inverse Fourier transform followed by a multilayer perceptron and a pressure branch that uses a convolutional decoder. Temporal information is injected via Time2Vec embeddings and a causal temporal transformer, conditioning each forward pass on step index and time step to maintain consistent dynamics across horizons. Physics-informed losses couple data fidelity with residuals from mass conservation and Darcy pressure, enforcing the governing constraints in Fourier space; truncated spectral kernels promote generalization across meshes without retraining. On SPE10-style heterogeneities, the model shifts the infinity-norm error mass into the 10⁻² to 10⁻¹ band during early transients and sustains lower errors during pseudo-steady state. In zero-shot three-dimensional coarse-to-fine upscaling from 30 × 110 × 5 to 60 × 220 × 5, it attains R² = 0.90, RMSE = 4.4 × 10⁻², and MAE = 3.2 × 10⁻², with more than 90% of voxels below five percent absolute error across five unseen layers, while the end-to-end pipeline runs about three times faster than a full-order fine-grid solve and preserves water-flood fronts and channel connectivity. Benchmarking against established baselines indicates a scalable, high-fidelity alternative for high-resolution multi-phase flow simulation in porous media.
</summary>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unveiling IPv6 Scanning Dynamics: A Longitudinal Study Using Large Scale Proactive and Passive IPv6 Telescopes</title>
<link href="https://hdl.handle.net/1721.1/164257" rel="alternate"/>
<author>
<name>Tanveer, Hammas Bin</name>
</author>
<author>
<name>Chan, Echo</name>
</author>
<author>
<name>Mok, Ricky K. P.</name>
</author>
<author>
<name>Kappes, Sebastian</name>
</author>
<author>
<name>Richter, Philipp</name>
</author>
<author>
<name>Gasser, Oliver</name>
</author>
<author>
<name>Ronan, John</name>
</author>
<author>
<name>Berger, Arthur</name>
</author>
<author>
<name>Claffy, kc</name>
</author>
<id>https://hdl.handle.net/1721.1/164257</id>
<updated>2025-12-10T06:57:25Z</updated>
<published>2025-09-04T00:00:00Z</published>
<summary type="text">Unveiling IPv6 Scanning Dynamics: A Longitudinal Study Using Large Scale Proactive and Passive IPv6 Telescopes
Tanveer, Hammas Bin; Chan, Echo; Mok, Ricky K. P.; Kappes, Sebastian; Richter, Philipp; Gasser, Oliver; Ronan, John; Berger, Arthur; Claffy, kc
We introduce new tools and vantage points to develop and integrate proactive techniques to attract IPv6 scan traffic, thus enabling its analysis. By deploying the largest-ever IPv6 proactive telescope in a production ISP network, we collected over 600M packets of unsolicited traffic from 1.9k Autonomous Systems in 10 months. We characterized the sources of unsolicited traffic, evaluated the effectiveness of five major features across the network stack, and inferred scanners' sources of target addresses and their strategies.
</summary>
<dc:date>2025-09-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>HARMONY: A Scalable Distributed Vector Database for High-Throughput Approximate Nearest Neighbor Search</title>
<link href="https://hdl.handle.net/1721.1/164256" rel="alternate"/>
<author>
<name>Xu, Qian</name>
</author>
<author>
<name>Zhang, Feng</name>
</author>
<author>
<name>Li, Chengxi</name>
</author>
<author>
<name>Cao, Lei</name>
</author>
<author>
<name>Chen, Zheng</name>
</author>
<author>
<name>Zhai, Jidong</name>
</author>
<author>
<name>Du, Xiaoyong</name>
</author>
<id>https://hdl.handle.net/1721.1/164256</id>
<updated>2025-12-10T06:57:53Z</updated>
<published>2025-09-23T00:00:00Z</published>
<summary type="text">HARMONY: A Scalable Distributed Vector Database for High-Throughput Approximate Nearest Neighbor Search
Xu, Qian; Zhang, Feng; Li, Chengxi; Cao, Lei; Chen, Zheng; Zhai, Jidong; Du, Xiaoyong
Approximate Nearest Neighbor Search (ANNS) is essential for various data-intensive applications, including recommendation systems, image retrieval, and machine learning. Scaling ANNS to handle billions of high-dimensional vectors on a single machine presents significant challenges in memory capacity and processing efficiency. To address these challenges, distributed vector databases leverage multiple nodes for the parallel storage and processing of vectors. However, existing solutions often suffer from load imbalance and high communication overhead, primarily due to traditional partition strategies that fail to effectively distribute the workload. In this paper, we introduce Harmony, a distributed ANNS system that employs a novel multi-granularity partition strategy, combining dimension-based and vector-based partition. This strategy ensures a balanced distribution of computational load across all nodes while effectively minimizing communication costs. Furthermore, Harmony incorporates an early-stop pruning mechanism that leverages the monotonicity of distance computations in dimension-based partition, resulting in significant reductions in both computational and communication overhead. We conducted extensive experiments on diverse real-world datasets, demonstrating that Harmony outperforms leading distributed vector databases, achieving 4.63&#215; throughput on average on four nodes and a 58% performance improvement over traditional distribution for skewed workloads.
</summary>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kernel Extension DSLs Should Be Verifier-Safe!</title>
<link href="https://hdl.handle.net/1721.1/164255" rel="alternate"/>
<author>
<name>Solleza, Franco</name>
</author>
<author>
<name>Adam, Justus</name>
</author>
<author>
<name>Crotty, Andrew</name>
</author>
<author>
<name>Narayan, Akshay</name>
</author>
<author>
<name>Schwarzkopf, Malte</name>
</author>
<author>
<name>Tatbul, Nesime</name>
</author>
<id>https://hdl.handle.net/1721.1/164255</id>
<updated>2025-12-10T06:57:52Z</updated>
<published>2025-09-08T00:00:00Z</published>
<summary type="text">Kernel Extension DSLs Should Be Verifier-Safe!
Solleza, Franco; Adam, Justus; Crotty, Andrew; Narayan, Akshay; Schwarzkopf, Malte; Tatbul, Nesime
eBPF allows developers to write safe operating system extensions, but writing these extensions remains challenging because it requires detailed knowledge of both the extension's domain and eBPF's programming interface. Most importantly, the extension must pass the eBPF verifier.&#13;
This paper argues that DSLs for extensions should guarantee verifier-safety: valid DSL programs should result in eBPF code that always passes the verifier. This avoids complex debugging and the need for extension developers to be eBPF experts. We show that three existing DSLs for different domains are compatible with verifier-safety. Beyond verifier-safety, practical extension DSLs must also achieve good performance. Inspired by database query optimization, we sketch an approach to creating DSL-specific optimizers capable of maintaining verifier-safety. A preliminary evaluation shows that optimizing verifier-safe extension performance is feasible.
eBPF ’25, September 8–11, 2025, Coimbra, Portugal
</summary>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experiencing EmbedNet: Embedding self-sensing to 3D casting objects</title>
<link href="https://hdl.handle.net/1721.1/164254" rel="alternate"/>
<author>
<name>Liu, Fangzheng</name>
</author>
<author>
<name>Dementyev, Artem</name>
</author>
<author>
<name>Wicaksono, Irmandy</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/164254</id>
<updated>2025-12-10T06:57:19Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Experiencing EmbedNet: Embedding self-sensing to 3D casting objects
Liu, Fangzheng; Dementyev, Artem; Wicaksono, Irmandy; Paradiso, Joseph
This paper introduces EmbedNet, a method for integrating dense sensor networks into casting objects. With EmbedNet, sensor nodes are seamlessly incorporated into casting objects during fabrication. The process involves extruding base materials like silicone rubber or liquid plastic and a custom-designed sensor strip using a hand-held extruder into a mold tailored to specific applications. The base material mixes with the sensor strip in the mold, and upon curing, the result is an object with a defined shape housing a sensor network. EmbedNet employs a small Host node to access sensor data from all nodes on the strip. Each sensor node is self-contained and provides status indications through an onboard RGB LED. The Host connects with all sensor nodes using just three wires: power, ground, and data. This one-wire communication is facilitated through a custom-designed software serial port for each sensor node. The paper showcases various applications of EmbedNet, including wearables, home sensing, and entertainment devices.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Demonstrating NeuroFlux: A Non-Invasive Peripheral Magnetic Stimulation Device for Multimodal Haptic Feedback</title>
<link href="https://hdl.handle.net/1721.1/164253" rel="alternate"/>
<author>
<name>Huang, Bingjian</name>
</author>
<author>
<name>Chin, Sam</name>
</author>
<author>
<name>Wigdor, Daniel</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/164253</id>
<updated>2025-12-10T06:57:50Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Demonstrating NeuroFlux: A Non-Invasive Peripheral Magnetic Stimulation Device for Multimodal Haptic Feedback
Huang, Bingjian; Chin, Sam; Wigdor, Daniel; Paradiso, Joseph
We demonstrate NeuroFlux, a wearable armband that delivers multimodal haptic feedback through non-invasive peripheral magnetic stimulation. Unlike conventional haptic devices limited to either tactile or kinesthetic modalities, NeuroFlux stimulates peripheral nerves to independently evoke both muscle movements and localized skin sensations. Our system features a custom-designed control circuit and a multi-coil armband, enabling precise, real-time control of stimulation location and intensity. This hardware innovation significantly expands the design space of haptic feedback by bridging kinesthetic and tactile modalities through a single, compact device. In our demonstration, participants will experience a wide range of magnetically induced haptic sensations, including independent stimulation of muscular and cutaneous nerves in the forearm. The setup includes interactive tasks that showcase NeuroFlux’s ability to generate diverse haptic effects such as finger flexion, wrist movement, as well as immersive virtual reality object interactions. By offering hands-on exposure to peripheral magnetic stimulation, we aim to spark new research directions in multimodal haptic feedback and make neural stimulation more accessible to the HCI community.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supernotes: Driving Consensus in Crowd-Sourced Fact-Checking</title>
<link href="https://hdl.handle.net/1721.1/164252" rel="alternate"/>
<author>
<name>De, Soham</name>
</author>
<author>
<name>Bakker, Michiel</name>
</author>
<author>
<name>Baxter, Jay</name>
</author>
<author>
<name>Saveski, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/164252</id>
<updated>2025-12-10T06:57:49Z</updated>
<published>2025-04-22T00:00:00Z</published>
<summary type="text">Supernotes: Driving Consensus in Crowd-Sourced Fact-Checking
De, Soham; Bakker, Michiel; Baxter, Jay; Saveski, Martin
X's Community Notes, a crowd-sourced fact-checking system, allows users to annotate potentially misleading posts. Notes rated as helpful by a diverse set of users are prominently displayed below the original post. While demonstrably effective at reducing misinformation's impact when notes are displayed, there is an opportunity for notes to appear on many more posts: for 91% of posts where at least one note is proposed, no notes ultimately achieve sufficient support from diverse users to be shown on the platform. This motivates the development of Supernotes: AI-generated notes that synthesize information from several existing community notes and are written to foster consensus among a diverse set of users. Our framework uses an LLM to generate many diverse Supernote candidates from existing proposed notes. These candidates are then evaluated by a novel scoring model, trained on millions of historical Community Notes ratings, selecting candidates that are most likely to be rated helpful by a diverse set of users. To test our framework, we ran a human subjects experiment in which we asked participants to compare the Supernotes generated by our framework to the best existing community notes for 100 sample posts. We found that participants rated the Supernotes as significantly more helpful, and when asked to choose between the two, preferred the Supernotes 75.2% of the time. Participants also rated the Supernotes more favorably than the best existing notes on quality, clarity, coverage, context, and argumentativeness. Finally, in a follow-up experiment, we asked participants to compare the Supernotes against LLM-generated summaries and found that the participants rated the Supernotes significantly more helpful, demonstrating that both the LLM-based candidate generation and the consensus-driven scoring play crucial roles in creating notes that effectively build consensus among diverse users.
WWW ’25, Sydney, NSW, Australia
</summary>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>I Feel Your Pain: a Haptic Interface for Improving Pain Literacy</title>
<link href="https://hdl.handle.net/1721.1/164251" rel="alternate"/>
<author>
<name>Yin, Peggy</name>
</author>
<author>
<name>Chen, Sofia</name>
</author>
<author>
<name>Chang, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/164251</id>
<updated>2025-12-10T06:57:16Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">I Feel Your Pain: a Haptic Interface for Improving Pain Literacy
Yin, Peggy; Chen, Sofia; Chang, Ethan
There is no sensation more universal and misunderstood than pain. While pain presents itself in nearly every eukaryotic organism, it remains one of the most elusive disease states to express, let alone treat. Here, we introduce Pain by Numbers, a haptic, immersive storytelling interface that facilitates user recognition and communication of low-to-medium-intensity pain, in order to improve pain literacy for patients, physicians, and society-at-large.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank</title>
<link href="https://hdl.handle.net/1721.1/164250" rel="alternate"/>
<author>
<name>Loveland, Donald</name>
</author>
<author>
<name>Wu, Xinyi</name>
</author>
<author>
<name>Zhao, Tong</name>
</author>
<author>
<name>Koutra, Danai</name>
</author>
<author>
<name>Shah, Neil</name>
</author>
<author>
<name>Ju, Mingxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/164250</id>
<updated>2025-12-10T06:57:47Z</updated>
<published>2025-04-22T00:00:00Z</published>
<summary type="text">Understanding and Scaling Collaborative Filtering Optimization from the Perspective of Matrix Rank
Loveland, Donald; Wu, Xinyi; Zhao, Tong; Koutra, Danai; Shah, Neil; Ju, Mingxuan
Collaborative Filtering (CF) methods dominate real-world recommender systems given their ability to learn high-quality, sparse ID-embedding tables that effectively capture user preferences. These tables scale linearly with the number of users and items, and are trained to ensure high similarity between embeddings of interacted user-item pairs, while maintaining low similarity for non-interacted pairs. Despite their high performance, encouraging dispersion for non-interacted pairs necessitates expensive regularization (e.g., negative sampling), hurting runtime and scalability. Existing research tends to address these challenges by simplifying the learning process, either by reducing model complexity or sampling data, trading performance for runtime. In this work, we move beyond model-level modifications and study the properties of the embedding tables under different learning strategies. Through theoretical analysis, we find that the singular values of the embedding tables are intrinsically linked to different CF loss functions. These findings are empirically validated on real-world datasets, demonstrating the practical benefits of higher stable rank -- a continuous version of matrix rank which encodes the distribution of singular values. Based on these insights, we propose an efficient warm-start strategy that regularizes the stable rank of the user and item embeddings. We show that stable rank regularization during early training phases can promote higher-quality embeddings, resulting in training speed improvements of up to 65.9%. Additionally, stable rank regularization can act as a proxy for negative sampling, allowing for performance gains of up to 21.2% over loss functions with small negative sampling ratios. Overall, our analysis unifies current CF methods under a new perspective -- their optimization of stable rank -- motivating a flexible regularization method that is easy to implement, yet effective at enhancing CF systems.
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</summary>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>21W.749 / CMS.935 Documentary Photography and Photojournalism: Still Images of a World in Motion, Spring 2016</title>
<link href="https://hdl.handle.net/1721.1/144327.2" rel="alternate"/>
<author>
<name>Colen, B. D.</name>
</author>
<id>https://hdl.handle.net/1721.1/144327.2</id>
<updated>2025-12-09T19:31:47Z</updated>
<published>2016-06-01T00:00:00Z</published>
<summary type="text">21W.749 / CMS.935 Documentary Photography and Photojournalism: Still Images of a World in Motion, Spring 2016
Colen, B. D.
In this course, you will be exposed to the work of many great documentary photographers and photojournalists, as well as to writing about the documentary tradition. Further, throughout the term, you will hone your photographic skills and 'eye,' and you will work on a photo documentary project of your own, attempting to reduce a tiny area of the moving world to a set of still images that convey what the viewer needs to know about what you saw&#8212;without hearing the sounds, smelling the odors, experiencing what was happening outside the viewfinder, and without seeing the motion.
</summary>
<dc:date>2016-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unified and Generalizable Reinforcement Learning for Facility Location Problems on Graphs</title>
<link href="https://hdl.handle.net/1721.1/164249" rel="alternate"/>
<author>
<name>Guo, Wenxuan</name>
</author>
<author>
<name>Wang, Runzhong</name>
</author>
<author>
<name>Xu, Yanyan</name>
</author>
<author>
<name>Jin, Yaohui</name>
</author>
<id>https://hdl.handle.net/1721.1/164249</id>
<updated>2025-12-10T06:56:51Z</updated>
<published>2025-04-22T00:00:00Z</published>
<summary type="text">Unified and Generalizable Reinforcement Learning for Facility Location Problems on Graphs
Guo, Wenxuan; Wang, Runzhong; Xu, Yanyan; Jin, Yaohui
Facility location problems on graphs are ubiquitous in the real world and hold significant importance, yet their resolution is often impeded by NP-hardness. MIP solvers can find the optimal solutions but fail to handle large instances, while algorithm efficiency has a higher priority in cases of emergency. Recently, machine learning methods have been proposed to tackle such classical problems with fast inference, but they are limited to the myopic constructive pattern and only consider simple cases in Euclidean space. This paper introduces a unified and generalizable approach to tackle facility location problems on weighted graphs with deep reinforcement learning, demonstrating a keen awareness of complex graph structures. Striking a harmonious balance between solution quality and running time, our method stands out with superior efficiency and steady performance. Our model trained on small graphs is highly scalable and consistently generates high-quality solutions, achieving a speedup of more than 2000 times over Gurobi on instances with 1000 nodes. The experiments on Shanghai road networks further demonstrate its practical value in solving real-world problems. The source code is available at https://github.com/AryaGuo/PPO-swap.
WWW ’25, Sydney, NSW, Australia
</summary>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI</title>
<link href="https://hdl.handle.net/1721.1/164248" rel="alternate"/>
<author>
<name>Leong, Joanne</name>
</author>
<author>
<name>Ledo, David</name>
</author>
<author>
<name>Driscoll, Thomas</name>
</author>
<author>
<name>Grossman, Tovi</name>
</author>
<author>
<name>Fitzmaurice, George</name>
</author>
<author>
<name>Anderson, Fraser</name>
</author>
<id>https://hdl.handle.net/1721.1/164248</id>
<updated>2025-12-10T06:57:09Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI
Leong, Joanne; Ledo, David; Driscoll, Thomas; Grossman, Tovi; Fitzmaurice, George; Anderson, Fraser
Great characters are critical to the success of many forms of media, such as comics, games, and films. Designing visually compelling casts of characters requires significant skill and consideration, and there is a lack of specialized tools to support this endeavor. We investigate how AI-driven image-generation techniques can empower creatives to explore a variety of visual design possibilities for individual characters and groups of characters. Informed by interviews with character designers, Paratrouper is a multi-modal system that enables creating and experimenting with multiple permutations for character casts and visualizing them in various contexts as part of a holistic approach to design. We demonstrate how Paratrouper supports different aspects of the character design process, and share insights from its use by eight creators. Our work highlights the interplay between creative agency and serendipity, as well as the visual interrelationships among character aesthetics.
Joanne Leong, David Ledo, Thomas Driscoll, Tovi Grossman, George Fitzmaurice, and Fraser Anderson. 2025. Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 189, 1–20.
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>FiberCircuits: A Miniaturization Framework To Manufacture Fibers That Embed Integrated Circuits</title>
<link href="https://hdl.handle.net/1721.1/164247" rel="alternate"/>
<author>
<name>Honnet, Cedric</name>
</author>
<author>
<name>Babatain, Wedyan</name>
</author>
<author>
<name>Luo, Yiyue</name>
</author>
<author>
<name>Kilic Afsar, Ozgun</name>
</author>
<author>
<name>Bensahel, Chloe</name>
</author>
<author>
<name>Nicita, Sarah</name>
</author>
<author>
<name>Zhu, Yunyi</name>
</author>
<author>
<name>Danielescu, Andreea</name>
</author>
<author>
<name>Gershenfeld, Neil</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/164247</id>
<updated>2025-12-10T06:57:23Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">FiberCircuits: A Miniaturization Framework To Manufacture Fibers That Embed Integrated Circuits
Honnet, Cedric; Babatain, Wedyan; Luo, Yiyue; Kilic Afsar, Ozgun; Bensahel, Chloe; Nicita, Sarah; Zhu, Yunyi; Danielescu, Andreea; Gershenfeld, Neil; Paradiso, Joseph
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>FIP: Endowing Robust Motion Capture on Daily Garment by Fusing Flex and Inertial Sensors</title>
<link href="https://hdl.handle.net/1721.1/164246" rel="alternate"/>
<author>
<name>Zheng, Ruonan</name>
</author>
<author>
<name>Fang, Jiawei</name>
</author>
<author>
<name>Yao, Yuan</name>
</author>
<author>
<name>Gao, Xiaoxia</name>
</author>
<author>
<name>Zuo, Chengxu</name>
</author>
<author>
<name>Guo, Shihui</name>
</author>
<author>
<name>Luo, Yiyue</name>
</author>
<id>https://hdl.handle.net/1721.1/164246</id>
<updated>2025-12-10T06:57:04Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">FIP: Endowing Robust Motion Capture on Daily Garment by Fusing Flex and Inertial Sensors
Zheng, Ruonan; Fang, Jiawei; Yao, Yuan; Gao, Xiaoxia; Zuo, Chengxu; Guo, Shihui; Luo, Yiyue
What if our clothes could capture our body motion accurately? This paper introduces Flexible Inertial Poser (FIP), a novel motion-capturing system using daily garments with two elbow-attached flex sensors and four Inertial Measurement Units (IMUs). To address the inevitable sensor displacements in loose wearables which degrade joint tracking accuracy significantly, we identify the distinct characteristics of the flex and inertial sensor displacements and develop a Displacement Latent Diffusion Model and a Physics-informed Calibrator to compensate for sensor displacements based on such observations, resulting in a substantial improvement in motion capture accuracy. We also introduce a Pose Fusion Predictor to enhance multimodal sensor fusion. Extensive experiments demonstrate that our method achieves robust performance across varying body shapes and motions, significantly outperforming SOTA IMU approaches with a 19.5% improvement in angular error, a 26.4% improvement in elbow angular error, and a 30.1% improvement in positional error. FIP opens up opportunities for ubiquitous human-computer interactions and diverse interactive applications such as Metaverse, rehabilitation, and fitness analysis. Our project page can be seen at Flexible Inertial Poser.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>GPU-accelerated dynamic nonlinear optimization with ExaModels and MadNLP</title>
<link href="https://hdl.handle.net/1721.1/164245" rel="alternate"/>
<author>
<name>Pacaud, François</name>
</author>
<author>
<name>Shin, Sungho</name>
</author>
<id>https://hdl.handle.net/1721.1/164245</id>
<updated>2025-12-10T06:57:44Z</updated>
<published>2025-02-26T00:00:00Z</published>
<summary type="text">GPU-accelerated dynamic nonlinear optimization with ExaModels and MadNLP
Pacaud, François; Shin, Sungho
We investigate the potential of Graphics Processing Units (GPUs) to solve large-scale nonlinear programs with a dynamic structure. Using ExaModels, a GPU-accelerated automatic differentiation tool, and the interior-point solver MadNLP, we significantly reduce the time to solve dynamic nonlinear optimization problems. The sparse linear systems formulated in the interior-point method are solved on the GPU using a hybrid solver combining an iterative method with a sparse Cholesky factorization, which harnesses the newly released NVIDIA cuDSS solver. Our results on the classical distillation column instance show that, despite a significant pre-processing time, the hybrid solver reduces the time per iteration by a factor of 25 for the largest instance.
2024 IEEE 63rd Conference on Decision and Control (CDC), Milan, Italy, 2024
</summary>
<dc:date>2025-02-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable Primal Decomposition Schemes for Large-Scale Infrastructure Networks</title>
<link href="https://hdl.handle.net/1721.1/164244" rel="alternate"/>
<author>
<name>Engelmann, Alexander</name>
</author>
<author>
<name>Shin, Sungho</name>
</author>
<author>
<name>Pacaud, François</name>
</author>
<author>
<name>Zavala, Victor M</name>
</author>
<id>https://hdl.handle.net/1721.1/164244</id>
<updated>2026-03-08T03:32:20Z</updated>
<published>2025-06-01T00:00:00Z</published>
<summary type="text">Scalable Primal Decomposition Schemes for Large-Scale Infrastructure Networks
Engelmann, Alexander; Shin, Sungho; Pacaud, François; Zavala, Victor M
The operation of large-scale infrastructure networks requires scalable optimization schemes. To guarantee safe system operation, a high degree of feasibility in a small number of iterations is important. Decomposition schemes can help to achieve scalability. In terms of feasibility, however, classical approaches, such as the alternating direction method of multipliers (ADMM), often converge slowly. In this work, we present primal decomposition schemes for hierarchically structured strongly convex quadratic programs. These schemes offer high degrees of feasibility in a small number of iterations in combination with global convergence guarantees. We benchmark their performance against the centralized off-the-shelf interior-point solver Ipopt and ADMM on problems with up to 300 000 decision variables and constraints. We find that the proposed approaches solve problems as fast as Ipopt, but with reduced communication and without requiring a full model exchange. Moreover, the proposed schemes achieve a higher accuracy than ADMM.
</summary>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coplanarity of rooted spanning-tree vectors</title>
<link href="https://hdl.handle.net/1721.1/164243" rel="alternate"/>
<author>
<name>Polettini, Matteo</name>
</author>
<author>
<name>Harunari, Pedro E.</name>
</author>
<author>
<name>Cengio, Sara D.</name>
</author>
<author>
<name>Lecomte, Vivien</name>
</author>
<id>https://hdl.handle.net/1721.1/164243</id>
<updated>2025-12-09T03:11:01Z</updated>
<published>2025-12-05T00:00:00Z</published>
<summary type="text">Coplanarity of rooted spanning-tree vectors
Polettini, Matteo; Harunari, Pedro E.; Cengio, Sara D.; Lecomte, Vivien
Employing a recent technology of tree surgery, we prove a “deletion–constriction” formula for products of rooted spanning-trees on weighted directed graphs that generalizes deletion–contraction on undirected graphs. The formula implies that, letting τ_x^∅, τ_x^+, and τ_x^− be the rooted spanning-tree polynomials obtained, respectively, by removing both directed edges between two vertices, or by forcing the tree to pass through either edge, the vectors (τ_x^∅, τ_x^+, τ_x^−) are coplanar for all roots x. We deploy the result to give an alternative derivation of a recently found mutual linearity of stationary currents of Markov chains. We generalize deletion–constriction and current linearity for two pairs of edges and conjecture that similar results may hold for arbitrary subsets of edges.
</summary>
<dc:date>2025-12-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for the decay B0 → ϕϕ</title>
<link href="https://hdl.handle.net/1721.1/164242" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>Aleksiejunas, R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164242</id>
<updated>2026-03-08T03:32:10Z</updated>
<published>2025-12-03T00:00:00Z</published>
<summary type="text">Search for the decay B0 → ϕϕ
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Aleksiejunas, R.
A search for the decay B0 → ϕϕ is made using pp collision data collected with the LHCb detector at centre-of-mass energies of 7, 8 and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. No significant signal is observed, and an upper limit on the branching fraction of 1.3 (1.4) × 10−8 at 90 (95)% confidence level is set. This result supersedes the previous LHCb study and improves the upper limit by a factor of two.
</summary>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regulating Sommerfeld resonances for multi-state systems and higher partial waves</title>
<link href="https://hdl.handle.net/1721.1/164241" rel="alternate"/>
<author>
<name>Parikh, Aditya</name>
</author>
<author>
<name>Sato, Ryosuke</name>
</author>
<author>
<name>Slatyer, Tracy R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164241</id>
<updated>2026-03-08T03:32:09Z</updated>
<published>2025-12-03T00:00:00Z</published>
<summary type="text">Regulating Sommerfeld resonances for multi-state systems and higher partial waves
Parikh, Aditya; Sato, Ryosuke; Slatyer, Tracy R.
Long-range attractive interactions between dark matter particles can significantly enhance their annihilation, particularly at low velocities. This “Sommerfeld enhancement” is typically computed by evaluating the deformation of the two-particle wavefunction due to the long-range potential, while ignoring the physics associated with the annihilation, and then scaling the appropriate annihilation matrix elements by factors that depend on the wavefunction in the limit where the particles approach zero relative separation. It has long been recognized that this approach is a valid approximation only in the limit where the annihilation rate is small, and breaks down in the regime where the enhanced annihilation rate approaches the unitarity bound, in which case ignoring the impact of the annihilation physics on the two-particle wavefunction cannot be justified and leads to apparent violations of unitarity. In the case where the physics relevant to annihilation occurs at a parametrically shorter distance scale (higher energy scale) compared with the long-range potential, we provide a simple prescription for correcting the Sommerfeld enhancement for the effects of the short-range physics, valid for all partial waves and for systems where multiple states are coupled by the long-range potential.
</summary>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>A field guide to event-shape observables using optimal transport</title>
<link href="https://hdl.handle.net/1721.1/164240" rel="alternate"/>
<author>
<name>Cesarotti, Cari</name>
</author>
<author>
<name>LeBlanc, Matt</name>
</author>
<id>https://hdl.handle.net/1721.1/164240</id>
<updated>2026-03-08T03:32:07Z</updated>
<published>2025-12-02T00:00:00Z</published>
<summary type="text">A field guide to event-shape observables using optimal transport
Cesarotti, Cari; LeBlanc, Matt
We lay out the phenomenological behavior of event-shape observables evaluated by solving optimal transport problems between collider events and reference geometries — which we name manifold distances — to provide guidance regarding their use in future studies. This discussion considers several choices related to the metric used to quantify these distances. We explore the differences between the various options, for the first time using a combination of analytical studies and simulated minimum-bias and multi-jet events. Making judicious choices when defining the metric and reference geometry can improve sensitivity to interesting signal features and reduce sensitivity to non-perturbative effects in QCD. The goal of this article is to provide a ‘field guide’ that can inform how choices made when defining a manifold distance can be tailored for the analysis at hand.
</summary>
<dc:date>2025-12-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for the lepton-flavour-violating decays B0 → K*0τ±e∓</title>
<link href="https://hdl.handle.net/1721.1/164239" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164239</id>
<updated>2026-03-08T03:32:10Z</updated>
<published>2025-11-26T00:00:00Z</published>
<summary type="text">Search for the lepton-flavour-violating decays B0 → K*0τ±e∓
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A first search at LHCb for the lepton-flavour-violating decays B0 → K*0τ±e∓ is presented. The analysis is performed using a sample of proton-proton collision data, collected with the LHCb detector at a centre-of-mass energy of 13 TeV between 2016 and 2018, corresponding to an integrated luminosity of 5.4 fb−1. No significant signal is observed, and upper limits on the branching fractions are determined to be B(B0 → K*0τ−e+) &lt; 5.9 (7.1) × 10−6 and B(B0 → K*0τ+e−) &lt; 4.9 (5.9) × 10−6 at the 90% (95%) confidence level. These results correspond to the current most stringent upper limits for b → sτl transitions.
</summary>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guidelines for environmental life cycle assessment of cultivated meat</title>
<link href="https://hdl.handle.net/1721.1/164238" rel="alternate"/>
<author>
<name>Blackstone, Nicole T.</name>
</author>
<author>
<name>Pavlova, Anisiya</name>
</author>
<author>
<name>Trinidad, Kirsten R.</name>
</author>
<author>
<name>Nikkhah, Amin</name>
</author>
<author>
<name>Sinke, Pelle</name>
</author>
<author>
<name>Heller, Martin</name>
</author>
<author>
<name>Duncan-Duggal, Joe</name>
</author>
<author>
<name>Ridoutt, Brad</name>
</author>
<author>
<name>Smetana, Sergiy</name>
</author>
<author>
<name>Makov, Tamar</name>
</author>
<author>
<name>Shabtai, Shira</name>
</author>
<id>https://hdl.handle.net/1721.1/164238</id>
<updated>2026-03-08T03:32:18Z</updated>
<published>2025-12-03T00:00:00Z</published>
<summary type="text">Guidelines for environmental life cycle assessment of cultivated meat
Blackstone, Nicole T.; Pavlova, Anisiya; Trinidad, Kirsten R.; Nikkhah, Amin; Sinke, Pelle; Heller, Martin; Duncan-Duggal, Joe; Ridoutt, Brad; Smetana, Sergiy; Makov, Tamar; Shabtai, Shira
Purpose: Cultivated meat is produced by growing animal cells in vitro without using, or reducing the use of, animals for meat, poultry, or seafood production. Responsibly and consistently investigating the environmental impacts of cultivated meat is essential to provide reliable performance benchmarks and realistic comparisons with animal-based production systems. In this contribution, we provide technical, actionable guidelines for conducting life cycle assessments (LCAs) of cultivated meat and highlight further research needs for the field.
Methods: We assembled a global team of recognized and active scientists in cultivated meat LCA, livestock systems LCA, and ISO LCA standards to develop this set of guidelines using a workshop (in person and online) and online meetings, as well as asynchronous feedback, to reach consensus.
Results and discussion: These guidelines provide specifications throughout the four phases of LCA, from goal definition to the interpretation of LCA results. Data gaps, including the availability and quality of feed or food-grade culture media component inventories, are among the areas highlighted for further exploration.
Conclusion: We invite LCA practitioners to apply these guidelines when investigating cultivated meat systems to increase the consistency and reliability of environmental impact evaluations for these emerging products.
</summary>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sharp Bound for the Erdős–Straus Non-averaging Set Problem</title>
<link href="https://hdl.handle.net/1721.1/164237" rel="alternate"/>
<author>
<name>Pham, Huy T.</name>
</author>
<author>
<name>Zakharov, Dmitrii</name>
</author>
<id>https://hdl.handle.net/1721.1/164237</id>
<updated>2026-03-08T03:32:21Z</updated>
<published>2025-12-03T00:00:00Z</published>
<summary type="text">Sharp Bound for the Erdős–Straus Non-averaging Set Problem
Pham, Huy T.; Zakharov, Dmitrii
A set of integers A is non-averaging if there is no element a in A which can be written as an average of a subset of A not containing a. We show that the largest non-averaging subset of {1, …, n} has size n^(1/4 + o(1)), thus solving the Erdős–Straus problem. We also determine the largest size of a non-averaging set in a d-dimensional box for any fixed d. Our main tools include the structure theorem for the set of subset sums due to Conlon, Fox and the first author, together with a result about the structure of a point set in nearly convex position.
</summary>
<dc:date>2025-12-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seasonal variations of the atmospheric muon neutrino spectrum measured with IceCube</title>
<link href="https://hdl.handle.net/1721.1/164236" rel="alternate"/>
<author>
<name>IceCube Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/164236</id>
<updated>2026-03-08T03:32:19Z</updated>
<published>2025-12-01T00:00:00Z</published>
<summary type="text">Seasonal variations of the atmospheric muon neutrino spectrum measured with IceCube
IceCube Collaboration
This study presents an analysis of seasonal variations in the atmospheric muon neutrino flux, using 11.3 years of data from the IceCube Neutrino Observatory. By leveraging a novel spectral unfolding method, we explore the energy range from 125 GeV to 10 TeV for zenith angles from 90° to 110°, corresponding to the Antarctic atmosphere. Our findings reveal that the differential measurement of the amplitudes of the seasonal variation is consistent with an energy-dependent decrease reaching (−4.5 ± 1.2)% during Austral winter and an increase reaching (+3.9 ± 1.3)% during Austral summer, relative to the annual average, at 10 TeV. While the unfolded flux exceeds the model predictions by up to 30%, the differential measurement of the seasonal to annual average flux remains unaffected. The measured seasonal variations of the muon neutrino spectrum are consistent with theoretical predictions using the MCEq code and the NRLMSISE-00 atmospheric model.
</summary>
<dc:date>2025-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capacity lower bound for the Ising perceptron</title>
<link href="https://hdl.handle.net/1721.1/164235" rel="alternate"/>
<author>
<name>Ding, Jian</name>
</author>
<author>
<name>Sun, Nike</name>
</author>
<id>https://hdl.handle.net/1721.1/164235</id>
<updated>2026-03-08T03:32:21Z</updated>
<published>2025-02-23T00:00:00Z</published>
<summary type="text">Capacity lower bound for the Ising perceptron
Ding, Jian; Sun, Nike
We consider the Ising perceptron with Gaussian disorder, which is equivalent to the discrete cube {−1, +1}^N intersected by M random half-spaces. The perceptron’s capacity is the largest integer M_N for which the intersection is nonempty. It is conjectured by Krauth and Mézard (1989) that the (random) ratio M_N/N converges in probability to an explicit constant α⋆ ≐ 0.83. Kim and Roche (1998) proved the existence of a positive constant γ such that γ ⩽ M_N/N ⩽ 1 − γ with high probability; see also Talagrand (1999). In this paper we show that the Krauth–Mézard conjecture α⋆ is a lower bound with positive probability, under the condition that an explicit univariate function S⋆(λ) is maximized at λ = 0. Our proof is an application of the second moment method to a certain slice of perceptron configurations, as selected by the so-called TAP (Thouless, Anderson, and Palmer, 1977) or AMP (approximate message passing) iteration, whose scaling limit has been characterized by Bayati and Montanari (2011) and Bolthausen (2012).
</summary>
<dc:date>2025-02-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tackling the UK’s regional economic inequality: binding constraints and avenues for policy intervention</title>
<link href="https://hdl.handle.net/1721.1/164234" rel="alternate"/>
<author>
<name>Stansbury, Anna</name>
</author>
<author>
<name>Turner, Dan</name>
</author>
<author>
<name>Balls, Ed</name>
</author>
<id>https://hdl.handle.net/1721.1/164234</id>
<updated>2026-03-08T03:32:17Z</updated>
<published>2023-08-14T00:00:00Z</published>
<summary type="text">Tackling the UK’s regional economic inequality: binding constraints and avenues for policy intervention
Stansbury, Anna; Turner, Dan; Balls, Ed
We analyse binding constraints to productivity growth in the UK’s regions outside London and the greater South East. These analyses challenge a number of common arguments about the UK’s regional economic inequality problem. We find little evidence consistent with the hypotheses (i) that low shares of university graduates remain the primary constraint on growth for the UK’s regions; (ii) that there is a generalised issue with access to finance for firms outside the South East; or (iii) that low or falling regional migration rates are to blame for the persistence of the UK’s regional economic inequalities. Instead, we find evidence consistent with (i) a specific relative shortage of STEM degrees; (ii) binding transport infrastructure constraints within major non-London conurbations; (iii) a failure of public innovation policy to support clusters beyond the South East, in particular through the regional distribution of public support for Research and Development (R&amp;D); and (iv) missed opportunities for higher internal mobility due to London’s overheating housing market. We also find some suggestive evidence consistent with constraints on access to early-stage equity financing for high-growth-potential SMEs in certain regions.
</summary>
<dc:date>2023-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chile’s Inclusion Law: the arduous drive to regulate an unequal education system, 2006–19</title>
<link href="https://hdl.handle.net/1721.1/164233" rel="alternate"/>
<author>
<name>Cummings, Peter MM</name>
</author>
<author>
<name>Mizala, Alejandra</name>
</author>
<author>
<name>Schneider, Ben Ross</name>
</author>
<id>https://hdl.handle.net/1721.1/164233</id>
<updated>2026-03-08T03:32:12Z</updated>
<published>2025-04-16T00:00:00Z</published>
<summary type="text">Chile’s Inclusion Law: the arduous drive to regulate an unequal education system, 2006–19
Cummings, Peter MM; Mizala, Alejandra; Schneider, Ben Ross
Chile’s Inclusion Law, passed in 2015, significantly increased government regulation of one of the most privatised education systems in the world and provided major redistributive benefits. How did Chile’s government succeed in passing and implementing this legislation in the face of a powerful and cohesive opposition? Our study finds that student protesters served as the initial impetus, shaping the education debate and increasing the political salience and urgency of education reform. In line with power resource theory, other left movement organisations and voters used their power to support redistributive education reform, and Bachelet’s centre-left coalition followed through on its mandate by proposing the Inclusion Law. Also, a well-connected policy network helped articulate problems with the status quo and shaped the specifics of the education bill. To develop this argument, the paper draws on historical information on the student movement in Chile, quantitative data on education stakeholder appearances in the press, public opinion surveys, and detailed analysis of the 13-month legislative proceedings – to explain the law’s passage in congress. To underscore the significance of the Inclusion Law and to contextualise the Chilean case, the paper also compares Chile to other countries with nation-wide school choice systems.
</summary>
<dc:date>2025-04-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Network–Based Fault Diagnostic System for Nuclear Power Plant Assets</title>
<link href="https://hdl.handle.net/1721.1/164232" rel="alternate"/>
<author>
<name>Zhao, Xingang</name>
</author>
<author>
<name>Wang, Xinyan</name>
</author>
<author>
<name>Golay, Michael W</name>
</author>
<id>https://hdl.handle.net/1721.1/164232</id>
<updated>2026-03-08T03:32:11Z</updated>
<published>2023-03-04T00:00:00Z</published>
<summary type="text">Bayesian Network–Based Fault Diagnostic System for Nuclear Power Plant Assets
Zhao, Xingang; Wang, Xinyan; Golay, Michael W
Future advances in nuclear power technologies call for enhanced operator advice and autonomous control capabilities that can leverage simpler designs and increased safety features to reduce reliance on human labor. One of the first tasks in the development of such capabilities is the formulation of symptom-based conditional failure probabilities for the plant structures, systems, and components (SSCs) of interest. The primary goal is to aid plant personnel in (1) deducing the probabilistic performance status of the monitored SSCs and (2) detecting impending faults/failures. The task of estimating conditional failure probability is a bidirectional inference problem, and a logical approach is to use the Bayesian network (BN) method. As a knowledge-based explainable artificial intelligence tool and a probabilistic graphical model, BN offers reasoning capability under uncertainty, graphical representation emulating physical behavior of the target SSC, and interpretability of the model structure and results. This paper provides a systematic overview of the BN technique and the software tools for implementing BN models, along with the associated knowledge representation and reasoning paradigm. Both operational data and expert judgment can be readily incorporated into the knowledge base of a BN model. The challenges with data availability are highlighted, and the general approach to target SSC identification is presented. The focus is on failure-prone and risk-important balance of plant assets, especially for cases with strong operator involvement. Two example case studies on the failure of (1) a centrifugal pump and (2) an electric motor are conducted to demonstrate the usefulness and technical feasibility of the proposed BN-based fault diagnostic system using an expert system shell.
</summary>
<dc:date>2023-03-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Your body tells more than words – predicting perceived meeting productivity through body signals</title>
<link href="https://hdl.handle.net/1721.1/164231" rel="alternate"/>
<author>
<name>Zeyda, Maximilian</name>
</author>
<author>
<name>Stracke, Selina</name>
</author>
<author>
<name>Knipfer, Kristin</name>
</author>
<author>
<name>Gloor, Peter A</name>
</author>
<id>https://hdl.handle.net/1721.1/164231</id>
<updated>2026-03-08T03:32:19Z</updated>
<published>2024-03-03T00:00:00Z</published>
<summary type="text">Your body tells more than words – predicting perceived meeting productivity through body signals
Zeyda, Maximilian; Stracke, Selina; Knipfer, Kristin; Gloor, Peter A
The productivity of work meetings is mostly assessed through post-hoc questionnaires. These questionnaires are impractical as they require additional time after the meeting has ended. Thus, measuring meeting productivity in a non-intrusive manner is of practical and theoretical importance. Extending research on physiological arousal and the healthy physiological variability thesis to the context of work meetings, we take a novel approach and investigate whether physiological arousal and the variability in implicit body signals of meeting participants (heart rate, arm movements, and speech intensity) can be accurate predictors of perceived meeting productivity. In a preliminary field study, we used smartwatches and tracked the body signals of 16 team members in 26 team meetings. The perceived meeting productivity was assessed at the end of the meetings. Partly supporting our assumptions, multilevel analysis showed that the variance in arm acceleration was a significant predictor of perceived meeting productivity. Further, using a random forest classifier, we accurately predicted perceived meeting productivity in roughly 60% of the cases with body signals. This study adds to previous work on meeting effectiveness by tapping into the potential of wearables to provide valid information about perceived meeting productivity. Building on our findings, we discuss lessons learned for future research.
</summary>
<dc:date>2024-03-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Phonon Sampling Method for Inelastic Thermal Neutron Scattering Events</title>
<link href="https://hdl.handle.net/1721.1/164230" rel="alternate"/>
<author>
<name>Trainer, Amelia</name>
</author>
<author>
<name>Forget, Benoit</name>
</author>
<id>https://hdl.handle.net/1721.1/164230</id>
<updated>2026-03-08T03:32:17Z</updated>
<published>2023-08-03T00:00:00Z</published>
<summary type="text">Phonon Sampling Method for Inelastic Thermal Neutron Scattering Events
Trainer, Amelia; Forget, Benoit
Accurate representation of thermal neutron scattering in Monte Carlo transport simulations requires that the molecular vibrations of the target material be accounted for. Historically, this has been achieved by precomputing large multidimensional tables that are a function of temperature and the cosine of the scattering angle, as well as incoming and outgoing neutron energy. Most commonly used sampling techniques for thermal neutron scattering rely on large multidimensional tables, where higher resolution results in an increase in required memory and attempts to reduce memory can result in grid coarseness errors. An alternative sampling method is introduced here that is a significant departure from precomputed tables and instead relies on a more physical model of the scattering behavior. The phonon sampling method classifies neutron scattering events by the number of phonons excited/de-excited during the scattering collision. In doing so, energy exchange may be obtained via rejection sampling, and an analytical representation of the momentum exchange is obtained. This sampling method has been tested on graphite, yttrium hydride, and uranium nitride, and preliminary implementation of the phonon sampling method shows accurate results for angular and energy distributions, though resulting in up to a 40% slowdown in overall calculation time. This notable slowdown is countered, however, by a large reduction in storage (over 99% reduction compared to standard multidimensional tables).
</summary>
<dc:date>2023-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Engineering Turbulence Models in Buoyant Diabatic Turbulent Flow</title>
<link href="https://hdl.handle.net/1721.1/164229" rel="alternate"/>
<author>
<name>Wiser, Ralph</name>
</author>
<author>
<name>Baglietto, Emilio</name>
</author>
<id>https://hdl.handle.net/1721.1/164229</id>
<updated>2026-03-08T03:32:14Z</updated>
<published>2024-07-02T00:00:00Z</published>
<summary type="text">Assessment of Engineering Turbulence Models in Buoyant Diabatic Turbulent Flow
Wiser, Ralph; Baglietto, Emilio
Turbulent heat transfer in buoyancy-dominated flows is a challenging problem for computational fluid dynamics (CFD). Many authors attribute model error in these conditions to the Reynolds analogy. We leverage a brand-new direct numerical simulation database to evaluate the performance of several popular turbulence models in buoyant diabatic channel flow. We find that heat transfer results are relatively accurate, with a Nusselt number error less than 20%. However, the turbulent flow solution is very inaccurate, with wall shear overpredicted by up to 100%. This indicates significant turbulence model error in such flows. We determined that the dominant sources of model error are missing physics in the algebraic Reynolds stress framework and the simple buoyancy production term used in industrial CFD. We suggest that future modeling efforts focus on these two sources of model error. We demonstrate that the Reynolds analogy is not the dominant source of model error.
</summary>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>The insurgent smart city: How a social movement created an alternative imaginary of the smart city</title>
<link href="https://hdl.handle.net/1721.1/164228" rel="alternate"/>
<author>
<name>Stokols, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/164228</id>
<updated>2026-03-08T03:32:16Z</updated>
<published>2023-07-17T00:00:00Z</published>
<summary type="text">The insurgent smart city: How a social movement created an alternative imaginary of the smart city
Stokols, Andrew
Urban scholars have critiqued smart cities for their association with neoliberal governance, narrow focus on quantifiable aspects of urban systems, and failure to incorporate citizens’ needs or aspirations. The “smart city” remains a contested concept and as such is subject to reappropriation. Here, I analyze the case of an urban social movement, the 2019–2020 Hong Kong Anti-ELAB protests, as an alternative, “insurgent smart city.” Following from an earlier network analysis of Telegram channels used during the protests, I show how the communications system underpinning much of the protest action simultaneously enabled coordination while also remaining open to grassroots decision-making and innovations of new protest formats as the movement responded to countertactics of the state and police. Telegram channels linked neighborhood-based organizing to the citywide movement. These actions not only emulated but also inverted top-down visions of a total urban information system underpinning many smart city projects. Framing the Hong Kong Anti-ELAB protests as an insurgent smart city offers an alternative sociotechnical imaginary of what smart cities could be, and raises possibilities for an “insurgent digital citizenship” as an alternative to both state and platform-mediated forms of digital citizenship.
</summary>
<dc:date>2023-07-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>The evolution of global cybersecurity norms in the digital age: A longitudinal study of the cybersecurity norm development process</title>
<link href="https://hdl.handle.net/1721.1/164227" rel="alternate"/>
<author>
<name>Madnick, Benjamin</name>
</author>
<author>
<name>Huang, Keman</name>
</author>
<author>
<name>Madnick, Stuart</name>
</author>
<id>https://hdl.handle.net/1721.1/164227</id>
<updated>2026-03-08T03:32:13Z</updated>
<published>2024-05-03T00:00:00Z</published>
<summary type="text">The evolution of global cybersecurity norms in the digital age: A longitudinal study of the cybersecurity norm development process
Madnick, Benjamin; Huang, Keman; Madnick, Stuart
Developing cybersecurity norms and global normative cybersecurity behaviors plays an increasingly critical role in global cybersecurity governance. This paper takes a longitudinal approach to analyze cybersecurity norms development activities during the period 1997–2020. A total of 206 individual cases were collected, and 233 individual cybersecurity norms were identified and compiled into 25 subject categories. Categorizing the norm subjects alongside the frequency of cases and norms identified each year allowed for a longitudinal view of cyber norm activities and the evolution in developments over these years. This examination enables us to categorize cybersecurity norms, including their dynamic focus and evolution patterns. By studying those viewed as “successful,” we gain guidance regarding the construction of global cybersecurity governance in the digital age.
</summary>
<dc:date>2024-05-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Anthropology Has One Job (On Genocide in the United States)</title>
<link href="https://hdl.handle.net/1721.1/164226" rel="alternate"/>
<author>
<name>Lowry, David Shane</name>
</author>
<id>https://hdl.handle.net/1721.1/164226</id>
<updated>2026-03-08T03:32:15Z</updated>
<published>2023-01-02T00:00:00Z</published>
<summary type="text">Anthropology Has One Job (On Genocide in the United States)
Lowry, David Shane
In an introductory anthropology course, the instructor might provide a definition of anthropology similar to this: “Anthropology is the most scientific of the humanities, and it is the most humanistic of the sciences.” If something like that is said, it stems from a statement in Anthropology, a 1964 book by famed anthropologist Eric Wolf in which he attempted to define the discipline. Wolf’s approach came at a time when many anthropologists were attempting to intervene in the historical telling of the world. In particular, Wolf argued that non-Europeans were also participants in global, colonial processes. The value of Wolf’s voice—indeed, the value of most anthropology at the time—was that it offered a wide-scale view of human events for which the anthropologist was merely an observer, hence not responsible.
</summary>
<dc:date>2023-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Bidding under RoS Constraints without Knowing the Value</title>
<link href="https://hdl.handle.net/1721.1/164225" rel="alternate"/>
<author>
<name>Vijayan, Sushant</name>
</author>
<author>
<name>Feng, Zhe</name>
</author>
<author>
<name>Padmanabhan, Swati</name>
</author>
<author>
<name>Shanmugam, Karthikeyan</name>
</author>
<author>
<name>Suggala, Arun</name>
</author>
<author>
<name>Wang, Di</name>
</author>
<id>https://hdl.handle.net/1721.1/164225</id>
<updated>2025-12-06T03:09:02Z</updated>
<published>2025-04-22T00:00:00Z</published>
<summary type="text">Online Bidding under RoS Constraints without Knowing the Value
Vijayan, Sushant; Feng, Zhe; Padmanabhan, Swati; Shanmugam, Karthikeyan; Suggala, Arun; Wang, Di
We consider the problem of bidding in online advertising, where an advertiser aims to maximize value while adhering to budget and Return-on-Spend (RoS) constraints. Unlike prior work that assumes knowledge of the value generated by winning each impression (e.g., conversions), we address the more realistic setting where the advertiser must simultaneously learn the optimal bidding strategy and the value of each impression opportunity. This introduces a challenging exploration-exploitation dilemma: the advertiser must balance exploring different bids to estimate impression values with exploiting current knowledge to bid effectively. To address this, we propose a novel Upper Confidence Bound (UCB)-style algorithm that carefully manages this trade-off. Via a rigorous theoretical analysis, we prove that our algorithm achieves Õ(√(T log(|B|T))) regret and constraint violation, where T is the number of bidding rounds and B is the domain of possible bids. This establishes the first optimal regret and constraint violation bounds for bidding in the online setting with unknown impression values. Moreover, our algorithm is computationally efficient and simple to implement. We validate our theoretical findings through experiments on synthetic data, demonstrating that our algorithm exhibits strong empirical performance compared to existing approaches.
WWW ’25, April 28–May 2, 2025, Sydney, NSW, Australia.
</summary>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tactile Vega-Lite: Rapidly Prototyping Tactile Charts with Smart Defaults</title>
<link href="https://hdl.handle.net/1721.1/164224" rel="alternate"/>
<author>
<name>Chen, Mengzhu (Katie)</name>
</author>
<author>
<name>Pedraza Pineros, Isabella</name>
</author>
<author>
<name>Satyanarayan, Arvind</name>
</author>
<author>
<name>Zong, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164224</id>
<updated>2025-12-06T03:09:11Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Tactile Vega-Lite: Rapidly Prototyping Tactile Charts with Smart Defaults
Chen, Mengzhu (Katie); Pedraza Pineros, Isabella; Satyanarayan, Arvind; Zong, Jonathan
Tactile charts are essential for conveying data to blind and low vision (BLV) readers but are difficult for designers to construct. Non-expert designers face barriers to entry due to complex guidelines, while experts struggle with fragmented and time-consuming workflows that involve extensive customization. Inspired by formative interviews with expert tactile graphics designers, we created Tactile Vega-Lite (TVL): an extension of Vega-Lite that offers tactile-specific abstractions and synthesizes existing guidelines into a series of smart defaults. Predefined stylistic choices enable non-experts to produce guideline-compliant tactile charts quickly. Expert users can override defaults to tailor customizations for their intended audience. In a user study with 12 tactile graphics creators, we show that Tactile Vega-Lite enhances flexibility and consistency by automating tasks like adjusting spacing and translating braille while accelerating iterations through pre-defined textures and line styles. Through expert critique, we also learn more about tactile chart design best practices and design decisions.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Everyday Perceptual and Physiological Augmentation</title>
<link href="https://hdl.handle.net/1721.1/164223" rel="alternate"/>
<author>
<name>Tao, Yujie</name>
</author>
<author>
<name>Gemicioglu, Tan</name>
</author>
<author>
<name>Chin, Sam</name>
</author>
<author>
<name>Huang, Bingjian</name>
</author>
<author>
<name>Brooks, Jas</name>
</author>
<author>
<name>Follmer, Sean</name>
</author>
<author>
<name>Lopes, Pedro</name>
</author>
<author>
<name>Nanayakkara, Suranga</name>
</author>
<id>https://hdl.handle.net/1721.1/164223</id>
<updated>2025-12-06T03:09:17Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Toward Everyday Perceptual and Physiological Augmentation
Tao, Yujie; Gemicioglu, Tan; Chin, Sam; Huang, Bingjian; Brooks, Jas; Follmer, Sean; Lopes, Pedro; Nanayakkara, Suranga
Human senses are fundamental to how we interpret and interact with the world. Computing devices are increasingly coupled with the human sensory system through interfaces such as smart glasses, earbuds, and wristbands. This opens up opportunities to dynamically mediate, modify, and augment perceptual experiences and physiological processes through multisensory stimulation. These devices go beyond assistive technologies designed for individuals with sensory impairments (e.g., hearing aids) and are now available for everyday use. Applications range from enriching immersive entertainment experiences to supporting well-being through multisensory interventions.
The UIST community has been a key venue for introducing many proof-of-concept prototypes in multisensory stimulation. However, gaps remain in systematically understanding how such technologies can be designed, studied, and contextualized in long-term, everyday use. This workshop will examine barriers to transitioning prototypes from proof-of-concepts into systems for real-world use. The session will feature keynote talks, demo sessions, and an interactive device-swap activity where participants exchange and wear different devices during the afternoon session, and conclude with an open discussion to develop implementation frameworks.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>GyFoam: Fabricating Lattice Foam with Customizable Stiffness through Uniform Expansion</title>
<link href="https://hdl.handle.net/1721.1/164222" rel="alternate"/>
<author>
<name>Wang, Guanyun</name>
</author>
<author>
<name>Chen, Haotian</name>
</author>
<author>
<name>Wang, Yufeng</name>
</author>
<author>
<name>Li, Songyun</name>
</author>
<author>
<name>Tao, Yue</name>
</author>
<author>
<name>Qi, Fanke</name>
</author>
<author>
<name>Cao, Lizhuo</name>
</author>
<author>
<name>Jin, Xiao</name>
</author>
<author>
<name>Tao, Ye</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<id>https://hdl.handle.net/1721.1/164222</id>
<updated>2025-12-06T03:09:34Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">GyFoam: Fabricating Lattice Foam with Customizable Stiffness through Uniform Expansion
Wang, Guanyun; Chen, Haotian; Wang, Yufeng; Li, Songyun; Tao, Yue; Qi, Fanke; Cao, Lizhuo; Jin, Xiao; Tao, Ye; Li, Jiaji
We present GyFoam, a fabrication method integrating foam material with lattice structure to enable controlled and uniform expansion, which supports high-quality forming in appearance and customizable stiffness in function, using standard 3D printers, filaments, commercially available Thermo-Expandable Microspheres and silicone. To achieve customizable stiffness, we propose two methods: modifying material concentration and adjusting lattice structural parameters. Additionally, we propose three shape control strategies for creating complex shapes: bending, wavy edges, and internal doming. Furthermore, a user-friendly design tool is established for users to construct lattice structures, preview basic deformation, and generate mold models for printing. Finally, through a series of applications, we validate GyFoam’s practical usage of fabricating large objects, wearable products, enabling flexible interactions and creating aesthetic designs.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>EmbroChet: A Hybrid Textile Fabrication Approach for 3D Personalized Handicraft via Heat-Shrinking</title>
<link href="https://hdl.handle.net/1721.1/164221" rel="alternate"/>
<author>
<name>Wang, Guanyun</name>
</author>
<author>
<name>Wang, Zhiqi</name>
</author>
<author>
<name>Li, Fanyu</name>
</author>
<author>
<name>Liu, Qinyang</name>
</author>
<author>
<name>Dong, Tianshu</name>
</author>
<author>
<name>Hong, Zixiang</name>
</author>
<author>
<name>Li, Xinyi</name>
</author>
<author>
<name>Zhu, Kuangqi</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Zhao, Xiaoliang</name>
</author>
<author>
<name>Tao, Ye</name>
</author>
<id>https://hdl.handle.net/1721.1/164221</id>
<updated>2025-12-06T03:09:28Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">EmbroChet: A Hybrid Textile Fabrication Approach for 3D Personalized Handicraft via Heat-Shrinking
Wang, Guanyun; Wang, Zhiqi; Li, Fanyu; Liu, Qinyang; Dong, Tianshu; Hong, Zixiang; Li, Xinyi; Zhu, Kuangqi; Li, Jiaji; Zhao, Xiaoliang; Tao, Ye
We propose EmbroChet, a hybrid approach that bridges digital fabrication and textile craftsmanship, empowering individuals unfamiliar with intricate craft techniques to design and fabricate 3D textile handicrafts intuitively. EmbroChet allows the creation of handicrafts by embroidering chain stitches (a fundamental embroidery technique) onto a heat-shrinkable film, which subsequently self-transforms from a 2D composite to a 3D textile through a freely controllable heat-triggering process. Through a single stitch type, the method enables custom designs and intricate geometries to be achieved without the complex manual skills that typically require expertise in multiple stitch techniques. To better demonstrate EmbroChet, we propose a design tool that includes shape-changing libraries to assist users in customizing 3D shapes. The evaluation demonstrates its unique strength in balancing geometric complexity and textile softness. Furthermore, our workshop verifies the feasibility of EmbroChet, exploring its potential for personalized textile fabrication, and synergizing the precision of digital fabrication with the tactile artistry of textile craftsmanship.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-antenna: Mechanically Frequency Reconfigurable Metamaterial Antennas</title>
<link href="https://hdl.handle.net/1721.1/164220" rel="alternate"/>
<author>
<name>AlAlawi, Marwa</name>
</author>
<author>
<name>Zheng, Regina</name>
</author>
<author>
<name>Ahn, Sooyeon</name>
</author>
<author>
<name>Yan, Katherine</name>
</author>
<author>
<name>Sethapakdi, Ticha</name>
</author>
<author>
<name>Zhu, Junyi</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/164220</id>
<updated>2025-12-06T03:09:24Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Meta-antenna: Mechanically Frequency Reconfigurable Metamaterial Antennas
AlAlawi, Marwa; Zheng, Regina; Ahn, Sooyeon; Yan, Katherine; Sethapakdi, Ticha; Zhu, Junyi; Mueller, Stefanie
We introduce Meta-antenna, a design and fabrication pipeline for creating frequency reconfigurable antennas while making use of a single type of mechanical metamaterial structure. Unlike traditional static antenna systems with fixed radiation patterns and frequency responses per geometry, Meta-antenna leverages mechanical reconfiguration to alter the radiation and geometry characteristics of the antenna, making it more versatile for sensing and communication. Meta-antenna provides a design space of resonance frequency from 500 MHz to 6.3 GHz (≥ 10 dB) upon the structure’s compression, bending, or rotation. Additionally, we provide an Ansys-based editor that allows users to generate metamaterial antenna geometries and simulate their resonance frequency. We also provide a code template for Meta-antenna based sensing interactions. Our technical evaluation demonstrates that our fabricated Meta-antenna structures remain functional even after 10,000 compression cycles. Finally, we contribute three example applications showcasing Meta-antenna’s potential in adaptive personal devices, smart home systems, and tangible user interfaces.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Tailor-Making for Personalized, Shape-changing, and Sustainable Fabrics</title>
<link href="https://hdl.handle.net/1721.1/164219" rel="alternate"/>
<author>
<name>Narumi, Koya</name>
</author>
<author>
<name>Hirose, Yuichi</name>
</author>
<author>
<name>Lee, Hsuanling</name>
</author>
<author>
<name>Larsson, Maria</name>
</author>
<author>
<name>He, Liang</name>
</author>
<author>
<name>Leake, Mackenzie</name>
</author>
<author>
<name>Forman, Jack</name>
</author>
<author>
<name>Farahi, Behnaz</name>
</author>
<author>
<name>Yao, Lining</name>
</author>
<author>
<name>Igarashi, Takeo</name>
</author>
<id>https://hdl.handle.net/1721.1/164219</id>
<updated>2025-12-06T03:09:20Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Computational Tailor-Making for Personalized, Shape-changing, and Sustainable Fabrics
Narumi, Koya; Hirose, Yuichi; Lee, Hsuanling; Larsson, Maria; He, Liang; Leake, Mackenzie; Forman, Jack; Farahi, Behnaz; Yao, Lining; Igarashi, Takeo
Fabrics are fundamental elements of our daily lives, which are woven, knitted, or embroidered into diverse products like clothing and furniture. Recent advances in materials science and digital fabrication have enabled us to fabricate personalized and responsive fabric products computationally and interactively, which we call “computational tailor-making.” In this workshop, we will build an interdisciplinary network of researchers on computational tailor-making and discuss (1) computational fabric design, (2) novel fabric fabrication tools, (3) shape-changing fabrics, and (4) sustainable fabric production, from the viewpoint of HCI. The workshop session will help attendees build a shared vision, recognize potential challenges, find unexpected solutions and ideas, collaborate beyond disciplines, and explore the possible connection to industries.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ori-TENG: 3D Printed Origami Tessellations as Triboelectric Nanogenerators for Self-powered Sensing and Energy Harvesting</title>
<link href="https://hdl.handle.net/1721.1/164218" rel="alternate"/>
<author>
<name>AlAlawi, Marwa</name>
</author>
<author>
<name>Wang, Kexin</name>
</author>
<author>
<name>Zheng, Regina</name>
</author>
<author>
<name>Chan, Adelene</name>
</author>
<author>
<name>Feick, Martin</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/164218</id>
<updated>2025-12-06T03:09:19Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Ori-TENG: 3D Printed Origami Tessellations as Triboelectric Nanogenerators for Self-powered Sensing and Energy Harvesting
AlAlawi, Marwa; Wang, Kexin; Zheng, Regina; Chan, Adelene; Feick, Martin; Mueller, Stefanie
We introduce Ori-TENG, a design and fabrication framework for 3D printed origami tessellations that function as triboelectric sensors and energy harvesters. Ori-TENG structures are 3D printed flat in a single step, then folded, with internal electrical routing optimized for both folding mechanics and triboelectric performance.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>GreenMix: Energy-Efficient Serverless Computing via Randomized Sketching on Asymmetric Multi-Cores</title>
<link href="https://hdl.handle.net/1721.1/164217" rel="alternate"/>
<author>
<name>Basu Roy, Rohan</name>
</author>
<author>
<name>Patel, Tirthak</name>
</author>
<author>
<name>Li, Baolin</name>
</author>
<author>
<name>Samsi, Siddharth</name>
</author>
<author>
<name>Gadepally, Vijay</name>
</author>
<author>
<name>Tiwari, Devesh</name>
</author>
<id>https://hdl.handle.net/1721.1/164217</id>
<updated>2025-12-06T03:09:49Z</updated>
<published>2025-11-15T00:00:00Z</published>
<summary type="text">GreenMix: Energy-Efficient Serverless Computing via Randomized Sketching on Asymmetric Multi-Cores
Basu Roy, Rohan; Patel, Tirthak; Li, Baolin; Samsi, Siddharth; Gadepally, Vijay; Tiwari, Devesh
GreenMix is motivated by the renewed interest in asymmetric multi-core processors and the emergence of the serverless computing model. Asymmetric multi-cores offer better energy and performance trade-offs by placing different core types on the same die. However, existing serverless scheduling techniques do not leverage these benefits. GreenMix is the first serverless work to reduce energy and serverless keep-alive costs while meeting QoS targets by leveraging asymmetric multi-cores. GreenMix employs randomized sketching, tailored for serverless execution and keep-alive, to perform within 10% of the optimal solution in terms of energy efficiency and keep-alive cost reduction. GreenMix’s effectiveness is demonstrated through evaluations on clusters of ARM big.LITTLE and Intel Alder Lake asymmetric processors. It outperforms competing state-of-the-art schedulers, offering a novel approach for energy-efficient serverless computing.
SC ’25, St Louis, MO, USA
</summary>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable and Low Power Localization for Underwater Robots</title>
<link href="https://hdl.handle.net/1721.1/164216" rel="alternate"/>
<author>
<name>Afzal, Sayed Saad</name>
</author>
<author>
<name>Rademacher, Jack</name>
</author>
<author>
<name>Chen, Weitung</name>
</author>
<author>
<name>Wang, Purui</name>
</author>
<author>
<name>Adib, Fadel</name>
</author>
<id>https://hdl.handle.net/1721.1/164216</id>
<updated>2025-12-06T03:09:46Z</updated>
<published>2025-11-21T00:00:00Z</published>
<summary type="text">Scalable and Low Power Localization for Underwater Robots
Afzal, Sayed Saad; Rademacher, Jack; Chen, Weitung; Wang, Purui; Adib, Fadel
Localization is a critical task for underwater robots, yet today’s underwater localization systems are limited by their accuracy, scalability, and/or energy consumption (i.e., longevity).
We present the design, implementation, and evaluation of EchoBLUE, an accurate, scalable, and low-power localization system for underwater robots. In EchoBLUE, an underwater robot transmits SONAR-style (FMCW) signals and leverages ultra-low-power underwater backscatter nodes as location anchors. EchoBLUE’s design introduces two key innovations. The first is a novel Doppler compensation mechanism that enables it to accurately self-localize under mobility: the technique employs a cross-chirp mechanism that exploits the quad-band nature of the resulting backscatter response to overcome the range-Doppler ambiguity. Second, it introduces the first semi-active retrodirective underwater backscatter design and uses it for location anchors; this design achieves wide bandwidth to backscatter the full FMCW signal, enabling fine-grained localization.
We implemented a proof-of-concept prototype of EchoBLUE by building a base station mounted on a BlueROV2 underwater robot and custom-designed low-power retrodirective location anchors deployed in a pool. Our evaluation across 700 real-world trials demonstrates that EchoBLUE achieves a median 3D localization accuracy of 28 cm and a 90th percentile of 48 cm. Moreover, these anchors consume only 740 μW for semi-active backscatter, paving the way for truly low-power and scalable underwater localization.
</summary>
<dc:date>2025-11-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>A polyurethane-urea elastomer at low to extreme strain rates</title>
<link href="https://hdl.handle.net/1721.1/164215" rel="alternate"/>
<author>
<name>Lee, Jaehee</name>
</author>
<author>
<name>Veysset, David</name>
</author>
<author>
<name>Hsieh, Alex J</name>
</author>
<author>
<name>Rutledge, Gregory C</name>
</author>
<author>
<name>Cho, Hansohl</name>
</author>
<id>https://hdl.handle.net/1721.1/164215</id>
<updated>2025-12-06T03:09:59Z</updated>
<published>2023-09-15T00:00:00Z</published>
<summary type="text">A polyurethane-urea elastomer at low to extreme strain rates
Lee, Jaehee; Veysset, David; Hsieh, Alex J; Rutledge, Gregory C; Cho, Hansohl
A finite strain nonlinear constitutive model is presented to study the extreme mechanical behavior of a polyurethane-urea (PUU) well suited for many engineering applications. The micromechanically- and thermodynamically based constitutive model captures salient features in resilience and dissipation in the material from low to extreme strain rates. The extreme deformation features are further elucidated by laser-induced micro-particle impact tests for the material, where an ultrafast strain rate (&gt; 10⁶ s⁻¹) occurs. Numerical simulations for the strongly inhomogeneous deformation events are in good agreement with the experimental data, supporting the predictive capabilities of the constitutive model for the extreme deformation features of the PUU material over at least 9 orders of magnitude in strain rates (10⁻³ to 10⁶ s⁻¹).
</summary>
<dc:date>2023-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular simulation of flow-enhanced nucleation of polyethylene crystallites in biaxial flows</title>
<link href="https://hdl.handle.net/1721.1/164214" rel="alternate"/>
<author>
<name>Gangal, Chinmay S</name>
</author>
<author>
<name>Rutledge, Gregory C</name>
</author>
<id>https://hdl.handle.net/1721.1/164214</id>
<updated>2025-12-06T03:09:53Z</updated>
<published>2024-04-17T00:00:00Z</published>
<summary type="text">Molecular simulation of flow-enhanced nucleation of polyethylene crystallites in biaxial flows
Gangal, Chinmay S; Rutledge, Gregory C
Flow-enhanced nucleation (FEN) of n-pentacontahectane (C150) under biaxial extensional flows of varying strain rate ratios is studied using nonequilibrium molecular dynamics simulation. The nucleation rates thus calculated are used to test previously published FEN models based on invariants of the conformation tensor of Kuhn segments and the extra stress tensor. Models based on the conformation tensor provide a more accurate description of FEN observed in biaxial flow simulations than those based on the extra stress tensor. In addition, the formation of nematic domains previously reported to be stabilized by shear or extensional flow is absent in equibiaxial flows. However, such domains do form in non-equibiaxial flows, and nucleation occurs in these domains preferentially. The shape and orientation of nuclei formed under biaxial flows of various strengths and strain rate ratios are also reported.
</summary>
<dc:date>2024-04-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cholesterol Nanofiber Patches with Sustainable Oil Delivery Eliminate Inflammation in Atopic Skin</title>
<link href="https://hdl.handle.net/1721.1/164213" rel="alternate"/>
<author>
<name>Sroczyk, Ewa A</name>
</author>
<author>
<name>Tarasiuk, Aleksandra</name>
</author>
<author>
<name>Talar, Marcin</name>
</author>
<author>
<name>Rutledge, Gregory C</name>
</author>
<author>
<name>Makaro, Adam</name>
</author>
<author>
<name>Misztal, Zofia</name>
</author>
<author>
<name>Wołyniak, Maria</name>
</author>
<author>
<name>Berniak, Krzysztof</name>
</author>
<author>
<name>Sałaga, Maciej</name>
</author>
<author>
<name>Fichna, Jakub</name>
</author>
<author>
<name>Stachewicz, Urszula</name>
</author>
<id>https://hdl.handle.net/1721.1/164213</id>
<updated>2025-12-06T03:10:01Z</updated>
<published>2024-07-12T00:00:00Z</published>
<summary type="text">Cholesterol Nanofiber Patches with Sustainable Oil Delivery Eliminate Inflammation in Atopic Skin
Sroczyk, Ewa A; Tarasiuk, Aleksandra; Talar, Marcin; Rutledge, Gregory C; Makaro, Adam; Misztal, Zofia; Wołyniak, Maria; Berniak, Krzysztof; Sałaga, Maciej; Fichna, Jakub; Stachewicz, Urszula
Atopic skin is dry and itchy and lacks integrity. Impaired skin barrier results from altered lipid composition of the skin. A crucial skin lipid, cholesterol, provides flexibility and homeostasis of the cell membranes' lipid bilayer. Cholesterol-based creams and natural oils, especially blackcurrant seed oil, are beneficial for skin care as they hydrate the skin and improve its integrity. The major atopic symptom, skin dryness, can be overcome by the application of porous patches enhanced with cholesterol and natural oil. The base of the patches is constructed of polyimide (PI) nanofibers with cholesterol coatings and externally added blackcurrant seed oil. The presence of cholesterol in PI mats hinders the passage of oil through the patches to the skin, resulting in sustained and prolonged skin hydration. The theoretical and numerical investigations of oil dynamics in porous mats confirmed the experimental results, showing a prolonged skin hydration effect up to 6 h. Additionally, as demonstrated by in vivo tests on atopic mice, cholesterol patches lower serum immunoglobulin E levels and expression of proinflammatory cytokines in the skin, thereby accelerating skin healing. Our results hold great promise for the long-term application of the patches in atopic dermatitis treatment.
</summary>
<dc:date>2024-07-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Polls and the U.S. Presidential Election in 2020 …. and 2024</title>
<link href="https://hdl.handle.net/1721.1/164212" rel="alternate"/>
<author>
<name>Barnett, Arnold</name>
</author>
<author>
<name>Sarfati, Arnaud</name>
</author>
<id>https://hdl.handle.net/1721.1/164212</id>
<updated>2025-12-06T03:09:55Z</updated>
<published>2023-05-30T00:00:00Z</published>
<summary type="text">The Polls and the U.S. Presidential Election in 2020 …. and 2024
Barnett, Arnold; Sarfati, Arnaud
Arguably, the single greatest determinant of U.S. public policy is the identity of the president. And if trusted, polls not only provide forecasts about presidential-election outcomes but can act to shape those outcomes. Looking ahead to the 2024 U.S. presidential election and recognizing that polls before the 2020 presidential election were sharply criticized, we consider whether such harsh assessments are warranted. Initially, we explore whether such polls as processed by the sophisticated aggregator FiveThirtyEight successfully forecast actual 2020 state-by-state outcomes. We evaluate FiveThirtyEight’s forecasts using customized statistical methods not used previously, methods that take account of likely correlations among election outcomes in similar states. We find that, taken together, the pollsters and FiveThirtyEight did an excellent job in predicting who would win in individual states, even those “tipping point” states where forecasting is more difficult. However, we also find that FiveThirtyEight underestimated Donald Trump’s vote shares by state to a modest but statistically significant extent. We further consider how the polls performed when the more primitive aggregator Real Clear Politics combined their results, and then how well single statewide polls performed without aggregation. It emerges that both Real Clear Politics and the individual polls fared surprisingly well.
</summary>
<dc:date>2023-05-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Profession’s Vanguards: Arab Architects and Regional Architectural Exchange, 1900–50</title>
<link href="https://hdl.handle.net/1721.1/164211" rel="alternate"/>
<author>
<name>Abusaada, Nadi</name>
</author>
<id>https://hdl.handle.net/1721.1/164211</id>
<updated>2025-12-06T03:09:58Z</updated>
<published>2023-07-19T00:00:00Z</published>
<summary type="text">The Profession’s Vanguards: Arab Architects and Regional Architectural Exchange, 1900–50
Abusaada, Nadi
Writings on architecture in the Middle East during the first half of the twentieth century have often focused on the legacies of colonial architects and planners in shaping Middle Eastern cities and built environments. In contrast, this article focuses on the overlooked history of the first milieu of trained Arab architects in the Middle East, focusing on Palestine, Syria, Lebanon and Egypt. Examining unstudied historical materials and archives, it maps out the trajectories of individual architects as well as the architectural profession more generally in this period of rapid change. It is divided into three main sections that highlight this: first, architecture’s transition from the Ottoman guild system to its professionalisation by the turn of the century; second, the mobility of architectural knowledge and expertise in the Arab region following the First World War; finally, the development of a new institutionalised architectural culture that sought to cultivate bonds between Arab architects not only in their individual countries, but also regionally throughout the Arab world towards the mid-twentieth century.
</summary>
<dc:date>2023-07-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fossil fuel divestment and public climate change policy preferences: an experimental test in three countries</title>
<link href="https://hdl.handle.net/1721.1/164210" rel="alternate"/>
<author>
<name>Schwartz, Joshua A.</name>
</author>
<author>
<name>Lendway, Paul</name>
</author>
<author>
<name>Nuri, Abolfazl</name>
</author>
<id>https://hdl.handle.net/1721.1/164210</id>
<updated>2025-12-06T03:10:03Z</updated>
<published>2023-02-26T00:00:00Z</published>
<summary type="text">Fossil fuel divestment and public climate change policy preferences: an experimental test in three countries
Schwartz, Joshua A.; Lendway, Paul; Nuri, Abolfazl
Divestment is a prominent strategy championed by activists to induce positive social change. For example, the current fossil fuel divestment movement includes over 1,500 institutions that control $40 trillion in assets. A primary pathway through which divestment is theorized to be effective is by influencing public beliefs and policy preferences, thus pressuring policymakers to take action. However, prior research only tests this argument via qualitative case studies. We assess the impact of exposure to information about fossil fuel divestment on public opinion through the use of national survey experiments in three major greenhouse gas emitters: the U.S., India, and South Africa. We find surprisingly little evidence that exposure to information about the fossil fuel divestment movement can increase public support for policies that address climate change. Our findings suggest that divestment movements may be less effective at changing beliefs and policy preferences than previously realized.
</summary>
<dc:date>2023-02-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Sculptress Interprets Land’s Spirit”: Elizabeth Wyn Wood, the Group of Seven, and analogy as equivalence</title>
<link href="https://hdl.handle.net/1721.1/164209" rel="alternate"/>
<author>
<name>Nikčević, Hana</name>
</author>
<id>https://hdl.handle.net/1721.1/164209</id>
<updated>2025-12-06T03:09:56Z</updated>
<published>2023-09-26T00:00:00Z</published>
<summary type="text">“Sculptress Interprets Land’s Spirit”: Elizabeth Wyn Wood, the Group of Seven, and analogy as equivalence
Nikčević, Hana
Canadian sculptor Elizabeth Wyn Wood (1903–66), best known for her modernist landscape sculptures, has since the inception of her artistic career been compared, through analogy, with the Group of Seven (fl. 1920–33), Canada’s enduringly famous and overtly nationalistic collective of modernist landscape painters. Critics claimed that Wood “achieved for sculpture what the Group of Seven achieved for painting” and, occasionally, invoked specific Group artists, dubbing Wood the “Lawren Harris of sculpture.” Analogizing across disciplines, the Wood/Group likening appears to posit a formal comparison in gendered language: the Group’s bold, decorative portrayals of the northern Ontario “wilderness” find clear visual comparands in Wood’s abstracted compositions of the same region. In this article, however, I demonstrate that the apparently visual basis for the comparison is inextricable from the textual discourse fundamental to Canadian art in the early twentieth century and beyond; it is only through analyzing this discourse that an understanding of the Wood/Group analogy can be reached. The Group ostensibly pioneered the first genuine Canadian landscape aesthetic; through immersing himself in the land, the mythology went, the Canadian artist learned to paint Canada on its own terms. This landscape artist-as-woodsman myth was a form of settler indigenization by which Canada laid cultural claim to colonized land. Analogy frames Wood as not an epigone but an equal of the Group: in producing organically, anew, a genuine Canadian landscape aesthetic for sculpture, Wood “achieved for sculpture what the Group of Seven achieved for painting”—its deployment as a medium in the service of Canada’s land claim.
</summary>
<dc:date>2023-09-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Landscape “Difficult to Describe”: The Model Village and the Capital City</title>
<link href="https://hdl.handle.net/1721.1/164208" rel="alternate"/>
<author>
<name>Springstubb, Phoebe</name>
</author>
<id>https://hdl.handle.net/1721.1/164208</id>
<updated>2025-12-06T03:09:51Z</updated>
<published>2023-03-22T00:00:00Z</published>
<summary type="text">A Landscape “Difficult to Describe”: The Model Village and the Capital City
Springstubb, Phoebe
In mid-twentieth-century Punjab, grassroots development projects sought to modernize the countryside by decentralizing power to villages. The capital city Chandigarh, built in the same period, seems to represent the opposite: a national symbol of a newly independent India’s centralized power. Yet, this article argues, rural and urban were reciprocal and volatile counterparts. Through the work of M.S. Randhawa, it reorients analysis of Chandigarh to reveal how the materiality of landscape itself was a medium for territorial planning, indelibly linking—and managing the distinctions between—city and countryside. A botanist and civil servant, Randhawa used landscape to realize modernizing agendas and to constrain social change in projects from model villages and a “bioaesthetic” plan for the city to new land-grant universities that ushered in the Green Revolution’s industrialized agriculture. His work offers a revisionist history of development’s practitioners and periodization. It shows how an uneven fabric of late-colonial rural uplift shaped the contours of postcolonial, state-directed agrarian transformation. Following the civil servant in the landscape, this article calls for the grounding of abstract theories like development and state formation in histories of their local inflections.
</summary>
<dc:date>2023-03-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy Search through Genetic Programming and LLM-assisted Curriculum Learning</title>
<link href="https://hdl.handle.net/1721.1/164207" rel="alternate"/>
<author>
<name>Jorgensen, Steven</name>
</author>
<author>
<name>Nadizar, Giorgia</name>
</author>
<author>
<name>Pietropolli, Gloria</name>
</author>
<author>
<name>Manzoni, Luca</name>
</author>
<author>
<name>Medvet, Eric</name>
</author>
<author>
<name>O'Reilly, Una-May</name>
</author>
<author>
<name>Hemberg, Erik</name>
</author>
<id>https://hdl.handle.net/1721.1/164207</id>
<updated>2025-12-05T04:15:40Z</updated>
<published>2025-10-31T00:00:00Z</published>
<summary type="text">Policy Search through Genetic Programming and LLM-assisted Curriculum Learning
Jorgensen, Steven; Nadizar, Giorgia; Pietropolli, Gloria; Manzoni, Luca; Medvet, Eric; O'Reilly, Una-May; Hemberg, Erik
Curriculum learning (CL) consists of using a diverse set of user-provided test cases, with varying levels of difficulty and organized in a suitable progression, for learning a policy. The quality of test cases is important to allow optimization techniques such as genetic programming (GP) to solve policy search problems. In this work, we evaluate large language models (LLMs) as providers of test cases for GP-based policy search. We consider two policy search tasks, a single-player and a multi-player game, and four LLMs differing in complexity and specialization, which we prompt to generate suitable test cases for the two games. We experimentally assess the intrinsic quality of LLM-generated test cases and their utility when inserted in a curriculum consumed by a GP optimization. We evaluate the robustness of the approach with respect to the way cases are scheduled in curricula and with respect to the policy representation, for which we use both graphs and linear programs evolved by GP. We observe that the effectiveness of LLM-assisted CL depends on both the choice of LLM and the design of the prompting and scheduling strategies. These findings highlight important considerations for leveraging LLMs in automated curriculum design for GP-based optimization.
</summary>
<dc:date>2025-10-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Biharmonic Skinning Using Geometric Fields</title>
<link href="https://hdl.handle.net/1721.1/164206" rel="alternate"/>
<author>
<name>Dodik, Ana</name>
</author>
<author>
<name>Sitzmann, Vincent</name>
</author>
<author>
<name>Solomon, Justin</name>
</author>
<author>
<name>Stein, Oded</name>
</author>
<id>https://hdl.handle.net/1721.1/164206</id>
<updated>2025-12-05T04:15:49Z</updated>
<published>2025-10-28T00:00:00Z</published>
<summary type="text">Robust Biharmonic Skinning Using Geometric Fields
Dodik, Ana; Sitzmann, Vincent; Solomon, Justin; Stein, Oded
Bounded biharmonic weights are a popular tool used to rig and deform characters for animation, to compute reduced-order simulations, and to define feature descriptors for geometry processing. They necessitate tetrahedralizing the volume bounded by the surface, introducing the possibility of meshing artifacts or tetrahedralization failure. We introduce a mesh-free and robust automatic skinning technique that generates weights comparable to the current state of the art, but works reliably even on open surfaces, triangle soups, and point clouds where current methods fail. We achieve this through the use of a specialized Lagrangian representation enabled by the advent of hardware ray-tracing, which circumvents the need for finite elements while optimizing the biharmonic energy and enforcing boundary conditions. The flexibility of our formulation allows us to integrate artistic control through weight painting during the optimization. We offer a thorough qualitative and quantitative evaluation of our method.
</summary>
<dc:date>2025-10-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>SquareLoop: Explore Optimal Authentication Block Strategy for ML</title>
<link href="https://hdl.handle.net/1721.1/164205" rel="alternate"/>
<author>
<name>Strzeszynski, Jan</name>
</author>
<author>
<name>Tong, Jianming</name>
</author>
<author>
<name>Lee, Kyungmi</name>
</author>
<author>
<name>Xiong, Nathan</name>
</author>
<author>
<name>Parashar, Angshuman</name>
</author>
<author>
<name>Emer, Joel</name>
</author>
<author>
<name>Krishna, Tushar</name>
</author>
<author>
<name>Yan, Mengjia</name>
</author>
<id>https://hdl.handle.net/1721.1/164205</id>
<updated>2025-12-05T04:15:46Z</updated>
<published>2025-10-18T00:00:00Z</published>
<summary type="text">SquareLoop: Explore Optimal Authentication Block Strategy for ML
Strzeszynski, Jan; Tong, Jianming; Lee, Kyungmi; Xiong, Nathan; Parashar, Angshuman; Emer, Joel; Krishna, Tushar; Yan, Mengjia
Off-chip memory in ML accelerators is vulnerable to both hardware and software attacks, which necessitates encryption and authentication. Precise performance modeling of it requires (1) a representation of authentication blocks (AuthBlocks) that covers the full design space of shapes and orientations, and (2) precise memory-behavior modeling, as encryption and authentication mainly increase memory traffic. This paper introduces SquareLoop, a framework that resolves these challenges by introducing (1) flexible, all-level-partitioning-based AuthBlocks that ensure full coverage of the entire design space, (2) a realistic layout-based memory model, and (3) a Mapping-Layout-Authentication co-search algorithm that explores the drastic combinatorial design space to find the optimal mapping, layout, and AuthBlock shape for multi-layer workloads. SquareLoop’s detailed memory model helps find better mappings, achieving a 1.32× speedup on ResNet18 compared to the SotA SecureLoop, and our latency predictions are validated to within 7.3% of an RTL implementation. SquareLoop also achieves up to 1.08×/1.82× overall speedup for authenticated ResNet18/MobileNet-V3 on various accelerators with AuthBlock and mapping co-searching. We open-source SquareLoop to provide a powerful and validated tool for designing efficient, secure accelerators at https://github.com/maeri-project/squareloop.
HASP 2025, Seoul, Republic of Korea
</summary>
<dc:date>2025-10-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Close Look at RMP Entry Caching and Its Security Implications in SEV-SNP</title>
<link href="https://hdl.handle.net/1721.1/164204" rel="alternate"/>
<author>
<name>Bagia, Alexis</name>
</author>
<author>
<name>Ulitzsch, Vincent</name>
</author>
<author>
<name>Trujillo, Daniël</name>
</author>
<author>
<name>Li, Mengyuan</name>
</author>
<author>
<name>Yan, Mengjia</name>
</author>
<author>
<name>Seifert, Jean-Pierre</name>
</author>
<id>https://hdl.handle.net/1721.1/164204</id>
<updated>2025-12-05T04:15:44Z</updated>
<published>2025-10-18T00:00:00Z</published>
<summary type="text">A Close Look at RMP Entry Caching and Its Security Implications in SEV-SNP
Bagia, Alexis; Ulitzsch, Vincent; Trujillo, Daniël; Li, Mengyuan; Yan, Mengjia; Seifert, Jean-Pierre
AMD’s Secure Encrypted Virtualization (SEV) technology is a pivotal component in AMD server processors that boosts cloud computing security. It achieves this by offering transparent memory encryption and managing keys for protecting virtual machines (VMs), independently of the hypervisor’s trustworthiness. The latest iteration, SEV-Secure Nested Paging (SEV-SNP), introduces memory integrity protection through a data structure called the Reverse Map Table (RMP), which maps system physical addresses to guest physical addresses and tracks ownership of physical pages.
The RMP is maintained in a dedicated region in DRAM. As every memory write triggers a check against an RMP entry, caching RMP entries is crucial to alleviating the RMP’s performance impact. However, caching may create new security challenges, as it can introduce new microarchitectural side-channels. In addition, maintaining cache coherence is crucial for the RMP’s security guarantees. However, so far, neither the details of the RMP’s caching behavior nor its security implications have been explored. This paper aims to fill this gap by conducting a systematic study of the RMP’s caching behavior. Through reverse engineering, we identify that the RMP is not only cached in the TLB, but also in the L1D and L2 data caches. Interestingly, this caching depends on the access type on Zen 5. We also uncover the mechanisms by which cache coherence across the TLB is enforced. We find that each update to the RMP table triggers a global TLB flush across all cores. Finally, we present several potential security implications and demonstrate that an attacker can exploit the RMP’s caching to leak physical address information. A user process can leak 6 bits of the Physical Frame Number (PFN) of its pages via the L1D cache within 2.5 µs per page, with success rates of 97% (Zen 4) and 99% (Zen 3 and Zen 5).
</summary>
<dc:date>2025-10-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guarding LLM-aided Software Transformation Tasks via Component Exoskeletons</title>
<link href="https://hdl.handle.net/1721.1/164203" rel="alternate"/>
<author>
<name>Lamprou, Evangelos</name>
</author>
<author>
<name>Kalhauge, Christian</name>
</author>
<author>
<name>Rinard, Martin</name>
</author>
<author>
<name>Vasilakis, Nikos</name>
</author>
<id>https://hdl.handle.net/1721.1/164203</id>
<updated>2025-12-05T04:15:47Z</updated>
<published>2025-10-13T00:00:00Z</published>
<summary type="text">Guarding LLM-aided Software Transformation Tasks via Component Exoskeletons
Lamprou, Evangelos; Kalhauge, Christian; Rinard, Martin; Vasilakis, Nikos
Large language models (LLMs) are achieving state-of-the-art results across a wide variety of software transformation tasks---including translating across languages and lifting opaque software components to high-level languages. Unfortunately, their results are often subtly incorrect, insecure, or underperformant---affecting the widespread deployment of these LLM-driven techniques in settings that go beyond the narrow scope of academic papers. This paper posits that such widespread deployment crucially depends on developing appropriate model guardrails for safeguarding the results of the transformation process. Such guardrails can be supported by component exoskeletons, tunable partial specifications extracted mostly automatically from the original, pre-transformed component. Exoskeletons serve as component projections that supplement, and often go through, the entire transformation process, confirming that the new, transformed component meets the original specifications. They show promise on several real-world scenarios and unearth exciting research directions.
PACMI ’25, October 13-16, 2025, Seoul, Republic of Korea
</summary>
<dc:date>2025-10-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Continuous Tensor Abstraction: Where Indices Are Real</title>
<link href="https://hdl.handle.net/1721.1/164202" rel="alternate"/>
<author>
<name>Won, Jaeyeon</name>
</author>
<author>
<name>Ahrens, Willow</name>
</author>
<author>
<name>Collin, Teodoro Fields</name>
</author>
<author>
<name>Emer, Joel S.</name>
</author>
<author>
<name>Amarasinghe, Saman</name>
</author>
<id>https://hdl.handle.net/1721.1/164202</id>
<updated>2025-12-05T04:15:43Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">The Continuous Tensor Abstraction: Where Indices Are Real
Won, Jaeyeon; Ahrens, Willow; Collin, Teodoro Fields; Emer, Joel S.; Amarasinghe, Saman
This paper introduces the continuous tensor abstraction, allowing indices to take real-number values (e.g., A[3.14]). It also presents continuous tensor algebra expressions, such as C[x,y] = A[x,y] ∗ B[x,y], where indices are defined over a continuous domain. This work expands the traditional tensor model to include continuous tensors. Our implementation supports piecewise-constant tensors, on which infinite domains can be processed in finite time. We also introduce a new tensor format for efficient storage and a code generation technique for automatic kernel generation. For the first time, our abstraction expresses domains like computational geometry and computer graphics in the language of tensor programming. Our approach demonstrates competitive or better performance to hand-optimized kernels in leading libraries across diverse applications. Compared to hand-implemented libraries on a CPU, our compiler-based implementation achieves an average speedup of 9.20× on 2D radius search with ∼60× fewer lines of code (LoC), 1.22× on genomic interval overlapping queries (with ∼18× LoC saving), and 1.69× on trilinear interpolation in Neural Radiance Field (with ∼6× LoC saving).
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Domain-Specific Probabilistic Programming Language for Reasoning about Reasoning (Or: A Memo on memo)</title>
<link href="https://hdl.handle.net/1721.1/164201" rel="alternate"/>
<author>
<name>Chandra, Kartik</name>
</author>
<author>
<name>Chen, Tony</name>
</author>
<author>
<name>Tenenbaum, Joshua B.</name>
</author>
<author>
<name>Ragan-Kelley, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164201</id>
<updated>2025-12-05T04:15:21Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">A Domain-Specific Probabilistic Programming Language for Reasoning about Reasoning (Or: A Memo on memo)
Chandra, Kartik; Chen, Tony; Tenenbaum, Joshua B.; Ragan-Kelley, Jonathan
The human ability to think about thinking ("theory of mind") is a fundamental object of study in many disciplines. In recent decades, researchers across these disciplines have converged on a rich computational paradigm for modeling theory of mind, grounded in recursive probabilistic reasoning. However, practitioners often find programming in this paradigm challenging: first, because thinking-about-thinking is confusing for programmers, and second, because models are slow to run. This paper presents memo, a new domain-specific probabilistic programming language that overcomes these challenges: first, by providing specialized syntax and semantics for theory of mind, and second, by taking a unique approach to inference that scales well on modern hardware via array programming. memo enables practitioners to write dramatically faster models with much less code, and has already been adopted by several research groups.
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pyrosome: Verified Compilation for Modular Metatheory</title>
<link href="https://hdl.handle.net/1721.1/164200" rel="alternate"/>
<author>
<name>Jamner, Dustin</name>
</author>
<author>
<name>Kammer, Gabriel</name>
</author>
<author>
<name>Nag, Ritam</name>
</author>
<author>
<name>Chlipala, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/164200</id>
<updated>2025-12-05T04:15:31Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">Pyrosome: Verified Compilation for Modular Metatheory
Jamner, Dustin; Kammer, Gabriel; Nag, Ritam; Chlipala, Adam
We present Pyrosome, a generic framework for modular language metatheory that embodies a novel approach to extensible semantics and compilation, implemented in Coq. Common techniques for semantic reasoning are often tied to the specific structures of the languages and compilers that they support. Contextual equivalence is difficult to work with directly, and both logical relations and transition system-based approaches typically fix a specific notion of effect globally. While modular transition systems have been effective in imperative settings, they are suboptimal for functional code. These limitations restrict the extension and composition of semantics in these systems. In Pyrosome, verified compilers are fully extensible, meaning that to extend a language simply requires defining and verifying the compilation of the new feature, reusing the old correctness theorem for all other cases. The novel enabling idea is an inductive formulation of equivalence preservation that supports the addition of new rules to the source language, target language, and compiler.

Pyrosome defines a formal, deeply embedded notion of programming languages with semantics given by dependently sorted equational theories, so all compiler-correctness proofs boil down to type-checking and equational reasoning. We support vertical composition of any compilers expressed in our framework in addition to feature extension. Since our design requires compilers to support open programs, our correctness guarantees support linking with any target code of the appropriate type. As a case study, we present a multipass compiler from System F with simple references, through CPS translation and closure conversion. Specifically, we demonstrate how we can build such a compiler incrementally by starting with a compiler for simply typed lambda-calculus and adding natural numbers, the unit type, recursive functions, and a global heap, then extending judgments with a type environment and adding type abstraction, all while reusing the original theorems. We also present a linear version of the simply typed CPS pass and compile a small imperative language to the simply typed target to show how Pyrosome handles substructural typing and imperative features.
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>What You See Is What It Does: A Structural Pattern for Legible Software</title>
<link href="https://hdl.handle.net/1721.1/164199" rel="alternate"/>
<author>
<name>Meng, Eagon</name>
</author>
<author>
<name>Jackson, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/164199</id>
<updated>2025-12-05T04:15:32Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">What You See Is What It Does: A Structural Pattern for Legible Software
Meng, Eagon; Jackson, Daniel
The opportunities offered by LLM coders (and their current limitations) demand a reevaluation of how software is structured. Software today is often “illegible”—lacking a direct correspondence between code and observed behavior—and insufficiently modular, leading to a failure of three key requirements of robust coding: incrementality (the ability to deliver small increments by making localized changes), integrity (avoiding breaking prior increments) and transparency (making clear what has changed at build time, and what actions have happened at runtime).
A new structural pattern offers improved legibility and modularity. Its elements are concepts and synchronizations: fully independent services and event-based rules that mediate between them. A domain-specific language for synchronizations allows behavioral features to be expressed in a granular and declarative way (and thus readily generated by an LLM). A case study of the RealWorld benchmark is used to illustrate and evaluate the approach.
Onward! ’25, Singapore, Singapore
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gauguin, Descartes, Bayes: A Diurnal Golem’s Brain</title>
<link href="https://hdl.handle.net/1721.1/164198" rel="alternate"/>
<author>
<name>Chandra, Kartik</name>
</author>
<author>
<name>Liu, Amanda</name>
</author>
<author>
<name>Ragan-Kelley, Jonathan</name>
</author>
<author>
<name>Tenenbaum, Joshua B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164198</id>
<updated>2025-12-05T04:15:29Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">Gauguin, Descartes, Bayes: A Diurnal Golem’s Brain
Chandra, Kartik; Liu, Amanda; Ragan-Kelley, Jonathan; Tenenbaum, Joshua B.
A "quine" is a deterministic program that prints itself. In this essay, I will show you a "gauguine": a probabilistic program that infers itself. A gauguine is repeatedly asked to guess its own source code. Initially, its chances of guessing correctly are of course minuscule. But as the gauguine observes more and more of its own previous guesses, it detects patterns of behavior and gains information about its inner workings. This information allows it to bootstrap self-knowledge, and ultimately discover its own source code. We will discuss how—and why—we might write a gauguine, and what we stand to learn by constructing one.
Onward! ’25, Singapore, Singapore
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Fidelity vs. High-Fidelity Spatial Design in Virtual Reality for Non-professionals</title>
<link href="https://hdl.handle.net/1721.1/164197" rel="alternate"/>
<author>
<name>Wei, Lan</name>
</author>
<author>
<name>Dai, Chenyue</name>
</author>
<author>
<name>Peng, Xuening</name>
</author>
<author>
<name>Tong, Xin</name>
</author>
<author>
<name>Liu, Can</name>
</author>
<id>https://hdl.handle.net/1721.1/164197</id>
<updated>2025-12-05T04:15:37Z</updated>
<published>2024-10-29T00:00:00Z</published>
<summary type="text">Low-Fidelity vs. High-Fidelity Spatial Design in Virtual Reality for Non-professionals
Wei, Lan; Dai, Chenyue; Peng, Xuening; Tong, Xin; Liu, Can
In spatial design, non-professionals lack effective hands-on opportunities to participate in the design process. Although VR platforms can support spatial design with immersive interaction, existing tools simply provide high-fidelity 3D objects for users to choose and place around. A low-fidelity design approach is rarely supported, nor investigated in this context. In this work, we present a user study comparing low-fidelity and high-fidelity spatial design in VR. Eighteen participants were recruited to use both versions of a prototype with varied geometric fidelity to complete home designs. Their design outcomes and intent were evaluated by professional designers. Our findings show that the low-fidelity version allowed participants to think more openly and creatively, leading to a more holistic expression of their design intent and needs, while the high-fidelity version promoted users’ thinking of realistic scenarios. We discuss the design implications and how they can be combined in co-design activities.
CHCHI 2024, Shenzhen, China
</summary>
<dc:date>2024-10-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trans Data: A Research and Design Agenda from Trans Activists' Transformative Data Science</title>
<link href="https://hdl.handle.net/1721.1/164196" rel="alternate"/>
<author>
<name>Stevens, Nikko</name>
</author>
<author>
<name>D'Ignazio, Catherine</name>
</author>
<author>
<name>Doğan, Amelia</name>
</author>
<id>https://hdl.handle.net/1721.1/164196</id>
<updated>2025-12-05T04:15:38Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Trans Data: A Research and Design Agenda from Trans Activists' Transformative Data Science
Stevens, Nikko; D'Ignazio, Catherine; Doğan, Amelia
Trans activists play a deeply important role in caring for and advocating for the transgender community using data. Through an interview study with 16 trans activists working in trans-led and trans-serving organizations in the United States, we document how they use restorative/transformative data science processes of resolving, researching, recording, and refusing and using data. We incorporate their data practices with trans technology and trans competent interaction design approaches to propose a research agenda for trans data: materially improve trans lives, cross data boundaries, and constantly engage in power analysis. We expound on how a trans data research agenda can benefit data advocacy and CSCW research and design.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-economic analysis and life cycle assessment for catalytic fast pyrolysis of mixed plastic waste</title>
<link href="https://hdl.handle.net/1721.1/164195" rel="alternate"/>
<author>
<name>Yadav, Geetanjali</name>
</author>
<author>
<name>Singh, Avantika</name>
</author>
<author>
<name>Dutta, Abhijit</name>
</author>
<author>
<name>Uekert, Taylor</name>
</author>
<author>
<name>DesVeaux, Jason S</name>
</author>
<author>
<name>Nicholson, Scott R</name>
</author>
<author>
<name>Tan, Eric CD</name>
</author>
<author>
<name>Mukarakate, Calvin</name>
</author>
<author>
<name>Schaidle, Joshua A</name>
</author>
<author>
<name>Wrasman, Cody J</name>
</author>
<author>
<name>Carpenter, Alberta C</name>
</author>
<author>
<name>Baldwin, Robert M</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<id>https://hdl.handle.net/1721.1/164195</id>
<updated>2025-12-05T04:15:58Z</updated>
<published>2023-06-05T00:00:00Z</published>
<summary type="text">Techno-economic analysis and life cycle assessment for catalytic fast pyrolysis of mixed plastic waste
Yadav, Geetanjali; Singh, Avantika; Dutta, Abhijit; Uekert, Taylor; DesVeaux, Jason S; Nicholson, Scott R; Tan, Eric CD; Mukarakate, Calvin; Schaidle, Joshua A; Wrasman, Cody J; Carpenter, Alberta C; Baldwin, Robert M; Román-Leshkov, Yuriy; Beckham, Gregg T
Pyrolysis of waste plastics has gained interest as a candidate chemical recycling technology. To examine the potential of this approach, we conducted a techno-economic analysis (TEA) and life cycle assessment (LCA) of a conceptual catalytic fast pyrolysis (CFP) facility that converts 240 metric tons/day of mixed plastic waste. The modeled base case predicts the minimum selling price (MSP) of a benzene, toluene, and xylenes (BTX) mixture at $1.07 per kg when co-products are sold at their average market prices. We predict that the aromatic product stream can be cost-competitive with virgin BTX mixtures ($0.68/kg) if the mixed waste plastics are available for less than $0.10/kg or if crude oil prices exceed $60/barrel. Moreover, we estimate that CFP-based conversion of waste plastics can reduce the total supply chain energy use by 24% but with a 2.4-fold increase in greenhouse gas (GHG) emissions per kilogram of BTX, relative to the incumbent manufacturing process. Sensitivity analysis highlights that feedstock cost, co-product selling prices, capital cost for product separations, and operating costs are key cost drivers. Further, we examine three additional CFP processes that differ in product composition, namely naphtha, and a case where the products are rich in either C2–C4 olefins or BTX aromatic hydrocarbons. Whereas the MSP of naphtha ($2.18/kg) is ∼4-fold higher than virgin naphtha, both the olefin-rich and aromatics-rich product cases exhibit a potential reduction in MSP up to 40%, with a 21%–45% reduction in total supply chain energy and 2.2–3.8-fold increase in GHG emissions relative to incumbent manufacturing processes. LCA predicts that the CFP process exhibits lower fossil fuel depletion than virgin manufacturing across all cases as well as lower acidification, ozone depletion, and smog formation for select cases, but high utility and feedstock preparation requirements result in poorer performance across other metrics.
Overall, this study highlights important process parameters for improving CFP of mixed waste plastics from economic and environmental perspectives.
</summary>
<dc:date>2023-06-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Togedule: Scheduling Meetings with Large Language Models and Adaptive Representations of Group Availability</title>
<link href="https://hdl.handle.net/1721.1/164194" rel="alternate"/>
<author>
<name>Song, Jaeyoon</name>
</author>
<author>
<name>Ashktorab, Zahra</name>
</author>
<author>
<name>Malone, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/164194</id>
<updated>2025-12-05T04:15:34Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Togedule: Scheduling Meetings with Large Language Models and Adaptive Representations of Group Availability
Song, Jaeyoon; Ashktorab, Zahra; Malone, Thomas
Scheduling is a perennial, and often challenging, problem for many groups. Existing tools are mostly static, showing an identical set of choices to everyone, regardless of the current status of attendees' inputs and preferences. In this paper, we propose Togedule, an adaptive scheduling tool that uses large language models to dynamically adjust the pool of choices and their presentation format. With the initial prototype, we conducted a formative study (N=10) and identified the potential benefits and risks of such an adaptive scheduling tool. Then, after enhancing the system, we conducted two controlled experiments, one each for attendees and organizers (total N=66). For each experiment, we compared scheduling with verbal messages, shared calendars, or Togedule. Results show that Togedule significantly reduces the cognitive load of attendees indicating their availability and improves the speed and quality of the decisions made by organizers.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Core‐passivation: A concept for stable core‐shell nanoparticles in aqueous electrocatalysis</title>
<link href="https://hdl.handle.net/1721.1/164193" rel="alternate"/>
<author>
<name>Göhl, Daniel</name>
</author>
<author>
<name>Paciok, Paul</name>
</author>
<author>
<name>Wang, Zhenshu</name>
</author>
<author>
<name>Kang, Jin Soo</name>
</author>
<author>
<name>Heggen, Marc</name>
</author>
<author>
<name>Mayrhofer, Karl JJ</name>
</author>
<author>
<name>Román‐Leshkov, Yuriy</name>
</author>
<author>
<name>Ledendecker, Marc</name>
</author>
<id>https://hdl.handle.net/1721.1/164193</id>
<updated>2025-12-04T03:14:19Z</updated>
<published>2023-01-19T00:00:00Z</published>
<summary type="text">Core‐passivation: A concept for stable core‐shell nanoparticles in aqueous electrocatalysis
Göhl, Daniel; Paciok, Paul; Wang, Zhenshu; Kang, Jin Soo; Heggen, Marc; Mayrhofer, Karl JJ; Román‐Leshkov, Yuriy; Ledendecker, Marc
The stability of nanoparticles is a major challenge in thermal and electrocatalysis. This is especially true for core‐shell nanoparticles where only a few monolayers of noble metal protect the usually non‐noble core material. In this work, we utilize the practical nobility concept to engineer stable core‐shell nanoparticles with a self‐passivating core material. Specifically, tantalum carbide as core material in combination with a 1–3 monolayer thick platinum shell exhibits exceptional stability in aqueous media. The core‐shell catalyst shows no sign of structural changes after 10,000 degradation cycles up to 1.0 V_RHE. Due to the efficient passivation of tantalum carbide at the solid/liquid interface, the dissolution reduces by a factor of eight compared to bare Pt. Our findings confirm that passivating core materials are highly beneficial for the stabilization of core‐shell nanomaterials in aqueous media. They open up new ways for the rational design of cost‐efficient but stable non‐noble core – platinum shell nanoparticles where harsh, oxidizing conditions are employed.
</summary>
<dc:date>2023-01-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interdependence of Solvent and Catalyst Selection on Low Pressure Hydrogen-Free Reductive Catalytic Fractionation</title>
<link href="https://hdl.handle.net/1721.1/164192" rel="alternate"/>
<author>
<name>Facas, Gregory G</name>
</author>
<author>
<name>Brandner, David G</name>
</author>
<author>
<name>Bussard, Jeremy R</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<id>https://hdl.handle.net/1721.1/164192</id>
<updated>2025-12-04T03:14:28Z</updated>
<published>2023-03-13T00:00:00Z</published>
<summary type="text">Interdependence of Solvent and Catalyst Selection on Low Pressure Hydrogen-Free Reductive Catalytic Fractionation
Facas, Gregory G; Brandner, David G; Bussard, Jeremy R; Román-Leshkov, Yuriy; Beckham, Gregg T
Hydrogen-free reductive catalytic fractionation (RCF) is a promising method to produce aromatic compounds directly from native biomass without the use of external hydrogen gas. In this work, we show that by using high boiling point diols as a solvent in hydrogen-free RCF, reaction pressures can be reduced by an order of magnitude compared to conventional RCF with methanol and hydrogen gas, while still producing appreciable aromatic monomer yields. Importantly, the use of diols with secondary alcohol functional groups increases hydrogenation activity on Ru/C, Pt/C, and Ni/C, measured by the yield of aromatic compounds with saturated propyl side chains, compared to processing in ethylene glycol, indicating that the choice of solvent and catalyst together can be tuned to control product selectivity of aromatic monomers in RCF.
</summary>
<dc:date>2023-03-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Propylene Metathesis over Molybdenum Silicate Microspheres with Dispersed Active Sites</title>
<link href="https://hdl.handle.net/1721.1/164191" rel="alternate"/>
<author>
<name>Skoda, David</name>
</author>
<author>
<name>Zhu, Ran</name>
</author>
<author>
<name>Hanulikova, Barbora</name>
</author>
<author>
<name>Styskalik, Ales</name>
</author>
<author>
<name>Vykoukal, Vit</name>
</author>
<author>
<name>Machac, Petr</name>
</author>
<author>
<name>Simonikova, Lucie</name>
</author>
<author>
<name>Kuritka, Ivo</name>
</author>
<author>
<name>Poleunis, Claude</name>
</author>
<author>
<name>Debecker, Damien P</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<id>https://hdl.handle.net/1721.1/164191</id>
<updated>2025-12-04T03:14:25Z</updated>
<published>2023-09-20T00:00:00Z</published>
<summary type="text">Propylene Metathesis over Molybdenum Silicate Microspheres with Dispersed Active Sites
Skoda, David; Zhu, Ran; Hanulikova, Barbora; Styskalik, Ales; Vykoukal, Vit; Machac, Petr; Simonikova, Lucie; Kuritka, Ivo; Poleunis, Claude; Debecker, Damien P; Román-Leshkov, Yuriy
In this work, we demonstrate that amorphous and porous molybdenum silicate microspheres are highly active catalysts for heterogeneous propylene metathesis. Homogeneous molybdenum silicate microspheres and aluminum-doped molybdenum silicate microspheres were synthesized via a nonaqueous condensation of a hybrid molybdenum biphenyldicarboxylate-based precursor solution with (3-aminopropyl)triethoxysilane. The as-prepared hybrid metallosilicate products were calcined at 500 °C to obtain amorphous and porous molybdenum silicate and aluminum-doped molybdenum silicate microspheres with highly dispersed molybdate species inserted into the silicate matrix. These catalysts contain mainly highly dispersed MoOx species, which possess high catalytic activity in heterogeneous propylene metathesis to ethylene and butene. Compared to conventional silica-supported MoOx catalysts prepared via incipient wetness impregnation (MoIWI), the microspheres with low Mo content (1.5–3.6 wt %) exhibited nearly 2 orders of magnitude higher steady-state propylene metathesis rates at 200 °C, approaching site time yields of 0.11 s–1.
</summary>
<dc:date>2023-09-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accessing monomers from lignin through carbon–carbon bond cleavage</title>
<link href="https://hdl.handle.net/1721.1/164190" rel="alternate"/>
<author>
<name>Palumbo, Chad T</name>
</author>
<author>
<name>Ouellette, Erik T</name>
</author>
<author>
<name>Zhu, Jie</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Stahl, Shannon S</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<id>https://hdl.handle.net/1721.1/164190</id>
<updated>2025-12-04T03:14:20Z</updated>
<published>2024-10-04T00:00:00Z</published>
<summary type="text">Accessing monomers from lignin through carbon–carbon bond cleavage
Palumbo, Chad T; Ouellette, Erik T; Zhu, Jie; Román-Leshkov, Yuriy; Stahl, Shannon S; Beckham, Gregg T
Lignin, the heterogeneous aromatic macromolecule found in the cell walls of vascular plants, is an abundant feedstock for the production of biochemicals and biofuels. Many valorization schemes rely on lignin depolymerization, with decades of research focused on accessing monomers through C–O bond cleavage, given the abundance of β–O–4 bonds in lignin and the large number of available C–O bond cleavage strategies. Monomer yields are, however, invariably lower than desired, owing to the presence of recalcitrant C–C bonds whose selective cleavage remains a major challenge in catalysis. In this Review, we highlight lignin C–C cleavage reactions, including those of linkages arising from biosynthesis (β–1, β–5, β–β and 5–5) and industrial processing (5–CH2–5 and α–5). We examine multiple approaches to C–C cleavage, including homogeneous and heterogeneous catalysis, photocatalysis and biocatalysis, to identify promising strategies for further research and provide guidelines for definitive measurements of lignin C–C bond cleavage.
</summary>
<dc:date>2024-10-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Direct propylene epoxidation via water activation over Pd-Pt electrocatalysts</title>
<link href="https://hdl.handle.net/1721.1/164189" rel="alternate"/>
<author>
<name>Chung, Minju</name>
</author>
<author>
<name>Maalouf, Joseph H</name>
</author>
<author>
<name>Adams, Jason S</name>
</author>
<author>
<name>Jiang, Chenyu</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Manthiram, Karthish</name>
</author>
<id>https://hdl.handle.net/1721.1/164189</id>
<updated>2025-12-04T03:14:16Z</updated>
<published>2024-01-04T00:00:00Z</published>
<summary type="text">Direct propylene epoxidation via water activation over Pd-Pt electrocatalysts
Chung, Minju; Maalouf, Joseph H; Adams, Jason S; Jiang, Chenyu; Román-Leshkov, Yuriy; Manthiram, Karthish
Direct electrochemical propylene epoxidation by means of water-oxidation intermediates presents a sustainable alternative to existing routes that involve hazardous chlorine or peroxide reagents. We report an oxidized palladium-platinum alloy catalyst (PdPtOx/C), which reaches a Faradaic efficiency of 66 ± 5% toward propylene epoxidation at 50 milliamperes per square centimeter at ambient temperature and pressure. Embedding platinum into the palladium oxide crystal structure stabilized oxidized platinum species, resulting in improved catalyst performance. The reaction kinetics suggest that epoxidation on PdPtOx/C proceeds through electrophilic attack by metal-bound peroxo intermediates. This work demonstrates an effective strategy for selective electrochemical oxygen-atom transfer from water, without mediators, for diverse oxygenation reactions.
</summary>
<dc:date>2024-01-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams</title>
<link href="https://hdl.handle.net/1721.1/164188" rel="alternate"/>
<author>
<name>Song, Jaeyoon</name>
</author>
<author>
<name>Ashktorab, Zahra</name>
</author>
<author>
<name>Pan, Qian</name>
</author>
<author>
<name>Dugan, Casey</name>
</author>
<author>
<name>Geyer, Werner</name>
</author>
<author>
<name>Malone, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/164188</id>
<updated>2025-12-04T03:14:02Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Interaction Configurations and Prompt Guidance in Conversational AI for Question Answering in Human-AI Teams
Song, Jaeyoon; Ashktorab, Zahra; Pan, Qian; Dugan, Casey; Geyer, Werner; Malone, Thomas
Understanding the dynamics of human-AI interaction in question answering is crucial for enhancing collaborative efficiency. Extending from our initial formative study, which revealed challenges in human utilization of conversational AI support, we designed two configurations for prompt guidance: a Nudging approach, where the AI suggests potential responses for human agents, and a Highlight strategy, emphasizing crucial parts of reference documents to aid human responses. Through two controlled experiments, the first involving 31 participants and the second involving 106 participants, we compared these configurations against traditional human-only approaches, both with and without AI assistance. Our findings suggest that effective human-AI collaboration can enhance response quality, though merely combining human and AI efforts does not ensure improved outcomes. In particular, the Nudging configuration was shown to help improve the quality of the output when compared to AI alone. This paper delves into the development of these prompt guidance paradigms, offering insights for refining human-AI collaborations in conversational question-answering contexts and contributing to a broader understanding of human perceptions and expectations in AI partnerships.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pushing on an Open Door: Japan’s Evolutionary Security Posture</title>
<link href="https://hdl.handle.net/1721.1/164187" rel="alternate"/>
<author>
<name>Heginbotham, Eric</name>
</author>
<author>
<name>Leiter, Samuel</name>
</author>
<author>
<name>Samuels, Richard J</name>
</author>
<id>https://hdl.handle.net/1721.1/164187</id>
<updated>2025-12-04T03:14:27Z</updated>
<published>2023-07-13T00:00:00Z</published>
<summary type="text">Pushing on an Open Door: Japan’s Evolutionary Security Posture
Heginbotham, Eric; Leiter, Samuel; Samuels, Richard J
At the 2022 Shangri-La Dialogue, Japan’s Prime Minister Fumio Kishida warned defense ministers from across the Indo-Pacific region that “Ukraine today may be East Asia tomorrow.” Russia’s war of aggression and China’s tacit support for the invasion have amplified the urgency of the threat posed by China’s economic and military rise and have informed material changes to Japanese defense policy.
</summary>
<dc:date>2023-07-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dissecting User Experience of Social Virtual Reality: A Tale of Five Platforms</title>
<link href="https://hdl.handle.net/1721.1/164186" rel="alternate"/>
<author>
<name>Cheng, Ruizhi</name>
</author>
<author>
<name>Li, Jie</name>
</author>
<author>
<name>Chen, Songqing</name>
</author>
<author>
<name>Han, Bo</name>
</author>
<id>https://hdl.handle.net/1721.1/164186</id>
<updated>2025-12-04T03:14:05Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Dissecting User Experience of Social Virtual Reality: A Tale of Five Platforms
Cheng, Ruizhi; Li, Jie; Chen, Songqing; Han, Bo
Social virtual reality (VR) has the potential to replace conventional online social media by offering quasi-real-world social experiences. As such, it has been extensively examined by the research community. However, existing studies fall short of providing a comprehensive understanding of how different aspects of social VR platforms interact to affect user experience. Motivated by this limitation, we conduct a user study with Oculus Quest 2 headsets and dissect the user experience on five social VR platforms. We evenly and randomly divide 42 participants into short-term (spending 10–30 minutes/platform) and long-term (spending at least 120 minutes/platform) groups. Besides employing surveys and interviews, we measure the frame rate and resolution of these platforms and explore how various factors interplay to influence the user experience of social VR. Our findings reveal that the frame rate, resolution, and interactive events of social VR platforms have a more significant impact on the experience of long-term users compared to short-term users. The scalability limitations of these platforms, as evidenced by decreased frame rates with the increasing number of concurrent users, result in an increased prevalence of motion sickness among long-term users, negatively impacting their overall experience. Moreover, the absence of highly interactive events also deteriorates their overall experience, and the low resolution combined with the lack of interactive events further decreases their sense of social presence. Additionally, our study demonstrates several common limitations negatively affecting the experience of both long-term and short-term users. For example, the harassment prevention mechanisms on all five platforms are inadequate, and being harassed has a detrimental effect on users’ overall experience and sense of social presence. The avatar embodiment of investigated platforms has limited contribution to users’ sense of social presence, mainly due to the lack of realism and full-body tracking. Our findings call for more research in scalability support, motion sickness relief, interactive event design, harassment prevention, and avatar development for improving social VR platforms in the future.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of critical heat flux enhancement on nanoengineered surfaces in pressurized subcooled flow boiling using infrared thermometry</title>
<link href="https://hdl.handle.net/1721.1/164185" rel="alternate"/>
<author>
<name>Wang, Chi</name>
</author>
<author>
<name>Su, Guanyu</name>
</author>
<author>
<name>Akinsulire, Olorunsola</name>
</author>
<author>
<name>Zhang, Limiao</name>
</author>
<author>
<name>Rahman, Md Mahamudur</name>
</author>
<author>
<name>Bucci, Matteo</name>
</author>
<id>https://hdl.handle.net/1721.1/164185</id>
<updated>2025-12-04T03:14:24Z</updated>
<published>2023-03-28T00:00:00Z</published>
<summary type="text">Investigation of critical heat flux enhancement on nanoengineered surfaces in pressurized subcooled flow boiling using infrared thermometry
Wang, Chi; Su, Guanyu; Akinsulire, Olorunsola; Zhang, Limiao; Rahman, Md Mahamudur; Bucci, Matteo
Enhancing the flow boiling critical heat flux (CHF) is beneficial to the economics and safety margins of many industrial applications cooled by boiling heat transfer. While many studies have shown that surfaces with hydrophilic nanoscale and micro-scale features can enhance CHF in pool boiling, it is still not clear how these engineered surfaces affect the CHF in subcooled flow boiling at ambient pressure, let alone high-pressure conditions. Here, two nano-engineered surfaces, i.e., a surface coated with a porous layer of hydrophilic silica nanoparticles and a surface coated with zinc oxide nanowires, were tested. Flow boiling tests with a 10 K subcooling and a mass flux of 1000 kg/(m2·s) were conducted at 1 bar and 4 bars using infrared thermometry diagnostics. At 1 bar, the CHF enhancement is around 15% for both coatings. At 4 bars, the CHF enhancement is around 17% for the nanowire surface, and around 25% for the nano-porous surface. Infrared thermometry measurements reveal that the CHF enhancement comes from an increase of both two-phase heat transfer and single-phase heat transfer mechanisms, which is due to a change of bubble dynamics on the nanoengineered surfaces. It is also shown that the boiling crisis can be predicted using a percolation model based on Monte Carlo (MC) simulations.
</summary>
<dc:date>2023-03-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Connecting Digitalization and Sustainability: Proptech in the Real Estate Operations and Management</title>
<link href="https://hdl.handle.net/1721.1/164184" rel="alternate"/>
<author>
<name>Tan, Zhengzhen</name>
</author>
<author>
<name>Miller, Norm G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164184</id>
<updated>2025-12-04T03:14:31Z</updated>
<published>2023-04-27T00:00:00Z</published>
<summary type="text">Connecting Digitalization and Sustainability: Proptech in the Real Estate Operations and Management
Tan, Zhengzhen; Miller, Norm G.
Digitalization of building operations and maintenance enables real-time monitoring, optimization, and automation for environmental sustainability. Proptech startups are important change agents in accelerating building digitalization. While many researchers analyze economic and environmental savings from deployment of digital technology, far less attention has been devoted to the challenges proptech startups face in transforming efficiency gains into viable businesses. We analyze the Unissu global proptech startup database to reveal the scope and competitive landscape of proptech solutions. We conduct interviews with building owners/operators to understand what impedes the adoption of proptech solutions. Despite rapid growth, ongoing challenges remain for sustainability-focused proptech firms with three adoption barriers: (1) integration of the technology stacks; (2) integration of technology stacks with business processes; and (3) integration of owner/operators’ and the occupants’ solutions. Proptech firms whose applications work with existing infrastructure, or that provide more complete holistic solutions with extensive capital reserves, are more likely to survive. Other pathways include having data standardization and security protocols in place; technology partnerships with technology incumbents; and effective communication with owners/operators to fill the knowledge gap. These findings can provide insights to emerging digital proptech startups as they spearhead market adoption in the real estate sector and monetize the sustainability value creation.
</summary>
<dc:date>2023-04-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driver response and recovery following automation initiated disengagement in real-world hands-free driving</title>
<link href="https://hdl.handle.net/1721.1/164183" rel="alternate"/>
<author>
<name>Gershon, Pnina</name>
</author>
<author>
<name>Mehler, Bruce</name>
</author>
<author>
<name>Reimer, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/164183</id>
<updated>2025-12-04T03:14:22Z</updated>
<published>2023-03-29T00:00:00Z</published>
<summary type="text">Driver response and recovery following automation initiated disengagement in real-world hands-free driving
Gershon, Pnina; Mehler, Bruce; Reimer, Bryan
Objective
Advanced driver assistance systems are increasingly available in consumer vehicles, making the study of drivers’ behavioral adaptation and the impact of automation beneficial for driving safety. Concerns over drivers being out of the loop, coupled with known limitations of automation, have led research to focus on time-critical, system-initiated disengagements. This study used real-world data to assess drivers’ response to, and recovery from, automation-initiated disengagements by quantifying changes in visual attention, vehicle control, and time to steady-state behaviors.

Methods
Fourteen drivers each drove, for one month, a Cadillac CT6 equipped with Super Cruise (SC), a partial automation system that, when engaged, enables hands-free driving. The vehicles were instrumented with data acquisition systems recording driving kinematics, automation use, GPS, and video. The dataset included 265 SC-initiated disengagements identified across 5,514 miles driven with SC.

Results
Linear quantile mixed-effects models of glance behavior indicated that following SC-initiated disengagement, the proportions of glances to the Road decreased (Q50Before=0.91, Q50After=0.69; Q85Before=1.0, Q85After=0.79), the proportions of glances to the Instrument Cluster increased (Q50Before=0.14, Q50After=0.25; Q85Before=0.34, Q85After=0.45), and mean glance duration to the Road decreased by 4.86 sec in Q85. Multinomial logistic regression mixed models of glance distributions indicated that the number of transitions between glance locations following disengagement increased by 43% and that glances were distributed across fewer locations. When driving hands-free, takeover time was significantly longer (2.4 sec) compared to when driving with at least one hand on the steering wheel (1.8 sec). Analysis of moment-to-moment distributional properties of visual attention and steering wheel control following disengagement indicated that on average it took drivers 6.1 sec to start the recovery of glance behavior to the Road and 1.5 sec to reach trend-stationary proportions of at least one hand on the steering wheel.

Conclusions
Automation-initiated disengagements triggered substantial changes in driver glance behavior, including shorter on-road glances and frequent transitions between Road and Instrument Cluster glance locations. This information-seeking behavior may capture drivers’ search for information related to the disengagement or the automation state and is likely shaped by the automation design. The study findings can inform the design of more effective driver-centric information displays for smoother transitions and faster recovery.
</summary>
<dc:date>2023-03-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lessons in Sanctions-Proofing from Russia</title>
<link href="https://hdl.handle.net/1721.1/164182" rel="alternate"/>
<author>
<name>Glenn, Caileigh</name>
</author>
<id>https://hdl.handle.net/1721.1/164182</id>
<updated>2025-12-04T03:14:33Z</updated>
<published>2023-04-04T00:00:00Z</published>
<summary type="text">Lessons in Sanctions-Proofing from Russia
Glenn, Caileigh
Government actors and other observers across Europe and the United States called the multilateral sanctions imposed on Russia in early 2022 “unprecedented.” Even Russian President Vladimir Putin acknowledged their severity when he stressed “the need to counter economic restrictions that were imposed on us, which are truly unprecedented without any exaggeration.” Part of the response to the Russian invasion of Ukraine, these financial and trade sanctions—imposed on Russia by Western governments—target key firms in the financial and energy sectors, debt financing, technology, Russia’s foreign currency reserves, and more recently, most Russian oil and transportation insurers.
</summary>
<dc:date>2023-04-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Why Haven’t We Applied the Lessons from Lean to Innovation?</title>
<link href="https://hdl.handle.net/1721.1/164181" rel="alternate"/>
<author>
<name>Wright, Randall S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164181</id>
<updated>2025-12-04T03:14:32Z</updated>
<published>2023-04-20T00:00:00Z</published>
<summary type="text">Why Haven’t We Applied the Lessons from Lean to Innovation?
Wright, Randall S.
Yes, I know. People have been doing Lean innovation—increasing efficiency by capturing customer feedback early and often and minimizing waste in the product development cycle—for the last 10 years. I’m not talking about applying Lean principles to innovation. I’m talking about how American business leaders had the humility to admit their firms needed to learn Lean from Japanese culture to master globally competitive operations, and why they now need to learn innovation from the culture of universities to master globally competitive innovation.
</summary>
<dc:date>2023-04-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Becoming Infrastructure: A Critical Realist Account of the Evolution of DHIS2 as Digital Public Health Infrastructure in Sierra Leone</title>
<link href="https://hdl.handle.net/1721.1/164180" rel="alternate"/>
<author>
<name>Ndubuisi-Obi, Innocent</name>
</author>
<author>
<name>Chen, Nuole</name>
</author>
<author>
<name>Tsai, Lily</name>
</author>
<id>https://hdl.handle.net/1721.1/164180</id>
<updated>2025-12-04T03:14:04Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Becoming Infrastructure: A Critical Realist Account of the Evolution of DHIS2 as Digital Public Health Infrastructure in Sierra Leone
Ndubuisi-Obi, Innocent; Chen, Nuole; Tsai, Lily
Today, the District Health Information System 2 (DHIS2) has become the de facto standard for open-source health management information systems, and Sierra Leone’s status as the first country in sub-Saharan Africa to implement DHIS2 makes it a productive place for researchers interested in understanding the end-to-end process of infrastructuring in a low-resource bureaucratic setting. In this article, we examine its design, implementation, and maintenance in Sierra Leone over a period of 14 years, from 2008 to 2022. We present an intensive case study discretized by three morphogenetic cycles (decentralization, centralization, and fragmentation) and furnished with explanatory accounts of DHIS2’s evolution, using a critical realist research methodology to describe the emergence of DHIS2 as digital public health infrastructure. These accounts highlight the structural and cultural systems of DHIS2, their elaborations, and their interaction with agents over successive periods of DHIS2’s evolution. Our study finds that, despite its continued use in Sierra Leone, the increasing generativity in the structural and cultural systems of DHIS2 and Sierra Leone’s public health system engenders a persistent instability that requires continuous resolution. Though we find that extant literature aids in our understanding of DHIS2’s evolution, we proffer two mechanisms, infrastructural capture and socio-technical debt, which aid our explanation of events observed in our case study. Our work makes a case for more ontologically-diverse theorizing of bureaucracy-aware computing systems.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factorization in additive monoids of evaluation polynomial semirings</title>
<link href="https://hdl.handle.net/1721.1/164179" rel="alternate"/>
<author>
<name>Ajran, Khalid</name>
</author>
<author>
<name>Bringas, Juliet</name>
</author>
<author>
<name>Li, Bangzheng</name>
</author>
<author>
<name>Singer, Easton</name>
</author>
<author>
<name>Tirador, Marcos</name>
</author>
<id>https://hdl.handle.net/1721.1/164179</id>
<updated>2025-12-04T03:14:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Factorization in additive monoids of evaluation polynomial semirings
Ajran, Khalid; Bringas, Juliet; Li, Bangzheng; Singer, Easton; Tirador, Marcos
For a positive real α, we can consider the additive submonoid M of the real line that is generated by the nonnegative powers of α. When α is transcendental, M is a unique factorization monoid. However, when α is algebraic, M may not be atomic, and even when M is atomic, it may contain elements having more than one factorization (i.e., decomposition as a sum of irreducibles). The main purpose of this paper is to study the phenomenon of multiple factorizations inside M. When α is algebraic but not rational, the arithmetic of factorizations in M is highly interesting and complex. In order to arrive at that conclusion, we investigate various factorization invariants of M, including the sets of lengths, sets of Betti elements, and catenary degrees. Our investigation gives continuity to recent studies carried out by Chapman et al. in 2020 and by Correa-Morris and Gotti in 2022.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Argentella scandal: why French officials did not make Corsica a nuclear test site in 1960</title>
<link href="https://hdl.handle.net/1721.1/164178" rel="alternate"/>
<author>
<name>Cooper, Austin R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164178</id>
<updated>2025-12-04T03:14:29Z</updated>
<published>2023-04-17T00:00:00Z</published>
<summary type="text">The Argentella scandal: why French officials did not make Corsica a nuclear test site in 1960
Cooper, Austin R.
Top French officials made plans in early 1960 to transform an abandoned silver mine in Corsica, called the Argentella Massif, into an underground site for nuclear explosions. By June 1960, they had canceled these plans. This article shows how a mass movement on the Mediterranean island forced their hand, and it explains why Corsicans of diverse political affiliations took to the streets. The Argentella project—and the health, environmental, and strategic risks that it entailed—looked in Corsica like evidence that Paris saw the islanders as second-class citizens, even residents of an internal colony. French police intelligence, which maintained surveillance on the Corsican anti-nuclear movement, feared that this movement might have drawn inspiration from the contemporaneous struggle for national liberation in Algeria, where French nuclear explosions began. The Argentella protests illustrated national disagreements about French nuclear ambitions that previous scholarship, proposing official consensus, has minimized. They show how, in a nuclear-armed democracy, local officials, political activists, and ordinary citizens can shape nuclear-weapons policy. But Corsican anti-nuclear action in 1960 did not demand disarmament. These protests also illuminate a longer trajectory in French nuclear history, which involved atmospheric explosions in colonized territories in Algeria and Polynesia until the 1970s, despite local and international resistance.
</summary>
<dc:date>2023-04-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Foundation Model for Spatiotemporal Data Analysis</title>
<link href="https://hdl.handle.net/1721.1/164177" rel="alternate"/>
<author>
<name>Wu, Yuankai</name>
</author>
<author>
<name>Chen, Xinyu</name>
</author>
<author>
<name>Zhuang, Dingyi</name>
</author>
<id>https://hdl.handle.net/1721.1/164177</id>
<updated>2025-12-04T03:13:57Z</updated>
<published>2025-10-14T00:00:00Z</published>
<summary type="text">Towards Foundation Model for Spatiotemporal Data Analysis
Wu, Yuankai; Chen, Xinyu; Zhuang, Dingyi
Spatiotemporal data modeling has long been a fundamental task across disciplines such as climate &amp; environmental science and transportation engineering. A typical goal is to estimate unknown information at specific spatiotemporal points based on partially observed data—for example, interpolating weather conditions at unmeasured locations, reconstructing missing historical records, or forecasting the future trajectories of financial markets. These are all core tasks within the broader scope of spatiotemporal modeling. This tutorial (1 hour) introduces a cohesive view of spatiotemporal data modeling, tracing the evolution from traditional statistical approaches to modern deep learning paradigms. We begin by revisiting Kriging and time series decomposition to highlight the essential assumptions and strengths of these classical methods. Next, we explore low-rank matrix and tensor completion techniques, which leverage the structured patterns of spatiotemporal data. We then elaborate on spatiotemporal graph neural networks, which characterize complex dependencies by integrating graph structures with dynamic temporal features. Finally, we discuss recent advances in applying large foundation models to spatiotemporal tasks, including their capabilities and current limitations. Throughout the tutorial, we emphasize how lessons from traditional methods—such as the importance of locality, periodicity, and smoothness priors—can inspire new directions for developing and fine-tuning foundation models in the spatiotemporal domain. We conclude by outlining key challenges and opportunities in bridging classical wisdom with emerging AI capabilities.
SSTD ’25, Osaka, Japan
</summary>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gender gaps in South Korea’s labour market: children explain most of the gender employment gap, but little of the gender wage gap</title>
<link href="https://hdl.handle.net/1721.1/164176" rel="alternate"/>
<author>
<name>Stansbury, Anna</name>
</author>
<author>
<name>Kirkegaard, Jacob Funk</name>
</author>
<author>
<name>Dynan, Karen</name>
</author>
<id>https://hdl.handle.net/1721.1/164176</id>
<updated>2025-12-04T03:14:15Z</updated>
<published>2023-05-03T00:00:00Z</published>
<summary type="text">Gender gaps in South Korea’s labour market: children explain most of the gender employment gap, but little of the gender wage gap
Stansbury, Anna; Kirkegaard, Jacob Funk; Dynan, Karen
South Korea’s gender wage and employment gaps are among the largest in the OECD. Using labour force survey data over 2010–19, we estimate gender wage and employment gaps, and child earnings penalties, for women aged 25–54. We show (i) that the large gender gaps in South Korea’s labour market are mostly not a function of differential sorting by gender along education, occupation, or industry lines, (ii) that caring for children (and, perhaps increasingly, for the elderly) is the major factor inhibiting women’s labour force participation, and (iii) that large gender wage gaps exist even for women without care responsibilities. These findings suggest that improving opportunities for work–family balance is crucial to helping increase women’s labour force participation, but may do little to close gender wage gaps: other major obstacles also appear to stand in the way of Korean women’s full inclusion in the labour force.
</summary>
<dc:date>2023-05-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Madman or Mad Genius? The International Benefits and Domestic Costs of the Madman Strategy</title>
<link href="https://hdl.handle.net/1721.1/164175" rel="alternate"/>
<author>
<name>Schwartz, Joshua A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164175</id>
<updated>2025-12-04T03:14:13Z</updated>
<published>2023-05-04T00:00:00Z</published>
<summary type="text">Madman or Mad Genius? The International Benefits and Domestic Costs of the Madman Strategy
Schwartz, Joshua A.
According to the “Madman Theory” outlined by Daniel Ellsberg and Thomas C. Schelling, and embraced by Presidents Richard Nixon and Donald Trump, being perceived as mad can help make seemingly incredible threats—such as starting a nuclear war—more credible. However, recent research has largely concluded that the Madman Theory does not work. In this study, I theorize that the international benefits of the Madman Theory have been underestimated, but also that there are significant domestic barriers associated with adopting such a strategy that undermine its effectiveness. Through a series of five novel survey experiments, I find evidence that perceived madness provides limited advantages in coercive bargaining vis-à-vis foreign adversaries, but it also entails significant domestic costs that potentially erode its efficacy. Overall, this study provides clearer support for the Madman Theory than most previous literature has found, but also breaks new theoretical ground by analyzing the domestic politics of perceived madness.
</summary>
<dc:date>2023-05-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Purrfect Pitch: Exploring Pitch Interval Learning through an Audio-Haptic Interface</title>
<link href="https://hdl.handle.net/1721.1/164174" rel="alternate"/>
<author>
<name>Chin, Sam</name>
</author>
<author>
<name>Fang, Cathy Mengying</name>
</author>
<author>
<name>Singh, Nikhil</name>
</author>
<author>
<name>Ibrahim, Ibrahim</name>
</author>
<author>
<name>Paradiso, Joe</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164174</id>
<updated>2025-12-04T03:13:59Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">Purrfect Pitch: Exploring Pitch Interval Learning through an Audio-Haptic Interface
Chin, Sam; Fang, Cathy Mengying; Singh, Nikhil; Ibrahim, Ibrahim; Paradiso, Joe; Maes, Pattie
We introduce Purrfect Pitch, a system consisting of a wearable haptic device and a custom-designed learning interface for musical ear training. We focus on the ability to identify pitch intervals (sequences of two musical notes), a perceptually ambiguous task that usually requires rote training. With our system, users hear two tones while simultaneously receiving two corresponding vibrotactile stimuli on the back. Providing haptic feedback on the back makes the auditory distance between tones salient, and the back-worn design is comfortable and unobtrusive. During training, users receive multi-sensory feedback from our system and input their guessed interval value on our web-based learning interface. Our study with 18 participants shows that our system enables novice learners to identify intervals more accurately and consistently than those who only received audio feedback, even after removing the haptic feedback. We also share further insights on designing a multisensory learning system.
AHs 2025, Masdar City, Abu Dhabi, United Arab Emirates
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Teaching AI to Feel: A Collaborative, Full-Body Exploration of Emotive Communication</title>
<link href="https://hdl.handle.net/1721.1/164173" rel="alternate"/>
<author>
<name>Lemus, Lissette</name>
</author>
<author>
<name>Pilcher, Kris</name>
</author>
<author>
<name>Sprengel, Holger</name>
</author>
<author>
<name>Sabater-Mir, Jordi</name>
</author>
<author>
<name>Tütüncü, Esen K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164173</id>
<updated>2025-12-04T03:13:56Z</updated>
<published>2025-10-27T00:00:00Z</published>
<summary type="text">Teaching AI to Feel: A Collaborative, Full-Body Exploration of Emotive Communication
Lemus, Lissette; Pilcher, Kris; Sprengel, Holger; Sabater-Mir, Jordi; Tütüncü, Esen K.
Commonaiverse is an interactive installation exploring human emotions through full-body motion tracking and real-time AI feedback. Participants engage in three phases: Teaching, Exploration and the Cosmos Phase, collaboratively expressing and interpreting emotions with the system. The installation integrates MoveNet for precise motion tracking and a multi-recommender AI system to analyze emotional states dynamically, responding with adaptive audiovisual outputs. By shifting from top-down emotion classification to participant-driven, culturally diverse definitions, we highlight new pathways for inclusive, ethical affective computing. We discuss how this collaborative, out-of-the-box approach pushes multimedia research beyond single-user facial analysis toward a more embodied, co-created paradigm of emotional AI. Furthermore, we reflect on how this reimagined framework fosters user agency, reduces bias, and opens avenues for advanced interactive applications.
MM ’25, October 27–31, 2025, Dublin, Ireland
</summary>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Personalized Animations for Affective Feedback: Generative AI Helps to Visualize Skin Conductance</title>
<link href="https://hdl.handle.net/1721.1/164172" rel="alternate"/>
<author>
<name>Scheirer, Jocelyn</name>
</author>
<author>
<name>Picard, Rosalind</name>
</author>
<author>
<name>Cantrell, Aubrey</name>
</author>
<id>https://hdl.handle.net/1721.1/164172</id>
<updated>2025-12-04T03:13:49Z</updated>
<published>2025-10-26T00:00:00Z</published>
<summary type="text">Personalized Animations for Affective Feedback: Generative AI Helps to Visualize Skin Conductance
Scheirer, Jocelyn; Picard, Rosalind; Cantrell, Aubrey
Biofeedback interfaces traditionally rely on abstract visualizations, tones, or haptics to convey physiological states, but these often lack personal relevance, emotional salience, and engagement. In this paper, we present a novel system that bridges wearable sensing and generative AI to create real-time, personalized animated biofeedback experiences. Users describe emotionally meaningful objects or scenes to a language model in our system, which generates customized Processing animations. These animations are then dynamically driven by electrodermal activity (EDA) signals from a wrist sensor. We co-design and evaluate the system with autistic adults, many of whom have unique “special interests” that are likely to engage them more than a one-size-fits-all visualization. Many of these individuals also have difficulty with interoception: feeling or sensing their own internal and physiological state changes. We built this tool to transform passive physiological monitoring into an interactive multimedia experience, where the visual representation of the body is authored by the user. We introduce a prompt-engineered GPT-based interface that streamlines code generation, sensor mapping, and iterative refinement, requiring no prior coding expertise. The technical pipeline we built includes signal filtering, dynamic parameter mapping, and natural-language-based customization, delivering a real-time, visually immersive feedback loop. We report on initial case studies with 12 autistic adults using the system, which highlight both the expressive potential and individual variability of user responses, reinforcing the need for adaptable multimedia frameworks in health technologies. By merging real-time physiological data with generative animation and natural language interaction, this work expands the creative frontier of personalized affective biofeedback. We also address ethical challenges arising from using AI with physiological sensors.
MRAC '25, October 27–28, 2025, Dublin, Ireland
</summary>
<dc:date>2025-10-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hands-on Strategies for Teaching Social and Societal Impacts of Computing</title>
<link href="https://hdl.handle.net/1721.1/164171" rel="alternate"/>
<author>
<name>Kurkovsky, Stan</name>
</author>
<author>
<name>Nnamani, Manee Ngozi</name>
</author>
<author>
<name>Hunter, Aaron</name>
</author>
<author>
<name>Sobomehin, Olatunde</name>
</author>
<author>
<name>Braught, Grant</name>
</author>
<author>
<name>Goldweber, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164171</id>
<updated>2025-12-04T03:13:47Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">Hands-on Strategies for Teaching Social and Societal Impacts of Computing
Kurkovsky, Stan; Nnamani, Manee Ngozi; Hunter, Aaron; Sobomehin, Olatunde; Braught, Grant; Goldweber, Michael
The topic of hands-on strategies for teaching the social and societal impacts of computing is of growing interest to the computer science education community because it addresses a critical gap in traditional CS curricula [7]. While technical skills remain central, educators increasingly recognize the need to prepare students for the ethical, social, and human-centered challenges posed by modern computing technologies. From AI-driven decision-making to digital accessibility and data privacy, computing profoundly affects individuals and communities, making it essential for students to engage with these issues through experiential learning [12]. Different viewpoints on this topic emerge based on pedagogical approaches, disciplinary perspectives, and technological optimism or skepticism. Some educators advocate for integrating service-learning and community-based projects, arguing that real-world engagement fosters empathy and ethical awareness. Others emphasize case studies and simulations, providing structured exposure to societal challenges without the unpredictability of external partnerships. Additionally, viewpoints may diverge on the role of AI: while some see AI tools as an opportunity to enhance social good, others worry they may exacerbate biases and reduce human agency in computing. Despite these differences, there is broad agreement that computing education must go beyond technical training to include a deeper understanding of computing’s role in society.
CompEd 2025, October 21–25, 2025, Gaborone, Botswana
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive Storybooks for Early AI Literacy</title>
<link href="https://hdl.handle.net/1721.1/164170" rel="alternate"/>
<author>
<name>Pu, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/164170</id>
<updated>2025-12-04T03:09:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interactive Storybooks for Early AI Literacy
Pu, Isabella
As artificial intelligence (AI) becomes increasingly present in children's everyday environments, there is an urgent need for developmentally appropriate tools that help young learners understand and shape these technologies. To be effective, these tools must not only successfully convey complex concepts but also engage children in ways that are meaningful, accessible, and fun.

This thesis introduces the Interactive Storybooks for Early AI Literacy, a series of ten interactive storybooks for children ages 6–9 that combine narrative, mini-games, and scaffolded creative AI interactions to teach core AI and robotics concepts. The storybooks follow an overarching narrative featuring a friendly robot, Doodlebot, who must learn creative tasks with the child's help, framing the child as an AI designer and introducing them to the concept of training AI models through the narrative. The storybooks additionally contain interactive games and activities which help keep kids excited and engaged, while providing structured opportunities to experiment with and explore AI creation tools.

First, a pilot study was conducted at a community summer camp with four Interactive Storybooks. Children expressed joy and pride in their AI creations, used the characters as emotional anchors for learning, and began to successfully articulate key AI concepts. Four engagement archetypes emerged: the Reader, the Gamer, the Showcaser, and the Social Connector, each representing a distinct way children interacted with the storybooks. However, despite behavioral signs of engagement, many children described the narrative portions as boring and claimed to prefer games.

To explore this tension, a home deployment study compared two versions of the system: a "Books" condition with the full narrative and a "Games" condition with only instructional text. Both conditions included the same mini-games and AI interactions. While children in both groups reported similar levels of enjoyment, those in the Books condition showed significantly higher learning gains, greater increases in perceived knowledge and confidence, and stronger connections to the characters. Children in the Books condition also more frequently referenced the narrative when describing AI concepts and demonstrated more creative and iterative behavior during and after gameplay.

Overall, these findings suggest that combining storytelling, gameplay, and creative AI interactions is an effective and engaging approach to teaching AI and robotics to young children. Narrative context appears to support concept recall, deepen emotional investment, and promote thoughtful experimentation, even with complex concepts for this age group, like AI and robotics. Based on insights from both studies, this thesis concludes with six design recommendations for creating developmentally appropriate, emotionally resonant AI education tools for early learners using narrative and play.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Machine Learning over Fragmented Data</title>
<link href="https://hdl.handle.net/1721.1/164169" rel="alternate"/>
<author>
<name>Singh, Abhishek</name>
</author>
<id>https://hdl.handle.net/1721.1/164169</id>
<updated>2025-12-04T03:07:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralized Machine Learning over Fragmented Data
Singh, Abhishek
The remarkable scaling of data and computation has unlocked unprecedented capabilities in text and image generation, raising the question: Why hasn’t healthcare seen similar breakthroughs? This disparity stems primarily from healthcare data being fragmented across thousands of institutions, each safeguarding patient records in regulatory-compliant silos. The problem is not limited to healthcare but extends to other industries with fragmented data across institutions and individuals. Instead of centralizing various datasets to solve the fragmentation problem, which raises regulatory and ethical concerns, this thesis proposes systems and algorithms to decentralize the machine learning pipeline. Current approaches in this area have centered around Federated Learning (FL), which enables model training over distributed data. However, FL’s dependence on central coordination and inflexibility with heterogeneous systems limit its applicability in healthcare settings. Motivated by these challenges, I explore the following three core themes:

1) Coordination – Today’s coordination algorithms typically rely on static rules or randomized communication, approaches that turn out to be sub-optimal when data heterogeneity is high. I present a new system and a benchmark framework that enables systematic assessment of different coordination algorithms. Next, I propose an adaptive coordination algorithm that leverages historical performance and learning dynamics to improve coordination.

2) Heterogeneity – Data owners can vary significantly in their data distributions, computational resources, and privacy requirements. To address this heterogeneity, I turn the focus from the traditionally protected training phase to securing the critical inference process. Next, I develop techniques for distributed training that adapt to heterogeneous computational capabilities across different agents.

3) Scalability – Enabling scaling in decentralized ML requires addressing three key challenges: parallelization, synchronization, and self-scaling. While parallelization has advanced significantly, the other two remain challenging. I present a framework for offline collaboration through sanitized, synthetic datasets that eliminates constant synchronization needs while preserving privacy.

This thesis identifies and addresses some of the bottlenecks along these three core themes through a complementary set of solutions: adaptive coordination, heterogeneity-aware training, and scalable collaboration. Together, these building blocks can enable a practical framework for unlocking data silos across institutions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interplay between spatial structure and competition in ecological communities</title>
<link href="https://hdl.handle.net/1721.1/164168" rel="alternate"/>
<author>
<name>Swartz, Daniel W.</name>
</author>
<id>https://hdl.handle.net/1721.1/164168</id>
<updated>2025-12-04T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interplay between spatial structure and competition in ecological communities
Swartz, Daniel W.
Ecology, much like physics, has a long history of theoretical contribution. In this thesis, we take a physics approach to describing ecological communities, searching for simple, emergent features that can generalize beyond specific models of community dynamics. Unifying all of the models we study is an underlying spatial structure, leading to a richer set of possible behaviors than a typical well-mixed model. We first study the case of a metapopulation, a collection of smaller communities linked by dispersal. We find that when the environment is allowed to fluctuate stochastically, new growth laws emerge at the single species level, and high diversity is achieved in the case with many species. We then study the case of pathogen evolution, again in the metapopulation framework. We find that intermediate dispersal can act as a strong driver of pathogen evolution. We also study what happens as a population of microbes expands into unexplored territory, known as a range expansion. We find that a simple model can capture all morphological phases observed in experiments and predict invasion fitness as a function of local and global competitive ability. We also break a standard assumption in microbial ecology, the isotropy of space, and find that a new sector morphology emerges.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Algorithmic Cookbook of Quantum Science: Quantum and Classical Recipes for Computation</title>
<link href="https://hdl.handle.net/1721.1/164167" rel="alternate"/>
<author>
<name>Martyn, John Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164167</id>
<updated>2025-12-04T03:07:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Algorithmic Cookbook of Quantum Science: Quantum and Classical Recipes for Computation
Martyn, John Michael
Since the dawn of science, computation and physics have evolved alongside each other, both driven by a shared quest to solve problems and calculate properties of the natural world. Today, this symbiotic relationship is epitomized in quantum information science, which proposes to use quantum mechanics to solve hard computational problems and develop new paradigms of communication and cryptography. Yet often absent from these developments is a clear, human-interpretable understanding, with many quantum protocols built from inherently quantum concepts (e.g., entanglement, superposition) that defy our classical line of thought and muddle the search for efficient quantum algorithms. Here we show that this search need not be so opaque: simple mathematical tools, namely polynomials and their fundamental theorems, in unison with concepts from classical computing, provide a powerful framework for the design of quantum algorithms. We develop this framework and use it to construct an assortment of quantum algorithms, including methods for quantum simulation, parallel computing, randomized algorithms, and continuous-variable quantum hardware. In illuminating this framework, we find a striking bidirectional flow: just as classical concepts inspire new quantum algorithms, so too can quantum mechanical insights bring about novel methods of classical computing. In this reverse direction, we adopt inherently quantum concepts, such as random compilation and bosonic symmetry, to develop new classical methods, with applications in simulating quantum systems and designing robust neural networks. In aggregate, this thesis provides a compendium of algorithmic techniques for probing quantum systems and solving hard problems, using both quantum and classical tools—an “algorithmic cookbook”—predicated on deep connections between these two domains. The recipes presented here aim to demystify black boxes of quantum information science, and provide a valuable resource for future developments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Death of Quasiparticles: Strongly Interacting Gapless Phases with Fermi Surfaces and Fractional Statistics</title>
<link href="https://hdl.handle.net/1721.1/164166" rel="alternate"/>
<author>
<name>Shi, Zhengyan</name>
</author>
<id>https://hdl.handle.net/1721.1/164166</id>
<updated>2025-12-04T03:07:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Death of Quasiparticles: Strongly Interacting Gapless Phases with Fermi Surfaces and Fractional Statistics
Shi, Zhengyan
The emergence of quasiparticles at low temperature provides a powerful organizing principle for many quantum phases of matter, ranging from conventional magnets and superconductors to exotic insulators with topological order. In this thesis, I describe my research in gapless quantum phases in which the framework of quasiparticles breaks down. The main characters are two categories of gapless phases that feature the interplay between strong interactions and two additional ingredients – Fermi surfaces and fractional statistics. Chapter 2 through Chapter 5 focus on strongly interacting metals with Fermi surfaces. The most salient examples are a class of Hertz-Millis models describing the onset of spontaneous symmetry breaking in a metallic environment. At the quantum critical point, gapless order parameter fluctuations destroy quasiparticles living on the Fermi surface, giving rise to a strongly coupled non-Fermi liquid metal. A key result of these chapters is the identification of an infinite-dimensional symmetry that survives in these non-Fermi liquid metals despite the death of quasiparticles. This infinite-dimensional symmetry and its quantum anomaly lead to a series of non-perturbative results on thermodynamics and transport, which are confirmed by perturbative diagrammatic calculations in special examples. Chapter 6 through Chapter 8 explore quantum phases in which anyonic quasiparticles with fractional statistics play an essential role. When parameters in the system are tuned to close the anyon energy gaps, the original anyons lose their coherence and a variety of novel phases emerge. A highlight in this direction is a new mechanism for topological superconductivity in itinerant abelian and non-abelian anyon fluids, which could make contact with experiments on doped fractional quantum anomalous Hall states in the near future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Particles Inside Particles: The Flow of Energy in Quarks, Gluons, and Jets</title>
<link href="https://hdl.handle.net/1721.1/164165" rel="alternate"/>
<author>
<name>Alipour-fard, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/164165</id>
<updated>2025-12-04T03:07:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Particles Inside Particles: The Flow of Energy in Quarks, Gluons, and Jets
Alipour-fard, Samuel
This thesis presents the author’s work in developing probes of the inner structure of jets in high-energy particle collisions. We begin by introducing QCD and the scattering of partons (quarks and gluons), discussing jets as theoretical and experimental proxies for partonic physics, and presenting the partonic cascade model of jet formation and jet substructure. Noting the ubiquitous presence of low-energy pollution in particle collision events, in the forms of hadronization, detector effects, the underlying event (UE), and pileup (PU), we then move towards the modern research area of developing pollution-insensitive probes of jet substructure. Pollution-insensitive features of jet substructure are often accessed theoretically either through jet grooming or energy-weighted correlation functions. We present the basics of the modern theory of jet grooming as well as the work of the author in developing the Piranha paradigm for continuous jet grooming, introduced by the author in Ref. [1], and explore the formal and phenomenological benefits of continuous grooming techniques as pollution-insensitive probes of jet substructure. We introduce the basics of the simplest energy-weighted correlation function – the energy-energy correlator (EEC), which probes angular correlations between particle pairs – and discuss its multi-particle analogues. We focus on the efficient and visually intuitive projected and resolved energy correlators introduced by the author in Ref. [2], which provide computationally-realistic, pollution-insensitive probes of angular many-body correlations in QCD jets. Finally, we exposit the generic theory of energy-weighted observable correlations (EWOCs), introduced by the author in Ref. [3], which utilizes the energy weighting of the EEC to provide pollution-insensitive probes of non-angular correlations within jets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solid-state cavity quantum electrodynamics with spin ensembles</title>
<link href="https://hdl.handle.net/1721.1/164164" rel="alternate"/>
<author>
<name>Wang, Hanfeng</name>
</author>
<id>https://hdl.handle.net/1721.1/164164</id>
<updated>2025-12-04T03:07:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Solid-state cavity quantum electrodynamics with spin ensembles
Wang, Hanfeng
Quantum sensors have the potential to operate at fundamental physical performance limits. Among various quantum sensing platforms, solid-state spin emitters stand out due to advantageous characteristics such as room-temperature spin polarization and readout, atomic-scale spatial resolution, and extended coherence times. Despite these strengths, traditional optical detection methods exhibit low readout fidelity in solid-state ensembles, severely limiting their achievable sensitivity. This thesis addresses this limitation by coupling a solid-state emitter ensemble to a microwave cavity, forming a cavity quantum electrodynamics system. Our approach eliminates the need for photon collection required by conventional optical readout methods, and the resulting strongly coupled system allows efficient cavity-based probing of the solid-state spin ensemble. By exploiting the hybrid quantum system with cavity quantum electrodynamics, we achieve record-high sensitivity for solid-state quantum sensors, representing a substantial advancement toward achieving fundamental sensing limits.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding the Phase Space of Photons in Matter: From High-Throughput Screening to Atom-by-Atom Engineering</title>
<link href="https://hdl.handle.net/1721.1/164163" rel="alternate"/>
<author>
<name>Ghorashi, Ali</name>
</author>
<id>https://hdl.handle.net/1721.1/164163</id>
<updated>2025-12-04T03:07:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expanding the Phase Space of Photons in Matter: From High-Throughput Screening to Atom-by-Atom Engineering
Ghorashi, Ali
Focusing on the topological band properties of photonic crystals and the plasmonic properties of two-dimensional metals, we seek to answer the question: what is the phase space of photons in matter? For topology, what are the physical parameters that determine whether a given photonic crystal band hosts Dirac points, a non-zero Chern number, or topologically protected corner states? And for plasmons, what are the experimentally addressable ranges of plasmonic dispersions, phase velocities, confinements, and losses? In particular, is it possible to engineer the elusive lossless plasmon? Using high-throughput screening, artificial intelligence, and atom-by-atom engineering through density functional theory, we determine the topological prevalence of photonic bands, propose two systems that evade plasmonic losses through the electron-phonon interaction, and (re)discover general physical laws that govern the geometries of photonic eigenstates.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Sample Efficiency of Data-Driven Decision Making</title>
<link href="https://hdl.handle.net/1721.1/164162" rel="alternate"/>
<author>
<name>Qian, Jian</name>
</author>
<id>https://hdl.handle.net/1721.1/164162</id>
<updated>2025-12-04T03:07:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On the Sample Efficiency of Data-Driven Decision Making
Qian, Jian
This thesis studies the fundamental problem of decision making under uncertainty through the lens of statistical decision theory. We characterize the minimax risk, which captures the sample efficiency required for effective decision making across three key settings: offline estimation with batch data, online estimation with sequential data, and interactive decision making as exemplified by multi-armed bandits and reinforcement learning. The first part of the thesis develops novel algorithmic and theoretical tools to enhance decision making in these regimes and to bridge the gaps between them. We revisit logistic regression in the offline setting and provide guarantees without restrictive boundedness assumptions. We then propose meta-algorithms that reduce online estimation to offline estimation, enabling any offline estimator to be used effectively in online scenarios. Furthermore, we present general-purpose algorithms for interactive decision making problems by leveraging offline or online estimation techniques. The second part of the thesis introduces a unified approach to understanding the fundamental complexity of interactive decision making. We propose the Decision Making with Structured Observation (DMSO) framework, which encompasses bandits, reinforcement learning, and more general settings. Within this framework, we develop a new complexity measure—the Decision-Estimation Coefficient (DEC)—which captures both upper and lower bounds for minimax regret. DEC extends classical notions such as the modulus of continuity to interactive scenarios by introducing an adaptive variant of Le Cam’s method. Finally, we unify the three classical lower bound techniques—Le Cam’s method, Assouad’s lemma, and Fano’s inequality—through a generalized formulation that also incorporates the DEC, offering a comprehensive understanding of the minimax risk in decision making tasks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards achieving power autonomy in soft-actuated micro aerial robots</title>
<link href="https://hdl.handle.net/1721.1/164161" rel="alternate"/>
<author>
<name>Ren, Zhijian</name>
</author>
<id>https://hdl.handle.net/1721.1/164161</id>
<updated>2025-12-04T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards achieving power autonomy in soft-actuated micro aerial robots
Ren, Zhijian
Micro aerial robots with insect-like flight capabilities hold immense promise for various applications, including environmental monitoring, precision agriculture, and infrastructure inspection in confined spaces. However, realizing power autonomy in these miniature robotic platforms presents significant challenges due to weight constraints, power density limitations, and inefficient actuation at small scales. This dissertation presents three essential improvements towards achieving power autonomy in soft-actuated micro aerial robots. Our robotic platform is driven by a dielectric elastomer actuator (DEA) and generates lift through flapping wings, a mechanism similar to that found in flying insects. First, we implemented a dynamic model to optimize the robot components for pairing with an improved DEA to generate a higher lift force. The robot achieved a peak lift-to-weight ratio of 4.3 and demonstrated a 20-second hovering flight with position and attitude errors smaller than 2.5 cm and 2°, respectively. Second, we fabricated a lightweight high-voltage boost converter that transformed a 7 V DC input into an AC waveform of 600 V and 400 Hz to drive the actuator. This is the first onboard boost converter that can drive the soft-actuated micro aerial robot to take off, and it represents a substantial achievement in miniaturizing power electronics for microrobots. Third, we took inspiration from the natural autorotation of maple seeds in their slow descent. We implemented the first samara-inspired mechanism on micro aerial robots, enhancing lift generation while maintaining in-flight attitude stability without feedback control. The 1.22-gram vehicle can stably take off in 1 second with a total input thrust of 1 gram-force. These accomplishments provide a pathway towards achieving power autonomy and open opportunities for developing agile, robust, and autonomous micro aerial robots for diverse applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Energy Electron-Photon Interactions in a Scanning Electron Microscope</title>
<link href="https://hdl.handle.net/1721.1/164160" rel="alternate"/>
<author>
<name>Simonaitis, John</name>
</author>
<id>https://hdl.handle.net/1721.1/164160</id>
<updated>2025-12-04T03:07:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Low-Energy Electron-Photon Interactions in a Scanning Electron Microscope
Simonaitis, John
The interaction of free electrons with matter and light is among the most fundamental processes in nature. From the use of free electrons for atomic imaging to their use in the generation of high-intensity, tunable light in synchrotrons, the physics of unconfined electrons has wide application. In recent years, there has been a new focus on the quantum nature of individual electrons in electron microscopes to enable further improvements in these technologies. This work takes advantage of developments in ultrafast optics, electron spectroscopy, quantum optics, and nanofabrication to explore various electron-electron, electron-photon, and electron-material interactions. In this thesis, we construct a low-energy, ultrafast scanning electron microscope, using it to explore quantum coherent interactions between electrons, light, and matter.

In Chapter 1, we review the history of free-electron experiments and how advances in nanofabrication, low-dimensional materials, and ultrafast optics have opened new opportunities for electron-light interactions to a degree not previously possible. In Chapter 2, we discuss experimental forms of quantum electron microscopy known as interaction-free measurement and electron multi-passing. Chapter 3 details a general theory of electron-photon interactions, including simulations with quantum two-level systems and extended optical nanostructures. In Chapter 4, we design and construct a second microscope with ultrafast triggering, an electron spectrometer with sub-eV resolution, nanostructured interaction regions, and active beam alignment. Chapter 5 explores various experimental results, demonstrating enhanced loss spectroscopy of 2D materials, energy resolution of gold nanoparticle plasmons, and spectroscopy of time-tagged cathodoluminescence from optical fibers. Finally, in Chapter 6, we discuss future perspectives of this approach, analyzing the impact a heralded electron source would have on electron microscopy and lithography.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies of Jet Modification in Heavy Ion Collisions with the CMS Experiment</title>
<link href="https://hdl.handle.net/1721.1/164159" rel="alternate"/>
<author>
<name>Park, Mary Isabelle</name>
</author>
<id>https://hdl.handle.net/1721.1/164159</id>
<updated>2025-12-04T03:07:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Studies of Jet Modification in Heavy Ion Collisions with the CMS Experiment
Park, Mary Isabelle
In the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC), lead ions are collided at ultra-relativistic velocities to produce Quark-Gluon Plasma (QGP), a state of matter where quarks and gluons are deconfined and move collectively. Jets are produced in high-momentum-transfer parton scatterings prior to and independently of QGP formation, and serve as natural probes of its properties. As the high-energy partons pass through the QGP, they lose energy through medium-induced gluon radiation and elastic scattering, resulting in jets that are modified with respect to the vacuum baseline. In this thesis, jet modification is quantified by measuring the jet production cross section as a function of jet radius in inclusive jets and the jet axis decorrelation in jets recoiling from isolated photons in lead-lead (PbPb) and proton-proton (pp) collisions. Both measurements indicate that effects of medium-induced jet broadening may be balanced by survivor bias in PbPb collisions, potentially due to differences in the magnitude of quenching of wide versus narrow jets. The results underline the importance of constraining the initial jet kinematics with bosons, which are unmodified by the QGP.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Drivers of Stratospheric Ozone Change and Fingerprinting its Recovery</title>
<link href="https://hdl.handle.net/1721.1/164158" rel="alternate"/>
<author>
<name>Wang, Peidong</name>
</author>
<id>https://hdl.handle.net/1721.1/164158</id>
<updated>2025-12-24T03:27:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Understanding Drivers of Stratospheric Ozone Change and Fingerprinting its Recovery
Wang, Peidong
Stratospheric ozone serves as Earth’s natural protective layer, shielding the surface from harmful ultraviolet radiation. The discovery of the Antarctic ozone “hole” in the late 1980s raised significant societal and scientific concern, prompting the rapid regulation of ozone-depleting substances (ODSs) under international treaties. While signs of ozone recovery have begun to appear, new challenges continue to arise. This thesis investigates three critical factors driving stratospheric ozone changes and influencing the detection of ozone recovery: (1) ODS emissions, (2) chemical chlorine processes, and (3) internal climate variability. With ODS emissions being regulated under the Montreal Protocol and studies now focusing on illicit new production on the order of tens of gigagrams per year, the ocean’s role as both a natural source and sink of ODSs becomes increasingly important. However, these processes have often been overlooked or highly simplified in past ozone assessments. Using a hierarchy of models, from simple box models to global ocean general circulation models, I quantified the ocean’s uptake and release of various ODSs. Chapter 2 examines the ocean’s uptake of chlorofluorocarbons (CFCs), particularly emphasizing its influence on recent illicit CFC emissions estimation. Chapter 3 extends this analysis to include ocean uptake and potential microbial degradation processes, evaluating their effects on emission estimates for various hydrochlorofluorocarbons (HCFCs) and hydrofluorocarbons (HFCs), which are chemical constituents that have been used to replace CFCs. Once these man-made ODSs reach the stratosphere, they are photolyzed to chlorine reservoir species (e.g., HCl and ClONO2), which, through heterogeneous reactions, can transform into reactive chlorine that depletes ozone. While heterogeneous chlorine activation on volcanic ash is well understood, the unprecedented 2020 Australian wildfires raised new questions about chemical processes on smoke particles.
This knowledge gap existed because only a few wildfires had injected significant amounts of smoke particles into the stratosphere during the satellite era. Leveraging over 30 years of satellite data, I separated chemical and dynamic processes affecting chlorine reservoir species to quantify chemical chlorine activation across different aerosol types. In Chapter 4, I developed a new approach to quantitatively estimate the onset temperature for chemical chlorine activation after the 2020 Australian wildfire using satellite observations. Chapter 5 applies this method to compare the impact of chemical chlorine activation from two independent wildfire events with that from a series of volcanic eruptions of varying magnitudes. Despite emerging challenges such as illicit emissions and recent wildfires and volcanic eruptions, advancements in observational records, our understanding of ozone chemistry, and computational power have significantly enhanced our ability to quantitatively detect and attribute stratospheric ozone changes. In Chapter 6, I applied a pattern-based “fingerprinting” technique to quantitatively separate the contributions of ODS forcing from other external forcings and internal variability in satellite observations. This analysis shows that Antarctic ozone increases cannot be explained by internal climate variability alone, providing strong confidence that ozone recovery is underway, primarily driven by human efforts to reduce ODS emissions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Light-Induced Collective Interactions in Arrays of Quantum Emitters</title>
<link href="https://hdl.handle.net/1721.1/164157" rel="alternate"/>
<author>
<name>Rubies-Bigorda, Oriol</name>
</author>
<id>https://hdl.handle.net/1721.1/164157</id>
<updated>2025-12-04T03:06:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Light-Induced Collective Interactions in Arrays of Quantum Emitters
Rubies-Bigorda, Oriol
The interaction between light and matter has captivated physicists for centuries, from early studies of vision and refraction in ancient Greece to the development of quantum mechanics and quantum electrodynamics in the past century. While the response of a single quantum emitter to light is well understood, the radiative properties of an ensemble of closely spaced emitters are far more intricate. Coupling to a shared electromagnetic environment induces coherent and dissipative interactions between emitters, giving rise to a collective response that cannot be captured by treating them independently. In the regime of few excitations, the system hosts delocalized subradiant states, that is, coherent superpositions that are largely decoupled from the electromagnetic field and thus decay at suppressed rates. While this weak coupling makes subradiant states attractive for quantum technologies, it also renders them difficult to manipulate. At higher excitation densities, the intrinsic nonlinearity of emitters and the exponential growth of the Hilbert space make theoretical and numerical descriptions of the system and its dynamics increasingly challenging. This thesis explores two fundamental questions: How can subradiant and dark states be selectively accessed and harnessed for practical applications in quantum technologies? And how can interacting ensembles of quantum emitters be efficiently simulated to uncover their many-body physics? The first part of the thesis develops protocols for controlling and addressing dark states in free-space and waveguide-coupled atomic arrays, demonstrating their utility in quantum storage and the deterministic generation of entangled photonic states. Incorporating atomic motion, we further show that collective subradiant states can enhance cooling in dense atomic arrays, offering new avenues for controlling motional dynamics. 
In the second part, we introduce cumulant expansions of the equations of motion as a powerful tool to analytically and numerically investigate collective decay in the many-body regime. We first examine the collective decay of fully excited atomic arrays in free space, characterizing the onset and scaling of the superradiant burst across different geometries. In collaboration with experiments on ultracold erbium atoms in optical lattices, we provide the first direct observations of many-body collective effects in free-space ordered arrays, including early-time superradiant bursts, late-time subradiant tails, and the emergence of atomic correlations throughout the dynamics. Finally, we theoretically and numerically explore the transient formation of multi-excitation subradiant states, and demonstrate how the existence of multiple dissipation channels suppresses steady-state superradiance in extended arrays.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Core Inductors for High Saturation Capability</title>
<link href="https://hdl.handle.net/1721.1/164156" rel="alternate"/>
<author>
<name>Yang, Rachel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164156</id>
<updated>2025-12-04T03:06:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hybrid Core Inductors for High Saturation Capability
Yang, Rachel S.
Power electronics are critical for any system requiring electricity and often impact the performance of these systems. In many cases, the performance of power electronics is limited by lossy and large inductors that are constrained by the saturation of their magnetic core material. Such saturation-limited inductors are typically found in power electronics applications where the inductor sees large dc current with relatively small ac ripple, such as EMI filters or converters operating in continuous conduction mode. This thesis investigates two types of inductor designs that can achieve higher saturation capability by combining multiple materials in a single core, enabling these designs to achieve greater energy storage or lower loss than conventional single-material cores. The first design combines a permanent magnet with a soft magnetic material (e.g. ferrite) to form a PM hybrid core. This core achieves higher saturation capability by directing PM flux to oppose winding flux in the ferrite. First-order models, design processes, and other theory for the PM hybrid core are developed in this thesis, and different geometries for this core are explored. Additionally, two PM hybrid core prototypes are presented, one using a pot core geometry and one using a modified E core geometry. The PM hybrid pot core prototype achieves 70% more energy storage or 50% of the dc loss versus comparable ferrite prototypes, while the PM hybrid E core prototype achieves 30% more energy storage or a minimum of 52% of the total loss versus comparable ferrite prototypes. The second design pairs a low-frequency, high-saturation material (e.g. steel) with a low-saturation, high-frequency material (e.g. ferrite) to form a steel hybrid core. This core achieves higher saturation capability by directing most of the dc flux to the steel and all of the ac flux to the ferrite, enabling the core to leverage both materials’ advantages while avoiding their detriments.
First-order models and design processes for the steel hybrid core are developed in this thesis. An example steel hybrid core design using a pot core is also presented. This design can achieve 220% more energy storage versus a comparable ferrite prototype, and it may achieve lower loss. Its performance, though, is sensitive to manufacturing and assembly imperfections. In this thesis, both the PM hybrid and steel hybrid cores are demonstrated to have great potential in achieving high saturation capability. By leveraging these hybrid cores, inductor designs can achieve greater energy storage density or lower loss and thus enable higher performance power electronics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Gas Microscopy of Bosonic Correlations in the Continuum</title>
<link href="https://hdl.handle.net/1721.1/164155" rel="alternate"/>
<author>
<name>Xiang, Jinggang</name>
</author>
<id>https://hdl.handle.net/1721.1/164155</id>
<updated>2025-12-04T03:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Quantum Gas Microscopy of Bosonic Correlations in the Continuum
Xiang, Jinggang
This thesis details the complete upgrade and renovation of an existing experimental platform into a high-resolution quantum gas microscope for ultracold 87Rb atoms. Quantum gas microscopes enable site-resolved imaging, providing unprecedented access to quantum statistical effects and many-body phenomena. While such instruments are often employed to study physics in optical lattices, we have innovatively adapted our apparatus to investigate bulk system behavior. A major part of this project involved upgrading the scientific apparatus and retrofitting the previous system. We introduced new optical components, including a high-NA objective, and improved the vacuum system for better optical access. Extensive lab renovations, from upgrading the optical table to reorganizing the laser and imaging setups, were carried out to enhance mechanical and thermal stability. Rigorous optical benchmarking confirmed that the objective achieves diffraction-limited imaging, which is critical for resolving single atoms. This capability allowed us to detect density fluctuations at the scale of the thermal de Broglie wavelength in a quasi-two-dimensional gas of 87Rb atoms. In an experiment resembling Hanbury Brown and Twiss interferometry, we measured a 30% enhancement in the second-order correlation function in situ, demonstrating strong bosonic bunching. This outcome underscores the microscope’s precision and the importance of high-resolution imaging in capturing subtle quantum statistical effects. The successful realization of this apparatus demonstrates the utility of quantum gas microscopes in probing bulk systems. With this new platform in place, future studies can explore critical phenomena, many-body correlations, matter-wave emission, and quantum simulations with cold atoms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of Cosmic Ray Lithium Isotopes Using the Alpha Magnetic Spectrometer</title>
<link href="https://hdl.handle.net/1721.1/164154" rel="alternate"/>
<author>
<name>LaVecchia, Gianni</name>
</author>
<id>https://hdl.handle.net/1721.1/164154</id>
<updated>2025-12-04T03:06:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Measurement of Cosmic Ray Lithium Isotopes Using the Alpha Magnetic Spectrometer
LaVecchia, Gianni
The study of cosmic rays and their properties provides insight into the origins of our universe and is a unique lens on the nuclear physics of the cosmos. The identification of cosmic ray isotopes poses a particular challenge, as it requires the measurement of multiple observables to a high degree of accuracy for the deduction of nuclear mass. Using the unique detection capabilities of the Alpha Magnetic Spectrometer (AMS), the isotope fluxes of cosmic ray lithium in the rigidity range of 1.92 to 25 GV are presented. This work is based on 0.97 million ⁶Li and 1.04 million ⁷Li nuclei collected by the AMS over a 12.5-year period, and improves the precision and extent of existing measurements by a factor of 10. These results lead to the conclusion that there is no sizable primary component in cosmic ray ⁷Li. The improvements to the AMS velocity measurement establish the groundwork for future cosmic ray isotope measurements.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metrics, Muons, Moments, Models, Machine Learning, Measurements, and More: A Manifesto on Collider Physics</title>
<link href="https://hdl.handle.net/1721.1/164153" rel="alternate"/>
<author>
<name>Gambhir, Rikab</name>
</author>
<id>https://hdl.handle.net/1721.1/164153</id>
<updated>2025-12-04T03:05:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metrics, Muons, Moments, Models, Machine Learning, Measurements, and More: A Manifesto on Collider Physics
Gambhir, Rikab
The interface between particle theory and particle experiments is essential to improving our understanding of the Standard Model and searching for new physics beyond it. At this interface lies a complicated web of complex and expensive simulations that cannot be fully trusted, experimental and theoretical uncertainties, and overwhelmingly large amounts of data, all while we have yet to find any deviations from the Standard Model.

In this thesis, we propose strategies for improving the theory ↔ experiment pipeline at all stages. We first show how modern machine learning and statistical techniques can be used to improve the calibration and resolution of particle detectors in a robust way, which can lead to improved measurement precision. We then develop new classes of measurable observables based on the principles of infrared-and-collinear safety, geometry, and machine learning, which come with guarantees about their theoretical calculability and interpretability, in turn motivating measurements at collider experiments. Finally, we present two complementary approaches to the search for new physics: one, an experimental proposal for a muon beam dump experiment that is viable alongside a full future collider program; the other, machine-learning-based anomaly detection to search for subtle signals in already-published data.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practical Algorithms for Modeling Causality to Accelerate Scientific Discovery</title>
<link href="https://hdl.handle.net/1721.1/164152" rel="alternate"/>
<author>
<name>Wu, Menghua</name>
</author>
<id>https://hdl.handle.net/1721.1/164152</id>
<updated>2025-12-04T03:06:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Practical Algorithms for Modeling Causality to Accelerate Scientific Discovery
Wu, Menghua
Scientific research revolves around the discovery and validation of causal relationships between variables. Machine learning has the potential to increase the efficiency of this process by proposing novel hypotheses from data observations, or by designing experiments that maximize success rate. This thesis addresses these problems through pragmatic approaches, designed to model large systems and incorporate rich domain knowledge. These algorithms are applied to use cases in molecular biology and drug discovery, which highlight their potential to inform efficient experiment design and to automate the analysis of experimental results.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recycling and Regeneration of Spent Perfusion Media via Ion Concentration Polarization</title>
<link href="https://hdl.handle.net/1721.1/164151" rel="alternate"/>
<author>
<name>Wynne, Eric Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164151</id>
<updated>2025-12-04T03:06:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Recycling and Regeneration of Spent Perfusion Media via Ion Concentration Polarization
Wynne, Eric Michael
The widespread adoption of monoclonal antibody therapies is often constrained by their high prices, which can limit accessibility, particularly for patients in low- and middle-income countries. Addressing this economic barrier is crucial to ensure that life-saving treatments can reach all who need them. We present a series of bioprocessing innovations designed to reduce the cost of monoclonal antibody manufacturing and improve global access to these critical therapeutics. The work focuses on developing technologies for media regeneration and recycling, with the goal of reducing the economic and environmental impact of cell culture media in perfusion mammalian cell culture.
We demonstrate a microfluidic separation device engineered to selectively remove metabolic waste products—specifically ammonia and lactate—from spent media using ion concentration polarization. Building on this foundation, a scalable millifluidic system was developed to enable higher-throughput waste removal. We characterized the impact of media recycling upon batch and perfusion cell cultures. We devised a nutrient supplementation strategy to create ‘regenerated’ media that minimized any effect on cell growth and productivity compared to fresh media.
To support continuous manufacturing, a perfusion culture system incorporating a microfluidic spiral cell retention device and continuous cell bleed was established, and stable performance was maintained over extended durations. A further innovation introduced a multi-stage waste recovery system that increased media regeneration yield to 87.5%. This recovery rate enabled a self-recycling perfusion bioreactor in which 75% of the media feed was regenerated, without significant impact on cell growth, productivity, or product quality.
Together, these advances establish a novel biomanufacturing platform that combines electrokinetic waste removal with media regeneration and recycling. The approach is broadly adaptable to mammalian cell culture processes and offers a promising path toward more sustainable, cost-effective, and environmentally responsible production of monoclonal antibodies and other biologics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming</title>
<link href="https://hdl.handle.net/1721.1/164150" rel="alternate"/>
<author>
<name>Zhi-Xuan, Tan</name>
</author>
<id>https://hdl.handle.net/1721.1/164150</id>
<updated>2025-12-04T03:06:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling Cooperative Intelligence via Inverse Planning and Probabilistic Programming
Zhi-Xuan, Tan
How can we build cooperative machines that model and understand human minds — machines that assist us with our goals, coordinate on plans, infer the intentions behind our words, and even learn our norms and values? This thesis presents a scalable model-based approach to building such systems via inverse planning and probabilistic programming. First, we introduce a probabilistic programming architecture that implements a Bayesian theory of mind. This architecture, Sequential Inverse Plan Search (SIPS), performs online inference of human goals and plans by inverting a Bayesian model of incremental human planning. By combining high-performance symbolic planners with sequential Monte Carlo (SMC) inference, SIPS achieves faster-than-real-time speed, while scaling to hundreds of possible goals, and remaining robust to human mistakes due to boundedly-rational planning. Second, we present Cooperative Language-guided Inverse Plan Search (CLIPS), a system that integrates SIPS with large language models (LLMs) to model communicative cooperation. By using LLMs as likelihood functions within probabilistic programs, CLIPS can infer human goals from ambiguous instructions, then provide uncertainty-aware assistance with much higher levels of reliability than LLMs can on their own. In addition, CLIPS can be used to infer the shared intentions of communicating agents from their actions and words. Third, we show how inverse planning can model the acquisition of social normativity, formalizing norm-guided societal behavior as a norm-augmented stochastic game (NSG). In NSGs, agents assume that society follows a shared set of social norms, and infer these norms from the actions of other agents. By doing so, agents can rapidly learn cooperative social norms using orders of magnitude less data than model-free approaches. Finally, we present advances in probabilistic programming infrastructure that have enabled architectures such as SIPS and CLIPS. 
Through interfaces for programmable SMC and probabilistic programming with LLMs, developers can readily compose modeling and inference subroutines when designing probabilistically coherent intelligent systems. Together, these innovations demonstrate the feasibility and scalability of rational AI engineering for cooperatively intelligent machines, while illuminating the computational and algorithmic foundations of human cooperative intelligence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atomistic Study of Traveling Skyrmions in Multi-Sublattice Magnetic Materials</title>
<link href="https://hdl.handle.net/1721.1/164149" rel="alternate"/>
<author>
<name>Tremsina, Elizaveta A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164149</id>
<updated>2025-12-04T03:05:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Atomistic Study of Traveling Skyrmions in Multi-Sublattice Magnetic Materials
Tremsina, Elizaveta A.
The development of novel energy-efficient computing hardware is imperative for reducing the carbon footprint and for extending the lifespan of computing, mobile, and wearable devices. Recent advances have focused on novel material systems, and one such avenue is magnetic thin films. Bits of information can be encoded by twisted magnetic textures called skyrmions, which can be efficiently driven by applying electrical current. Recently, emphasis has been placed on investigating antiferromagnetic and ferrimagnetic skyrmions, as opposed to the single-sublattice ferromagnetic ones studied earlier, due to their potential for more rapid dynamics and magnetic stability. However, there is a pressing need for a thorough and detailed understanding of the intricacies of skyrmion motion, in particular, limiting velocity, optimization of trajectory, controlled mobility and, notably, the observed dynamic distortions of skyrmion profiles. Experimental studies alone are not enough to provide a complete picture, since the material parameter space for systems hosting skyrmions is quite large. We perform a comprehensive study, combining simulation-based and analytical approaches, of the spin-orbit torque motion of skyrmions in a wide range of magnetic materials, from crystalline antiferromagnets to ferrimagnets to ferromagnets. We systematically analyze the relationship between physical distortions of the skyrmion profiles, based on the action of local Thiele forces, and internal elastic tension forces, providing a quantitative and nuanced explanation of these effects. These results expand the understanding of fundamental properties of magnetic skyrmions, as well as their potential use in spintronics applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformative Lenses: Empowering Learners with New Perspectives Using Generative AI and Augmented Reality</title>
<link href="https://hdl.handle.net/1721.1/164148" rel="alternate"/>
<author>
<name>Leong, Joanne Sau Ling</name>
</author>
<id>https://hdl.handle.net/1721.1/164148</id>
<updated>2025-12-04T03:06:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transformative Lenses: Empowering Learners with New Perspectives Using Generative AI and Augmented Reality
Leong, Joanne Sau Ling
Learning is a fundamental human drive that has been shaped by technological advancements over the years. The emergence of generative AI marks a profound shift—its capacity to produce text, images, and video challenges long‐held beliefs about what only humans could create. This shift creates new opportunities for learning, including enabling the design of more customized and personalized learning experiences. Recognizing that learning is deeply influenced by our perceptions of ourselves, others, and our materials and environments, I propose creating transformative lenses powered by generative AI and augmented reality (AR) to adapt what learners perceive, as a means to empower them with new perspectives. I design and implement a set of novel interactive systems and experiences as case studies that address factors including creativity, communication, and motivation. Studying the use of these systems, I gather early evidence that such lenses can help people to overcome their own limiting thoughts and emotions to move towards realizing their full potential. Reflecting on these case studies, I distill key considerations for designing and applying transformative lenses. Finally, I discuss the broader implications of this work at the evolving intersection of generative AI and learning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Language Models as Opinion Models: Techniques and Applications</title>
<link href="https://hdl.handle.net/1721.1/164147" rel="alternate"/>
<author>
<name>Brannon, William</name>
</author>
<id>https://hdl.handle.net/1721.1/164147</id>
<updated>2025-12-04T03:06:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Language Models as Opinion Models: Techniques and Applications
Brannon, William
Real-time social media platforms now host the news cycle and shape public opinion, while large language models (LLMs) give us new tools to observe and predict those shifts. This dissertation links the new affordances of social media with the predictive power of LLMs to explain -- and forecast -- opinion change. We first quantify the dynamics of news on an influential social platform, then develop LLM-based tools to forecast persuasion and predict heterogeneous treatment effects (HTEs).&#13;
&#13;
Study I — Media tempo and tone. Using 518,000 hours of U.S. talk-radio broadcasts and 26.6 million tweets from elite and mass users, we show that Twitter discourse (i) moves faster at both take-off and fade-out stages of a news event and (ii) sustains greater outrage than radio – despite radio’s often explicitly outrage-focused business model. To our knowledge, this is the first large-scale, data-driven comparison between Twitter and traditional media of both outrage levels and the rate of decay of attention to news.&#13;
&#13;
Study II — Zero-shot persuasion forecasting. Across a diverse set of 28 randomized experiments, LLM-based methods outperform an ensemble of strong baselines at predicting HTEs and deliver good performance at predicting average treatment effects (ATEs) — all without any experiment-specific fine-tuning.&#13;
&#13;
Study III — Transfer and scaling. Fine-tuning LLMs on contemporaneous news coverage boosts HTE (and ATE) prediction performance greatly, to more than 3x baseline performance. A new minibatch-moment-matching (M3) objective lets us train a 400M-parameter model to nearly match the HTE prediction performance of an 8B model at a fraction of the inference cost. Transfer, however, falters out of distribution on held-out experiments and demographic groups, lending support to contextual theories of persuasion.&#13;
&#13;
Overall, we (i) quantify how platform affordances shape the tone and tempo of public discourse, (ii) introduce LLM-based methods that make causal experiments more sample-efficient, and (iii) chart the limits of transfer learning for opinion prediction. Our findings provide practical tools for HTE prediction and help researchers anticipate persuasion dynamics in a media landscape shaped by both humans and machines.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Across the Scales of the Nucleus: Understanding Short Range Correlations from Medium Modification to Probe Independence</title>
<link href="https://hdl.handle.net/1721.1/164146" rel="alternate"/>
<author>
<name>Denniston, Andrew W.</name>
</author>
<id>https://hdl.handle.net/1721.1/164146</id>
<updated>2025-12-04T03:06:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Across the Scales of the Nucleus: Understanding Short Range Correlations from Medium Modification to Probe Independence
Denniston, Andrew W.
The atomic nucleus presents an intricate system due to the non-linear forces described by Quantum Chromodynamics (QCD) that govern its structure. The range of scales involved is remarkable; the most massive nuclei weigh approximately five orders of magnitude more than the quarks that compose them. The nucleus can be analyzed at various levels, from quarks to hadrons to the nucleus as a whole. Short-Range Correlations (SRCs) within the nucleus play a significant role that spans these diverse scales. At the most fundamental level, SRCs influence the interaction between nucleons. The nucleon-nucleon (NN) interaction, arising from QCD, is crucial in determining nuclear properties. SRCs serve as valuable probes for measuring this NN interaction, as the nucleons within SRCs become effectively decoupled from the rest of the nucleus. Multiple experimental techniques, including electron scattering, have been employed to investigate the NN interaction through SRCs. However, our first project demonstrates that inclusive measurements alone are inadequate to constrain this interaction fully. Moving to the scale of the nucleus, SRCs contribute to the high-momentum tail of the nuclear spectral function. While the low-momentum region is characterized by nucleons exhibiting bulk properties, nucleons begin to pair into SRCs at higher momenta. Our research aims to bridge the understanding between the mean-field portion of the nucleus and its high-momentum SRC components. Additionally, SRCs affect the quark structure of protons, as evidenced by the EMC effect, which indicates that quarks behave differently when protons are embedded within a nucleus—an effect referred to as medium modification. This thesis explores the correlation between SRCs and medium modification across various experimental setups. Finally, we seek to establish an interpretation of the nuclear ground state. Accomplishing this requires demonstrating that our SRC observables are independent of the probe’s scale and scheme. The concluding project of this thesis illustrates how we utilize triple coincidence quasi-elastic scattering across a range of Q² values to develop a model-dependent framework for understanding SRC distributions within the nucleus’s ground-state wavefunction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering and Engineering the Computation Underlying Large Intelligent Agents</title>
<link href="https://hdl.handle.net/1721.1/164145" rel="alternate"/>
<author>
<name>Sharma, Pratyusha</name>
</author>
<id>https://hdl.handle.net/1721.1/164145</id>
<updated>2025-12-04T03:06:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Discovering and Engineering the Computation Underlying Large Intelligent Agents
Sharma, Pratyusha
The richness of language and intelligent behavior has often been attributed to latent compositional structure. Can we build tools for discovering how deep networks learn and represent this latent structure implicitly? And more importantly, can we use this knowledge to improve generalization in largely structure-less, general-purpose models or refine our understanding of the world they describe? In this dissertation, I present three perspectives to answer these questions. First, I present experimental methods to functionally characterize the space of learnt solutions in LLMs and demonstrate how this understanding can be used to improve their empirical generalization in a gradient-free manner, sometimes by as much as 30 percentage points on language understanding benchmarks. Following that, I show how to decipher the structure of another (black box) language-like system, the naturally arising communication system of sperm whales in the wild, discovering for the first time a unique combinatorial communication system. Finally, I apply insights from these results to equip embodied agents with a latent language of thought—hierarchical and compositional—and show how it can enable long-horizon reasoning and planning in these systems. This dissertation ultimately aims to bridge the gap between natural and artificial intelligence, offering new insights into both the fundamental nature of communication in complex biological organisms in the wild and the development of more powerful and improved AI systems. A key pattern in the discoveries in this thesis has been how simple structures enable complex externalized behaviors in both biological organisms and AI systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Volume Mount Devices</title>
<link href="https://hdl.handle.net/1721.1/164144" rel="alternate"/>
<author>
<name>Han, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/164144</id>
<updated>2025-12-04T03:09:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Volume Mount Devices
Han, Alan
As Moore's Law ends and AI demands increasingly tax our climate and resources, the limitations of two-dimensional electronics integration have become critical bottlenecks. Surface-mount devices (SMDs) remain entrenched in industry practice despite being insufficient for today's computing challenges and sustainability needs. This thesis introduces the volume mount device (VMD), a three-dimensional electronics packaging standard that bypasses the traditional die-to-server stack while offering a scalable, reversible framework inspired by natural ecosystems' circularity.&#13;
The VMD approach embeds both electrical function and mechanical structure into modular elements that assemble freely in 3D space. Rather than building circuits on planar PCBs, this system constructs functional circuits by linking components into a self-constraining lattice architecture. My current implementation leverages existing supply chains by incorporating SMD components on small tile PCBs, while establishing a pathway toward eventually replacing SMDs at the IC packaging level.&#13;
I developed a hybrid assembly system combining 3D printing and pick-and-place automation to build multi-layered electronic assemblies efficiently. Where prior work achieved only tens of parts at hundreds of components per hour (CPH), my system demonstrates automated assembly of hundreds of integrated elements at approximately 1000 CPH. I evaluate various geometric configurations, assess performance overhead compared to conventional approaches, and develop cost-effective, self-aligning connector interfaces for reliable joints—creating a foundation for electronics systems that can be assembled, disassembled, and reassembled as needed while improving resilience against supply chain disruptions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Diffusion Models Towards De Novo Protein Design</title>
<link href="https://hdl.handle.net/1721.1/164143" rel="alternate"/>
<author>
<name>Yim, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/164143</id>
<updated>2025-12-04T03:06:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative Diffusion Models Towards De Novo Protein Design
Yim, Jason
De novo protein design aims to generate proteins with desired functions by rationally engineering novel protein structures and sequences. The structure requires modeling continuous 3D coordinates of atoms with rigid biochemical constraints of the polymer chain while the sequence is a series of discrete amino acids that should fold into a plausible structure. Understanding the protein function-structure-sequence relationship necessary for protein design is complex, but deep learning has proven promising to learn the relationship from large protein datasets. This thesis aims to develop deep learning models that generate novel structures and sequences that can be guided towards desired functions. We first describe novel generative models that learn to generate protein structures and sequences by developing diffusion models over general state spaces including Riemannian manifolds and discrete tokens. The resulting methods – FrameDiff, FrameFlow, and MultiFlow – demonstrate the ability of diffusion models to extrapolate beyond the training data to generate novel and diverse protein structures and sequences that pass in silico protein design filters. Next, we apply diffusion models to practical protein design challenges by collaborating with experimental and computational biologists to develop RoseTTAFold Diffusion (RFdiffusion). By combining the structure prediction capabilities of RoseTTAFold and diffusion modeling principles, RFdiffusion can generate functional proteins with in vitro validated properties such as high-affinity binders and symmetric protein assemblies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Solve Long-Horizon Robot Manipulation Problems</title>
<link href="https://hdl.handle.net/1721.1/164142" rel="alternate"/>
<author>
<name>Yang, Zhutian</name>
</author>
<id>https://hdl.handle.net/1721.1/164142</id>
<updated>2025-12-04T03:06:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Learning to Solve Long-Horizon Robot Manipulation Problems
Yang, Zhutian
If we want mobile robots that perform multi-step tasks in visually diverse and geometrically complex environments, we need them to quickly decide what to do and how to do it. Manipulating multiple objects in environments with movable and articulated obstacles over time requires the robot to satisfy constraints like collision-freeness, reachability, and action feasibility. For problems with large state spaces, continuous action spaces, and long decision horizons, the hybrid constraint satisfaction problems induced by planners become combinatorially difficult to solve. In this thesis, I will discuss strategies for using offline learning to speed up deployment-time planning, e.g., using a plan feasibility predictor, a subgoal generator, or a compositional joint continuous constraint solver. I will also present strategies for chaining policies learned from demonstrations using conditional inputs, such as key poses and natural language, for generalization in real-world environments. With the resulting efficient long-horizon manipulation planning system, we can solve complex robotic manipulation problems faster at deployment time. It can also be used to generate diverse large-scale whole-body trajectories as part of the data mixture for training robot foundation models in embodied reasoning, planning, and acting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building small domain-specific masked language models vs. large generative models for clinical decision support and their effects on users.</title>
<link href="https://hdl.handle.net/1721.1/164141" rel="alternate"/>
<author>
<name>Sergeeva, Elena</name>
</author>
<id>https://hdl.handle.net/1721.1/164141</id>
<updated>2025-12-04T03:06:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Building small domain-specific masked language models vs. large generative models for clinical decision support and their effects on users.
Sergeeva, Elena
The frequently adopted definition of knowledge is “justified true belief”. As one may notice, this definition presents some issues when applied to AI: it is unclear to what degree it is justified to use “humanizing” vocabulary like “belief” or “justification” when describing the performance of an AI system. Traditional explicit knowledge-representation-based AI involves reasoning over symbolic representations of statements standing for such “justified true beliefs” [1]; the modern connectionist methodology, however, replaces explicit reasoning with making a prediction based on a set of computations done over weighted continuous representations of the inputs. The continuous representations learned by such systems remain “black-box-like”: the only elements directly understandable by the human user are the model inputs and outputs. In the first part of this thesis, I introduce a set of transformer-based masked-language models for a diverse set of medical natural language processing tasks, including Named Entity Recognition, Negation Extraction, and Relation Extraction, that perform as well as or better than bigger prompt-and-generate transformer-based causal language models. In the second part of the thesis, I discuss the modern “prompt-and-generate” approach to natural language processing, in which both the inputs and the outputs of the model are word-like elements commonly referred to as “tokens”. I explore the nature of the token-based representation of the input and look at the way token “meaning” is refined at each layer of the successive transformer computation. With respect to the outputs, I explore how people engage with AI-generated sequences of tokens that they perceive as “explained” predictions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Language-Centric Medical Image Understanding</title>
<link href="https://hdl.handle.net/1721.1/164140" rel="alternate"/>
<author>
<name>Wang, Peiqi</name>
</author>
<id>https://hdl.handle.net/1721.1/164140</id>
<updated>2025-12-04T03:05:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Language-Centric Medical Image Understanding
Wang, Peiqi
This thesis advances medical image understanding by leveraging the multifaceted roles of language: as supervision, prior knowledge, and a medium for communication. We introduce three main contributions: (1) a weakly supervised framework that uses language in clinical reports to guide fine-grained alignment between image regions and textual descriptions, (2) an adaptive debiasing method that uses language prior to improve the robustness of learning algorithms under noisy supervision, and (3) a novel approach for calibrating linguistic expressions of diagnostic certainty, enabling more reliable communication of clinical findings. Together, these methods lead to more accurate, robust, and reliable machine learning systems, ultimately streamlining clinical workflows and improving patient care.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring spin physics with ultracold atoms</title>
<link href="https://hdl.handle.net/1721.1/164139" rel="alternate"/>
<author>
<name>Lee, Yoo Kyung</name>
</author>
<id>https://hdl.handle.net/1721.1/164139</id>
<updated>2025-12-04T03:06:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring spin physics with ultracold atoms
Lee, Yoo Kyung
The dynamics of many interacting spins is an active frontier of research; not only can they explain magnetic phenomena, but they also provide paradigmatic models with deep connections to high-T_c superconductivity, optimization problems, neural networks, and more. Experiments with ultracold alkali atoms in optical lattices have realized spin models with great success. In particular, the isotropic Heisenberg model---the XXX model---was realized more than a decade ago. The ⁷Li apparatus described here was the first to realize a tunable, anisotropic Heisenberg model, also known as the XXZ model.&#13;
&#13;
In this thesis, I will describe how the capabilities of this apparatus were harnessed to characterize the spin models we realize, to employ them to observe new resonances, and to contribute to studies in spin squeezing and fundamental physics. First, I will discuss how we prepared and observed phantom helix states: eigenstates of the XXZ Hamiltonian. Our understanding of the contact interactions and the phantom helix states enabled us to observe long-predicted lattice-induced resonances, whose effects can be leveraged as another knob to tune the XXZ Hamiltonian. Furthermore, our control over the spin system allowed us to generate spin-squeezed states, a paradigmatic form of entanglement for spin ensembles. This is the first time squeezed states were realized with nearest-neighbor contact interactions in a lattice. Finally, our control over the spin degree of freedom and defects in our state preparation allowed us to create pristine periodic lattices with which to study gedankenexperiments in light scattering.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing the Diversity of Fast Radio Bursts with CHIME/FRB</title>
<link href="https://hdl.handle.net/1721.1/164138" rel="alternate"/>
<author>
<name>Shin, Kaitlyn</name>
</author>
<id>https://hdl.handle.net/1721.1/164138</id>
<updated>2025-12-04T03:05:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Probing the Diversity of Fast Radio Bursts with CHIME/FRB
Shin, Kaitlyn
Fast radio bursts (FRBs) are extremely bright extragalactic radio transients that flash for microseconds to milliseconds at a time, most never to repeat again. Encoded in every observed FRB is information from burst propagation effects, giving us clues about their mysterious origins as well as the environments they traveled through. With inferred all-sky rates of hundreds per day, FRBs are of great interest both to those studying extreme astrophysical processes and to those probing the cosmological properties of the Universe. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project has revolutionized the FRB field with its field-leading discovery rate. With CHIME/FRB, we can start to carry out population-level studies of FRBs to constrain their origins and inform their use as cosmological probes. I present the first population-level studies of CHIME/FRB-observed FRBs using the CHIME/FRB Catalog 1 data release and the injections system to account for observational biases. I discover that CHIME/FRB is likely observationally biased against bursts originating from turbulent local environments, and constrain the energy and distance distributions of FRBs. I also present the Catalog 1 dataset updated with channelized raw voltage (“baseband”) data (“BaseCat1”), for which I played a pivotal role. The CHIME/FRB baseband localization pipeline can localize FRBs to arcminute precision as long as the signal is bright enough to trigger the saving of offline baseband data. I then discuss two single-source studies enabled by the baseband localization pipeline: one discovering repeaters during phases of unusually heightened burst activity, and one using the burst properties of an unusual FRB to probe the properties of its sightline. In the latter study, I constrain the electron density content of a diffuse filamentary structure on the outskirts of the Virgo Cluster, demonstrating the power of FRBs as probes of diffuse media.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision</title>
<link href="https://hdl.handle.net/1721.1/164137" rel="alternate"/>
<author>
<name>Willis, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/164137</id>
<updated>2025-12-04T03:09:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision
Willis, Jacob
Fast radio bursts (FRBs) are a novel form of radio transients discovered in 2007. These bright, extragalactic radio signals have an inferred all-sky rate of hundreds of detections per day. The properties of FRBs hold valuable clues about the extreme physical processes driving them while also holding information about the astrophysical plasmas they traverse on their journey to Earth. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project has led the field with the hundreds of FRB detections the collaboration has published to date. However, these detections typically have localization regions so large that we cannot identify a single host galaxy, never mind its local environment. To improve upon this, CHIME/FRB has been transformed into a very long baseline interferometry (VLBI) array, drastically increasing the angular resolution of CHIME/FRB from arcminute to sub-arcsecond precision.&#13;
&#13;
In this work, I present my contributions to commissioning the CHIME/FRB VLBI Outrigger station located at the Green Bank Observatory (GBO) in West Virginia. This includes measuring and validating GBO's exact position to enable the localization of FRBs to sub-arcsecond precision.&#13;
&#13;
For VLBI networks spanning thousands of kilometers, the difference in the local ionospheric environments is significant and leads to errors in the CHIME/FRB Outrigger localizations. I present a thin shell model of the ionosphere to parameterize the local ionospheric environment for each VLBI station. This model may be used to interpolate the error induced by the ionosphere in FRB observations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √s_NN = 5.02 TeV</title>
<link href="https://hdl.handle.net/1721.1/164136" rel="alternate"/>
<author>
<name>Chou, Pin-Chun</name>
</author>
<id>https://hdl.handle.net/1721.1/164136</id>
<updated>2025-12-04T03:09:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √s_NN = 5.02 TeV
Chou, Pin-Chun
The first measurement of Z-hadron two-particle correlation functions in PbPb collisions at √s_NN = 5.02 TeV is reported, using the PbPb collision data taken in 2018. The integrated luminosity of the PbPb data is 1.67 ± 0.03 nb⁻¹, which made the analysis possible for the first time. Collision events with at least one Z boson with 40 &lt; pT &lt; 200 GeV/c are analyzed. The azimuthal angle distributions with respect to the Z bosons, which are sensitive to modification of the in-medium parton shower and to medium recoil, are measured in central PbPb collisions. A significant modification of the two-particle correlation in pseudorapidity difference and azimuthal angle difference is observed with respect to the reference measured in pp collisions. These results are compared to phenomenological models that include medium recoil, medium response, and thermalization of the QGP wakes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DePUDS: Decentralized Prosocial Urban Development System</title>
<link href="https://hdl.handle.net/1721.1/164135" rel="alternate"/>
<author>
<name>Zhang, Yan</name>
</author>
<id>https://hdl.handle.net/1721.1/164135</id>
<updated>2025-12-04T03:06:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DePUDS: Decentralized Prosocial Urban Development System
Zhang, Yan
Urban areas face severe socio-economic and environmental challenges like housing crises, inequity, and environmental degradation, often worsened by traditional zoning practices. These are typically rigid, inefficient, outdated, and susceptible to obstruction by narrow special interests (NIMBYism), failing to engage the broader community or adapt to evolving needs. This dissertation proposes the Decentralized Prosocial Urban Development System (DePUDS), a novel governance framework designed to overcome these shortcomings by empowering informed collective consensus and including the often-silent majority.&#13;
DePUDS integrates decentralized technologies like blockchain and smart contracts with structured economic incentives, facilitated through an accessible user-friendly Decentralized Application (DApp) to encourage broad participation. This system fosters transparent, inclusive, and equitable urban development. Its core mechanism, adaptive incentive-based zoning, dynamically aligns developer profitability with community-endorsed priorities—such as affordable housing, public amenities, and sustainability—providing flexibility absent in traditional zoning.&#13;
Employing advanced agent-based simulations enhanced by large language models (LLMs), this research rigorously assesses DePUDS's effectiveness across two distinct case studies: Kendall Square in Cambridge, MA (a dynamic innovation hub) and the Inner Richmond District in San Francisco, CA (a culturally rich but housing-constrained neighborhood). Simulation results demonstrate DePUDS significantly aligns development outcomes with community preferences. In Kendall Square, targeted incentives substantially increased affordable housing and public amenities without hindering private investment. In the Inner Richmond, substantial community-driven incentives successfully unlocked constrained development, markedly reducing displacement risks, boosting affordable housing, enhancing amenity access, lowering carbon emissions via density, and preserving local cultural assets.&#13;
The comparative analysis underscores DePUDS's versatility, showing its potential to enhance growth in active markets and stimulate development in constrained ones. Key policy implications point towards structured DApp-based community participation, adaptive incentive zoning, and dedicated funding. While acknowledging practical implementation hurdles (legal, economic, technological), the findings affirm the feasibility, effectiveness, and transformative potential of decentralized, incentive-driven urban governance. This dissertation offers significant theoretical contributions, practical policy guidelines, and future research pathways to foster more inclusive, sustainable, and resilient urban communities.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color</title>
<link href="https://hdl.handle.net/1721.1/164134" rel="alternate"/>
<author>
<name>Myers, Paris G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164134</id>
<updated>2025-12-04T03:09:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color
Myers, Paris G.
Structural color is nature’s programmable color palette. While pigments and dyes absorb light to produce color, structural color uses nanoscale, light-reflecting structures to appear iridescently colored. We present MorphoChrome, an optical device for real-time, handheld, programmable structural color fabrication. Analogous to painting with light, MorphoChrome creates multicolor, structurally colored designs by exposing a commercially available holographic photopolymer film to user-controlled wavelengths. Within the device, red, green, and blue laser diodes pass through an optical prism, combining light and producing mixed color outputs on the film. Additionally, we introduce a resin-based process to adhere and integrate the structurally colored film with flexible and rigid objects and diverse making processes. In this thesis, we focus on the device's optical design and fabrication, color mixing, the color-output UI controller, device aperture tips, and the holographic photopolymer film adherence process. We evaluate the available color space and color resolution, and demonstrate creative fabrication applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biophysical specializations supporting efficiency in neural networks</title>
<link href="https://hdl.handle.net/1721.1/164133" rel="alternate"/>
<author>
<name>Toloza, Enrique H.S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164133</id>
<updated>2025-12-04T03:06:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biophysical specializations supporting efficiency in neural networks
Toloza, Enrique H.S.
Neuroscience and artificial intelligence (AI) research have long enjoyed a synergistic relationship. AI has drawn key inspiration from the organization and function of the brain, while our understanding of the biological processes underlying computation has been profoundly enriched by studying the behavior of artificial systems. As breakthroughs in generative AI continue to transform our world, and as the need for more sustainable artificial neural systems becomes more urgent, the neuro-AI feedback loop has never been more important. AI needs ever more powerful and efficient systems, and neuroscience needs further insight into how our brains work. The development of more brain-like AI promises solutions to both of these problems. Unfortunately, this has thus far been stymied by two critical challenges: 1) how do we identify the features that make a system brain-like, and 2) how do we incorporate these features into artificial networks in a useful and interpretable way? To address the first of these challenges, I will use the remarkable structural and biophysical diversity of the brain as an introduction to what it means for a system to be “brain-like.” This will lead us to a discussion of dendrites, the tree-like structures implicated at virtually every length scale of neural computation. Dendrites will thereafter act as the focal point for our study of brain-like computation. Specifically, I will trace how relatively simple biophysical features defined at the subcellular level can transform the computational landscape of large networks of neurons. To address the second of these challenges, it is necessary to discuss several enduring problems in computational neuroscience, broken down as chapters in this thesis.
In Chapter 2, I will present the development of a new model of single-neuron dynamics that is realistic enough to capture the rich dynamics of dendritic spiking but efficient enough for use in simulations of thousands of neurons, thereby filling a long unmet need in the field. In Chapter 3, I will describe a solution to the general problem of training neural networks with arbitrary differentiable dynamics, thus opening the door for the study of countless biophysical phenomena in the context of networks that can learn to perform computations. In Chapter 4, I will use these tools to test several longstanding hypotheses regarding the utility of different biophysical features in neurons, performing first-of-their-kind fair comparisons of the computational performance of spiking networks, rate-based networks, and networks with nonlinear and linear dendrites. Finally, in Chapter 5, I will use insights gained from studying dendrites at the network level to provide a new perspective as to how the structural and biophysical diversity of the brain could emerge from a complex interplay of functional pressures (e.g., task demands) and physical constraints (e.g., space and energy). Together, the chapters of this thesis outline a general quantitative framework for building more brain-like AI for use in both AI research and neuroscience. This framework illustrates how biophysical specializations arising at the level of single neurons shape the emergent dynamics of the brain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs</title>
<link href="https://hdl.handle.net/1721.1/164132" rel="alternate"/>
<author>
<name>Agarwal, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/164132</id>
<updated>2025-12-04T03:09:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs
Agarwal, Gauri
Understanding the ripple effects of events—both real and speculative—is essential for navigating complex futures. Large Language Models (LLMs) have emerged as powerful tools that offer a user-friendly and narrative experience for question answering and reasoning across large corpora of unstructured data [15, 96]. While LLMs can respond to complex ‘what-if’ questions, they typically provide single, unverifiable answers. Even with retrieval-augmented generation (RAG) that grounds LLM responses on external sources, the opacity of reasoning pathways undermines trust in model outputs [97]. Next Week Tonight (NWT) builds on the narrative and reasoning capabilities of LLMs by making the exploration of what-if futures more transparent and evidence-based. NWT exposes the underlying knowledge graph, allowing users to inspect inference pathways directly. This also enables the generation of multiple, diverse scenarios from a single condition—each following different but explainable causal chains. In testing 15 counterfactual prompts that span diverse news topics, NWT produced scenario narratives that were rated as significantly more causally coherent, transparent, and easier to audit than standard chat completions. Beyond technical performance, NWT reinvents scenario planning as an interactive narrative experience, encouraging curiosity, critical thinking, and deeper engagement with the complexities of future events. By surfacing not only what could happen but why and how, NWT aims to empower analysts, policymakers, and the public to navigate uncertainty with greater clarity and confidence. GitHub: https://github.com/viral-medialab/next-week-tonight
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Learnability of General Reinforcement-Learning Objectives</title>
<link href="https://hdl.handle.net/1721.1/164131" rel="alternate"/>
<author>
<name>Yang, Cambridge</name>
</author>
<id>https://hdl.handle.net/1721.1/164131</id>
<updated>2025-12-04T03:05:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On the Learnability of General Reinforcement-Learning Objectives
Yang, Cambridge
Reinforcement learning enables agents to learn decision-making policies in unknown environments to achieve specified objectives. Traditionally, these objectives are expressed through reward functions, enabling well-established guarantees on learning near-optimal policies with high probability — a property known as probably approximately correct (PAC) learnability. However, reward functions often serve as imperfect surrogates for true objectives, leading to reward hacking and undermining these guarantees. This thesis formalizes the specification and learnability of general reinforcement-learning objectives beyond rewards, addressing fundamental questions of expressivity and policy learnability. I examine three increasingly expressive classes of objectives: (1) Linear Temporal Logic (LTL) objectives, which extend conventional scalar rewards to temporal specifications of behavior and have garnered recent attention; (2) computable objectives, encompassing a broad class of structured, algorithmically definable objectives; and (3) non-computable objectives, representing general objectives beyond the computable class. For LTL objectives, I prove that only finitary LTL objectives are PAC-learnable, while infinite-horizon LTL objectives are inherently intractable under the PAC-MDP framework. Extending this result, I establish a general criterion: an objective is PAC-learnable if it is continuous and computable. This criterion facilitates the establishment of PAC-learnability for various existing classes of objectives with unknown PAC-learnability and informs the design of new, learnable objective specifications. Finally, for non-computable objectives, I introduce limit PAC-learnability, a practical relaxation in which a sequence of computable, PAC-learnable objectives approximates a non-computable objective.
I formalize a universal representation of non-computable objectives using nested limits of computable functions and provide sufficient conditions under which limit PAC-learnability holds. By establishing a theoretical foundation for general RL objectives, this thesis advances our understanding of which objectives are learnable, how they can be specified, and how agents can effectively learn policies to optimize them. These results contribute to the broader goal of designing intelligent agents that align with expressive, formally defined objectives—moving beyond the limitations of reward-based surrogates.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solid-State Quantum Memories for Near-Term Quantum Repeaters</title>
<link href="https://hdl.handle.net/1721.1/164130" rel="alternate"/>
<author>
<name>Sutula, Madison M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164130</id>
<updated>2025-12-04T03:06:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Solid-State Quantum Memories for Near-Term Quantum Repeaters
Sutula, Madison M.
Over the past decade, quantum computers have emerged as a promising technology to enable transformational advances in information processing and communication and solve problems that are intractable to classical computers. While there is great promise in linking quantum computers together over long distances via quantum channels, these technologies are still under development. Solid-state emitters with coherent spin-photon interfaces, long spin lifetimes, and narrow optical transitions are a leading platform for use as quantum memories in networked quantum repeaters. However, while such emitters have already enabled advanced quantum networking demonstrations in laboratory settings, deploying them as useful memory devices at scale remains a key outstanding challenge. In this thesis, we experimentally investigate solid-state quantum memories for quantum information applications. First, we develop experimental techniques to characterize solid-state emitters with high throughput, enabling both a better understanding of the distribution of emitter properties and improved feedback on material preparation and device fabrication. Next, we implement quantum frequency conversion to create a coherent spin-photon interface between silicon-vacancy centers in diamond and optical photons in the low-loss telecom band. Finally, we investigate color centers in other engineering materials, including silicon and silicon carbide, to better understand the fundamental trade space of requirements for solid-state hosts. Together, these efforts represent a significant advance in creating, controlling, and deploying telecom-compatible spin interfaces, paving the way for memory-enabled quantum repeaters.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferring Clonal Dynamics in Blood using Single-Cell Measurements</title>
<link href="https://hdl.handle.net/1721.1/164129" rel="alternate"/>
<author>
<name>Perry, Andrea N.</name>
</author>
<id>https://hdl.handle.net/1721.1/164129</id>
<updated>2025-12-04T03:09:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inferring Clonal Dynamics in Blood using Single-Cell Measurements
Perry, Andrea N.
In this work, we uniquely tag hematopoietic (blood) stem cells with genetic barcodes and follow their progeny over time to ask whether clonally related cells in myeloproliferative neoplasms (MPNs) favor particular blood cell fates. Myeloproliferative neoplasms are clonal disorders driven most frequently by the JAK2-V617F mutation, which arises in a single hematopoietic stem cell (HSC) and ultimately dominates the normal process of blood cell production. Although all patients carry the same driver mutation, they still branch into three distinct disease forms—essential thrombocythemia (ET), polycythemia vera (PV), or primary myelofibrosis (PMF)—and the reason for this variation remains unknown. One compelling hypothesis is that the JAK2-V617F mutation may arise in HSC subsets with intrinsic biases toward platelet-producing cells (as in ET) or red blood cell precursors (as in PV). To investigate this question, we analyzed bone-marrow cKit⁺ cells from mice engineered for inducible MPN disease and CRISPR array repair lineage tracing (CARLIN), using single-cell RNA sequencing. Our gene expression analysis shows that the mutation keeps key signaling and stress-response genes switched on and boosts growth-promoting enzymes, collectively pushing blood production toward the myeloid line. At the resolution of individual CARLIN clones (i.e., cells grouped by a shared progenitor), however, we observe no robust mutation-induced lineage bias—an outcome attributable to limited clone recovery and inter-mouse variability. Crucially, this work establishes a scalable analysis pipeline for future, higher-yield CARLIN experiments. Enhancing lineage-tracing sensitivity, barcode diversity, and biological replication will be essential to test whether these interferon-/stress-response and kinase programs manifest as subtle, clone-level fate biases in JAK2-driven MPN.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Principled Approaches for Latency Reduction in Networking Systems</title>
<link href="https://hdl.handle.net/1721.1/164128" rel="alternate"/>
<author>
<name>Pit-Claudel, Benoit</name>
</author>
<id>https://hdl.handle.net/1721.1/164128</id>
<updated>2025-12-04T03:05:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Principled Approaches for Latency Reduction in Networking Systems
Pit-Claudel, Benoit
Modern networks face unprecedented challenges due to exponential growth in traffic demands, driven by AI workloads in datacenters and the ubiquitous adoption of cloud services across the internet. This dissertation addresses three critical challenges in network systems: efficient scheduling of inference tasks, performance optimization in hybrid networks, and memory-efficient load balancing in datacenters.&#13;
&#13;
First, we introduce Nona, a stochastic scheduling framework that leverages queueing theory to optimize task placement in datacenter environments. By employing randomized algorithms and considering both network and compute constraints, Nona demonstrates improvements of multiple orders of magnitude in job completion times while maintaining implementation simplicity. Nona proposes stochastic scheduling, in which the complexity of the scheduling problem is moved to an offline phase. When handling jobs online, stochastic schedulers are oblivious to the instantaneous state of the network and rely only on predetermined allocation probabilities to make lightning-fast decisions. Second, we present LINC, an in-network coding solution designed for hybrid backbone networks. Through comprehensive mathematical analysis and simulation, we highlight the benefits of network coding in cases where no modifications of the end-hosts are possible. Finally, we develop Sirona, a memory-efficient version of a reactive subflow spraying mechanism suited for hardware deployment. We show that Sirona can achieve competitive performance in homogeneous and heterogeneous datacenter networks while keeping a low memory footprint.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forward Modeling for Bolometry and Disruption Mitigation in Tokamaks or How to Kill Your Plasma With Confidence, Style, and Pizzazz</title>
<link href="https://hdl.handle.net/1721.1/164127" rel="alternate"/>
<author>
<name>Stein-Lubrano, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/164127</id>
<updated>2025-12-04T03:06:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Forward Modeling for Bolometry and Disruption Mitigation in Tokamaks or How to Kill Your Plasma With Confidence, Style, and Pizzazz
Stein-Lubrano, Benjamin
The tokamak is a promising approach to magnetic confinement fusion. Tokamak functionality is threatened by plasma disruption events, which can damage critical machine components. Disruption damage can be mitigated by high-Z impurities, delivered by Massive Gas Injection (MGI) or Shattered Pellet Injection (SPI). Impurities radiate energy out of the plasma and onto the first wall. Evenly distributed radiation causes less damage than unmitigated disruption pathways, which deliver concentrated heat loads. To develop and deploy mitigation systems successfully, it is important to accurately measure and characterize disruption radiation. Accurate measurement is challenged by fast disruption timescales and highly asymmetric radiation patterns, which push the time and spatial resolution limits of radiant heat sensors. Previous radiation analysis approaches are typically limited to two dimensions or fewer by the highly under-determined nature of tomographic reconstruction and the limited spatial resolution of sensors. Two-dimensional analysis is often inaccurate for disruption radiation, which can be highly three-dimensional as a result of localized impurity sources and fast 3D MHD events. In this thesis, I present a new algorithm for 3D radiation analysis in tokamak disruptions, called Emis3D. When Emis3D is applied to mitigated disruptions on the JET tokamak, a significant injection plume radiation effect in mitigated disruptions is revealed. When this effect is included in radiated energy calculations, the mitigated radiation fraction of plasmas with high thermal energy content is significantly higher, indicating that thermal mitigation is more effective than previously thought. Emis3D can also be used as a design tool to evaluate potential radiant heat sensor layouts. When applied to the SPARC tokamak, Emis3D demonstrates that toroidally skewed sensor sightlines improve spatial resolution and reduce blind spots, allowing more accurate measurement.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Milky Way with Stars</title>
<link href="https://hdl.handle.net/1721.1/164126" rel="alternate"/>
<author>
<name>Ou, Xiaowei</name>
</author>
<id>https://hdl.handle.net/1721.1/164126</id>
<updated>2025-12-04T03:05:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Understanding the Milky Way with Stars
Ou, Xiaowei
"How do galaxies form?" is one of the most important questions in modern astrophysics. Hierarchical growth, the most plausible theory behind galaxy formation, suggests that galaxies, including the Milky Way, assemble through the accretion of smaller systems over a scaffolding of invisible dark matter. Such growth is evidenced by the distinct stellar structures discovered in the Galaxy over the last few decades, a search accelerated most recently by the Gaia space mission. Yet we still lack a full picture of the formation of the Milky Way and its stellar components, and we are even further from understanding its underlying dark matter distribution. For the latter, discrepancies between observations and predictions from the CDM model at galactic scales have sparked debate about how well this model accounts for the evolution of the Milky Way. Stellar tracers provide a powerful tool for examining these discrepancies, helping us explore the hierarchical assembly of galaxies in the Local Group and test different models for dark matter. At the same time, cosmological simulations and machine learning techniques offer a bridge between theory and observations.&#13;
&#13;
In this thesis, I combine observations of stellar kinematics and chemistry with cosmological simulations to understand the formation and evolution of the Milky Way and its satellite dwarf galaxies. I map the dark matter distributions in the Milky Way and one of its ultra-faint dwarf galaxies using stellar dynamics, combining simulations of tidal disruption with observational data to study ongoing merger events and how hierarchical assembly shaped the Milky Way today. I conduct robust machine learning searches for kinematic substructures from disrupted dwarf galaxy debris in the Milky Way and utilize stellar heavy element abundances to probe the galaxies that merged with the Milky Way in the past. Lastly, I develop synthetic surveys from simulations to bridge gaps between theory and observation, testing the robustness of current and future methodologies in understanding how the Milky Way came to be.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coating Thermal Noise in Gravitational-Wave Detectors</title>
<link href="https://hdl.handle.net/1721.1/164125" rel="alternate"/>
<author>
<name>Demos, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/164125</id>
<updated>2025-12-04T03:05:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Coating Thermal Noise in Gravitational-Wave Detectors
Demos, Nicholas
The direct detection of gravitational waves, originating from cataclysmic events such as black hole and neutron star mergers, has ushered in a new era of observational astronomy. These signals offer unique insights into astrophysical phenomena and fundamental physics, but fully realizing their potential requires continued improvements in detector sensitivity. A primary factor limiting the performance of current ground-based interferometers like Advanced LIGO and Advanced Virgo is thermal noise arising from the highly reflective multilayer coatings on the test mass mirrors. Reducing this coating thermal noise, particularly its Brownian component, while simultaneously maintaining exceptionally low optical absorption and scatter is necessary to advance detector capabilities.&#13;
&#13;
This thesis addresses this challenge through the characterization and development of alternative coating materials and designs. Central to this work is a dedicated experimental apparatus employing a high-finesse folded optical cavity and a multimode co-resonance technique. This system enables direct, high-precision measurements of coating thermal noise in the frequency band relevant to gravitational-wave detectors and allows for relatively rapid evaluation of candidate coatings, providing timely feedback for materials development.&#13;
&#13;
Coating materials such as niobia-based oxides, hafnia-tantala mixtures, and substoichiometric silica were explored, employing strategies such as compositional optimization, post-deposition annealing, and multimaterial designs with buried layers. Progress toward lower-noise coatings is demonstrated. Highly reflective coatings based on optimized titania-silica, titania-germania, and ternary silicon nitride structures achieved thermal noise levels approximately 75% that of current detector coatings. These coatings also exhibited exceptionally low optical absorption, reaching levels near 1 part-per-million following appropriate heat treatment. While challenges related to defect formation during annealing and discrepancies between different noise measurement methodologies were identified, ongoing research, particularly on defect mitigation in materials like titania-germania, continues to advance the field. The findings presented here contribute to the materials science foundation for improving current gravitational-wave detectors and guiding the design of future observatories.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time-Domain Astrophysics with the Transiting Exoplanet Survey Satellite</title>
<link href="https://hdl.handle.net/1721.1/164124" rel="alternate"/>
<author>
<name>Jayaraman, Rahul</name>
</author>
<id>https://hdl.handle.net/1721.1/164124</id>
<updated>2025-12-04T03:05:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Time-Domain Astrophysics with the Transiting Exoplanet Survey Satellite
Jayaraman, Rahul
The Transiting Exoplanet Survey Satellite (TESS) is conducting an all-sky survey with the primary aim of detecting planets orbiting nearby stars. However, its large field of view and 200 s imaging cadence are useful for other science cases, ranging from stellar astrophysics to transient science. This thesis focuses on using TESS to study both the circumstellar environment and stellar interiors, as well as using the satellite to detect and characterize optical emission from gamma-ray bursts (GRBs). Chapter 2 focuses on the discovery of HD 135348, a "rigidly rotating magnetospheric" star, wherein the stellar magnetic field traps dust in a co-rotating orbit and leads to complex periodic photometric modulations, using solely photometric data. Chapter 3 focuses on the discovery of a long-period subdwarf B (sdB) star using 20 s cadence TESS data and proposes a novel formation mechanism for long-period sdB stars that relies upon stable, nonconservative mass transfer. Chapters 4 and 5 focus on pulsating stars in close binaries, and the evolutionary insights that these "tidally tilted" pulsations enable. In particular, we focus on developing models to track the amplitude and phase of these pulsations as a function of orbital phase, as well as tools to perform physically motivated modeling of the binary components. Chapters 6 and 7 focus on the optical signatures of gamma-ray bursts in TESS, and analyze the prompt optical flash that is often observed contemporaneously with the high-energy emission from these bursts. Chapter 7, in particular, aims to connect the prompt optical flash to the high-energy spectral energy distribution (SED), and explains the suppression of the optical flash (compared to the extrapolation of the high-energy SED) by invoking dust extinction in the host galaxy.
This thesis represents a significant step forward in both stellar and transient astrophysics; throughout this work, we emphasize the use of an unconventional tool, TESS, to pursue timely scientific questions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Intelligence that can Interact with the Physical World</title>
<link href="https://hdl.handle.net/1721.1/164123" rel="alternate"/>
<author>
<name>Wang, Tsun-Hsuan (Johnson)</name>
</author>
<id>https://hdl.handle.net/1721.1/164123</id>
<updated>2025-12-04T03:05:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Building Intelligence that can Interact with the Physical World
Wang, Tsun-Hsuan (Johnson)
Recent advances in Artificial Intelligence (AI) have demonstrated remarkable success in parsing, reasoning, and generating digital content across modalities such as natural language, speech, images, videos, and 3D data. However, these breakthroughs have yet to extend meaningfully beyond the digital realm into the physical world. Developing AI for physical interaction poses challenges such as limited grounding, scarce physical data, and high reliability demands in safety-critical settings. This thesis takes a holistic approach to building intelligence that can interact with the physical world – through the lenses of data, brain, and body. Data is the fuel powering highly capable AI systems. We present methods for data-driven simulation that synthesize sensor measurements from physical processes, and knowledge-driven simulation that leverages large language models to generate actor behaviors and scenarios. By reverse engineering the generative processes behind physical data, we address data scarcity while enabling scalable and effective evaluation. The brain, driven by data, demands a deep understanding of the physical world and reliable interaction with it. We introduce methods to bridge the internet-scale knowledge of digital AI with the physical world to improve generalization and interpretability. For greater reliability, we integrate control-theoretic modules into AI models to enable certifiability. Beyond behavioral intelligence, the body plays a crucial role in physical interaction. We demonstrate how morphological intelligence can emerge from computation and show how pre-trained generative AI models (the brain), when augmented with physics-based simulation that provides feedback on generated data, can be applied to robot design. In sum, this thesis explores how digital AI can be extended into the physical world through a comprehensive investigation of data, brain, and body – laying the groundwork for building physical AI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomolecular Modeling at Scale</title>
<link href="https://hdl.handle.net/1721.1/164122" rel="alternate"/>
<author>
<name>Wohlwend, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/164122</id>
<updated>2025-12-04T03:05:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biomolecular Modeling at Scale
Wohlwend, Jeremy
Predicting the structure and interactions of biomolecules is a fundamental problem in computational biology, with broad implications for disease understanding and drug discovery. Advances in deep learning have enabled remarkable progress, but scaling these approaches to the varied and complex realities of biology is a persistent challenge. This work introduces deep learning methods for biomolecular modeling at scale, designed for efficiency, adaptability, and accessibility. The early chapters present models developed in the general molecular domain, including prediction of structure and interactions for proteins, nucleic acids, and small molecules. To demonstrate how these methods extend to specific biological problems, the latter portion of this work focuses on modeling T cell receptor recognition. As a key immunological mechanism, it highlights the promise of scalable models, but also their present limitations in capturing fine-grained molecular selectivity. Together, these contributions define a framework for bridging foundational models and domain-specific applications, with the potential to scale and meet the demands of increasingly complex biological systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimessenger signatures of compact binaries</title>
<link href="https://hdl.handle.net/1721.1/164121" rel="alternate"/>
<author>
<name>Mo, Geoffrey Kwan Lok</name>
</author>
<id>https://hdl.handle.net/1721.1/164121</id>
<updated>2025-12-04T03:06:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multimessenger signatures of compact binaries
Mo, Geoffrey Kwan Lok
Gravitational waves and electromagnetic observations provide complementary views into some of the most extreme objects in the Universe. In this thesis, I present studies of multimessenger compact binaries from two angles: electromagnetic follow-up of gravitational waves, and gravitational-wave follow-up of electromagnetic sources. I first describe technical and computational efforts to enable the distribution of alerts for kHz gravitational-wave sources as a member of the LIGO–Virgo–KAGRA collaboration, and to improve localizations of these events by folding in galaxy catalog information. I then detail work to enable electromagnetic follow-up observations of binary neutron star and neutron star–black hole mergers with two telescopes, the Transiting Exoplanet Survey Satellite (TESS) and the Wide-field Infrared Transient Explorer (WINTER). Approaching multimessenger observations from the opposite direction, I describe a search for gravitational waves coincident with fast radio bursts from the only Galactic fast radio burst source. Lastly, I perform an electromagnetic study of Type Ia supernovae in the mid-infrared, whose white dwarf binary progenitors will be mHz gravitational-wave sources for the future LISA space mission.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Network Systems Design for Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/164120" rel="alternate"/>
<author>
<name>Yang, Mingran</name>
</author>
<id>https://hdl.handle.net/1721.1/164120</id>
<updated>2025-12-04T03:05:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Network Systems Design for Machine Learning
Yang, Mingran
Machine learning (ML) is transforming modern life by powering a diverse range of groundbreaking applications. As ML models and datasets expand, the scale of training and inference workloads in modern datacenters is increasing at an unprecedented pace. As the demand for computing resources grows, the need for low-latency and energy-efficient network systems becomes increasingly urgent.&#13;
&#13;
This thesis introduces efficient network systems designed to support machine learning workloads. It presents three key systems: Trio-ML, which accelerates ML training; Lightning, which enhances ML inference efficiency; and on-fiber photonic computing, a forward-looking vision for next-generation computing systems.&#13;
&#13;
The first system, Trio-ML, accelerates data-parallel distributed ML training by leveraging in-network computing on Juniper Networks' programmable chipset Trio. Trio-ML features two key designs: in-network aggregation, which utilizes Trio packet processing threads to aggregate gradients directly inside the network, and in-network straggler mitigation, which utilizes Trio timer threads to detect and address stragglers. We prototype Trio-ML on a testbed with three real DNN models (ResNet50, DenseNet161, and VGG11) to demonstrate its effectiveness in mitigating stragglers while performing in-network aggregation. Our evaluations show that when stragglers occur in the cluster, Trio-ML outperforms today's state-of-the-art in-network aggregation solutions by up to 1.8x.&#13;
&#13;
The second system, Lightning, is the first reconfigurable photonic-electronic smartNIC to serve real-time ML inference requests. Lightning uses a fast datapath to feed traffic from the NIC into the photonic domain without creating digital packet processing and data movement bottlenecks. To do so, Lightning leverages a novel reconfigurable count-action abstraction that keeps track of the required computation operations of each inference packet. Our count-action abstraction decouples the compute control plane from the data plane by counting the number of operations in each task and triggers the execution of the next task(s) without interrupting the dataflow. We evaluate Lightning's performance using four platforms: prototype, chip synthesis, emulations, and simulations. Our simulations with large DNN models show that compared to the Nvidia A100 GPU, A100X DPU, and Brainwave smartNIC, Lightning reduces the average inference serving time by 337x, 329x, and 42x, while consuming 352x, 419x, and 54x less energy, respectively.&#13;
&#13;
Building on the in-network computing and photonic computing concepts discussed in Trio-ML and Lightning, we present a forward-looking vision for future computing systems. We argue that pluggable transponders are a prime platform for performing photonic computing inside the network without having to replace networking switches and routers. Optical transponders are ubiquitous in today's wide-area and datacenter networks, giving us a unique opportunity to re-purpose them for photonic computing. To this end, we introduce on-fiber photonic computing, explore key research challenges in bringing this vision to reality, and discuss real-world applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wireless, Battery-Free, High-Sensitivity 5G RF Energy Harvesters for Next Generation IoT Sensor Tags</title>
<link href="https://hdl.handle.net/1721.1/164119" rel="alternate"/>
<author>
<name>Yildirim, Deniz Umut</name>
</author>
<id>https://hdl.handle.net/1721.1/164119</id>
<updated>2025-12-04T03:05:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wireless, Battery-Free, High-Sensitivity 5G RF Energy Harvesters for Next Generation IoT Sensor Tags
Yildirim, Deniz Umut
The Internet of Things (IoT) is revolutionizing various industries, enabling a new wave of smart applications such as automated asset tracking in warehouses, substation monitoring in smart grids, and precision agriculture. However, as IoT devices proliferate, powering these devices in a sustainable and maintenance-free manner has become a critical challenge. Traditional IoT systems rely on batteries, which present issues of limited lifespan, environmental impact, and maintenance costs, especially in large-scale deployments. As a result, the development of battery-free IoT devices powered by ambient energy harvesting has gained significant attention. Among various energy-harvesting technologies, radio frequency (RF) energy harvesting has emerged as a promising solution for powering IoT devices. By harvesting energy from ambient RF signals in licensed frequency bands, RF energy-harvesting systems eliminate the need for batteries and allow for continuous, maintenance-free operation. This is especially crucial in environments where battery replacement is impractical or impossible, such as in large industrial warehouses, remote infrastructure, and hazardous environments. However, achieving high sensitivity and reliable operation in RF energy-harvesting systems poses several challenges. High-sensitivity rectifiers are required to capture and convert weak RF signals into usable energy, but integrating these rectifiers with ultra-low power baseband data processing circuits remains a significant hurdle. Moreover, antenna-rectifier matching calibration must be compatible with the duty-cycled operation of these tags, where brief communication periods are followed by long charging intervals. Additionally, the antenna system must be robust to detuning when placed on various objects, ensuring that the system can operate effectively in diverse environments. This thesis presents two integrated circuits to work towards these goals. 
The first chip is designed with the goal of minimizing the charging time as much as possible, which is critical in scenarios such as inventory management in warehouses and tamper detection. The goal was to achieve &lt; 1-minute charging time while maintaining sensitivity competitive with the state-of-the-art. Unlike previous harvesters that either focused solely on sensitivity without integrating baseband processing and communication, or included those features but considered continuous communication at low sensitivity, the IC developed in this work achieves a sensitivity of −31 dBm and is capable of backscattering data approximately 18 seconds after a cold start. This work also provides a detailed description of the difficulty of achieving higher sensitivities at higher 5G frequencies. The second chip in this thesis builds upon the first one and integrates an analog front-end to convert sensor data for environmental monitoring. We implemented an antenna-rectifier calibration method that is maintained as long as there is RF power, even though the tag goes into long charging periods. Even though the charging time, or the data readout interval, for these tags is more relaxed compared to the inventory management applications, we have also developed a design methodology to minimize the energy required to generate a data packet for backscattering, through which we were able to keep the charging time at 4 minutes while having additional functionalities and backscattering at a higher data rate compared to the first chip. Finally, a simple shielding method was implemented to enable the tags to be placed on any object without resonance frequency detuning. All of these were achieved while still obtaining a sensitivity of −30 dBm, competitive with the state of the art. In addition, the third project investigates the use of heterogeneously integrated “beyond-CMOS” devices to enhance overall rectifier performance.
These emerging devices, fabricated by the Palacios Group at MIT, show promise in overcoming sensitivity limitations commonly found in rectifiers, thereby extending the range and coverage of energy-harvesting IoT systems. We conduct a detailed characterization of these devices, highlighting their unique physical behaviors not present in standard CMOS technology, and provide system-level design guidelines for building improved rectifiers. Preliminary simulation results show that rectifiers using negative-capacitance field-effect transistors (NCFETs) can harvest up to four times as much power as their CMOS-based counterparts, while maintaining the same sensitivity. This thesis outlines the design, implementation, and evaluation of all three systems. The two aforementioned ICs are tested both in simulation and in real-world scenarios such as a typical office environment. Meanwhile, the novel device technologies are explored through simulation, demonstrating their significant potential for next-generation rectifier design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Driver and Pedestrian Gesture Use in the Boston Area. Automated Vehicles May Need More Than Kinematics in Ambiguous Situations</title>
<link href="https://hdl.handle.net/1721.1/164118" rel="alternate"/>
<author>
<name>Weibert, Alexander</name>
</author>
<author>
<name>Manstetten, Dietrich</name>
</author>
<author>
<name>Reimer, Bryan</name>
</author>
<author>
<name>Gershon, Pnina</name>
</author>
<author>
<name>Mehler, Bruce</name>
</author>
<author>
<name>Abdenebaoui, Larbi</name>
</author>
<author>
<name>Hatice Şahin, İppoliti</name>
</author>
<id>https://hdl.handle.net/1721.1/164118</id>
<updated>2025-12-04T03:13:50Z</updated>
<published>2025-10-08T00:00:00Z</published>
<summary type="text">Analysis of Driver and Pedestrian Gesture Use in the Boston Area. Automated Vehicles May Need More Than Kinematics in Ambiguous Situations
Weibert, Alexander; Manstetten, Dietrich; Reimer, Bryan; Gershon, Pnina; Mehler, Bruce; Abdenebaoui, Larbi; Hatice Şahin, İppoliti
Roadways, despite their formal regulations, are dynamic spaces where humans interact beyond formal rules to resolve conflicts. In ambiguous situations, the right of way is often unclear. Self-driving vehicles in urban traffic introduce challenges to their coexistence with humans, indicating a need for greater social awareness in these vehicles. To investigate social interactions among roadway users, we analyzed a naturalistic driving dataset, focusing on instances where drivers yielded to pedestrians and noting the gestures used. Video analysis showed that gestures were more common in ambiguous situations than in regulated scenarios. Drivers used gestures to navigate the right of way efficiently, while pedestrians used them to express gratitude. These findings highlight the importance of understanding social expressions in designing socially aware self-driving vehicles.
AutomotiveUI Adjunct ’25, Brisbane, QLD, Australia
</summary>
<dc:date>2025-10-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing and Optimizing Realistic Workloads on a Commercial Compute-in-SRAM Device</title>
<link href="https://hdl.handle.net/1721.1/164117" rel="alternate"/>
<author>
<name>Zhang, Niansong</name>
</author>
<author>
<name>Zhu, Wenbo</name>
</author>
<author>
<name>Golden, Courtney</name>
</author>
<author>
<name>Ilan, Dan</name>
</author>
<author>
<name>Chen, Hongzheng</name>
</author>
<author>
<name>Batten, Christopher</name>
</author>
<author>
<name>Zhang, Zhiru</name>
</author>
<id>https://hdl.handle.net/1721.1/164117</id>
<updated>2025-12-04T03:13:46Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Characterizing and Optimizing Realistic Workloads on a Commercial Compute-in-SRAM Device
Zhang, Niansong; Zhu, Wenbo; Golden, Courtney; Ilan, Dan; Chen, Hongzheng; Batten, Christopher; Zhang, Zhiru
Compute-in-SRAM architectures offer a promising approach to achieving higher performance and energy efficiency across a range of data-intensive applications. However, prior evaluations have largely relied on simulators or small prototypes, limiting the understanding of their real-world potential. In this work, we present a comprehensive performance and energy characterization of a commercial compute-in-SRAM device, the GSI APU, under realistic workloads. We compare the GSI APU against established architectures, including CPUs and GPUs, to quantify its energy efficiency and performance potential. We introduce an analytical framework for general-purpose compute-in-SRAM devices that reveals fundamental optimization principles by modeling performance trade-offs, thereby guiding program optimizations.&#13;
&#13;
Exploiting the fine-grained parallelism of tightly integrated memory-compute architectures requires careful data management. We address this by proposing three optimizations: communication-aware reduction mapping, coalesced DMA, and broadcast-friendly data layouts. When applied to retrieval-augmented generation (RAG) over large corpora (10GB–200GB), these optimizations enable our compute-in-SRAM system to accelerate retrieval by 4.8×–6.6× over an optimized CPU baseline, improving end-to-end RAG latency by 1.1×–1.8×. The shared off-chip memory bandwidth is modeled using a simulated HBM, while all other components are measured on the real compute-in-SRAM device. Critically, this system matches the performance of an NVIDIA A6000 GPU for RAG while being significantly more energy-efficient (54.4×–117.9× reduction). These findings validate the viability of compute-in-SRAM for complex, real-world applications and provide guidance for advancing the technology.
MICRO ’25, Seoul, Republic of Korea
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Voice to Vision: A Sociotechnical System for Transparent Civic Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/164116" rel="alternate"/>
<author>
<name>Hughes, Margaret</name>
</author>
<author>
<name>Overney, Cassandra</name>
</author>
<author>
<name>Kamra, Ashima</name>
</author>
<author>
<name>Tepale, Jasmin</name>
</author>
<author>
<name>Hamby, Elizabeth</name>
</author>
<author>
<name>Jasim, Mahmood</name>
</author>
<author>
<name>Roy, Deb</name>
</author>
<id>https://hdl.handle.net/1721.1/164116</id>
<updated>2025-12-04T03:13:39Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Voice to Vision: A Sociotechnical System for Transparent Civic Decision-Making
Hughes, Margaret; Overney, Cassandra; Kamra, Ashima; Tepale, Jasmin; Hamby, Elizabeth; Jasim, Mahmood; Roy, Deb
Communities frequently report sending feedback “into a void” during community engagement processes like neighborhood planning, creating a critical disconnect between public input and decision-making. Voice to Vision addresses this gap with a sociotechnical system that comprises three integrated components: a flexible data architecture linking community input to planning outputs, a sensemaking interface for planners to analyze and synthesize feedback, and a community-facing platform that makes the entire engagement process transparent. By creating a shared information space between stakeholders, our system demonstrates how structured data and specialized interfaces can foster cooperation across stakeholder groups, while addressing tensions in accessibility and trust formation. Our CSCW demonstration will showcase this system’s ability to transform opaque civic decision-making processes into collaborative exchanges, inviting feedback on its potential applications beyond urban planning.
CSCW Companion ’25, Bergen, Norway
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Augmenting Collaborative Problem-Solving: Exploring the Design and Use of GenAI for Groupwork</title>
<link href="https://hdl.handle.net/1721.1/164115" rel="alternate"/>
<author>
<name>Johnson, Janet</name>
</author>
<author>
<name>Rick, Steven</name>
</author>
<author>
<name>Grønbæk, Jens Emil</name>
</author>
<author>
<name>Wong, Emily</name>
</author>
<author>
<name>Yin, Ming</name>
</author>
<author>
<name>Nebeling, Michael</name>
</author>
<author>
<name>Klein, Mark</name>
</author>
<author>
<name>Ackerman, Mark</name>
</author>
<author>
<name>Malone, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/164115</id>
<updated>2025-12-04T03:13:44Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Augmenting Collaborative Problem-Solving: Exploring the Design and Use of GenAI for Groupwork
Johnson, Janet; Rick, Steven; Grønbæk, Jens Emil; Wong, Emily; Yin, Ming; Nebeling, Michael; Klein, Mark; Ackerman, Mark; Malone, Thomas
Complex problem-solving and creative work in the real world are rarely individual endeavors and typically unfold within teams and group settings. While advancements in generative artificial intelligence (GenAI) have shown promise in augmenting creativity and productivity, these tools are primarily designed for individual use and overlook group dynamics and the collaborative aspects of teamwork. This workshop will provide a platform for researchers and practitioners to explore the design of future human-AI groups across four key themes: (1) the role of GenAI in group settings, (2) collaborative and multimodal interactions with GenAI, (3) evaluating GenAI’s influence within groups and designing for appropriate reliance, and (4) evolving group practices in the presence of GenAI. We hope to build a community and construct alignment across participants around how to pursue research that understands how GenAI can augment, undermine, or bring new practices to collaborative settings and groupwork.
CSCW Companion ’25, Bergen, Norway
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of systematic uncertainty-aware neural network trainings for binned-likelihood analyses at the LHC</title>
<link href="https://hdl.handle.net/1721.1/164114" rel="alternate"/>
<author>
<name>CMS Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/164114</id>
<updated>2025-12-04T03:14:35Z</updated>
<published>2025-11-26T00:00:00Z</published>
<summary type="text">Development of systematic uncertainty-aware neural network trainings for binned-likelihood analyses at the LHC
CMS Collaboration
We propose a neural network training method capable of accounting for the effects of systematic variations of the data model in the training process and describe its extension towards neural network multiclass classification. The procedure is evaluated on the realistic case of the measurement of Higgs boson production via gluon fusion and vector boson fusion in the ττ decay channel at the CMS experiment. The neural network output functions are used to infer the signal strengths for inclusive production of Higgs bosons as well as for their production via gluon fusion and vector boson fusion. We observe improvements of 12% and 16% in the uncertainties in the signal strengths for gluon fusion and vector boson fusion, respectively, compared with a conventional neural network training based on cross-entropy.
</summary>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designs Related Through Projective and Hopf Maps</title>
<link href="https://hdl.handle.net/1721.1/164113" rel="alternate"/>
<author>
<name>Lindblad, Ayodeji</name>
</author>
<id>https://hdl.handle.net/1721.1/164113</id>
<updated>2025-12-04T03:14:37Z</updated>
<published>2025-11-28T00:00:00Z</published>
<summary type="text">Designs Related Through Projective and Hopf Maps
Lindblad, Ayodeji
We verify a construction which, for K the reals, complex numbers, quaternions, or octonions, builds a spherical t-design by placing a spherical t-design on each K-projective or K-Hopf fiber associated to the points of a ⌊t/2⌋-design on a quotient projective space KPⁿ ≠ OP² or sphere. This generalizes work of König and Kuperberg, who verified the K = C case of the projective settings, and of Okuda, who (inspired by independent observation of this construction by Cohn, Conway, Elkies, and Kumar) verified the K = C case of the generalized Hopf settings.
</summary>
<dc:date>2025-11-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>A generative deep learning approach to de novo antibiotic design</title>
<link href="https://hdl.handle.net/1721.1/164112" rel="alternate"/>
<author>
<name>Krishnan, Aarti</name>
</author>
<author>
<name>Anahtar, Melis N.</name>
</author>
<author>
<name>Valeri, Jacqueline A.</name>
</author>
<author>
<name>Jin, Wengong</name>
</author>
<author>
<name>Donghia, Nina M.</name>
</author>
<author>
<name>Sieben, Leif</name>
</author>
<author>
<name>Luttens, Andreas</name>
</author>
<author>
<name>Zhang, Yu</name>
</author>
<author>
<name>Modaresi, Seyed Majed</name>
</author>
<author>
<name>Hennes, Andrew</name>
</author>
<author>
<name>Fromer, Jenna</name>
</author>
<author>
<name>Bandyopadhyay, Parijat</name>
</author>
<author>
<name>Chen, Jonathan C.</name>
</author>
<author>
<name>Rehman, Danyal</name>
</author>
<author>
<name>Desai, Ronak</name>
</author>
<author>
<name>Edwards, Paige</name>
</author>
<author>
<name>Lach, Ryan S.</name>
</author>
<author>
<name>Aschtgen, Marie-Stéphanie</name>
</author>
<author>
<name>Gaborieau, Margaux</name>
</author>
<author>
<name>Gaetani, Massimiliano</name>
</author>
<author>
<name>Palace, Samantha G.</name>
</author>
<author>
<name>Omori, Satotaka</name>
</author>
<author>
<name>Khonde, Lutete</name>
</author>
<author>
<name>Moroz, Yurii S.</name>
</author>
<author>
<name>Blough, Bruce</name>
</author>
<author>
<name>Jin, Chunyang</name>
</author>
<author>
<name>Loh, Edmund</name>
</author>
<author>
<name>Grad, Yonatan H.</name>
</author>
<author>
<name>Saei, Amir Ata</name>
</author>
<author>
<name>Coley, Connor W.</name>
</author>
<author>
<name>Wong, Felix</name>
</author>
<author>
<name>Collins, James J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164112</id>
<updated>2025-12-03T06:24:53Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">A generative deep learning approach to de novo antibiotic design
Krishnan, Aarti; Anahtar, Melis N.; Valeri, Jacqueline A.; Jin, Wengong; Donghia, Nina M.; Sieben, Leif; Luttens, Andreas; Zhang, Yu; Modaresi, Seyed Majed; Hennes, Andrew; Fromer, Jenna; Bandyopadhyay, Parijat; Chen, Jonathan C.; Rehman, Danyal; Desai, Ronak; Edwards, Paige; Lach, Ryan S.; Aschtgen, Marie-Stéphanie; Gaborieau, Margaux; Gaetani, Massimiliano; Palace, Samantha G.; Omori, Satotaka; Khonde, Lutete; Moroz, Yurii S.; Blough, Bruce; Jin, Chunyang; Loh, Edmund; Grad, Yonatan H.; Saei, Amir Ata; Coley, Connor W.; Wong, Felix; Collins, James J.
The antimicrobial resistance crisis necessitates structurally distinct antibiotics. While deep learning approaches can identify antibacterial compounds from existing libraries, structural novelty remains limited. Here, we developed a generative artificial intelligence framework for designing de novo antibiotics through two approaches: a fragment-based method to comprehensively screen &gt;10⁷ chemical fragments in silico against Neisseria gonorrhoeae or Staphylococcus aureus, subsequently expanding promising fragments, and an unconstrained de novo compound generation, each using genetic algorithms and variational autoencoders. Of 24 synthesized compounds, seven demonstrated selective antibacterial activity. Two lead compounds exhibited bactericidal efficacy against multidrug-resistant isolates with distinct mechanisms of action and reduced bacterial burden in vivo in mouse models of N. gonorrhoeae vaginal infection and methicillin-resistant S. aureus skin infection. We further validated structural analogs for both compound classes as antibacterial. Our approach enables the generative deep-learning-guided design of de novo antibiotics, providing a platform for mapping uncharted regions of chemical space.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frontiers of biological material intelligence</title>
<link href="https://hdl.handle.net/1721.1/164111" rel="alternate"/>
<author>
<name>Marom, Lee</name>
</author>
<author>
<name>Buehler, Markus J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164111</id>
<updated>2025-12-03T06:25:07Z</updated>
<published>2025-11-26T00:00:00Z</published>
<summary type="text">Frontiers of biological material intelligence
Marom, Lee; Buehler, Markus J.
Biological materials exhibit a form of intelligence that enables them to sense, adapt, and self-optimize in response to their environments. Unlike synthetic materials, which are often designed for singular, static functions, natural material systems integrate sensing, memory, and feedback directly into their architectures. As industries face increasing demands for resilience, sustainability, and efficiency, the development of intelligent materials has become a promising step toward the future of material innovation. Advances in artificial intelligence and machine learning, along with mathematical frameworks spanning graph theory and category theory, provide powerful tools to uncover the underlying design principles of intelligent biological materials. Simultaneously, digital fabrication methods, including additive manufacturing and biofabrication, allow the scalable realization of adaptive material systems. As the integration of deep biological insight, computational modeling, and advanced fabrication continues to evolve, it sets the stage for a profound shift in how we conceive, create, and deploy materials. Advancing this convergence will accelerate the development of intelligent systems that are capable of autonomous adaptation, long-term resilience, and embedded functionality across scales and environments.
</summary>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging community engagement and human-centered design to develop multilevel implementation strategies to enhance adoption of a health equity intervention</title>
<link href="https://hdl.handle.net/1721.1/164110" rel="alternate"/>
<author>
<name>Price, Maggi A.</name>
</author>
<author>
<name>Mulkern, Patrick J.</name>
</author>
<author>
<name>Condon, Madelaine</name>
</author>
<author>
<name>Rakhilin, Marina</name>
</author>
<author>
<name>Johansen, Kara</name>
</author>
<author>
<name>Lyon, Aaron R.</name>
</author>
<author>
<name>Saldana, Lisa</name>
</author>
<author>
<name>Pachankis, John</name>
</author>
<author>
<name>Woodward, Sue A.</name>
</author>
<author>
<name>Roeder, Kathryn M.</name>
</author>
<author>
<name>Moran, Lyndsey R.</name>
</author>
<author>
<name>Jerskey, Beth A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164110</id>
<updated>2025-12-03T06:25:10Z</updated>
<published>2025-11-24T00:00:00Z</published>
<summary type="text">Leveraging community engagement and human-centered design to develop multilevel implementation strategies to enhance adoption of a health equity intervention
Price, Maggi A.; Mulkern, Patrick J.; Condon, Madelaine; Rakhilin, Marina; Johansen, Kara; Lyon, Aaron R.; Saldana, Lisa; Pachankis, John; Woodward, Sue A.; Roeder, Kathryn M.; Moran, Lyndsey R.; Jerskey, Beth A.
Background Health equity intervention implementation (which promotes positive health outcomes for populations experiencing disproportionately worse health) is often impeded by health-equity-specific barriers like provider bias; few studies demonstrate how to overcome these barriers through implementation strategies. An urgent health equity problem in the U.S. is the mental health of transgender youth. To address this, we developed Gender-Affirming Psychotherapy (GAP), a health equity intervention comprising best-practice mental health care for transgender youth. This paper details the identification of implementation determinants and the development of targeted strategies to promote provider adoption of GAP. Methods This study represents part of a larger study of mental health provider adoption of GAP. Here we describe the first 2 stages of the 3-stage community-engaged and human-centered design process – Discover, Design/Build, and Test – to identify implementation determinants of adoption and develop implementation strategies with transgender youth, their parents, and mental health providers. This process involved collecting data via focus groups, design meetings, usability testing, and champion meetings. Data were analyzed using rapid and conventional content analysis. Qualitative coding of implementation determinants was guided by the Health Equity Implementation Framework, and implementation strategy coding was facilitated by the ERIC Implementation Strategy Compilation. Results We identified 15 determinants of GAP adoption, and all were specific to the transgender population (e.g., inclusive record system, anti-transgender attitudes). Seventeen implementation strategies were recommended and 12 were developed, collectively addressing all identified determinants. Most strategies were packaged into an online self-paced mental health provider training (implementation intervention) with 6 training tools. 
Additional inner-setting strategies were designed to support training uptake (e.g., mandate training) and GAP adoption (e.g., change record system). Conclusions Community-engaged and human-centered design methods can identify health equity intervention implementation determinants and develop targeted strategies. We highlight five generalizable takeaways for health equity implementation scientists: (1) implementer bias may be a key barrier, (2) experience with the health equity population may be an important facilitator, (3) stakeholder stories may be an effective training tool, (4) inner-setting-level implementation strategies may be necessary, and (5) teaching implementers how to build implementation strategies can overcome resource constraints. Trial registration November 11, 2022; NCT05626231.
</summary>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breeding of microbiomes conferring salt tolerance to plants</title>
<link href="https://hdl.handle.net/1721.1/164109" rel="alternate"/>
<author>
<name>Guilherme Pereira, Caio</name>
</author>
<author>
<name>Edwards, Joseph A.</name>
</author>
<author>
<name>Khasanova, Albina</name>
</author>
<author>
<name>Carlson, Alexis</name>
</author>
<author>
<name>Brisson, Vanessa</name>
</author>
<author>
<name>Schaefer, Estelle</name>
</author>
<author>
<name>Glavina del Rio, Tijana</name>
</author>
<author>
<name>Tringe, Susannah</name>
</author>
<author>
<name>Vogel, John P.</name>
</author>
<author>
<name>Des Marais, David L.</name>
</author>
<author>
<name>Juenger, Thomas E.</name>
</author>
<author>
<name>Mueller, Ulrich G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164109</id>
<updated>2025-12-03T06:25:08Z</updated>
<published>2025-11-27T00:00:00Z</published>
<summary type="text">Breeding of microbiomes conferring salt tolerance to plants
Guilherme Pereira, Caio; Edwards, Joseph A.; Khasanova, Albina; Carlson, Alexis; Brisson, Vanessa; Schaefer, Estelle; Glavina del Rio, Tijana; Tringe, Susannah; Vogel, John P.; Des Marais, David L.; Juenger, Thomas E.; Mueller, Ulrich G.
Background Microbiome breeding through host-mediated selection is a technique to artificially select for microbiomes conferring beneficial properties to plants. Using a systematic selection protocol that maximises the heritability of microbiome effects, transmission fidelity, and microbiome stability through multiple selection cycles, we previously developed root-associated microbial communities conferring sodium and aluminium tolerance to Brachypodium distachyon, a model for cereal crops. Here, we explore the physiological mechanisms underlying our selected microbiomes’ effect on plant fitness and analyse how our selection protocol shaped the composition and structure of these microbiomes. We analysed the effects of our selected microbiomes on plant fitness and tissue-nutrient concentration, then used 16S rRNA amplicon sequencing to examine microbial community composition and co-occurrence network patterns. Results Our sodium-selected microbiomes reduced leaf sodium concentration by ~ 50%, whereas the aluminium-selected microbiomes had no effect on leaf-tissue nutrient concentration, suggesting different mechanisms underlying the microbiome-mediated stress tolerance. By testing the selected microbiomes in a cross-fostering experiment, we show that our artificially selected microbiomes attained (a) ecological robustness contributing to transplantability (i.e. inheritance) of microbiome-encoded effects between plants; and (b) network features identifying key bacteria promoting salt-stress tolerance. Conclusions Combined, these findings elucidate critical mechanisms underlying host-mediated artificial selection as a framework to breed microbiomes with targeted benefits for plants under salt stresses, with significant implications for sustainable agriculture.
</summary>
<dc:date>2025-11-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of ψ(2S) to J/ψ cross-section ratio as function of multiplicity in pPb collisions at √sNN = 8.16 TeV</title>
<link href="https://hdl.handle.net/1721.1/164108" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>Alessio, F.</name>
</author>
<author>
<name>The LHCb collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/164108</id>
<updated>2026-03-08T03:32:05Z</updated>
<published>2025-11-26T00:00:00Z</published>
<summary type="text">Measurement of ψ(2S) to J/ψ cross-section ratio as function of multiplicity in pPb collisions at √sNN = 8.16 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Alessio, F.; The LHCb collaboration
The production ratio of ψ(2S) to J/ψ charmonium states is presented as a function of multiplicity in proton-lead collisions at a centre-of-mass energy of √sNN = 8.16 TeV, for both prompt and nonprompt sources. The total luminosity recorded by the LHCb experiment corresponds to 13.6 nb⁻¹ for pPb collisions and 20.8 nb⁻¹ for Pbp collisions, where the first particle listed is the one traveling towards the detector. Measurements are performed in the dimuon final state at forward (backward) centre-of-mass rapidity, with respect to the proton direction, 1.5 &lt; y* &lt; 4.0 (−5.0 &lt; y* &lt; −2.5) for pPb (Pbp) collisions. A multiplicity dependence of the prompt production ratio is observed in pPb collisions, whereas no dependence is found in nonprompt production, nor in either prompt or nonprompt production in Pbp collisions. These results suggest that in the Pb-going direction additional suppression mechanisms beyond comover effects may be present, possibly related to the formation of quark-gluon plasma. This highlights a transition from small to large collision systems and provides important insight into the suppression of charmonia in proton-nucleus collisions.
</summary>
<dc:date>2025-11-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>CURENet: combining unified representations for efficient chronic disease prediction</title>
<link href="https://hdl.handle.net/1721.1/164107" rel="alternate"/>
<author>
<name>Dao, Cong-Tinh</name>
</author>
<author>
<name>Phan, Nguyen M. T.</name>
</author>
<author>
<name>Ding, Jun-En</name>
</author>
<author>
<name>Wu, Chenwei</name>
</author>
<author>
<name>Restrepo, David</name>
</author>
<author>
<name>Luo, Dongsheng</name>
</author>
<author>
<name>Zhao, Fanyi</name>
</author>
<author>
<name>Liao, Chun-Chieh</name>
</author>
<author>
<name>Peng, Wen-Chih</name>
</author>
<author>
<name>Wang, Chi-Te</name>
</author>
<author>
<name>Chen, Pei-Fu</name>
</author>
<author>
<name>Chen, Ling</name>
</author>
<author>
<name>Ju, Xinglong</name>
</author>
<author>
<name>Liu, Feng</name>
</author>
<author>
<name>Hung, Fang-Ming</name>
</author>
<id>https://hdl.handle.net/1721.1/164107</id>
<updated>2026-03-08T03:32:04Z</updated>
<published>2025-11-27T00:00:00Z</published>
<summary type="text">CURENet: combining unified representations for efficient chronic disease prediction
Dao, Cong-Tinh; Phan, Nguyen M. T.; Ding, Jun-En; Wu, Chenwei; Restrepo, David; Luo, Dongsheng; Zhao, Fanyi; Liao, Chun-Chieh; Peng, Wen-Chih; Wang, Chi-Te; Chen, Pei-Fu; Chen, Ling; Ju, Xinglong; Liu, Feng; Hung, Fang-Ming
Electronic health records (EHRs) are designed to synthesize diverse data types, including unstructured clinical notes, structured lab tests, and time-series visit data. Physicians draw on these multimodal and temporal sources of EHR data to form a comprehensive view of a patient’s health, which is crucial for informed therapeutic decision-making. Yet, most predictive models fail to fully capture the interactions, redundancies, and temporal patterns across multiple data modalities, often focusing on a single data type or overlooking these complexities. In this paper, we present CURENet, a multimodal model (Combining Unified Representations for Efficient chronic disease prediction) that integrates unstructured clinical notes, lab tests, and patients’ time-series data by utilizing large language models (LLMs) for clinical text processing and textual lab tests, as well as transformer encoders for longitudinal sequential visits. CURENet captures the intricate interactions between different forms of clinical data, yielding a more reliable predictive model for chronic illnesses. We evaluated CURENet using the public MIMIC-III and private FEMH datasets, where it achieved over 94% accuracy in predicting the top 10 chronic conditions in a multi-label framework. Our findings highlight the potential of multimodal EHR integration to enhance clinical decision-making and improve patient outcomes.
</summary>
<dc:date>2025-11-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective field theory factorization for diffraction</title>
<link href="https://hdl.handle.net/1721.1/164106" rel="alternate"/>
<author>
<name>Lee, Kyle</name>
</author>
<author>
<name>Schindler, Stella T.</name>
</author>
<author>
<name>Stewart, Iain W.</name>
</author>
<id>https://hdl.handle.net/1721.1/164106</id>
<updated>2026-03-08T03:32:03Z</updated>
<published>2025-11-25T00:00:00Z</published>
<summary type="text">Effective field theory factorization for diffraction
Lee, Kyle; Schindler, Stella T.; Stewart, Iain W.
We derive a factorization formula for coherent and incoherent ep diffraction using the soft collinear effective theory, utilizing multiple power expansion parameters to handle different kinematic regions. This goes beyond the known hard-collinear diffractive factorization to address the small-x Regge dynamics and Pomeron exchange from first principles. The effective field theory analysis also uncovers and factorizes an important irreducible incoherent background generated by color-nonsinglet exchange, dubbed “quasi-diffraction”, for which we calculate the associated Sudakov suppression. For unpolarized scattering we show that there are four diffractive structure functions at leading power, and point out the importance of studying F_{3,4}^D through asymmetries, in addition to F_{2,L}^D. For the quasi-diffractive background, we make model independent predictions for ratios of the corresponding structure functions in a perturbative kinematic region. Our analysis also makes predictions for six leading-power spin-dependent structure functions. Finally, we provide connections to diffractive parton distributions, and assess the Ingelman-Schlein model. Our work lays a path for further QCD-based studies of diffraction.
</summary>
<dc:date>2025-11-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coherent photoproduction of ρ0, ω and excited vector mesons in ultraperipheral PbPb collisions</title>
<link href="https://hdl.handle.net/1721.1/164105" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164105</id>
<updated>2026-03-08T03:32:06Z</updated>
<published>2025-11-18T00:00:00Z</published>
<summary type="text">Coherent photoproduction of ρ0, ω and excited vector mesons in ultraperipheral PbPb collisions
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The invariant-mass distribution for the coherent photoproduction of dipions in ultraperipheral PbPb collisions is measured using data, corresponding to an integrated luminosity of 224.6 ± 9.6 μb⁻¹, collected by the LHCb experiment in 2018 at a nucleon-nucleon centre-of-mass energy √sNN = 5.02 TeV. In the mass range from 400 to 1200 MeV, the results are consistent with previous experiments, with the spectrum dominated by the ρ0 meson, which interferes with a nonresonant component, together with a smaller ω meson contribution. In an extended mass range up to 2300 MeV, models previously used do not fit the data and a consistent description requires the introduction of two resonances at masses of 1350 ± 20 MeV and 1790 ± 20 MeV with widths of about 300 MeV. The cross-section for each meson is measured differentially in twelve bins of rapidity from 2.05 to 4.90. The ρ0 cross-section increases with rapidity from about 400 to 600 mb and is measured with a typical precision of 8%, while the cross-section times branching fraction for the ω, ρ′ and ρ′′, with the statistical precision of the data, do not have a pronounced rapidity dependence and are between 0.5 and 1.5 mb, with uncertainties up to 30%. A large nuclear suppression is observed for the ρ0 meson compared to expectations based on photoproduction on the proton that use the impulse approximation. Significant suppression is also observed compared to that predicted by elastic scattering described in the Glauber approach, or with the addition of inelastic scattering in a Gribov-Glauber model.
</summary>
<dc:date>2025-11-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forcing with Invariant Measures</title>
<link href="https://hdl.handle.net/1721.1/164104" rel="alternate"/>
<author>
<name>Ackerman, Nathanael</name>
</author>
<author>
<name>Freer, Cameron</name>
</author>
<author>
<name>Golshani, Mohammad</name>
</author>
<author>
<name>Mirabi, Mostafa</name>
</author>
<author>
<name>Patel, Rehana</name>
</author>
<id>https://hdl.handle.net/1721.1/164104</id>
<updated>2026-03-08T03:32:08Z</updated>
<published>2025-11-24T00:00:00Z</published>
<summary type="text">Forcing with Invariant Measures
Ackerman, Nathanael; Freer, Cameron; Golshani, Mohammad; Mirabi, Mostafa; Patel, Rehana
This paper introduces a model-theoretic generalization of the notion of forcing with random reals, in which forcing gives rise to random generic structures. Specifically, we consider forcing with κ-Borel probability measures on the space of L-structures with a (possibly uncountable) infinite set X, focusing on those that are invariant under the action of the symmetric group Sym(X). We demonstrate how any Sym(X)-invariant measure where X is countable can be uniquely extended to a Sym(Y)-invariant measure where Y is uncountable, and prove that forcing with such measures satisfies the countable chain condition. We also show that we can uniformly distinguish between these random generic structures and the Cohen generic structures that arise from forcing with a strong Fraïssé class: there is a κ-Borel set of low complexity that contains every Cohen generic structure that is not highly homogeneous but contains no random generic structure, implying that a structure that is not highly homogeneous cannot be both Cohen generic and random generic. Finally, we answer an open question of Kostana in the case of ω₁, by establishing a connection between forcing with a strong Fraïssé class and Cohen forcing.
</summary>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gnotobiotic growth and phosphorus limitation of Arabidopsis thaliana and co-occurring microbes on phosphated iron oxides</title>
<link href="https://hdl.handle.net/1721.1/164103" rel="alternate"/>
<author>
<name>Mackie, Amanda M.</name>
</author>
<author>
<name>Schuler, Christopher J.</name>
</author>
<author>
<name>McRose, Darcy L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164103</id>
<updated>2026-03-08T03:31:52Z</updated>
<published>2025-11-27T00:00:00Z</published>
<summary type="text">Gnotobiotic growth and phosphorus limitation of Arabidopsis thaliana and co-occurring microbes on phosphated iron oxides
Mackie, Amanda M.; Schuler, Christopher J.; McRose, Darcy L.
The macronutrient phosphorus is vital for sustaining cellular processes in all life forms. Due to its frequent adsorption on iron minerals, phosphorus bioavailability is low in many soils. While the abiotic adsorption of phosphate on iron minerals has been well studied, the direct effects of this process on bioavailability to plants and microbes have not been thoroughly investigated in a simplified laboratory system. We developed a hydroponic growth system that uses hydrous ferric oxide (HFO) to induce phosphorus limitation and can enable both plant and microbial cultivation as well as gnotobiotic co-culture. We demonstrate that this system can be used for phosphorus-limited growth of the model plant Arabidopsis thaliana as well as two root-associated bacterial isolates (from the genera Rhizobium and Pseudomonas). Elemental analysis of phosphorus and iron concentration in A. thaliana shoots reveals that the addition of increasing amounts of HFO leads to a progressive decrease in phosphorus concentration but does not affect iron quotas. We also report that phosphorus concentrations in both bacterial isolates decrease when cultivated in media supplemented with HFO. We further show that A. thaliana can be co-cultured with a Rhizobium isolate in our phosphorus-limited hydroponic system with bacteria relying on plant photosynthate as their sole carbon source. Our work provides a controlled demonstration of the effects of mineral adsorption on phosphorus bioavailability and a tool for further investigation of how plants and microbes access phosphorus in the environment.
</summary>
<dc:date>2025-11-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond submodular maximization via one-sided smoothness</title>
<link href="https://hdl.handle.net/1721.1/164102" rel="alternate"/>
<author>
<name>Ghadiri, Mehrdad</name>
</author>
<author>
<name>Santiago, Richard</name>
</author>
<author>
<name>Shepherd, Bruce</name>
</author>
<id>https://hdl.handle.net/1721.1/164102</id>
<updated>2026-03-08T03:32:05Z</updated>
<published>2025-11-24T00:00:00Z</published>
<summary type="text">Beyond submodular maximization via one-sided smoothness
Ghadiri, Mehrdad; Santiago, Richard; Shepherd, Bruce
The multilinear framework for submodular maximization was developed to achieve a tight 1 − 1/e approximation for maximizing a monotone submodular function subject to a matroid constraint, including as a special case the submodular welfare problem. The framework has a continuous optimization step (solving the multilinear extension of a submodular function) and a rounding part (rounding a fractional solution to an integral one). We extend both parts to provide a framework for a wider array of applications. The continuous part works for a more general class of continuous functions parameterized by a new smoothness parameter σ. A twice-differentiable function F is called σ-one-sided-smooth (σ-OSS) if its second derivatives are bounded as follows: (1/2) uᵀ∇²F(x)u ≤ σ · (‖u‖₁/‖x‖₁) · uᵀ∇F(x) for all u, x ≥ 0, x ≠ 0. For σ = 0 this includes previously studied continuous DR-submodular functions as well as quadratics defined by copositive matrices. We give a modification of the continuous greedy algorithm which finds a solution for maximizing a monotone σ-OSS function F over a polytope in the non-negative orthant; the solution approximates the optimum to within factors which are functions of σ and depend on additional properties. Interestingly, σ-OSS functions arise as the multilinear extensions of set functions associated with several well-studied diversity maximization problems: max { f(S) = ∑_{i,j∈S} A_ij : |S| ≤ k }. For instance, when A_ij defines a σ-semi-metric, its extension is σ-OSS. In these settings, we also develop rounding schemes to approximate the discrete problem.
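The one-sided-smoothness condition can be probed numerically. Below is a minimal sketch, assuming the flattened inequality reads (1/2)uᵀ∇²F(x)u ≤ σ·(‖u‖₁/‖x‖₁)·uᵀ∇F(x), applied to the toy quadratic F(x) = (∑ᵢ xᵢ)² (for which the smallest valid σ works out to 1/2); the function names and the sampled grid are illustrative, not from the paper.

```python
# Numeric sketch of the sigma-OSS condition for F(x) = (sum x)^2, whose
# gradient is 2*(sum x)*1 and whose Hessian is 2*J (J = all-ones matrix).
# On the nonnegative orthant, ||v||_1 == sum(v), which we use below.
import itertools

def grad(x):
    s = sum(x)
    return [2.0 * s for _ in x]  # ∇F(x) = 2*(sum x) in every coordinate

def hess_form(u):
    # u^T (2*J) u = 2*(sum u)^2
    return 2.0 * sum(u) ** 2

def min_sigma(points):
    # smallest sigma making (1/2)u^T H u <= sigma*(||u||_1/||x||_1)*u^T g(x)
    # hold over the sampled (u, x) pairs
    best = 0.0
    for u, x in points:
        lhs = 0.5 * hess_form(u)
        gux = sum(ui * gi for ui, gi in zip(u, grad(x)))
        rhs_at_sigma_one = (sum(u) / sum(x)) * gux
        if rhs_at_sigma_one > 0:
            best = max(best, lhs / rhs_at_sigma_one)
    return best

grid = [(u, x) for u in itertools.product([0.5, 1.0, 2.0], repeat=2)
               for x in itertools.product([0.5, 1.0, 2.0], repeat=2)]
print(round(min_sigma(grid), 6))  # -> 0.5
```

For this F the ratio is exactly 1/2 at every sampled pair, matching the claim that such quadratics are σ-OSS with a small constant rather than DR-submodular (σ = 0).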
</summary>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Dean, School of Engineering</title>
<link href="https://hdl.handle.net/1721.1/164101" rel="alternate"/>
<author>
<name>Gallagher, Mary Beth</name>
</author>
<id>https://hdl.handle.net/1721.1/164101</id>
<updated>2025-12-02T03:09:52Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Dean, School of Engineering
Gallagher, Mary Beth
This report contains the following sections: Administrative Initiatives, Personnel Information, Educational Activities, Strategic Initiatives, Entrepreneurship, Leadership, and Innovation Activities.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Washington Office</title>
<link href="https://hdl.handle.net/1721.1/164100" rel="alternate"/>
<author>
<name>Zuber, Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/164100</id>
<updated>2025-12-02T03:09:51Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Washington Office
Zuber, Maria
This report contains the following sections: Personnel, Communications, Federal advocacy, Priority areas, MIT in DC, and Student engagement and mentorship.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Department of Brain and Cognitive Sciences</title>
<link href="https://hdl.handle.net/1721.1/164099" rel="alternate"/>
<author>
<name>Fee, Michale</name>
</author>
<id>https://hdl.handle.net/1721.1/164099</id>
<updated>2025-12-02T03:09:47Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Department of Brain and Cognitive Sciences
Fee, Michale
This report contains the following sections: Introduction: Our Mission and Approach, The Building 46 Community, Strategic Planning, Leadership, Faculty, Research Centers, Academics, Finances and Funding, and Research Highlights.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shotgun Metagenomics of Gastric Biopsies Reveals Compositional and Functional Microbiome Shifts in High- and Low-Gastric-Cancer-Risk Populations from Colombia, South America</title>
<link href="https://hdl.handle.net/1721.1/164098" rel="alternate"/>
<author>
<name>Mannion, Anthony</name>
</author>
<author>
<name>Sheh, Alexander</name>
</author>
<author>
<name>Shen, Zeli</name>
</author>
<author>
<name>Dzink-Fox, JoAnn</name>
</author>
<author>
<name>Piazuelo, M Blanca</name>
</author>
<author>
<name>Wilson, Keith T</name>
</author>
<author>
<name>Peek, Richard</name>
</author>
<author>
<name>Fox, James G</name>
</author>
<id>https://hdl.handle.net/1721.1/164098</id>
<updated>2026-03-08T03:32:01Z</updated>
<published>2023-02-27T00:00:00Z</published>
<summary type="text">Shotgun Metagenomics of Gastric Biopsies Reveals Compositional and Functional Microbiome Shifts in High- and Low-Gastric-Cancer-Risk Populations from Colombia, South America
Mannion, Anthony; Sheh, Alexander; Shen, Zeli; Dzink-Fox, JoAnn; Piazuelo, M Blanca; Wilson, Keith T; Peek, Richard; Fox, James G
Along with Helicobacter pylori infection, the gastric microbiota is hypothesized to modulate stomach cancer risk in susceptible individuals. Whole metagenomic shotgun sequencing (WMS) is a sequencing approach to characterize the microbiome with advantages over traditional culture and 16S rRNA sequencing, including identification of bacterial and non-bacterial taxa, species/strain resolution, and functional characterization of the microbiota. In this study, we used WMS to survey the microbiome in extracted DNA from antral gastric biopsy samples from Colombian patients residing in the high-risk gastric cancer town Túquerres (n = 10, H. pylori-positive = 7) and low-risk town of Tumaco (n = 10, H. pylori-positive = 6). Kraken2/Bracken was used for taxonomic classification and abundance. Functional gene profiles were inferred by InterProScan and KEGG analysis of assembled contigs and gene annotation. The most abundant taxa represented bacteria, non-human eukaryota, and viral genera found in skin, oral, food, and plant/soil environments, including Staphylococcus, Streptococcus, Bacillus, Aspergillus, and Siphoviridae. H. pylori was the predominant taxon present in H. pylori-positive samples. Beta diversity was significantly different based on H. pylori status, risk group, and sex. WMS detected more bacterial taxa than 16S rRNA sequencing and aerobic, anaerobic, and microaerobic culture performed on the same gastric biopsy samples. WMS identified significant differences in functional profiles between H. pylori status groups, but not risk or sex groups. H. pylori-positive samples were significantly enriched for H. pylori-specific genes, including virulence factors such as vacA, cagA, and urease, while carbohydrate and amino acid metabolism genes were enriched in H. pylori-negative samples. This study shows WMS has the potential to characterize the taxonomy and function of the gastric microbiome as risk factors for H. pylori-associated gastric disease.
Future studies will be needed to compare and validate WMS versus traditional culture and 16S rRNA sequencing approaches for characterization of the gastric microbiome.
</summary>
<dc:date>2023-02-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resonance Scattering Treatment with the Windowed Multipole Formalism</title>
<link href="https://hdl.handle.net/1721.1/164097" rel="alternate"/>
<author>
<name>Ridley, Gavin</name>
</author>
<author>
<name>Forget, Benoit</name>
</author>
<author>
<name>Burke, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/164097</id>
<updated>2026-03-08T03:32:00Z</updated>
<published>2024-03-03T00:00:00Z</published>
<summary type="text">Resonance Scattering Treatment with the Windowed Multipole Formalism
Ridley, Gavin; Forget, Benoit; Burke, Timothy
A new method for directly sampling the resonance upscattering effect is presented. Alternatives have relied on inefficient rejection sampling techniques or large tabular storage of relative velocities. None of these approaches, which require pointwise energy data, are particularly well suited to the windowed multipole cross-section representation. The new method, called multipole analytic resonance scattering, overcomes these limitations by inverse transform sampling from the target relative velocity distribution where the cross section is expressed in the multipole formalism. The closed-form relative speed distribution contains a novel special function, which we term the incomplete Faddeeva function, and we present the first results on its efficient numerical evaluation.
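The sampling technique the abstract builds on, inverse transform sampling, can be sketched on a simple distribution. The example below uses an exponential distribution with a closed-form CDF inverse; it is NOT the multipole relative-velocity distribution, whose CDF involves the incomplete Faddeeva function and is beyond this sketch.

```python
# Generic inverse-transform sampling: draw u ~ Uniform(0,1), then return
# F^{-1}(u), where F is the target CDF. Here F(v) = 1 - exp(-rate*v).
import math
import random

def sample_exponential(rate, rng):
    u = rng.random()
    return -math.log(1.0 - u) / rate  # solves F(v) = u for v

rng = random.Random(42)  # fixed seed for reproducibility
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to the true mean 1/rate = 0.5
```

The same recipe applies whenever the target CDF can be inverted, analytically or numerically; the paper's contribution is making that inversion tractable for the multipole-form cross section.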
</summary>
<dc:date>2024-03-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing the Structure-Based Turbulence Model Performance for Thermal Striping Applications Using Symmetric Jet Experiments</title>
<link href="https://hdl.handle.net/1721.1/164096" rel="alternate"/>
<author>
<name>Pham, Monica</name>
</author>
<author>
<name>Petrov, Victor</name>
</author>
<author>
<name>Manera, Annalisa</name>
</author>
<author>
<name>Baglietto, Emilio</name>
</author>
<id>https://hdl.handle.net/1721.1/164096</id>
<updated>2026-03-08T03:31:59Z</updated>
<published>2024-07-02T00:00:00Z</published>
<summary type="text">Assessing the Structure-Based Turbulence Model Performance for Thermal Striping Applications Using Symmetric Jet Experiments
Pham, Monica; Petrov, Victor; Manera, Annalisa; Baglietto, Emilio
Turbulent mixing of coolant streams can result in an oscillatory mixing phenomenon called thermal striping. These fluctuations have the potential to lead to anticipated thermal fatigue failures in advanced nuclear reactors. To predict thermal striping, robust and computationally affordable modeling tools that are capable of accurately representing complex turbulence are needed. Hybrid turbulence approaches, such as detached-eddy simulation and scale-adaptive simulation, have shown some success in resolving complex unsteady turbulence for massively separated flows; however, the applicability of these models to internal flows is limited. A STRUCTure-based (STRUCT) second-generation Unsteady Reynolds-Averaged Navier–Stokes turbulence model was recently proposed at the Massachusetts Institute of Technology to robustly extend the applicability of hybrid closures. In this work, the STRUCT model is evaluated using experimental data taken at the Reactor Cavity Cooling System separate-effects test facility at the University of Michigan. The experiments observed the interaction of parallel symmetric rectangular jets and include measurements of mean profiles of velocity and Reynolds stresses. In the present work, the simulation results are assessed against mean profiles of velocity and Reynolds stresses, demonstrating the ability to reproduce the unsteadiness of the jets in close agreement with the measurements at considerably reduced computational cost.
</summary>
<dc:date>2024-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep-learning models for forecasting financial risk premia and their interpretations</title>
<link href="https://hdl.handle.net/1721.1/164095" rel="alternate"/>
<author>
<name>Lo, Andrew W</name>
</author>
<author>
<name>Singh, Manish</name>
</author>
<id>https://hdl.handle.net/1721.1/164095</id>
<updated>2026-03-08T03:32:02Z</updated>
<published>2023-05-12T00:00:00Z</published>
<summary type="text">Deep-learning models for forecasting financial risk premia and their interpretations
Lo, Andrew W; Singh, Manish
The measurement of financial risk premia, the amount that a risky asset will outperform a risk-free one, is an important problem in asset pricing. The noisiness and non-stationarity of asset returns make the estimation of risk premia using machine learning (ML) techniques challenging. In this work, we develop ML models that solve the problems associated with risk premia forecasting by separating risk premia prediction into two independent tasks, a time series model and a cross-sectional model, and by using neural networks with skip connections to enable effective training of deep networks. These models are evaluated robustly across several metrics, and we observe that they outperform several existing standard ML models. A known issue with ML models is their ‘black box’ nature, i.e. their opaqueness to interpretability. We interpret these deep neural networks using local approximation-based techniques that provide explanations for our model's predictions.
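The skip connections mentioned above are a small architectural idea: a block outputs f(x) + x, so gradients can flow through the identity path even when the learned transform f is poorly conditioned. A minimal forward-pass sketch follows; the layer sizes, weights, and function names are illustrative, not the authors' model.

```python
# Pure-Python forward pass of a residual (skip-connection) block:
# output = dense_relu(x) + x, the identity path added back onto the layer output.
def dense_relu(x, w, b):
    # one fully connected layer followed by ReLU
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def residual_block(x, w, b):
    fx = dense_relu(x, w, b)
    return [fi + xi for fi, xi in zip(fx, x)]  # skip connection

x = [1.0, -2.0]
w = [[0.5, 0.0], [0.0, 0.5]]  # illustrative weights
b = [0.0, 0.0]
print(residual_block(x, w, b))  # -> [1.5, -2.0]
```

Stacking many such blocks is what makes deep networks trainable in practice, since the summed identity paths keep gradients from vanishing.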
</summary>
<dc:date>2023-05-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Distinguishes Plant Bioelectric Recordings with and Without Nearby Human Movement</title>
<link href="https://hdl.handle.net/1721.1/164094" rel="alternate"/>
<author>
<name>Gloor, Peter A.</name>
</author>
<author>
<name>Weinbeer, Moritz</name>
</author>
<id>https://hdl.handle.net/1721.1/164094</id>
<updated>2026-03-08T03:31:52Z</updated>
<published>2025-11-15T00:00:00Z</published>
<summary type="text">Machine Learning Distinguishes Plant Bioelectric Recordings with and Without Nearby Human Movement
Gloor, Peter A.; Weinbeer, Moritz
Background: Quantitatively detecting whether plants exhibit measurable bioelectric differences in the presence of nearby human movement remains challenging, in part because plant signals are low-amplitude, slow, and easily confounded by environmental factors. Methods: We recorded bioelectric activity from 2978 plant samples across three species (basil, salad, tomato) using differential electrode pairs (leaf and soil electrodes) sampling at 142 Hz. Two trained performers executed three specific eurythmic gestures near experimental plants while control plants remained isolated. Random Forest and Convolutional Neural Network classifiers were applied to distinguish control from treatment conditions using engineered features including spectral, temporal, wavelet, and frequency domain characteristics. Results: Random Forest classification achieved 62.7% accuracy (AUC = 0.67) in distinguishing recordings collected near a moving human from control recordings, representing a statistically significant 12.7 percentage point improvement over chance. Individual performer signatures were detectable with 68.2% accuracy, while plant species classification achieved only 44.5% accuracy, indicating minimal species-specific artifacts. Temporal analysis revealed that plants with repeated exposure exhibited consistently less negative bioelectric amplitudes compared to single-exposure plants. Innovation: We introduce a data-driven approach that pairs standardized, short-window bioelectric recordings with machine-learning classifiers (Random Forest, CNN) to test, in an exploratory manner, whether plant signals differ between human-moving-nearby and isolation conditions. Conclusions: Plants exhibit modest but statistically detectable bioelectric differences in the presence of nearby human movement.
Rather than attributing these differences to eurythmic movement itself, the present design can only demonstrate that plant recordings collected within ~1 m of a moving human differ, modestly but statistically, from recordings taken ≥3 m away. The underlying biophysical pathways and specific contributing factors (airflow, VOCs, thermal plumes, vibration, electromagnetic fields) remain unknown. These results should therefore be interpreted as exploratory correlations, not mechanistic evidence of gesture-specific plant sensing.
</summary>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Regional Surface CO2 Fluxes Using the MEGA Satellite Data Assimilation System</title>
<link href="https://hdl.handle.net/1721.1/164093" rel="alternate"/>
<author>
<name>Hu, Liting</name>
</author>
<author>
<name>Hu, Xiaoyi</name>
</author>
<author>
<name>Jiang, Fei</name>
</author>
<author>
<name>He, Wei</name>
</author>
<author>
<name>Deng, Zhu</name>
</author>
<author>
<name>Fang, Shuangxi</name>
</author>
<author>
<name>Fang, Xuekun</name>
</author>
<id>https://hdl.handle.net/1721.1/164093</id>
<updated>2026-03-08T03:32:07Z</updated>
<published>2025-11-13T00:00:00Z</published>
<summary type="text">Analysis of Regional Surface CO2 Fluxes Using the MEGA Satellite Data Assimilation System
Hu, Liting; Hu, Xiaoyi; Jiang, Fei; He, Wei; Deng, Zhu; Fang, Shuangxi; Fang, Xuekun
Understanding the dynamics of terrestrial carbon sources and sinks is crucial for addressing climate change, yet significant uncertainties remain at regional scales. We developed the Monitoring and Evaluation of Greenhouse gAs Flux (MEGA) inversion system with satellite data assimilation and applied it to China using OCO-2 V11.1r XCO2 retrievals. Our results show that China’s terrestrial ecosystems acted as a carbon sink of 0.28 ± 0.15 PgC yr−1 during 2018–2023, consistent with other inversion estimates. Validation against surface CO2 flask measurements demonstrated significant improvement, with RMSE and MAE reduced by 30%–46% and 24%–44%, respectively. Six sets of prior sensitivity experiments demonstrated the robustness of MEGA. In addition, this study is the first to systematically compare model-derived and observation-based background fields in satellite data assimilation. Ten sets of background sensitivity experiments revealed that model-based background fields exhibit superior capability in resolving seasonal flux dynamics, though their performance remains contingent on three key factors: (1) initial fields, (2) flux fields, and (3) flux masks (used to control regional flux switches). These findings highlight the potential for further refinement of the atmospheric inversion system.
</summary>
<dc:date>2025-11-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>ZeoSyn: A Comprehensive Zeolite Synthesis Dataset Enabling Machine-Learning Rationalization of Hydrothermal Parameters</title>
<link href="https://hdl.handle.net/1721.1/164092" rel="alternate"/>
<author>
<name>Pan, Elton</name>
</author>
<author>
<name>Kwon, Soonhyoung</name>
</author>
<author>
<name>Jensen, Zach</name>
</author>
<author>
<name>Xie, Mingrou</name>
</author>
<author>
<name>Gómez-Bombarelli, Rafael</name>
</author>
<author>
<name>Moliner, Manuel</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Olivetti, Elsa</name>
</author>
<id>https://hdl.handle.net/1721.1/164092</id>
<updated>2025-11-27T05:20:45Z</updated>
<published>2024-03-06T00:00:00Z</published>
<summary type="text">ZeoSyn: A Comprehensive Zeolite Synthesis Dataset Enabling Machine-Learning Rationalization of Hydrothermal Parameters
Pan, Elton; Kwon, Soonhyoung; Jensen, Zach; Xie, Mingrou; Gómez-Bombarelli, Rafael; Moliner, Manuel; Román-Leshkov, Yuriy; Olivetti, Elsa
Zeolites, nanoporous aluminosilicates with well-defined porous structures, are versatile materials with applications in catalysis, gas separation, and ion exchange. Hydrothermal synthesis is widely used for zeolite production, offering control over composition, crystallinity, and pore size. However, the intricate interplay of synthesis parameters necessitates a comprehensive understanding of synthesis-structure relationships to optimize the synthesis process. Hitherto, public zeolite synthesis databases only contain a subset of parameters and are small in scale, comprising up to a few thousand synthesis routes. We present ZeoSyn, a dataset of 23,961 zeolite hydrothermal synthesis routes, encompassing 233 zeolite topologies and 921 organic structure-directing agents (OSDAs). Each synthesis route comprises comprehensive synthesis parameters: 1) gel composition, 2) reaction conditions, 3) OSDAs, and 4) zeolite products. Using ZeoSyn, we develop a machine learning classifier to predict the resultant zeolite given a synthesis route with &gt;70% accuracy. We employ SHapley Additive exPlanations (SHAP) to uncover key synthesis parameters for &gt;200 zeolite frameworks. We introduce an aggregation approach to extend SHAP to all building units. We demonstrate applications of this approach to phase-selective and intergrowth synthesis. This comprehensive analysis illuminates the synthesis parameters pivotal in driving zeolite crystallization, offering the potential to guide the synthesis of desired zeolites. The dataset is available at https://github.com/eltonpan/zeosyn_dataset.
</summary>
<dc:date>2024-03-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>One-Pot Synthesis of CHA/ERI-Type Zeolite Intergrowth from a Single Multiselective Organic Structure-Directing Agent</title>
<link href="https://hdl.handle.net/1721.1/164091" rel="alternate"/>
<author>
<name>Kwon, Soonhyoung</name>
</author>
<author>
<name>Bello-Jurado, Estefanía</name>
</author>
<author>
<name>Ikonnikova, Evgeniia</name>
</author>
<author>
<name>Lee, Hwajun</name>
</author>
<author>
<name>Schwalbe-Koda, Daniel</name>
</author>
<author>
<name>Corma, Avelino</name>
</author>
<author>
<name>Willhammar, Tom</name>
</author>
<author>
<name>Olivetti, Elsa A</name>
</author>
<author>
<name>Gomez-Bombarelli, Rafael</name>
</author>
<author>
<name>Moliner, Manuel</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<id>https://hdl.handle.net/1721.1/164091</id>
<updated>2025-11-27T05:20:32Z</updated>
<published>2024-03-13T00:00:00Z</published>
<summary type="text">One-Pot Synthesis of CHA/ERI-Type Zeolite Intergrowth from a Single Multiselective Organic Structure-Directing Agent
Kwon, Soonhyoung; Bello-Jurado, Estefanía; Ikonnikova, Evgeniia; Lee, Hwajun; Schwalbe-Koda, Daniel; Corma, Avelino; Willhammar, Tom; Olivetti, Elsa A; Gomez-Bombarelli, Rafael; Moliner, Manuel; Román-Leshkov, Yuriy
We report the one-pot synthesis of a chabazite (CHA)/erionite (ERI)-type zeolite intergrowth structure characterized by adjustable extents of intergrowth enrichment and Si/Al molar ratios. This method utilizes readily synthesizable 6-azaspiro[5.6]dodecan-6-ium as the exclusive organic structure-directing agent (OSDA) within a potassium-dominant environment. High-throughput simulations were used to accurately determine the templating energy and molecular shape, facilitating the selection of an optimally biselective OSDA from among thousands of prospective candidates. The coexistence of the crystal phases, forming a distinct structure comprising disk-like CHA regions bridged by ERI-rich pillars, was corroborated via rigorous powder X-ray diffraction and integrated differential-phase contrast scanning transmission electron microscopy (iDPC S/TEM) analyses. iDPC S/TEM imaging further revealed the presence of single offretite layers dispersed within the ERI phase. The ratio of crystal phases between CHA and ERI in this type of intergrowth could be varied systematically by changing both the OSDA/Si and K/Si ratios. Two intergrown zeolite samples with different Si/Al molar ratios were tested for the selective catalytic reduction (SCR) of NO&lt;sub&gt;&lt;i&gt;x&lt;/i&gt;&lt;/sub&gt; with NH&lt;sub&gt;3&lt;/sub&gt;, showing competitive catalytic performance and hydrothermal stability compared to that of the industry-standard commercial NH&lt;sub&gt;3&lt;/sub&gt;-SCR catalyst, Cu-SSZ-13, prevalent in automotive applications. Collectively, this work underscores the potential of our approach for the synthesis and optimization of adjustable intergrown zeolite structures, offering competitive alternatives for key industrial processes.
</summary>
<dc:date>2024-03-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recommendations for improving rigor and reproducibility in site specific characterization</title>
<link href="https://hdl.handle.net/1721.1/164090" rel="alternate"/>
<author>
<name>Wrasman, Cody J</name>
</author>
<author>
<name>Bell, Alexis T</name>
</author>
<author>
<name>Chandler, Bert D</name>
</author>
<author>
<name>Harris, James W</name>
</author>
<author>
<name>Kwon, Stephanie</name>
</author>
<author>
<name>Ball, Madelyn R</name>
</author>
<author>
<name>Krishna, Siddarth H</name>
</author>
<author>
<name>Khatib, Sheima J</name>
</author>
<author>
<name>Bollini, Praveen</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>“Bean” Getsoian, Andrew</name>
</author>
<author>
<name>Weber, Robert S</name>
</author>
<author>
<name>Lercher, Johannes A</name>
</author>
<author>
<name>Liu, Dongxia</name>
</author>
<author>
<name>Resasco, Daniel E</name>
</author>
<author>
<name>Bates, Jason S</name>
</author>
<author>
<name>Hall, Jacklyn N</name>
</author>
<author>
<name>Lebrón-Rodríguez, Edgard A</name>
</author>
<author>
<name>Paz Herrera, Laura</name>
</author>
<author>
<name>Notestein, Justin M</name>
</author>
<author>
<name>Schaidle, Joshua A</name>
</author>
<id>https://hdl.handle.net/1721.1/164090</id>
<updated>2025-11-27T05:20:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Recommendations for improving rigor and reproducibility in site specific characterization
Wrasman, Cody J; Bell, Alexis T; Chandler, Bert D; Harris, James W; Kwon, Stephanie; Ball, Madelyn R; Krishna, Siddarth H; Khatib, Sheima J; Bollini, Praveen; Román-Leshkov, Yuriy; “Bean” Getsoian, Andrew; Weber, Robert S; Lercher, Johannes A; Liu, Dongxia; Resasco, Daniel E; Bates, Jason S; Hall, Jacklyn N; Lebrón-Rodríguez, Edgard A; Paz Herrera, Laura; Notestein, Justin M; Schaidle, Joshua A
Heterogeneous catalysis is driven by the interaction of reactant molecules and the catalyst surface. The locus of this interaction as well as the surrounding ensemble of atoms is referred to as the catalyst active site. Active site characterization attempts to distinguish active catalytic sites from inactive surface sites, to elucidate the structural and chemical nature of active sites, and to quantify active site concentration. Numerous techniques have been demonstrated to provide compositional and structural information about the active sites within a catalyst. However, each technique has its own limitations and experimental pitfalls that can lead to data misinterpretation or irreproducible results. This work aims to provide an overview of the types of data that can be collected, to outline common experimental challenges and how to avoid them, and to assemble relevant references for the most used active site characterization techniques. More broadly, we aim to outline best practices for researchers to collect, interpret, and report active site characterization data in a way that provides the most benefit to the broader catalysis community. Increasing the rigor and reproducibility of active site characterization offers a strategy to better link properties with catalytic performance and to enable the community to develop consensus concerning these relationships.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Validation of a High-Throughput Reductive Catalytic Fractionation Method</title>
<link href="https://hdl.handle.net/1721.1/164089" rel="alternate"/>
<author>
<name>Kenny, Jacob K</name>
</author>
<author>
<name>Neefe, Sasha R</name>
</author>
<author>
<name>Brandner, David G</name>
</author>
<author>
<name>Stone, Michael L</name>
</author>
<author>
<name>Happs, Renee M</name>
</author>
<author>
<name>Kumaniaev, Ivan</name>
</author>
<author>
<name>Mounfield, William P</name>
</author>
<author>
<name>Harman-Ware, Anne E</name>
</author>
<author>
<name>Devos, Katrien M</name>
</author>
<author>
<name>Pendergast, Thomas H</name>
</author>
<author>
<name>Medlin, J Will</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<id>https://hdl.handle.net/1721.1/164089</id>
<updated>2025-11-27T05:20:36Z</updated>
<published>2024-06-05T00:00:00Z</published>
<summary type="text">Design and Validation of a High-Throughput Reductive Catalytic Fractionation Method
Kenny, Jacob K; Neefe, Sasha R; Brandner, David G; Stone, Michael L; Happs, Renee M; Kumaniaev, Ivan; Mounfield, William P; Harman-Ware, Anne E; Devos, Katrien M; Pendergast, Thomas H; Medlin, J Will; Román-Leshkov, Yuriy; Beckham, Gregg T
Reductive catalytic fractionation (RCF) is a promising method to extract and depolymerize lignin from biomass, and bench-scale studies have enabled considerable progress in the past decade. RCF experiments are typically conducted in pressurized batch reactors with volumes ranging between 50 and 1000 mL, limiting the throughput of these experiments to one to six reactions per day for an individual researcher. Here, we report a high-throughput RCF (HTP-RCF) method in which batch RCF reactions are conducted in 1 mL wells machined directly into Hastelloy reactor plates. The plate reactors can seal high pressures produced by organic solvents by vertically stacking multiple reactor plates, leading to a compact and modular system capable of performing 240 reactions per experiment. Using this setup, we screened solvent mixtures and catalyst loadings for hydrogen-free RCF using 50 mg poplar and 0.5 mL reaction solvent. The system of 1:1 isopropanol/methanol showed optimal monomer yields and selectivity to 4-propyl substituted monomers, and validation reactions using 75 mL batch reactors produced identical monomer yields. To accommodate the low material loadings, we then developed a workup procedure for parallel filtration, washing, and drying of samples and a &lt;sup&gt;1&lt;/sup&gt;H nuclear magnetic resonance spectroscopy method to measure the RCF oil yield without performing liquid-liquid extraction. As a demonstration of this experimental pipeline, 50 unique switchgrass samples were screened in RCF reactions in the HTP-RCF system, revealing a wide range of monomer yields (21-36%), S/G ratios (0.41-0.93), and oil yields (40-75%). These results were successfully validated by repeating RCF reactions in 75 mL batch reactors for a subset of samples. We anticipate that this approach can be used to rapidly screen substrates, catalysts, and reaction conditions in high-pressure batch reactions with higher throughput than standard batch reactors.
</summary>
<dc:date>2024-06-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrifying Hydroformylation Catalysts Exposes Voltage-Driven C–C Bond Formation</title>
<link href="https://hdl.handle.net/1721.1/164088" rel="alternate"/>
<author>
<name>Zeng, Joy S</name>
</author>
<author>
<name>Cosner, Emma L</name>
</author>
<author>
<name>Delgado-Kukuczka, Spencer P</name>
</author>
<author>
<name>Jiang, Chenyu</name>
</author>
<author>
<name>Adams, Jason S</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Manthiram, Karthish</name>
</author>
<id>https://hdl.handle.net/1721.1/164088</id>
<updated>2025-11-27T05:20:26Z</updated>
<published>2024-06-19T00:00:00Z</published>
<summary type="text">Electrifying Hydroformylation Catalysts Exposes Voltage-Driven C–C Bond Formation
Zeng, Joy S; Cosner, Emma L; Delgado-Kukuczka, Spencer P; Jiang, Chenyu; Adams, Jason S; Román-Leshkov, Yuriy; Manthiram, Karthish
Electrochemical reactions can access a significant range of driving forces under operationally mild conditions and are thus envisioned to play a key role in decarbonizing chemical manufacturing. However, many reactions with well-established thermochemical precedents remain difficult to achieve electrochemically. For example, hydroformylation (thermo-HFN) is an industrially important reaction that couples olefins and carbon monoxide (CO) to make aldehydes. However, the electrochemical analogue of hydroformylation (electro-HFN), which uses protons and electrons instead of hydrogen gas, represents a complex C-C bond-forming reaction that is difficult to achieve at heterogeneous electrocatalysts. In this work, we import Rh-based thermo-HFN catalysts onto electrode surfaces to unlock electro-HFN reactivity. At mild conditions of room temperature and 5 bar CO, we achieve Faradaic efficiencies of up to 15% and turnover frequencies of up to 0.7 h&lt;sup&gt;-1&lt;/sup&gt;. This electro-HFN rate is an order of magnitude greater than the corresponding thermo-HFN rate at the same catalyst, temperature, and pressure. Reaction kinetics and &lt;i&gt;operando&lt;/i&gt; X-ray absorption spectroscopy provide evidence for an electro-HFN mechanism that involves distinct elementary steps relative to thermo-HFN. This work demonstrates a step-by-step experimental strategy for electrifying a well-studied thermochemical reaction to unveil a new electrocatalyst for a complex and underexplored electrochemical reaction.
</summary>
<dc:date>2024-06-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Lignin Valorization Through Integrated Advances in Plant Biology and Biorefining</title>
<link href="https://hdl.handle.net/1721.1/164087" rel="alternate"/>
<author>
<name>Dixon, Richard A</name>
</author>
<author>
<name>Puente-Urbina, Allen</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<id>https://hdl.handle.net/1721.1/164087</id>
<updated>2025-11-27T05:20:40Z</updated>
<published>2024-07-22T00:00:00Z</published>
<summary type="text">Enabling Lignin Valorization Through Integrated Advances in Plant Biology and Biorefining
Dixon, Richard A; Puente-Urbina, Allen; Beckham, Gregg T; Román-Leshkov, Yuriy
Despite lignin having long been viewed as an impediment to the processing of biomass for the production of paper, biofuels, and high-value chemicals, the valorization of lignin to fuels, chemicals, and materials is now clearly recognized as a critical element for the lignocellulosic bioeconomy. However, the intended application for lignin will likely require a preferred lignin composition and form. To that end, effective lignin valorization will require the integration of plant biology, providing optimal feedstocks, with chemical process engineering, providing efficient lignin transformations. Recent advances in our understanding of lignin biosynthesis have shown that lignin structure is extremely diverse and potentially tunable, while simultaneous developments in lignin refining have resulted in the development of several processes that are more agnostic to lignin composition. Here, we review the interface between in planta lignin design and lignin processing and discuss the advances necessary for lignin valorization to become a feature of advanced biorefining.
</summary>
<dc:date>2024-07-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reducing Solvent Consumption in Reductive Catalytic Fractionation through Lignin Oil Recycling</title>
<link href="https://hdl.handle.net/1721.1/164086" rel="alternate"/>
<author>
<name>Jang, Jun Hee</name>
</author>
<author>
<name>Callejón Álvarez, Júlia</name>
</author>
<author>
<name>Neuendorf, Quinn S</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<id>https://hdl.handle.net/1721.1/164086</id>
<updated>2025-11-27T05:20:28Z</updated>
<published>2024-08-14T00:00:00Z</published>
<summary type="text">Reducing Solvent Consumption in Reductive Catalytic Fractionation through Lignin Oil Recycling
Jang, Jun Hee; Callejón Álvarez, Júlia; Neuendorf, Quinn S; Román-Leshkov, Yuriy; Beckham, Gregg T
Reductive catalytic fractionation (RCF) enables the simultaneous valorization of lignin and carbohydrates in lignocellulosic biomass through solvent-based lignin extraction, followed by depolymerization and catalytic stabilization of the extracted lignin. Process modeling has shown that the use of exogenous organic solvent in RCF is a challenge for economic and environmental feasibility, and previous works proposed that lignin oil, a mixture of lignin-derived monomers and oligomers produced by RCF, can be used as a cosolvent in RCF. Here, we further explore the potential of RCF solvent recycling with lignin oil, extending the feasible lignin oil concentration in the solvent to 100 wt %, relative to the previously demonstrated 0-19 wt % range. Solvents containing up to 80 wt % lignin oil exhibited 83-93% delignification, comparable to 83% delignification with a methanol-water mixture, and notably, using lignin oil solely as a solvent achieved 67% delignification in the absence of water. In additional experiments, applying the RCF solvent recycling approach to ten consecutive RCF reactions resulted in a final lignin oil concentration of 11 wt %, without detrimental impacts on lignin extraction, lignin oil molar mass distribution, aromatic monomer selectivity, and cellulose retention. Overall, this work further demonstrates the potential for using lignin oil as an effective cosolvent in RCF, which can reduce the burden on downstream solvent recovery.
</summary>
<dc:date>2024-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Career in Catalysis: Mark E. Davis</title>
<link href="https://hdl.handle.net/1721.1/164085" rel="alternate"/>
<author>
<name>Arhancet, Juan P</name>
</author>
<author>
<name>Chen, Cong-Yan</name>
</author>
<author>
<name>Cybulskis, Viktor J</name>
</author>
<author>
<name>Gounder, Rajamani</name>
</author>
<author>
<name>Hong, Suk Bong</name>
</author>
<author>
<name>Jones, Christopher W</name>
</author>
<author>
<name>Kang, Jong Hun</name>
</author>
<author>
<name>Kubota, Yoshihiro</name>
</author>
<author>
<name>Lee, Hyunjoo</name>
</author>
<author>
<name>Orazov, Marat</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Schmidt, Joel E</name>
</author>
<id>https://hdl.handle.net/1721.1/164085</id>
<updated>2025-11-27T05:20:39Z</updated>
<published>2024-08-23T00:00:00Z</published>
<summary type="text">A Career in Catalysis: Mark E. Davis
Arhancet, Juan P; Chen, Cong-Yan; Cybulskis, Viktor J; Gounder, Rajamani; Hong, Suk Bong; Jones, Christopher W; Kang, Jong Hun; Kubota, Yoshihiro; Lee, Hyunjoo; Orazov, Marat; Román-Leshkov, Yuriy; Schmidt, Joel E
Mark E. Davis led an independent research program from 1981 to 2023, beginning at the Virginia Polytechnic Institute and State University (VPI) and then transitioning to the California Institute of Technology (Caltech). His research program was marked by exceptional creativity, breadth, and depth. With classical training in reaction engineering, Davis developed expertise in experimental heterogeneous catalysis and led work in this discipline for more than 40 years. His name is synonymous with zeolites, and today, he is one of the most widely recognized experts in zeolite synthesis, characterization, and catalysis in the world. Early work at the VPI focused on zeolites and catalysis with supported metal coordination complexes. His creativity was evident at the earliest stages of his career, with the development of supported aqueous phase catalysts and the world’s first crystalline, extra-large pore molecular sieve, both reported in the late 1980s. A move to Caltech saw a significant expansion of his zeolite synthesis program and the rapid acceleration of a multidecade collaboration with Dr. Stacey I. Zones of Chevron. At Caltech, his work expanded to include studies of molecular recognition and catalysis with organic/inorganic hybrid materials, and he developed a large, parallel program in drug delivery. His work on catalysis heavily emphasized zeolite catalysis, including major thrusts on the conversion of sugars in the liquid phase and methanol in the gas phase. Numerous new zeolites and molecular sieves were discovered throughout the four decades of the Davis laboratory, highlighted by a successful, multidecade quest to prepare a chiral zeolite with enantioselective catalytic properties. Davis is one of the most decorated researchers of the last four decades. He is one of only 21 living people currently elected to all of the US National Academies (Engineering, Science, Medicine) and elected as a Fellow of the National Academy of Inventors. 
He was the first engineer to win the NSF’s Alan T. Waterman Award and is one of only two researchers (to date) to win the International Zeolite Association’s Donald Breck Award twice (1989, 2019). Awards from the ACS (Ipatieff, Murphree, and Somorjai Awards), AIChE (Colburn, Professional Progress Awards), and North American Catalysis Society (Emmett Award) are among his accolades.
</summary>
<dc:date>2024-08-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plant Bioelectrical Signals for Environmental and Emotional State Classification</title>
<link href="https://hdl.handle.net/1721.1/164084" rel="alternate"/>
<author>
<name>Gloor, Peter A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164084</id>
<updated>2025-11-27T05:20:23Z</updated>
<published>2025-11-05T00:00:00Z</published>
<summary type="text">Plant Bioelectrical Signals for Environmental and Emotional State Classification
Gloor, Peter A.
In this study, we present a pilot investigation using a single Purple Heart plant (Tradescantia pallida) to explore whether bioelectrical signals can be used for dual-purpose classification tasks: environmental state detection and human emotion recognition. Using an AD8232 ECG sensor at 400 Hz sampling rate, we recorded 3 s bioelectrical signal segments with 1 s overlap, converting them to mel-spectrograms for ResNet18 CNN (Convolutional Neural Network) classification. For lamp on/off detection, we achieved 85.4% accuracy with balanced precision (0.85–0.86) and recall (0.84–0.86) metrics across 2767 spectrogram samples. For human emotion classification, our system achieved optimal performance at 73% accuracy with 1 s lag, distinguishing between happy and sad emotional states across 1619 samples. These results should be viewed as preliminary and exploratory, demonstrating feasibility rather than definitive evidence of plant-based emotion sensing. Replication across plants, days, and experimental sites will be essential to establish robustness. The current study is limited by a single-plant setup, modest sample size, and reliance on human face-tracking labels, which together preclude strong claims about generalizability.
</summary>
<dc:date>2025-11-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Within-Subtype HIV-1 Polymorphisms and Their Impacts on Intact Proviral DNA Assay (IPDA) for Viral Reservoir Quantification</title>
<link href="https://hdl.handle.net/1721.1/164083" rel="alternate"/>
<author>
<name>Arikatla, Mohith Reddy</name>
</author>
<author>
<name>Mathad, Jyoti S.</name>
</author>
<author>
<name>Reddy, Kavidha</name>
</author>
<author>
<name>Reddy, Nicole</name>
</author>
<author>
<name>Ndung’u, Thumbi</name>
</author>
<author>
<name>Dupnik, Kathryn M.</name>
</author>
<author>
<name>Lee, Guinevere Q.</name>
</author>
<id>https://hdl.handle.net/1721.1/164083</id>
<updated>2025-11-27T05:20:24Z</updated>
<published>2025-10-31T00:00:00Z</published>
<summary type="text">Within-Subtype HIV-1 Polymorphisms and Their Impacts on Intact Proviral DNA Assay (IPDA) for Viral Reservoir Quantification
Arikatla, Mohith Reddy; Mathad, Jyoti S.; Reddy, Kavidha; Reddy, Nicole; Ndung’u, Thumbi; Dupnik, Kathryn M.; Lee, Guinevere Q.
The Intact Proviral DNA Assay (IPDA) is widely used to quantify genome-intact HIV proviruses in people living with HIV, but viral sequence diversity has been observed to cause assay failures due to primer/probe mismatches. Adapted for subtype C, IPDA-BC is a modified version of the IPDA validated on South African HIV-1 subtype C. India is also impacted by subtype C, but IPDA performance within-subtype across geographical regions is not well studied. We analyzed Indian (IN) and South African (ZA) subtype C sequences in silico, hypothesizing that IPDA-BC may underperform with IN viruses. Primer/probe binding was predicted using three increasingly stringent nucleotide mismatch criteria, whose sensitivity and specificity were evaluated against experimental IPDA outcomes. Phylogenetic analyses confirmed that IN and ZA subtype C sequences form distinct clusters with significant compartmentalization (p &lt; 0.003). Across criteria, up to 6–10% decreases in primer/probe binding were observed in IN versus ZA, with the env forward primer being the most affected. These criteria showed low sensitivity (18–53%) and variable specificity (67–100%) in predicting experimental outcomes. In conclusion, even within subtype, HIV-1 variation across geographical regions may impact IPDA performance, underscoring the need for improved predictive models to guide assay design for global HIV cure research.
</summary>
<dc:date>2025-10-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation of the Modulating Effects of Sensory Stimulation and Transcranial Magnetic Stimulation on Memory-Related Brain Activity</title>
<link href="https://hdl.handle.net/1721.1/164082" rel="alternate"/>
<author>
<name>Nikolin, Stevan</name>
</author>
<author>
<name>Wang, Matthew</name>
</author>
<author>
<name>Moffa, Adriano</name>
</author>
<author>
<name>Huang, Haijing</name>
</author>
<author>
<name>Xu, Mei</name>
</author>
<author>
<name>Pande, Siddhartha Raj</name>
</author>
<author>
<name>Martin, Donel</name>
</author>
<id>https://hdl.handle.net/1721.1/164082</id>
<updated>2025-11-27T05:20:21Z</updated>
<published>2025-10-31T00:00:00Z</published>
<summary type="text">An Investigation of the Modulating Effects of Sensory Stimulation and Transcranial Magnetic Stimulation on Memory-Related Brain Activity
Nikolin, Stevan; Wang, Matthew; Moffa, Adriano; Huang, Haijing; Xu, Mei; Pande, Siddhartha Raj; Martin, Donel
Background/Objectives: As the global population ages, the prevalence of disorders associated with memory dysfunction (e.g., Alzheimer’s disease) continues to increase. There is a need for novel interventions that can enhance memory and support affected individuals. Non-invasive brain stimulation provides a promising approach to engage circuits within the hippocampal network, a group of brain regions critical for episodic memory, and thereby improve cognition. Methods: Twenty healthy participants completed a single-blind, within-subject crossover study over four sessions. In each session, they received one of four interventions whilst viewing pictures of real-world objects: 40 Hz synchronised audiovisual stimulation (AVS), theta burst stimulation (TBS), a combination of synchronised 5 Hz repetitive transcranial magnetic stimulation with AVS (rTMS + AVS), or sham rTMS. Electroencephalography (EEG) was recorded to measure associated brain activity changes. Following each intervention, participants completed a recognition memory task. Results: Mixed-effect repeated measure models (MRMMs) revealed no significant differences in recognition memory performance or theta (5 Hz) activity across conditions. However, both TBS and rTMS + AVS significantly increased gamma (40 Hz) activity compared to sham rTMS, and TBS induced a widespread increase in theta-gamma phase-amplitude coupling during picture viewing. Conclusions: While the neuromodulatory interventions did not enhance memory performance, the observed increase in gamma activity, particularly following rTMS-based stimulation, suggests potential engagement of neural processes associated with memory. These findings warrant further investigation into the role of gamma oscillations in memory and cognitive enhancement.
</summary>
<dc:date>2025-10-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shaping In-Vehicle Behaviours through Activity-Centered Design</title>
<link href="https://hdl.handle.net/1721.1/164081" rel="alternate"/>
<author>
<name>Patel, Ankit</name>
</author>
<author>
<name>Gershon, Pnina</name>
</author>
<author>
<name>Habibovic, Azra</name>
</author>
<author>
<name>Novakazi, Fjollë</name>
</author>
<author>
<name>Akahoshi, Sakura</name>
</author>
<author>
<name>Alsaid, Areen</name>
</author>
<author>
<name>Cha, Kyungjoo</name>
</author>
<id>https://hdl.handle.net/1721.1/164081</id>
<updated>2025-11-27T05:20:06Z</updated>
<published>2025-10-08T00:00:00Z</published>
<summary type="text">Shaping In-Vehicle Behaviours through Activity-Centered Design
Patel, Ankit; Gershon, Pnina; Habibovic, Azra; Novakazi, Fjollë; Akahoshi, Sakura; Alsaid, Areen; Cha, Kyungjoo
In today’s fast-paced society, most individuals commute either by personal vehicle or public transportation. User preferences and requirements are crucial, and design plays a significant role in meeting them. Design should be both inclusive and assimilative, and its purpose is to propel innovation and progress while also improving the quality of life of the user. For this reason, vehicle development, especially cabin (cockpit) design, has generally focused on the user-centered design approach. When prioritizing user activities, it is interesting to explore how users’ experience and behavior vary under different design approaches. Nevertheless, the existing literature has largely overlooked the impact of design approaches on “human activity”. Therefore, the main objective of the workshop is to examine the relationships between activity-centered design and user behavior.
AutomotiveUI Adjunct ’25, Brisbane, QLD, Australia
</summary>
<dc:date>2025-10-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scale, Engage, or Both?: Potential and Perils of Applying Large Language Models in Interview and Conversation-Based Research</title>
<link href="https://hdl.handle.net/1721.1/164080" rel="alternate"/>
<author>
<name>Hwang, Angel Hsing-Chi</name>
</author>
<author>
<name>Aubin Le Quéré, Marianne</name>
</author>
<author>
<name>Schroeder, Hope</name>
</author>
<author>
<name>Cuevas, Alejandro</name>
</author>
<author>
<name>Dow, Steven</name>
</author>
<author>
<name>Kapania, Shivani</name>
</author>
<author>
<name>Rho, Eugenia</name>
</author>
<id>https://hdl.handle.net/1721.1/164080</id>
<updated>2025-11-27T05:20:01Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Scale, Engage, or Both?: Potential and Perils of Applying Large Language Models in Interview and Conversation-Based Research
Hwang, Angel Hsing-Chi; Aubin Le Quéré, Marianne; Schroeder, Hope; Cuevas, Alejandro; Dow, Steven; Kapania, Shivani; Rho, Eugenia
An increasing number of studies apply tools powered by large language models (LLMs) to interview and conversation-based research, one of the most commonly used research methods in CSCW. This panel invites the CSCW community to critically debate the role of LLMs in reshaping interview-based methods. We aim to explore how these tools might (1) address persistent challenges in conversation-based research, such as limited scalability and participant engagement, (2) introduce novel methodological possibilities, and (3) surface additional practical, technical, and ethical concerns. The panel discussion will be grounded on the panelists’ prior experience applying LLMs to their own interview and conversation-based research. We ask whether LLMs offer unique advantages to enhance interview research, beyond automating certain aspects of the research process. Through this discussion, we encourage researchers to reflect on how applying LLM tools may require rethinking research design, conversational protocols, and ethical practices.
CSCW Companion ’25, Bergen, Norway
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Yeast Display Reveals Plentiful Mutations That Improve Fusion Peptide Vaccine-Elicited Antibodies Beyond 59% HIV-1 Neutralization Breadth</title>
<link href="https://hdl.handle.net/1721.1/164079" rel="alternate"/>
<author>
<name>França, Camila T</name>
</author>
<author>
<name>Pletnev, Sergei</name>
</author>
<author>
<name>Madan, Bharat</name>
</author>
<author>
<name>Katsamba, Phinikoula S</name>
</author>
<author>
<name>McKee, Krisha</name>
</author>
<author>
<name>Morano, Nicholas C</name>
</author>
<author>
<name>Zhang, Baoshan</name>
</author>
<author>
<name>Bahna, Fabiana</name>
</author>
<author>
<name>Bylund, Tatsiana</name>
</author>
<author>
<name>Lin, Bob C</name>
</author>
<author>
<name>Louder, Mark K</name>
</author>
<author>
<name>Mannepalli, Seetha</name>
</author>
<author>
<name>Nimrania, Rajani</name>
</author>
<author>
<name>O’Dell, Sijy</name>
</author>
<author>
<name>Doria-Rose, Nicole A</name>
</author>
<author>
<name>Kwong, Peter D</name>
</author>
<author>
<name>Shapiro, Lawrence</name>
</author>
<author>
<name>Sheng, Zizhang</name>
</author>
<author>
<name>Zhou, Tongqing</name>
</author>
<author>
<name>DeKosky, Brandon J</name>
</author>
<id>https://hdl.handle.net/1721.1/164079</id>
<updated>2025-11-27T05:20:42Z</updated>
<published>2025-10-27T00:00:00Z</published>
<summary type="text">Yeast Display Reveals Plentiful Mutations That Improve Fusion Peptide Vaccine-Elicited Antibodies Beyond 59% HIV-1 Neutralization Breadth
França, Camila T; Pletnev, Sergei; Madan, Bharat; Katsamba, Phinikoula S; McKee, Krisha; Morano, Nicholas C; Zhang, Baoshan; Bahna, Fabiana; Bylund, Tatsiana; Lin, Bob C; Louder, Mark K; Mannepalli, Seetha; Nimrania, Rajani; O’Dell, Sijy; Doria-Rose, Nicole A; Kwong, Peter D; Shapiro, Lawrence; Sheng, Zizhang; Zhou, Tongqing; DeKosky, Brandon J
Background/Objectives: Vaccine elicitation of antibodies with high HIV-1 neutralization breadth is a long-standing goal. Recently, the induction of such antibodies has been achieved at the fusion peptide site of vulnerability. Questions remain, however, as to how much anti-fusion peptide antibodies can be improved and whether their neutralization breadth and potency are sufficient to prevent HIV-1 infection. Methods: Here, we use yeast display coupled with deep mutational screening and biochemical and structural analyses to study the improvement of the best fusion peptide-directed, vaccine-elicited antibody, DFPH_a.01, with an initial 59% breadth. Results: Yeast display identified both single and double mutations that improved recognition of HIV-1 envelope trimers. We characterized two paratope-distal light chain (LC) mutations, S10R and S59P, which together increased breadth to 63%. Biochemical analysis demonstrated DFPH-a.01_10R59P-LC, and its component mutations, to have increased affinity and stability. Cryo-EM structural analysis revealed elbow-angle influencing by S10R-LC and isosteric positioning by S59P-LC as explanations for enhanced breadth, affinity, and stability. Conclusions: These results, along with another antibody with enhanced performance (DFPH-a.01_1G10A56K-LC with 64% breadth), suggest that mutations improving DFPH_a.01 are plentiful, an important vaccine insight.
</summary>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Probabilistic Perspective on Tiling Sparse Tensor Algebra</title>
<link href="https://hdl.handle.net/1721.1/164078" rel="alternate"/>
<author>
<name>Sharma, Ritvik</name>
</author>
<author>
<name>Xue, Zi Yu</name>
</author>
<author>
<name>Zhang, Nathan</name>
</author>
<author>
<name>Lacouture, Rubens</name>
</author>
<author>
<name>Kjolstad, Fredrik</name>
</author>
<author>
<name>Achour, Sara</name>
</author>
<author>
<name>Horowitz, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/164078</id>
<updated>2025-11-27T05:20:09Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">A Probabilistic Perspective on Tiling Sparse Tensor Algebra
Sharma, Ritvik; Xue, Zi Yu; Zhang, Nathan; Lacouture, Rubens; Kjolstad, Fredrik; Achour, Sara; Horowitz, Mark
Sparse tensor algebra computations are often memory-bound due to irregular access patterns and low arithmetic intensity. We present D2T2 (Data-Driven Tensor Tiling), a framework that optimizes static coordinate-space tiling schemes to minimize memory traffic by identifying and leveraging relevant high-level statistics from input operands. For a given tensor algebra computation, D2T2 collects statistics from input tensors, builds a probability distribution-based model of the tensor computation, and uses it to predict traffic for various tiling configurations. It searches over tile shape and size configurations to minimize total traffic. We evaluate D2T2 against Tailors and DRT, two state-of-the-art tiling schemes for sparse tensor algebra. We find that D2T2 achieves, on average, a 2.54× speedup over Tailors and 1.13× lower memory bandwidth than DRT for sparse-sparse matrix multiplication (SpMSpM). We also achieve 1.22–48.94× lower bandwidth for SpMSpM and up to 34.31× lower bandwidth for tensor operations (TTM and MTTKRP) than conservative static tiling schemes. Unlike prior tiling techniques, D2T2 is deployable without specialized hardware support. On Opal, a 16nm sparse tensor algebra accelerator, D2T2-generated tiling configurations achieve 1.23–3.34× speedups over their original hand-tuned configurations.
MICRO ’25, Seoul, Republic of Korea
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>HapticHearing: A Haptic Feedback System for Complementing Auditory Speech Perception for Mild-to-Moderate Hearing Loss</title>
<link href="https://hdl.handle.net/1721.1/164077" rel="alternate"/>
<author>
<name>Chin, Sam</name>
</author>
<author>
<name>Fitz-Gibbon, Emmie</name>
</author>
<author>
<name>Huang, Bingjian</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/164077</id>
<updated>2025-11-27T05:20:00Z</updated>
<published>2025-10-22T00:00:00Z</published>
<summary type="text">HapticHearing: A Haptic Feedback System for Complementing Auditory Speech Perception for Mild-to-Moderate Hearing Loss
Chin, Sam; Fitz-Gibbon, Emmie; Huang, Bingjian; Paradiso, Joseph
Age-related hearing loss is often caused by cochlear hair cell degradation. This creates a challenge for hearing aids, which rely on sound amplification: once hearing ability at a specific frequency is lost, amplification alone provides little benefit. Previous haptic systems have tried to solve this with complete sensory substitution, converting audio signals such as phonemes to tactile patterns. However, these systems require a significant amount of time to learn and induce high cognitive load during haptic perception. Our system, HapticHearing, takes the alternative approach of leveraging a user’s residual hearing and complementing it with tactile feedback. We present a custom multi-actuator haptic device designed to translate phonemic information from speech into tactile patterns that are customized to a user’s hearing loss and speech perception abilities. The system consists of a microphone for speech capture, four-band energy envelope extraction with vowel embedding, a custom USB-to-haptic driver PCB, and wearable devices containing eight vibrotactile actuators that deliver personalized tactile feedback based on the user’s audiogram. Psychophysical validation (n=9) showed that neck-worn devices achieved better spatial localization (67% vs 53%), while bracelet and necklace devices had lower detection thresholds than over-ear devices (0.09 vs 0.18).
ASSETS ’25, Denver, CO, USA
</summary>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resonance: Drawing from Memories to Imagine Positive Futures through AI-Augmented Journaling</title>
<link href="https://hdl.handle.net/1721.1/164076" rel="alternate"/>
<author>
<name>Zulfikar, Wazeer</name>
</author>
<author>
<name>Chiaravalloti, Treyden</name>
</author>
<author>
<name>Shen, Jocelyn</name>
</author>
<author>
<name>Picard, Rosalind</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/164076</id>
<updated>2025-11-27T05:20:13Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">Resonance: Drawing from Memories to Imagine Positive Futures through AI-Augmented Journaling
Zulfikar, Wazeer; Chiaravalloti, Treyden; Shen, Jocelyn; Picard, Rosalind; Maes, Pattie
People inherently use experiences of their past while imagining their future, a capability that plays a crucial role in mental health. Resonance is an AI-powered journaling tool designed to augment this ability by offering AI-generated, action-oriented suggestions for future activities based on the user’s own past memories. Suggestions are offered when a new memory is logged and are followed by a prompt for the user to imagine carrying out the suggestion. In a two-week randomized controlled study (N=55), we found that using Resonance significantly improved mental health outcomes, reducing the users’ PHQ8 scores, a measure of current depression, and increasing their daily positive affect, particularly when they would likely act on the suggestion. Notably, the effectiveness of the suggestions was higher when they were personal, novel, and referenced the user’s logged memories. Finally, through open-ended feedback, we discuss the factors that encouraged or hindered the use of the tool.
AHs 2025, Masdar City, Abu Dhabi, United Arab Emirates
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>A11yShape: AI-Assisted 3-D Modeling for Blind and Low-Vision Programmers</title>
<link href="https://hdl.handle.net/1721.1/164075" rel="alternate"/>
<author>
<name>Zhang, Zhuohao (Jerry)</name>
</author>
<author>
<name>Li, Haichang</name>
</author>
<author>
<name>Yu, Chun Meng</name>
</author>
<author>
<name>Faruqi, Faraz</name>
</author>
<author>
<name>Xie, Junan</name>
</author>
<author>
<name>Kim, Gene</name>
</author>
<author>
<name>Fan, Mingming</name>
</author>
<author>
<name>Forbes, Angus</name>
</author>
<author>
<name>Wobbrock, Jacob</name>
</author>
<author>
<name>Guo, Anhong</name>
</author>
<author>
<name>He, Liang</name>
</author>
<id>https://hdl.handle.net/1721.1/164075</id>
<updated>2025-11-27T05:20:04Z</updated>
<published>2025-10-22T00:00:00Z</published>
<summary type="text">A11yShape: AI-Assisted 3-D Modeling for Blind and Low-Vision Programmers
Zhang, Zhuohao (Jerry); Li, Haichang; Yu, Chun Meng; Faruqi, Faraz; Xie, Junan; Kim, Gene; Fan, Mingming; Forbes, Angus; Wobbrock, Jacob; Guo, Anhong; He, Liang
Building 3-D models is challenging for blind and low-vision (BLV) users due to the inherent complexity of 3-D models and the lack of support for non-visual interaction in existing tools. To address this issue, we introduce A11yShape, a novel system designed to help BLV users who possess basic programming skills understand, modify, and iterate on 3-D models. A11yShape leverages LLMs and integrates with OpenSCAD, a popular open-source editor that generates 3-D models from code. Key functionalities of A11yShape include accessible descriptions of 3-D models, version control to track changes in models and code, and a hierarchical representation of model components. Most importantly, A11yShape employs a cross-representation highlighting mechanism to synchronize semantic selections across all model representations—code, semantic hierarchy, AI description, and 3-D rendering. We conducted a multi-session user study with four BLV programmers, where, after an initial tutorial session, participants independently completed 12 distinct models across two testing sessions, achieving results they found satisfactory. The results demonstrate that participants were able to comprehend provided 3-D models, as well as independently create and modify 3-D models—tasks that were previously impossible without assistance from sighted individuals.
ASSETS ’25, Denver, CO, USA
</summary>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benthic: Perceptually Congruent Structures for Accessible Charts and Diagrams</title>
<link href="https://hdl.handle.net/1721.1/164074" rel="alternate"/>
<author>
<name>Mei, Catherine</name>
</author>
<author>
<name>Pollock, Josh</name>
</author>
<author>
<name>Hajas, Daniel</name>
</author>
<author>
<name>Zong, Jonathan</name>
</author>
<author>
<name>Satyanarayan, Arvind</name>
</author>
<id>https://hdl.handle.net/1721.1/164074</id>
<updated>2025-11-27T05:20:31Z</updated>
<published>2025-10-22T00:00:00Z</published>
<summary type="text">Benthic: Perceptually Congruent Structures for Accessible Charts and Diagrams
Mei, Catherine; Pollock, Josh; Hajas, Daniel; Zong, Jonathan; Satyanarayan, Arvind
Graphical representations — such as charts and diagrams — have a visual structure that communicates the relationship between visual elements. For instance, we might consider two elements to be connected when there is a line or arrow between them, or for there to be a part-to-whole relationship when one element is contained within the other. Yet, existing screen reader solutions rarely surface this structure for blind and low-vision readers. Recent approaches explore hierarchical trees or adjacency graphs, but these structures capture only parts of the visual structure — containment or direct connections, respectively. In response, we present Benthic, a system that supports perceptually congruent screen reader structures, which align screen reader navigation with a graphic’s visual structure. Benthic models graphical representations as hypergraphs: a relaxed tree structure that allows a single hyperedge to connect a parent to a set of children nodes. In doing so, Benthic is able to capture both hierarchical and adjacent visual relationships in a manner that is domain-agnostic and enables fluid (i.e., concise and reversible) traversal. To evaluate Benthic, we conducted a study with 15 blind participants who were asked to explore two kinds of graphical representations that have previously been studied with sighted readers. We find that Benthic’s perceptual congruence enabled flexible, goal-driven exploration and supported participants in building a clear understanding of each diagram’s structure.
ASSETS ’25, Denver, CO, USA
</summary>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quartz: A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications</title>
<link href="https://hdl.handle.net/1721.1/164073" rel="alternate"/>
<author>
<name>Golden, Courtney</name>
</author>
<author>
<name>Feldmann, Axel</name>
</author>
<author>
<name>Emer, Joel</name>
</author>
<author>
<name>Sanchez, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/164073</id>
<updated>2025-11-27T05:19:04Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Quartz: A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications
Golden, Courtney; Feldmann, Axel; Emer, Joel; Sanchez, Daniel
Iterative sparse matrix computations lie at the heart of many scientific computing and graph analytics algorithms. On conventional systems, their irregular memory accesses and low arithmetic intensity create challenging memory bandwidth bottlenecks. To overcome such bottlenecks, distributed-SRAM architectures are structured as an array of tiles, each with a processing element (PE) and a small local memory, to achieve very high aggregate memory bandwidth. However, current distributed-SRAM architectures suffer from either poor programmability due to over-specialized PEs or poor compute performance due to inefficient general-purpose PEs.
We propose Quartz, a new architecture that uses short dataflow tasks and reconfigurable PEs in a distributed-SRAM system to deliver both high performance and high programmability. Unlike traditional sparse CGRAs or on-die reconfigurable engines, Quartz allows reconfigurable compute to be highly utilized and scaled by (1) providing high memory bandwidth to each processing element and (2) introducing a task-level dataflow execution model that fits this new setting. Our execution model dynamically reconfigures each tile’s PE in response to inter-tile messages to execute tasks on local data. This execution model enables fine-grained data partitioning across tiles. To make execution efficient, we explore novel data partitioning techniques that use graph and hypergraph partitioning to minimize network traffic and balance load in the face of both static-static and static-dynamic operand sparsity. To ensure programmability, we show how a wide range of Einsum-expressible computations and flexible data distributions can be systematically captured in small tasks for execution on Quartz.
Quartz’s architecture, data partitioning techniques, and programming model together achieve a gmean 21.4× speedup over a prior state-of-the-art system for six different iterative sparse applications from scientific computing and graph analytics.
MICRO ’25, Seoul, Republic of Korea
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Converting Spatial to Social: Using Persistent Homology to Understand Social Groups</title>
<link href="https://hdl.handle.net/1721.1/164072" rel="alternate"/>
<author>
<name>Chen, Valerie</name>
</author>
<author>
<name>Liang, Claire</name>
</author>
<author>
<name>Shah, Julie</name>
</author>
<author>
<name>Andrist, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/164072</id>
<updated>2025-11-27T05:19:58Z</updated>
<published>2025-10-12T00:00:00Z</published>
<summary type="text">Converting Spatial to Social: Using Persistent Homology to Understand Social Groups
Chen, Valerie; Liang, Claire; Shah, Julie; Andrist, Sean
In social settings, people display sophisticated spatial behaviors—for example, one might naturally enter into a conversation by sidling up to a group. Artificial agents will need the ability to reason about spatial representations of social information to understand not only how social groups form, but also how to interact within and around them. Leveraging the insight that people reason about shared space topologically rather than geometrically, we employ techniques from applied topology to introduce a new method for social group analysis that improves quantifiability and enables rigorous analysis of social group structure. We present a novel topological mathematical formalism called the social simplicial complex that provides an equivalence relation for socially analogous configurations of people and is provably robust against small perturbations and noise. Moreover, this formalism suggests quantifiable metrics to assess the confidence of social group existence and the social closeness of people within groups. We further use this formalism to introduce an open-source toolkit for evaluating possible models of social relationships, which we name the Social Topological Analysis (SoTA) Toolkit. Finally, we explore algebraic topology’s potential to serve more generally as a powerful tool for multi-modal social data processing, and its possibilities for further applications in social-spatial analysis.
ICMI ’25, Canberra, ACT, Australia
</summary>
<dc:date>2025-10-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>LLMs in Citation Intent Classification: Progress, Precision, and Reproducibility Challenges</title>
<link href="https://hdl.handle.net/1721.1/164071" rel="alternate"/>
<author>
<name>Fogelson, Alex</name>
</author>
<author>
<name>Thompson, Neil</name>
</author>
<author>
<name>Trišović, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/164071</id>
<updated>2025-11-27T05:20:07Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">LLMs in Citation Intent Classification: Progress, Precision, and Reproducibility Challenges
Fogelson, Alex; Thompson, Neil; Trišović, Ana
Understanding the intent behind scientific citations is critical for advancing scholarly search and knowledge mapping. This paper reflects on the methodological use of large language models (LLMs) for multi-class citation intent classification. Our experiments evaluating a diverse range of models and approaches reveal striking disagreement among state-of-the-art (SotA) systems. This inconsistency suggests that citation intent classification remains a challenging task for LLMs, raising questions about the robustness, reliability, and replicability of current methods. Moreover, our findings highlight a concerning dependency on proprietary LLMs that, even with access to compute resources, were necessary to achieve sufficient accuracy. This introduces new challenges, as silent updates, lack of versioning, and opaque training pipelines pose threats to methodological transparency and long-term reproducibility in LLM-enabled research.
ACM REP ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ancestral Technology: Inside Colombia’s Hidden Technological Landscape</title>
<link href="https://hdl.handle.net/1721.1/164070" rel="alternate"/>
<author>
<name>Reynolds-Cuellar, Pedro</name>
</author>
<id>https://hdl.handle.net/1721.1/164070</id>
<updated>2025-11-27T05:20:35Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Ancestral Technology: Inside Colombia’s Hidden Technological Landscape
Reynolds-Cuellar, Pedro
Luz Marina Burgos’ fingers moved deliberately across the threads, constructing a tšombiach—a ceremonial sash commonly used to protect and strengthen the body. “This is the frog; to us, it represents fertility,” she explained, pointing to an emerging pattern. “This is the sun. Families weave it differently. This is how the tšombiach helps us tell our own story.” What I witnessed in this Colombian village was not simply craft—it was a technology for encoding and transmitting intergenerational knowledge.

Spending most of my time between MIT and Harvard created a sense of technology as merely technical or socio-technical systems serving as a means to undetermined progress that only a few seem able to influence or hold power over: a sense of relentless push towards the new, often at the expense of the old. Learning from Luz Marina, a traditional weaver from the Quillasinga Indigenous people, helped me make sense of radically different technological values, motivations, and purposes. She is part of a centuries-long tradition of sustaining technologies designed for a different purpose entirely: cultural preservation. These technological systems solve immediate problems while maintaining the social fabric that makes problem-solving possible across generations.

During five years of fieldwork in Colombia’s rural communities—ultimately leading to my doctoral dissertation—I encountered technologies that function according to entirely different logics than those driving “modern” narratives of innovation. I began—along with my collaborators in Colombia—conceptualizing these as “ancestral technologies”: forms of world-making—some of which take the form of artifacts—that primarily support cultural cohesion, remain rooted in specific geographies, and carry their history through collective memory. Unlike modern technologies optimized for profit, efficiency, or scale, these ancestral systems optimize for continuity and collective meaning. In an era when predictive technology sells the fantasy that unlimited computational power must be our goal as a society, perhaps the question is not whether we can build more powerful systems, but whether we can build systems that help us preserve what matters most.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Asymmetric linker generates intrinsically disordered metal–organic framework with local MOF-74 structure</title>
<link href="https://hdl.handle.net/1721.1/164069" rel="alternate"/>
<author>
<name>Dinakar, Bhavish</name>
</author>
<author>
<name>Oppenheim, Julius J</name>
</author>
<author>
<name>Vandone, Marco</name>
</author>
<author>
<name>Torres, Juan F</name>
</author>
<author>
<name>Iliescu, Andrei</name>
</author>
<author>
<name>Yang, Zhentao</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Dincă, Mircea</name>
</author>
<id>https://hdl.handle.net/1721.1/164069</id>
<updated>2025-11-26T03:11:05Z</updated>
<published>2025-08-14T00:00:00Z</published>
<summary type="text">Asymmetric linker generates intrinsically disordered metal–organic framework with local MOF-74 structure
Dinakar, Bhavish; Oppenheim, Julius J; Vandone, Marco; Torres, Juan F; Iliescu, Andrei; Yang, Zhentao; Román-Leshkov, Yuriy; Dincă, Mircea
Here, we report an intrinsically disordered MOF in the MOF-74 family, Mg2x(as-dobpdc) (as-dobpdc4− = 3′,4-dioxidobiphenyl-3,4′-dicarboxylate). Despite the absence of crystallinity, this material exhibits local ordering consistent with that of its crystalline isomers, maintains porosity, and exhibits a high density of open metal sites.
</summary>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable aviation fuels from biomass and biowaste via bio- and chemo-catalytic conversion: Catalysis, process challenges, and opportunities</title>
<link href="https://hdl.handle.net/1721.1/164068" rel="alternate"/>
<author>
<name>Zhang, Junyan</name>
</author>
<author>
<name>Webber, Matthew S</name>
</author>
<author>
<name>Pu, Yunqiao</name>
</author>
<author>
<name>Li, Zhenglong</name>
</author>
<author>
<name>Meng, Xianzhi</name>
</author>
<author>
<name>Stone, Michael L</name>
</author>
<author>
<name>Wei, Bingqing</name>
</author>
<author>
<name>Wang, Xueqi</name>
</author>
<author>
<name>Yuan, Sainan</name>
</author>
<author>
<name>Klein, Bruno</name>
</author>
<author>
<name>Seemala, Bhogeswararao</name>
</author>
<author>
<name>Wyman, Charles E</name>
</author>
<author>
<name>Ramasamy, Karthikeyan K</name>
</author>
<author>
<name>Thorson, Mike</name>
</author>
<author>
<name>Langholtz, Matthew H</name>
</author>
<author>
<name>Heyne, Joshua S</name>
</author>
<author>
<name>Koishybay, Aibolat</name>
</author>
<author>
<name>Adhikari, Shiba</name>
</author>
<author>
<name>Cao, Sufeng</name>
</author>
<author>
<name>Sutton, Andrew D</name>
</author>
<author>
<name>Tuskan, Gerald A</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Ragauskas, Arthur J</name>
</author>
<author>
<name>Ling, Tao</name>
</author>
<author>
<name>Davison, Brian H</name>
</author>
<id>https://hdl.handle.net/1721.1/164068</id>
<updated>2025-11-26T03:11:03Z</updated>
<published>2025-06-01T00:00:00Z</published>
<summary type="text">Sustainable aviation fuels from biomass and biowaste via bio- and chemo-catalytic conversion: Catalysis, process challenges, and opportunities
Zhang, Junyan; Webber, Matthew S; Pu, Yunqiao; Li, Zhenglong; Meng, Xianzhi; Stone, Michael L; Wei, Bingqing; Wang, Xueqi; Yuan, Sainan; Klein, Bruno; Seemala, Bhogeswararao; Wyman, Charles E; Ramasamy, Karthikeyan K; Thorson, Mike; Langholtz, Matthew H; Heyne, Joshua S; Koishybay, Aibolat; Adhikari, Shiba; Cao, Sufeng; Sutton, Andrew D; Tuskan, Gerald A; Román-Leshkov, Yuriy; Ragauskas, Arthur J; Ling, Tao; Davison, Brian H
Sustainable aviation fuel (SAF) production from biomass and biowaste streams is an attractive option for decarbonizing the aviation sector, one of the most-difficult-to-electrify transportation sectors. Despite ongoing commercialization efforts using ASTM-certified pathways (e.g., lipid conversion, Fischer–Tropsch synthesis), production capacities are still inadequate due to limited feedstock supply and high production costs. New conversion technologies that utilize lignocellulosic feedstocks are needed to meet these challenges and satisfy the rapidly growing market. Combining bio- and chemo-catalytic approaches can leverage advantages from both methods, i.e., high product selectivity via biological conversion, and the capability to build C-C chains more efficiently via chemical catalysis. Herein, conversion routes, catalysis, and processes for such pathways are discussed, while key challenges and meaningful R&amp;D opportunities are identified to guide future research activities in the space. Bio- and chemo-catalytic conversion primarily utilize the carbohydrate fraction of lignocellulose, leaving lignin as a waste product. This makes lignin conversion to SAF critical in order to utilize whole biomass, thereby lowering overall production costs while maximizing carbon efficiencies. Thus, lignin valorization strategies are also reviewed herein with vital research areas identified, such as facile lignin depolymerization approaches, highly integrated conversion systems, novel process configurations, and catalysts for the selective cleavage of aryl C–O bonds. The potential efficiency improvements available via integrated conversion steps, such as combined biological and chemo-catalytic routes, along with the use of different parallel pathways, are identified as key to producing all components of a cost-effective, 100% SAF.
</summary>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Carbon Mass Closure in Polyolefin Hydrocracking</title>
<link href="https://hdl.handle.net/1721.1/164067" rel="alternate"/>
<author>
<name>Brenner, Anna E</name>
</author>
<author>
<name>Drake, Griffin</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<id>https://hdl.handle.net/1721.1/164067</id>
<updated>2025-11-26T03:11:02Z</updated>
<published>2025-07-24T00:00:00Z</published>
<summary type="text">Methods for Carbon Mass Closure in Polyolefin Hydrocracking
Brenner, Anna E; Drake, Griffin; Beckham, Gregg T; Román-Leshkov, Yuriy
Heterogeneous catalytic hydrocracking of polyolefins is a promising approach for the processing of postconsumer plastics, but product quantification methods remain inconsistent across the literature. In systems that generate a large fraction of vapor-phase products, typical product capture methods can result in large carbon balance deficits, exceeding 50%, compromising reported yields and selectivities. Here, we identify the major sources of product loss and develop enhanced capture methods to improve the quantification accuracy. Seven supplemental techniques were evaluated, targeting either increased vapor recovery (by increasing the volatility or system volume) or enhanced retention in the liquid phase (by decreasing volatility). Among these, a flow collection approach using a continuous helium sweep and downstream gas sampling bag capture yielded the highest recovery, achieving a 96 ± 9.2% carbon balance closure. We show that the efficacy of these methods is strongly dependent on product distribution. In general, solvent addition was most effective when condensable species dominate the product distribution, while flow collection was preferred when both condensable species and light gases are present in high concentrations. These results highlight the need for method-specific workup strategies and demonstrate that no single protocol is universally optimal. We provide general guidelines for selecting and implementing robust product capture techniques, enabling accurate yield and selectivity determinations in polyolefin hydrocracking systems.
</summary>
<dc:date>2025-07-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lignin Extraction and Condensation as a Function of Temperature, Residence Time, and Solvent System in Flow-through Reactors</title>
<link href="https://hdl.handle.net/1721.1/164066" rel="alternate"/>
<author>
<name>Brandner, David G</name>
</author>
<author>
<name>Gracia Vitoria, Jaime</name>
</author>
<author>
<name>Kenny, Jacob K</name>
</author>
<author>
<name>Bussard, Jeremy R</name>
</author>
<author>
<name>Jang, Jun Hee</name>
</author>
<author>
<name>Woodworth, Sean P</name>
</author>
<author>
<name>Vanbroekhoven, Karolien</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Beckham, Gregg T</name>
</author>
<id>https://hdl.handle.net/1721.1/164066</id>
<updated>2025-11-26T03:11:00Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">Lignin Extraction and Condensation as a Function of Temperature, Residence Time, and Solvent System in Flow-through Reactors
Brandner, David G; Gracia Vitoria, Jaime; Kenny, Jacob K; Bussard, Jeremy R; Jang, Jun Hee; Woodworth, Sean P; Vanbroekhoven, Karolien; Román-Leshkov, Yuriy; Beckham, Gregg T
Solvolytic extraction of lignin from biomass is a critical step in lignin-first biorefining, including the reductive catalytic fractionation (RCF) process. Key to optimal RCF processing is the ability to rapidly extract lignin from biomass at high delignification extents and transfer the lignin molecules to a catalyst surface in a time frame that minimizes lignin condensation reactions. Here, we use a flow-through reactor to study the effects of temperature (175-250 °C), residence time (9 to 36 min), and solvent composition (methanol and methanol-water) on lignin extraction and condensation. We evaluated three metrics at each condition: total delignification, delignification rate, and extent of condensation, the latter measured by a decrease in monomer yield for batch hydrogenolysis reactions of solvolysis liquor compared to batch RCF reactions. We observe that delignification is predominantly determined by temperature, while residence time dictates the lignin condensation extent. Moreover, the extent of both extraction and condensation increased in the methanol-water solvent system compared to that in the methanol system. Lignin extracted in methanol is stable up to 18-min residence times at or below 225 °C, while a majority of the lignin extracted in methanol-water is condensed with a 9-min residence time at 200 °C. These results can inform reactor designs and solvent selection for lignin-first biorefining processes that aim to physically separate the biomass and catalyst.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Additive Manufacturing of Electrical Machines and Electronic Devices</title>
<link href="https://hdl.handle.net/1721.1/164065" rel="alternate"/>
<author>
<name>Cañada Pérez-Sala, Jorge</name>
</author>
<id>https://hdl.handle.net/1721.1/164065</id>
<updated>2025-11-26T03:04:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Additive Manufacturing of Electrical Machines and Electronic Devices
Cañada Pérez-Sala, Jorge
Recent advancements in the additive manufacture of electronics and electrical machines have led to successful demonstrations of 3D-printed passive (e.g., resistors, capacitors, inductors) and active (e.g., transistors) electronic components, as well as magnetic cores and power transfer devices. However, each new demonstration of 3D-printed functional devices has typically required increasingly specialized and expensive manufacturing hardware. This work opposes that trend by developing a technology capable of fabricating all such devices on a single, affordable machine: a material extrusion 3D printer. Material extrusion stands out among additive manufacturing technologies for its widespread availability and its compatibility with monolithic multi-material manufacturing, essential for the fabrication of functional electromagnetic devices. These attributes, together with its well-established ability to fabricate mechanically functional parts, make material extrusion a promising technology for the single-step fabrication of electronics and electrical machines, and for their monolithic integration into complex devices, such as custom functionalized prostheses, robots, and space exploration hardware. In this research, a desktop 3D printer was transformed into an almost-universal manufacturing machine capable of fabricating a myriad of electrically, magnetically, and mechanically functional devices, using various feedstock formats (e.g., filament, pellets, ink). With this machine, milestones such as the fabrication of the first semiconductor-free, fully 3D-printed logic gates, and that of the first fully 3D-printed motor, have been achieved. Built for under $4000 in parts, the modified 3D printer opens the door to the democratization of electronics and electrical machine manufacturing, empowering institutions and individuals alike, and serving as an educational tool to introduce advanced manufacturing to new generations.
Additionally, this work investigates optimization strategies for planar inductors and alternative techniques for the creation of miniaturized, three-dimensional, electrically functional components via two-photon polymerization. By demonstrating novel methods and applications, this thesis advances the state of the art in the additive manufacture of electromagnetic devices and paves the way toward the decentralized fabrication of electrical machines and electronic devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Unified Theory of Representation Learning: How Hidden Relationships Power Algorithms that can Learn without Labels</title>
<link href="https://hdl.handle.net/1721.1/164064" rel="alternate"/>
<author>
<name>Hamilton, Mark T.</name>
</author>
<id>https://hdl.handle.net/1721.1/164064</id>
<updated>2025-11-26T03:04:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Unified Theory of Representation Learning: How Hidden Relationships Power Algorithms that can Learn without Labels
Hamilton, Mark T.
How does the human mind make sense of raw information without being taught how to see or hear? This thesis presents a unifying theory that describes how algorithms can learn and discover structure in complex systems, like natural images, audio, language, and video - without human input. This class of algorithms has the potential to extend our own understanding of the world by helping us to see previously unseen patterns in nature and science. At the core of this thesis’ unified theory is the notion that relationships between deep network representations hold the key to discovering the structure of the world without human input. This work will begin with a few examples of this principle in action: discovering hidden connections that span cultures and millennia in the visual arts, discovering visual objects in large image corpora, classifying every pixel of our visual world, and rediscovering the meaning of words from raw audio, all without human labels. In the latter half of this thesis, we will present two unifying mathematical theories of unsupervised learning. The first will explain why relationships between deep features can rediscover the semantic structure of the natural world by connecting model explainability, cooperative game theory, and deep feature relationships. The second mathematical theory will show that relationships between representations can be used to unify over 20 common machine learning algorithms spanning 100 years of progress in the field of machine learning. In particular, we introduce a single equation that unifies classification, regression, large language modeling, dimensionality reduction, clustering, contrastive learning, and spectral methods. This thesis uses this unified equation as the basis for a “periodic table of representation learning” that predicts the existence of new types of algorithms. We show that one of these predicted algorithms is a state-of-the-art unsupervised image classification technique.
Finally, this work will summarize the key findings and share ongoing and future directions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Score Estimation for Generative Modeling</title>
<link href="https://hdl.handle.net/1721.1/164063" rel="alternate"/>
<author>
<name>Jayashankar, Tejas Kumar</name>
</author>
<id>https://hdl.handle.net/1721.1/164063</id>
<updated>2025-11-26T03:04:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Score Estimation for Generative Modeling
Jayashankar, Tejas Kumar
Recent advances in score-based (diffusion) generative models have achieved state-of-the-art sample quality across standard benchmarks. Building on the remarkable property of these models in estimating scores, this thesis presents three core contributions: 1) new objectives to reduce score estimation error, 2) a novel Bayesian-inspired optimization framework for solving inverse problems, and 3) a fast one-step generative modeling framework that is based on a novel amortized score estimation framework. In the first part of this thesis, we introduce two new score estimation objectives with applications to both implicit and diffusion-based generative models. To improve spectral-based non-parametric estimators, we propose a theoretically optimal parametric framework that learns the score by projecting it onto its top-L principal directions. Additionally, inspired by matrix-valued kernel methods, we present a second approach that lifts the score into the space of outer products, and minimizes the distance between the estimated and true scores in this higher-order space. In the second part, we shift focus from score estimation to leveraging diffusion models as data-driven priors for solving inverse problems. Centering our development around the problem of source separation, we introduce a novel algorithm inspired by maximum a posteriori estimation. This approach combines multiple levels of Gaussian smoothing with an α-posterior, enabling effective signal separation using only independent priors for the sources. We demonstrate the effectiveness of this method through its application to interference mitigation in digital communication signals. Finally, we outline how this framework can be naturally extended to tackle a broader class of inverse problems. In the final part, we return to the fundamental challenge of efficient sampling, which is critical for enabling practical data-driven engineering systems.
We propose a novel generative modeling framework that enables training a one-step neural sampler from scratch. At the core of this method is a new objective based on multi-divergence minimization, guided by a novel approach for score estimation of mixture distributions. Our framework is simple to implement, stable during training, unifies several existing approaches, and achieves state-of-the-art performance in image generation tasks. Furthermore, we discuss how this framework can be naturally extended to multi-step neural sampling and adapted for fast posterior sampling—an essential component in simulation-based inverse problem solvers.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Superconducting Nanowire Integrated Circuits for Scalable Cryogenic Memory</title>
<link href="https://hdl.handle.net/1721.1/164062" rel="alternate"/>
<author>
<name>Medeiros, Owen A.</name>
</author>
<id>https://hdl.handle.net/1721.1/164062</id>
<updated>2025-11-26T03:04:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Superconducting Nanowire Integrated Circuits for Scalable Cryogenic Memory
Medeiros, Owen A.
Superconducting nanowire integrated circuits (SNICs) are a promising class of cryogenic electronics that harness the zero resistance, high kinetic inductance, and nanoscale geometry of ultrathin superconducting wires to implement logic, memory, amplification, and sensing with minimal energy dissipation. Unlike Josephson-junction-based circuits, SNICs support compact, planar layouts compatible with single-layer fabrication and operation in unshielded cryogenic environments. This thesis develops superconducting nanowire memory (SNM) as a scalable implementation of SNICs. A modular cell architecture is introduced, exploiting hysteretic switching and inductive asymmetry to enable nonvolatile digital state storage with zero static power consumption. A hierarchical design framework is established, combining automated layout generation, electrothermal simulation in LTspice, and microscopic modeling using the time-dependent Ginzburg–Landau (TDGL) formalism. To enable scalable integration, this work implements a row–column SNM array layout and demonstrates fabrication across full 4-inch wafers using a planar, single-layer process. Cryogenic measurements validate reliable operation in both single cells and multi-cell arrays, confirming the predictive accuracy of the design and modeling framework. Tradeoffs in bias current levels, pulse timing, and read/write conditions are systematically evaluated through cryogenic measurements, revealing their impact on bit error rate, operational margins, and energy efficiency across single cells and arrays. Together, these contributions establish SNICs as a viable and extensible platform for cryogenic memory, providing the tools, models, and infrastructure needed to enable broader adoption in quantum computing, neuromorphic systems, and other energy-constrained cryogenic applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Next Generation Operating Systems for the Datacenter</title>
<link href="https://hdl.handle.net/1721.1/164061" rel="alternate"/>
<author>
<name>Fried, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/164061</id>
<updated>2025-11-26T03:04:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Next Generation Operating Systems for the Datacenter
Fried, Joshua
Modern datacenters face a fundamental challenge: handling demanding real-time and data-intensive workloads that require both microsecond-scale low latency and high throughput, while simultaneously achieving high resource utilization and efficient multi-tenancy. Traditional operating systems, designed for an era of slower hardware, introduce significant overheads to microsecond-scale I/O that prevent applications from exploiting the full performance of the underlying hardware. Furthermore, their millisecond-scale resource management is ill-equipped to handle the microsecond-level burstiness of modern workloads, leading to costly overprovisioning and idle resources. Recognizing the performance limitations imposed by traditional OSes, a common workaround has emerged: letting applications communicate directly with hardware, bypassing the OS entirely. While this approach offers performance gains by removing the OS from the critical path, existing kernel-bypass solutions require dedicated resources, extensive application rewrites, and provide weak isolation, making them unsuitable for general-purpose, shared environments. This thesis presents a new datacenter operating system, composed of three integrated systems: Shenango, Caladan, and Junction. Together, they preserve the high-performance, low-overhead I/O benefits of kernel bypass, while providing efficient resource management, strong isolation for multi-tenant workloads, and compatibility with unmodified software. First, Shenango enables applications to bypass traditional OS-mediated I/O without dedicating CPU cores solely to polling. Next, Caladan ensures that idle resources can be used productively by other applications by actively managing competition for microarchitectural resources, thereby preserving each application’s high I/O performance and responsiveness.
Finally, Junction overcomes several common limitations of kernel-bypass solutions, bringing these benefits to all applications by preserving compatibility with existing software and reducing memory and polling overheads. Collectively, these systems provide the advantages of direct hardware access without sacrificing the flexibility or efficiency of a general-purpose operating system. This work demonstrates that high I/O performance, efficient resource utilization, and broad application compatibility can indeed coexist, paving the way for a new generation of datacenter operating systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic Development of Healthcare AI: From Data Curation, Algorithm Optimization, Benchmark Design and Clinical Applications</title>
<link href="https://hdl.handle.net/1721.1/164060" rel="alternate"/>
<author>
<name>Gao, Mingye</name>
</author>
<id>https://hdl.handle.net/1721.1/164060</id>
<updated>2025-11-26T03:04:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Systematic Development of Healthcare AI: From Data Curation, Algorithm Optimization, Benchmark Design and Clinical Applications
Gao, Mingye
Artificial intelligence (AI) has brought transformative changes to the healthcare industry in recent years across many areas, such as patient care, disease diagnosis, and medical research. As healthcare systems worldwide face increasing pressure from aging populations and rising chronic disease rates, there is an urgent need for systematic approaches to develop reliable and safe AI solutions. This thesis advances the systematic development of healthcare AI through four interconnected components: data curation, algorithm optimization, benchmark design, and clinical applications. The primary contribution of this thesis focuses on establishing a comprehensive pipeline for healthcare large language models (LLMs), spanning from data curation to clinical deployment. At the data level, a rule-based filtering framework was developed to select high-quality subsets from large pre-training corpora, significantly improving both continued pre-training and fine-tuning performance of LLMs. For safety alignment, an automated pipeline was developed for preference learning that includes preference dataset synthesis, rule-based and data-adaptive annotation, and reward model training. Additionally, two novel benchmarks were created to ensure the reliability and safety of LLMs in healthcare tasks: one assessing demographic biases of LLMs across common diseases, and another assessing models’ ability to reject illogical requests from users in drug-related scenarios. Finally, LLMs were used to generate patient-friendly educational content for clinical trials, demonstrating their role in improving patient education and engagement in clinical trials. This systematic progression from data to deployment establishes a blueprint for developing safe and effective language models in healthcare settings. Beyond language models, machine learning techniques were applied to an additional healthcare task.
In this project, a novel approach combining normalized cross-correlation and attention graph convolutional recurrent networks was developed to realize contactless, continuous, and reliable radar-based vital signs monitoring in dynamic home environments. Through systematic data collection and algorithm optimization, accurate heart rates can be obtained across varying radar-subject distances (2-2.5m) and subject orientations, demonstrating robust performance in real-world conditions through extensive validation in four test houses with six subjects. Collectively, these contributions advance healthcare AI development on two fronts: establishing frameworks for the safe and effective deployment of language models in healthcare settings, and enabling reliable, continuous health monitoring at home without wearable devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks</title>
<link href="https://hdl.handle.net/1721.1/164059" rel="alternate"/>
<author>
<name>Zarkos, Christos V.</name>
</author>
<id>https://hdl.handle.net/1721.1/164059</id>
<updated>2025-11-26T03:06:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks
Zarkos, Christos V.
Serialization frameworks are a fundamental component of datacenters, as they enable language- and platform-neutral communication and storage. However, software serialization faces major performance bottlenecks, resulting in a significant fraction of cloud cycles dedicated to this process. Prior work has proposed specialized hardware accelerators to address these overheads. While these proposals achieve considerable speedups, they are expensive in terms of verification, fabrication, and deployment, and often hardcode too many details of the (de)serialization framework in hardware. We propose SERenaDE, a serialization framework designed to integrate general-purpose accelerators currently deployed in datacenters in order to accelerate and offload serialization to hardware. Specifically, we repurpose the Intel In-Memory Analytics Accelerator (IAA), an accelerator engine offering fast compression, to enable fast, user-transparent serialization and deserialization, completely removing software serialization from the execution pipeline. We evaluate our system on latest-generation production machines, with both synthetic microbenchmarks and representative open-source fleet-wide benchmarks. Our results show comparable performance in terms of per-request latency across all types of messages, while significantly improving throughput, especially at the tail, maintaining thread scalability, and achieving high compression ratios alongside substantial speedups for larger messages. Under 95th-percentile latency constraints, SERenaDE improves serialization and deserialization throughput by 13% and 30%, respectively, while achieving 0.2x to 6.94x smaller serialized message sizes for messages with a total memory layout larger than 4KB.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Biomolecular Interactions with Generative Models</title>
<link href="https://hdl.handle.net/1721.1/164058" rel="alternate"/>
<author>
<name>Corso, Gabriele</name>
</author>
<id>https://hdl.handle.net/1721.1/164058</id>
<updated>2025-11-26T03:04:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Biomolecular Interactions with Generative Models
Corso, Gabriele
In 2021, DeepMind’s AlphaFold2 revolutionized single-chain protein structure prediction by achieving atomic accuracy, solving a longstanding challenge in biology. However, understanding biomolecular interactions, a critical problem for advancing drug discovery and biological research, remained unsolved. This thesis presents our research to redefine the machine learning approach to this problem, modeling structures with a new generative paradigm and tailoring the neural architectures and learning tasks to the specific challenges that arose. These ideas, combined with significant engineering efforts, led us to develop a class of open-source models from DiffDock to the recent Boltz-1. These models have significantly advanced our ability to understand biomolecular interactions; they have been widely adopted in industry and academia to aid drug development and protein design, and they have opened the door to new research paradigms that push biological research further.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Hardware Accelerators for Solving Sparse Linear Systems</title>
<link href="https://hdl.handle.net/1721.1/164057" rel="alternate"/>
<author>
<name>Feldmann, Axel</name>
</author>
<id>https://hdl.handle.net/1721.1/164057</id>
<updated>2025-11-26T03:03:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing Hardware Accelerators for Solving Sparse Linear Systems
Feldmann, Axel
Solving sparse linear systems is a key primitive that sits at the heart of many important numeric algorithms. Because of this primitive’s importance, algorithm designers have spent many decades optimizing linear solvers for high performance hardware. However, despite their efforts, existing hardware has let them down. State-of-the-art linear solvers often utilize &lt; 1% of available compute throughput on existing architectures such as CPUs and GPUs. There are many different algorithms used to solve sparse linear systems. These algorithms are diverse and often have very different computational bottlenecks. These include low arithmetic intensity, fine-grained parallelism, tight dependences, and sparsity-induced load imbalance. This thesis studies the problem of designing hardware accelerators for sparse linear solvers. We propose three novel architectures that explore different parts of the design space. The accelerators exploit static sparsity as the basis of novel hardware-software co-designed scheduling approaches. First, we introduce Spatula, an architecture designed to accelerate direct solvers. Then, we propose Azul, a hardware accelerator targeted at iterative solvers. Taken together, Spatula and Azul demonstrate significant speedups on both of the main classes of sparse linear solver algorithms. Finally, to show that our techniques are useful for end-to-end applications, we present Ōmeteōtl, an accelerator targeted at applications that use iterative solvers in their inner loop. Ōmeteōtl also shows that the techniques in this thesis generalize to sparse matrix computations beyond linear solvers. These accelerators deliver order-of-magnitude speedups over state-of-the-art GPU baselines, achieving &gt; 100× speedups on many inputs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-Optimized Design of 3D Shapes with Part-Based Control</title>
<link href="https://hdl.handle.net/1721.1/164056" rel="alternate"/>
<author>
<name>Zhan, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/164056</id>
<updated>2025-11-26T03:06:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Physics-Optimized Design of 3D Shapes with Part-Based Control
Zhan, Sean
We introduce PhysiOPart, a computational approach for rapid generative design of 3D objects optimized for physical integrity. PhysiOPart enables users to edit and combine object parts to explore a vast design space. To model continuous surfaces of arbitrary resolution without topology restrictions, we parametrize parts with neural implicit representations. However, when parts are assembled to form an object, the resulting geometry is not guaranteed to be functional. Existing generative modeling approaches use task-specific neural predictors to approximate physical behaviors with limited accuracy. We propose an end-to-end differentiable physics simulation pipeline that performs linear static analysis to optimize for user-specified objectives, leveraging learned geometry priors. Our part-based formulation with the finite element method is highly customizable, allowing for user-defined per-part materials, loads, and boundary conditions. The optimized designs exhibit improved physical behavior and can be fabricated.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Assembly of Curved Structures from Flat Configuration</title>
<link href="https://hdl.handle.net/1721.1/164055" rel="alternate"/>
<author>
<name>Zaman, Akib</name>
</author>
<id>https://hdl.handle.net/1721.1/164055</id>
<updated>2025-11-26T03:06:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fast Assembly of Curved Structures from Flat Configuration
Zaman, Akib
Imagine deploying an emergency shelter that transitions seamlessly from a flat configuration to a lifted structure, or a folded robot that is sent through a tunnel and subsequently activated to expand into a larger form at the endpoint, with a single, collective pull of strings. This scenario raises two critical questions: (i) how to decompose the structure into a flat state that encodes the 3D geometry, and (ii) where to place strings through the unit modules to achieve complete actuation. Although these questions have been explored individually, comprehensive solutions remain scarce. To address this challenge, this thesis presents a computational approach for designing freeform structures that can be rapidly assembled from initially flat configurations by a single string pull. Target structures are decomposed into rigid, spatially varied quad tiles optimized to approximate a user-provided surface, forming a flat mechanical linkage. A two-step algorithm is then applied to determine a physically realizable string path that controls only a subset of tiles, enabling smooth actuation from flat to assembled configuration. First, the minimal subset of tiles required for string control is computed by considering both the structure’s geometry and inter-tile interactions. Second, a valid string path is identified through these tiles that minimizes friction, thereby transforming the flat linkage into the target 3D form upon tightening a single string. The resulting designs can be manufactured in flat form using computational fabrication techniques such as 3D printing, CNC milling, or molding, thereby simplifying both production and transportation. Validation is provided through a series of physical prototypes and application case studies, ranging from medical devices and space shelters to large-scale architectural installations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theoretical Foundations for Learning in Games and Dynamic Environments</title>
<link href="https://hdl.handle.net/1721.1/164054" rel="alternate"/>
<author>
<name>Golowich, Noah</name>
</author>
<id>https://hdl.handle.net/1721.1/164054</id>
<updated>2025-11-26T03:04:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Theoretical Foundations for Learning in Games and Dynamic Environments
Golowich, Noah
Decision-making problems lie at the heart of numerous aspects of human and algorithmic behavior across our society, ranging from healthcare systems to financial systems to interactions with the physical world. A central challenge that arises across many decision-making problems is the presence of multiple agents, often with competing incentives. To understand how agents will act in such situations, it is often productive to compute equilibria, which have the property that no agent can deviate from them and improve their utility. An additional challenge is that decisions made by agents often change the state of the environment, which is modeled as dynamic. Thus, we need efficient algorithms for learning good policies, which tell the agent what to do as a function of the environment’s state. Extensive work spanning multiple domains such as economics, computer science, and statistics has been developed to model these decision-making problems. This has led to many celebrated results, which include, for instance, a considerable body of work studying the computational properties of Nash equilibria in normal-form games, and a long line of papers on reinforcement learning. However, many of these classical works suffer from a few shortcomings: first, they often do not account for the enormous state or action spaces available to agents in realistic decision-making settings, and second, many of them do not derive computationally efficient algorithms for the desired solution concepts. These shortcomings are brought to the forefront by the remarkable recent progress in artificial intelligence, which holds promise for solving decision-making problems with enormous state or action spaces but which is often bottlenecked by computation. 
The objective of this thesis is to develop theoretical foundations for the computational aspects of such decision-making problems: e.g., How do we efficiently compute equilibria in large games?, and: How can we efficiently learn near-optimal policies in complex environments? Some highlights of our results are listed below—first, we study problems in which there are multiple agents and the goal is to compute some notion of equilibrium: • We show the first near-optimal rate of convergence to equilibrium for a no-regret learning algorithm in normal-form games, resolving a decade-long line of work which had aimed to establish increasingly better rates. • We establish the first algorithm with sublinear swap regret against arbitrary adversaries enjoying only polylogarithmic dependence on the number of actions, resolving a question of Blum and Mansour from 2007. • As a corollary of the preceding result, we obtain the first polynomial-time algorithm for approximating a correlated equilibrium in extensive-form games (to constant approximation error), addressing a question of von Stengel &amp; Forges from 2008. Additionally we obtain near-optimal bounds on the communication and query complexity of approximating correlated equilibria in normal-form games (to constant approximation error), addressing several open problems in the literature. • We give the first algorithm for the sequential calibration problem with calibration error beating that of the seminal work of Foster &amp; Vohra from 1998. Moving on to decision-making problems where the environment is modeled as dynamic (typically studied in the framework of reinforcement learning (RL)), our results include the following: • We give the first end-to-end computationally efficient algorithms for learning a near-optimal policy in many fundamental reinforcement learning problems, such as those of (constant-action) Linear Bellman Complete MDPs and sparse linear MDPs. 
• We give the first quasi-polynomial time algorithm for finding a near-optimal policy in a general and well-motivated class of partially observable RL environments, and show that our bound is tight. • We prove some (perhaps surprising) hardness results that arise in multi-agent RL problems. For instance, we show that it is computationally hard to implement no-regret learning algorithms in multi-agent RL environments even when the agents can coordinate on their choice of algorithm, which creates a stark contrast with simpler multi-agent learning settings (e.g., in normal-form games) where no-regret learning has formed the bedrock for a wide array of developments over the last several decades. • Nevertheless, we show that by adjusting the type of equilibrium appropriately, we can circumvent the above hardness results and derive computationally efficient decentralized algorithms for computing equilibria in multi-agent RL environments. Many of the above results have inspired follow-up work which includes applications of our results to various problems in game theory, reinforcement learning, online learning, and related domains, as well as the formulation of new problems which are inspired by the above results.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing human vision through large-scale brain imaging and computational models</title>
<link href="https://hdl.handle.net/1721.1/164053" rel="alternate"/>
<author>
<name>Lahner, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/164053</id>
<updated>2025-11-26T03:04:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterizing human vision through large-scale brain imaging and computational models
Lahner, Benjamin
Efforts to understand the neural underpinnings of human visual processing require sufficient experimental data and robust models. This thesis significantly contributes to both these fronts while simultaneously elucidating some of the most intriguing aspects of the human visual system. In the first chapter, I use a combination of classical machine learning, artificial neural networks, and a joint MEG/fMRI neuroimaging dataset to reveal that the human visual system extensively processes highly memorable images in regions distributed throughout visual cortex late in time. In the second chapter, I present the BOLD Moments Dataset, a large-scale fMRI dataset using short video stimuli to extend computational models of visual processing into the video domain to better understand how humans process visual content unfolding over time. The last chapter introduces an fMRI dataset aggregation framework titled MOSAIC to achieve the scale and stimulus diversity needed for training modern neural networks directly on brain responses. This body of work exemplifies how large-scale experimental data and artificial neural networks can contribute towards a robust and generalizable understanding of human visual processing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wireless Systems for a Sustainable Future: From Battery-Free Subsea IoT to THz-Based Agriculture Monitoring</title>
<link href="https://hdl.handle.net/1721.1/164052" rel="alternate"/>
<author>
<name>Afzal, Sayed Saad</name>
</author>
<id>https://hdl.handle.net/1721.1/164052</id>
<updated>2025-11-26T03:03:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wireless Systems for a Sustainable Future: From Battery-Free Subsea IoT to THz-Based Agriculture Monitoring
Afzal, Sayed Saad
This thesis describes how wireless sensing can drive significant advancements in climate and sustainability. Specifically, it shows how we can leverage diverse signals—acoustics, ultrasound, THz, and optics— in unconventional ways to unlock new capabilities in underwater climate monitoring, food safety, and disaster response. The thesis introduces two novel technologies. The first technology enables long-term, ultra-low power ocean sensor networks for use in climate modeling, marine monitoring, and sustainable aquaculture. Unlike existing IoT technologies – like Bluetooth, WiFi, and GPS – which cannot work underwater, we design and implement an ultra-low power subsea backscatter communication system, enabling battery-free underwater imaging, sensing and localization. Second, the thesis describes a new technology that can support sustainability in agriculture through real-time food quality assessment that reduces food waste. In contrast to existing food quality technologies that require direct contact with produce, we introduce a new wireless system for accurate, non-invasive sensing using sub-THz signals. We describe the design, implementation, and evaluation of multiple systems that leverage these technologies to monitor the ocean and food waste: First, we present an ultra-wideband metamaterial sensor design that facilitates scalable, long-range battery-free underwater communication. Next, we describe a system that can push the throughput of this technology using higher order modulation. Beyond building sensor networks, we demonstrate their real-world potential through two systems: one for underwater localization that uses rich spatio-temporal-spectral features for accurate positioning, and another for battery-free imaging that fuses acoustic and optical signals to capture color images in the dark. Finally, we present a novel solution for accurate fruit ripeness sensing using sub-terahertz wireless signals. 
These systems unlock new IoT applications in climate modeling, aquaculture, robotics, and agriculture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Integration and Differentiation of Probabilistic Programs</title>
<link href="https://hdl.handle.net/1721.1/164051" rel="alternate"/>
<author>
<name>Lew, Alex K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164051</id>
<updated>2025-11-26T03:04:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automatic Integration and Differentiation of Probabilistic Programs
Lew, Alex K.
This thesis addresses the challenge of automating fundamental operations from probability theory and calculus on probability distributions defined by higher-order probabilistic programs. It does this by developing a suite of composable program transformations for an expressive core calculus for probabilistic programming: • Integration: Compiling a probabilistic program into a deterministic representation of its expectation operator, handling potentially intractable integrals symbolically. • Unbiased estimation: Transforming programs involving intractable operations (like integration) into runnable probabilistic programs that yield provably unbiased estimates of the original value, with flexible levers for users to navigate cost-variance trade-offs. • Radon-Nikodym differentiation: Compiling probabilistic programs into implementations of a novel interface for the unbiased estimation of density ratios, of the sort that arise in Monte Carlo and variational inference. • Differentiation: Extending automatic differentiation (AD) to compose with the above transformations, enabling the optimization of expected values and density ratios of probabilistic programs. These transformations operate on an expressive higher-order probabilistic programming language and are proven correct using denotational semantics and logical relations. The resulting framework enables the sound and automated implementation of a wide range of algorithms for probabilistic inference and learning. To demonstrate the practical value of these techniques, we use them to implement three systems for scalable probabilistic inference in different domains: (1) extensions to the Gen probabilistic programming system that accelerate and automate a broad range of Monte Carlo and variational inference algorithms, (2) the PClean system for automated Bayesian reasoning about relational data, and (3) the GenLM system for controllable generation from language models. 
We find that our techniques enable these systems to scale to a variety of complex, real-world problems, and to achieve state-of-the-art performance on a range of benchmarks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimizer-space computation</title>
<link href="https://hdl.handle.net/1721.1/164050" rel="alternate"/>
<author>
<name>Ekim, Barış C.</name>
</author>
<id>https://hdl.handle.net/1721.1/164050</id>
<updated>2025-11-26T03:03:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimizer-space computation
Ekim, Barış C.
As the volume of DNA sequencing data increases, the need for algorithmic advances to efficiently handle the data arises. One such concept is minimizers, which are genomic substrings that allow for reduced representations of larger DNA sequences. In this thesis, we introduce minimizer-space computation as a new algorithmic paradigm for DNA sequence analysis. Instead of DNA nucleotides, we consider minimizers as the letters of an extended alphabet in which algorithms operate. We present several techniques on how to efficiently construct these extended alphabets, demonstrate how to develop approaches that use these alphabets and consequently use only a fraction of sequence data, and show how fundamental biological tasks, such as genome assembly and read mapping, can be significantly accelerated over state-of-the-art methods.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performant and Resilient Service Composition for Modern Cloud Applications</title>
<link href="https://hdl.handle.net/1721.1/164049" rel="alternate"/>
<author>
<name>Li, Tianyu</name>
</author>
<id>https://hdl.handle.net/1721.1/164049</id>
<updated>2025-11-26T03:04:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Performant and Resilient Service Composition for Modern Cloud Applications
Li, Tianyu
Modern cloud applications are often distributed systems composed from vendor-provided building blocks (e.g., object storage services, container orchestration services). Consequently, distributed fault-tolerance is a central concern for application correctness. Although each building block may offer individual fault-tolerance, the end-to-end application is still susceptible to failures, because the composition logic that orchestrates them may still fail. This thesis explores resilient composition, a systematic way to assemble fault-tolerant components into resilient end-to-end distributed applications. We begin by presenting the fail-restart system model, which captures the unique fault-tolerance challenges that arise when composing services. Based on this model, we define Composable Resilient Steps (CReSt), an atomic programming abstraction that guarantees fault-tolerance across the assembled application. We then detail efficient methods for implementing CReSt using a range of database techniques, and a novel distributed protocol that allows optimistic, speculative execution ahead of slower fault-tolerance safeguards. Together, these pieces allow developers to assemble fault-tolerant distributed systems that are correct by construction and often more performant than existing solutions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Succinct Cryptography via Propositional Proofs</title>
<link href="https://hdl.handle.net/1721.1/164048" rel="alternate"/>
<author>
<name>Mathialagan, Surya</name>
</author>
<id>https://hdl.handle.net/1721.1/164048</id>
<updated>2025-11-26T03:03:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Succinct Cryptography via Propositional Proofs
Mathialagan, Surya
The goal in modern cryptography is to obtain security while minimizing the use of computational resources. In recent years, we have been incredibly successful in our pursuit of efficiency, even for cryptographic tasks that were thought to be “science fiction”. For example, we have constructions of fully homomorphic encryption and private information retrieval from standard cryptographic assumptions which achieve the ideal levels of succinctness. However, there are still some tasks in cryptography where achieving the “ideal” efficiency from standard assumptions has evaded us. In this thesis, we study the problem of achieving succinctness in two such settings: • Can we construct succinct indistinguishability obfuscation (IO) for Turing machines? In particular, can we construct an obfuscated program whose size is independent of the input length? • Can we construct succinct non-interactive arguments (SNARGs) for all of NP? While the problems seem unrelated at first glance, the root difficulty seems to stem from a similar place: both primitives have non-falsifiable security definitions. In fact, this type of barrier exists for many other cryptographic primitives, including witness encryption. This leads to a central question which we refer to as the “non-falsifiability barrier”: how can we construct non-falsifiable primitives from falsifiable assumptions? In this thesis, we show how to leverage propositional proofs to overcome the non-falsifiability barrier, and make substantial progress in the goal of achieving succinctness in both settings. Our main result is a universal construction of both SNARGs and succinct IO for Turing machines from standard assumptions using propositional proofs. We then show several applications, including rate-1 IO for many programs, the first succinct secret sharing schemes for monotone circuits, and many more. 
Our results establish propositional proofs as a foundational tool for achieving succinctness across a broad range of cryptographic settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Statistical and Algorithmic Thresholds in Spin Glasses</title>
<link href="https://hdl.handle.net/1721.1/164047" rel="alternate"/>
<author>
<name>Huang, Brice</name>
</author>
<id>https://hdl.handle.net/1721.1/164047</id>
<updated>2025-11-26T03:03:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Statistical and Algorithmic Thresholds in Spin Glasses
Huang, Brice
This thesis studies spin glasses, disordered complex systems originating in statistical physics. Such systems model optimization, sampling, and inference problems from probability and statistics, which are of fundamental importance to modern data science. In particular, spin glasses provide natural examples of random, high-dimensional, and often highly non-convex cost or log-likelihood functions, making them an excellent testing ground for such questions. Part I of this thesis studies statistical properties of these models. Chapter 2 identifies the storage capacity of the Ising perceptron, a simple model of a neural network, subject to a numerical condition. This gives a conditional proof of a 1989 conjecture of Krauth and Mézard. Chapter 3 gives a new proof of the celebrated Parisi formula for the free energy of the spherical mean-field spin glass, which was first proved by Talagrand and in more generality by Panchenko. Our proof takes a simpler modular approach, drawing on recent advances in spin glass free energy landscapes due to Subag. Chapter 4 characterizes the topology trivialization phase transition of multi-species spherical spin glasses and shows that low-temperature Langevin dynamics finds the ground state in the topologically trivial regime; the latter result is new even in the single-species setting. Part II of this thesis concerns algorithms for optimization and sampling problems on spin glasses. Chapter 5 studies the problem of optimizing the Hamiltonian of a multi-species spherical spin glass. Our main result exactly characterizes the maximum value attainable by a class of algorithms that are suitably Lipschitz in the disorder. This class includes gradient-based algorithms and Langevin dynamics on constant time scales, and in particular includes the best algorithm known for this problem. 
This chapter is part of a series of works where we establish exact algorithmic thresholds using the branching overlap gap property (OGP), a landscape property introduced in our earlier work (which appears in our S.M. thesis). In this chapter, we develop a more robust way to establish the branching OGP that does not require Guerra’s interpolation; this allows our method to be applied to models well beyond the (single-species) mean-field spin glass we previously considered. Chapters 6 and 7 study sampling from the Gibbs measure of a spherical mean-field spin glass. Chapter 6 develops a sampling algorithm based on simulating Eldan’s stochastic localization scheme, while Chapter 7 analyzes simulated annealing of Langevin dynamics. We prove both algorithms succeed for inverse temperatures up to a stochastic localization threshold. Chapter 6 gives the first stochastic localization-based sampler with a guarantee of vanishing total variation error, improving on earlier algorithms with vanishing Wasserstein error. Chapter 7 provides the first provable guarantees for a Markov chain in this model beyond the uniqueness threshold, where mixing from worst-case initialization is provably slow.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Cavity-Coupled Rydberg Atom Array for Quantum Science and Quantum Computing</title>
<link href="https://hdl.handle.net/1721.1/164046" rel="alternate"/>
<author>
<name>Hu, Beili</name>
</author>
<id>https://hdl.handle.net/1721.1/164046</id>
<updated>2025-11-26T03:03:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Cavity-Coupled Rydberg Atom Array for Quantum Science and Quantum Computing
Hu, Beili
Neutral atom arrays have rapidly emerged as a leading platform for quantum computing, boasting scalable, configurable arrays of single atoms trapped in optical tweezers, fast, high-fidelity entangling gates through Rydberg interactions, and programmable, parallelized control of qubit operations. Coupling an atom array to an optical cavity opens a new frontier. Leveraging enhanced light-atom interactions in cavity quantum electrodynamics, cavity- coupled atom arrays acquire capabilities that can further expand the neutral atom toolbox, including cavity-enhanced atom readouts, atom-photon entanglement, and photon-mediated interactions between distant atoms.&#13;
&#13;
This thesis presents a quantum hardware platform that integrates an array of neutral atoms with a high-finesse optical cavity. After describing the design and development of the experimental apparatus, I demonstrate high-fidelity atom state readout through the cavity, achieving improved speed and atom survival compared to conventional free-space imaging methods. I then introduce a new technique for selectively controlling atom-cavity coupling on arbitrary subsets of the array, using local AC Stark shifts on the excited states of the atoms. Building on these tools, I demonstrate fast, non-destructive cavity-based readout of atom arrays, addressing a crucial bottleneck of atom array platforms. I also showcase real-time measurement and feedback capabilities with a demonstration of classical error correction, using a register of atomic bits. Finally, I describe progress toward implementing single- and two-qubit gates within the cavity-coupled system. By combining coherent control, tunable interactions, high-fidelity non-destructive readout, and integrated real-time feedback, the cavity-coupled Rydberg atom array offers a promising path toward fault-tolerant quantum computing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimation, Prediction and Counterfactual Inference with Dependent Observations</title>
<link href="https://hdl.handle.net/1721.1/164045" rel="alternate"/>
<author>
<name>Kandiros, Anthimos Vardis</name>
</author>
<id>https://hdl.handle.net/1721.1/164045</id>
<updated>2025-11-26T03:03:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Estimation, Prediction and Counterfactual Inference with Dependent Observations
Kandiros, Anthimos Vardis
The success of modern data science is largely driven by access to large-scale, high dimensional data. Much of classical machine learning has been developed under the assumption that this data is generated independently from some distribution. However, this assumption is often violated when data exhibit complex dependencies across a spatial or temporal domain, or due to social interactions. In this thesis, our goal is to design and analyze methods that address these dependencies for performing three fundamental estimation tasks: unsupervised learning, supervised learning and counterfactual inference. In unsupervised learning, we observe a sequence of unlabeled examples and our goal is to infer some structural property of the distribution they came from. The presence of dependencies could severely complicate this question. Our results in this direction encompass both fully observable as well as latent variable models. For fully observable models, we use the celebrated Ising model to describe the dependencies. Assuming we have access to a single sample from some Ising model, which captures a variety of real-world scenarios, we design and analyze polynomial time algorithms for recovering the matrix corresponding to the network structure of the model. We then leverage these techniques to obtain improved guarantees for estimating Ising models in Total Variation (TV) distance from multiple samples. For latent variable models, we focus on the case where the structure is a tree and we get samples from the leaves, which is a common scenario in phylogenetics. Assuming the model is Gaussian, we analyze the behavior of the Expectation-Maximization (EM) algorithm, a popular heuristic for latent variable models. We show that for trees with a single latent node, EM converges to the true model and for general tree topologies, the only stationary point in the interior of the domain is the true model. 
We then shift our focus to discrete models and study latent tree Ising models, for which we provide polynomial time algorithms for learning the distribution of leaves in TV distance. In supervised learning, we observe a sequence of feature-label pairs and our task is to learn the predictive relationship between the features and the labels. Here, this relationship could be confounded by the presence of dependencies among labels. We formulate this question as a regression problem, where the labels of the units follow the joint distribution of an Ising model with an unknown strength parameter and external fields that are determined by the regression function. We characterize the minimax optimal rate of estimation for the various parameters and provide an efficient algorithm that achieves it. Interestingly, it might not be possible to estimate all the parameters in some cases. In counterfactual inference, we focus on the design of network experiments, where the treatment of a unit could affect the outcome of a neighboring unit in an underlying graph. Our goal is to estimate a general causal effect that can be defined as the average difference in outcomes for a unit under two different interventions. For an arbitrary such effect, we propose an experimental design, called the conflict graph design. For an unbiased estimator of that effect, we prove bounds on its variance that yield the best known rates of estimation for various effects studied in the literature, such as the average direct effect and the total effect, but also provide estimation rates for effects that have received less attention from the perspective of experimental design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hardening Trusted Execution Environments Against Microarchitectural Side-Channel Attacks: A Constructive Approach</title>
<link href="https://hdl.handle.net/1721.1/164044" rel="alternate"/>
<author>
<name>Dréan, Jules Guillaume Jacques Bénony D</name>
</author>
<id>https://hdl.handle.net/1721.1/164044</id>
<updated>2025-11-26T03:03:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hardening Trusted Execution Environments Against Microarchitectural Side-Channel Attacks: A Constructive Approach
Dréan, Jules Guillaume Jacques Bénony D
Trusted Execution Environments (TEEs) [1–5] promised to enable secure computation even in the presence of privileged adversaries by providing hardware-enforced isolation. However, the discovery of microarchitectural side-channel and transient execution attacks [6–10] has severely undermined these security guarantees. These attacks exploit shared hardware resources and speculative execution to leak sensitive information across security boundaries, effectively bypassing the architectural isolation enforced by TEEs. The widespread impact of these vulnerabilities is evidenced by more than 43 published attacks [11] targeting commercial TEE platforms including Intel SGX, AMD-SEV, and ARM TrustZone. Existing approaches to defend against these attacks face significant limitations. Hardware-based solutions [12–14] often require complex processor modifications with significant hardware overhead. Replacing trusted hardware with cryptographic approaches incurs prohibitive performance overheads [15]. Meanwhile, formal verification methods struggle to scale to realistic code base sizes and often fail to capture subtle microarchitectural behaviors [16–18]. This thesis proposes a constructive approach to TEE security and demonstrates that practical defenses against microarchitectural attacks are achievable through careful system design. Rather than relying only on models and simulations, we focus on constructing systems that are secure by design. Our work is concretely realized through the design, implementation, and evaluation of two novel platforms: First, we present Citadel, a TEE platform that enables secure shared memory while providing precise guarantees against microarchitectural side-channel attacks. Citadel introduces relaxed microarchitectural isolation (RMI), a novel security property that allows programs to share memory while restricting information leakage to that of a non-speculative execution. 
To achieve RMI, Citadel combines hardware-enforced microarchitectural isolation with two simple mechanisms for controlled speculation: SpecSafe, which prevents speculative shared-memory accesses entirely, and Burst mode, which enables better performance through constrained speculation on small code snippets. Through a fully functional FPGA prototype, we demonstrate that Citadel can run real-world applications including cryptographic libraries and private ML inference with less than 5% overhead while maintaining strong security guarantees. Second, we develop Argos, an “integrity-only” TEE specifically designed for verifiable fully homomorphic encryption, which enables the deployment of FHE schemes in real-world settings where malicious security is required. We show that by carefully constraining the attack surface and employing simple hardware mechanisms, we can achieve complete security against microarchitectural attacks. Argos introduces a simplified transcript-based attestation scheme that only requires one signature per FHE computation, amortizing the cost of relying on a physical TPM to microarchitecturally isolate secrets. Argos can be used not only to enforce circuit-level integrity of FHE schemes but can also be extended to support more complex FHE-based applications that take (potentially poisoned) input from the (malicious) circuit evaluator. Argos is compatible with commodity hardware and only incurs minimal performance overhead, with an average of 3% overhead for FHE evaluation and 8% overhead for complex protocols. Through these systems, we show that effective defenses can be built against microarchitectural side-channel and transient execution attacks. Our constructive approach yields practical systems that are secure by design while maintaining efficiency and usability. This thesis opens new possibilities for the deployment of trusted hardware by demonstrating concrete paths toward robust microarchitectural security.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steering Robots with Inference-Time Interactions</title>
<link href="https://hdl.handle.net/1721.1/164043" rel="alternate"/>
<author>
<name>Wang, Yanwei</name>
</author>
<id>https://hdl.handle.net/1721.1/164043</id>
<updated>2025-11-26T03:03:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Steering Robots with Inference-Time Interactions
Wang, Yanwei
Imitation learning has driven the development of generalist policies capable of autonomously solving multiple tasks. However, when a pretrained policy makes errors during deployment, there are limited mechanisms for users to correct its behavior. While collecting additional data for finetuning can address such issues, doing so for each downstream use case is inefficient at deployment. My research proposes an alternative: keeping pretrained policies frozen as a fixed skill repertoire while allowing user interactions to guide behavior generation toward user preferences at inference time. By making pretrained policies steerable, users can help correct policy errors when the model struggles to generalize—without needing to finetune the policy. Specifically, I propose (1) inference-time steering, which leverages user interactions to switch between discrete skills, and (2) task and motion imitation, which enables user interactions to edit continuous motions while satisfying task constraints defined by discrete symbolic plans. These frameworks correct misaligned policy predictions without requiring additional training, maximizing the utility of pretrained models while achieving inference-time user objectives.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Deep Learning Efficiency: From Specialized Co-Design to Automated Generation</title>
<link href="https://hdl.handle.net/1721.1/164042" rel="alternate"/>
<author>
<name>Lin, Yujun</name>
</author>
<id>https://hdl.handle.net/1721.1/164042</id>
<updated>2025-11-26T03:03:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Advancing Deep Learning Efficiency: From Specialized Co-Design to Automated Generation
Lin, Yujun
The explosive growth of artificial intelligence (AI) technologies, particularly large-scale deep learning models such as large language models and diffusion models, has intensified the demand for efficient full-stack inference solutions that effectively balance performance and costs. This work presents a comprehensive exploration of algorithm-system co-optimization, hardware design specialization, and automation for scalable AI deployment. First, we begin with algorithmic optimization for large-scale models, including large language models and diffusion models, developing inference libraries that leverage quantization to boost the performance of generative AIs on existing GPU platforms. Next, we design specialized hardware accelerators for domain-specific applications, specifically point cloud understanding, emphasizing efficiency improvements through the exploitation of data sparsity. Finally, we open up the hardware design space beyond template-based sizing, and progress into the automated learning-based co-design of neural network and hardware architectures, maximizing their synergy with a full-stack joint optimization. We then introduce an automated framework for spatial accelerator generation, transforming high-level mappings into custom hardware designs that support scalable deployment. Together, these contributions advance AI inference efficiency by bridging the gap between advanced computational requirements and hardware capabilities, between theoretical potential and practical solutions, and between design cost and effectiveness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Theoretic Foundations for Understanding Quantum Systems</title>
<link href="https://hdl.handle.net/1721.1/164041" rel="alternate"/>
<author>
<name>Liu, Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/164041</id>
<updated>2025-11-26T03:03:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Learning Theoretic Foundations for Understanding Quantum Systems
Liu, Allen
Understanding and harnessing the power of quantum systems has the potential to transform many domains in science and technology. However, before we can achieve these aspirations, we must first build a better understanding of how quantum systems fundamentally behave. In this thesis, we approach this question through the lens of learning theory to develop new paradigms for learning about quantum systems and understanding their structural properties. We deliver several surprising results, upending previous beliefs about even fundamental laws and giving provably efficient algorithms for learning about quantum systems in settings previously conjectured to be intractable. Typically in quantum many-body systems, the particles in the system interact locally with respect to some geometry as described by a local Hamiltonian. Two key questions are first, understanding equilibrium properties of a system with a given Hamiltonian and second, recovering the Hamiltonian from measurements of the properties of the system. For the first, we prove a universal law that there is a sudden death of entanglement, at a critical temperature depending only on the geometry but not on the system size. For the second, we give the first efficient algorithm for recovering the Hamiltonian at any temperature, breaking a conjectured barrier at low temperatures. Beyond systems with local interactions, we also consider learning and testing properties of general quantum states, focusing on the interplay between statistical complexity and near-term quantum device constraints, only allowing for entangled measurements over a limited number of copies of the state. We characterize the optimal rates for learning and testing with single-copy measurements and for multi-copy measurements in many relevant near-term regimes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain Wall Based Magnonics in Iron Garnet</title>
<link href="https://hdl.handle.net/1721.1/164040" rel="alternate"/>
<author>
<name>Gross, Miela J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164040</id>
<updated>2025-11-26T03:03:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Domain Wall Based Magnonics in Iron Garnet
Gross, Miela J.
Magnonic devices leverage magnons, quantized spin waves, as the mechanism to process and transfer information. In materials with low Gilbert damping, these spin wave-based systems enable ultra-fast operation while eliminating thermal heating and leakage currents inherent to conventional electron-based microelectronics. To maximize energy efficiency and processing speed, materials like iron garnets, ferrimagnetic insulators with tunable magnetic properties, are essential. Key magnetic parameters, including saturation magnetization, perpendicular magnetic anisotropy, coercivity, and Gilbert damping, can be tailored through elemental substitution or strain engineering in thin films. Furthermore, relativistic domain wall velocities reported in yttrium iron garnet (YIG), bismuth substituted YIG, and thulium iron garnet lay the foundations for high-speed operation. These unique attributes position garnets as ideal materials for the development of magnonic devices that integrate efficiency, speed, and versatility. This thesis presents my research on integrating thin film garnets into domain wall based magnonic devices. It begins by exploring the magnetic characterization of thin film iron garnets, including the growth process, temperature dependent magnetic behavior, and tunable magnetic anisotropy. Next, we report on magnonics within the garnet, focusing on the interactions between spin waves and domain walls. Finally, we demonstrate a write mechanism for a magnonic device driven by spin wave-induced domain wall motion, providing detailed characterization of the device behavior and performance. These results underscore the potential of iron garnets for magnonic-based device applications and offer insights into the efficiency of the write mechanism, paving the way for energy-efficient high-speed spintronic technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Inhomogeneity in High-Field MRI Excitations: Arbitrary Waveform Optimization and Multiphoton Parallel Transmission (MP-pTx)</title>
<link href="https://hdl.handle.net/1721.1/164039" rel="alternate"/>
<author>
<name>Drago, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164039</id>
<updated>2025-11-26T03:03:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mitigating Inhomogeneity in High-Field MRI Excitations: Arbitrary Waveform Optimization and Multiphoton Parallel Transmission (MP-pTx)
Drago, John M.
High-field magnetic resonance imaging (MRI) using a standard volume coil results in a spatially varying flip angle across the body, which renders images difficult to clinically interpret. This arises from the complex interactions of electromagnetic fields from current-carrying elements surrounding the imaging region. Parallel transmission (pTx) mitigates this issue by employing multiple high-power, independently controlled transmit elements for more precise excitation control. However, since the wavelength of the applied radio waves is shortened in tissue, the effect becomes highly dependent on the patient’s anatomy. As a result, optimization must be performed on a patient-by-patient basis, and methods that attempt full control of these independent waveforms are too computationally intensive to execute during the limited examination time. Additionally, the high-field excitations create complex electric field distributions that require control and careful monitoring to avoid excessive tissue power deposition (and ultimately heating), quantified as the specific absorption rate (SAR). To address these challenges, we introduce a method for optimizing patient-specific pulses using a global waveform (Ritz) approach, enabling rapid, in-scanner optimization. While pTx effectively addresses flip angle inhomogeneity, it remains costly and introduces challenges in SAR management. We address the SAR management and cost problems of pTx by introducing and characterizing the MP-pTx method, which leverages the multiphoton phenomenon to improve homogeneity using a standard volume coil supplemented with low-frequency (kilohertz) parallel channels. MP-pTx reduces costs and simplifies SAR management by shifting the parallel irradiation to low-cost, low-SAR shim array channels. These channels supplement an off-resonant excitation from a conventional birdcage coil with an oscillating, z-directed field that satisfies the resonance condition for spin state transitions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Latent Motion Planning and Reinforcement Learning for Legged Locomotion</title>
<link href="https://hdl.handle.net/1721.1/164038" rel="alternate"/>
<author>
<name>Miller, Adam Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/164038</id>
<updated>2025-11-26T03:03:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative Latent Motion Planning and Reinforcement Learning for Legged Locomotion
Miller, Adam Joseph
In recent years, reinforcement learning has demonstrated its promise as a powerful tool for developing innovative and advanced control systems for legged robots. The method’s robustness, versatility, and generality have made it a prime candidate for future robotic systems deployed in the real world. Through the development of more advanced machine learning algorithms and more reliable and efficient physics simulators, reinforcement learning continues to improve and enable new, dynamic, and agile capabilities. While the results are often impressive and the tools relatively beginner-friendly, there remain impediments to scalable and reliable progress. Poor reward function scaling, challenges balancing exploration versus exploitation, and misalignment from the engineer’s intent are roadblocks to better performance. To get beyond these limitations, new tools and frameworks are necessary. In this work, I present novel methods to address these challenges and extend the capabilities of reinforcement learning on robot hardware. Through the quantification of the distributional sim-to-real gap, simulation model optimization for hardware matching, latent space motion sequence planning, and latent style training, I demonstrate never-before-seen performance on legged hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from Weak Supervision: Theory, Methods, and Applications</title>
<link href="https://hdl.handle.net/1721.1/164037" rel="alternate"/>
<author>
<name>Lang, Hunter</name>
</author>
<id>https://hdl.handle.net/1721.1/164037</id>
<updated>2025-11-26T03:03:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Learning from Weak Supervision: Theory, Methods, and Applications
Lang, Hunter
The growing demand for high-quality labeled data to train machine learning models has driven widespread adoption of weak supervision and synthetic data methods, which use automated models instead of humans for annotation. Large language models (LLMs) have further accelerated this trend because their zero- and few-shot classification performance enables them to serve as effective “synthetic annotators” for various tasks. In practice, the data generated by these weak annotators is imperfect, but it enables the training of strong models. However, theoretical understanding of why training one model on the outputs of another leads to strong performance remains limited, especially when the annotator model exhibits suboptimal performance on the target task. In this thesis, I develop a theoretical framework for learning from weak supervision that captures the key aspects of the problem better than existing approaches in the crowdsourcing and learning-with-noisy-label literature. This framework establishes structural conditions that explain when and why weak supervision can reliably train strong models. Building on these theoretical results, the second part of the thesis introduces methods to improve how models learn from weak supervision and applies these methods to low-labeled-data settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-fidelity Optimal Trajectory Generation: Optimal Experiment Design for Robot Learning</title>
<link href="https://hdl.handle.net/1721.1/164036" rel="alternate"/>
<author>
<name>Ryou, Gilhyun</name>
</author>
<id>https://hdl.handle.net/1721.1/164036</id>
<updated>2025-11-26T03:03:10Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Multi-fidelity Optimal Trajectory Generation: Optimal Experiment Design for Robot Learning
Ryou, Gilhyun
Data-driven methods have significantly advanced robot learning, yet their direct application to real-world robots remains challenging, particularly under extreme conditions. This challenge is especially pronounced for highly maneuverable vehicles like quadrotor aircraft, which often operate in scenarios requiring rapid maneuvering, such as racing, defense systems, or safety-critical obstacle avoidance. In such extreme conditions, real-world constraints like control delays, state estimation errors, and battery voltage fluctuations often compromise trajectory reliability, even when conforming to ideal dynamics. However, typical data-driven methods are developed in simulated environments. Consequently, the transition to real-world dynamics requires extensive fine-tuning, which can be risky, as perfect training in simulations does not guarantee safe transitions to real-world dynamics. This thesis employs methods from optimal experiment design to address these challenges. By quantifying uncertainty and maximizing information gain, the approach aims to safely and efficiently design the real-world experiments required for accurate constraint modeling. In the first chapter, we present a multi-fidelity Bayesian optimization method that searches for time-optimal speed profiles for quadrotor aircraft, effectively balancing numerical simulations with real-world flight experiments. The second chapter extends the optimal experiment design method to a high-dimensional online planning problem through integration with reinforcement learning. The proposed algorithms, trained and validated through real-world flight experiments, significantly outperform baseline methods in trajectory time and computational efficiency. Additionally, these algorithms have been adapted to various planning problems, including fixed-wing aircraft planning, cooperative multi-drone systems, and energy-efficient trajectory generation.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Systems for Large-Scale Graph Representation Learning</title>
<link href="https://hdl.handle.net/1721.1/164035" rel="alternate"/>
<author>
<name>Huang, Tianhao</name>
</author>
<id>https://hdl.handle.net/1721.1/164035</id>
<updated>2025-11-26T03:03:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Systems for Large-Scale Graph Representation Learning
Huang, Tianhao
Graph representation learning has gained significant traction in critical domains including finance, social networks, and transportation systems due to its successful application to graph-structured data. Graph neural networks (GNNs), which integrate the power of deep learning with graph structures, have emerged as the leading methods in this field, delivering superior performance across diverse graph-related tasks. However, training graph neural networks on large-scale datasets encounters scalability challenges on current system architectures. First, the sparse, non-localized structures of real-world graphs lead to inefficiencies in data sampling and movement. This characteristic heavily stresses system input/output (I/O), particularly burdening the peripheral buses during the sampling phase of GNN training. Second, the suboptimal mapping of the training procedure to GPU kernels leads to compute inefficiencies, including substantial kernel orchestration overhead and redundant computations. Addressing these challenges requires a comprehensive, full-stack optimization approach that fully leverages hardware capabilities. This thesis presents two complementary works to achieve the goal. The first work, Hanoi, unblocks the data loading bottleneck in out-of-core GNN training by co-designing the sampling algorithms to align with the hierarchical memory organization of commodity hardware. Hanoi drastically reduces I/O traffic to external storage, delivering up to 4.2× speedup over strong baselines with negligible impacts on the model quality. Notably, Hanoi is able to obtain competitive performance close to in-memory training with only a fraction of memory requirements. Building on this foundation, the second work, Joestar, introduces a unified framework for optimized GNN training on GPUs. Joestar adapts the multistage sampling approach from Hanoi to in-memory training, which frees CPUs from heavy data loading workloads.
Joestar also identifies novel kernel fusion opportunities and formulates better execution schedules by jointly considering the sampling and compute stages. Combined with compiler infrastructure in PyTorch, Joestar achieves state-of-the-art GNN training throughputs for billion-edge graph datasets on a single GPU.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalizable Long-Horizon Robotic Manipulation under&#13;
Uncertainty and Partial Observability</title>
<link href="https://hdl.handle.net/1721.1/164034" rel="alternate"/>
<author>
<name>Curtis, Aidan</name>
</author>
<id>https://hdl.handle.net/1721.1/164034</id>
<updated>2025-11-26T03:03:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generalizable Long-Horizon Robotic Manipulation under&#13;
Uncertainty and Partial Observability
Curtis, Aidan
A central goal in embodied artificial intelligence is to enable autonomous agents to accomplish complex, long-horizon tasks in novel, partially observable environments. In these scenarios, agents must effectively reason about uncertainty, generalize from limited experiences, and proactively plan actions to acquire missing information. This thesis tackles these core challenges by developing and evaluating novel methods specifically designed for partially observable contexts. The first part of this thesis introduces an enhanced heuristic-guided planning technique that increases search efficiency in sparse-reward domains with significant uncertainty. Next, we investigate how symbolic reasoning can be integrated into the decision-making framework, accelerating search through the use of temporal and belief-space abstractions. Next, we propose a method for sequencing low-level reinforcement learning skills alongside information gathering actions, enabling increased task complexity and robustness in real-world tasks. Lastly, we show how large language models may be leveraged for few-shot model learning, allowing agents to rapidly adapt and generalize to new scenarios. The methods presented in this thesis advance the state-of-the-art in embodied AI by enabling robots to better handle uncertainty and incomplete information, ultimately paving the way for more capable, exploratory, and risk-aware autonomous systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems</title>
<link href="https://hdl.handle.net/1721.1/164033" rel="alternate"/>
<author>
<name>Zhang, Ziyu</name>
</author>
<id>https://hdl.handle.net/1721.1/164033</id>
<updated>2025-11-26T03:06:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems
Zhang, Ziyu
The recent advancement of large language models (LLMs) and large multimodal models (LMMs) greatly enhances the capabilities of AI systems such as recommendation systems and coding assistants, making them more practical for real-world deployment. However, these models cannot directly interact with large volumes of data in a knowledge corpus during inference/task time due to inherent architectural limits and cost concerns. Encoding data into vector embeddings and leveraging approximate nearest neighbor search (ANNS) have thus become an important data processing primitive in AI systems following the introduction of retrieval-augmented generation (RAG). However, the complexity of tasks these AI systems aim to solve introduces challenges for existing ANNS algorithms. I developed methods to expand existing ANNS algorithms to address two such challenges: freshness and heterogeneity in the data.&#13;
&#13;
Graph-based ANNS algorithms have been proven to have a superb cost versus approximation quality trade-off yet follow a simple intuition of best-first search. I focus on adapting graph-based ANNS algorithms to two settings featuring emerging challenges. (1) Data is updated constantly. Existing algorithms are inefficient under deletions and not robust against different orderings in the workload. I propose methods addressing these problems and develop an algorithm supporting updates effectively and efficiently based on Vamana, a state-of-the-art graph-based ANNS algorithm. (2) Data is heterogeneous in format, modality, and how it relates to a query, making the similarity difficult to capture by the canonical ANNS definition. I explore ways to model the similarity between heterogeneous sources and to use graph-based ANNS approaches to perform semantic search in this setting. I test this approach under an end-to-end multimodal question-answering system developed in-house.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Abstractions for Robust Hierarchical Manipulation Planning</title>
<link href="https://hdl.handle.net/1721.1/164032" rel="alternate"/>
<author>
<name>Noseworthy, Michael S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164032</id>
<updated>2025-11-26T03:03:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Adaptive Abstractions for Robust Hierarchical Manipulation Planning
Noseworthy, Michael S.
In this thesis, we address the problem of long-horizon robotic manipulation under partial observability. Tasks such as gearbox assembly or tidying a workstation involve many objects and necessitate a variety of manipulation capabilities. These long-horizon tasks are commonly addressed by hierarchical approaches, which introduce state and action abstractions to make planning tractable. However, our abstractions often rely on imperfect models of the world, which can lead to brittle execution. Furthermore, these abstractions depend on having accurate state information, which is often only noisily sensed, if sensed at all. For example, in the assembly domain, the pose of each part may only be known within a few millimeters, and a box’s mass distribution may be completely unsensed. To deploy robots outside of structured environments like the factory, they will need to be robust to model misspecification and partial observability. The central idea of this thesis is that we can develop adaptive abstractions to improve the robustness of hierarchical planning once the robot is deployed. Adaptive abstractions incorporate observations from the real world that are informative about misspecifications and partial observability, essentially allowing the planner to adapt to its deployment environment. We explore this idea by developing three types of models that enable this adaptivity at different levels of the abstraction hierarchy: plan feasibility models, adaptive samplers, and reactive control policies. In our first contribution, we consider adding adaptivity to a task and motion planning system at the task-planning level. We focus on a setup where the robot has access to a set of parameterized skills, but these skills are derived from imperfect models. To enable robust planning, we propose to autonomously learn skill feasibility models once the robot is deployed through a curious exploration phase. 
Critically, we propose a novel active learning framework to enable efficient learning without human intervention. We show that the resulting feasibility model leads to robust task performance on multiple downstream tasks in a stacking domain. Our second contribution looks at developing adaptive samplers that can incorporate information about object state that is typically unobserved (e.g., inertial and frictional properties). General-purpose belief representations can handle this partial observability, but online inference is computationally expensive. Instead, we propose to use an offline phase to learn an inference network that directly predicts a distribution over object properties that is consistent with the interaction history. We show that inference networks enable efficient adaptation in a grasping domain with heavy objects. Our final contribution focuses on learning adaptive controllers such that robustness is handled at the lowest level of the abstraction. We consider precise contact-rich manipulation tasks that are sensitive to pose estimation errors. To overcome noisy poses at the control level, explorative contact is necessary, but unintended forces can lead to catastrophic outcomes such as part slippage or damage. We propose to use simulation in an offline phase to train reactive force-aware policies. The policies are trained to overcome pose uncertainty while using force-sensing to adaptively limit excessive forces. The result is robust real-world performance on the multistage assembly of a planetary gearbox system, which includes insertion, gear-meshing, and nut-threading tasks. In summary, adaptive abstractions can be used to increase the robustness of hierarchical manipulation planning, an important step in deploying robots outside of the lab or factory. Throughout the thesis, we validate the proposed approaches on the real robot in stacking and assembly domains.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Generalization Under Distribution Shift</title>
<link href="https://hdl.handle.net/1721.1/164031" rel="alternate"/>
<author>
<name>Netanyahu, Aviv</name>
</author>
<id>https://hdl.handle.net/1721.1/164031</id>
<updated>2025-11-26T03:03:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Methods for Generalization Under Distribution Shift
Netanyahu, Aviv
Machine learning systems have achieved remarkable performance in tasks where test data closely resembles the training distribution. However, real-world applications often require systems capable of handling more challenging situations -- specifically, adapting to new tasks and extrapolating to data points outside the distribution of the training set. The current paradigm for handling distribution shifts is collecting and training models on large datasets. This work offers two more principled frameworks that enable machine learning models to generalize effectively to out-of-distribution scenarios without sacrificing the power of modern overparameterized models.&#13;
&#13;
The first framework converts an out-of-support zero-shot generalization problem into an out-of-combination problem via a transductive reparameterization, which is possible under low-rank style conditions. We explore how this idea can be applied to domains like robotics, where the environment is changing, and materials and molecular design, where predicting properties of materials or molecules outside of known ranges is crucial to driving more efficient materials discovery.&#13;
&#13;
The second framework focuses on few-shot task learning, which involves agents learning new tasks from minimal data and applying them to new environments. We formulate the problem of few-shot task learning as Few-Shot Task Learning through Inverse Generative Modeling, which allows us to leverage the power of neural generative models pretrained on a set of base tasks. We adapt a method for efficient concept learning to few-shot task learning based on our formulation and rapidly learn new tasks with only a few examples, enabling task execution from autonomous driving to real-world robotic manipulation tasks in novel settings without the need for extensive retraining.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling 3D Scene Perception via Probabilistic Programming</title>
<link href="https://hdl.handle.net/1721.1/164030" rel="alternate"/>
<author>
<name>Gothoskar, Nishad</name>
</author>
<id>https://hdl.handle.net/1721.1/164030</id>
<updated>2025-11-26T03:02:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling 3D Scene Perception via Probabilistic Programming
Gothoskar, Nishad
Understanding and interpreting the 3D structure of the world is a central challenge in artificial intelligence. Our physical world is 3D, yet our AI systems often “see” that world through pixels and images. In order to build truly intelligent AI systems, we must go beyond pixels and images and build 3D vision systems that can build meaningful and useful 3D representations of the world. This is the problem of 3D scene perception. How do we transform raw visual input into 3D representations of the world? 3D scene perception has numerous applications from robotics to augmented reality. Despite the advances over the last decade, 3D perception remains a major bottleneck in real-world robotics applications. The challenge stems from the immense variability in real-world conditions, e.g. lighting, color, viewpoint, camera properties, object appearance, the incompleteness of visual data due to limited resolution, noise, and occlusions, and the approximations in our models of visual data. Developing more robust and generalizable 3D perception systems would be an important step towards more general-purpose robotics. In this thesis, we explore a probabilistic architecture for 3D perception based on structured generative models and probabilistic programs. We begin with 3DP3, the first iteration of our approach, which infers 3D scene graphs from real-world depth image data. 3DP3 demonstrates that our method works on real-world benchmarks and corrects commonsense errors from deep learning systems. Building on this foundation, we develop Bayes3D, which scales up these ideas using a GPU-accelerated image likelihood and generative model alongside a parallel coarse-to-fine inference algorithm. Next, we explore two approaches for incorporating RGB image data into generative 3D graphics programs, expanding their applicability.
We then introduce DurableVS, which extends inverse-graphics techniques to model scenes involving a robot and multiple cameras, enabling precise control of a robot. Finally, we present Gen3D, which integrates all the key ideas from this thesis into a real-time 3D perception system that uses multi-resolution probabilistic models of 3D matter to enable real-time tracking that is competitive with vision transformers and 3D Gaussian splatting, state-of-the-art methods in computer vision and computer graphics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Generative Models for Visual Synthesis</title>
<link href="https://hdl.handle.net/1721.1/164029" rel="alternate"/>
<author>
<name>Yin, Tianwei</name>
</author>
<id>https://hdl.handle.net/1721.1/164029</id>
<updated>2025-11-26T03:03:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Generative Models for Visual Synthesis
Yin, Tianwei
While current visual generative models produce high-quality outputs, they suffer from significant computational costs and latency, limiting their applicability in interactive settings. In this dissertation, we introduce a suite of techniques designed to enhance the efficiency of generative models for image and video synthesis. First, we propose distribution matching distillation, a method that enables the training of one- or few-step visual generators by distilling knowledge from computationally expensive yet highly capable diffusion models. Next, we develop improved distillation techniques that enhance robustness and scalability, culminating in a production-grade few-step image generator. This system is now deployed in widely used software, generating hundreds of millions of images annually. Finally, we extend our approach to video generation by adopting an autoregressive paradigm, significantly reducing latency and enabling fast interactive video generation and world simulation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of pGaN-gate power HEMTs</title>
<link href="https://hdl.handle.net/1721.1/164028" rel="alternate"/>
<author>
<name>Yu, Yue</name>
</author>
<id>https://hdl.handle.net/1721.1/164028</id>
<updated>2025-11-26T03:06:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterization of pGaN-gate power HEMTs
Yu, Yue
This thesis presents a comprehensive study of p-GaN gate GaN High Electron Mobility Transistors (HEMTs) with a focus on understanding how fabrication process variations and gate structural designs impact key electrical performance metrics. Five industry-fabricated wafers, each processed with distinct etch depths, contact strategies, and p-GaN surface configurations, were characterized using a combination of DC and pulsed I–V measurements. Full-transistor modules were evaluated alongside specialized test structures to enable both system-level and localized analysis. DC measurements using the Keysight B1505A system revealed that more aggressive gate contact schemes improved ON-resistance and transconductance, but often at the cost of increased gate leakage and reduced threshold control. Pulsed I–V characterization with the Auriga AU4750 system uncovered dynamic Ron degradation behavior and charge trapping effects, especially under high drain bias conditions. Extracted time constants demonstrated process-dependent trends, with wafers retaining more of the p-GaN surface exhibiting slower charge detrapping and more severe transient effects. Specialized test structures provided additional insights into gate lateral conduction, sheet resistance, and contact asymmetry, reinforcing the connection between device layout, processing, and observed variability. These findings highlight critical trade-offs in the design and fabrication of p-GaN gate GaN HEMTs and offer design-aware strategies for optimizing performance and reliability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contactless Sleep and Physiological Monitoring Using Artificial Intelligence and Radio Waves</title>
<link href="https://hdl.handle.net/1721.1/164027" rel="alternate"/>
<author>
<name>He, Hao</name>
</author>
<id>https://hdl.handle.net/1721.1/164027</id>
<updated>2025-11-26T03:03:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Contactless Sleep and Physiological Monitoring Using Artificial Intelligence and Radio Waves
He, Hao
Remote monitoring of sleep and physiological signals is critical for tracking human health, managing diseases, and enabling early intervention. However, existing monitoring solutions face two major limitations: (1) they are often unsuitable for vulnerable populations—such as infants and seniors—and (2) most of them raise concerns about measurement accuracy. We propose a novel, contactless approach that addresses both challenges by combining advances in artificial intelligence (AI) and radio-frequency (RF) sensing. Our solution makes monitoring more comfortable, accessible, and affordable, while still delivering clinically meaningful insights. This thesis makes four fundamental contributions: First, we introduce a system that can extract high-fidelity breathing signals from ambient RF reflections, even in complex scenarios where multiple individuals are present, such as couples sharing a bed. Second, we develop an AI-based sleep monitoring framework that generates sleep hypnograms and detects respiratory events entirely without the need for on-body sensors. Third, we develop AI models that infer critical biomarkers—such as blood oxygen saturation (SpO₂) and inflammation (C-reactive protein levels)—in a fully passive and non-intrusive manner. Finally, inspired by the success of large language models, we show that physiological signals can be represented and interpreted analogously to language. This insight enables effective translation between modalities (e.g., from respiration to EEG) and unlocks robust representation learning for downstream clinical tasks. Together, these contributions establish a new paradigm for remote sleep and physiological monitoring—one that is contactless, continuous, and passive. We validate our system on real-world datasets and demonstrate its potential to fundamentally transform clinical care and home health monitoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The legacy of Rudolf Nieuwenhuys in perspective</title>
<link href="https://hdl.handle.net/1721.1/164026" rel="alternate"/>
<author>
<name>Pignatelli, Michele</name>
</author>
<author>
<name>Rockland, Kathleen S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164026</id>
<updated>2025-11-26T03:10:59Z</updated>
<published>2025-11-24T00:00:00Z</published>
<summary type="text">The legacy of Rudolf Nieuwenhuys in perspective
Pignatelli, Michele; Rockland, Kathleen S.
Professor Nieuwenhuys is among the great neuroanatomists and a historical figure of the later 20th and early 21st centuries. His legacy is manifold. There is the tangible legacy of the multiple scientific volumes, at once physical and conceptual entities. There is the generational legacy of handed-on scientific and intellectual traditions, and there is the legacy of specific scientific directions. In this brief Commentary, we highlight just two examples of his scientific contributions.
</summary>
<dc:date>2025-11-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some Advice on Sustainability: ‘I Would Never Get into a Business I Did Not Really Understand’</title>
<link href="https://hdl.handle.net/1721.1/164025" rel="alternate"/>
<author>
<name>Wright, Randall S.</name>
</author>
<id>https://hdl.handle.net/1721.1/164025</id>
<updated>2025-11-26T03:10:53Z</updated>
<published>2022-11-01T00:00:00Z</published>
<summary type="text">Some Advice on Sustainability: ‘I Would Never Get into a Business I Did Not Really Understand’
Wright, Randall S.
My father, Chester S. Wright, was a business executive. He was president of two manufacturing companies and a member of the board of directors of four others.&#13;
&#13;
As a young boy, I remember him coming home from work in a big, black Chrysler Imperial—a “company car”—fitted out with shining chromium bumpers and gleaming radiator grill. After a wonderful home-cooked dinner my mother always made for my father, my two sisters, and me, he and I would head out in the Imperial to Gray’s Drug Store so he could buy House of Windsor cigars, and we could pick up the latest copies of Popular Mechanics, Popular Science, and Mechanix Illustrated.
</summary>
<dc:date>2022-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The 5-methylcytosine DNA glycosylase ROS1 prevents paternal genome hypermethylation in Arabidopsis endosperm</title>
<link href="https://hdl.handle.net/1721.1/164024" rel="alternate"/>
<author>
<name>Hemenway, Elizabeth A.</name>
</author>
<author>
<name>Gehring, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/164024</id>
<updated>2025-11-26T03:10:22Z</updated>
<published>2025-09-18T00:00:00Z</published>
<summary type="text">The 5-methylcytosine DNA glycosylase ROS1 prevents paternal genome hypermethylation in Arabidopsis endosperm
Hemenway, Elizabeth A.; Gehring, Mary
Background: DNA methylation patterning is a consequence of opposing activities of DNA methyltransferases and DNA demethylases. In many plant and animal species, reproduction is a period of significant epigenome lability. In flowering plants, two distinct female gametes, the egg cell and the central cell, are fertilized to produce the embryo and the endosperm of the seed. The endosperm is an unusual tissue, exemplified by triploidy and reduced DNA methylation. In Arabidopsis thaliana, a 5-methylcytosine DNA glycosylase, DME, demethylates regions of the central cell genome, leading to methylation differences between maternally- and paternally-inherited endosperm genomes after fertilization. Expression of DME in the central cell is required for gene imprinting, or parent-of-origin-specific gene expression, in endosperm. DME is part of a four-member gene family in Arabidopsis that includes ROS1, DML2, and DML3. It is unknown whether any of the other DNA glycosylases are required for endosperm methylation patterning. Results: Using whole-genome methylation profiling, we identify ROS1 target regions in the endosperm. We show that ROS1 prevents hypermethylation of paternally-inherited alleles in the endosperm at regions that lack maternal or paternal allele methylation in wild-type endosperm. Additionally, we demonstrate that at many ROS1 target regions the maternal alleles are demethylated by DME. Conclusions: ROS1 promotes epigenetic symmetry between parental genomes in the endosperm by preventing CG methylation gain on the paternal genome. We conclude that ROS1 and DME act in a parent-of-origin-specific manner at shared endosperm targets, and consider possible implications for the evolution of imprinting mechanisms.
</summary>
<dc:date>2025-09-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Players chatter and dice clatter: exploring sonic power relations in posthuman game-based learning ecologies</title>
<link href="https://hdl.handle.net/1721.1/164023" rel="alternate"/>
<author>
<name>Woods, Peter J</name>
</author>
<author>
<name>Jones, Karis</name>
</author>
<id>https://hdl.handle.net/1721.1/164023</id>
<updated>2025-11-26T03:10:57Z</updated>
<published>2022-10-28T00:00:00Z</published>
<summary type="text">Players chatter and dice clatter: exploring sonic power relations in posthuman game-based learning ecologies
Woods, Peter J; Jones, Karis
Responding to both recent interest in sound within qualitative education research and sound studies literature that conceptualizes sound as a posthuman technology, we use this paper to explore the following research questions: How does sound both enact and unveil posthuman learning ecologies? And how can education scholars engage sound within posthuman research? Through a posthuman framework, we position noise as an analytical tool for exploring and unveiling more-than-human relations. We then draw parallels between posthuman qualitative research into sound (via noise) and the ideological foundation of experimental music, a musical tradition deeply invested in working with sound as an agentic actor. Within this alignment, we propose using graphic scores to transcribe sonic data without reinscribing humanist research aims. To illustrate, we provide a micro-analysis of preservice teachers engaged in a role-playing game activity and uncover the ways sound asserts its agency within learning ecologies.
</summary>
<dc:date>2022-10-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Palonosetron, a 5-HT3 Receptor Antagonist, Induces G1 Cell Cycle Arrest and Autophagy in Gastric Cancer Cells</title>
<link href="https://hdl.handle.net/1721.1/164022" rel="alternate"/>
<author>
<name>Yoo, Young Chul</name>
</author>
<author>
<name>Lin, Lin</name>
</author>
<author>
<name>Lee, Sihak</name>
</author>
<author>
<name>Shin, Yeeun Rachel</name>
</author>
<author>
<name>Oh, Ju Eun</name>
</author>
<author>
<name>Kim, Na Young</name>
</author>
<id>https://hdl.handle.net/1721.1/164022</id>
<updated>2025-11-26T03:10:27Z</updated>
<published>2025-10-15T00:00:00Z</published>
<summary type="text">Palonosetron, a 5-HT3 Receptor Antagonist, Induces G1 Cell Cycle Arrest and Autophagy in Gastric Cancer Cells
Yoo, Young Chul; Lin, Lin; Lee, Sihak; Shin, Yeeun Rachel; Oh, Ju Eun; Kim, Na Young
Serotonin, or 5-hydroxytryptamine (5-HT), has been implicated in promoting cancer cell growth by acting on 5-HT receptors, such as the 5-HT1 and 5-HT2 receptors. However, the role of 5-HT3 receptor antagonists in gastric cancer cell lines remains unclear. This study aimed to evaluate the effect of 5-HT3 receptor antagonists (ondansetron, palonosetron, and ramosetron) on cancer cell growth using the AGS and MKN-1 cell lines, as well as a xenograft mouse model. All three antagonists inhibited cell proliferation, migration, and colony formation in AGS cells. Specifically, palonosetron induced G1 cell cycle arrest, autophagy, and phosphorylation of GSK3β, along with increased expression of p27, p53, and LC3B. In vivo studies demonstrated that palonosetron reduced tumor growth and modulated the pro-inflammatory cytokines tumor necrosis factor alpha, interleukin 6, and interleukin 1β. These findings suggest that 5-HT3 receptor antagonists, especially palonosetron, exert anti-tumor effects in gastric cancer through G1 cell cycle regulation and immunomodulation. The results position palonosetron as a promising lead for further preclinical development in gastric cancer.
</summary>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>50 years of nanomechanics: Scale-bridging mechanistic insights through the looking glass</title>
<link href="https://hdl.handle.net/1721.1/164021" rel="alternate"/>
<author>
<name>Han, Seung M.</name>
</author>
<author>
<name>Gianola, Daniel S.</name>
</author>
<author>
<name>Portela, Carlos M.</name>
</author>
<author>
<name>Sebastiani, Marco</name>
</author>
<author>
<name>Kirchlechner, Christoph</name>
</author>
<id>https://hdl.handle.net/1721.1/164021</id>
<updated>2025-11-26T03:10:47Z</updated>
<published>2025-11-17T00:00:00Z</published>
<summary type="text">50 years of nanomechanics: Scale-bridging mechanistic insights through the looking glass
Han, Seung M.; Gianola, Daniel S.; Portela, Carlos M.; Sebastiani, Marco; Kirchlechner, Christoph
Historical and recent advances in the field of nanomechanics, ranging from the early development of nanoindentation to recent advances in artificial intelligence- and machine learning-based characterization and modeling are covered in this article. Early advances were motivated by thin-film mechanics challenges driven by the microelectronics industry. In the ensuing years, different methodologies for probing mechanical properties at length scales relevant to a myriad of applications and materials systems have been developed, coupled with a variety of in situ testing methods that shed insights into new mechanisms. Built upon the knowledge base from nanomechanics, new mechanical metamaterials with otherwise unachievable material properties have been discovered, and new methods in testing and analyzing properties for extreme conditions have been recently reported. This article discusses the journey that the nanomechanics community has gone through over the past 50 years and shares the scale-bridging mechanistic insights through the looking glass.
</summary>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable Synthesis of CoFe2O4/Fe2O3 Catalyst for Hydrogen Generation from Sodium Borohydride Hydrolysis</title>
<link href="https://hdl.handle.net/1721.1/164020" rel="alternate"/>
<author>
<name>Teixeira, Lucas Tonetti</name>
</author>
<author>
<name>Medeiros, Marcos</name>
</author>
<author>
<name>Liu, Liying</name>
</author>
<author>
<name>Park, Vinicius Novaes</name>
</author>
<author>
<name>Valente-Rodriguez, Célio</name>
</author>
<author>
<name>Letichevsky, Sonia</name>
</author>
<author>
<name>Fajardo, Humberto Vieira</name>
</author>
<author>
<name>de Siqueira, Rogério Navarro Correia</name>
</author>
<author>
<name>Maia da Costa, Marcelo Eduardo Huguenin</name>
</author>
<author>
<name>Botelho Junior, Amilton Barbosa</name>
</author>
<id>https://hdl.handle.net/1721.1/164020</id>
<updated>2025-11-26T03:10:55Z</updated>
<published>2025-10-01T00:00:00Z</published>
<summary type="text">Sustainable Synthesis of CoFe2O4/Fe2O3 Catalyst for Hydrogen Generation from Sodium Borohydride Hydrolysis
Teixeira, Lucas Tonetti; Medeiros, Marcos; Liu, Liying; Park, Vinicius Novaes; Valente-Rodriguez, Célio; Letichevsky, Sonia; Fajardo, Humberto Vieira; de Siqueira, Rogério Navarro Correia; Maia da Costa, Marcelo Eduardo Huguenin; Botelho Junior, Amilton Barbosa
Hydrogen has been explored as a greener energy alternative for reducing greenhouse gas emissions. Sodium borohydride (NaBH4) is a favorable hydrogen carrier due to its high hydrogen content, safe handling, and rapid hydrogen release. This work presents a novel synthesis of the catalyst CoFe2O4/Fe2O3 using nanocellulose fibers (TCNF) as reactive templates for metal adsorption and subsequent calcination. The resulting material was tested for H2 production from basic NaBH4 aqueous solutions (10–55 °C). The catalyst’s composition is 74.8 wt% CoFe2O4, 25 wt% Fe2O3, and 0.2 wt% Fe2(SO4)3, with agglomerated spheroidal particles (15–20 nm) and a homogeneous Fe and Co distribution. The catalyst produced 1785 mL of H2 in 15 min at 25 °C (50 mg catalyst, 4.0% NaBH4, and 2.5 wt% NaOH), close to the stoichiometric maximum (2086 mL). The maximum H2 generation rate (HGR) reached 3.55 L min−1 gcat−1 at 40 °C. Activation energies were determined using empirical (38.4 ± 5.3 kJ mol−1) and Langmuir–Hinshelwood (L–H) models (42.2 ± 5.8 kJ mol−1), consistent with values for other Co-ferrite catalysts. The kinetic data fitted better to the L–H model, suggesting that boron-complex adsorption precedes H2 evolution.
</summary>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the potential of microtubules for scalable quantum computation</title>
<link href="https://hdl.handle.net/1721.1/164019" rel="alternate"/>
<author>
<name>Mavromatos, Nick E.</name>
</author>
<author>
<name>Mershin, Andreas</name>
</author>
<author>
<name>Nanopoulos, Dimitri V.</name>
</author>
<id>https://hdl.handle.net/1721.1/164019</id>
<updated>2025-11-26T03:10:41Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">On the potential of microtubules for scalable quantum computation
Mavromatos, Nick E.; Mershin, Andreas; Nanopoulos, Dimitri V.
We examine the quantum coherence properties of tubulin heterodimers arranged into the protofilaments of cytoskeletal microtubules. In the physical model proposed by the authors, the microtubule interiors are treated as high-Q quantum electrodynamics (QED) cavities that can support decoherence-resistant entangled states under physiological conditions, with decoherence times of order O(10⁻⁶) s. We identify strong electric dipole interactions between tubulin dimers and ordered-water dipole quanta within the microtubule interior as the mechanism responsible for the extended coherence times. Classical nonlinear (pseudospin) σ-models describing solitonic excitations are reinterpreted as emergent quantum-coherent—or possibly pointer—states, arising from incomplete collapse of dipole-aligned quantum states. These solitons mediate dissipation-free energy transfer along microtubule filaments. We discuss logic-gate-like behaviour facilitated by microtubule-associated proteins, and outline how such structures may enable scalable, ambient-temperature quantum computation, with the fundamental unit of information storage realized as a quDit encoded in the tubulin dipole state. We further describe a process akin to “decision-making” that emerges following an external stimulus, whereby optimal, energy-loss-free signal and information transport pathways are selected across the microtubular network. Finally, we propose experimental approaches—including Rabi-splitting spectroscopy and entangled surface plasmon probes—to validate the use of biomatter as a substrate for scalable quantum computation.
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can Artificial Intelligence Improve the Appropriate Use and Decrease the Misuse of REBOA?</title>
<link href="https://hdl.handle.net/1721.1/164018" rel="alternate"/>
<author>
<name>Bokenkamp, Mary</name>
</author>
<author>
<name>Ma, Yu</name>
</author>
<author>
<name>Dorken-Gallastegi, Ander</name>
</author>
<author>
<name>Proaño-Zamudio, Jefferson A</name>
</author>
<author>
<name>Gebran, Anthony</name>
</author>
<author>
<name>Velmahos, George C</name>
</author>
<author>
<name>Bertsimas, Dimitris</name>
</author>
<author>
<name>Kaafarani, Haytham MA</name>
</author>
<id>https://hdl.handle.net/1721.1/164018</id>
<updated>2025-11-26T03:10:54Z</updated>
<published>2025-09-25T00:00:00Z</published>
<summary type="text">Can Artificial Intelligence Improve the Appropriate Use and Decrease the Misuse of REBOA?
Bokenkamp, Mary; Ma, Yu; Dorken-Gallastegi, Ander; Proaño-Zamudio, Jefferson A; Gebran, Anthony; Velmahos, George C; Bertsimas, Dimitris; Kaafarani, Haytham MA
Background: The use of resuscitative endovascular balloon occlusion of the aorta (REBOA) for control of noncompressible torso hemorrhage remains controversial. We aimed to utilize a novel and transparent/interpretable artificial intelligence (AI) method called Optimal Policy Trees (OPTs) to improve the appropriate use and decrease the misuse of REBOA in hemodynamically unstable blunt trauma patients. Methods: We trained and then validated OPTs that “prescribe” REBOA in a 50:50 split on all hemorrhagic shock blunt trauma patients in the 2010–2019 ACS-TQIP database based on rates of survival. Hemorrhagic shock was defined as a systolic blood pressure ≤90 on arrival or a transfusion requirement of ≥4 units of blood in the first 4 h of presentation. The expected 24 h mortality rate following OPT prescription was compared to the observed 24 h mortality rate in patients who were or were not treated with REBOA. Results: Out of 4.5 million patients, 100,615 were included, and 803 underwent REBOA. REBOA patients had higher rates of pelvic fracture, femur fracture, hemothorax, pneumothorax, and thoracic aorta injury (p &lt; 0.001). The 24 h mortality rate for the REBOA vs. non-REBOA group was 47% vs. 21%, respectively (p &lt; 0.001). OPTs resulted in an 18% reduction in 24 h mortality for REBOA and a 0.8% reduction in non-REBOA patients. The OPTs specifically decrease the misuse of REBOA by recommending against it in cases where it leads to worse outcomes. Conclusions: This proof-of-concept study shows that interpretable AI models can improve mortality in unstable blunt trauma patients by optimizing the use and decreasing the misuse of REBOA. To date, such models have been used to predict outcomes, but their groundbreaking use will be in prescribing interventions and changing outcomes.
</summary>
<dc:date>2025-09-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultra-High Resolution 9.4T Brain MRI Segmentation via a Newly Engineered Multi-Scale Residual Nested U-Net with Gated Attention</title>
<link href="https://hdl.handle.net/1721.1/164017" rel="alternate"/>
<author>
<name>Kalluvila, Aryan</name>
</author>
<author>
<name>Patel, Jay B.</name>
</author>
<author>
<name>Johnson, Jason M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164017</id>
<updated>2025-11-26T03:10:29Z</updated>
<published>2025-09-24T00:00:00Z</published>
<summary type="text">Ultra-High Resolution 9.4T Brain MRI Segmentation via a Newly Engineered Multi-Scale Residual Nested U-Net with Gated Attention
Kalluvila, Aryan; Patel, Jay B.; Johnson, Jason M.
A 9.4T brain MRI scanner offers the highest resolution of any MRI system on the public market. It provides submillimeter brain imaging with exceptional anatomical detail, making it one of the most powerful tools for detecting subtle structural changes associated with neurological conditions. Current segmentation models are optimized for lower-field MRI (1.5T–3T), and they struggle to perform well on 9.4T data. In this study, we present GA-MS-UNet++, the world’s first deep learning-based model specifically designed for 9.4T brain MRI segmentation. Our model integrates multi-scale residual blocks, gated skip connections, and spatial channel attention mechanisms to improve both local and global feature extraction. The model was trained and evaluated on 12 patients from the UltraCortex 9.4T dataset and benchmarked against four leading segmentation models (Attention U-Net, Nested U-Net, VDSR, and R2UNet). GA-MS-UNet++ achieved state-of-the-art performance across both evaluation sets. When tested against manual, radiologist-reviewed ground truth masks, the model achieved a Dice score of 0.93. On a separate test set using SynthSeg-generated masks as the ground truth, the Dice score was 0.89. Across both evaluations, the model achieved an overall accuracy of 97.29%, precision of 90.02%, and recall of 94.00%. Statistical validation using the Wilcoxon signed-rank test (p &lt; 1 × 10−5) and Kruskal–Wallis test (H = 26,281.98, p &lt; 1 × 10−5) confirmed the significance of these results. Qualitative comparisons also showed near-exact alignment with ground truth masks, particularly in areas such as the ventricles and gray–white matter interfaces. Volumetric validation further demonstrated a high correlation (R2 = 0.90) between predicted and ground truth brain volumes. Despite the limited annotated data, GA-MS-UNet++ maintained strong performance and has potential for clinical use.
This algorithm represents the first publicly available segmentation model for 9.4T imaging, providing a powerful tool for high-resolution brain segmentation and driving progress in automated neuroimaging analysis.
</summary>
<dc:date>2025-09-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>De novo design of a two-step approach targeting Claudin-6 for enhanced drug delivery to solid tumors</title>
<link href="https://hdl.handle.net/1721.1/164016" rel="alternate"/>
<author>
<name>Yan, Jiayao</name>
</author>
<author>
<name>Zhong, Liqing</name>
</author>
<author>
<name>Chen, Xiaotong</name>
</author>
<author>
<name>Li, Lin</name>
</author>
<author>
<name>Liu, Fangcen</name>
</author>
<author>
<name>Lei, Lei</name>
</author>
<author>
<name>An, Mengchao</name>
</author>
<author>
<name>Wei, Xiao</name>
</author>
<author>
<name>Wang, Ying</name>
</author>
<author>
<name>Chen, Tianran</name>
</author>
<author>
<name>Guo, Jingyi</name>
</author>
<author>
<name>Shao, Jie</name>
</author>
<author>
<name>Yu, Xiaoxiao</name>
</author>
<author>
<name>Zhao, Yingjie</name>
</author>
<author>
<name>Li, Rutian</name>
</author>
<author>
<name>Liu, Qin</name>
</author>
<id>https://hdl.handle.net/1721.1/164016</id>
<updated>2025-11-26T03:10:50Z</updated>
<published>2025-11-20T00:00:00Z</published>
<summary type="text">De novo design of a two-step approach targeting Claudin-6 for enhanced drug delivery to solid tumors
Yan, Jiayao; Zhong, Liqing; Chen, Xiaotong; Li, Lin; Liu, Fangcen; Lei, Lei; An, Mengchao; Wei, Xiao; Wang, Ying; Chen, Tianran; Guo, Jingyi; Shao, Jie; Yu, Xiaoxiao; Zhao, Yingjie; Li, Rutian; Liu, Qin
Background: Although antibody-conjugated drugs have achieved success in clinical practice for cancer treatment, challenges remain in developing a highly efficient drug delivery system with specific accumulation in tumors and reduced side effects. With improved pharmacokinetics, strong covalent bonding and quick binding reactions, a pre-targeting approach via molecular pairs represents an attractive platform for constructing a two-step delivery system. Methods: Bioinformatics and immunohistochemistry assays were performed to assess Claudin-6 (CLDN6) as a highly specific tumor target in solid tumors. A phage-displayed library was used to screen and optimize anti-CLDN6 designed ankyrin repeat proteins (DARPins), which were incorporated into a two-step delivery system based on SpyTag/SpyCatcher. Fluorescent staining, flow cytometry and near-infrared imaging were performed to assess the tumor-targeting ability and biodistribution of this delivery system. The cytotoxic drug monomethyl auristatin E (MMAE) was conjugated with the delivery system to evaluate its anti-tumor efficacy and safety profile. Results: Anti-CLDN6 DARPins exhibited specific, high-affinity binding to CLDN6+ cancer cells, but not to CLDN6-negative cells, in vitro, ex vivo and in vivo. The DARPins-based two-step delivery system improved background clearance with a high signal-to-noise ratio, enhancing the specific accumulation of payloads in tumors. The cytotoxic drug delivered via the two-step system appeared superior to the one-step approach in IC50, biodistribution, and tumor growth inhibition. Conclusions: Our study presented the de novo design of a two-step drug delivery system targeting Claudin-6 with enhanced anti-tumor efficacy and improved biosafety. These findings highlight the potential of this approach to enhance the efficacy of tumor-targeting therapies and reduce adverse effects, paving the way for more effective cancer treatments.
</summary>
<dc:date>2025-11-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Does AI Transform Cyber Risk Management?</title>
<link href="https://hdl.handle.net/1721.1/164015" rel="alternate"/>
<author>
<name>Zeijlemaker, Sander</name>
</author>
<author>
<name>Lemiesa, Yaphet K</name>
</author>
<author>
<name>Schröer, Saskia Laura</name>
</author>
<author>
<name>Abhishta, Abhishta</name>
</author>
<author>
<name>Siegel, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164015</id>
<updated>2025-11-26T03:10:51Z</updated>
<published>2025-09-23T00:00:00Z</published>
<summary type="text">How Does AI Transform Cyber Risk Management?
Zeijlemaker, Sander; Lemiesa, Yaphet K; Schröer, Saskia Laura; Abhishta, Abhishta; Siegel, Michael
Digital transformation embeds smart cities, e-health, and Industry 4.0 into critical infrastructures, increasing reliance on digital systems, exposure to cyber threats, complexity, and dependency. Research involving over 200 executives reveals that, under rising complexity, only 15% of cyber risk investments are effective, leaving most organizations misaligned or vulnerable. In this context, the role of artificial intelligence (AI) in cybersecurity requires systemic scrutiny. This study analyzes how AI reshapes systemic structures in cyber risk management through a multi-method approach: a literature review, expert workshops with practitioners and policymakers, and a structured kill-chain analysis of the Colonial Pipeline attack. The findings reveal three new feedback loops: (1) deceptive defense structures that misdirect adversaries while protecting assets, (2) two-step success-to-success attacks that disable defenses before targeting infrastructure, and (3) autonomous proliferation when AI applications go rogue. These dynamics shift cyber risk from linear patterns to adaptive, compounding interactions. The principal conclusion is that AI both amplifies and mitigates systemic risk. The core recommendation is to institutionalize deception in security standards and address drifting AI-powered systems. Deliverables include validated systemic structures, policy options, and a foundation for future simulation models to support strategic cyber risk management investment.
</summary>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flex-Route Transit for Smart Cities: A Reinforcement Learning Approach to Balance Ridership and Performance</title>
<link href="https://hdl.handle.net/1721.1/164014" rel="alternate"/>
<author>
<name>Rodriguez, Joseph</name>
</author>
<author>
<name>Koutsopoulos, Haris N.</name>
</author>
<author>
<name>Zhao, Jinhua</name>
</author>
<id>https://hdl.handle.net/1721.1/164014</id>
<updated>2025-11-26T03:10:31Z</updated>
<published>2025-09-16T00:00:00Z</published>
<summary type="text">Flex-Route Transit for Smart Cities: A Reinforcement Learning Approach to Balance Ridership and Performance
Rodriguez, Joseph; Koutsopoulos, Haris N.; Zhao, Jinhua
A major challenge for modern transit systems relying on traditional fixed-route designs is providing broad accessibility to users. Flex-route transit can enhance accessibility in low-density areas, since it combines the directness of fixed-route transit with the coverage of on-demand mobility. Although deviating for optional pickups can increase ridership and transit accessibility, it also deteriorates the service performance for fixed-route riders. To balance this inherent trade-off, this paper proposes a reinforcement learning approach for deviation decisions. The proposed model is used in a case study of a proposed flex-route service in the city of Boston. The performance on competing objectives is evaluated for reward configurations that adapt to peak and off-peak scenarios. The analysis shows a significant improvement of our method compared to a heuristic derived from industry practice as a baseline. To evaluate robustness, we assess performance across scenarios with varying demand compositions (fixed and requested riders). The results show that the method achieves greater improvements than the baseline in scenarios with increased request ridership, i.e., where decision-making is more complex. Our approach improves service performance under dynamic demand conditions and varying priorities, offering a valuable tool for smart cities to operate flex-route services.
</summary>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for a cH signal in the associated production of at least one charm quark with a Higgs boson in the diphoton decay channel in pp collisions at $$\sqrt{s}=13$$ TeV</title>
<link href="https://hdl.handle.net/1721.1/164013" rel="alternate"/>
<author>
<name>Chekhovsky, V.</name>
</author>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<id>https://hdl.handle.net/1721.1/164013</id>
<updated>2025-11-26T03:10:44Z</updated>
<published>2025-11-12T00:00:00Z</published>
<summary type="text">Search for a cH signal in the associated production of at least one charm quark with a Higgs boson in the diphoton decay channel in pp collisions at $$\sqrt{s}=13$$ TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.
This paper presents the first search for a cH signal sensitive to the coupling of the charm quark (c) to the Higgs boson (H) in the associated production of at least one charm quark with a Higgs boson decaying to two photons. The results are based on a data set of proton-proton collisions at a center-of-mass energy of 13 TeV collected with the CMS experiment at the LHC, corresponding to an integrated luminosity of 138 fb−1. Assuming the standard model (SM) rates for all other Higgs boson production processes, the observed (expected) upper limit at 95% confidence level on the cH signal strength is 243 (355) times the SM prediction. Under the same assumption, the observed (expected) allowed interval on the Higgs boson to charm quark coupling modifier, κc, is |κc| &lt; 38.1 (|κc| &lt; 72.5) at 95% confidence level.
</summary>
<dc:date>2025-11-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Bunch of Gaps: Factors Behind Service Reliability in Chicago’s High-Frequency Transit Network</title>
<link href="https://hdl.handle.net/1721.1/164012" rel="alternate"/>
<author>
<name>Rodriguez, Joseph</name>
</author>
<author>
<name>Koutsopoulos, Haris N.</name>
</author>
<author>
<name>Zhao, Jinhua</name>
</author>
<id>https://hdl.handle.net/1721.1/164012</id>
<updated>2025-11-26T03:10:32Z</updated>
<published>2025-08-28T00:00:00Z</published>
<summary type="text">A Bunch of Gaps: Factors Behind Service Reliability in Chicago’s High-Frequency Transit Network
Rodriguez, Joseph; Koutsopoulos, Haris N.; Zhao, Jinhua
Frequent transit services in urban areas have the potential to increase their accessibility to transit-dependent riders and reduce congestion by attracting new ridership through a modal shift. However, bus services operating in mixed traffic face operational challenges that reduce reliability and hinder their attractiveness. The sources of unreliability can range from local-level conditions, like the road infrastructure, to higher-level decisions, like the service plan. For the effective planning of improvement strategies, both scales of analysis must be considered. This paper uses a novel modeling framework to understand reliability by analyzing the route and segment factors separately. The Chicago Transit Authority (CTA) bus network is used as a case study for the analysis. The data reflect the operational, demand, and urban conditions of 50 high-frequency bus routes. At the route level, we use the coefficient of headway variation as the dependent variable and diverse route characteristics as explanatory variables. The results indicate that the most significant contributors to the variability of headways are variability in schedules and dispatching at terminals. It is also found that driver experience impacts reliability and that east–west routes are more unreliable than north–south routes. At the segment level, we use data from trips involved in bunching and gaps. As the dependent variable, a novel measure is formulated to capture how quickly bunching or gaps are formed. The bunching and gap events are treated as separate regression models. Findings suggest that link and dwell time variability are the most significant contributors to gap and bunching formation. In terms of infrastructure, bus lane segments reduce gap formations, and left turns increase bunching and gap formations. 
The insights presented can inform improvements in service and transit infrastructure planning to improve transit level of service (LOS) and support the future of sustainable, smart cities.
</summary>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oil Transport Simulation and Oil Consumption Prediction with a Physics-Based and Data-Driven Digital Twin Model for Internal Combustion Engines</title>
<link href="https://hdl.handle.net/1721.1/164011" rel="alternate"/>
<author>
<name>Zhong, Xinlin</name>
</author>
<author>
<name>Tian, Tian</name>
</author>
<id>https://hdl.handle.net/1721.1/164011</id>
<updated>2025-11-26T03:10:58Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">Oil Transport Simulation and Oil Consumption Prediction with a Physics-Based and Data-Driven Digital Twin Model for Internal Combustion Engines
Zhong, Xinlin; Tian, Tian
Lubrication oil consumption (LOC) is one of the major sources of emissions from internal combustion (IC) engines; yet, analyzing and predicting it through modeling is challenging due to its multi-physics nature, which spans different time and length scales. In this work, a digital twin model is developed to simulate oil transport in the piston ring pack of IC engines and predict the resulting oil consumption with all major physical mechanisms considered. Three main contributors to LOC, namely, top ring up-scraping, oil vaporization on the liner, and reverse gas flows through the top ring gap, are included in the model. It was found that their behaviors are heavily dependent on the arrangement of the piston ring gaps. Therefore, with the ring rotation behavior still not resolved, the current model can predict the LOC range of a given engine profile. Results show that the predicted range can well encapsulate the experimentally measured LOC value.
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>LSM and CPT</title>
<link href="https://hdl.handle.net/1721.1/164010" rel="alternate"/>
<author>
<name>Seiberg, Nathan</name>
</author>
<author>
<name>Shao, Shu-Heng</name>
</author>
<author>
<name>Zhang, Wucheng</name>
</author>
<id>https://hdl.handle.net/1721.1/164010</id>
<updated>2025-11-26T03:10:45Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">LSM and CPT
Seiberg, Nathan; Shao, Shu-Heng; Zhang, Wucheng
We study a number of 1+1d lattice models with anti-unitary symmetries that simultaneously reflect space and reverse time. Some of these symmetries are anomalous, leading to Lieb-Schultz-Mattis-type constraints, thus excluding a trivially gapped phase. Examples include a mod 8 anomaly in the Majorana chain and various mod 2 anomalies in the spin chain. In some cases, there is an exact, non-anomalous lattice symmetry that flows in the continuum to CPT. In some other cases, the CPT symmetry of the continuum theory is emergent or absent. Depending on the model, the anomaly of the lattice model is matched in the continuum in different ways. In particular, it can be mapped to an emergent anomaly of an emanant symmetry.
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Crystallization of Glauber's salt</title>
<link href="https://hdl.handle.net/1721.1/164009" rel="alternate"/>
<author>
<name>Coberly, C. Wheeler.</name>
</author>
<id>https://hdl.handle.net/1721.1/164009</id>
<updated>2025-11-25T06:32:25Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Crystallization of Glauber's salt
Coberly, C. Wheeler.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 39).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of torquemeters for high speed shafts</title>
<link href="https://hdl.handle.net/1721.1/164008" rel="alternate"/>
<author>
<name>Saluja, Narinder S. (Narinder Singh)</name>
</author>
<id>https://hdl.handle.net/1721.1/164008</id>
<updated>2025-11-25T06:33:34Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">Investigation of torquemeters for high speed shafts
Saluja, Narinder S. (Narinder Singh)
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1959; Includes bibliographical references (leaves 64-67).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The resonant-frequency shift of a microwave cavity caused by the high-density plasma in semiconductors, as a function of magnetic field</title>
<link href="https://hdl.handle.net/1721.1/164007" rel="alternate"/>
<author>
<name>Weber, Robert.</name>
</author>
<id>https://hdl.handle.net/1721.1/164007</id>
<updated>2025-11-25T03:04:01Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">The resonant-frequency shift of a microwave cavity caused by the high-density plasma in semiconductors, as a function of magnetic field
Weber, Robert.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Physics, 1959; Includes bibliographical references (leaves 46-47).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of angular scintillation of radar echoes</title>
<link href="https://hdl.handle.net/1721.1/164006" rel="alternate"/>
<author>
<name>Graham, James William.</name>
</author>
<id>https://hdl.handle.net/1721.1/164006</id>
<updated>2025-11-25T06:32:44Z</updated>
<published>1952-01-01T00:00:00Z</published>
<summary type="text">Analysis of angular scintillation of radar echoes
Graham, James William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1952
</summary>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rapid transit use of existing rail lines</title>
<link href="https://hdl.handle.net/1721.1/164005" rel="alternate"/>
<author>
<name>Kenyon, Michael D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164005</id>
<updated>2025-11-25T06:33:31Z</updated>
<published>1958-01-01T00:00:00Z</published>
<summary type="text">Rapid transit use of existing rail lines
Kenyon, Michael D.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1958; Includes bibliographical references (leaf 25).
</summary>
<dc:date>1958-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mass transfer from rotating cylinders</title>
<link href="https://hdl.handle.net/1721.1/164004" rel="alternate"/>
<author>
<name>Cotter, John.</name>
</author>
<author>
<name>Schmidt, Guy L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164004</id>
<updated>2025-11-25T06:33:29Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Mass transfer from rotating cylinders
Cotter, John.; Schmidt, Guy L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1956; Bibliography: leaf 38.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An observation about the Chicago Council and its policies</title>
<link href="https://hdl.handle.net/1721.1/164003" rel="alternate"/>
<author>
<name>Naber, Fred P.</name>
</author>
<id>https://hdl.handle.net/1721.1/164003</id>
<updated>2025-11-25T06:33:26Z</updated>
<published>1948-01-01T00:00:00Z</published>
<summary type="text">An observation about the Chicago Council and its policies
Naber, Fred P.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1948
</summary>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design and construction of an ultra-high vacuum field-ion microscope.</title>
<link href="https://hdl.handle.net/1721.1/164002" rel="alternate"/>
<author>
<name>Olson, Gregory Bruce.</name>
</author>
<id>https://hdl.handle.net/1721.1/164002</id>
<updated>2025-11-25T06:32:36Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">The design and construction of an ultra-high vacuum field-ion microscope.
Olson, Gregory Bruce.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Bibliography: leaf 35.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>South End Center for the Arts.</title>
<link href="https://hdl.handle.net/1721.1/164001" rel="alternate"/>
<author>
<name>Dunbar, Gary Arthur.</name>
</author>
<id>https://hdl.handle.net/1721.1/164001</id>
<updated>2025-11-25T06:33:23Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">South End Center for the Arts.
Dunbar, Gary Arthur.
Thesis: B. Arch., Massachusetts Institute of Technology, Department of Architecture, 1965; "Special requirements for group A occupancy: theatres" leaves [34-42] inserted. "Special requirements for group C occupancy: schools" leaves [50-54] inserted.; Bibliography: leaf 20.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of braced excavations.</title>
<link href="https://hdl.handle.net/1721.1/164000" rel="alternate"/>
<author>
<name>Wong, Ing Hieng.</name>
</author>
<id>https://hdl.handle.net/1721.1/164000</id>
<updated>2025-11-25T03:04:06Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Analysis of braced excavations.
Wong, Ing Hieng.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Civil Engineering, 1971; Three leaves on transparent sheets. Vita.; Bibliography: leaves 95-99.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shipleasing as a prospective method of l/t financing for international shipowners.</title>
<link href="https://hdl.handle.net/1721.1/163999" rel="alternate"/>
<author>
<name>Angelicoussis, John Anthony.</name>
</author>
<id>https://hdl.handle.net/1721.1/163999</id>
<updated>2025-11-25T06:32:28Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Shipleasing as a prospective method of l/t financing for international shipowners.
Angelicoussis, John Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1974; Includes bibliographical references.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modelling rail freight management.</title>
<link href="https://hdl.handle.net/1721.1/163998" rel="alternate"/>
<author>
<name>Assad, A. (Arjang)</name>
</author>
<id>https://hdl.handle.net/1721.1/163998</id>
<updated>2025-11-25T03:03:57Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Modelling rail freight management.
Assad, A. (Arjang)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, 1978; Vita.; Bibliography: leaves 277-292.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates</title>
<link href="https://hdl.handle.net/1721.1/163997" rel="alternate"/>
<author>
<name>Lehman, LeNore Louise.</name>
</author>
<id>https://hdl.handle.net/1721.1/163997</id>
<updated>2025-11-25T06:32:40Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates
Lehman, LeNore Louise.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1988; Includes bibliographical references.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information</title>
<link href="https://hdl.handle.net/1721.1/163996" rel="alternate"/>
<author>
<name>Huttenlocher, Daniel P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163996</id>
<updated>2025-11-25T06:32:20Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information
Huttenlocher, Daniel P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1984; Bibliography: leaves 73-77.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metabolism in vivo of 1, 3-butanediol in the rat</title>
<link href="https://hdl.handle.net/1721.1/163995" rel="alternate"/>
<author>
<name>Nahapetian, Aratoonnaz</name>
</author>
<id>https://hdl.handle.net/1721.1/163995</id>
<updated>2025-11-25T03:03:27Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Metabolism in vivo of 1, 3-butanediol in the rat
Nahapetian, Aratoonnaz
The metabolism of 1,3-butanediol (BD) was investigated in vitamin B12-deficient and normal rats and in liver slice and diaphragm systems. Body weight gain and feed efficiency were determined in rats fed ad libitum for five weeks on a basal 5% BD or 5% sodium propionate diet with and without vitamin B12. The rats were train-fed for ten months on the same diets. The presence of sodium propionate in vitamin B12-deficient basal diets resulted in reduced food intake, while BD had the opposite effect. As a result, vitamin B12-deficient rats fed a 5% sodium propionate diet grew less than those fed a 5% BD diet. The metabolism in vivo of BD labeled in carbon-1 (BD-1-C14) and carbon-4 (BD-4-C14) was compared to the metabolism of propionate-1-C14 (PRP-1-C14) in vitamin B12-deficient and normal rats. Vitamin B12 deficiency reduced the oxidation of sodium propionate but not that of BD, and had no effect on glycogen labeling from BD-1-C14 and BD-4-C14. For PRP-1-C14, however, vitamin B12 deficiency resulted in not only no incorporation of label but also very small liver glycogen levels. On the other hand, when vitamin B12 was present in the diet, the labeling of glycogen from propionate was higher than that from either of the BD-labeled test compounds. Methylmalonic aciduria and urinary loss of ingested activity were higher in vitamin B12-deficient rats fed PRP-1-C14 than in those fed labeled BD. Nearly all of the urinary activity of vitamin B12-deficient rats fed PRP-1-C14 was in the form of methylmalonic acid (MMA), while little, if any, of the activity was found in the MMA fraction of urine of vitamin B12-deficient rats fed labeled BD. The metabolism in vivo of BD-C14 and BD-3-C14 was investigated in normal rats. About eighty percent of BD was oxidized to carbon dioxide within 32 hours. Its oxidation in the first eight hours was higher when BD was administered intraperitoneally than when it was fed by stomach tube. 
The loss of ingested activity in the urine, expressed as a percentage of total intake and of 1,3-BD, was higher at the higher doses of BD. However, the activity in urinary BD could not account for all the activity in the urine. A considerable amount of ketone bodies was detected in the urine of rats after feeding BD, while no detectable ketone bodies were found in the urine of control rats. In addition, the relative specific activities of urinary BD and β-hydroxybutyrate were 0.91 and 0.50, respectively. Polarimetry of both purified urinary BD and β-hydroxybutyrate showed that the percentages of (+)- and (-)-isomers of both compounds were 40 and 60%, respectively. The metabolism in vitro of BD-3-C14 and DL-β-hydroxybutyrate-4-C14 was investigated in systems which contained liver slices alone, diaphragm alone, or both liver slices plus diaphragm. The oxidation rate of β-hydroxybutyrate was lower in liver slices than in either the diaphragm or the liver slices plus diaphragm systems. Moreover, the rate of oxidation of β-hydroxybutyrate was highest in the system which included both liver slices and diaphragm. On the other hand, the oxidation rate of BD was lower in the system which had only diaphragm than in the other two systems. However, the rate of BD oxidation was highest in the system which included both liver slices and diaphragm. The presence of BD gave rise to increased D-(-)-β-hydroxybutyrate and acetoacetate in systems which contained liver slices or liver slices plus diaphragm. In addition, the production rate of D-(-)-β-hydroxybutyric acid was higher than that of acetoacetate in the presence of BD, while the opposite was true in its absence. Finally, all the radioactivity in the control incubation media was accounted for by BD-3-C14, while about 1.5 and 98.5 percent of incubation media activity were recovered in the β-hydroxybutyrate and BD peaks, respectively, in incubation systems containing liver. 
The results of this study indicate that 1,3-BD and sodium propionate do not share a common metabolic pathway in the rat. The data suggest, however, that 1,3-BD is most probably oxidized to β-hydroxybutyric acid using a "1,3-butanediol dehydrogenase" that is higher in activity in the liver than in the diaphragm. Moreover, the (+)-isomer of BD is oxidized at a faster rate than the (-)-isomer, suggesting that the two isomers are oxidized by two different pathways.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1971; Thesis supervised by Sanford A. Miller; Vita: page 196; Includes bibliographical references (pages 182-196)
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies</title>
<link href="https://hdl.handle.net/1721.1/163994" rel="alternate"/>
<author>
<name>Perkins, Edwin H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163994</id>
<updated>2025-11-25T06:32:33Z</updated>
<published>1930-01-01T00:00:00Z</published>
<summary type="text">An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies
Perkins, Edwin H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1930; Includes bibliographical references (leaf 115).
</summary>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Micro-analysis of grinding machine cuttings</title>
<link href="https://hdl.handle.net/1721.1/163993" rel="alternate"/>
<author>
<name>Zurlo, J. V.</name>
</author>
<author>
<name>Terkelsen, E. A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163993</id>
<updated>2025-11-25T06:33:21Z</updated>
<published>1922-01-01T00:00:00Z</published>
<summary type="text">Micro-analysis of grinding machine cuttings
Zurlo, J. V.; Terkelsen, E. A.
Thesis: B.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1922
</summary>
<dc:date>1922-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tests upon bamboo as a concrete reinforcement and a consideration of its application in construction</title>
<link href="https://hdl.handle.net/1721.1/163992" rel="alternate"/>
<author>
<name>Young, Joe W.</name>
</author>
<author>
<name>Guo, Dianbang.</name>
</author>
<id>https://hdl.handle.net/1721.1/163992</id>
<updated>2025-11-25T06:33:09Z</updated>
<published>1924-01-01T00:00:00Z</published>
<summary type="text">Tests upon bamboo as a concrete reinforcement and a consideration of its application in construction
Young, Joe W.; Guo, Dianbang.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1924
</summary>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A precision method for the determination of dew points of complex gaseous systems</title>
<link href="https://hdl.handle.net/1721.1/163991" rel="alternate"/>
<author>
<name>Cox, John Tatum.</name>
</author>
<id>https://hdl.handle.net/1721.1/163991</id>
<updated>2025-11-25T06:32:38Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">A precision method for the determination of dew points of complex gaseous systems
Cox, John Tatum.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 43).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A biomimetic chip to assess subcutaneous bioavailability of monoclonal antibodies in humans</title>
<link href="https://hdl.handle.net/1721.1/163990" rel="alternate"/>
<author>
<name>Chandran Suja, Vineeth</name>
</author>
<author>
<name>Qi, Qin M</name>
</author>
<author>
<name>Halloran, Kevin</name>
</author>
<author>
<name>Zhang, Jifeng</name>
</author>
<author>
<name>Shaha, Suyog</name>
</author>
<author>
<name>Prakash, Supriya</name>
</author>
<author>
<name>Kumbhojkar, Ninad</name>
</author>
<author>
<name>Deslandes, Antoine</name>
</author>
<author>
<name>Huille, Sylvain</name>
</author>
<author>
<name>Gokarn, Yatin R</name>
</author>
<author>
<name>Mitragotri, Samir</name>
</author>
<id>https://hdl.handle.net/1721.1/163990</id>
<updated>2025-11-25T06:37:31Z</updated>
<published>2023-10-09T00:00:00Z</published>
<summary type="text">A biomimetic chip to assess subcutaneous bioavailability of monoclonal antibodies in humans
Chandran Suja, Vineeth; Qi, Qin M; Halloran, Kevin; Zhang, Jifeng; Shaha, Suyog; Prakash, Supriya; Kumbhojkar, Ninad; Deslandes, Antoine; Huille, Sylvain; Gokarn, Yatin R; Mitragotri, Samir
Subcutaneous (subQ) injection is a common route for delivering biotherapeutics, wherein pharmacokinetics is largely influenced by drug transport in a complex subQ tissue microenvironment. The selection of good drug candidates with beneficial pharmacokinetics for subQ injections is currently limited by a lack of reliable testing models. To address this limitation, we report here a Subcutaneous Co-Culture Tissue-on-a-chip for Injection Simulation (SubCuTIS). SubCuTIS possesses a 3D coculture tissue architecture, and it allows facile quantitative determination of relevant scale independent drug transport rate constants. SubCuTIS captures key in vivo physiological characteristics of the subQ tissues, and it differentiates the transport behavior of various chemically distinct molecules. We supplemented the transport measurements with theoretical modeling, which identified subtle differences in the local absorption rate constants of seven clinically available mAbs. Accounting for first-order proteolytic catabolism, we established a mathematical framework to assess clinical bioavailability using the local absorption rate constants obtained from SubCuTIS. Taken together, the technology described here broadens the applicability of organs-on-chips as a standardized and easy-to-use device for quantitative analysis of subQ drug transport.
</summary>
<dc:date>2023-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanoparticle-induced lipid membrane deformation influences the design of biomedicine</title>
<link href="https://hdl.handle.net/1721.1/163989" rel="alternate"/>
<author>
<name>Pincus, Isaac</name>
</author>
<author>
<name>Qi, Qin M</name>
</author>
<id>https://hdl.handle.net/1721.1/163989</id>
<updated>2025-11-25T06:37:41Z</updated>
<published>2026-07-21T00:00:00Z</published>
<summary type="text">Nanoparticle-induced lipid membrane deformation influences the design of biomedicine
Pincus, Isaac; Qi, Qin M
Controlling the physicochemical properties of nanoparticles is important for their performance as drug carriers, pharmaceuticals, or imaging contrast agents in nanomedicine. Predictive models can accelerate experimental designs at reduced time and costs compared to a brute-force approach conventionally used. However, physical principles underlying particle-cell interactions are still poorly understood due to their large size contrast, hindering the model development. In this work, we describe a model that examines the interaction between multiple particles and the membrane of a mammalian cell or an artificial vesicle, thus influencing the outcomes of surface adsorption, detachment or uptake of particles. Compared to existing biophysical models on particle-membrane interactions accounting for membrane adhesion, stretching and bending energies, we make several important updates that are essential to reaching quantitative agreement with existing experimental data. Particle-induced membrane tension changes are crucial to the membrane deformation even at very low surface concentrations (0.1%); we explain this surprising finding using a new length scale previously neglected. Furthermore, a multi-step and non-equilibrium endocytosis mechanism is proposed in the absence of specific receptor-ligand interactions, inspired by recent experimental evidence on the dynamic regulation of membrane tension through the active transport of lipid molecules. We demonstrate the predictive power of our model in generating the adsorption isotherms and shear-induced particle detachment from cell surfaces and the size-dependent rate of particle uptake. Our research provides a framework to design tailor-made nanoparticles with controllable interaction outcomes with various cell types based on a quantitative and fundamental understanding.
</summary>
<dc:date>2026-07-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective use of biosensors for high-throughput library screening for metabolite production</title>
<link href="https://hdl.handle.net/1721.1/163988" rel="alternate"/>
<author>
<name>Kaczmarek, Jennifer A</name>
</author>
<author>
<name>Prather, Kristala LJ</name>
</author>
<id>https://hdl.handle.net/1721.1/163988</id>
<updated>2025-11-25T06:37:39Z</updated>
<published>2021-08-04T00:00:00Z</published>
<summary type="text">Effective use of biosensors for high-throughput library screening for metabolite production
Kaczmarek, Jennifer A; Prather, Kristala LJ
The development of fast and affordable microbial production from recombinant pathways is a challenging endeavor, with targeted improvements difficult to predict due to the complex nature of living systems. To address the limitations in biosynthetic pathways, much work has been done to generate large libraries of various genetic parts (promoters, RBSs, enzymes, etc.) to discover library members that bring about significantly improved levels of metabolite production. To evaluate these large libraries, high-throughput approaches are necessary, such as those that rely on biosensors. There are various modes of operation to apply biosensors to library screens that are available at different scales of throughput. The effectiveness of each biosensor-based method is dependent on the pathway or strain to which it is applied, and all approaches have strengths and weaknesses to be carefully considered for any high-throughput library screen. In this review, we discuss the various approaches used in biosensor screening for improved metabolite production, focusing on transcription factor-based biosensors.
</summary>
<dc:date>2021-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lincoln Laboratory and MIT Haystack Observatory partner to unveil hidden parts of the galaxy</title>
<link href="https://hdl.handle.net/1721.1/163987" rel="alternate"/>
<author>
<name>Parde, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/163987</id>
<updated>2025-11-25T06:39:00Z</updated>
<published>2025-09-22T00:00:00Z</published>
<summary type="text">Lincoln Laboratory and MIT Haystack Observatory partner to unveil hidden parts of the galaxy
Parde, Nathan
They propose building a telescope made of thousands of tiny, identical satellites that will work together to reveal low-frequency radio waves in space.
</summary>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>A method for correcting the substructure of multiprong jets using the Lund jet plane</title>
<link href="https://hdl.handle.net/1721.1/163986" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Giordano, C.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163986</id>
<updated>2025-11-25T06:37:07Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">A method for correcting the substructure of multiprong jets using the Lund jet plane
Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Damanakis, K.; Dragicevic, M.; Giordano, C.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
Many analyses at the CERN LHC exploit the substructure of jets to identify heavy resonances produced with high momenta that decay into multiple quarks and/or gluons. This paper presents a new technique for correcting the substructure of simulated large-radius jets from multiprong decays. The technique is based on reclustering the jet constituents into several subjets such that each subjet represents a single prong, and separately correcting the radiation pattern in the Lund jet plane of each subjet using a correction derived from data. The data presented here correspond to an integrated luminosity of 138 fb⁻¹ collected by the CMS experiment between 2016 and 2018 at a center-of-mass energy of 13 TeV. The correction procedure improves the agreement between data and simulation for several different substructure observables of multiprong jets. This technique establishes, for the first time, a robust calibration for the substructure of jets with four or more prongs, enabling future measurements and searches for new phenomena containing these signatures.
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inclusionary and Exclusionary Preferences: A Test of Three Cognitive Mechanisms</title>
<link href="https://hdl.handle.net/1721.1/163985" rel="alternate"/>
<author>
<name>Landau-Wells, Marika</name>
</author>
<author>
<name>Lydic, Kirsten O.</name>
</author>
<author>
<name>Kennedy, Joachim</name>
</author>
<author>
<name>Mittman, Benjamin G.</name>
</author>
<author>
<name>Thompson, Todd W.</name>
</author>
<author>
<name>Gupta, Akhil</name>
</author>
<author>
<name>Saxe, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/163985</id>
<updated>2025-11-25T06:37:00Z</updated>
<published>2025-11-22T00:00:00Z</published>
<summary type="text">Inclusionary and Exclusionary Preferences: A Test of Three Cognitive Mechanisms
Landau-Wells, Marika; Lydic, Kirsten O.; Kennedy, Joachim; Mittman, Benjamin G.; Thompson, Todd W.; Gupta, Akhil; Saxe, Rebecca
Exclusionary social policies take a significant toll on the mental and physical health of targeted groups. Support for specific exclusionary policies does not always align with general antipathy towards the targeted group, however. Does support for specific exclusionary policies rely on particular thought processes (i.e., cognitive mechanisms)? Does opposition? We investigate these questions through the lens of “bathroom laws” across two studies. In Study 1, we use functional neuroimaging to test three candidate cognitive mechanisms from the literature: (1) threat-related emotions (e.g., fear, disgust) supporting exclusionary preferences; (2) mentalizing (e.g., empathy, perspective-taking) supporting inclusionary preferences; and (3) self-regulation (e.g., aligning one’s behavior with one’s goals) supporting inclusionary preferences. Consistent with the intergroup conflict and prejudice literatures, we find evidence of a motivated self-regulation mechanism in bathroom law opponents. In Study 2, we investigate a possible source of this motivation using text analysis of open-ended policy preference justifications. We find that bathroom law opponents link their policy preference to a small number of specific values, particularly autonomy of action. Taken together, these studies point to a value-driven, motivational account of inclusionary preferences that reconciles puzzling patterns of public opinion, offers new levers for tolerance interventions, and provides some insight into the brain-basis of political behavior.
</summary>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerated Bayesian Calibration and Uncertainty Quantification of RANS Turbulence Model Parameters for Stratified Atmospheric Boundary Layer Flows</title>
<link href="https://hdl.handle.net/1721.1/163984" rel="alternate"/>
<author>
<name>Shin, Ethan Y.</name>
</author>
<author>
<name>Howland, Michael F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163984</id>
<updated>2025-11-25T06:37:03Z</updated>
<published>2025-11-22T00:00:00Z</published>
<summary type="text">Accelerated Bayesian Calibration and Uncertainty Quantification of RANS Turbulence Model Parameters for Stratified Atmospheric Boundary Layer Flows
Shin, Ethan Y.; Howland, Michael F.
In operational weather models, the effects of turbulence in the atmospheric boundary layer (ABL) on the resolved flow are modeled using turbulence parameterizations. These parameterizations typically use a predetermined set of model parameters that are tuned to limited data from canonical flows. Using these fixed parameters results in deterministic predictions that neglect uncertainty in the unresolved turbulence processes. In this study, we perform a machine learning-accelerated Bayesian inversion of a single-column model of the ABL. This approach is used to calibrate and quantify uncertainty in model parameters of Reynolds-averaged Navier–Stokes turbulence models. To verify the data-driven uncertainty quantification methodology, we test in an idealized setup in which a prescribed but unobserved set of parameters is learned from noisy approximations of the model output. Following this verification, we learn the parameters and their uncertainties in two different turbulence models conditioned on scale-resolving large-eddy simulation data over a range of ABL stabilities. We show how Bayesian inversion of a numerical model improves flow predictions by investigating the underlying mean momentum budgets. Further, we show that uncertainty quantification based on neutral ABL surface layer data recovers the relationships between parameters that have been predicted using theoretical modeling, but that learning the parameters based on stable ABL data or data from outside the surface layer can lead to different parameter relationships than neutral surface layer theory. Efforts to systematically reduce parameter uncertainty reveal that (1) sampling wind speed up to the ABL height can reduce uncertainty in key model parameters by up to 84%, and (2) assimilating fluid flow quantities beyond first-order moment statistics can further reduce uncertainty in ways that baseline wind speed assimilation alone cannot achieve. The parameters learned using Bayesian uncertainty quantification generally yield lower error than standard deterministic parameters in out-of-sample tests and also provide uncertainty intervals on predictions.
</summary>
<dc:date>2025-11-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Europa Clipper Magnetometer Boom Deployment: A First Look at the Magnetometer Observations of the Spacecraft and the Interplanetary Magnetic Field</title>
<link href="https://hdl.handle.net/1721.1/163983" rel="alternate"/>
<author>
<name>Cochrane, Corey J.</name>
</author>
<author>
<name>Joy, Steven P.</name>
</author>
<author>
<name>Korth, Haje</name>
</author>
<author>
<name>Biersteker, John B.</name>
</author>
<author>
<name>Blacksberg, Jordana</name>
</author>
<author>
<name>Bouchard, Michael</name>
</author>
<author>
<name>Contreras, Jacob</name>
</author>
<author>
<name>Dawson, Olivia R.</name>
</author>
<author>
<name>Khurana, Krishan K.</name>
</author>
<author>
<name>Murphy, Neil</name>
</author>
<author>
<name>Palm, Derek</name>
</author>
<author>
<name>Perley, Mitch O.</name>
</author>
<author>
<name>Pierce, David R.</name>
</author>
<author>
<name>Richter, Ingo</name>
</author>
<author>
<name>Russell, Christopher T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163983</id>
<updated>2025-11-25T06:37:05Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">Europa Clipper Magnetometer Boom Deployment: A First Look at the Magnetometer Observations of the Spacecraft and the Interplanetary Magnetic Field
Cochrane, Corey J.; Joy, Steven P.; Korth, Haje; Biersteker, John B.; Blacksberg, Jordana; Bouchard, Michael; Contreras, Jacob; Dawson, Olivia R.; Khurana, Krishan K.; Murphy, Neil; Palm, Derek; Perley, Mitch O.; Pierce, David R.; Richter, Ingo; Russell, Christopher T.
NASA’s Europa Clipper flagship mission is designed to investigate the habitability of Jupiter’s moon Europa. A key instrument aboard the spacecraft is the Europa Clipper Magnetometer (ECM), a suite of fluxgate magnetometer sensors deployed on a boom to minimize spacecraft-induced magnetic interference. The ECM investigation aims to characterize Europa’s induced magnetic field, offering constraints on the salinity, depth, and thickness of its subsurface ocean. This work presents the first in-flight ECM observations acquired during the magnetometer boom deployment and shortly thereafter. We show how these observations provide the requisite evidence needed to validate a successful deployment. We also demonstrate how these observations can be used to calibrate the sensor offsets and to develop new magnetic field models of the spacecraft of varying complexity, thus enabling the robust removal of the instrument’s zero-levels which is critical for achieving the mission’s science objectives. We finally share preliminary calibrated magnetometer observations acquired over a two-month period after deployment, revealing a very active interplanetary magnetic field characteristic of solar maximum.
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions</title>
<link href="https://hdl.handle.net/1721.1/163982" rel="alternate"/>
<author>
<name>Hadley, F</name>
</author>
<id>https://hdl.handle.net/1721.1/163982</id>
<updated>2025-11-25T06:39:03Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions
Hadley, F
The research center, sponsored by the DOE’s National Nuclear Security Administration, will advance the simulation of extreme environments, such as those in hypersonic flight and atmospheric reentry.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deuteron identification via time of flight with LHCb</title>
<link href="https://hdl.handle.net/1721.1/163981" rel="alternate"/>
<author>
<name>LHCb Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163981</id>
<updated>2025-11-25T06:36:57Z</updated>
<published>2025-11-19T00:00:00Z</published>
<summary type="text">Deuteron identification via time of flight with LHCb
LHCb Collaboration
It is shown that the timing capabilities of the LHCb detector operated during the LHC Run 2 can be used to identify light ion particles with momenta of a few GeV/c. This is achieved by estimating the particle time of flight through a newly developed technique. A dedicated reconstruction procedure and a neural-network-based estimator of the particle speed have been developed to enable deuteron identification by suppressing the abundant background from lighter particles. The performance of the identification procedure is demonstrated in a sample of proton-helium collisions at √sNN = 110 GeV, where the production of deuteron and triton particles is observed. This novel approach opens the way to study deuteron and antideuteron production for different collision systems at different energy scales, exploiting the rich dataset collected by the LHCb experiment.
</summary>
<dc:date>2025-11-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Confidently Comparing Estimates with the c-value</title>
<link href="https://hdl.handle.net/1721.1/163980" rel="alternate"/>
<author>
<name>Trippe, Brian L</name>
</author>
<author>
<name>Deshpande, Sameer K</name>
</author>
<author>
<name>Broderick, Tamara</name>
</author>
<id>https://hdl.handle.net/1721.1/163980</id>
<updated>2025-11-25T06:37:35Z</updated>
<published>2023-02-24T00:00:00Z</published>
<summary type="text">Confidently Comparing Estimates with the c-value
Trippe, Brian L; Deshpande, Sameer K; Broderick, Tamara
Modern statistics provides an ever-expanding toolkit for estimating unknown parameters. Consequently, applied statisticians frequently face a difficult decision: retain a parameter estimate from a familiar method or replace it with an estimate from a newer or more complex one. While it is traditional to compare estimates using risk, such comparisons are rarely conclusive in realistic settings. In response, we propose the “c-value” as a measure of confidence that a new estimate achieves smaller loss than an old estimate on a given dataset. We show that it is unlikely that a large c-value coincides with a larger loss for the new estimate. Therefore, just as a small p-value supports rejecting a null hypothesis, a large c-value supports using a new estimate in place of the old. For a wide class of problems and estimates, we show how to compute a c-value by first constructing a data-dependent high-probability lower bound on the difference in loss. The c-value is frequentist in nature, but we show that it can provide validation of shrinkage estimates derived from Bayesian models in real data applications involving hierarchical models and Gaussian processes. Supplementary materials for this article are available online.
</summary>
<dc:date>2023-02-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Accelerator Announces Award Winners</title>
<link href="https://hdl.handle.net/1721.1/163979" rel="alternate"/>
<author>
<name>Accelerator, AI</name>
</author>
<id>https://hdl.handle.net/1721.1/163979</id>
<updated>2025-11-25T06:39:01Z</updated>
<published>2025-07-31T00:00:00Z</published>
<summary type="text">AI Accelerator Announces Award Winners
Accelerator, AI
The Department of the Air Force (DAF)-MIT AI Accelerator is a unique collaboration designed to advance the field of AI to improve DAF operations and address broader societal needs. In June 2025, the DAF-MIT Artificial Intelligence Accelerator named the recipients of AI Accelerator awards, recognizing scientific excellence, distinguished contributions, and other exceptional accomplishments. The awardees were nominated and selected from members of the AI Accelerator community, including individuals from the DAF, MIT campus, and MIT Lincoln Laboratory.
</summary>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future circular collider feasibility study report</title>
<link href="https://hdl.handle.net/1721.1/163978" rel="alternate"/>
<author>
<name>Benedikt, M.</name>
</author>
<author>
<name>Zimmermann, F.</name>
</author>
<author>
<name>Auchmann, B.</name>
</author>
<author>
<name>Bartmann, W.</name>
</author>
<author>
<name>Burnet, J. P.</name>
</author>
<author>
<name>Carli, C.</name>
</author>
<author>
<name>Chancé, A.</name>
</author>
<author>
<name>Craievich, P.</name>
</author>
<author>
<name>Giovannozzi, M.</name>
</author>
<author>
<name>Grojean, C.</name>
</author>
<author>
<name>Gutleber, J.</name>
</author>
<author>
<name>Hanke, K.</name>
</author>
<author>
<name>Henriques, A.</name>
</author>
<author>
<name>Janot, P.</name>
</author>
<author>
<name>Lourenço, C.</name>
</author>
<author>
<name>Mangano, M.</name>
</author>
<author>
<name>Otto, T.</name>
</author>
<author>
<name>Poole, J.</name>
</author>
<author>
<name>Rajagopalan, S.</name>
</author>
<author>
<name>Raubenheimer, T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163978</id>
<updated>2025-11-25T06:37:26Z</updated>
<published>2025-11-17T00:00:00Z</published>
<summary type="text">Future circular collider feasibility study report
Benedikt, M.; Zimmermann, F.; Auchmann, B.; Bartmann, W.; Burnet, J. P.; Carli, C.; Chancé, A.; Craievich, P.; Giovannozzi, M.; Grojean, C.; Gutleber, J.; Hanke, K.; Henriques, A.; Janot, P.; Lourenço, C.; Mangano, M.; Otto, T.; Poole, J.; Rajagopalan, S.; Raubenheimer, T.
In response to the 2020 Update of the European Strategy for Particle Physics, the Future Circular Collider (FCC) Feasibility Study was launched as an international collaboration hosted by CERN. This report describes the FCC integrated programme, which consists of two stages: an electron-positron collider (FCC-ee) in the first phase, serving as a high-luminosity Higgs, top, and electroweak factory; followed by a proton-proton collider (FCC-hh) at the energy frontier in the second phase. The FCC-ee is designed to operate at four key centre-of-mass energies: the Z pole, the WW pair production threshold, the ZH production peak, and the top/anti-top production threshold—each delivering the highest possible luminosities to four experiments. Over 15 years of operation, FCC-ee will produce more than 6 trillion Z bosons, 200 million WW pairs, nearly 3 million Higgs bosons, and 2 million top anti-top pairs. Precise energy calibration at the Z pole and WW threshold will be achieved through frequent resonant depolarisation of pilot bunches. The sequence of operation modes between the Z, WW, and ZH substages remains flexible. The FCC-hh will operate at a centre-of-mass energy of approximately 85 TeV—nearly an order of magnitude higher than the LHC—and is designed to deliver 5 to 10 times the integrated luminosity of the upcoming High-Luminosity LHC. Its mass reach for direct discovery extends to several tens of TeV. In addition to proton-proton collisions, the FCC-hh is capable of supporting ion-ion, ion-proton, and lepton-hadron collision modes. This second volume of the Feasibility Study Report presents the complete design of the FCC-ee collider, its operation and staging strategy, the full-energy booster and injector complex, required accelerator technologies, safety concepts, and technical infrastructure. It also includes the design of the FCC-hh hadron collider, development of high-field magnets, hadron injector options, and key technical systems for FCC-hh.
</summary>
<dc:date>2025-11-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nitrous Oxide Distributions in the Oxygenated Water Column of the Sargasso Sea</title>
<link href="https://hdl.handle.net/1721.1/163977" rel="alternate"/>
<author>
<name>Meyer, Annaliese C. S.</name>
</author>
<author>
<name>Cullen, Jay T.</name>
</author>
<author>
<name>Grundle, Damian S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163977</id>
<updated>2025-11-25T06:37:36Z</updated>
<published>2022-12-15T00:00:00Z</published>
<summary type="text">Nitrous Oxide Distributions in the Oxygenated Water Column of the Sargasso Sea
Meyer, Annaliese C. S.; Cullen, Jay T.; Grundle, Damian S.
This study presents dissolved nitrous oxide (N2O) concentrations in the water column at the Bermuda Atlantic Time-series Study (BATS) station and uses a subset of these measurements to estimate air-to-sea flux for four specific time points between September 2018 and June 2019. N2O concentrations at BATS were in the range of 4.0–16.9 nmol L⁻¹, with vertical profiles which were the mirror inverse of dissolved oxygen. Regardless of season, N2O concentration maxima were found within the oxygen minimum zone (OMZ). The highest maximum N2O values were observed in November and lowest in October. As the water column at BATS remains consistently at dissolved oxygen concentrations greater than 140 µmol L⁻¹, and therefore aerobic, we assume that the bulk of N2O production occurs through nitrification. A nitrification source is supported by a correlation between excess N2O (ΔN2O) below the mixed layer, apparent oxygen utilization (AOU) and nitrate concentrations. We estimate a pooled average yield of 0.027% to 0.038% N2O from nitrification at BATS. Finally, estimates of air–sea exchange of N2O using regional average monthly wind speeds indicated that this region acts as a weak source or a sink of atmospheric N2O, and varies between months.
</summary>
<dc:date>2022-12-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Responding to the Climate Impact of Generative AI</title>
<link href="https://hdl.handle.net/1721.1/163976" rel="alternate"/>
<author>
<name>Zewe, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/163976</id>
<updated>2025-11-25T06:39:02Z</updated>
<published>2025-09-30T00:00:00Z</published>
<summary type="text">Responding to the Climate Impact of Generative AI
Zewe, Adam
Explosive growth of AI data centers is expected to increase greenhouse gas emissions. Researchers are now seeking solutions to reduce these environmental harms.
</summary>
<dc:date>2025-09-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>A holistic model for understanding the dynamics of outsourcing</title>
<link href="https://hdl.handle.net/1721.1/163975" rel="alternate"/>
<author>
<name>Uygun, Yilmaz</name>
</author>
<author>
<name>Gotsadze, Nikoloz</name>
</author>
<author>
<name>Schupp, Florian</name>
</author>
<author>
<name>Gzirishvili, Lizi</name>
</author>
<author>
<name>Tindjou Nana, Brigitte Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/163975</id>
<updated>2025-11-25T06:37:38Z</updated>
<published>2022-02-14T00:00:00Z</published>
<summary type="text">A holistic model for understanding the dynamics of outsourcing
Uygun, Yilmaz; Gotsadze, Nikoloz; Schupp, Florian; Gzirishvili, Lizi; Tindjou Nana, Brigitte Stephanie
Outsourcing is a complex process, as many external and internal factors that look convincing in the first place might nevertheless lead to failure in the long run. Motivated by this, we wanted to gain a holistic understanding of such outsourcing decisions. Thus, we created a comprehensive System Dynamics simulation model, consisting of more than 200 interrelated variables, to examine the dynamic nature of outsourcing in a holistic manner and over time. Our results show, amongst others, that higher process specialisation requiring substantial investments by the supplier appears to be favourable for an outsourcing company, and that shifting a larger quantity to such a supplier achieves better cost savings and thus a better overall outsourcing result. On an operational level, we identified an innovation trap, a bargaining power shift, a plagiarism trap, and a knowledge trap. Based on that, we give specific managerial recommendations to tackle these aspects. We conclude, amongst others, that it is important for innovative companies with rather complex processes and parts to carefully plan which and how many employees to release so as not to lose the knowledge on those outsourced processes and parts.
</summary>
<dc:date>2022-02-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lincoln Lab Unveils the Most Powerful AI Supercomputer at any US University</title>
<link href="https://hdl.handle.net/1721.1/163974" rel="alternate"/>
<author>
<name>Foy, Kylie</name>
</author>
<id>https://hdl.handle.net/1721.1/163974</id>
<updated>2025-11-25T06:38:54Z</updated>
<published>2025-10-02T00:00:00Z</published>
<summary type="text">Lincoln Lab Unveils the Most Powerful AI Supercomputer at any US University
Foy, Kylie
Optimized for generative AI, TX-GAIN is driving innovation in biodefense, materials discovery, cybersecurity, and other areas of research and development.
</summary>
<dc:date>2025-10-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mediation and ANCOVA Models to Study the Influence of Solvent Retting Traits and Plant Physique on Bast Fiber Yield and Retting Time</title>
<link href="https://hdl.handle.net/1721.1/163973" rel="alternate"/>
<author>
<name>Shuvo, Ikra Iftekhar</name>
</author>
<author>
<name>Hoque, Md. Saiful</name>
</author>
<author>
<name>Khandakar, Lovely K. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163973</id>
<updated>2025-11-25T06:37:33Z</updated>
<published>2022-07-11T00:00:00Z</published>
<summary type="text">Mediation and ANCOVA Models to Study the Influence of Solvent Retting Traits and Plant Physique on Bast Fiber Yield and Retting Time
Shuvo, Ikra Iftekhar; Hoque, Md. Saiful; Khandakar, Lovely K. M.
The study applies two statistical tools to analyze the retting behavior of plant stems for extracting bast fibers for industrial applications. At first, a mediation model is employed to investigate the first hypothesis of this work, which involves studying the color response of the retted solvent as a function of retting time on the response variable, fiber yield (%). Statistically, there is a significant indirect effect of retting time on fiber yield (%) through the retting trait (β = −0.0142, 95% C.I. [−0.0274, −0.0011]) – a statistical inference bolstered by the Sobel test result, confirming the mediation effect (p-value = 0.0329 &lt; 0.05; z-score = −2.1334; bootstrapping of 5000 resamples). Next, the second hypothesis of the current work involves analyzing the impact of stem form factors on retting time using the statistical tool ANCOVA. The partial η2 indicates that cultivar treatment accounts for 30% of the variance in retting time while controlling for the effects of two covariates – in this case, the diameter and length of the stems. By controlling the Type-I error, Bonferroni and similar post-hoc tests also confirm the statistical significance of cultivar categories pertaining to their mean retting time. Future work could focus on these underlying hypotheses and study the impact of microorganisms, environmental factors, and cultivar treatment variables on the retting time to optimize the overall fiber yield and production process.
</summary>
<dc:date>2022-07-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Autistic Experiences in the Workplace: Key Factors and Actionable Steps</title>
<link href="https://hdl.handle.net/1721.1/163972" rel="alternate"/>
<author>
<name>Nishith, Shruti</name>
</author>
<author>
<name>O’Brien, Amanda M.</name>
</author>
<author>
<name>Li, Cindy</name>
</author>
<author>
<name>Bungert, Lindsay</name>
</author>
<author>
<name>Oddis, Kyle</name>
</author>
<author>
<name>Riddle, Joseph</name>
</author>
<author>
<name>Gabrieli, John D. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163972</id>
<updated>2025-11-25T06:36:41Z</updated>
<published>2025-09-24T00:00:00Z</published>
<summary type="text">Improving Autistic Experiences in the Workplace: Key Factors and Actionable Steps
Nishith, Shruti; O’Brien, Amanda M.; Li, Cindy; Bungert, Lindsay; Oddis, Kyle; Riddle, Joseph; Gabrieli, John D. E.
Autistic adults have higher rates of unemployment and underemployment than non-autistic adults with and without disabilities. While previous work has highlighted factors specific to individuals and/or job sectors that serve as barriers or facilitators to autistic employment, the question of how to modify the workplace to best support autistic people remains under-researched. The present study utilized an ecological framework to investigate what workplace factors can be modified to improve autistic experiences and how these modifications may be enacted across different levels of the workplace ecosystem to promote autistic success. Autistic participants (N = 85) across employment sectors provided quantitative ratings and written descriptions of positive and negative factors related to their workplace experiences. Quantitative and qualitative analyses were used to examine which factors and overarching principles most impact employment. Actionable strategies to modify these factors were derived from participant responses and validated by autistic collaborators and neuroinclusion experts. On average, participants rated task training as having the most positive, and mental health as having the most negative, impact on their employment. Participants described four themes (acceptance, communication, autonomy, accommodations) that can be embedded in the work environment to improve experiences. Steps to improve autistic employment that can be enacted by stakeholders across levels of the workplace are provided. Autistic adults face multifaceted barriers to employment across levels of the workplace. Modifying the workplace itself, across multiple levels and stakeholders, may serve to improve autistic employment outcomes.
</summary>
<dc:date>2025-09-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Connectivity of Friends-and-Strangers Graphs on Complete Multipartite Graphs</title>
<link href="https://hdl.handle.net/1721.1/163971" rel="alternate"/>
<author>
<name>Zhu, Honglin</name>
</author>
<id>https://hdl.handle.net/1721.1/163971</id>
<updated>2025-11-25T06:36:39Z</updated>
<published>2024-12-31T00:00:00Z</published>
<summary type="text">The Connectivity of Friends-and-Strangers Graphs on Complete Multipartite Graphs
Zhu, Honglin
For simple graphs X and Y on n vertices, the friends-and-strangers graph FS(X, Y) is the graph whose vertex set consists of all bijections σ : V(X) → V(Y), where two bijections σ and σ′ are adjacent if and only if they agree on all but two adjacent vertices a, b ∈ V(X) such that σ(a), σ(b) ∈ V(Y) are adjacent in Y. Resolving a conjecture of Wang, Lu, and Chen, we completely characterize the connectedness of FS(X, Y) when Y is a complete bipartite graph. We further extend this result to when Y is a complete multipartite graph. We also determine when FS(X, Y) has exactly two connected components where X is bipartite and Y is a complete bipartite graph.
</summary>
<dc:date>2024-12-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reducing Aerodynamic Interference Through Layout Optimization of Symmetrically Cambered Wingsails: A Comparative Study of In-Line and Parallel Configurations</title>
<link href="https://hdl.handle.net/1721.1/163970" rel="alternate"/>
<author>
<name>van Reen, Stephan</name>
</author>
<author>
<name>Lin, Jianfeng</name>
</author>
<author>
<name>Niu, Jiqiang</name>
</author>
<author>
<name>Sharpe, Peter</name>
</author>
<author>
<name>Li, Xiaodong</name>
</author>
<author>
<name>Yao, Hua-Dong</name>
</author>
<id>https://hdl.handle.net/1721.1/163970</id>
<updated>2025-11-25T06:37:30Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Reducing Aerodynamic Interference Through Layout Optimization of Symmetrically Cambered Wingsails: A Comparative Study of In-Line and Parallel Configurations
van Reen, Stephan; Lin, Jianfeng; Niu, Jiqiang; Sharpe, Peter; Li, Xiaodong; Yao, Hua-Dong
Rigid wingsails are increasingly adopted for wind-assisted ship propulsion, with Symmetrically Cambered (SC) profiles identified as highly efficient for thrust generation. This study investigates installation layouts for multiple SC wingsails, focusing on aerodynamic interference that limits their performance. A fast 2D potential-flow panel method is employed and benchmarked against wind tunnel and 3D IDDES data. Two representative layouts are analyzed: triple-in-line (TL) and quad-in-parallel (QP). Layout optimization is performed using a genetic algorithm with distances between sails as design variables, constrained by the total installation span, at apparent wind angles (AWAs) of 60°, 90°, and 120°. Results show that thrust generation decreases progressively from upstream to downstream sails due to interference effects, with penalties of about 4–6% in the TL and up to 28% in the QP layout. The optimization improves performance only for the TL layout at 60°, while the QP layout shows negligible gains. Analysis of pressure distributions confirms that downstream sails suffer from reduced suction on the leading edge caused by upstream wakes. Overall, the TL layout demonstrates significantly higher aerodynamic reliability than the QP layout. These findings provide new insights into multi-sail configurations and highlight the importance of layout optimization in maximizing thrust efficiency.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>A rapid experimental workflow for studying melt track scaling in laser powder bed fusion using high-precision metal template substrates</title>
<link href="https://hdl.handle.net/1721.1/163969" rel="alternate"/>
<author>
<name>Weissbach, Reimar</name>
</author>
<author>
<name>Penny, Ryan W.</name>
</author>
<author>
<name>Hart, A. J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163969</id>
<updated>2025-11-25T06:36:30Z</updated>
<published>2025-06-26T00:00:00Z</published>
<summary type="text">A rapid experimental workflow for studying melt track scaling in laser powder bed fusion using high-precision metal template substrates
Weissbach, Reimar; Penny, Ryan W.; Hart, A. J.
Development and qualification of process parameters in laser powder bed fusion (LPBF) involves many variables. At the outset of development, whether transferring known parameters to a new machine, or exploring a new material, single-track and single-layer experiments are a convenient means of down-selecting key variables and exploring parameter scaling behavior. We present an experimental workflow for single-layer LPBF experiments using high-precision metal template substrates, overcoming challenges with precision single-layer alignment in LPBF systems and enabling efficient processing and cross-sectional analysis. Templates are fabricated using chemical etching and machining, and are characterized using optical profilometry and X-ray transmission imaging of powder layers. Using the etched templates, a single-track parameter study is performed in SS316 including three powder layer thicknesses, and spanning common laser melting modes (lack-of-fusion, conduction, and keyhole mode). Analysis of melt track geometries using automated image processing allows a scaling law to be applied to define the process window, quantifying the amount of material added with increasing powder layer thickness. Single-track results are verified with raster scanning experiments, showing the potential to transfer single-track results to full LPBF builds.
</summary>
<dc:date>2025-06-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Augmented intelligence should be good for medicine, if medicine is to remain good for us</title>
<link href="https://hdl.handle.net/1721.1/163968" rel="alternate"/>
<author>
<name>Idan, Daphna</name>
</author>
<author>
<name>Celi, Leo A.</name>
</author>
<author>
<name>Einav, Sharon</name>
</author>
<author>
<name>Frenkel, Amit</name>
</author>
<id>https://hdl.handle.net/1721.1/163968</id>
<updated>2025-11-25T06:36:35Z</updated>
<published>2025-09-29T00:00:00Z</published>
<summary type="text">Augmented intelligence should be good for medicine, if medicine is to remain good for us
Idan, Daphna; Celi, Leo A.; Einav, Sharon; Frenkel, Amit
Throughout history, the medical community has failed to address health disparities. Augmented intelligence (AI) is poised to cement these structural inequities permanently. The need to establish a triage process that ensures fair and equitable access to medical care, and to consider all patient populations equally researchable, should not overshadow the need to learn how best to exploit AI for furthering medical fairness and equity despite resource limitations. Open discussion of the shortcomings of medical AI, approaching medical AI development, testing, and implementation from a critical ethical perspective, constant testing and analysis of AI outputs, and human oversight in the loop constitute only the first part of ensuring augmented intelligence tools are equitably robust and free of bias.
</summary>
<dc:date>2025-09-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Principles and Impact of a Learning Analytics Dashboard: Evidence from a Randomized MOOC Experiment</title>
<link href="https://hdl.handle.net/1721.1/163967" rel="alternate"/>
<author>
<name>Borrella, Inma</name>
</author>
<author>
<name>Ponce-Cueto, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/163967</id>
<updated>2025-11-25T06:36:51Z</updated>
<published>2025-10-27T00:00:00Z</published>
<summary type="text">Design Principles and Impact of a Learning Analytics Dashboard: Evidence from a Randomized MOOC Experiment
Borrella, Inma; Ponce-Cueto, Eva
Learning Analytics Dashboards (LADs) are increasingly deployed to support self-regulated learning in online courses. Yet many existing dashboards lack strong theoretical grounding, contextual alignment, or actionable feedback, and some designs have been shown to inadvertently discourage learners through excessive social comparison or high inference costs. In this study, we designed and evaluated a LAD grounded in the COPES model of self-regulated learning and tailored to a credit-bearing Massive Open Online Course (MOOC) using a data-driven approach. We conducted a randomized controlled trial with 8745 learners, comparing a control group, a dashboard without feedback, and a dashboard with ARCS-framed actionable feedback. The results showed that the dashboard with feedback significantly increased learners’ likelihood of verification (i.e., paying for the certification track), with mixed effects on engagement and no measurable impact on final grades. These findings suggest that dashboards are not uniformly beneficial: while feedback-supported LADs can enhance motivation and persistence, dashboards that lack interpretive support may impose cognitive burdens without improving outcomes. This study contributes to the literature on learning analytics by (1) articulating the design principles for theoretically and contextually grounded LADs and (2) providing experimental evidence on their impact in authentic MOOC settings.
</summary>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of IL-17A and Combined Mechanical Injury on Meniscal Tissue Integrity In Vitro</title>
<link href="https://hdl.handle.net/1721.1/163966" rel="alternate"/>
<author>
<name>Ahrens, Greta</name>
</author>
<author>
<name>Gellhaus, Florian</name>
</author>
<author>
<name>Weitkamp, Jan-Tobias</name>
</author>
<author>
<name>Behrendt, Peter</name>
</author>
<author>
<name>Cossais, François</name>
</author>
<author>
<name>Rolauffs, Bernd</name>
</author>
<author>
<name>Grodzinsky, Alan J</name>
</author>
<author>
<name>Kurz, Bodo</name>
</author>
<id>https://hdl.handle.net/1721.1/163966</id>
<updated>2025-11-25T06:37:29Z</updated>
<published>2025-10-24T00:00:00Z</published>
<summary type="text">The Effect of IL-17A and Combined Mechanical Injury on Meniscal Tissue Integrity In Vitro
Ahrens, Greta; Gellhaus, Florian; Weitkamp, Jan-Tobias; Behrendt, Peter; Cossais, François; Rolauffs, Bernd; Grodzinsky, Alan J; Kurz, Bodo
Objectives: Meniscal integrity is crucial for knee joint stability and the prevention of osteoarthritis (OA) development. Recent studies suggested that mechanical overload and interleukin (IL)-17A may be important intertwined players in meniscal degeneration, but a direct impact of IL-17A on the meniscus has not been investigated. Therefore, the aim of this study was to analyze the effect of IL-17A on meniscal tissue with and without combined mechanical injury (MI). Methods: Meniscal explant disks (1 mm height, 3 mm diameter) were isolated from bovine menisci (preserving the native tibial superficial zone) and exposed to IL-17A [0–100 ng/mL] and/or MI (single compression, 50% strain, strain rate 1 mm/sec). After three days of incubation in a serum-free medium, the proteoglycan release (sGAG; DMMB assay), mRNA level of matrix-degrading enzymes (qRT-PCR), aggrecan degradation (NITEGE immunostaining), and cell death (histomorphometry of nuclear blebbing/apoptosis and condensed nuclei/unspecified cell death) were determined. Statistics: one- and two-way ANOVA with Tukey’s multiple comparisons or Kruskal–Wallis with post hoc testing. Results: IL-17A increased sGAG release in a significant, dose-dependent manner. MI also induced the release of sGAG significantly, but the combination with IL-17A showed the highest levels. Both IL-17A and MI individually affected the mRNA levels for ADAMTS4 and MMP-13 slightly, but the combination of both particularly induced a significant increase in mRNA levels. Signals for the ADAMTS4-related aggrecan neoepitope NITEGE were elevated by IL-17A in superficial areas of the excised tissue and by MI in superficial and deeper areas. The combination of both stimuli intensified this signal further. MI increased the number of cells with condensed nuclei significantly and induced apoptosis in a small proportion of cells. IL-17A had no significant impact on the amount of condensed or apoptotic nuclei.
Conclusions: Our findings emphasize an interaction between inflammatory cytokine IL-17A signaling and mechanical stress, since IL-17A induced matrix degeneration in meniscal tissue that was intensified in combination with trauma. The latter might create a post-traumatic environment that promotes meniscal degeneration and subsequently osteoarthritis progression.
</summary>
<dc:date>2025-10-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular hallmarks of excitatory and inhibitory neuronal resilience to Alzheimer’s disease</title>
<link href="https://hdl.handle.net/1721.1/163965" rel="alternate"/>
<author>
<name>Castanho, Isabel</name>
</author>
<author>
<name>Naderi Yeganeh, Pourya</name>
</author>
<author>
<name>Boix, Carles A.</name>
</author>
<author>
<name>Morgan, Sarah L.</name>
</author>
<author>
<name>Mathys, Hansruedi</name>
</author>
<author>
<name>Prokopenko, Dmitry</name>
</author>
<author>
<name>White, Bartholomew</name>
</author>
<author>
<name>Soto, Larisa M.</name>
</author>
<author>
<name>Pegoraro, Giulia</name>
</author>
<id>https://hdl.handle.net/1721.1/163965</id>
<updated>2025-11-25T06:36:38Z</updated>
<published>2025-10-01T00:00:00Z</published>
<summary type="text">Molecular hallmarks of excitatory and inhibitory neuronal resilience to Alzheimer’s disease
Castanho, Isabel; Naderi Yeganeh, Pourya; Boix, Carles A.; Morgan, Sarah L.; Mathys, Hansruedi; Prokopenko, Dmitry; White, Bartholomew; Soto, Larisa M.; Pegoraro, Giulia
Background A significant proportion of individuals maintain cognition despite extensive Alzheimer’s disease (AD) pathology, known as cognitive resilience. Understanding the molecular mechanisms that protect these individuals could reveal therapeutic targets for AD. Methods This study defines molecular and cellular signatures of cognitive resilience by integrating bulk RNA and single-cell transcriptomic data with genetics across multiple brain regions. We analyzed data from the Religious Orders Study and the Rush Memory and Aging Project (ROSMAP), including bulk RNA sequencing (n = 631 individuals) and multiregional single-nucleus RNA sequencing (n = 48 individuals). Subjects were categorized into AD, resilient, and control based on β-amyloid and tau pathology, and cognitive status. We identified and prioritized protected cell populations using whole-genome sequencing-derived genetic variants, transcriptomic profiling, and cellular composition. Results Transcriptomics and polygenic risk analysis position resilience as an intermediate AD state. Only GFAP and KLF4 expression distinguished resilience from controls at tissue level, whereas differential expression of genes involved in nucleic acid metabolism and signaling differentiated AD and resilient brains. At the cellular level, resilience was characterized by broad downregulation of LINGO1 expression and reorganization of chaperone pathways, specifically downregulation of Hsp90 and upregulation of Hsp40, Hsp70, and Hsp110 families in excitatory neurons. MEF2C, ATP8B1, and RELN emerged as key markers of resilient neurons. Excitatory neuronal subtypes in the entorhinal cortex (ATP8B+ and MEF2C-high) exhibited unique resilience signaling through activation of neurotrophin (BDNF-NTRK2, modulated by LINGO1) and angiopoietin (ANGPT2-TEK) pathways.
MEF2C+ inhibitory neurons were over-represented in resilient brains, and the expression of genes associated with rare genetic variants revealed vulnerable somatostatin (SST) cortical interneurons that survive in AD resilience. The maintenance of excitatory-inhibitory balance emerges as a key characteristic of resilience. Conclusions We have defined molecular and cellular hallmarks of cognitive resilience, an intermediate state in the AD continuum. Resilience mechanisms include preserved neuronal function, balanced network activity, and activation of neurotrophic survival signaling. Specific excitatory neuronal populations appear to play a central role in mediating cognitive resilience, while a subset of vulnerable interneurons likely provides compensation against AD-associated hyperexcitability. This study offers a framework to leverage natural protective mechanisms to mitigate neurodegeneration and preserve cognition in AD.
</summary>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nationwide Trends in Hospitalizations for Sudden Cardiac Arrest Before and During the COVID Outbreak</title>
<link href="https://hdl.handle.net/1721.1/163964" rel="alternate"/>
<author>
<name>Daoudi, Sarah</name>
</author>
<author>
<name>Furer, Ariel</name>
</author>
<author>
<name>John, Kevin</name>
</author>
<author>
<name>Chalhoub, Fadi</name>
</author>
<author>
<name>Chee, Jennifer</name>
</author>
<author>
<name>Infeld, Margaret</name>
</author>
<author>
<name>Elbaz-Greener, Gabby</name>
</author>
<author>
<name>Homoud, Munther</name>
</author>
<author>
<name>Udelson, James</name>
</author>
<author>
<name>Madias, Christopher</name>
</author>
<author>
<name>Rozen, Guy</name>
</author>
<id>https://hdl.handle.net/1721.1/163964</id>
<updated>2025-11-25T06:36:53Z</updated>
<published>2025-10-22T00:00:00Z</published>
<summary type="text">Nationwide Trends in Hospitalizations for Sudden Cardiac Arrest Before and During the COVID Outbreak
Daoudi, Sarah; Furer, Ariel; John, Kevin; Chalhoub, Fadi; Chee, Jennifer; Infeld, Margaret; Elbaz-Greener, Gabby; Homoud, Munther; Udelson, James; Madias, Christopher; Rozen, Guy
Background/Objectives: Sudden cardiac arrest (SCA) accounts for ~50% of cardiovascular mortality in the U.S. Cardiovascular complications are common in acute and post-acute COVID-19 infection. We aimed to examine nationwide trends in SCA-related hospitalizations in the United States before and during the COVID-19 outbreak. Methods: Using data from the National Inpatient Sample, we conducted a retrospective analysis of hospitalizations for SCA in the U.S. between 2016 and 2020. Sociodemographic and clinical characteristics and in-hospital mortality were compared between the pre-COVID (2016–2019) and COVID (2020) eras. Multivariable analysis was performed to identify factors associated with mortality. Results: Among a weighted total of 153,100 SCA hospitalizations between 2016 and 2020, the median age was 65 years, 62.7% were male, and 66.6% were white. There was a trend towards fewer hospitalizations in 2020 compared to prior years (n = 28,585 vs. n_average = 32,129, p = 0.07). In-hospital mortality remained unchanged between the pre-COVID and COVID eras (47.7% vs. 47.3%, p = 0.66). Increased mortality was associated with female sex (OR: 1.21; 95% CI: 1.15–1.28; p &lt; 0.001), non-white race (OR: 1.24; 95% CI: 1.15–1.28; p &lt; 0.001), history of renal failure (OR: 1.08; 95% CI: 1.02–1.15; p = 0.007), and diabetes (OR: 1.32; 95% CI: 1.25–1.39; p &lt; 0.001). In 2020, 1.5% of the study population was diagnosed with COVID-19 infection, which was found to be independently associated with increased in-hospital mortality (OR: 1.57; 95% CI: 1.27–1.95; p &lt; 0.001). Conclusions: In 2020, there was a trend towards a decrease in hospitalizations for SCA, while COVID-19 infection was independently associated with higher in-hospital mortality among patients admitted with SCA.
</summary>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biosensor development for single-cell detection of glucuronate</title>
<link href="https://hdl.handle.net/1721.1/163963" rel="alternate"/>
<author>
<name>Nash, Jennifer Kaczmarek</name>
</author>
<author>
<name>Prather, Kristala LJ</name>
</author>
<id>https://hdl.handle.net/1721.1/163963</id>
<updated>2025-11-25T06:36:58Z</updated>
<published>2023-06-16T00:00:00Z</published>
<summary type="text">Biosensor development for single-cell detection of glucuronate
Nash, Jennifer Kaczmarek; Prather, Kristala LJ
Recent work in biosensors has shown promise to enable high-throughput searches through large genetic libraries. However, just as physiological limitations and a lack of in-depth mechanistic knowledge can prevent us from achieving high titers in microbial systems, similar roadblocks can appear in the application of biosensors. Here, we characterized a previously developed transcription-factor (ExuR) based galacturonate biosensor for its other cognate ligand, glucuronate. Though we saw an ideal response to glucuronate from the biosensor under controlled, idealized experimental conditions, these results began to deviate from a well-behaved system when we explored the application of the sensor to different MIOX homologs. Through modifications to circuit architecture and culture conditions, we were able to decrease this variation and use these improved conditions to apply the biosensor for the separation of two closely related MIOX homologs.
</summary>
<dc:date>2023-06-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategies in engineering sustainable biochemical synthesis through microbial systems</title>
<link href="https://hdl.handle.net/1721.1/163962" rel="alternate"/>
<author>
<name>Song, Yoseb</name>
</author>
<author>
<name>Prather, Kristala LJ</name>
</author>
<id>https://hdl.handle.net/1721.1/163962</id>
<updated>2025-11-22T03:15:33Z</updated>
<published>2024-08-01T00:00:00Z</published>
<summary type="text">Strategies in engineering sustainable biochemical synthesis through microbial systems
Song, Yoseb; Prather, Kristala LJ
Growing environmental concerns and the urgency to address climate change have increased demand for the development of sustainable alternatives to fossil-derived fuels and chemicals. Microbial systems, possessing inherent biosynthetic capabilities, present a promising approach for achieving this goal. This review discusses the coupling of systems and synthetic biology to enable the elucidation and manipulation of microbial phenotypes for the production of chemicals that can substitute for petroleum-derived counterparts and contribute to advancing green biotechnology. The integration of artificial intelligence with metabolic engineering to facilitate precise and data-driven design of biosynthetic pathways is also discussed, along with the identification of current limitations and proposition of strategies for optimizing biosystems, thereby propelling the field of chemical biology towards sustainable chemical production.
</summary>
<dc:date>2024-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Office of the Vice Chancellor for Undergraduate and Graduate Education</title>
<link href="https://hdl.handle.net/1721.1/163961" rel="alternate"/>
<author>
<name>Darmofal, David</name>
</author>
<id>https://hdl.handle.net/1721.1/163961</id>
<updated>2025-11-22T03:18:43Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Office of the Vice Chancellor for Undergraduate and Graduate Education
Darmofal, David
This report contains the following sections: Executive Summary, OVC Headquarters, Office of Admissions, Career Advising &amp; Professional Development, Office of Experiential Learning, Edgerton Center, PKG Public Service Center, Undergraduate Research Opportunities Program, Concourse, Experimental Study Group, Terrascope, Office of Graduate Education, International Students Office, Registrar’s Office, Air Force ROTC, Army ROTC, Navy ROTC, Student Financial Services, Teaching + Learning Lab, and Undergraduate Advising Center.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Covert reciprocals: a scope-based analysis of reciprocal alternations</title>
<link href="https://hdl.handle.net/1721.1/163960" rel="alternate"/>
<author>
<name>Wehbe, Jad</name>
</author>
<id>https://hdl.handle.net/1721.1/163960</id>
<updated>2026-03-08T03:26:44Z</updated>
<published>2025-09-30T00:00:00Z</published>
<summary type="text">Covert reciprocals: a scope-based analysis of reciprocal alternations
Wehbe, Jad
This paper argues that the class of predicates that participate in reciprocal alternations, like the seemingly 1-place predicate hug in Jane and Mary hugged, should in fact be analyzed as 2-place predicates with a covert reciprocal in object position. The main challenge for this analysis is that there are truth-conditional differences between covert reciprocals and their overt counterparts. Focusing on a few case studies, this paper will argue that these seemingly lexical differences can be reanalyzed in terms of scope, allowing the differences to be systematically predicted once appropriate scope restrictions on covert reciprocals are established. More specifically, I propose that covert reciprocals are simply reciprocals that have to be bound at the lowest possible scope position. I show that these seemingly 1-place predicates behave just like overt reciprocals, modulo the low-scope requirement, for example giving rise to homogeneity and non-maximality. I therefore conclude that in order to account systematically for these inferences, covert reciprocals (at least the case studies that the paper considers) must be treated as having the same LFs as low-scope overt reciprocals.
</summary>
<dc:date>2025-09-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust resonant anomaly detection with NPLM</title>
<link href="https://hdl.handle.net/1721.1/163959" rel="alternate"/>
<author>
<name>Grosso, Gaia</name>
</author>
<author>
<name>Sengupta, Debajyoti</name>
</author>
<author>
<name>Golling, Tobias</name>
</author>
<author>
<name>Harris, Philip</name>
</author>
<id>https://hdl.handle.net/1721.1/163959</id>
<updated>2026-03-08T03:26:44Z</updated>
<published>2025-09-28T00:00:00Z</published>
<summary type="text">Robust resonant anomaly detection with NPLM
Grosso, Gaia; Sengupta, Debajyoti; Golling, Tobias; Harris, Philip
In this study, we investigate the application of the New Physics Learning Machine (NPLM) algorithm as an alternative to the standard CWoLa method with Boosted Decision Trees (BDTs), particularly for scenarios with rare signal events. NPLM offers an end-to-end approach to anomaly detection and hypothesis testing by utilizing an in-sample evaluation of a binary classifier to estimate a log-density ratio, which can improve detection performance without prior assumptions on the signal model. We examine two approaches: (1) an end-to-end NPLM application in cases with reliable background modelling and (2) an NPLM-based classifier used for signal selection when accurate background modelling is unavailable, with subsequent performance enhancement through a hyper-test on multiple values of the selection threshold. Our findings show that NPLM-based methods outperform BDT-based approaches in detection performance, particularly in low signal injection scenarios, while significantly reducing epistemic variance due to hyperparameter choices. This work highlights the potential of NPLM for robust resonant anomaly detection in particle physics, setting a foundation for future methods that enhance sensitivity and consistency under signal variability.
</summary>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tackling the Cardio-Kidney-Metabolic Burden in Cancer</title>
<link href="https://hdl.handle.net/1721.1/163958" rel="alternate"/>
<author>
<name>Nahle, Tarek</name>
</author>
<author>
<name>Shah, Viraj</name>
</author>
<author>
<name>Kunhiraman, Harikrishnan H.</name>
</author>
<author>
<name>Makram, Omar M.</name>
</author>
<author>
<name>Ahmed, Ola</name>
</author>
<author>
<name>Yerraguntla, Sandeep</name>
</author>
<author>
<name>Gopu, Gaurav</name>
</author>
<author>
<name>Vy, Jenny</name>
</author>
<author>
<name>Singh, Shivam</name>
</author>
<author>
<name>Borse, Tanvi</name>
</author>
<author>
<name>Kalinsky, Kevin</name>
</author>
<author>
<name>Deswal, Anita</name>
</author>
<author>
<name>Sadler, Diego</name>
</author>
<author>
<name>Chitalia, Vipul</name>
</author>
<author>
<name>Weintraub, Neal L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163958</id>
<updated>2025-11-22T03:15:06Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">Tackling the Cardio-Kidney-Metabolic Burden in Cancer
Nahle, Tarek; Shah, Viraj; Kunhiraman, Harikrishnan H.; Makram, Omar M.; Ahmed, Ola; Yerraguntla, Sandeep; Gopu, Gaurav; Vy, Jenny; Singh, Shivam; Borse, Tanvi; Kalinsky, Kevin; Deswal, Anita; Sadler, Diego; Chitalia, Vipul; Weintraub, Neal L.
Purpose of the Review This review aims to examine the clinical relevance of cardio-kidney-metabolic syndrome (CKMS) in oncology, highlighting its role as both a preexisting comorbidity and a consequence of cancer treatment. It also aims to integrate CKMS staging into personalized cancer care. Recent Findings CKMS is a progressive syndrome marked by dysfunction across cardiovascular, renal, and metabolic systems. Cancer therapies—particularly hormonal agents, immune checkpoint inhibitors, and chemotherapeutics—can accelerate or reveal underlying CKMS through inflammatory and metabolic pathways. Early risk stratification based on CKMS stage enables more effective monitoring, referral, and therapeutic strategies. A stage-based, multidisciplinary approach tailored to cancer type and comorbidity burden is essential for optimizing outcomes. Summary With rising multimorbidity among cancer patients, recognizing and addressing CKMS is increasingly critical. Routine CKMS assessment in oncology offers a pathway for earlier intervention and potentially altering its course. A comprehensive, individualized care model based on CKMS stage is necessary to mitigate CKMS-related complications and deliver high-quality, integrated cancer care.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Initial checkout of the Psyche electric propulsion system</title>
<link href="https://hdl.handle.net/1721.1/163957" rel="alternate"/>
<author>
<name>Snyder, John S.</name>
</author>
<author>
<name>Kelly, Charles L.</name>
</author>
<author>
<name>Garner, Charles</name>
</author>
<author>
<name>Bradley, Nicholas</name>
</author>
<author>
<name>Johnson, Ian</name>
</author>
<author>
<name>Corey, Ron</name>
</author>
<author>
<name>Ream, Jodie B.</name>
</author>
<author>
<name>Weiss, Benjamin P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163957</id>
<updated>2026-03-08T03:22:59Z</updated>
<published>2025-07-31T00:00:00Z</published>
<summary type="text">Initial checkout of the Psyche electric propulsion system
Snyder, John S.; Kelly, Charles L.; Garner, Charles; Bradley, Nicholas; Johnson, Ian; Corey, Ron; Ream, Jodie B.; Weiss, Benjamin P.
NASA’s Psyche spacecraft launched on October 13, 2023, and soon afterward the mission operations team began spacecraft initial checkout activities. For the electric propulsion system, the feed system and thruster gimbals were first prepared and then the rest of the subsystem completed an initial operations test during thruster bakeout. Thrust for each thruster was measured across the full range of operating powers and was in good agreement with pre-flight expectations. A weeklong test of the spacecraft and mission operations plan during thrusting activities was successful, but a thruster burn-in phenomenon was observed during full power operation that was longer than expected based on previous flight history. Data accumulated during the initial checkout activities shows that this burn-in behavior is different for each thruster and suggests that it is a result of the thruster discharge transitioning between two different plasma modes that can be mitigated by reducing discharge power and by adjusting the thruster magnet current. At the conclusion of the checkout activities, the subsystem had accumulated 357 h of thrusting operations while consuming 18.5 kg of propellant and was fully ready to begin the cruise phase of the mission.
</summary>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Derandomizing Logspace With a Small Shared Hard Drive</title>
<link href="https://hdl.handle.net/1721.1/163956" rel="alternate"/>
<author>
<name>Pyne, Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/163956</id>
<updated>2026-03-08T03:26:39Z</updated>
<published>2025-09-22T00:00:00Z</published>
<summary type="text">Derandomizing Logspace With a Small Shared Hard Drive
Pyne, Edward
We obtain new catalytic algorithms for space-bounded derandomization. In the catalytic computation model introduced by Buhrman, Cleve, Koucký, Loff, and Speelman (STOC 2013), we are given a small worktape and a larger catalytic tape that has an arbitrary initial configuration. We may edit this tape, but it must be exactly restored to its initial configuration at the completion of the computation. We prove that BPSPACE[S] ⊆ CSPACE[S, S²], where BPSPACE[S] corresponds to randomized space-S computation and CSPACE[S, C] corresponds to catalytic algorithms that use O(S) bits of workspace and O(C) bits of catalytic space. Previously, only BPSPACE[S] ⊆ CSPACE[S, 2^O(S)] was known. In fact, we prove a general tradeoff: for every α ∈ [1, 1.5], BPSPACE[S] ⊆ CSPACE[S^α, S^(3−α)]. We do not use the algebraic techniques of prior work on catalytic computation. Instead, we develop an algorithm that branches based on whether the catalytic tape is conditionally random, and instantiate this primitive in a recursive framework. Our result gives an alternate proof of the best known time-space tradeoff for BPSPACE[S], due to Cai, Chakaravarthy, and van Melkebeek (Theory Comput. Sys. 2006). As a final application, we extend our results to solve search problems in CSPACE[S, S²]. As far as we are aware, this constitutes the first study of search problems in the catalytic computing model.
</summary>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for Real Estate</title>
<link href="https://hdl.handle.net/1721.1/163955" rel="alternate"/>
<author>
<name>Geltner, David M</name>
</author>
<id>https://hdl.handle.net/1721.1/163955</id>
<updated>2025-11-22T03:17:40Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for Real Estate
Geltner, David M
This report contains the following sections: Strategic Planning Initiative; Education; Research Activities; Professional Education and Industry Interface; Membership; Alumni Outreach; Administration
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Minority Graduate Students by Course and Year</title>
<link href="https://hdl.handle.net/1721.1/163954" rel="alternate"/>
<author>
<name>MIT Registrar's Office</name>
</author>
<id>https://hdl.handle.net/1721.1/163954</id>
<updated>2025-11-22T03:17:13Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Minority Graduate Students by Course and Year
MIT Registrar's Office
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Minority Undergraduates by Course and Year</title>
<link href="https://hdl.handle.net/1721.1/163953" rel="alternate"/>
<author>
<name>MIT Registrar's Office</name>
</author>
<id>https://hdl.handle.net/1721.1/163953</id>
<updated>2025-11-22T03:16:55Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Minority Undergraduates by Course and Year
MIT Registrar's Office
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Vice President for Human Resources and Equal Opportunity Officer</title>
<link href="https://hdl.handle.net/1721.1/163952" rel="alternate"/>
<author>
<name>Avakian, Laura</name>
</author>
<author>
<name>Lima, Philip</name>
</author>
<author>
<name>Foley, Shawn</name>
</author>
<author>
<name>Roberts, Barbara</name>
</author>
<author>
<name>Weiss, Ellen</name>
</author>
<author>
<name>Jablon, Barbara</name>
</author>
<author>
<name>Paulding, Claire</name>
</author>
<author>
<name>Culver, Kande</name>
</author>
<author>
<name>Pierce, Marianna</name>
</author>
<author>
<name>Gray, Margaret Ann</name>
</author>
<author>
<name>Williams, Wendy</name>
</author>
<author>
<name>Jacobs, Annette</name>
</author>
<author>
<name>Friscino, Deborah</name>
</author>
<author>
<name>Murray, Mary</name>
</author>
<author>
<name>O’Keefe, Eileen</name>
</author>
<author>
<name>Joyce, Shelagh</name>
</author>
<author>
<name>Wattendorf, Maryann</name>
</author>
<author>
<name>Simpson, Rae</name>
</author>
<author>
<name>Simons, Kathy</name>
</author>
<id>https://hdl.handle.net/1721.1/163952</id>
<updated>2025-11-22T03:17:47Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Vice President for Human Resources and Equal Opportunity Officer
Avakian, Laura; Lima, Philip; Foley, Shawn; Roberts, Barbara; Weiss, Ellen; Jablon, Barbara; Paulding, Claire; Culver, Kande; Pierce, Marianna; Gray, Margaret Ann; Williams, Wendy; Jacobs, Annette; Friscino, Deborah; Murray, Mary; O’Keefe, Eileen; Joyce, Shelagh; Wattendorf, Maryann; Simpson, Rae; Simons, Kathy
This report contains the following sections: Highlights of AY2004; Labor and Employee Relations Issues; Staff Diversity, Affirmative Action, and Equal Opportunity Management Team; Benefits Services; Disabilities Services Office; Retirement Programs Office; Compensation; Human Resources Information Systems; MIT Rewards and Recognition; Labor and Employee Relations; Organization and Employee Development; Organization Development Services; Center for Career Planning at MIT; Professional Development Programs; Staffing Services; MIT Medical; Center for Work, Family &amp; Personal Life
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Operations: Environmental Programs Office and Environment, Health, and Safety Office</title>
<link href="https://hdl.handle.net/1721.1/163951" rel="alternate"/>
<author>
<name>Keith, Jamie Lewis</name>
</author>
<author>
<name>Van Schalkwyk, William</name>
</author>
<author>
<name>DiBerardinis, Lou</name>
</author>
<id>https://hdl.handle.net/1721.1/163951</id>
<updated>2025-11-22T03:18:23Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Operations: Environmental Programs Office and Environment, Health, and Safety Office
Keith, Jamie Lewis; Van Schalkwyk, William; DiBerardinis, Lou
This report contains the following sections: Highlights; Positive EHS Initiatives and Collaborations; Communications, Outreach, and Awareness; Security and Emergency Preparedness Programs; Regulatory Interactions
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Appendix C: Personnel Changes</title>
<link href="https://hdl.handle.net/1721.1/163950" rel="alternate"/>
<author>
<name>MIT Registrar's Office</name>
</author>
<id>https://hdl.handle.net/1721.1/163950</id>
<updated>2025-11-22T03:17:46Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Appendix C: Personnel Changes
MIT Registrar's Office
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Association of Alumni and Alumnae of MIT</title>
<link href="https://hdl.handle.net/1721.1/163949" rel="alternate"/>
<author>
<name>Garvin HM, Elizabeth A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163949</id>
<updated>2025-11-22T03:16:50Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Association of Alumni and Alumnae of MIT
Garvin HM, Elizabeth A.
This report contains the following sections: By the Numbers; Organizational Change; Special Initiatives; The Alumni Fund; Fund Staff Restructuring; Responsibilities and Goals; Alumni Fund Strategic Initiatives; Fund Results; MIT Capital Campaign; Alumni Activities; Alumni Clubs and Regional Programs; Affinity Groups; Tech Reunions and Class Programs; Enterprise Forum; MIT Parents Association; Student and Young Alumni Program; Alumni Education; Online Services; Alumni Career Services; Travel Program; Volunteers, Leadership, and Governance; Association Board of Directors; National Selection Committee; National Boards and Directors; Alumni Leadership Conference; Association Volunteer Awards; Communications Department; Web and Electronic Communications; Operations and Information Systems; Web-Based Systems Achievements; Office of Records; Process Improvements; Personnel and Operations Update; Association Staff; Renovations;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Operations: Senior Counsel’s Office</title>
<link href="https://hdl.handle.net/1721.1/163948" rel="alternate"/>
<author>
<name>Keith, Jamie Lewis</name>
</author>
<id>https://hdl.handle.net/1721.1/163948</id>
<updated>2025-11-22T03:18:03Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Operations: Senior Counsel’s Office
Keith, Jamie Lewis
This report contains the following sections: Highlights; Serving Education and the Nation through Diversity; Service in the Post-9/11 Environment; Supporting Human Resources Management; Supporting Research and Other Compliance Initiatives; Risk Management and Litigation;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Student Support Services</title>
<link href="https://hdl.handle.net/1721.1/163947" rel="alternate"/>
<author>
<name>Simonis, Jackie</name>
</author>
<author>
<name>Henderson, Arnold, Jr.</name>
</author>
<author>
<name>Randolph, Robert M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163947</id>
<updated>2025-11-22T03:17:53Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Student Support Services
Simonis, Jackie; Henderson, Arnold, Jr.; Randolph, Robert M.
This report contains the following sections: Counseling and Support Services; Religious Life at MIT
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Student Life Programs</title>
<link href="https://hdl.handle.net/1721.1/163946" rel="alternate"/>
<author>
<name>Baker, Barbara A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163946</id>
<updated>2025-11-22T03:18:10Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Student Life Programs
Baker, Barbara A.
This report contains the following sections: Summary Statement; Highlights of the Year; New Initiatives; Summary of Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Housing</title>
<link href="https://hdl.handle.net/1721.1/163945" rel="alternate"/>
<author>
<name>Nilsson, Karen A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163945</id>
<updated>2025-11-22T03:18:10Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Housing
Nilsson, Karen A.
This report contains the following sections: Summary Statement; Highlights of the Year; New Initiatives; Housing Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Administrative Services</title>
<link href="https://hdl.handle.net/1721.1/163944" rel="alternate"/>
<author>
<name>Capone, Laura</name>
</author>
<author>
<name>Salamone, Frank</name>
</author>
<id>https://hdl.handle.net/1721.1/163944</id>
<updated>2025-11-22T03:18:22Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Administrative Services
Capone, Laura; Salamone, Frank
This report contains the following sections: Summary Statement; Highlights of the Year; New Initiatives; Summary of Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean for Student Life</title>
<link href="https://hdl.handle.net/1721.1/163943" rel="alternate"/>
<author>
<name>Benedict, Larry G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163943</id>
<updated>2025-11-22T03:17:54Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean for Student Life
Benedict, Larry G.
This report contains the following sections: Accomplishments;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Edgerton Center</title>
<link href="https://hdl.handle.net/1721.1/163942" rel="alternate"/>
<author>
<name>Vandiver, J. Kim</name>
</author>
<id>https://hdl.handle.net/1721.1/163942</id>
<updated>2025-11-22T03:18:20Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Edgerton Center
Vandiver, J. Kim
This report contains the following sections: Service Learning; Curricular Initiative for Development Design; IDEAS Competition; Ongoing Programs; Staff Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Graduate Student Council</title>
<link href="https://hdl.handle.net/1721.1/163941" rel="alternate"/>
<author>
<name>Singh, Barun</name>
</author>
<author>
<name>Hernandez, Hector</name>
</author>
<author>
<name>Wong, Lucy</name>
</author>
<author>
<name>Villacorta, Virgilio</name>
</author>
<id>https://hdl.handle.net/1721.1/163941</id>
<updated>2025-11-22T03:16:53Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Graduate Student Council
Singh, Barun; Hernandez, Hector; Wong, Lucy; Villacorta, Virgilio
This report contains the following sections: Graduate Student Income and Expenses; Housing, Safety, and Transportation; New Groups and Projects; Other Initiatives and Programs; Collaborations; Historic Events and Activities;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Chancellor</title>
<link href="https://hdl.handle.net/1721.1/163940" rel="alternate"/>
<author>
<name>Clay, Phillip</name>
</author>
<id>https://hdl.handle.net/1721.1/163940</id>
<updated>2025-11-22T03:17:25Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Chancellor
Clay, Phillip
This report contains the following sections: Highlights; Other Areas; The Cambridge–MIT Institute; The MIT–Ford Alliance; Faculty Vote on Reserve Officers’ Training Corps;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, George R. Wallace, Jr., Astrophysical Observatory</title>
<link href="https://hdl.handle.net/1721.1/163939" rel="alternate"/>
<author>
<name>Elliot, James L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163939</id>
<updated>2025-11-22T03:17:15Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, George R. Wallace, Jr., Astrophysical Observatory
Elliot, James L.
This report contains the following sections: Facilities; Research and Student Work;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Chemistry</title>
<link href="https://hdl.handle.net/1721.1/163938" rel="alternate"/>
<author>
<name>Lippard, Stephen J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163938</id>
<updated>2025-11-22T03:18:00Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Chemistry
Lippard, Stephen J.
This report contains the following sections: Major Faculty Awards and Honors; Infrastructure Developments; Education; Graduate Student Awards and Honors; Named Lectureships; Selected Research Highlights;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean, Department of Biology</title>
<link href="https://hdl.handle.net/1721.1/163937" rel="alternate"/>
<author>
<name>Kaiser, Chris A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163937</id>
<updated>2025-11-22T03:16:56Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean, Department of Biology
Kaiser, Chris A.
This report contains the following sections: Educational Activities; Student Awards; Biology Department Awards; Degrees; Research; Personnel; Faculty Awards;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean, School of Science</title>
<link href="https://hdl.handle.net/1721.1/163936" rel="alternate"/>
<author>
<name>Silbey, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163936</id>
<updated>2025-11-22T03:17:59Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean, School of Science
Silbey, Robert J.
This report contains the following sections: New Initiatives; Building and Strengthening a Diverse Community; Faculty Awards; Staff Awards; Academic Program Statistics; Fundraising; Research Volume
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Program in Women's Studies</title>
<link href="https://hdl.handle.net/1721.1/163935" rel="alternate"/>
<author>
<name>Wood, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/163935</id>
<updated>2025-11-22T03:18:14Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Program in Women's Studies
Wood, Elizabeth
This report contains the following sections: Program Administration, Curriculum and Faculty Development, Programming Highlights 2003–2004, Research, Publications, and Service; Affirmative Action Goals and Successes; Future Plans
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Literature</title>
<link href="https://hdl.handle.net/1721.1/163934" rel="alternate"/>
<author>
<name>Donaldson, Peter S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163934</id>
<updated>2025-11-22T03:17:16Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Literature
Donaldson, Peter S.
This report contains the following sections: Highlights of the Year; Academic Program and Student Enrollment; Research and Publication; Conferences and Invited Addresses; Electronic Projects and Sponsored Research; Service and Committees;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, History</title>
<link href="https://hdl.handle.net/1721.1/163933" rel="alternate"/>
<author>
<name>Ritvo, Harriet</name>
</author>
<id>https://hdl.handle.net/1721.1/163933</id>
<updated>2025-11-22T03:17:26Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, History
Ritvo, Harriet
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Foreign Languages and Literatures</title>
<link href="https://hdl.handle.net/1721.1/163932" rel="alternate"/>
<author>
<name>Garrels, Elizabeth J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163932</id>
<updated>2025-11-22T03:17:19Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Foreign Languages and Literatures
Garrels, Elizabeth J.
This report contains the following sections: Highlights of the Year; Research and Publications; Conferences and Presentations; MIT Service and Enrollments;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Terrascope</title>
<link href="https://hdl.handle.net/1721.1/163931" rel="alternate"/>
<author>
<name>Hodges, Kip</name>
</author>
<author>
<name>Chisholm, Penny</name>
</author>
<id>https://hdl.handle.net/1721.1/163931</id>
<updated>2025-11-22T03:17:58Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Terrascope
Hodges, Kip; Chisholm, Penny
This report contains the following sections: Program Highlights; New Developments;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Program in Polymer Science and Technology</title>
<link href="https://hdl.handle.net/1721.1/163930" rel="alternate"/>
<author>
<name>McKinley, Gareth H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163930</id>
<updated>2025-11-22T03:17:02Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Program in Polymer Science and Technology
McKinley, Gareth H.
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Lemelson–MIT Program</title>
<link href="https://hdl.handle.net/1721.1/163929" rel="alternate"/>
<author>
<name>Finn, Kristin</name>
</author>
<id>https://hdl.handle.net/1721.1/163929</id>
<updated>2025-11-22T03:17:41Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Lemelson–MIT Program
Finn, Kristin
This report contains the following sections: The Invention Study: Workshops and Assembly; Annual Invention Awards; Outreach Activities and Events;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Deshpande Center for Technological Innovation</title>
<link href="https://hdl.handle.net/1721.1/163928" rel="alternate"/>
<author>
<name>Holly, Krisztina</name>
</author>
<id>https://hdl.handle.net/1721.1/163928</id>
<updated>2025-11-22T03:17:24Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Deshpande Center for Technological Innovation
Holly, Krisztina
This report contains the following sections: Highlights; Deshpande Grant Awards; Innovation Grants; Catalyst Program; Deshpande Center Events; Administrative Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Nuclear Engineering</title>
<link href="https://hdl.handle.net/1721.1/163927" rel="alternate"/>
<author>
<name>Hutchinson, Ian H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163927</id>
<updated>2025-11-22T03:17:20Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Nuclear Engineering
Hutchinson, Ian H.
This report contains the following sections: Undergraduate Program; Graduate Program; Faculty Awards, Honors, and Activities; Research; Student Awards and Activities;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Chemical Engineering</title>
<link href="https://hdl.handle.net/1721.1/163926" rel="alternate"/>
<author>
<name>Armstrong, Robert C.</name>
</author>
<author>
<name>Rutledge, Gregory C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163926</id>
<updated>2025-11-22T03:17:12Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Chemical Engineering
Armstrong, Robert C.; Rutledge, Gregory C.
This report contains the following sections: Undergraduate Education; Graduate Education; Faculty Notes; Research Highlights; Annual Lectures, Seminars, and Symposium; Departmental Awards;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Aeronautics and Astronautics</title>
<link href="https://hdl.handle.net/1721.1/163925" rel="alternate"/>
<author>
<name>Harris, Wesley</name>
</author>
<id>https://hdl.handle.net/1721.1/163925</id>
<updated>2025-11-22T03:18:19Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Aeronautics and Astronautics
Harris, Wesley
This report contains the following sections: Undergraduate Awards; Faculty Awards; Staff Awards; Communication for Challenging Environments; Complex Systems Research Lab; Embedded Systems Laboratory; System-on-Chip Design Approaches; Verification and Validation; Operating System Design; Man-Vehicle Laboratory; Massachusetts Space Grant Consortium; Space Systems Laboratory; Systems Analysis Tools (Professor David Miller); Generalized Information Network Analysis; Dynamics, Optics, Controls, and Structures; Uncertainty Propagation in System Modeling; Spaceflight Dynamics and Control Technologies;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Urban Studies and Planning</title>
<link href="https://hdl.handle.net/1721.1/163924" rel="alternate"/>
<author>
<name>Vale, Lawrence J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163924</id>
<updated>2025-11-22T03:17:32Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Urban Studies and Planning
Vale, Lawrence J.
This report contains the following sections: Progress on Departmental Priorities; Faculty Achievements; DUSP’s Contribution to MIT–Wide Efforts; Research and Teaching on Urban Planning; City Design and Development; Environmental Policy Group; Housing Community and Economic Development; International Development and Regional Planning; Graduate Degree Program Enrollment and Activities; Undergraduate Program Activities; Awards; Outreach to Alumni; International Connection; Community Partnerships;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Program in Media Arts and Sciences</title>
<link href="https://hdl.handle.net/1721.1/163923" rel="alternate"/>
<author>
<name>Mitchell, William J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163923</id>
<updated>2025-11-22T03:18:15Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Program in Media Arts and Sciences
Mitchell, William J.
This report contains the following sections: Education; Faculty and Staff; Students
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Architecture</title>
<link href="https://hdl.handle.net/1721.1/163922" rel="alternate"/>
<author>
<name>Anderson, Stanford</name>
</author>
<id>https://hdl.handle.net/1721.1/163922</id>
<updated>2025-11-22T03:17:42Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Architecture
Anderson, Stanford
This report contains the following sections: Architectural Design; Building Technology; History, Theory, and Criticism; Computation; Undergraduate Program; Aga Khan Program for Islamic Architecture; Department of Architecture Enrollments;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Museum Loan Network</title>
<link href="https://hdl.handle.net/1721.1/163921" rel="alternate"/>
<author>
<name>Gross, Lori</name>
</author>
<id>https://hdl.handle.net/1721.1/163921</id>
<updated>2025-11-22T03:17:09Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Museum Loan Network
Gross, Lori
This report contains the following sections: Program Development; Website; Press and Promotion; Grants; Future Plans; Personnel Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Francis Bitter Magnet Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163920" rel="alternate"/>
<author>
<name>Griffin, Robert G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163920</id>
<updated>2025-11-22T03:18:17Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Francis Bitter Magnet Laboratory
Griffin, Robert G.
This report contains the following sections: Research Activities; Facilities; Education and Personnel; Future Plans;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Broad Institute</title>
<link href="https://hdl.handle.net/1721.1/163919" rel="alternate"/>
<author>
<name>Lander, Eric S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163919</id>
<updated>2025-11-22T03:17:34Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Broad Institute
Lander, Eric S.
This report contains the following sections: Mission; Research; Scientific Programs; Faculty; Core Members; Associate Members; Facility
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for Biomedical Engineering</title>
<link href="https://hdl.handle.net/1721.1/163918" rel="alternate"/>
<author>
<name>Grodzinsky, Alan J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163918</id>
<updated>2025-11-22T03:16:59Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for Biomedical Engineering
Grodzinsky, Alan J.
This report contains the following sections: Major Research Thrust Areas; Core Facilities in 500 Technology Square; Major New Initiatives;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, MIT OpenCourseWare</title>
<link href="https://hdl.handle.net/1721.1/163917" rel="alternate"/>
<author>
<name>Margulies, Anne</name>
</author>
<id>https://hdl.handle.net/1721.1/163917</id>
<updated>2025-11-22T03:17:38Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, MIT OpenCourseWare
Margulies, Anne
This report contains the following sections: Achievements; Publication Process; Organization; Technology; Communications; Evaluation; Awards; Finances and Funding; Personnel Information
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Ombuds Office</title>
<link href="https://hdl.handle.net/1721.1/163916" rel="alternate"/>
<author>
<name>Robinson, Toni P.</name>
</author>
<author>
<name>Rowe, Mary P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163916</id>
<updated>2025-11-22T03:18:00Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Ombuds Office
Robinson, Toni P.; Rowe, Mary P.
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Women Students by Course and Year</title>
<link href="https://hdl.handle.net/1721.1/163915" rel="alternate"/>
<author>
<name>MIT Registrar's Office</name>
</author>
<id>https://hdl.handle.net/1721.1/163915</id>
<updated>2025-11-22T03:18:22Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Women Students by Course and Year
MIT Registrar's Office
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Treasurer of the Corporation</title>
<link href="https://hdl.handle.net/1721.1/163914" rel="alternate"/>
<author>
<name>Bufferd, Allan S.</name>
</author>
<author>
<name>Stone, Theresa M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163914</id>
<updated>2025-11-22T03:18:01Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Treasurer of the Corporation
Bufferd, Allan S.; Stone, Theresa M.
This report contains the following sections: Investment Committee
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Finance: Office of the Controller</title>
<link href="https://hdl.handle.net/1721.1/163913" rel="alternate"/>
<author>
<name>Morgan, James L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163913</id>
<updated>2025-11-22T03:17:28Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Finance: Office of the Controller
Morgan, James L.
This report contains the following sections: Highlights; Accounts Payable; Institute and Enterprise Reporting; Insurance Office; Property Office; Lincoln Fiscal Office; Business Process Changes; Buy/Pay; Payroll; Travel; Cashier; Representative Metrics;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Finance: Office of Budget and Financial Planning</title>
<link href="https://hdl.handle.net/1721.1/163912" rel="alternate"/>
<author>
<name>Ruiz, Israel</name>
</author>
<author>
<name>Warner, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/163912</id>
<updated>2025-11-22T03:17:45Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Finance: Office of Budget and Financial Planning
Ruiz, Israel; Warner, Margaret
This report contains the following sections: Current Goals and Objectives; SAPBUD: Budgeting in SAP; Business and Organizational Modeling; Stochastic Financial Planning; Capital Planning and Budgeting Model Development; Accomplishment; Administration Initiatives; Future Plans; Personnel Information
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Finance: Audit Division</title>
<link href="https://hdl.handle.net/1721.1/163911" rel="alternate"/>
<author>
<name>Fisher, Deborah L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163911</id>
<updated>2025-11-22T03:17:37Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Finance: Audit Division
Fisher, Deborah L.
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Executive Vice President</title>
<link href="https://hdl.handle.net/1721.1/163910" rel="alternate"/>
<author>
<name>Curry, John R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163910</id>
<updated>2025-11-22T03:18:24Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Executive Vice President
Curry, John R.
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Enterprise Services</title>
<link href="https://hdl.handle.net/1721.1/163909" rel="alternate"/>
<author>
<name>Graham, Louis W., Jr.</name>
</author>
<author>
<name>Walsh, Phillip J.</name>
</author>
<author>
<name>Berlin, Richard D.</name>
</author>
<author>
<name>Dimond, Steven M.</name>
</author>
<author>
<name>Fitzgerald, Michael</name>
</author>
<author>
<name>Michaud, Daniel</name>
</author>
<author>
<name>Brutti, Larry</name>
</author>
<id>https://hdl.handle.net/1721.1/163909</id>
<updated>2025-11-22T03:17:26Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Enterprise Services
Graham, Louis W., Jr.; Walsh, Phillip J.; Berlin, Richard D.; Dimond, Steven M.; Fitzgerald, Michael; Michaud, Daniel; Brutti, Larry
This report contains the following sections: Audio Visual Services; Campus Activities Complex; Campus Dining; Copy Technology Centers; MIT Endicott House; MIT Card Office; Parking and Transportation; TechCASH
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of International Students by Course and Year</title>
<link href="https://hdl.handle.net/1721.1/163908" rel="alternate"/>
<author>
<name>MIT Registrar's Office</name>
</author>
<id>https://hdl.handle.net/1721.1/163908</id>
<updated>2025-11-22T03:18:16Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of International Students by Course and Year
MIT Registrar's Office
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Students by Course and Year</title>
<link href="https://hdl.handle.net/1721.1/163907" rel="alternate"/>
<author>
<name>MIT Registrar's Office</name>
</author>
<id>https://hdl.handle.net/1721.1/163907</id>
<updated>2025-11-22T03:17:56Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, Appendix B: Enrollment Statistics, Fall 2003, Number of Students by Course and Year; Report to the President for year ended June 30, 2004, Appendix B: Enrollment Statistics, Fall 2003, Number of Students by Course and Year
MIT Registrar's Office
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Appendix A: Degrees Awarded 2003-2004</title>
<link href="https://hdl.handle.net/1721.1/163906" rel="alternate"/>
<author>
<name>MIT Registrar's Office</name>
</author>
<id>https://hdl.handle.net/1721.1/163906</id>
<updated>2025-11-22T03:17:06Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Appendix A: Degrees Awarded 2003-2004
MIT Registrar's Office
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Operations: MIT Police</title>
<link href="https://hdl.handle.net/1721.1/163905" rel="alternate"/>
<author>
<name>DiFava, John</name>
</author>
<id>https://hdl.handle.net/1721.1/163905</id>
<updated>2025-11-22T03:18:25Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Operations: MIT Police
DiFava, John
This report contains the following sections: Patrol Division; Professional Standards; Community Policing; Criminal Activity; Department Highlights
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of Student Discipline</title>
<link href="https://hdl.handle.net/1721.1/163904" rel="alternate"/>
<author>
<name>Tyrell, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/163904</id>
<updated>2025-11-22T03:17:00Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of Student Discipline
Tyrell, Steven
This report contains the following sections: Summary Statement; Highlights of the Year; New Initiatives; Summary of Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of Academic Services</title>
<link href="https://hdl.handle.net/1721.1/163903" rel="alternate"/>
<author>
<name>Vandiver, J. Kim</name>
</author>
<id>https://hdl.handle.net/1721.1/163903</id>
<updated>2025-11-22T03:18:12Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of Academic Services
Vandiver, J. Kim
This report contains the following sections: Academic Information and Communication; Academic Resource Center; Faculty and Alumni Support; Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Cambridge–MIT Institute</title>
<link href="https://hdl.handle.net/1721.1/163902" rel="alternate"/>
<author>
<name>Crawley, Ed</name>
</author>
<id>https://hdl.handle.net/1721.1/163902</id>
<updated>2025-11-22T03:18:04Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Cambridge–MIT Institute
Crawley, Ed
This report contains the following sections: Noteworthy Events; Educational Programs; Research Programs and Industry; Special Interest Groups; Knowledge Exchange Activities; Future Plans;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Operations: Facilities</title>
<link href="https://hdl.handle.net/1721.1/163901" rel="alternate"/>
<author>
<name>Sirianni, Victoria</name>
</author>
<id>https://hdl.handle.net/1721.1/163901</id>
<updated>2025-11-22T03:17:40Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Operations: Facilities
Sirianni, Victoria
This report contains the following sections: Infrastructure Renewal; Shared Services Center; Client Orientation; Collaboration; Sustainability; Accountability; Professionalism; Capital Projects; Design and Construction Services; Continuation of Project Controls and Information Tracking Efforts; Finance and Accounting; Operations; Utilities; Administration; Information Technology; Personnel Changes;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Community Development and Substance Abuse Programs</title>
<link href="https://hdl.handle.net/1721.1/163900" rel="alternate"/>
<author>
<name>Trujillo, Daniel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163900</id>
<updated>2025-11-22T03:17:39Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Community Development and Substance Abuse Programs
Trujillo, Daniel A.
This report contains the following sections: Summary Statement; Highlights of the Year; New Initiatives; Summary of Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Experimental Study Group</title>
<link href="https://hdl.handle.net/1721.1/163899" rel="alternate"/>
<author>
<name>Slocum, Alexander</name>
</author>
<author>
<name>Dourmashkin, Peter</name>
</author>
<author>
<name>Sweet, Holly</name>
</author>
<id>https://hdl.handle.net/1721.1/163899</id>
<updated>2025-11-22T03:17:04Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Experimental Study Group
Slocum, Alexander; Dourmashkin, Peter; Sweet, Holly
This report contains the following sections: Student Statistics; Staff and Faculty; Academic Initiatives; Awards; Alumni Involvement; Future Developments;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Physics</title>
<link href="https://hdl.handle.net/1721.1/163898" rel="alternate"/>
<author>
<name>Kastner, Marc</name>
</author>
<id>https://hdl.handle.net/1721.1/163898</id>
<updated>2025-11-22T03:17:12Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Physics
Kastner, Marc
This report contains the following sections: Honors and Awards; Education; Diversity; Pappalardo Fellowships in Physics; Research Highlights;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for International Studies</title>
<link href="https://hdl.handle.net/1721.1/163897" rel="alternate"/>
<author>
<name>Samuels, Richard J.</name>
</author>
<author>
<name>Van Evera, Stephen</name>
</author>
<author>
<name>Makinson, Carolyn</name>
</author>
<id>https://hdl.handle.net/1721.1/163897</id>
<updated>2025-11-22T03:18:21Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for International Studies
Samuels, Richard J.; Van Evera, Stephen; Makinson, Carolyn
This report contains the following sections: MIT Security Studies Program; MIT International Science and Technology Initiative; OpenCourseWare; MIT Mexico Program; HASS Minor in Applied International Studies; Program on Human Rights and Justice; The Inter-University Committee on International Migration; The Inter-University Initiative on Humanitarian Studies and Field Practice; Political Economy and Technology Policy Program; Crosscutting Working Groups; Seminar XXI—Outreach to the Washington Policy Community; Public Programs; Seminars, Colloquia, Workshops, and Conferences; Grant Programs; Publications; Personnel; CIS Affirmative Action Goals and Successes;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Singapore–MIT Alliance</title>
<link href="https://hdl.handle.net/1721.1/163896" rel="alternate"/>
<author>
<name>Patera, Anthony T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163896</id>
<updated>2025-11-22T03:17:33Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Singapore–MIT Alliance
Patera, Anthony T.
This report contains the following sections: Partnership; Management Structure; Summer Conference; Distance Learning; Entering Class; Noteworthy Events in 2003; Innovation in Manufacturing Systems and Technology; Molecular Engineering of Biological and Chemical Systems; Computer Science; Benefits and Goals
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Materials Processing Center/Microphotonics Center</title>
<link href="https://hdl.handle.net/1721.1/163895" rel="alternate"/>
<author>
<name>Lippegrenfell, Tamarleigh</name>
</author>
<id>https://hdl.handle.net/1721.1/163895</id>
<updated>2025-11-22T03:17:08Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Materials Processing Center/Microphotonics Center
Lippegrenfell, Tamarleigh
This report contains the following sections: Relationships; Activities; Equipment and Facilities; Outlook;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Laboratory for Information and Decision Systems</title>
<link href="https://hdl.handle.net/1721.1/163894" rel="alternate"/>
<author>
<name>Chan, Vincent W. S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163894</id>
<updated>2025-11-22T03:18:01Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Laboratory for Information and Decision Systems
Chan, Vincent W. S.
This report contains the following sections: Highlights; Faculty; Students; Research Overview; Research Areas;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Concourse Program</title>
<link href="https://hdl.handle.net/1721.1/163893" rel="alternate"/>
<author>
<name>Rose, Robert M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163893</id>
<updated>2025-11-22T03:18:16Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Concourse Program
Rose, Robert M.
This report contains the following sections: Personnel Information; Enrollment; Teaching and Curriculum
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Biotechnology Process Engineering Center</title>
<link href="https://hdl.handle.net/1721.1/163892" rel="alternate"/>
<author>
<name>Griffith, Linda</name>
</author>
<id>https://hdl.handle.net/1721.1/163892</id>
<updated>2025-11-22T03:17:46Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Biotechnology Process Engineering Center
Griffith, Linda
This report contains the following sections: Goals, Objectives, and Priorities; Accomplishments; Administrative Initiatives; Finances and Funding; Future Plans; Personnel;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Electrical Engineering and Computer Science</title>
<link href="https://hdl.handle.net/1721.1/163891" rel="alternate"/>
<author>
<name>Guttag, John V.</name>
</author>
<id>https://hdl.handle.net/1721.1/163891</id>
<updated>2025-11-22T03:18:19Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Electrical Engineering and Computer Science
Guttag, John V.
This report contains the following sections: Graduate Program; Undergraduate Program; 6-A Internship Program; Faculty Notes; Faculty Awards and Honors
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Media Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163890" rel="alternate"/>
<author>
<name>Bender, Walter</name>
</author>
<id>https://hdl.handle.net/1721.1/163890</id>
<updated>2025-11-22T03:18:06Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Media Laboratory
Bender, Walter
This report contains the following sections: Research Achievements; Exhibitions and Performances; Collaboration within MIT; Media Lab Europe; Sponsors; Human Resources/Administration;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Athletics, Physical Education, and Recreation</title>
<link href="https://hdl.handle.net/1721.1/163889" rel="alternate"/>
<author>
<name>Royer, Candace L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163889</id>
<updated>2025-11-22T03:17:02Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Athletics, Physical Education, and Recreation
Royer, Candace L.
This report contains the following sections: Summary Statement; Highlights of the Year; New Initiatives; Summary of Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Student Services Information Technology</title>
<link href="https://hdl.handle.net/1721.1/163888" rel="alternate"/>
<author>
<name>Stevenson, JoAnne</name>
</author>
<id>https://hdl.handle.net/1721.1/163888</id>
<updated>2025-11-22T03:17:51Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Student Services Information Technology
Stevenson, JoAnne
This report contains the following sections: Accomplishments; Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, MIT Careers Office</title>
<link href="https://hdl.handle.net/1721.1/163887" rel="alternate"/>
<author>
<name>Reed, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/163887</id>
<updated>2025-11-22T03:17:35Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, MIT Careers Office
Reed, Elizabeth
This report contains the following sections: Accomplishments during FY2004; Staffing Changes;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Director, Libraries</title>
<link href="https://hdl.handle.net/1721.1/163886" rel="alternate"/>
<author>
<name>Wolpert, Ann J.</name>
</author>
<author>
<name>Gass, Steve</name>
</author>
<author>
<name>Fleishauer, Carol</name>
</author>
<author>
<name>Glavash, Keith</name>
</author>
<author>
<name>Smith, MacKenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/163886</id>
<updated>2025-11-22T03:17:58Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Director, Libraries
Wolpert, Ann J.; Gass, Steve; Fleishauer, Carol; Glavash, Keith; Smith, MacKenzie
This report contains the following sections: Director, Libraries; Public Services; Collection Services; Administrative Services; Technology Planning and Administration
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for Space Research</title>
<link href="https://hdl.handle.net/1721.1/163885" rel="alternate"/>
<author>
<name>Hewitt, Jacqueline N.</name>
</author>
<id>https://hdl.handle.net/1721.1/163885</id>
<updated>2025-11-22T03:16:53Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for Space Research
Hewitt, Jacqueline N.
This report contains the following sections: Research Highlights; Education and Public Outreach
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Laboratory for Nuclear Science</title>
<link href="https://hdl.handle.net/1721.1/163884" rel="alternate"/>
<author>
<name>Matthews, June L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163884</id>
<updated>2025-11-22T03:17:44Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Laboratory for Nuclear Science
Matthews, June L.
This report contains the following sections: Experimental High-Energy Physics; Experimental Nuclear Physics; Theoretical Nuclear and Particle Physics; Education
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, George R. Harrison Spectroscopy Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163883" rel="alternate"/>
<author>
<name>Feld, Michael S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163883</id>
<updated>2025-11-22T03:17:04Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, George R. Harrison Spectroscopy Laboratory
Feld, Michael S.
This report contains the following sections: Research Highlights;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Mathematics</title>
<link href="https://hdl.handle.net/1721.1/163882" rel="alternate"/>
<author>
<name>Vogan, David A., Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/163882</id>
<updated>2025-11-22T03:16:58Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Mathematics
Vogan, David A., Jr.
This report contains the following sections: Students; Faculty Changes; Administration; Research; Honors, Prizes, and Awards; Education;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Earth, Atmospheric, and Planetary Sciences</title>
<link href="https://hdl.handle.net/1721.1/163881" rel="alternate"/>
<author>
<name>Zuber, Maria T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163881</id>
<updated>2025-11-22T03:17:54Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Earth, Atmospheric, and Planetary Sciences
Zuber, Maria T.
This report contains the following sections: Educational Activities; Faculty; Current Research;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Program in Science, Technology, and Society</title>
<link href="https://hdl.handle.net/1721.1/163880" rel="alternate"/>
<author>
<name>Williams, Rosalind H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163880</id>
<updated>2025-11-22T03:17:53Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Program in Science, Technology, and Society
Williams, Rosalind H.
This report contains the following sections: Doctoral Program; Projects, Grants, and Initiatives; Educational Activities; Ongoing Activities of the Program; Knight Science Journalism Fellowship Program; Faculty Activities;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Special Projects Office</title>
<link href="https://hdl.handle.net/1721.1/163879" rel="alternate"/>
<author>
<name>Enders, Margaret S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163879</id>
<updated>2025-11-22T03:17:25Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Special Projects Office
Enders, Margaret S.
This report contains the following sections: Administration of the MIT Communication Requirement; The Cambridge-MIT Undergraduate Student Exchange Program; Planning for an Office of Study Abroad and Foreign Scholarships; The MacVicar Faculty Fellows Program; The d’Arbeloff Grants Program; Support to the Task Force on the Undergraduate Educational Commons; Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of Minority Education</title>
<link href="https://hdl.handle.net/1721.1/163878" rel="alternate"/>
<author>
<name>Beamon, Kim</name>
</author>
<id>https://hdl.handle.net/1721.1/163878</id>
<updated>2025-11-22T03:17:16Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of Minority Education
Beamon, Kim
This report contains the following sections: Project Interphase; Seminar XL; Second Summer Program; Tutorial Service Room; Industrial Advisory Council for Minority Education; Office of Minority Education Student Advisory Council; Minority Scholarships; Minority Awards Banquet;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Admissions Office</title>
<link href="https://hdl.handle.net/1721.1/163877" rel="alternate"/>
<author>
<name>Jones, Marilee</name>
</author>
<id>https://hdl.handle.net/1721.1/163877</id>
<updated>2025-11-22T03:16:51Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Admissions Office
Jones, Marilee
This report contains the following sections: Accomplishments; Staffing Changes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, International Students Office</title>
<link href="https://hdl.handle.net/1721.1/163876" rel="alternate"/>
<author>
<name>Guichard-Ashbrook, Danielle</name>
</author>
<id>https://hdl.handle.net/1721.1/163876</id>
<updated>2025-11-22T03:17:51Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, International Students Office
Guichard-Ashbrook, Danielle
This report contains the following sections: International Admissions; International Student Advising; Orientation Programs for International Students; Host to International Students Program; Future Goals;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, MIT Press</title>
<link href="https://hdl.handle.net/1721.1/163875" rel="alternate"/>
<author>
<name>Faran, Ellen W.</name>
</author>
<id>https://hdl.handle.net/1721.1/163875</id>
<updated>2025-11-22T03:18:13Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, MIT Press
Faran, Ellen W.
This report contains the following sections: FY2004 Highlights; FY2004 Financial Results; MIT Press Management Board, 2003–2004; MIT Press Editorial Board, 2003–2004; MIT Press Acquisitions Editors; Books Division; Production Department; Journals Division; MIT Faculty Journal Editors; MIT Press Bookstore
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Picower Center for Learning and Memory</title>
<link href="https://hdl.handle.net/1721.1/163874" rel="alternate"/>
<author>
<name>Tonegawa, Susumu</name>
</author>
<id>https://hdl.handle.net/1721.1/163874</id>
<updated>2025-11-22T03:17:50Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Picower Center for Learning and Memory
Tonegawa, Susumu
This report contains the following sections: Major Research Breakthroughs; New Building and Faculty Hiring; Public Relations; Promotions; Awards; Research Highlights;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for Cancer Research</title>
<link href="https://hdl.handle.net/1721.1/163873" rel="alternate"/>
<author>
<name>Jacks, Tyler</name>
</author>
<id>https://hdl.handle.net/1721.1/163873</id>
<updated>2025-11-22T03:17:34Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for Cancer Research
Jacks, Tyler
This report contains the following sections: Animal Models of Cancer; Stem Cells, Development, and Cancer; RNAi Technology; Integrative Analysis of Cancer Pathways; Faculty Awards
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Program in Writing and Humanistic Studies</title>
<link href="https://hdl.handle.net/1721.1/163872" rel="alternate"/>
<author>
<name>Paradis, James</name>
</author>
<id>https://hdl.handle.net/1721.1/163872</id>
<updated>2025-11-22T03:18:07Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Program in Writing and Humanistic Studies
Paradis, James
This report contains the following sections: Research and Publications; Academic Programs and Initiatives; Service, Grants, and Awards; Personnel;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Music and Theater Arts</title>
<link href="https://hdl.handle.net/1721.1/163871" rel="alternate"/>
<author>
<name>Ziporyn, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/163871</id>
<updated>2025-11-22T03:17:55Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Music and Theater Arts
Ziporyn, Evan
This report contains the following sections: Highlights of the Year; Honors and Awards; Program Highlights; Achievements; Personnel;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of Professional Education Programs</title>
<link href="https://hdl.handle.net/1721.1/163870" rel="alternate"/>
<author>
<name>Stine, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163870</id>
<updated>2025-11-22T03:17:44Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of Professional Education Programs
Stine, Jennifer
This report contains the following sections: Current Goals, Objectives, Priorities; Accomplishments and Program Developments; Funding; Future Plans; Personnel Information
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Anthropology</title>
<link href="https://hdl.handle.net/1721.1/163869" rel="alternate"/>
<author>
<name>Jackson, Jean</name>
</author>
<id>https://hdl.handle.net/1721.1/163869</id>
<updated>2025-11-22T03:18:29Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Anthropology
Jackson, Jean
This report contains the following sections: Personnel and Administrative Changes; Program Contributions to MIT and Outside Communities; Educational Activities; Presentations; Publications; Other Program Accomplishments; Grants, Honors, and Awards
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Laboratory for Manufacturing and Productivity</title>
<link href="https://hdl.handle.net/1721.1/163868" rel="alternate"/>
<author>
<name>Gutowski, Timothy G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163868</id>
<updated>2025-11-22T03:18:27Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Laboratory for Manufacturing and Productivity
Gutowski, Timothy G.
This report contains the following sections: Research and Education Highlights; Awards;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Industrial Performance Center</title>
<link href="https://hdl.handle.net/1721.1/163867" rel="alternate"/>
<author>
<name>Lester, Richard K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163867</id>
<updated>2025-11-22T03:17:52Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Industrial Performance Center
Lester, Richard K.
This report contains the following sections: Research Highlights; People;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Materials Science and Engineering</title>
<link href="https://hdl.handle.net/1721.1/163866" rel="alternate"/>
<author>
<name>Suresh, Subra</name>
</author>
<id>https://hdl.handle.net/1721.1/163866</id>
<updated>2025-11-22T03:17:31Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Materials Science and Engineering
Suresh, Subra
This report contains the following sections: Research Initiatives; Undergraduate Education; Graduate Education; Master of Engineering in Materials; Other Educational Initiatives; Student Organizations; Personnel; Research Highlights; Awards and Honors; AY2004 Undergraduate Awards; AY2004 Graduate Awards; Faculty Notes; Future Plans;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Engineering Systems Division</title>
<link href="https://hdl.handle.net/1721.1/163865" rel="alternate"/>
<author>
<name>Hastings, Daniel</name>
</author>
<author>
<name>Allen, Tom</name>
</author>
<author>
<name>Hanson, Bill</name>
</author>
<author>
<name>Simchi‐Levi, David</name>
</author>
<author>
<name>Newman, Dava</name>
</author>
<author>
<name>Nordal, Nils</name>
</author>
<author>
<name>Moavenzadeh, Fred</name>
</author>
<author>
<name>Clay, Phillip L.</name>
</author>
<author>
<name>Heywood, John B.</name>
</author>
<author>
<name>Cusumano, Michael A.</name>
</author>
<author>
<name>MacDuffie, John Paul</name>
</author>
<author>
<name>Cutcher‐Gershenfeld, Joel</name>
</author>
<author>
<name>Kochan, Thomas A.</name>
</author>
<author>
<name>Nightingale, Deborah</name>
</author>
<author>
<name>Carroll, John</name>
</author>
<author>
<name>Bryan, Frederick “Terry”</name>
</author>
<author>
<name>Harris, Wesley L.</name>
</author>
<author>
<name>Roth, Richard</name>
</author>
<author>
<name>Ashford, Nicholas</name>
</author>
<author>
<name>Sheffi, Yossi</name>
</author>
<id>https://hdl.handle.net/1721.1/163865</id>
<updated>2025-11-22T03:17:11Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Engineering Systems Division
Hastings, Daniel; Allen, Tom; Hanson, Bill; Simchi‐Levi, David; Newman, Dava; Nordal, Nils; Moavenzadeh, Fred; Clay, Phillip L.; Heywood, John B.; Cusumano, Michael A.; MacDuffie, John Paul; Cutcher‐Gershenfeld, Joel; Kochan, Thomas A.; Nightingale, Deborah; Carroll, John; Bryan, Frederick “Terry”; Harris, Wesley L.; Roth, Richard; Ashford, Nicholas; Sheffi, Yossi
This report contains the following sections: Ongoing Initiatives; Faculty Notes; Student Honors; Program Honors; INCOSE; Conference on Systems Engineering Research; ESD Administrative Staff; Major Meetings; Leaders for Manufacturing; System Design and Management; Technology and Policy Program; Technology, Management, and Policy Program; Center for Innovation in Product Development; Center for Technology, Policy, and Industrial Development; Ford–MIT Alliance; International Motor Vehicle Program; Labor Aerospace Research Agenda; Lean Aerospace Initiative; Lean Sustainment Initiative; Materials Systems Laboratory; MIT Information Quality Program; Technology and Law Program; MIT Center for Transportation and Logistics;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Civil and Environmental Engineering</title>
<link href="https://hdl.handle.net/1721.1/163864" rel="alternate"/>
<author>
<name>Jaillet, Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163864</id>
<updated>2025-11-22T03:17:45Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Civil and Environmental Engineering
Jaillet, Patrick
This report contains the following sections: Initiatives; Educational Activities; Undergraduate Program; Graduate Programs; Faculty Notes; Student Notes; Departmental Awards;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean, School of Engineering</title>
<link href="https://hdl.handle.net/1721.1/163863" rel="alternate"/>
<author>
<name>Magnanti, Thomas L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163863</id>
<updated>2025-11-22T03:18:05Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean, School of Engineering
Magnanti, Thomas L.
This report contains the following sections: Continuing Initiatives; Emerging Technologies; Educational Innovation and Diversity; New Initiative: Electronic Outreach to Alumni; Notable Events; Organizational Reviews and Changes; Personnel; Awards; Statistics for AY2004;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Associate Provost for the Arts</title>
<link href="https://hdl.handle.net/1721.1/163862" rel="alternate"/>
<author>
<name>Brody, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/163862</id>
<updated>2025-11-22T03:16:58Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Associate Provost for the Arts
Brody, Alan
This report contains the following sections: Resources and Programs; Laboratory for the Performing Arts; Budget Crunch; Student Art Association and Wiesner Gallery; Council for the Arts and the McDermott Award; Office of Arts Communication; Museum Loan Network; MIT Museum; List Visual Arts Center;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Council on Primary and Secondary Education</title>
<link href="https://hdl.handle.net/1721.1/163861" rel="alternate"/>
<author>
<name>Latanision, R. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163861</id>
<updated>2025-11-22T03:17:30Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Council on Primary and Secondary Education
Latanision, R. M.
This report contains the following sections: MIT/Wellesley Teacher Education Program; Teacher Sabbaticals; Educational Program Outreach Directory; Programs by the CPSE Chairman; Association of American Universities Task Force on K–16 Education; Science and Engineering Program for Teachers;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, MIT Washington Office</title>
<link href="https://hdl.handle.net/1721.1/163860" rel="alternate"/>
<author>
<name>Crowley, John C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163860</id>
<updated>2025-11-22T03:17:17Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, MIT Washington Office
Crowley, John C.
This report contains the following sections: Mission; Advocacy Coalitions and Working Groups; Legislative Initiatives; MIT Lincoln Laboratory, Center for Fusion and Plasma Science, Bates Laboratory; MIT Congressional Staff Seminar on Science and Technology; Executive Branch
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean, School of Architecture and Planning</title>
<link href="https://hdl.handle.net/1721.1/163859" rel="alternate"/>
<author>
<name>Santos, Adèle Naudé</name>
</author>
<author>
<name>Knight, Terry</name>
</author>
<id>https://hdl.handle.net/1721.1/163859</id>
<updated>2025-11-22T03:16:55Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean, School of Architecture and Planning
Santos, Adèle Naudé; Knight, Terry
This report contains the following sections: Faculty; Space; Educational Initiatives; Events and Awards; Goals for 2005;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of the Arts</title>
<link href="https://hdl.handle.net/1721.1/163858" rel="alternate"/>
<author>
<name>Cohen, Susan R.</name>
</author>
<author>
<name>Haller, Mary L.</name>
</author>
<author>
<name>Billingsley, Glenn</name>
</author>
<author>
<name>Oshima, Michèle</name>
</author>
<id>https://hdl.handle.net/1721.1/163858</id>
<updated>2025-11-22T03:18:12Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of the Arts
Cohen, Susan R.; Haller, Mary L.; Billingsley, Glenn; Oshima, Michèle
This report contains the following sections: Council for the Arts; Arts Communication; Office of the Arts Development; Student &amp; Artist-in-Residence Programs;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, MIT Museum</title>
<link href="https://hdl.handle.net/1721.1/163857" rel="alternate"/>
<author>
<name>Curtis, Jack</name>
</author>
<author>
<name>Douglas, Debbie</name>
</author>
<author>
<name>Hunt, Stephanie</name>
</author>
<author>
<name>Hasselbalch, Kurt</name>
</author>
<author>
<name>Leen, Mary</name>
</author>
<author>
<name>O'Neill, Jenny</name>
</author>
<author>
<name>Rosenthal, Beryl</name>
</author>
<author>
<name>Van Zante, Gary</name>
</author>
<author>
<name>Whitlow, Joan</name>
</author>
<id>https://hdl.handle.net/1721.1/163857</id>
<updated>2025-11-22T03:18:26Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, MIT Museum
Curtis, Jack; Douglas, Debbie; Hunt, Stephanie; Hasselbalch, Kurt; Leen, Mary; O'Neill, Jenny; Rosenthal, Beryl; Van Zante, Gary; Whitlow, Joan
This report contains the following sections: Collections; Education and Outreach; Exhibitions; Administration;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Lincoln Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163856" rel="alternate"/>
<author>
<name>Briggs, David L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163856</id>
<updated>2025-11-22T03:17:23Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Lincoln Laboratory
Briggs, David L.
This report contains the following sections: Laboratory Operations; Technical Program Highlights;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Associate Provost</title>
<link href="https://hdl.handle.net/1721.1/163855" rel="alternate"/>
<author>
<name>Canizares, Claude R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163855</id>
<updated>2025-11-22T03:16:54Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Associate Provost
Canizares, Claude R.
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Technology and Development Program</title>
<link href="https://hdl.handle.net/1721.1/163854" rel="alternate"/>
<author>
<name>Moavenzadeh, Fred</name>
</author>
<id>https://hdl.handle.net/1721.1/163854</id>
<updated>2025-11-22T03:18:24Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Technology and Development Program
Moavenzadeh, Fred
This report contains the following sections: Current Research Programs; Future Research Initiatives; Current Educational Initiatives; Organization
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Haystack Observatory</title>
<link href="https://hdl.handle.net/1721.1/163853" rel="alternate"/>
<author>
<name>Salah, Joseph E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163853</id>
<updated>2025-11-22T03:18:27Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Haystack Observatory
Salah, Joseph E.
This report contains the following sections: Instrumentation; Radio Astronomy; Instrumentation Development; Atmospheric Science; Educational Programs;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Computational and Systems Biology Initiative</title>
<link href="https://hdl.handle.net/1721.1/163852" rel="alternate"/>
<author>
<name>Sorger, Peter</name>
</author>
<author>
<name>Tadmor, Brigitta</name>
</author>
<id>https://hdl.handle.net/1721.1/163852</id>
<updated>2025-11-22T03:17:07Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Computational and Systems Biology Initiative
Sorger, Peter; Tadmor, Brigitta
This report contains the following sections: Goals and Priorities; Education and Training; Research; Technology Development; Junior Faculty Startup; Outreach; Leadership; Finances and Funding
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of Educational Opportunity Programs</title>
<link href="https://hdl.handle.net/1721.1/163851" rel="alternate"/>
<author>
<name>Crichlow, Ronald S.</name>
</author>
<author>
<name>Layne, Evette M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163851</id>
<updated>2025-11-22T03:18:02Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of Educational Opportunity Programs
Crichlow, Ronald S.; Layne, Evette M.
This report contains the following sections: MIT/Wellesley Upward Bound
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for Archaeological Materials / Center for Materials Research in Archaeology and Ethnology</title>
<link href="https://hdl.handle.net/1721.1/163850" rel="alternate"/>
<author>
<name>Lechtman, Heather</name>
</author>
<id>https://hdl.handle.net/1721.1/163850</id>
<updated>2025-11-22T03:16:57Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for Archaeological Materials / Center for Materials Research in Archaeology and Ethnology
Lechtman, Heather
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Finance: Office of Sponsored Programs</title>
<link href="https://hdl.handle.net/1721.1/163849" rel="alternate"/>
<author>
<name>Norris, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/163849</id>
<updated>2025-11-22T03:17:14Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Finance: Office of Sponsored Programs
Norris, Julie
This report contains the following sections: Research Volume; Compliance Issues; Costing Issues; Negotiation of Rates; Other Costing Activities; Export Control Laws and Related Issues; Administrative Theme Initiatives;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Teaching and Learning Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163848" rel="alternate"/>
<author>
<name>Breslow, Lori</name>
</author>
<id>https://hdl.handle.net/1721.1/163848</id>
<updated>2025-11-22T03:17:57Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Teaching and Learning Laboratory
Breslow, Lori
This report contains the following sections: Instructional Support; Assessment and Evaluation; Research and Scholarship; Staff Changes;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Student Financial Services</title>
<link href="https://hdl.handle.net/1721.1/163847" rel="alternate"/>
<author>
<name>Hicks, Betsy</name>
</author>
<id>https://hdl.handle.net/1721.1/163847</id>
<updated>2025-11-22T03:16:52Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Student Financial Services
Hicks, Betsy
This report contains the following sections: Operating Activities; Student Receivables; Undergraduate Financial Aid; Undergraduate Parent Loans; Graduate Need-based Financial Aid; Accomplishments; Staffing;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of the Registrar</title>
<link href="https://hdl.handle.net/1721.1/163846" rel="alternate"/>
<author>
<name>Callahan, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/163846</id>
<updated>2025-11-22T03:17:00Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of the Registrar
Callahan, Mary
This report contains the following sections: Accomplishments; Operational Highlights; Classroom Management Highlights; Registration; Degrees Awarded; Personnel Changes;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean for Graduate Students</title>
<link href="https://hdl.handle.net/1721.1/163845" rel="alternate"/>
<author>
<name>Colbert, Isaac M.</name>
</author>
<author>
<name>Staton, Blanche</name>
</author>
<author>
<name>Wurie, Brima</name>
</author>
<author>
<name>Charles, Roy A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163845</id>
<updated>2025-11-22T03:18:03Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean for Graduate Students
Colbert, Isaac M.; Staton, Blanche; Wurie, Brima; Charles, Roy A.
This report contains the following sections: Graduate Students Office; Indicators of Change; Steps Forward; Strategic Collaborations; Renewed Commitment to Graduate Student Diversity; Graduate Fellowships; Programs and Services;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Earth System Initiative</title>
<link href="https://hdl.handle.net/1721.1/163844" rel="alternate"/>
<author>
<name>Chisholm, Penny</name>
</author>
<author>
<name>Hodges, Kip</name>
</author>
<id>https://hdl.handle.net/1721.1/163844</id>
<updated>2025-11-22T03:17:05Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Earth System Initiative
Chisholm, Penny; Hodges, Kip
This report contains the following sections: Research; Education and Outreach; Personnel;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Brain and Cognitive Sciences</title>
<link href="https://hdl.handle.net/1721.1/163843" rel="alternate"/>
<author>
<name>Sur, Mriganka</name>
</author>
<id>https://hdl.handle.net/1721.1/163843</id>
<updated>2025-11-22T03:18:05Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Brain and Cognitive Sciences
Sur, Mriganka
This report contains the following sections: Education; Faculty Highlights; Research Advances; Learning and Memory; Brain Development and Plasticity; Language and Cognition; New Undertakings and Ongoing Events
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Political Science</title>
<link href="https://hdl.handle.net/1721.1/163842" rel="alternate"/>
<author>
<name>Cohen, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/163842</id>
<updated>2025-11-22T03:18:25Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Political Science
Cohen, Joshua
This report contains the following sections: Significant Events 2003–2004; Student Recruitment, Placement, and Enrollment; Faculty/Personnel; Promotions/Personnel Activity in AY2004 and Upcoming Faculty Searches; Faculty Leaves, Departures, Upcoming Searches, and Visitors; Faculty Research and Publications;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Linguistics and Philosophy</title>
<link href="https://hdl.handle.net/1721.1/163841" rel="alternate"/>
<author>
<name>Marantz, Alec</name>
</author>
<id>https://hdl.handle.net/1721.1/163841</id>
<updated>2025-11-22T03:17:03Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Linguistics and Philosophy
Marantz, Alec
This report contains the following sections: Research: Linguistics; Research: Philosophy; Publications; Honors and Awards; Leaves of Absence; Personnel
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Economics</title>
<link href="https://hdl.handle.net/1721.1/163840" rel="alternate"/>
<author>
<name>Holmström, Bengt</name>
</author>
<id>https://hdl.handle.net/1721.1/163840</id>
<updated>2025-11-22T03:17:29Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Economics
Holmström, Bengt
This report contains the following sections: Highlights of the Year; Future Plans; Personnel; Honors and Awards; Research Achievements;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Microsystems Technology Laboratories</title>
<link href="https://hdl.handle.net/1721.1/163839" rel="alternate"/>
<author>
<name>Schmidt, Martin A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163839</id>
<updated>2025-11-22T03:16:57Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Microsystems Technology Laboratories
Schmidt, Martin A.
This report contains the following sections: Highlights; Future Plans;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Biological Engineering Division</title>
<link href="https://hdl.handle.net/1721.1/163838" rel="alternate"/>
<author>
<name>Lauffenburger, Douglas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163838</id>
<updated>2025-11-22T03:17:05Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Biological Engineering Division
Lauffenburger, Douglas A.
This report contains the following sections: Undergraduate Education; Graduate Education;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for Advanced Visual Studies</title>
<link href="https://hdl.handle.net/1721.1/163837" rel="alternate"/>
<author>
<name>Wodiczko, Krzysztof</name>
</author>
<id>https://hdl.handle.net/1721.1/163837</id>
<updated>2025-11-22T03:17:28Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for Advanced Visual Studies
Wodiczko, Krzysztof
This report contains the following sections: Activities
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Technology Licensing Office</title>
<link href="https://hdl.handle.net/1721.1/163836" rel="alternate"/>
<author>
<name>Nelsen, Lita</name>
</author>
<id>https://hdl.handle.net/1721.1/163836</id>
<updated>2025-11-22T03:16:54Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Technology Licensing Office
Nelsen, Lita
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Sea Grant College Program</title>
<link href="https://hdl.handle.net/1721.1/163835" rel="alternate"/>
<author>
<name>Chryssostomidis, Chryssostomos</name>
</author>
<id>https://hdl.handle.net/1721.1/163835</id>
<updated>2025-11-22T03:17:29Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Sea Grant College Program
Chryssostomidis, Chryssostomos
This report contains the following sections: Education; Graduate Student Research Assistants; Undergraduate Research Opportunities Program; K-12 Education; New Core Research Projects; Ongoing Core Research Projects; Advisory Services; Outreach and Communications; Program Management
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Research Laboratory of Electronics</title>
<link href="https://hdl.handle.net/1721.1/163834" rel="alternate"/>
<author>
<name>Shapiro, Jeffrey H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163834</id>
<updated>2025-11-22T03:18:28Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Research Laboratory of Electronics
Shapiro, Jeffrey H.
This report contains the following sections: Circuits, Systems, Signals and Communications; Physical Sciences; Quantum Computation and Communication; Nanostructures; Photonic Materials, Devices, and Systems; Communication Biophysics; RLE Conference Facility; Appointments, Awards, and Events; Affirmative Action
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Nuclear Reactor Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163833" rel="alternate"/>
<author>
<name>Moncton, David E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163833</id>
<updated>2025-11-22T03:18:08Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Nuclear Reactor Laboratory
Moncton, David E.
This report contains the following sections: MIT Research Reactor; Reactor Administration and Organization; Organizational Diversity; Safety and Security; Relicensing and Redesign; Major Reactor Services; Research Activities;
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, MIT/WHOI Joint Program in Oceanography and Applied Ocean Science and Engineering</title>
<link href="https://hdl.handle.net/1721.1/163832" rel="alternate"/>
<author>
<name>Rizzoli, Paola</name>
</author>
<author>
<name>Schwartz, Ronni</name>
</author>
<id>https://hdl.handle.net/1721.1/163832</id>
<updated>2025-11-22T03:17:41Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, MIT/WHOI Joint Program in Oceanography and Applied Ocean Science and Engineering
Rizzoli, Paola; Schwartz, Ronni
This report contains the following sections: New Program in Marine Meteorology; Student Apartment in University Housing; Continuation of Presidential Fellowships; External Review
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Academic Media Production Services</title>
<link href="https://hdl.handle.net/1721.1/163831" rel="alternate"/>
<author>
<name>Mitra, Amitava “Babi”</name>
</author>
<author>
<name>Kumar, M. S. Vijay</name>
</author>
<id>https://hdl.handle.net/1721.1/163831</id>
<updated>2025-11-22T03:18:09Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Academic Media Production Services
Mitra, Amitava “Babi”; Kumar, M. S. Vijay
This report contains the following sections: Vision; Strategic Support; AMPS Advisory Board; Stellar Faculty Advisory Board; Services Offered; Organizing for Service; MIT Video Productions and Digital Technologies; Initiatives to Advance Operational Efficiency; People; Educational Design and Development Group; Changes in Personnel and Funding; Web Tools and Operations; Stellar Course Management System; Financial Operations and Administrative Liaison Unit; Projects; Facilities; Awards, Conferences, and Programs; Future Plans
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Provost</title>
<link href="https://hdl.handle.net/1721.1/163830" rel="alternate"/>
<author>
<name>Brown, Robert A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163830</id>
<updated>2025-11-22T03:17:31Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Provost
Brown, Robert A.
This report contains the following sections: People; Academic Programs; MIT OpenCourseWare; Broad Institute; McGovern Institute for Brain Research; Facilities; Faculty; Graduate Student Fellowships; Finances; Research
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Vice President and Secretary of the Corporation</title>
<link href="https://hdl.handle.net/1721.1/163829" rel="alternate"/>
<author>
<name>Willmore, Kathryn A.</name>
</author>
<author>
<name>Gallagher, Gayle M.</name>
</author>
<author>
<name>Lisanti, Suzana</name>
</author>
<author>
<name>Jones, Arthur L.</name>
</author>
<author>
<name>Lee, Monica</name>
</author>
<author>
<name>Kiang, Stuart</name>
</author>
<author>
<name>Lester, Susan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163829</id>
<updated>2025-11-22T03:17:01Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Vice President and Secretary of the Corporation
Willmore, Kathryn A.; Gallagher, Gayle M.; Lisanti, Suzana; Jones, Arthur L.; Lee, Monica; Kiang, Stuart; Lester, Susan A.
This report contains the following sections: Public Relations Services; Conference Services, Events, and Information Center; MIT Home Page Team; News Office; Publishing Services Bureau; Reference Publications Office; Office of the Secretary of the Corporation
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Vice President for Resource Development</title>
<link href="https://hdl.handle.net/1721.1/163828" rel="alternate"/>
<author>
<name>Stowe, Barbara G.</name>
</author>
<author>
<name>Dare, Stephen A.</name>
</author>
<author>
<name>Eastment, Katherine E.</name>
</author>
<author>
<name>Serfes, Pamela Dumas</name>
</author>
<author>
<name>Koster, Karl F.</name>
</author>
<author>
<name>Scott, Robert D.</name>
</author>
<author>
<name>Rinaldi, Christine M.</name>
</author>
<author>
<name>Oldham, John E.</name>
</author>
<author>
<name>Sager, Judith V.</name>
</author>
<author>
<name>Miller, Lucy V.</name>
</author>
<id>https://hdl.handle.net/1721.1/163828</id>
<updated>2025-11-22T03:18:30Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Vice President for Resource Development
Stowe, Barbara G.; Dare, Stephen A.; Eastment, Katherine E.; Serfes, Pamela Dumas; Koster, Karl F.; Scott, Robert D.; Rinaldi, Christine M.; Oldham, John E.; Sager, Judith V.; Miller, Lucy V.
This report contains the following sections: Highlights; Corporation and Foundation Giving; Principal Gifts; Stewardship; Volunteer Partnerships; Unrestricted and Core Support; Human Resources; Summary of Private Support; Campaign for MIT; Campaign Giving; Communications and Donor Relations; Corporate Relations; Development Research and Systems; Donor Partnerships and Special Projects; Foundation Relations and Academic Development Support; Gift Planning; Principal Gifts
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Vice President for Information Services and Technology</title>
<link href="https://hdl.handle.net/1721.1/163827" rel="alternate"/>
<author>
<name>Grochow, Jerrold M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163827</id>
<updated>2025-11-22T03:18:08Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Vice President for Information Services and Technology
Grochow, Jerrold M.
This report contains the following sections: Client Orientation; Collaboration; Sustainability; Accountability; Professionalism
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Chair of the Faculty</title>
<link href="https://hdl.handle.net/1721.1/163826" rel="alternate"/>
<author>
<name>Bras, Rafael L.</name>
</author>
<author>
<name>Burns, Lily U.</name>
</author>
<id>https://hdl.handle.net/1721.1/163826</id>
<updated>2025-11-22T03:18:18Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Chair of the Faculty
Bras, Rafael L.; Burns, Lily U.
This report contains the following sections: Faculty Policy Committee; Committee on the Undergraduate Program; Subcommittee on the Communication Requirement; Committee on Academic Performance; Committee on Curricula; Committee on Discipline; Harold E. Edgerton Award Committee; Committee on Faculty–Administration; Killian Award Committee; Committee on the Library System; Committee on Nominations; Committee on Student Life; Student–Faculty Interaction; Student Family Health Care; Committee on Outside Professional Activities; Committee on Undergraduate Admissions and Financial Aid
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, ROTC Programs</title>
<link href="https://hdl.handle.net/1721.1/163825" rel="alternate"/>
<author>
<name>Rojko, Paul</name>
</author>
<author>
<name>Baker, Brian L.</name>
</author>
<author>
<name>Holland, Robert D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163825</id>
<updated>2025-11-22T03:17:36Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, ROTC Programs
Rojko, Paul; Baker, Brian L.; Holland, Robert D.
This report contains the following sections: Air Force Reserve Officers’ Training Corps; Army Reserve Officers’ Training Corps; Naval Reserve Officers Training Corps
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean for Undergraduate Education</title>
<link href="https://hdl.handle.net/1721.1/163824" rel="alternate"/>
<author>
<name>Redwine, Robert P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163824</id>
<updated>2025-11-22T03:17:27Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean for Undergraduate Education
Redwine, Robert P.
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Dean, MIT Sloan School of Management</title>
<link href="https://hdl.handle.net/1721.1/163823" rel="alternate"/>
<author>
<name>Schmalensee, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/163823</id>
<updated>2025-11-22T03:17:36Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Dean, MIT Sloan School of Management
Schmalensee, Richard
This report contains the following sections: Academic Program Updates; Other Initiatives
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Comparative Media Studies</title>
<link href="https://hdl.handle.net/1721.1/163822" rel="alternate"/>
<author>
<name>Jenkins, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/163822</id>
<updated>2025-11-22T03:17:49Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Comparative Media Studies
Jenkins, Henry
This report contains the following sections: Research; Fundraising; Governance; Graduate Admissions; Undergraduate Education; Events and Programs; Honors and Awards; Visiting Scholars; Publications
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Laboratory for Electromagnetic and Electronic Systems</title>
<link href="https://hdl.handle.net/1721.1/163821" rel="alternate"/>
<author>
<name>Kassakian, John G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163821</id>
<updated>2025-11-22T03:17:15Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Laboratory for Electromagnetic and Electronic Systems
Kassakian, John G.
This report contains the following sections: Automotive Electrical and Electronic Systems; Modeling, Monitoring, and Control of Power Systems; Power Electronics and Electromechanics; Sensors, Nanotechnology, and Microelectromechanical Systems; Enhanced Ultracapacitor Analysis and Development; From Bioelectromechanics to Biomedicine; Honors and Awards
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dopamine modulation of aggression</title>
<link href="https://hdl.handle.net/1721.1/163820" rel="alternate"/>
<author>
<name>Dai, Bing</name>
</author>
<author>
<name>Lin, Dayu</name>
</author>
<id>https://hdl.handle.net/1721.1/163820</id>
<updated>2026-03-08T03:26:40Z</updated>
<published>2025-09-23T00:00:00Z</published>
<summary type="text">Dopamine modulation of aggression
Dai, Bing; Lin, Dayu
Rationale: Aggression is an innate social behavior prevalent across animal species. However, in modern human society, interpersonal aggression is considered disruptive and detrimental to both families and communities. Clinically, antipsychotics, which primarily target dopamine (DA) receptors, have been widely used to suppress hyper-aggression. However, the mechanisms underlying the effects of these antipsychotics remain incompletely understood. Objectives: We reviewed key steps in brain DA synthesis and summarized genetic and pharmacological evidence supporting the role of the mesolimbic DA system in aggression. Next, we discussed recent circuit studies that elucidate DA action in modulating aggression-related brain regions. These lines of evidence collectively suggest that DA acts on different brain regions to facilitate aggression and self-learning, and signals the valence of the fighting experience.
</summary>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Computer Science and Artificial Intelligence Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163819" rel="alternate"/>
<author>
<name>Brooks, Rodney</name>
</author>
<id>https://hdl.handle.net/1721.1/163819</id>
<updated>2025-11-22T03:17:33Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Computer Science and Artificial Intelligence Laboratory
Brooks, Rodney
This report contains the following sections: Highlights; World Wide Web Consortium; Distinguished Lecture Series; Awards/Honors; Affirmative Action
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Ocean Engineering</title>
<link href="https://hdl.handle.net/1721.1/163818" rel="alternate"/>
<author>
<name>Schmidt, Henrik</name>
</author>
<id>https://hdl.handle.net/1721.1/163818</id>
<updated>2025-11-22T03:18:15Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Ocean Engineering
Schmidt, Henrik
This report contains the following sections: Current Goals and Objectives, Focus, and Priorities; Accomplishments; Administrative Initiatives; Strategic Planning; Future Plans; Personnel Information; Student Awards; Teaching and Curriculum; Current Research Projects
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Department of Mechanical Engineering</title>
<link href="https://hdl.handle.net/1721.1/163817" rel="alternate"/>
<author>
<name>Abeyaratne, Rohan</name>
</author>
<id>https://hdl.handle.net/1721.1/163817</id>
<updated>2025-11-22T03:17:21Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Department of Mechanical Engineering
Abeyaratne, Rohan
This report contains the following sections: Undergraduate Program; Graduate Program; Faculty Notes
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Institute for Soldier Nanotechnologies</title>
<link href="https://hdl.handle.net/1721.1/163816" rel="alternate"/>
<author>
<name>Thomas, Edwin L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163816</id>
<updated>2025-11-22T03:17:38Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Institute for Soldier Nanotechnologies
Thomas, Edwin L.
This report contains the following sections: Research; Soldier Design Competition; Industrial Collaboration; Facilities; Outreach; Appointments, Visitors, and Awards; Future Plans
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Operations Research Center</title>
<link href="https://hdl.handle.net/1721.1/163815" rel="alternate"/>
<author>
<name>Orlin, James B.</name>
</author>
<author>
<name>Tsitsiklis, John N.</name>
</author>
<id>https://hdl.handle.net/1721.1/163815</id>
<updated>2025-11-22T03:18:29Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Operations Research Center
Orlin, James B.; Tsitsiklis, John N.
This report contains the following sections: Faculty, Students, Staff; Outreach and Professional Service; Operational Issues; Future Plans; Diversity; Professional Activities
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Whitaker College of Health Sciences and Technology</title>
<link href="https://hdl.handle.net/1721.1/163814" rel="alternate"/>
<author>
<name>Samson, Leona D.</name>
</author>
<author>
<name>Dedon, Peter C.</name>
</author>
<author>
<name>Wurtman, Richard J.</name>
</author>
<author>
<name>Fox, James G.</name>
</author>
<author>
<name>Gray, Martha L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163814</id>
<updated>2025-11-22T03:17:22Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Whitaker College of Health Sciences and Technology
Samson, Leona D.; Dedon, Peter C.; Wurtman, Richard J.; Fox, James G.; Gray, Martha L.
This report contains the following sections: Center for Environmental Health Sciences; Research Cores; Core Facilities; Pilot Project Program; Plans for 2005; Clinical Research Center; Administration; Education; Affirmative Action; Research Activities; Center for Experimental Pharmacology and Therapeutics; Computer Facility; Core Laboratory/Mass Spectrometry Facility; Research Highlights; CRC Investigator-Initiated Programs; Division of Comparative Medicine; Facility Management and Animal Care; Research Activities; Academic Activities; Committee on Animal Care Activities; Harvard-MIT Division of Health Sciences and Technology
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Vice President for Research and Associate Provost</title>
<link href="https://hdl.handle.net/1721.1/163813" rel="alternate"/>
<author>
<name>Gast, Alice P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163813</id>
<updated>2025-11-22T03:17:19Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Vice President for Research and Associate Provost
Gast, Alice P.
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of Government and Community Relations</title>
<link href="https://hdl.handle.net/1721.1/163812" rel="alternate"/>
<author>
<name>Gallop, Sarah E.</name>
</author>
<author>
<name>Parravano, Paul</name>
</author>
<id>https://hdl.handle.net/1721.1/163812</id>
<updated>2025-11-22T03:17:50Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of Government and Community Relations
Gallop, Sarah E.; Parravano, Paul
This report contains the following sections: Local Government Relations; Federal Government Relations; Community Relations; Schools; Business; Community Service Fund
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of the President: In Special Recognition</title>
<link href="https://hdl.handle.net/1721.1/163811" rel="alternate"/>
<author>
<name/>
</author>
<id>https://hdl.handle.net/1721.1/163811</id>
<updated>2025-11-22T03:17:56Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of the President: In Special Recognition
This report contains the following sections: Honors and Awards; In Memoriam
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Humanities, Arts, and Social Sciences Education Office</title>
<link href="https://hdl.handle.net/1721.1/163810" rel="alternate"/>
<author>
<name>Davis, Bette</name>
</author>
<id>https://hdl.handle.net/1721.1/163810</id>
<updated>2025-11-22T03:17:48Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Humanities, Arts, and Social Sciences Education Office
Davis, Bette
This report contains the following sections: HASS Enrollment Statistics by Field and Subject—Recent Trends; HASS Concentrations: Patterns of Popularity; HASS Minor Programs; Harvard Cross-Registration; Undergraduate Degrees Granted in SHASS; Undergraduate Majors in SHASS; Honors and Awards Granted to Undergraduate Majors in SHASS
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, School of Humanities, Arts, and Social Sciences</title>
<link href="https://hdl.handle.net/1721.1/163809" rel="alternate"/>
<author>
<name>Khoury, Philip S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163809</id>
<updated>2025-11-22T03:17:13Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, School of Humanities, Arts, and Social Sciences
Khoury, Philip S.
This report contains the following sections: Undergraduate Education; Affirmative Action; Honors and Awards; Fundraising; Faculty Promotions, Administrative Changes, and Retirements
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, iCampus</title>
<link href="https://hdl.handle.net/1721.1/163808" rel="alternate"/>
<author>
<name>Bisbee, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/163808</id>
<updated>2025-11-22T03:17:43Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, iCampus
Bisbee, Rebecca
This report contains the following sections: iCampus Research; Learning Web Services; iLab: Sharing Laboratory Equipment via Web Services; iMoat: Shared Services for Writing Instruction; Reinventing the Classroom with Educational Technology; Computer Science; Physics; Overview of Other Faculty Research; Student Projects
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, List Visual Arts Center</title>
<link href="https://hdl.handle.net/1721.1/163807" rel="alternate"/>
<author>
<name>Farver, Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/163807</id>
<updated>2025-11-22T03:17:18Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, List Visual Arts Center
Farver, Jane
This report contains the following sections: Current Goals; Accomplishments; Exhibitions; Dean’s Gallery at the Sloan School; Interpretive Program Highlights; Collections; Permanent Collection; Percent for Art; Student Loan Art Collection; Administrative Changes; Finances/Funding; Future Goals; Personnel Information; Advisory Committee
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Plasma Science and Fusion Center</title>
<link href="https://hdl.handle.net/1721.1/163806" rel="alternate"/>
<author>
<name>Porkolab, Miklos</name>
</author>
<id>https://hdl.handle.net/1721.1/163806</id>
<updated>2025-11-22T03:18:11Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Plasma Science and Fusion Center
Porkolab, Miklos
This report contains the following sections: Alcator Division; Physics Research Division; Waves and Beams Division; Fusion Engineering and Technology Division; Plasma Technology Division; Educational Outreach Programs; Awards, Appointments, and Promotions
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Center for Materials Science and Engineering</title>
<link href="https://hdl.handle.net/1721.1/163805" rel="alternate"/>
<author>
<name>Rubner, Michael F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163805</id>
<updated>2025-11-22T03:17:08Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Center for Materials Science and Engineering
Rubner, Michael F.
This report contains the following sections: Administration, Management, and Research; Interdisciplinary Research Programs; Seed Projects; Shared Experimental Facilities; Collaboration, Outreach and Knowledge Transfer; Education and Human Resources; Pre-College Education; Undergraduate Education; Graduate Education; Colloquia
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, International Scholars Office</title>
<link href="https://hdl.handle.net/1721.1/163804" rel="alternate"/>
<author>
<name>Rosser, Penny</name>
</author>
<id>https://hdl.handle.net/1721.1/163804</id>
<updated>2025-11-22T03:17:07Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, International Scholars Office
Rosser, Penny
This report contains the following sections: MIT’S International Scholar Population FY2004; MIT Initiatives and the ISO; Primary Activities and Accomplishments; Sarah and Thomas Kailath International Scholars Fund; Technology; Personnel
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Laboratory for Energy and the Environment</title>
<link href="https://hdl.handle.net/1721.1/163803" rel="alternate"/>
<author>
<name>Marks, David H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163803</id>
<updated>2025-11-22T03:17:23Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Laboratory for Energy and the Environment
Marks, David H.
This report contains the following sections: Component Programs; Alliance for Global Sustainability; MIT/AGS Consortium on Environmental Challenges; Carbon Capture and Sequestration Technologies Program; Analysis Group for Regional Electricity Alternatives; Political Economy and Technology Policy Group; Affiliated Research; Building Technology Program; Center for Advanced Nuclear Energy Systems; Sloan Automotive Laboratory; Center for Energy and Environmental Policy Research; Joint Program on the Science and Policy of Global Change; Education and Curriculum Initiatives
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, McGovern Institute for Brain Research</title>
<link href="https://hdl.handle.net/1721.1/163802" rel="alternate"/>
<author>
<name>Sharp, Phillip A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163802</id>
<updated>2025-11-22T03:18:20Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, McGovern Institute for Brain Research
Sharp, Phillip A.
This report contains the following sections: Personnel; Activities; Awards and Honors; Research Accomplishments
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2004, Office of the President: Statistics of the Year</title>
<link href="https://hdl.handle.net/1721.1/163801" rel="alternate"/>
<author>
<name/>
</author>
<id>https://hdl.handle.net/1721.1/163801</id>
<updated>2025-11-22T03:17:17Z</updated>
<published>2004-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2004, Office of the President: Statistics of the Year
This report contains the following sections: Registration; Degrees Awarded; Financial Aid; MIT Careers Office; Private Support; Finances; Facilities and Campus Environment
</summary>
<dc:date>2004-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Singularities of Ricci flow and diffeomorphisms</title>
<link href="https://hdl.handle.net/1721.1/163800" rel="alternate"/>
<author>
<name>Colding, Tobias H.</name>
</author>
<author>
<name>Minicozzi, William P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163800</id>
<updated>2026-03-08T03:26:46Z</updated>
<published>2025-09-22T00:00:00Z</published>
<summary type="text">Singularities of Ricci flow and diffeomorphisms
Colding, Tobias H.; Minicozzi, William P.
We solve a well-known open problem in Ricci flow: strong rigidity of cylinders. Strong rigidity is an illustration of a shrinker principle that uniqueness radiates out from a compact set. It implies that if one tangent flow at a future singular point is a cylinder, then all tangent flows are. At the heart of this problem in Ricci flow is comparing and recognizing metrics. This can be rather complicated because of the group of diffeomorphisms. Two metrics, which could even be the same, can look completely different in different coordinates. This is the gauge problem. Often it can be avoided if one uses some additional structure of the particular situation. The gauge problem is subtle for non-compact spaces without additional structure. We solve this gauge problem by solving a nonlinear system of PDEs. The PDE produces a diffeomorphism that fixes an appropriate gauge in the spirit of the slice theorem for group actions. We then show optimal bounds for the displacement function of the diffeomorphism. Strong rigidity relies on gauge fixing and several other new ideas. One of these is “propagation of almost splitting”, another is quadratic rigidity in the right gauge, and a third is an optimal polynomial growth bound for PDEs that holds in great generality.
</summary>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Parametric, second-order cone representable model of fairness for decision-making problems</title>
<link href="https://hdl.handle.net/1721.1/163799" rel="alternate"/>
<author>
<name>Sundar, Kaarthik</name>
</author>
<author>
<name>Deka, Deepjyoti</name>
</author>
<author>
<name>Bent, Russell</name>
</author>
<id>https://hdl.handle.net/1721.1/163799</id>
<updated>2025-11-22T03:15:12Z</updated>
<published>2025-04-10T00:00:00Z</published>
<summary type="text">A Parametric, second-order cone representable model of fairness for decision-making problems
Sundar, Kaarthik; Deka, Deepjyoti; Bent, Russell
The article develops a parametric model of fairness called “ε-fairness” that can be represented using a single second-order cone constraint and incorporated into existing decision-making problem formulations without impacting the complexity of solution techniques. We develop the model from the fundamental result of finite-dimensional norm equivalence in linear algebra and show that this model has a closed-form relationship to an existing metric for measuring fairness widely used in the literature. Finally, a simple case study on the optimal operation of a damaged power transmission network illustrates its effectiveness.
</summary>
<dc:date>2025-04-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semiclassical Measures for Complex Hyperbolic Quotients</title>
<link href="https://hdl.handle.net/1721.1/163798" rel="alternate"/>
<author>
<name>Athreya, Jayadev</name>
</author>
<author>
<name>Dyatlov, Semyon</name>
</author>
<author>
<name>Miller, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/163798</id>
<updated>2025-11-22T03:15:00Z</updated>
<published>2025-08-28T00:00:00Z</published>
<summary type="text">Semiclassical Measures for Complex Hyperbolic Quotients
Athreya, Jayadev; Dyatlov, Semyon; Miller, Nicholas
We study semiclassical measures for Laplacian eigenfunctions on compact complex hyperbolic quotients. Geodesic flows on these quotients are a model case of hyperbolic dynamical systems with different expansion/contraction rates in different directions. We show that the support of any semiclassical measure is either equal to the entire cosphere bundle or contains the cosphere bundle of a compact immersed totally geodesic complex submanifold. The proof uses the one-dimensional fractal uncertainty principle of Bourgain–Dyatlov (Ann. Math. (2) 187(3):825–867, 2018) along the fast expanding/contracting directions, in a way similar to the work of Dyatlov–Jézéquel (Ann. Henri Poincaré, 2023) in the toy model of quantum cat maps, together with a description of the closures of fast unstable/stable trajectories relying on Ratner theory.
</summary>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>A divisor generating q-series and cumulants arising from random graphs</title>
<link href="https://hdl.handle.net/1721.1/163797" rel="alternate"/>
<author>
<name>Agarwal, Archit</name>
</author>
<author>
<name>Bhoria, Subhash C.</name>
</author>
<author>
<name>Eyyunni, Pramod</name>
</author>
<author>
<name>Maji, Bibekananda</name>
</author>
<author>
<name>Wakhare, Tanay</name>
</author>
<id>https://hdl.handle.net/1721.1/163797</id>
<updated>2025-11-22T03:15:30Z</updated>
<published>2025-11-20T00:00:00Z</published>
<summary type="text">A divisor generating q-series and cumulants arising from random graphs
Agarwal, Archit; Bhoria, Subhash C.; Eyyunni, Pramod; Maji, Bibekananda; Wakhare, Tanay
Uchimura, in 1987, introduced a probability generating function for a random variable X and, using properties of this function, discovered an interesting q-series identity. He further showed that the m-th cumulant of the random variable X is precisely the generating function for the generalized divisor function σ_{m-1}(n). Simon, Crippa, and Collenberg, in 1993, explored the G_{n,p}-model of a random acyclic digraph and defined a random variable γ_n^*(1). Quite interestingly, they found links between the limit of its mean and the generating function for the divisor function d(n). Later, in 1997, Andrews, Crippa, and Simon extended these results using q-series techniques. They calculated the limits of the mean and the variance of the random variable γ_n^*(1), which correspond to the first and second cumulants. In this paper, we generalize the result of Andrews, Crippa, and Simon by calculating the limit of the t-th cumulant in terms of the generalized divisor function. Furthermore, we also discover limit forms for identities of Uchimura and Dilcher. This provides a fourth side to the Uchimura–Ramanujan–divisor-type three-way partition identities expounded by the first four authors recently.
</summary>
<dc:date>2025-11-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Arithmetic properties encoded in undermonoids</title>
<link href="https://hdl.handle.net/1721.1/163796" rel="alternate"/>
<author>
<name>Gotti, Felix</name>
</author>
<author>
<name>Li, Bangzheng</name>
</author>
<id>https://hdl.handle.net/1721.1/163796</id>
<updated>2026-03-08T03:26:33Z</updated>
<published>2025-09-19T00:00:00Z</published>
<summary type="text">Arithmetic properties encoded in undermonoids
Gotti, Felix; Li, Bangzheng
Let M be a cancellative and commutative monoid. A submonoid N of M is called an undermonoid if the Grothendieck groups of M and N coincide. For a given property p, we are interested in providing an answer to the following main question: does it suffice to check that all undermonoids of M satisfy p to conclude that all submonoids of M satisfy p? In this paper, we give a positive answer to this question for the property of being atomic, and then we prove that if M is hereditarily atomic (i.e., every submonoid of M is atomic), then M must satisfy the ACCP, proving a recent conjecture posed by Vulakh and the first author. We also give positive answers to our main question for the following well-studied factorization properties: the bounded factorization property, half-factoriality, and length-factoriality. Finally, we determine all the monoids whose submonoids/undermonoids are half-factorial/length-factorial.
</summary>
<dc:date>2025-09-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>The rocky road to modernity: an assessment of Pakistan’s 75 years</title>
<link href="https://hdl.handle.net/1721.1/163795" rel="alternate"/>
<author>
<name>Hoodbhoy, Pervez</name>
</author>
<id>https://hdl.handle.net/1721.1/163795</id>
<updated>2026-03-08T03:31:36Z</updated>
<published>2022-12-12T00:00:00Z</published>
<summary type="text">The rocky road to modernity: an assessment of Pakistan’s 75 years
Hoodbhoy, Pervez
To assess whether Pakistan is moving towards or away from modernity, I examine here the evolution of three key aspects: the overall idea system of society, the political system, and national culture. A meaningful analysis must begin with pre-colonial India, examine how British rule made fundamental changes, and trace the emergence of Pakistan as a result of Muslim religious identity. Although the beginnings of Pakistani modernity were shaky, the earlier inclination was to equalise with the developed world at large. In the mid-1980s this changed profoundly with the advent of political Islam, explicit repudiation of overt forms of western modernity, and a sharply increased tendency to seek exemplars in the Islamic past. That trend has since accelerated under the influence of social media. But most Pakistanis, I argue, still want to hedge their bets and seek the fruits of modernity within a framework that they perceive as not inimical to their faith in Islam.
</summary>
<dc:date>2022-12-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precipitate Size in GRCop-42 and GRCop-84 Cu-Cr-Nb Alloy Gas Atomized Powder and L-PBF Additive Manufactured Material</title>
<link href="https://hdl.handle.net/1721.1/163794" rel="alternate"/>
<author>
<name>Seltzman, AH</name>
</author>
<author>
<name>Wukitch, SJ</name>
</author>
<id>https://hdl.handle.net/1721.1/163794</id>
<updated>2026-03-08T03:31:35Z</updated>
<published>2023-01-26T00:00:00Z</published>
<summary type="text">Precipitate Size in GRCop-42 and GRCop-84 Cu-Cr-Nb Alloy Gas Atomized Powder and L-PBF Additive Manufactured Material
Seltzman, AH; Wukitch, SJ
Laser powder bed fusion (L-PBF) of Glenn Research Copper 42 or 84 (GRCop-42 or GRCop-84) produces a Cr2Nb precipitation-hardened high-conductivity copper alloy with tensile strength superior to other competing copper alloys. Precipitate diameters within GRCop-42 gas-atomized powder increase with powder diameter due to slower cooling rates; however, unlike GRCop-84, no threshold diameter above which extensive precipitate agglomerations form was observed in GRCop-42. Large Cr2Nb crystals were observed in GRCop-42 powder particles, implying formation within the crucible melt. A consistent precipitate volume of ~7% over a range of powder particle diameters indicated a consistent atomization process. Occasional voids were observed in GRCop-42 powder. Precipitate size was refined in L-PBF GRCop-42 to a greater extent than in GRCop-84, improving Orowan strengthening; however, this benefit was lost after heat treatment due to greater coarsening of precipitates. Precipitates in GRCop-42 accumulated on grain boundaries during heat treatment to a greater extent than in GRCop-84.
</summary>
<dc:date>2023-01-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>US-Russian partnerships in science: working with differences</title>
<link href="https://hdl.handle.net/1721.1/163793" rel="alternate"/>
<author>
<name>Dezhina, Irina</name>
</author>
<author>
<name>Wood, Elizabeth A</name>
</author>
<id>https://hdl.handle.net/1721.1/163793</id>
<updated>2026-03-08T03:31:37Z</updated>
<published>2022-02-16T00:00:00Z</published>
<summary type="text">US-Russian partnerships in science: working with differences
Dezhina, Irina; Wood, Elizabeth A
In the early 1990s, Russian and US observers were pessimistic about Russian science and its global integration. Yet scientists from the two countries were actively collaborating in new ways nonetheless. To explore the nature of those collaborations, we conducted open-ended interviews with 13 scientists in the US and 13 in Russia who collaborated transnationally between 1995 and 2014. Our results suggest that recognizing and working with differences benefited these colleagues. Despite ongoing political tensions and differences in scientific cultures, respondents told us that understanding those differences – in funding, cultures of doing science, institutional structures, and treatment of graduate students – helped them avoid missteps. Respect for each other’s country’s scientific contributions, interpersonal diplomacy, and personal interconnections further strengthened their work together. Diaspora scientists, in particular, played a positive role as mediators and cultural interpreters.
</summary>
<dc:date>2022-02-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agrammatic output in non-fluent, including Broca’s, aphasia as a rational behavior</title>
<link href="https://hdl.handle.net/1721.1/163792" rel="alternate"/>
<author>
<name>Fedorenko, Evelina</name>
</author>
<author>
<name>Ryskin, Rachel</name>
</author>
<author>
<name>Gibson, Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/163792</id>
<updated>2026-03-08T03:31:42Z</updated>
<published>2022-11-18T00:00:00Z</published>
<summary type="text">Agrammatic output in non-fluent, including Broca’s, aphasia as a rational behavior
Fedorenko, Evelina; Ryskin, Rachel; Gibson, Edward
Background: Speech of individuals with non-fluent, including Broca's, aphasia is often characterized as "agrammatic" because their output mostly consists of nouns and, to a lesser extent, verbs and lacks function words, like articles and prepositions, and correct morphological endings. Among the earliest accounts of agrammatic output in the early 1900s was the "economy of effort" idea whereby agrammatic output is construed as a way of coping with increases in the cost of language production. This idea resurfaced in the 1980s, but in general, the field of language research has largely focused on accounts of agrammatism that postulated core deficits in syntactic knowledge.&#13;
Aims: We here revisit the economy of effort hypothesis in light of increasing emphasis in cognitive science on rational and efficient behavior.&#13;
Main contribution: The critical idea is as follows: there is a cost per unit of linguistic output, and this cost is greater for patients with non-fluent aphasia. For a rational agent, this increase leads to shorter messages. Critically, the informative parts of the message should be preserved and the redundant ones (like the function words and inflectional markers) should be omitted. Although economy of effort is unlikely to provide a unifying account of agrammatic output in all patients (the relevant population is too heterogeneous and the empirical landscape too complex for any single-factor explanation), we argue that the idea of agrammatic output as a rational behavior was dismissed prematurely and appears to provide a plausible explanation for a large subset of the reported cases of expressive aphasia.&#13;
Conclusions: The rational account of expressive agrammatism should be evaluated more carefully and systematically. On the basic research side, pursuing this hypothesis may reveal how the human mind and brain optimize communicative efficiency in the presence of production difficulties. And on the applied side, this construal of expressive agrammatism emphasizes the strengths of some patients to flexibly adapt utterances in order to communicate in spite of grammatical difficulties; and focusing on these strengths may be more effective than trying to "fix" their grammar.
</summary>
<dc:date>2022-11-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Burns on Strauss’s Liberating Liberal Education</title>
<link href="https://hdl.handle.net/1721.1/163791" rel="alternate"/>
<author>
<name>Rabieh, Linda R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163791</id>
<updated>2026-03-08T03:31:40Z</updated>
<published>2023-01-18T00:00:00Z</published>
<summary type="text">Burns on Strauss’s Liberating Liberal Education
Rabieh, Linda R.
Leo Strauss on Democracy, Technology, and Liberal Education is an invaluable source of historical learning and philosophic guidance. Timothy W. Burns provides us with an in-depth and careful study of four important writings by Leo Strauss that examine the challenges faced by modern democracy and the ways in which liberal education can supply a modest remedy. According to Burns, Strauss understands the problems facing modern democracy to be rooted in the ascendancy of technology as the ultimate political aim, which prioritizes acquiring the means to pursue whatever ends we happen to desire rather than the good life itself (9). Subsequent developments in the service of this goal have led to our present situation, which Strauss characterizes as “hardly more than the interplay of mass taste with high grade but strictly speaking unprincipled efficiency” (13; see also 35, 69, 75–78). Burns sharpens his analysis of Strauss by comparing Strauss’s understanding of technology with that of Heidegger. In contrast to Heidegger’s argument for a “new thinking” to address modernity’s ills, Strauss looks to an older thinking from which he gleans an argument for liberal education, which he describes as the cultivation of “an aristocracy within democracy,” i.e., a class within society whose thinking is informed by both serious education in tradition and the study of the Great Books (15; see also 21, 84, 166). Although Burns’s book addresses many aspects of Strauss’s account of the way in which technology came to dominate politics and shape our modern world, I will focus on the thread throughout these essays that explains what Strauss means by liberal education and why it is needed today.
</summary>
<dc:date>2023-01-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-UNet: enhancing skin-lesion segmentation with multimodal feature integration and uncertainty estimation</title>
<link href="https://hdl.handle.net/1721.1/163790" rel="alternate"/>
<author>
<name>Sikha, O. K.</name>
</author>
<author>
<name>Stone, Alaysia L. B.</name>
</author>
<author>
<name>González Ballester, Miguel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163790</id>
<updated>2026-03-08T03:31:29Z</updated>
<published>2025-07-30T00:00:00Z</published>
<summary type="text">Meta-UNet: enhancing skin-lesion segmentation with multimodal feature integration and uncertainty estimation
Sikha, O. K.; Stone, Alaysia L. B.; González Ballester, Miguel A.
Purpose Medical image segmentation plays a crucial role in diagnostic pipelines. This study investigates the integration of lesion-specific metadata with image data to enhance segmentation accuracy and reduce predictive uncertainty. Methods The standard U-Net architecture was modified to incorporate lesion-specific metadata (Meta-UNet). Various integration strategies, including addition, weighted addition, and embedding layers, were evaluated. Additionally, a Bayesian Meta-UNet with Monte Carlo Dropout (MCD) was developed to assess the impact of metadata integration on model uncertainty. Uncertainty was quantified using measures such as Confidence Maps, Entropy, Mutual Information, and Expected Pairwise Kullback–Leibler divergence (EPKL). An aggregation strategy was also introduced to provide a single comprehensive uncertainty score per image. Results Meta-UNet outperformed standard U-Net across PH2, ISIC 2018, and HAM10000 datasets. On PH2, it achieved 84.64% accuracy and 90.62% Intersection over Union (IoU), compared to 83.36% and 89.19%, respectively, for the standard U-Net. On ISIC 2018, U-Net scored 71.02 ± 6.69 IoU and 79.89 ± 5.09 Dice. On HAM10000, Meta-UNet achieved 88.66 ± 6.09 IoU and 93.42 ± 5.19 Dice. Meta-UNet reduced uncertainty (e.g., 0.149 vs. 0.1745), highlighting the benefit of metadata integration in improving segmentation accuracy and model confidence. Conclusion Integrating lesion-specific metadata into the U-Net architecture significantly improves segmentation accuracy and reduces predictive uncertainty. The inclusion of metadata enhances model confidence and reliability, underscoring its potential to strengthen diagnostic segmentation pipelines.
</summary>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing the quantum tunneling probability through a learned ancilla-assisted protocol</title>
<link href="https://hdl.handle.net/1721.1/163789" rel="alternate"/>
<author>
<name>Testa, Renzo</name>
</author>
<author>
<name>Rodriguez Garcia, Alejandro</name>
</author>
<author>
<name>d’Onofrio, Alberto</name>
</author>
<author>
<name>Trombettoni, Andrea</name>
</author>
<author>
<name>Benatti, Fabio</name>
</author>
<author>
<name>Anselmi, Fabio</name>
</author>
<id>https://hdl.handle.net/1721.1/163789</id>
<updated>2026-03-08T03:23:26Z</updated>
<published>2025-08-05T00:00:00Z</published>
<summary type="text">Increasing the quantum tunneling probability through a learned ancilla-assisted protocol
Testa, Renzo; Rodriguez Garcia, Alejandro; d’Onofrio, Alberto; Trombettoni, Andrea; Benatti, Fabio; Anselmi, Fabio
Increasing the probability of quantum tunneling between two states, while keeping constant the resources of the underlying physical system, is a task of key importance in several physical contexts and platforms, including ultracold atoms confined by double-well potentials and superconducting qubits. We propose a novel ancilla-assisted protocol, showing that when a quantum system—such as a qubit—is coupled to an ancilla, one can learn the optimal ancillary component and its coupling to increase the tunneling probability. As a case study, we consider a quantum system that, due to the presence of an energy detuning between two modes, cannot transfer particles by tunneling from one mode to the other. However, it does so through a learned coupling with an ancillary system characterized by a detuning not smaller than that of the primary system. We provide several illustrative examples for the paradigmatic case of a two-mode system and a two-mode ancilla in the presence of interacting particles. This reduces to a qubit coupled to an ancillary qubit in the case of one particle in the system and one in the ancilla. Our proposal provides an effective method to increase the tunneling probability in all those physical situations where no direct improvement of the system parameters, such as the tunneling coefficient or energy detuning, is either possible or resource efficient. Finally, we also argue that the proposed strategy is not hampered by weak coupling to noisy environments.
</summary>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Attitudes, aboutness, and indirect restriction</title>
<link href="https://hdl.handle.net/1721.1/163788" rel="alternate"/>
<author>
<name>von Fintel, Kai</name>
</author>
<author>
<name>Pasternak, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/163788</id>
<updated>2026-03-08T03:23:24Z</updated>
<published>2025-08-04T00:00:00Z</published>
<summary type="text">Attitudes, aboutness, and indirect restriction
von Fintel, Kai; Pasternak, Robert
On its surface, a sentence like If Laura becomes a zombie, she wants you to shoot her looks like a plain conditional with the attitude want in its consequent. However, the most salient reading of this sentence is not about the desires of a hypothetical zombie-Laura. Rather, it asserts that the actual, non-zombie Laura has a certain restricted attitude: her present desires, when considering only possible states of affairs in which she becomes a zombie, are such that you shoot her. This can be contrasted with the shifted reading about zombie-desires that arises with conditional morphosyntax, e.g., If Laura became a zombie, she would want you to shoot her. Furthermore, as Blumberg and Holguín (J Semant 36(3):377–406, 2019) note, restricted attitude readings can also arise in disjunctive environments, as in Either a lot of people are on the deck outside, or I regret that I didn’t bring more friends. We provide a novel analysis of restricted and shifted readings in conditional and disjunctive environments, with a few crucial features. First, both restricted and shifted attitude conditionals are in fact “regular” conditionals with attitudes in their consequents, which accords with their surface-level appearance and contrasts with Pasternak’s (The mereology of attitudes, Ph.D. thesis, Stony Brook University, Stony Brook, NY, 2018) Kratzerian approach, in which the if-clause restricts the attitude directly. Second, whether the attitude is or is not shifted—i.e., zombie versus actual desires—is dependent on the presence or absence of conditional morphosyntax. And third, the restriction of the attitude is effected by means of aboutness, a concept for which we provide two potential implementations. We conclude by discussing our analysis’s prospective repercussions for the theory of conditionals more generally.
</summary>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sulfated dietary fiber protects gut microbiota from antibiotics</title>
<link href="https://hdl.handle.net/1721.1/163787" rel="alternate"/>
<author>
<name>Wu, Fuqing</name>
</author>
<author>
<name>Yu, Xiaoqian A.</name>
</author>
<author>
<name>Angeles-Albores, David</name>
</author>
<author>
<name>Erdman, Susan E.</name>
</author>
<author>
<name>Alm, Eric J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163787</id>
<updated>2026-03-08T03:23:26Z</updated>
<published>2025-08-06T00:00:00Z</published>
<summary type="text">Sulfated dietary fiber protects gut microbiota from antibiotics
Wu, Fuqing; Yu, Xiaoqian A.; Angeles-Albores, David; Erdman, Susan E.; Alm, Eric J.
Background Antibiotics, while essential for combating pathogens, also disrupt commensal bacteria, leading to gut microbiota imbalance and associated diseases. However, strategies to mitigate such collateral damage remain largely underexplored. Results In this study, we found that fucoidan, a marine polysaccharide derived from brown seaweed, provides broad-spectrum growth protection against multiple classes of antibiotics for human gut microbial isolates in vitro and for fecal communities ex vivo. This protective effect depends on the structural integrity, molecular weight, and sulfur content of the polysaccharide. Transcriptomic analysis showed that while fucoidan had minimal impact on baseline gene expression, it counteracted about 60% of the genes induced by kanamycin, suggesting a potential inhibition of kanamycin. Mass spectrometry results further showed that this inhibition may be due to non-specific binding of fucoidan to kanamycin in solution. Finally, animal model experiments revealed that fucoidan facilitated the recovery of gut microbes following antibiotic treatment in vivo. Conclusion These findings suggest fucoidan could serve as a potential intervention to help protect gut microbiota during antibiotic therapy. Further studies are needed to evaluate its clinical potential and ensure it does not compromise antimicrobial efficacy.
</summary>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Additivity, Haag duality, and non-invertible symmetries</title>
<link href="https://hdl.handle.net/1721.1/163786" rel="alternate"/>
<author>
<name>Shao, Shu-Heng</name>
</author>
<author>
<name>Sorce, Jonathan</name>
</author>
<author>
<name>Srivastava, Manu</name>
</author>
<id>https://hdl.handle.net/1721.1/163786</id>
<updated>2026-03-08T03:26:33Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">Additivity, Haag duality, and non-invertible symmetries
Shao, Shu-Heng; Sorce, Jonathan; Srivastava, Manu
The algebraic approach to quantum field theory focuses on the properties of local algebras, whereas the study of (possibly non-invertible) global symmetries emphasizes global aspects of the theory and spacetime. We study connections between these two perspectives by examining how either of two core algebraic properties — “additivity” or “Haag duality” — is violated in a 1+1D CFT or lattice model restricted to the symmetric sector of a general global symmetry. For the Verlinde symmetry of a bosonic diagonal RCFT, we find that additivity is violated whenever the symmetry algebra contains an invertible element, while Haag duality is violated whenever it contains a non-invertible element. We find similar phenomena for the Kramers-Wannier and Rep(D8) non-invertible symmetries on spin chains.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Observation of the Λ_b^0 → J/ψ Ξ^- K^+ and Ξ_b^0 → J/ψ Ξ^- π^+ decays</title>
<link href="https://hdl.handle.net/1721.1/163785" rel="alternate"/>
<author>
<name/>
</author>
<id>https://hdl.handle.net/1721.1/163785</id>
<updated>2026-03-08T03:22:55Z</updated>
<published>2025-07-28T00:00:00Z</published>
<summary type="text">Observation of the Λ_b^0 → J/ψ Ξ^- K^+ and Ξ_b^0 → J/ψ Ξ^- π^+ decays
The first observation of the Ξ_b^0 → J/ψ Ξ^- π^+ decay and the most precise measurement of the branching fraction of the Λ_b^0 → J/ψ Ξ^- K^+ decay are reported, using proton-proton collision data from the LHCb experiment collected in 2016–2018 at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.4 fb^-1. Using the Λ_b^0 → J/ψ Λ and Ξ_b^- → J/ψ Ξ^- decays as normalisation channels, the ratios of branching fractions are measured to be B(Λ_b^0 → J/ψ Ξ^- K^+)/B(Λ_b^0 → J/ψ Λ) = (1.17 ± 0.14 ± 0.08) × 10^-2 and B(Ξ_b^0 → J/ψ Ξ^- π^+)/B(Ξ_b^- → J/ψ Ξ^-) = (11.9 ± 1.4 ± 0.6) × 10^-2, where the first uncertainty is statistical and the second systematic.
</summary>
<dc:date>2025-07-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incorporating teacher effect when modeling student engagement in smart STEM classrooms: a cluster analysis</title>
<link href="https://hdl.handle.net/1721.1/163784" rel="alternate"/>
<author>
<name>Shreeve, Kelly</name>
</author>
<author>
<name>Perry, Anthony</name>
</author>
<author>
<name>Cassidy, Michael</name>
</author>
<author>
<name>Jessen Eller, Kathryn</name>
</author>
<author>
<name>Price, Beth</name>
</author>
<author>
<name>Jackson, Brandy</name>
</author>
<author>
<name>Celi, Leo</name>
</author>
<author>
<name>Lourentzou, Ismini</name>
</author>
<author>
<name>Hendrik, Luk</name>
</author>
<id>https://hdl.handle.net/1721.1/163784</id>
<updated>2026-03-08T03:26:34Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Incorporating teacher effect when modeling student engagement in smart STEM classrooms: a cluster analysis
Shreeve, Kelly; Perry, Anthony; Cassidy, Michael; Jessen Eller, Kathryn; Price, Beth; Jackson, Brandy; Celi, Leo; Lourentzou, Ismini; Hendrik, Luk
Student engagement during learning serves as a critical predictor of academic success and plays a pivotal role in nurturing interest and readiness for future careers. As digital platforms become increasingly important to learning, it is essential that we understand how the interactions that students have with them reflect their engagement with learning. Previous research has often modeled engagement in a fully online context, where students pursue lessons independently and outside the influence of the classroom, paced and structured by digital systems. However, in STEM (Science, Technology, Engineering, and Math) subjects—and many others—learning more frequently happens in a physical classroom setting, under the guidance of a teacher, and involves interactions with other students and tangible objects. Here, digital materials are used to scaffold and support learning but are not typically where the learning itself happens. To study how student interactions with digital materials in these settings might allow us to measure, evaluate, and help teachers enhance engagement, we have developed and deployed a smart digital learning platform that guides instruction and captures real-time multimodal student learning events in the physical STEM classroom. Previously, we have shown that a subset of student interactions measured with this platform can be used to model student learning and generate human-like insights into engagement. Here we report on the significant influence that teachers have on student interactions with our smart platform in the STEM classroom, and the impact that this has on evaluating their engagement with learning. In an analysis of 108 high school students who used the platform to complete a 19-lesson data science curriculum in 5 different classrooms, we found significant differences between teachers both in the measured time students spent on the lesson and in the percentage of the lesson they completed. In this setting, taking teacher influence into account improves the outcomes of our machine learning clustering models that group students based on their level of engagement. These findings inform how we develop smart classroom technology and machine learning applications that are globally informed but locally relevant, and support teachers to enhance student engagement and learning outcomes in dynamic and highly variable STEM classroom learning environments.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>On determining αs(mZ) from dijets in e+e− thrust</title>
<link href="https://hdl.handle.net/1721.1/163783" rel="alternate"/>
<author>
<name>Benitez, Miguel A.</name>
</author>
<author>
<name>Hoang, André H.</name>
</author>
<author>
<name>Mateu, Vicent</name>
</author>
<author>
<name>Stewart, Iain W.</name>
</author>
<author>
<name>Vita, Gherardo</name>
</author>
<id>https://hdl.handle.net/1721.1/163783</id>
<updated>2026-03-08T03:21:33Z</updated>
<published>2025-07-25T00:00:00Z</published>
<summary type="text">On determining αs(mZ) from dijets in e+e− thrust
Benitez, Miguel A.; Hoang, André H.; Mateu, Vicent; Stewart, Iain W.; Vita, Gherardo
We update a previous N3LL′ + O(α_s^3) determination of the strong coupling from a global fit to thrust data by including newly available perturbative ingredients, upgrading the renormalization scales to include a fully canonical scaling region, and implementing the log resummation in a way which ensures the integrated cross section is unaffected by the leading 1/Q hadronization power corrections. Detailed discussions are provided concerning the stability of the results under variations of the fit range and the importance of summing up higher-order logarithmic terms for convergence and stability. We show that high-precision results can be achieved even when carrying out a more conservative fit by restricting the dataset to a region which is more clearly dominated by dijet events. This leads to αs(mZ) = 0.1136 ± 0.0012 with χ2/dof = 0.86, fully compatible with earlier results using a larger fit range. We also demonstrate that a number of additional effects associated with power corrections have a small impact on this fit result, including modifications to the renormalon subtraction scheme for dijet power corrections and the inclusion of three-jet power correction models. The fit is also shown to provide very good agreement with data outside the fit range.
</summary>
<dc:date>2025-07-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the ψ(2S) to J/ψ cross-section ratio as a function of centrality in PbPb collisions at √sNN = 5.02 TeV</title>
<link href="https://hdl.handle.net/1721.1/163782" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>LHCb collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163782</id>
<updated>2026-03-08T03:21:32Z</updated>
<published>2025-07-23T00:00:00Z</published>
<summary type="text">Measurement of the ψ(2S) to J/ψ cross-section ratio as a function of centrality in PbPb collisions at √sNN = 5.02 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; LHCb collaboration
The ratio of prompt production cross-sections of ψ(2S) and J/ψ mesons in their dimuon final state is measured as a function of centrality, using data collected by the LHCb detector in PbPb collisions at √sNN = 5.02 TeV, for the first time in the forward rapidity region. The measured ratio shows no dependence on the collision centrality, and is compared to the latest theory predictions and to recent measurements in the literature.
</summary>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Legal causation*</title>
<link href="https://hdl.handle.net/1721.1/163781" rel="alternate"/>
<author>
<name>Byrne, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/163781</id>
<updated>2026-03-08T03:31:22Z</updated>
<published>2022-10-14T00:00:00Z</published>
<summary type="text">Legal causation*
Byrne, Thomas
I propose a new formalist account of legal (/proximate) causation – one that holds legal causation to be a matter of amoral, descriptive fact. The account starts with a metaphysical relation, akin to but distinct from common-sense causation, and it argues that legal causation aligns exactly with that relation; it is unified and principled.
</summary>
<dc:date>2022-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Space Architecture in Microgravity: TESSERAE Project for Large Scale Space Structures</title>
<link href="https://hdl.handle.net/1721.1/163780" rel="alternate"/>
<author>
<name>Ekblaw, Ariel</name>
</author>
<id>https://hdl.handle.net/1721.1/163780</id>
<updated>2026-03-08T03:31:25Z</updated>
<published>2022-11-21T00:00:00Z</published>
<summary type="text">Space Architecture in Microgravity: TESSERAE Project for Large Scale Space Structures
Ekblaw, Ariel
NASA and international partners are planning a crewed return to the lunar surface in this decade, with the explicit long-term goal of establishing sustainable lunar habitat infrastructure. International space agencies and several space entrepreneurs have shared plans for human missions to Mars in the 2030s. A menagerie of “new space” start-up companies is poised to support extensive activity for in-space habitation. Space exploration is entering an age of burgeoning commercial movement, fueled not only by the unique science experiments performed in microgravity but also by space tourism and a need for inhabitable next-generation space architecture. Designers such as architects, engineers, and space structure practitioners should aim to democratize access to space and challenge the prevailing paradigm of space as an exclusive and inaccessible domain. In that case, they must build space architecture that can scale to welcome, safeguard, and inspire humankind. Our space structures research program applies biomimetic principles to design modular, reconfigurable, and self-assembling space architecture. Currently, the team includes electrical and mechanical engineers, designers, a university-trained architect, and a spaceflight mission integration specialist.
</summary>
<dc:date>2022-11-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Infrastructure, Revenue, and Services: Non-State Governance in Iraq’s Disputed Territories</title>
<link href="https://hdl.handle.net/1721.1/163779" rel="alternate"/>
<author>
<name>Cancian, Matthew</name>
</author>
<author>
<name>Greenwald, Diana B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163779</id>
<updated>2026-03-08T03:31:32Z</updated>
<published>2022-10-05T00:00:00Z</published>
<summary type="text">Infrastructure, Revenue, and Services: Non-State Governance in Iraq’s Disputed Territories
Cancian, Matthew; Greenwald, Diana B.
While states and non-state armed groups often engage in militarised conflict over contested territory, at other times they co-govern in a tenuous equilibrium. Using a survey of over 1,600 Kurdish soldiers (Peshmerga) and elite interviews, we investigate local variation in shared governance in one such context – the disputed territories of northern Iraq. Despite the area being under Kurdish military control, the Iraqi government continued to provide services in districts where it had pre-existing infrastructural capacity. However, in revenue-producing districts, Kurdish actors appropriated infrastructural power to provide services themselves. This illustrates that non-state governance strategies, and their outputs, can vary locally.
</summary>
<dc:date>2022-10-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Which Information Matters? Measuring Landlord Assessment of Tenant Screening Reports</title>
<link href="https://hdl.handle.net/1721.1/163778" rel="alternate"/>
<author>
<name>So, Wonyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/163778</id>
<updated>2026-03-08T03:31:23Z</updated>
<published>2022-08-30T00:00:00Z</published>
<summary type="text">Which Information Matters? Measuring Landlord Assessment of Tenant Screening Reports
So, Wonyoung
This research studies how tenant screening services’ presentation of information influences landlord decisions. Tenant screening services utilize criminal records, eviction records, and credit score databases to produce reports that landlords use to inform their decisions about who to rent to. However, little is known about how landlords assess the information presented by tenant screening reports. Through a behavioral experiment with landlords using simulated tenant screening reports, this study shows that landlords use blanket screening policies, that they conflate the existence of tenant records with outcomes (e.g., eviction filings with executed evictions), and that they display, on average, tendencies toward automation bias that are influenced by the risk assessments and scores presented by tenant screening reports. I argue that maintaining blanket screening policies and automation bias, combined with the downstream effects of creating and using racially biased eviction and criminal records, means that people of color will inevitably experience disproportionate exclusion from rental housing due to perceived “risk” on the part of landlords.
</summary>
<dc:date>2022-08-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to make noise: toward a process model of artistic practice within experimental music scenes</title>
<link href="https://hdl.handle.net/1721.1/163777" rel="alternate"/>
<author>
<name>Woods, Peter J</name>
</author>
<id>https://hdl.handle.net/1721.1/163777</id>
<updated>2026-03-08T03:31:21Z</updated>
<published>2022-07-15T00:00:00Z</published>
<summary type="text">Learning to make noise: toward a process model of artistic practice within experimental music scenes
Woods, Peter J
Emerging at the intersection of industrial, punk, electronic music, and avant-garde jazz, noise music represents a niche subgenre reliant on loud, discordant, and arrhythmic sounds to make music. Yet despite its place within the (broadly defined) experimental music tradition, research into experimental music education has largely overlooked the genre. In response, I explore noise music through the lens of situated learning theory by addressing the following research question: how do noise musicians develop their artistic practice? To do so, I present findings from a comparative case study centered on two intertwined experimental music concert and workshop series focused on noise music. I begin by analyzing interview data from seventeen featured artists to construct a process model of artistic practice shared between musicians. I then employ bidirectional artifact analysis to trace the development of one novice participant in the series through this model. In turn, these findings not only illuminate how experimental musicians learn within informal settings but provide a potential model of learning for informal education communities more broadly. This study also holds implications for situated learning theory by asserting the influence of non-anthropocentric actors within communities of practice.
</summary>
<dc:date>2022-07-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experiencer troubles: A reappraisal of the predicate-based asymmetry in child passives</title>
<link href="https://hdl.handle.net/1721.1/163776" rel="alternate"/>
<author>
<name>Aravind, Athulya</name>
</author>
<author>
<name>Koring, Loes</name>
</author>
<id>https://hdl.handle.net/1721.1/163776</id>
<updated>2026-03-08T03:31:31Z</updated>
<published>2022-10-17T00:00:00Z</published>
<summary type="text">Experiencer troubles: A reappraisal of the predicate-based asymmetry in child passives
Aravind, Athulya; Koring, Loes
Children’s understanding of passives of certain mental state predicates appears to lag behind passives of so-called actional predicates, an asymmetry that has posed a major empirical challenge for theories of passive acquisition. This paper argues against the dominant view in the literature that treats the predicate-based asymmetry as theoretically irrelevant. We instead propose a novel account that locates the problem in the syntax of experiencer constructions. Synthesizing theoretical and developmental evidence, we build a case for an early misanalysis of transitive subject-experiencer constructions as unaccusatives – structures that, by design, cannot passivize.
</summary>
<dc:date>2022-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Challenges and Opportunities of Machine Learning on Neutron and X-ray Scattering</title>
<link href="https://hdl.handle.net/1721.1/163775" rel="alternate"/>
<author>
<name>Drucker, Nathan C</name>
</author>
<author>
<name>Liu, Tongtong</name>
</author>
<author>
<name>Chen, Zhantao</name>
</author>
<author>
<name>Okabe, Ryotaro</name>
</author>
<author>
<name>Chotrattanapituk, Abhijatmedhi</name>
</author>
<author>
<name>Nguyen, Thanh</name>
</author>
<author>
<name>Wang, Yao</name>
</author>
<author>
<name>Li, Mingda</name>
</author>
<id>https://hdl.handle.net/1721.1/163775</id>
<updated>2026-03-08T03:31:26Z</updated>
<published>2022-10-12T00:00:00Z</published>
<summary type="text">Challenges and Opportunities of Machine Learning on Neutron and X-ray Scattering
Drucker, Nathan C; Liu, Tongtong; Chen, Zhantao; Okabe, Ryotaro; Chotrattanapituk, Abhijatmedhi; Nguyen, Thanh; Wang, Yao; Li, Mingda
Machine learning has been highly successful in boosting the research for neutron and X-ray scattering in the past few years [1, 2]. For diffraction, machine learning has shown great promise in phase mapping [3, 4] and crystallographic information determination [5, 6]. In small-angle scattering, machine learning shows its power in reaching super-resolution [7, 8], reconstructing structures for macromolecules [9], and building structure-property relations [10]. As for absorption spectroscopy, machine learning has enabled the rapid inverse search for optimized structures [11, 12] with improved spectral interpretability [13, 14]. Overall, as a data-driven approach, the success of machine-learning-based scattering analysis depends on a few criteria, including:
• Quantity of available experimental data, and feasibility to extract certain data labels;
• Quality of experimental data that can separate the intrinsic effect (e.g., materials properties) from extrinsic influence (e.g., instrumental or data artifacts);
• Feasibility to generate a high volume of computational data;
• Accuracy of computational data that can simulate the experimental data.
</summary>
<dc:date>2022-10-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaled Process Priors for Bayesian Nonparametric Estimation of the Unseen Genetic Variation</title>
<link href="https://hdl.handle.net/1721.1/163774" rel="alternate"/>
<author>
<name>Camerlenghi, Federico</name>
</author>
<author>
<name>Favaro, Stefano</name>
</author>
<author>
<name>Masoero, Lorenzo</name>
</author>
<author>
<name>Broderick, Tamara</name>
</author>
<id>https://hdl.handle.net/1721.1/163774</id>
<updated>2026-03-08T03:31:23Z</updated>
<published>2022-09-29T00:00:00Z</published>
<summary type="text">Scaled Process Priors for Bayesian Nonparametric Estimation of the Unseen Genetic Variation
Camerlenghi, Federico; Favaro, Stefano; Masoero, Lorenzo; Broderick, Tamara
There is a growing interest in the estimation of the number of unseen features, mostly driven by biological applications. A recent work brought out a peculiar property of the popular completely random measures (CRMs) as prior models in Bayesian nonparametric (BNP) inference for the unseen-features problem: for fixed prior parameters, they all lead to a Poisson posterior distribution for the number of unseen features, which depends on the sampling information only through the sample size. CRMs are thus not a flexible prior model for the unseen-features problem and, while the Poisson posterior distribution may be appealing for analytical tractability and ease of interpretability, its independence from the sampling information makes the BNP approach a questionable oversimplification, with posterior inferences being completely determined by the estimation of the unknown prior parameters. In this article, we introduce the stable-Beta scaled process (SB-SP) prior, and we show that it allows us to enrich the posterior distribution of the number of unseen features arising under CRM priors, while maintaining its analytical tractability and interpretability. That is, the SB-SP prior leads to a negative binomial posterior distribution, which depends on the sampling information through the sample size and the number of distinct features, with corresponding estimates being simple, linear in the sampling information, and computationally efficient. We apply our BNP approach to synthetic data and to real cancer genomic data, showing that: (i) it outperforms the most popular parametric and nonparametric competitors in terms of estimation accuracy; (ii) it provides improved coverage for the estimation with respect to a BNP approach under CRM priors. Supplementary materials for this article are available online.
</summary>
<dc:date>2022-09-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precision DIS thrust predictions for HERA and EIC</title>
<link href="https://hdl.handle.net/1721.1/163773" rel="alternate"/>
<author>
<name>Ee, June-Haak</name>
</author>
<author>
<name>Kang, Daekyoung</name>
</author>
<author>
<name>Lee, Christopher</name>
</author>
<author>
<name>Stewart, Iain W.</name>
</author>
<id>https://hdl.handle.net/1721.1/163773</id>
<updated>2025-11-20T03:08:12Z</updated>
<published>2025-07-24T00:00:00Z</published>
<summary type="text">Precision DIS thrust predictions for HERA and EIC
Ee, June-Haak; Kang, Daekyoung; Lee, Christopher; Stewart, Iain W.
We present predictions for the DIS 1-jettiness event shape τ1b, or DIS thrust, using the framework of Soft Collinear Effective Theory (SCET) for factorization, resummation of large logarithms, and rigorous treatment of nonperturbative power corrections, matched to fixed-order QCD away from the resummation region. Our predictions reach next-to-next-to-next-to-leading-logarithmic (N3LL) accuracy in resummed perturbation theory, matched to O(αs2) fixed-order QCD calculations obtained using the program NLOJet++. We include a rigorous treatment of hadronization corrections, which are universal across different event shapes and kinematic variables x and Q at leading power, and supplement them with a systematic scheme to remove O(ΛQCD) renormalon ambiguities in their definition. The framework of SCET allows us to connect smoothly the nonperturbative, resummation, and fixed-order regions, whose relative importance varies with x and Q, and to rigorously estimate theoretical uncertainties, across a broad range of x and Q covering existing experimental results from HERA as well as expected new measurements from the upcoming Electron-Ion Collider (EIC). Our predictions will serve as an important benchmark for the EIC program, enabling the precise determination of the QCD strong coupling αs and the universal nonperturbative first moment parameter Ω1.
</summary>
<dc:date>2025-07-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semi-classical dilaton gravity and the very blunt defect expansion</title>
<link href="https://hdl.handle.net/1721.1/163772" rel="alternate"/>
<author>
<name>Kruthoff, Jorrit</name>
</author>
<author>
<name>Levine, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/163772</id>
<updated>2025-11-20T03:08:09Z</updated>
<published>2025-07-22T00:00:00Z</published>
<summary type="text">Semi-classical dilaton gravity and the very blunt defect expansion
Kruthoff, Jorrit; Levine, Adam
We explore dilaton gravity with general dilaton potentials in the semi-classical limit viewed both as a gas of blunt defects and also as a semi-classical theory in its own right. We compare the exact defect gas picture with that obtained by naively canonically quantizing the theory in geodesic gauge. We find a subtlety in the canonical approach due to a non-perturbative ambiguity in geodesic gauge. Unlike in JT gravity, this ambiguity arises already at the disk level. This leads to a distinct mechanism from that in JT gravity by which the semi-classical approximation breaks down at low temperatures. Along the way, we propose that new, previously un-studied saddles contribute to the density of states of dilaton gravity. This in particular leads to a re-interpretation of the disk-level density of states in JT gravity in terms of two saddles with fixed energy boundary conditions: the disk, which caps off on the outer horizon, and another, sub-leading complex saddle which caps off on the inner horizon. When the theory is studied using a defect expansion, we show how the smooth classical geometries of dilaton gravity arise from a dense gas of very blunt defects in the GN → 0 limit. The classical saddle points arise from a balance between the attractive force on the defects toward negative dilaton and a statistical pressure from the entropy of the configuration. We end with speculations on the nature of the space-like singularity present inside black holes described by certain dilaton potentials.
</summary>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the branching fraction ratio RK at large dilepton invariant mass</title>
<link href="https://hdl.handle.net/1721.1/163771" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>The LHCb collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163771</id>
<updated>2025-11-20T03:08:06Z</updated>
<published>2025-07-17T00:00:00Z</published>
<summary type="text">Measurement of the branching fraction ratio RK at large dilepton invariant mass
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb collaboration
A test of lepton universality between muons and electrons is performed using B+ → K+ℓ+ℓ− decays (where ℓ = e, μ), in the dilepton invariant-mass-squared region above 14.3 GeV2/c4. The data used for the measurement consist of beauty meson decays produced in proton-proton collisions, corresponding to an integrated luminosity of 9 fb−1, collected by the LHCb experiment between 2011 and 2018. The ratio of branching fractions for B+ → K+μ+μ− and B+ → K+e+e− decays is measured to be RK = 1.08 +0.11−0.09 (stat) ± 0.04 (syst), which is consistent with the Standard Model prediction of unity. This constitutes the most precise test of lepton flavour universality using B+ → K+ℓ+ℓ− decays with dilepton invariant-mass-squared above the ψ(2S) mass, whilst being the first of its kind at a hadron collider.
</summary>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Iterating Sine, Equivalence Classes of Variable Changes, and Groups with Few Conjugacy Classes</title>
<link href="https://hdl.handle.net/1721.1/163770" rel="alternate"/>
<author>
<name>Etingof, Pavel</name>
</author>
<id>https://hdl.handle.net/1721.1/163770</id>
<updated>2025-11-20T03:08:07Z</updated>
<published>2025-07-23T00:00:00Z</published>
<summary type="text">Iterating Sine, Equivalence Classes of Variable Changes, and Groups with Few Conjugacy Classes
Etingof, Pavel
This is an expository paper about iterations of a smooth real function f on [0, ∞) such that f(0) = 0, f′(0) = 1, and f(x) &lt; x for x &gt; 0, i.e., the sequence defined by xn+1 = f(xn). This sequence has interesting asymptotics, whose study leads to the question of classifying conjugacy classes in the group of formal changes of variable y = f(x), i.e., formal series f(x) = x + a2x2 + a3x3 + ⋯ with real coefficients (under composition). The same classification applies over a finite field Fp for suitably truncated series f, defining a family of p-groups that have the smallest number of conjugacy classes for a given order, i.e., are the “most noncommutative” finite groups currently known. The paper should be accessible to undergraduates and at least partially to advanced high school students.
</summary>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reports to the President for the year ended June 30, 2003, Volume 2</title>
<link href="https://hdl.handle.net/1721.1/163769" rel="alternate"/>
<author>
<name>Massachusetts Institute of Technology. Office of the President</name>
</author>
<id>https://hdl.handle.net/1721.1/163769</id>
<updated>2025-11-20T03:09:34Z</updated>
<published>2003-01-01T00:00:00Z</published>
<summary type="text">Reports to the President for the year ended June 30, 2003, Volume 2
Massachusetts Institute of Technology. Office of the President
A compilation of annual reports for the 2002-2003 academic year, including a report from the President of the Massachusetts Institute of Technology, as well as reports from the academic and administrative units of the Institute. The reports outline the year's goals, accomplishments, honors and awards, and future plans.
</summary>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum information meets high-energy physics: input to the update of the European strategy for particle physics</title>
<link href="https://hdl.handle.net/1721.1/163768" rel="alternate"/>
<author>
<name>Afik, Yoav</name>
</author>
<author>
<name>Fabbri, Federica</name>
</author>
<author>
<name>Low, Matthew</name>
</author>
<author>
<name>Marzola, Luca</name>
</author>
<author>
<name>Aguilar-Saavedra, Juan A.</name>
</author>
<author>
<name>Altakach, Mohammad M.</name>
</author>
<author>
<name>Asbah, Nedaa A.</name>
</author>
<author>
<name>Bai, Yang</name>
</author>
<author>
<name>Banks, Hannah</name>
</author>
<author>
<name>Barr, Alan J.</name>
</author>
<author>
<name>Bernal, Alexander</name>
</author>
<author>
<name>Browder, Thomas E.</name>
</author>
<author>
<name>Caban, Paweł</name>
</author>
<author>
<name>Casas, J. A.</name>
</author>
<author>
<name>Cheng, Kun</name>
</author>
<author>
<name>Déliot, Frédéric</name>
</author>
<author>
<name>Demina, Regina</name>
</author>
<author>
<name>Di Domenico, Antonio</name>
</author>
<author>
<name>Eckstein, Michał</name>
</author>
<author>
<name>Fabbrichesi, Marco</name>
</author>
<id>https://hdl.handle.net/1721.1/163768</id>
<updated>2025-11-20T03:08:24Z</updated>
<published>2025-09-09T00:00:00Z</published>
<summary type="text">Quantum information meets high-energy physics: input to the update of the European strategy for particle physics
Afik, Yoav; Fabbri, Federica; Low, Matthew; Marzola, Luca; Aguilar-Saavedra, Juan A.; Altakach, Mohammad M.; Asbah, Nedaa A.; Bai, Yang; Banks, Hannah; Barr, Alan J.; Bernal, Alexander; Browder, Thomas E.; Caban, Paweł; Casas, J. A.; Cheng, Kun; Déliot, Frédéric; Demina, Regina; Di Domenico, Antonio; Eckstein, Michał; Fabbrichesi, Marco
Some of the most astonishing and prominent properties of Quantum Mechanics, such as entanglement and Bell nonlocality, have only been studied extensively in dedicated low-energy laboratory setups. The feasibility of these studies in the high-energy regime explored by particle colliders was only recently shown and has gathered the attention of the scientific community. For the range of particles and fundamental interactions involved, particle colliders provide a novel environment where quantum information theory can be probed, with energies exceeding by about 12 orders of magnitude those employed in dedicated laboratory setups. Furthermore, collider detectors have inherent advantages in performing certain quantum information measurements and allow for the reconstruction of the state of the system under consideration via quantum state tomography. Here, we elaborate on the potential, challenges, and goals of this innovative and rapidly evolving line of research and discuss its expected impact on both quantum information theory and high-energy physics.
</summary>
<dc:date>2025-09-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>On approximability of Satisfiable k-CSPs: I</title>
<link href="https://hdl.handle.net/1721.1/163767" rel="alternate"/>
<author>
<name>Bhangale, Amey</name>
</author>
<author>
<name>Khot, Subhash</name>
</author>
<author>
<name>Minzer, Dor</name>
</author>
<id>https://hdl.handle.net/1721.1/163767</id>
<updated>2025-11-20T03:08:03Z</updated>
<published>2025-07-22T00:00:00Z</published>
<summary type="text">On approximability of Satisfiable k-CSPs: I
Bhangale, Amey; Khot, Subhash; Minzer, Dor
We consider the P-CSP problem for 3-ary predicates P on satisfiable instances. We show that under certain conditions on P and a (1, s) integrality gap instance of the P-CSP problem, it can be translated into a dictatorship vs. quasirandomness test with perfect completeness and soundness s + ϵ, for every constant ϵ &gt; 0. Compared to Raghavendra (in: Proceedings of the fortieth annual ACM symposium on theory of computing (STOC), pp 245–254, 2008), we do not lose perfect completeness. This is particularly interesting as this test implies new hardness results on satisfiable constraint satisfaction problems, assuming the Rich 2-to-1 Games Conjecture by Braverman et al. (in: Lee JR (ed) Volume 185 of Leibniz international proceedings in informatics (LIPIcs), 27:1–27:20. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Dagstuhl, 2021b. https://drops.dagstuhl.de/opus/volltexte/2021/13566). Our result can be seen as the first step of a potentially long-term challenging program of characterizing the optimal inapproximability of every satisfiable k-ary CSP. At the heart of the reduction is our main analytical lemma for a class of 3-ary predicates, which is a generalization of a lemma by Mossel (Geom Funct Anal 19(6):1713–1756, 2010). The lemma, and a further generalization of it that we conjecture, may be of independent interest.
</summary>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reports to the President for the year ended June 30, 2003, Volume 1</title>
<link href="https://hdl.handle.net/1721.1/163766" rel="alternate"/>
<author>
<name>Massachusetts Institute of Technology. Office of the President</name>
</author>
<id>https://hdl.handle.net/1721.1/163766</id>
<updated>2025-11-20T03:09:28Z</updated>
<published>2003-01-01T00:00:00Z</published>
<summary type="text">Reports to the President for the year ended June 30, 2003, Volume 1
Massachusetts Institute of Technology. Office of the President
A compilation of annual reports for the 2002-2003 academic year, including a report from the President of the Massachusetts Institute of Technology, as well as reports from the academic and administrative units of the Institute. The reports outline the year's goals, accomplishments, honors and awards, and future plans.
</summary>
<dc:date>2003-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of light-by-light scattering and the Breit-Wheeler process, and search for axion-like particles in ultraperipheral PbPb collisions at √sNN = 5.02 TeV</title>
<link href="https://hdl.handle.net/1721.1/163765" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<author>
<name>Schwarz, D.</name>
</author>
<author>
<name>The CMS collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163765</id>
<updated>2025-11-20T03:08:22Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">Measurement of light-by-light scattering and the Breit-Wheeler process, and search for axion-like particles in ultraperipheral PbPb collisions at √sNN = 5.02 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; The CMS collaboration
Measurements of light-by-light scattering (LbL, γγ → γγ) and the Breit-Wheeler process (BW, γγ → e+e−) are reported in ultraperipheral PbPb collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV. The data sample, corresponding to an integrated luminosity of 1.7 nb−1, was collected by the CMS experiment at the CERN LHC in 2018. Events with an exclusively produced γγ or e+e− pair with invariant masses mγγ,ee &gt; 5 GeV, along with other fiducial criteria, are selected. The measured BW fiducial production cross section, σfid(γγ → e+e−) = 263.5 ± 1.8(stat) ± 17.8(syst) μb, as well as the differential distributions for various kinematic observables, are in agreement with leading-order quantum electrodynamics predictions complemented with final-state photon radiation. The measured differential BW cross sections allow discrimination between different theoretical descriptions of the photon flux of the lead ion. In the LbL final state, 26 exclusive diphoton candidate events are observed compared with 12.0 ± 2.9 expected for the background. Combined with previous results, the observed significance of the LbL signal with respect to the background-only hypothesis is above five standard deviations. The measured fiducial LbL scattering cross section, σfid(γγ → γγ) = 107 ± 24(stat) ± 13(syst) nb, is in agreement with next-to-leading-order predictions. Limits on the production of axion-like particles coupled to photons are set over the mass range 5–100 GeV, including the most stringent limits to date in the range of 5–10 GeV.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The PLATO mission</title>
<link href="https://hdl.handle.net/1721.1/163764" rel="alternate"/>
<author>
<name>Rauer, Heike</name>
</author>
<author>
<name>Aerts, Conny</name>
</author>
<author>
<name>Cabrera, Juan</name>
</author>
<author>
<name>Deleuil, Magali</name>
</author>
<author>
<name>Erikson, Anders</name>
</author>
<author>
<name>Gizon, Laurent</name>
</author>
<author>
<name>Goupil, Mariejo</name>
</author>
<author>
<name>Heras, Ana</name>
</author>
<author>
<name>Walloschek, Thomas</name>
</author>
<author>
<name>Lorenzo-Alvarez, Jose</name>
</author>
<author>
<name>Marliani, Filippo</name>
</author>
<author>
<name>Martin-Garcia, César</name>
</author>
<author>
<name>Mas-Hesse, J. M.</name>
</author>
<author>
<name>O’Rourke, Laurence</name>
</author>
<author>
<name>Osborn, Hugh</name>
</author>
<author>
<name>Pagano, Isabella</name>
</author>
<author>
<name>Piotto, Giampaolo</name>
</author>
<id>https://hdl.handle.net/1721.1/163764</id>
<updated>2025-11-20T03:07:49Z</updated>
<published>2025-04-21T00:00:00Z</published>
<summary type="text">The PLATO mission
Rauer, Heike; Aerts, Conny; Cabrera, Juan; Deleuil, Magali; Erikson, Anders; Gizon, Laurent; Goupil, Mariejo; Heras, Ana; Walloschek, Thomas; Lorenzo-Alvarez, Jose; Marliani, Filippo; Martin-Garcia, César; Mas-Hesse, J. M.; O’Rourke, Laurence; Osborn, Hugh; Pagano, Isabella; Piotto, Giampaolo
PLATO (PLAnetary Transits and Oscillations of stars) is ESA’s M3 mission designed to detect and characterise extrasolar planets and perform asteroseismic monitoring of a large number of stars. PLATO will detect small planets (down to &lt;2R Earth ) around bright stars (&lt;11 mag), including terrestrial planets in the habitable zone of solar-like stars. With the complement of radial velocity observations from the ground, planets will be characterised for their radius, mass, and age with high accuracy (5%, 10%, and 10%, respectively, for an Earth-Sun combination). PLATO will provide us with a large-scale catalogue of well-characterised small planets up to intermediate orbital periods, relevant for a meaningful comparison to planet formation theories and to better understand planet evolution. It will make possible comparative exoplanetology to place our Solar System planets in a broader context. In parallel, PLATO will study (host) stars using asteroseismology, allowing us to determine the stellar properties with high accuracy, substantially enhancing our knowledge of stellar structure and evolution. The payload instrument consists of 26 cameras with a 12 cm aperture each. For at least four years, the mission will perform high-precision photometric measurements. Here we review the science objectives, present PLATO’s target samples and fields, and provide an overview of the expected core science performance as well as a description of the instrument and the mission profile towards the end of the serial production of the flight cameras. PLATO is scheduled for launch at the end of 2026. This overview therefore provides a summary of the mission to the community in preparation for the upcoming operational phases.
</summary>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>DisruptionBench and Complimentary New Models: Two Advancements in Machine Learning Driven Disruption Prediction</title>
<link href="https://hdl.handle.net/1721.1/163763" rel="alternate"/>
<author>
<name>Spangher, Lucas</name>
</author>
<author>
<name>Bonotto, Matteo</name>
</author>
<author>
<name>Arnold, William</name>
</author>
<author>
<name>Chayapathy, Dhruva</name>
</author>
<author>
<name>Gallingani, Tommaso</name>
</author>
<author>
<name>Spangher, Alexander</name>
</author>
<author>
<name>Cannarile, Francesco</name>
</author>
<author>
<name>Bigoni, Daniele</name>
</author>
<author>
<name>de Marchi, Eliana</name>
</author>
<author>
<name>Rea, Cristina</name>
</author>
<id>https://hdl.handle.net/1721.1/163763</id>
<updated>2025-11-20T03:08:00Z</updated>
<published>2025-05-24T00:00:00Z</published>
<summary type="text">DisruptionBench and Complimentary New Models: Two Advancements in Machine Learning Driven Disruption Prediction
Spangher, Lucas; Bonotto, Matteo; Arnold, William; Chayapathy, Dhruva; Gallingani, Tommaso; Spangher, Alexander; Cannarile, Francesco; Bigoni, Daniele; de Marchi, Eliana; Rea, Cristina
Plasma disruptions remain a major obstacle to sustained commercial operation of tokamak-based fusion devices. Although machine learning (ML) methods have shown promise for predicting disruptions, their performance and generalizability suffer from a lack of common benchmarks and comprehensive multi-device evaluations. To address this, we present DisruptionBench, a new benchmarking platform designed to standardize how ML-driven disruption prediction systems are trained and evaluated on multi-machine data. DisruptionBench spans three devices - Alcator C-Mod, DIII-D, and EAST - and includes tasks of varying difficulty: zero-shot, few-shot, and many-shot training regimes to assess each model’s ability to transfer learned representations to new or data-limited machines. We evaluate four state-of-the-art ML architectures. Two are re-implementations of notable prior work: a random forest (Cristina Rea in PPCF 60:084008, 2018) and the Hybrid Deep Learner (HDL) (Zhu in NC 61: 026607, 2020). We also propose two new approaches tailored for disruption prediction: a transformer-based model inspired by GPT-2, capable of learning long-range temporal dependencies through self-attention, and a Continuous Convolutional Neural Network (CCNN) that leverages continuous kernels to capture subtle variations in plasma signals. Across the nine benchmarking tasks, the CCNN demonstrates consistently strong performance and achieves the highest overall Area Under the ROC Curve (AUC) in intra-machine tests (up to 0.97 on C-Mod). Nevertheless, the GPT-2-based approach and HDL can outperform CCNN in specific transfer scenarios, particularly when the test machine is underrepresented in training data. We further analyze the significance of memory length in capturing precursor phenomena, providing evidence that longer context windows can boost predictive accuracy.
</summary>
<dc:date>2025-05-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Crucible Steel to the Battlefield: Investigating a Unique Early Medieval Arrowhead from Anatolia</title>
<link href="https://hdl.handle.net/1721.1/163762" rel="alternate"/>
<author>
<name>Güder, Ümit</name>
</author>
<author>
<name>Yavaş, Alptekin</name>
</author>
<author>
<name>Demirel Gökalp, Zeliha</name>
</author>
<author>
<name>Taşan, Cemal C.</name>
</author>
<author>
<name>Raabe, Dierk</name>
</author>
<id>https://hdl.handle.net/1721.1/163762</id>
<updated>2025-11-20T03:08:19Z</updated>
<published>2025-06-06T00:00:00Z</published>
<summary type="text">From Crucible Steel to the Battlefield: Investigating a Unique Early Medieval Arrowhead from Anatolia
Güder, Ümit; Yavaş, Alptekin; Demirel Gökalp, Zeliha; Taşan, Cemal C.; Raabe, Dierk
An arrowhead that was recovered during the excavations of the lower city church of Byzantine Stronghold Amorium in central Anatolia has been subjected to archaeometric analysis. Coins discovered in the same context date the arrowhead to the Middle Byzantine period (ninth–tenth century CE). It is a three-bladed arrowhead with a needle-type tang. Metallography (OM, SEM), SEM–EDS and EBSD techniques were used to examine samples taken from the head and the tang sections of the arrowhead. The arrowhead was determined to be made of manganese-alloyed crucible steel (0.4–1% Mn), shaped through warm forging cycles and selectively quenched and tempered to enhance its mechanical properties. The hardened head, likely designed for armor penetration, along with the potential watered surface pattern (firind), suggests that the arrowhead functioned both as a weapon and a symbol of prestige. Historical sources and archaeometallurgical evidence link the arrowhead to mounted Turkic archers in the Abbasid army during the 838 CE Sack of Amorium. This study of the arrowhead revealed it to be the earliest crucible steel find and the only example of such an object manufactured from crucible steel discovered in medieval Anatolian excavations.
</summary>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Methods for Magnetic and Mechanical Optimization of Superconducting Magnets for Fusion</title>
<link href="https://hdl.handle.net/1721.1/163761" rel="alternate"/>
<author>
<name>Packman, Sam</name>
</author>
<author>
<name>Riva, Nicolò</name>
</author>
<author>
<name>Rodriguez-Fernandez, Pablo</name>
</author>
<id>https://hdl.handle.net/1721.1/163761</id>
<updated>2025-11-20T03:07:58Z</updated>
<published>2025-03-14T00:00:00Z</published>
<summary type="text">Bayesian Methods for Magnetic and Mechanical Optimization of Superconducting Magnets for Fusion
Packman, Sam; Riva, Nicolò; Rodriguez-Fernandez, Pablo
Stellarators as compact fusion power sources have incredible potential to help combat climate change. However, the task of making that a reality faces many challenges. This work uses Bayesian optimization (BO), a method well suited to black-box optimization, to address the complicated optimization problem inherent in stellarator design. In particular, it focuses on the mechanical optimization necessary to withstand the Lorentz forces generated by the magnetic coils. This work leverages surrogate models that are constructed to integrate as much information as possible from the available data points, significantly reducing the number of required model evaluations. It showcases the efficacy of Bayesian optimization as a versatile tool for enhancing both magneto-static and mechanical properties within stellarator winding packs. Employing a suite of Bayesian optimization algorithms, we iteratively refine 2D and 3D models of solenoid and stellarator configurations, and demonstrate a 15% increase in optimization speed using multi-fidelity Bayesian optimization. For fusion technology to progress from experimental stages to commercial viability, precise and efficient design methodologies will be essential. By emphasizing its modularity and transferability, our approach lays the foundation for streamlining optimization processes, facilitating the integration of fusion power into a sustainable energy infrastructure.
</summary>
<dc:date>2025-03-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pseudo-Anosov representatives of stable Hamiltonian structures</title>
<link href="https://hdl.handle.net/1721.1/163760" rel="alternate"/>
<author>
<name>Zung, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/163760</id>
<updated>2025-11-20T03:08:20Z</updated>
<published>2025-09-08T00:00:00Z</published>
<summary type="text">Pseudo-Anosov representatives of stable Hamiltonian structures
Zung, Jonathan
A pseudo-Anosov homeomorphism of a surface is a canonical representative of its mapping class. Conditional on the foundations of symplectic field theory, we explain that a transitive pseudo-Anosov flow is similarly a canonical representative of its stable Hamiltonian class. It follows that there are finitely many pseudo-Anosov flows admitting positive Birkhoff sections on any given rational homology 3-sphere. This result has a purely topological consequence: any 3-manifold can be obtained in at most finitely many ways as p/q surgery on a fibered hyperbolic knot in S 3 for a slope p/q satisfying q ≥ 6 , p ≠ 0 , ± 1 , ± 2 mod q . The proof of the main theorem generalizes an argument of Barthelmé–Bowden–Mann.
</summary>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hindered segmental dynamics in associative protein hydrogels studied by neutron spin-echo spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/163759" rel="alternate"/>
<author>
<name>Rao, Ameya</name>
</author>
<author>
<name>Carrick, Brian R</name>
</author>
<author>
<name>Yao, Helen</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<id>https://hdl.handle.net/1721.1/163759</id>
<updated>2025-11-19T05:00:39Z</updated>
<published>2023-07-26T00:00:00Z</published>
<summary type="text">Hindered segmental dynamics in associative protein hydrogels studied by neutron spin-echo spectroscopy
Rao, Ameya; Carrick, Brian R; Yao, Helen; Olsen, Bradley D
Transient binding between associating macromolecules can cause qualitative changes to chain dynamics, including modes of conformational relaxation and diffusion, through tethering effects imparted by long-range connectivity. In this work, the role of binding on short-time segmental dynamics in associative polymer gels is investigated by neutron spin-echo (NSE) measurements on a class of model artificial coiled-coil proteins with a systematically varied architecture, probing timescales of 0.1–130 ns, and length scales close to the molecular radius of gyration. The results illustrate effects of transient cross-linking on chain dynamics on different timescales, manifested in changes in segmental relaxation behavior with variations in strand length, chain concentration, and sticker distribution (endblock- vs midblock-functionalized). In all gels, a short-time cooperative diffusion mode is seen over all wave vectors, analogous to a semidilute solution, with no transitions seen at any known structural length scale. However, the diffusion coefficients are found to decrease with increasing junction density across all gels, with the strand length and number of stickers per chain in each gel appearing to play a relatively minor role. The slowing of cooperative diffusion with junction density contrasts with classical predictions of a greater restoring force for fluctuation dissipation due to the increased elasticity, suggesting additional effects of the coiled-coil junctions such as an enhancement in local viscosity that slows dynamics. Notably, the relaxation rates for all gels can be rescaled by the interjunction spacing inferred from small-angle neutron scattering, where they collapse onto a master curve suggestive of self-similar dynamics even in networks with different strand lengths and chain architectures. 
On long timescales (but shorter than the junction exchange time), a slowing of network relaxation is observed, resulting in a nondecaying plateau in the spin-echo amplitude attributed to a freezing of chain dynamics due to tethering. A characteristic length scale corresponding to the extent of dynamic fluctuations is estimated for each gel, which appears to be smaller than the interjunction spacing but similar to the correlation blob size of the overlapping strands. The results indicate an important role of transient binding on molecular-scale dynamics in associative polymer gels, even on timescales shorter than the junction exchange time, in addition to its effects on long-range self-diffusion previously observed.
</summary>
<dc:date>2023-07-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-economic assessment of co-production of edible bioplastic and food supplements from Spirulina</title>
<link href="https://hdl.handle.net/1721.1/163758" rel="alternate"/>
<author>
<name>Chalermthai, Bushra</name>
</author>
<author>
<name>Charoensuppanimit, Pongtorn</name>
</author>
<author>
<name>Nootong, Kasidit</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<author>
<name>Assabumrungrat, Suttichai</name>
</author>
<id>https://hdl.handle.net/1721.1/163758</id>
<updated>2025-11-19T05:00:41Z</updated>
<published>2023-06-22T00:00:00Z</published>
<summary type="text">Techno-economic assessment of co-production of edible bioplastic and food supplements from Spirulina
Chalermthai, Bushra; Charoensuppanimit, Pongtorn; Nootong, Kasidit; Olsen, Bradley D; Assabumrungrat, Suttichai
Large amounts of plastic waste harming the environment have raised concerns worldwide about finding alternatives to non-biodegradable plastics. Microalgae have been identified as a potential source for bioplastic production, beyond their more common application in the pharmaceutical and nutraceutical industries. In this study, the objective was to techno-economically evaluate the large-scale co-production of Spirulina powder as a food supplement and edible bioplastic for food packaging. The scale of production was large enough to satisfy 1% of local (Thailand) plastic demand (approx. 1200 MT y⁻¹) and 1% of the global Spirulina demand (approx. 1000 MT y⁻¹) as food supplements. Results showed that the co-production of Spirulina powder and bioplastic is an attractive venture, with a payback time (PBT) as low as 2.6 y and an ROI as high as 38.5%. This was because the revenues generated were as high as US$ 55.6 million y⁻¹, despite high capital (US$ 55.7 million) and operating (US$ 34.9 million y⁻¹) costs. Sensitivity analysis showed differences in profitability based on variations of major parameters in the study; the split ratio of biomass used for food supplements versus bioplastic production and the bioplastic’s selling price were found to be the most sensitive.
</summary>
<dc:date>2023-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Counterfactual Worlds</title>
<link href="https://hdl.handle.net/1721.1/163757" rel="alternate"/>
<author>
<name>Brast-McKie, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/163757</id>
<updated>2025-11-19T04:59:39Z</updated>
<published>2025-06-03T00:00:00Z</published>
<summary type="text">Counterfactual Worlds
Brast-McKie, Benjamin
This paper extends Kit Fine’s (2012a, 2012b, 2017a, 2017b, 2017c) truthmaker framework to provide a novel task semantics for tensed counterfactual conditionals. Instead of taking possible worlds to be primitive elements in a model, possible worlds will be defined in terms of states, parthood, tasks, and times where the task relation encodes the possible transitions between states. Rather than invoking primitive relations for similarity or imposition, possible worlds will be compared at a time independent of that time’s past and future where the comparison will be carried out in modal and mereological terms. After reviewing motivations for this approach, I will provide the hyperintensional semantics for counterfactuals that is implemented in the model-checker software along with a unified logic for counterfactual, modal, and tense operators. I will then extend the language to include further tense operators in order to analyze forwards, backwards, and backtracking counterfactuals.
</summary>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tight mixed-integer optimization formulations for prescriptive trees</title>
<link href="https://hdl.handle.net/1721.1/163756" rel="alternate"/>
<author>
<name>Biggs, Max</name>
</author>
<author>
<name>Perakis, Georgia</name>
</author>
<id>https://hdl.handle.net/1721.1/163756</id>
<updated>2025-11-19T05:00:02Z</updated>
<published>2025-05-29T00:00:00Z</published>
<summary type="text">Tight mixed-integer optimization formulations for prescriptive trees
Biggs, Max; Perakis, Georgia
We focus on modeling the relationship between an input feature vector and the predicted outcome of a trained decision tree using mixed-integer optimization. This can be used in many practical applications where a decision tree or a tree ensemble is incorporated into an optimization problem to model the predicted outcomes of a decision. We propose novel tight mixed-integer optimization formulations for this problem. Existing formulations can be shown to have linear relaxations that have fractional extreme points, even for the simple case of modeling a single decision tree or a very large number of constraints, which leads to slow solve times in practice. A formulation we propose, based on a projected union of polyhedra approach, is ideal (i.e., the extreme points of the linear relaxation are integer when required) for a single decision tree. Although the formulation is generally not ideal for tree ensembles, it generally has fewer extreme points, leading to a faster time to solve. We also study formulations with a binary representation of the feature vector and present multiple approaches to tighten existing formulations. We show that fractional extreme points are removed when multiple splits are on the same feature. At an extreme, we prove that this results in an ideal formulation for a tree ensemble modeling a one-dimensional feature vector. Building on this result, we also show that these additional constraints result in significantly tighter linear relaxations when the feature vector is low dimensional.
</summary>
<dc:date>2025-05-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive optimization for prediction with missing data</title>
<link href="https://hdl.handle.net/1721.1/163755" rel="alternate"/>
<author>
<name>Bertsimas, Dimitris</name>
</author>
<author>
<name>Delarue, Arthur</name>
</author>
<author>
<name>Pauphilet, Jean</name>
</author>
<id>https://hdl.handle.net/1721.1/163755</id>
<updated>2025-11-19T04:59:54Z</updated>
<published>2025-03-24T00:00:00Z</published>
<summary type="text">Adaptive optimization for prediction with missing data
Bertsimas, Dimitris; Delarue, Arthur; Pauphilet, Jean
When training predictive models on data with missing entries, the most widely used and versatile approach is a pipeline technique where we first impute missing entries and then compute predictions. In this paper, we view prediction with missing data as a two-stage adaptive optimization problem and propose a new class of models, adaptive linear regression models, where the regression coefficients adapt to the set of observed features. We show that some adaptive linear regression models are equivalent to learning an imputation rule and a downstream linear regression model simultaneously instead of sequentially. We leverage this joint-impute-then-regress interpretation to generalize our framework to non-linear models. In settings where data is strongly not missing at random, our methods achieve a 2–10% improvement in out-of-sample accuracy.
</summary>
<dc:date>2025-03-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Biodiversity Systems via Digital Kinships: Insights from Community Data Processes and Creative Practice</title>
<link href="https://hdl.handle.net/1721.1/163754" rel="alternate"/>
<author>
<name>Westerlaken, Michelle</name>
</author>
<id>https://hdl.handle.net/1721.1/163754</id>
<updated>2025-11-19T04:59:48Z</updated>
<published>2025-06-16T00:00:00Z</published>
<summary type="text">Designing Biodiversity Systems via Digital Kinships: Insights from Community Data Processes and Creative Practice
Westerlaken, Michelle
This study details how digital biodiversity data is used and gains meaning in local restoration projects, how these experiences contrast with large-scale innovation patterns, and what new design recommendations emerge from these insights. Digital innovations in biodiversity technologies are increasingly complex, fast-paced, and driven by technological capacities where data generation, rather than biodiversity restoration, risks becoming the primary goal. Focusing on a biodiversity restoration project with a living lab community in the Netherlands, this participatory research critically examines how plans for emerging technologies, such as biodiversity simulations and digital twins, contrast with local user relations to biodiversity data. Building on qualitative insights from six months of fieldwork, a digital and physical data portal was designed to simulate ongoing technoscientific innovation and make its complex effects experientially available to users. Findings are brought directly into conversation with emerging technical features through four distinct themes, with the aim of sharing user insights and producing design recommendations for: environmental storytelling, prediction and future making, interactive dynamics, and simulation aesthetics. These themes articulate the community's preferences towards digital environments that support their nuanced, complex relationships with local biodiversity, suggesting a shift from top-down technocentric approaches to more community-driven and restoration-focused models. Based on this study, design recommendations are articulated for each of these four themes, contributing detailed empirical and practice-oriented insights that propose how new biodiversity technologies can resonate more effectively with local biodiversity restoration efforts.
</summary>
<dc:date>2025-06-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Asilomar Goes Underground: The Long Legacy of Recombinant DNA Hazard Debates for the Greater Boston Area Biotechnology Industry</title>
<link href="https://hdl.handle.net/1721.1/163753" rel="alternate"/>
<author>
<name>Scheffler, Robin W.</name>
</author>
<id>https://hdl.handle.net/1721.1/163753</id>
<updated>2025-11-19T04:59:55Z</updated>
<published>2025-03-07T00:00:00Z</published>
<summary type="text">Asilomar Goes Underground: The Long Legacy of Recombinant DNA Hazard Debates for the Greater Boston Area Biotechnology Industry
Scheffler, Robin W.
In 1975, a meeting on the potential hazards of recently invented recombinant DNA techniques was held at the Asilomar Conference Center in California. This meeting gave rise to a global debate over the safety and regulation of recombinant DNA (rDNA). In this paper, I use the historical development of recombinant DNA regulation in the Greater Boston Area—now home to the densest cluster of the biotechnology industry in the world—to provide a different interpretation of the legacies of Asilomar. While most accounts of Asilomar have considered its brief and dramatic impact on molecular biology on a national scale, an equally meaningful and overlooked impact is to be found in the development of regulations around recombinant DNA at the local level. Rather than hindering research, these events enabled the operations of the modern commercial biotechnology industry, which was founded on the promise of recombinant DNA. This approach highlights a different legacy of Asilomar, one which did not end with expert consensus that recombinant DNA was safe. Instead, attending to the material, infrastructural aspects of working with recombinant DNA in commercial settings reveals a wide range of communities involved in determining the social impacts of Asilomar—communities asking a broader set of questions about recombinant DNA than those originally posed in 1975.
</summary>
<dc:date>2025-03-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>A unified semantics for distributive and non-distributive universal quantifiers across languages</title>
<link href="https://hdl.handle.net/1721.1/163752" rel="alternate"/>
<author>
<name>Haslinger, Nina</name>
</author>
<author>
<name>Hien, Alain N.</name>
</author>
<author>
<name>Rosina, Emil E.</name>
</author>
<author>
<name>Schmitt, Viola</name>
</author>
<author>
<name>Wurm, Valerie</name>
</author>
<id>https://hdl.handle.net/1721.1/163752</id>
<updated>2025-11-19T04:59:50Z</updated>
<published>2025-07-09T00:00:00Z</published>
<summary type="text">A unified semantics for distributive and non-distributive universal quantifiers across languages
Haslinger, Nina; Hien, Alain N.; Rosina, Emil E.; Schmitt, Viola; Wurm, Valerie
Universal quantifiers differ in whether they are restricted to distributive interpretations, like English every, or permit non-distributive interpretations, like English all. This interpretational difference is traditionally captured by positing two unrelated lexical entries for distributive and non-distributive quantification. But this lexical approach does not explain why distributivity correlates with number: cross-linguistically, distributive universal quantifiers typically take singular complements, while non-distributive quantifiers consistently take plural complements. We derive this correlation by proposing a single lexical meaning for the universal quantifier, which derives a non-distributive interpretation if the restrictor predicate is closed under sum, but a distributive interpretation if it is quantized. Support comes from languages in which the same lexical item expresses distributive or non-distributive quantification depending on the number of the complement. For languages like English that have different expressions for non-distributive and distributive quantification, we propose that the distributive forms contain an additional morphosyntactic element that is semantically restricted to combine with a predicate of atomic individuals. This is motivated by the fact that in several languages, the distributive form is structurally more complex than the non-distributive form and sometimes even contains it transparently. We further show that in such languages, there are empirical advantages to taking the choice between distributive and non-distributive quantifier forms to be driven by semantic properties of the restrictor predicate, rather than morphosyntactic number.
</summary>
<dc:date>2025-07-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Passivization and composite A/Ā-movement in the Mandarin BEI-construction</title>
<link href="https://hdl.handle.net/1721.1/163751" rel="alternate"/>
<author>
<name>Chen, Fulang</name>
</author>
<id>https://hdl.handle.net/1721.1/163751</id>
<updated>2025-11-19T04:59:52Z</updated>
<published>2025-06-11T00:00:00Z</published>
<summary type="text">Passivization and composite A/Ā-movement in the Mandarin BEI-construction
Chen, Fulang
The bei-construction in Mandarin is a well-studied construction known for exhibiting both passive-like properties and tough-movement-like properties (see e.g., Feng 1995, 2012; Ting 1995a, 1998; Huang 1999; Tang 2001; Huang et al. 2009; Bruening and Tran 2015; a.o.). In this paper, I argue for a novel analysis of the bei-construction in Mandarin as a passive construction where the passive head/bei hosts a composite probe [ϕ+Ā], which triggers composite A/Ā-movement, in the sense of Van Urk (2015). The subject in the bei-construction is derived via (successive-cyclic) composite A/Ā-movement, followed by a terminating step of A-movement, similar to Longenbaugh’s (2017) analysis of English tough-movement. Under the proposed analysis, the mixed A/Ā-properties associated with the bei-construction are direct consequences of composite A/Ā-movement (following Van Urk 2015; Longenbaugh 2017). The proposed analysis of the bei-construction accounts for two restrictions on long-distance dependencies in the bei-construction – a requirement that no overt, case-less NPs should intervene between the subject of bei and the gap in agent-less bei-constructions, and a subject/object contrast with respect to the possibility of crossing a finite clause boundary to become the subject of bei.
</summary>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>From concept to manufacturing: evaluating vision-language models for engineering design</title>
<link href="https://hdl.handle.net/1721.1/163750" rel="alternate"/>
<author>
<name>Picard, Cyril</name>
</author>
<author>
<name>Edwards, Kristen M.</name>
</author>
<author>
<name>Doris, Anna C.</name>
</author>
<author>
<name>Man, Brandon</name>
</author>
<author>
<name>Giannone, Giorgio</name>
</author>
<author>
<name>Alam, Md F.</name>
</author>
<author>
<name>Ahmed, Faez</name>
</author>
<id>https://hdl.handle.net/1721.1/163750</id>
<updated>2025-11-19T05:00:00Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">From concept to manufacturing: evaluating vision-language models for engineering design
Picard, Cyril; Edwards, Kristen M.; Doris, Anna C.; Man, Brandon; Giannone, Giorgio; Alam, Md F.; Ahmed, Faez
Engineering design is undergoing a transformative shift with the advent of AI, marking a new era in how we approach product, system, and service planning. Large language models have demonstrated impressive capabilities in enabling this shift. Yet, with text as their only input modality, they cannot leverage the large body of visual artifacts that engineers have used for centuries and are accustomed to. This gap is addressed with the release of multimodal vision-language models (VLMs), such as GPT-4V, enabling AI to impact many more types of tasks. Our work presents a comprehensive evaluation of VLMs across a spectrum of engineering design tasks, categorized into four main areas: Conceptual Design, System-Level and Detailed Design, Manufacturing and Inspection, and Engineering Education Tasks. Specifically in this paper, we assess the capabilities of two VLMs, GPT-4V and LLaVA 1.6 34B, in design tasks such as sketch similarity analysis, CAD generation, topology optimization, manufacturability assessment, and engineering textbook problems. Through this structured evaluation, we not only explore VLMs’ proficiency in handling complex design challenges but also identify their limitations in complex engineering design applications. Our research establishes a foundation for future assessments of vision language models. It also contributes a set of benchmark testing datasets, with more than 1000 queries, for ongoing advancements and applications in this field.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Review of AI-assisted design of low-carbon cost-effective concrete toward carbon neutrality</title>
<link href="https://hdl.handle.net/1721.1/163749" rel="alternate"/>
<author>
<name>Mahjoubi, Soroush</name>
</author>
<author>
<name>Barhemat, Rojyar</name>
</author>
<author>
<name>Meng, Weina</name>
</author>
<author>
<name>Bao, Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/163749</id>
<updated>2025-11-19T05:00:04Z</updated>
<published>2025-05-03T00:00:00Z</published>
<summary type="text">Review of AI-assisted design of low-carbon cost-effective concrete toward carbon neutrality
Mahjoubi, Soroush; Barhemat, Rojyar; Meng, Weina; Bao, Yi
Decarbonizing concrete production is a critical step toward achieving carbon neutrality by 2050. This paper highlights the advancements in artificial intelligence-assisted design of low-carbon cost-effective concrete, focusing on integrating machine learning-based property prediction with multi-objective optimization. Data collection and processing techniques, such as automatic data extraction, artificial data generation, and anomaly detection, are first discussed to address the importance of dataset quality. Strategies that capture physicochemical information of ingredients, including by-product supplementary cementitious materials and recycled aggregates, are then examined to enhance model generalizability. Various machine learning models—from individual regression approaches to heterogeneous ensemble methods—are compared for their predictive accuracy and robustness. Methods for hyperparameter tuning, model evaluation, and interpretation to ensure reliable and interpretable predictions are reviewed. Design optimization approaches are then highlighted, ranging from grid/random searches to more sophisticated gradient-based and metaheuristic algorithms, aimed at minimizing carbon footprint, embodied energy, and cost. Future research avenues encompass (1) application-specific design frameworks that integrate critical objectives—mechanical performance, durability, fresh-state behavior, and time-dependent properties such as creep and shrinkage—tailored to specific structural and environmental requirements; (2) holistic design optimization that simultaneously refines mixture design and structural parameters; and (3) probabilistic approaches to systematically manage uncertainties in materials, structural performance, and loading conditions.
</summary>
<dc:date>2025-05-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making the eyes of the state: algorithmic alienation and mundane creativity in Peruvian street-level bureaucrats</title>
<link href="https://hdl.handle.net/1721.1/163748" rel="alternate"/>
<author>
<name>Cerna-Aragon, Diego</name>
</author>
<author>
<name>García, Luis</name>
</author>
<id>https://hdl.handle.net/1721.1/163748</id>
<updated>2025-11-19T05:00:06Z</updated>
<published>2025-02-15T00:00:00Z</published>
<summary type="text">Making the eyes of the state: algorithmic alienation and mundane creativity in Peruvian street-level bureaucrats
Cerna-Aragon, Diego; García, Luis
The production of state legibility has been a prolific subject of study. However, most works have not paid much attention to the quotidian labor of the street-level bureaucrats that implement legibility projects at a local level. The aim of this article is to explore the implementation of a social registry system at a local level to understand how frontline workers make the population legible. Instead of taking legibility as an object of evaluation or critique, we pay close attention to the inner workings of bureaucracies at the instances in which the sociomaterial conditions of the population are translated into data. Drawing from qualitative research in Peruvian municipalities, we describe the operations of an algorithmic system that classifies the population for the distribution of welfare. We observed how under-resourced bureaucrats were constrained by regulations and technologies of the system. Paradoxically, to make the system work for their local realities, the bureaucrats had to bend the rules and find workarounds. From this perspective, the making of legibility looks less like a top-down exercise of bureaucratic compliance or a story of domination over the population. Instead, we find actors attempting to maintain a delicate balance between inadequate legal rules, scarce resources, and sociopolitical demands.
</summary>
<dc:date>2025-02-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Waveform modelling for the Laser Interferometer Space Antenna</title>
<link href="https://hdl.handle.net/1721.1/163747" rel="alternate"/>
<author>
<name>Afshordi, Niayesh</name>
</author>
<author>
<name>Akçay, Sarp</name>
</author>
<author>
<name>Seoane, Pau A.</name>
</author>
<author>
<name>Antonelli, Andrea</name>
</author>
<author>
<name>Aurrekoetxea, Josu C.</name>
</author>
<author>
<name>Barack, Leor</name>
</author>
<author>
<name>Barausse, Enrico</name>
</author>
<author>
<name>Benkel, Robert</name>
</author>
<author>
<name>Bernard, Laura</name>
</author>
<author>
<name>Bernuzzi, Sebastiano</name>
</author>
<author>
<name>Berti, Emanuele</name>
</author>
<author>
<name>Bonetti, Matteo</name>
</author>
<author>
<name>Bonga, Béatrice</name>
</author>
<id>https://hdl.handle.net/1721.1/163747</id>
<updated>2025-11-19T05:00:32Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Waveform modelling for the Laser Interferometer Space Antenna
Afshordi, Niayesh; Akçay, Sarp; Seoane, Pau A.; Antonelli, Andrea; Aurrekoetxea, Josu C.; Barack, Leor; Barausse, Enrico; Benkel, Robert; Bernard, Laura; Bernuzzi, Sebastiano; Berti, Emanuele; Bonetti, Matteo; Bonga, Béatrice
LISA, the Laser Interferometer Space Antenna, will usher in a new era in gravitational-wave astronomy. As the first anticipated space-based gravitational-wave detector, it will expand our view to the millihertz gravitational-wave sky, where a spectacular variety of interesting new sources abound: from millions of ultra-compact binaries in our Galaxy, to mergers of massive black holes at cosmological distances; from the early inspirals of stellar-mass black holes that will ultimately venture into the ground-based detectors’ view to the death spiral of compact objects into massive black holes, and many sources in between. Central to realising LISA’s discovery potential are waveform models, the theoretical and phenomenological predictions of the pattern of gravitational waves that these sources emit. This White Paper is presented on behalf of the Waveform Working Group for the LISA Consortium. It provides a review of the current state of waveform models for LISA sources, and describes the significant challenges that must yet be overcome.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic discovery of subcellular RNA patterns in the gut epithelium</title>
<link href="https://hdl.handle.net/1721.1/163746" rel="alternate"/>
<author>
<name>Lee, Minkyoung</name>
</author>
<author>
<name>Acar, Ilhan E.</name>
</author>
<author>
<name>Eletto, Davide</name>
</author>
<author>
<name>Adivarahan, Srivathsan</name>
</author>
<author>
<name>Mhamedi, Farah</name>
</author>
<author>
<name>Handler, Kristina</name>
</author>
<author>
<name>Lee, Jihyun</name>
</author>
<author>
<name>Vinzoni, Elena G.</name>
</author>
<author>
<name>Aguilar, Gustavo</name>
</author>
<id>https://hdl.handle.net/1721.1/163746</id>
<updated>2025-11-19T05:00:26Z</updated>
<published>2025-10-29T00:00:00Z</published>
<summary type="text">Systematic discovery of subcellular RNA patterns in the gut epithelium
Lee, Minkyoung; Acar, Ilhan E.; Eletto, Davide; Adivarahan, Srivathsan; Mhamedi, Farah; Handler, Kristina; Lee, Jihyun; Vinzoni, Elena G.; Aguilar, Gustavo
Background Subcellular RNA localization is crucial for the spatio-temporal control of protein synthesis and underlies key processes during development, homeostasis, and disease. In epithelial cells, RNA can localize asymmetrically along the apico-basal axis. Yet, the localization of most transcripts as well as the diversity of patterns that they adopt remains unexplored. Results Here, we use APEX-seq for proximity labeling and MERFISH for spatial transcriptomics to map subcellular transcript localization in intestinal organoids and tissue from adult mice. Many transcripts present localization bias, often localizing in granular structures. We uncover intrinsic and environmental factors that influence the formation of these patterns. Additionally, we identify translation-dependent and -independent localization patterns and pinpoint the role of 3′ untranslated regions and RNA-binding proteins. Conclusions This subcellular RNA atlas presents a detailed resource for understanding intestinal physiology.
</summary>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying delayed human response to external risks: an econometric analysis of mobility change during a pandemic</title>
<link href="https://hdl.handle.net/1721.1/163745" rel="alternate"/>
<author>
<name>Zhang, Gaofei</name>
</author>
<author>
<name>Osi, Ann</name>
</author>
<author>
<name>Ghaffarzadegan, Navid</name>
</author>
<author>
<name>Rahmandad, Hazhir</name>
</author>
<author>
<name>Xu, Ran</name>
</author>
<id>https://hdl.handle.net/1721.1/163745</id>
<updated>2025-11-19T05:00:24Z</updated>
<published>2025-10-29T00:00:00Z</published>
<summary type="text">Identifying delayed human response to external risks: an econometric analysis of mobility change during a pandemic
Zhang, Gaofei; Osi, Ann; Ghaffarzadegan, Navid; Rahmandad, Hazhir; Xu, Ran
Background Human behavioral responses to changes in risks are often delayed. Methods for estimating these delayed responses either rely on rigid assumptions about the delay distribution (e.g., Erlang distribution), producing a poor fit, or yield period-specific estimates (e.g., estimates from the Autoregressive Distributed Lag (ARDL) model) that are difficult to integrate into simulation models. We propose a hybrid ARDL–Erlang approach that yields an interpretable summary of behavioral responses suitable for incorporation into simulation models. Method We apply the ARDL–Erlang approach to estimate the effect of COVID-19 deaths on mobility across US counties from October 2020 to July 2021. A standard panel autoregressive distributed lag (ARDL) model first estimates the effect of past deaths and past mobility on current mobility. The ARDL model is then transformed into an Infinite Distributed Lag (IDL) model consisting of only past deaths. The coefficients of the past deaths are aggregated into an overall effect and fit to an Erlang distribution, summarized by average delay length and shape parameter. Results Our results show that on the national level, a one-standard-deviation permanent increase in weekly deaths per 100,000 population (log-transformed) is associated with a 0.46-standard-deviation decrease in human mobility in the long run, where the delay distribution follows a first-order Erlang distribution, and the average delay length is about 3.2 weeks. However, there is much heterogeneity across states, with first- to third-order Erlang delays and 2 to 18 weeks of average delay providing a theoretically cogent summary of how mobility followed changes in deaths during the first year and a half of the pandemic. Conclusion This study provides a novel approach to estimating delayed human responses to health risks using a hybrid ARDL-Erlang model. 
Our findings highlight significant variability in the impact and timing of responses across states, underscoring the need for tailored public health policies. This study can also serve as a guide and an example for identifying delayed human behavior in other settings.
</summary>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating single-cell RNA-seq datasets with substantial batch effects</title>
<link href="https://hdl.handle.net/1721.1/163744" rel="alternate"/>
<author>
<name>Hrovatin, Karin</name>
</author>
<author>
<name>Moinfar, Amir Ali</name>
</author>
<author>
<name>Zappia, Luke</name>
</author>
<author>
<name>Parikh, Shrey</name>
</author>
<author>
<name>Lapuerta, Alejandro T.</name>
</author>
<author>
<name>Lengerich, Ben</name>
</author>
<author>
<name>Kellis, Manolis</name>
</author>
<author>
<name>Theis, Fabian J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163744</id>
<updated>2025-11-19T05:00:21Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Integrating single-cell RNA-seq datasets with substantial batch effects
Hrovatin, Karin; Moinfar, Amir Ali; Zappia, Luke; Parikh, Shrey; Lapuerta, Alejandro T.; Lengerich, Ben; Kellis, Manolis; Theis, Fabian J.
Integration of single-cell RNA-sequencing (scRNA-seq) datasets is standard in scRNA-seq analysis. Nevertheless, current computational methods struggle to harmonize datasets across systems such as species, organoids and primary tissue, or different scRNA-seq protocols, including single-cell and single-nuclei. Conditional variational autoencoders (cVAE) are a popular integration method; however, existing strategies for stronger batch correction have limitations. Increasing the Kullback–Leibler divergence regularization does not improve integration, and adversarial learning removes biological signals. Here, we propose sysVI, a cVAE-based method employing VampPrior and cycle-consistency constraints. We show that sysVI integrates across systems and improves biological signals for downstream interpretation of cell states and conditions.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>21G.104 Chinese IV (Regular), Spring 2006</title>
<link href="https://hdl.handle.net/1721.1/124758.2" rel="alternate"/>
<author>
<name>Wheatley, Julian K.</name>
</author>
<id>https://hdl.handle.net/1721.1/124758.2</id>
<updated>2025-11-17T23:30:23Z</updated>
<published>2006-06-01T00:00:00Z</published>
<summary type="text">21G.104 Chinese IV (Regular), Spring 2006
Wheatley, Julian K.
This is the last of the four courses (Chinese I through IV) that make up the foundation level (four semesters over two years in the normal curriculum) of MIT's regular (non-streamlined) Chinese program. Chinese IV is designed to consolidate conversational usage and grammatical and cultural knowledge encountered in the earlier courses, and to expand reading and listening abilities. It integrates the last part of Learning Chinese (two units designed primarily for review of grammatical concepts and vocabulary growth) with material from Madeline Spring's Making Connections, designed to bolster listening skills, and Linda Hsai and Roger Yue's Strange Stories from a Chinese Studio, a collection of traditional stories that has been a favorite of students of Chinese for many decades and is used here to focus on reading. Reading for this course is primarily, but not exclusively, in the simplified character set that is the standard on the Mainland; readings in the traditional set that is standard in Taiwan are also assigned. Students who have advanced through Chinese I, II, and III to reach this level, as well as those entering at Chinese IV, should review at least the late material in Chinese III before proceeding. Chinese Sequence on OCW MIT OpenCourseWare now offers a complete sequence of four Chinese language courses, covering beginning to intermediate levels of instruction at MIT. They can be used not just as the basis for taught courses, but also for self-instruction and elementary-to-intermediate review. The four Chinese subjects provide the following materials: an online textbook in four parts, J. K. Wheatley's Learning Chinese: A Foundation Course in Mandarin; audio files of the main conversational and narrative material in this book; and syllabi and day-by-day schedules for each term. Course sequence on OCW. 
Chinese courses on OCW: Chinese I (Fall 2014) 21G.101/151; Chinese II (Spring 2014) 21G.102/152; Chinese III (Fall 2005) 21G.103; Chinese IV (Spring 2006) 21G.104
</summary>
<dc:date>2006-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-throughput experimentation for discovery of biodegradable polyesters</title>
<link href="https://hdl.handle.net/1721.1/163743" rel="alternate"/>
<author>
<name>Fransen, Katharina A</name>
</author>
<author>
<name>Av-Ron, Sarah HM</name>
</author>
<author>
<name>Buchanan, Tess R</name>
</author>
<author>
<name>Walsh, Dylan J</name>
</author>
<author>
<name>Rota, Dechen T</name>
</author>
<author>
<name>Van Note, Lana</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<id>https://hdl.handle.net/1721.1/163743</id>
<updated>2025-11-18T06:33:54Z</updated>
<published>2023-05-30T00:00:00Z</published>
<summary type="text">High-throughput experimentation for discovery of biodegradable polyesters
Fransen, Katharina A; Av-Ron, Sarah HM; Buchanan, Tess R; Walsh, Dylan J; Rota, Dechen T; Van Note, Lana; Olsen, Bradley D
The consistent rise of plastic pollution has stimulated interest in the development of biodegradable plastics. However, the study of polymer biodegradation has historically been limited to a small number of polymers due to costly and slow standard methods for measuring degradation, slowing new material innovation. High-throughput polymer synthesis and a high-throughput polymer biodegradation method are developed and applied to generate a biodegradation dataset for 642 chemically distinct polyesters and polycarbonates. The biodegradation assay was based on the clear-zone technique, using automation to optically observe the degradation of suspended polymer particles under the action of a single Pseudomonas lemoignei bacterial colony. Biodegradability was found to depend strongly on aliphatic repeat unit length, with chains less than 15 carbons and short side chains improving biodegradability. Aromatic backbone groups were generally detrimental to biodegradability; however, ortho- and para-substituted benzene rings in the backbone were more likely to be degradable than meta-substituted rings. Additionally, backbone ether groups improved biodegradability. While other heteroatoms did not show a clear improvement in biodegradability, they did demonstrate increases in biodegradation rates. Machine learning (ML) models were leveraged to predict biodegradability on this large dataset with accuracies over 82% using only chemical structure descriptors.
</summary>
<dc:date>2023-05-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Antigen-adjuvant interactions, stability, and immunogenicity profiles of a SARS-CoV-2 receptor-binding domain (RBD) antigen formulated with aluminum salt and CpG adjuvants</title>
<link href="https://hdl.handle.net/1721.1/163742" rel="alternate"/>
<author>
<name>Bajoria, Sakshi</name>
</author>
<author>
<name>Kaur, Kawaljit</name>
</author>
<author>
<name>Kumru, Ozan S</name>
</author>
<author>
<name>Van Slyke, Greta</name>
</author>
<author>
<name>Doering, Jennifer</name>
</author>
<author>
<name>Novak, Hayley</name>
</author>
<author>
<name>Rodriguez Aponte, Sergio A</name>
</author>
<author>
<name>Dalvie, Neil C</name>
</author>
<author>
<name>Naranjo, Christopher A</name>
</author>
<author>
<name>Johnston, Ryan S</name>
</author>
<author>
<name>Silverman, Judith Maxwell</name>
</author>
<author>
<name>Kleanthous, Harry</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Mantis, Nicholas J</name>
</author>
<author>
<name>Joshi, Sangeeta B</name>
</author>
<author>
<name>Volkin, David B</name>
</author>
<id>https://hdl.handle.net/1721.1/163742</id>
<updated>2025-11-18T06:33:20Z</updated>
<published>2022-06-06T00:00:00Z</published>
<summary type="text">Antigen-adjuvant interactions, stability, and immunogenicity profiles of a SARS-CoV-2 receptor-binding domain (RBD) antigen formulated with aluminum salt and CpG adjuvants
Bajoria, Sakshi; Kaur, Kawaljit; Kumru, Ozan S; Van Slyke, Greta; Doering, Jennifer; Novak, Hayley; Rodriguez Aponte, Sergio A; Dalvie, Neil C; Naranjo, Christopher A; Johnston, Ryan S; Silverman, Judith Maxwell; Kleanthous, Harry; Love, J Christopher; Mantis, Nicholas J; Joshi, Sangeeta B; Volkin, David B
Low-cost, refrigerator-stable COVID-19 vaccines will facilitate global access and improve vaccine coverage in low- and middle-income countries. To this end, subunit-based approaches targeting the receptor-binding domain (RBD) of SARS-CoV-2 Spike protein remain attractive. Antibodies against RBD neutralize SARS-CoV-2 by blocking viral attachment to the host cell receptor, ACE2. Here, a yeast-produced recombinant RBD antigen (RBD-L452K-F490W or RBD-J) was formulated with various combinations of aluminum-salt (Alhydrogel®, AH; AdjuPhos®, AP) and CpG 1018 adjuvants. We assessed the effect of antigen-adjuvant interactions on the stability and mouse immunogenicity of various RBD-J preparations. While RBD-J was 50% adsorbed to AH and &lt;15% to AP, addition of CpG resulted in complete AH binding, yet no improvement in AP adsorption. ACE2 competition ELISA analyses of formulated RBD-J stored at varying temperatures (4, 25, 37°C) revealed that RBD-J was destabilized by AH, an effect exacerbated by CpG. DSC studies demonstrated that aluminum-salt and CpG adjuvants decrease the conformational stability of RBD-J and suggest a direct CpG-RBD-J interaction. Although AH+CpG-adjuvanted RBD-J was the least stable in vitro, the formulation was most potent at eliciting SARS-CoV-2 pseudovirus neutralizing antibodies in mice. In contrast, RBD-J formulated with AP+CpG showed minimal antigen-adjuvant interactions, a better stability profile, but suboptimal immune responses. Interestingly, the loss of in vivo potency associated with heat-stressed RBD-J formulated with AH+CpG after one dose was abrogated by a booster. Our findings highlight the importance of elucidating the key interrelationships between antigen-adjuvant interactions, storage stability, and in vivo performance to enable successful formulation development of stable and efficacious subunit vaccines.
</summary>
<dc:date>2022-06-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>21G.103 Chinese III (Regular), Fall 2005</title>
<link href="https://hdl.handle.net/1721.1/120951.2" rel="alternate"/>
<author>
<name>Wheatley, Julian K.</name>
</author>
<id>https://hdl.handle.net/1721.1/120951.2</id>
<updated>2025-11-17T23:31:22Z</updated>
<published>2005-12-01T00:00:00Z</published>
<summary type="text">21G.103 Chinese III (Regular), Fall 2005
Wheatley, Julian K.
This is the third of the four courses (Chinese I through IV) in MIT's regular (non-streamlined) Chinese curriculum. The four make use of the textbook, Learning Chinese: A Foundation Course in Mandarin (unpublished, but available online), to which are added various supporting materials as needs arise. The foundation level covers core grammar, linguistic culture, basic conversation, the principles of the writing system, and elementary reading. Reading is primarily in the simplified character set that is the standard on the Mainland, but also in the traditional set that is still standard in Taiwan and many overseas communities. All four subjects in the foundation level are (Chinese I and II) or soon will be (Chinese IV) available on OCW. Students who have advanced through Chinese I and II to reach this level, as well as those entering at Chinese III, should review at least the late material in Chinese II before proceeding. To facilitate review, as well as to orient students who are new to these materials, highlights from all the units in Chinese I and II and a list of the characters formally introduced in Character lessons 1-6 are included in the readings section of this course. Chinese Sequence on OCW OpenCourseWare now offers a complete sequence of four Chinese language courses, covering beginning to intermediate levels of instruction at MIT. They can be used not just as the basis for taught courses, but also for self-instruction and elementary-to-intermediate review. The four Chinese subjects provide the following materials: an online textbook in four parts, J. K. Wheatley's Learning Chinese: A Foundation Course in Mandarin; audio files of the main conversational and narrative material in this book; and syllabi and day-by-day schedules for each term. Chinese courses on OCW: Chinese I (Fall 2014) 21G.101/151; Chinese II (Spring 2015) 21G.102/152; Chinese III (Fall 2005) 21G.103; Chinese IV (Spring 2006) 21G.104
</summary>
<dc:date>2005-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthetic Collagen Hydrogels through Symmetric Self‐Assembly of Small Peptides</title>
<link href="https://hdl.handle.net/1721.1/163741" rel="alternate"/>
<author>
<name>Tanrikulu, I Caglar</name>
</author>
<author>
<name>Dang, Lianna</name>
</author>
<author>
<name>Nelavelli, Lekha</name>
</author>
<author>
<name>Ellison, Aubrey J</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<author>
<name>Jin, Song</name>
</author>
<author>
<name>Raines, Ronald T</name>
</author>
<id>https://hdl.handle.net/1721.1/163741</id>
<updated>2025-11-18T06:33:52Z</updated>
<published>2023-11-23T00:00:00Z</published>
<summary type="text">Synthetic Collagen Hydrogels through Symmetric Self‐Assembly of Small Peptides
Tanrikulu, I Caglar; Dang, Lianna; Nelavelli, Lekha; Ellison, Aubrey J; Olsen, Bradley D; Jin, Song; Raines, Ronald T
Animal‐sourced hydrogels, such as collagen, are widely used as extracellular‐matrix (ECM) mimics in tissue engineering but are plagued with problems of reproducibility, immunogenicity, and contamination. Synthetic, chemically defined hydrogels can avoid such issues. Despite the abundance of collagen in the ECM, synthetic collagen hydrogels are extremely rare due to design challenges brought on by the triple‐helical structure of collagen. Sticky‐ended symmetric self‐assembly (SESSA) overcomes these challenges by maximizing interactions between the strands of the triple helix, allowing the assembly of collagen‐mimetic peptides (CMPs) into robust synthetic collagen nanofibers. This optimization, however, also minimizes interfiber contacts. In this work, symmetric association states for the SESSA of short CMPs are modelled to probe their increased propensity for interfiber association. It is found that 33‐residue CMPs not only self‐assemble through sticky ends, but also form hydrogels. These self‐assemblies behave with remarkable consistency across multiple scales and present a clear link between their triple‐helical architecture and the properties of their hydrogels. The results show that SESSA is an effective and robust design methodology that enables the rational design of synthetic collagen hydrogels.
</summary>
<dc:date>2023-11-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>The BostonWalks study: a longitudinal travel survey using smartphone tracking</title>
<link href="https://hdl.handle.net/1721.1/163740" rel="alternate"/>
<author>
<name>Meister, Adrian</name>
</author>
<author>
<name>Bashan, Nail F.</name>
</author>
<author>
<name>Basu, Rounaq</name>
</author>
<author>
<name>Shen, Xianglu</name>
</author>
<author>
<name>Wang, Ryan Q.</name>
</author>
<author>
<name>Sevtsuk, Andres</name>
</author>
<id>https://hdl.handle.net/1721.1/163740</id>
<updated>2025-11-18T06:32:56Z</updated>
<published>2025-06-28T00:00:00Z</published>
<summary type="text">The BostonWalks study: a longitudinal travel survey using smartphone tracking
Meister, Adrian; Bashan, Nail F.; Basu, Rounaq; Shen, Xianglu; Wang, Ryan Q.; Sevtsuk, Andres
This paper introduces the BostonWalks (BWS) study, detailing its methodology, the resulting dataset, and an initial analysis. The BWS study is a smartphone-based GNSS-tracking study in the Boston metropolitan area, designed to generate an up-to-date dataset on travel behavior, with a particular focus on non-auto travel behavior and its representativeness across all population segments. The dataset encompasses approximately 155,000 trips from 990 participants, making it one of the most extensive datasets of its kind in North America. It includes both raw trajectory data and comprehensive socio-demographic information about participants. The paper outlines the survey methodology, including the technical infrastructure, recruitment strategy, and data processing techniques. A comparison of the socio-demographic and travel behavior characteristics of BWS participants with those from the National Household Travel Survey is provided. Lastly, the paper highlights the richness of the data through correlation and cluster analysis.
</summary>
<dc:date>2025-06-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diverging global incidence trends of early-onset cancers: comparisons with incidence trends of later-onset cancers and mortality trends of early-onset cancers</title>
<link href="https://hdl.handle.net/1721.1/163739" rel="alternate"/>
<author>
<name>Terashima, Miyu</name>
</author>
<author>
<name>Nakayama, Kota</name>
</author>
<author>
<name>Shirai, Sora</name>
</author>
<author>
<name>Ugai, Satoko</name>
</author>
<author>
<name>Lee, Hwa-Young</name>
</author>
<author>
<name>Matsui, Haruna</name>
</author>
<author>
<name>Mizuno, Hiroki</name>
</author>
<author>
<name>Tanaka, Shiori</name>
</author>
<author>
<name>Song, Minkyo</name>
</author>
<author>
<name>Sasamoto, Naoko</name>
</author>
<author>
<name>Kawachi, Ichiro</name>
</author>
<author>
<name>Giovannucci, Edward L.</name>
</author>
<author>
<name>Ugai, Tomotaka</name>
</author>
<id>https://hdl.handle.net/1721.1/163739</id>
<updated>2025-11-18T06:33:42Z</updated>
<published>2025-11-14T00:00:00Z</published>
<summary type="text">Diverging global incidence trends of early-onset cancers: comparisons with incidence trends of later-onset cancers and mortality trends of early-onset cancers
Terashima, Miyu; Nakayama, Kota; Shirai, Sora; Ugai, Satoko; Lee, Hwa-Young; Matsui, Haruna; Mizuno, Hiroki; Tanaka, Shiori; Song, Minkyo; Sasamoto, Naoko; Kawachi, Ichiro; Giovannucci, Edward L.; Ugai, Tomotaka
Background The global increase in the incidence of early-onset cancers (defined as cancers diagnosed at 20–49 years old) is a serious public health problem. We investigated 1) whether the incidence trend of early-onset cancers differs from that of later-onset cancers and 2) whether both the incidence and mortality of early-onset cancers have increased concurrently. Methods We utilized age-standardized incidence and mortality rates for early-onset and later-onset cancers diagnosed between 2000 and 2017 from the Cancer Incidence in Five Continents and World Health Organization (WHO) mortality databases. The national obesity prevalence among adults aged 20–49 years was obtained from the National Clinical Database. Using joinpoint regression models, we calculated average annual percentage changes (AAPCs) for cancer incidence and mortality by cancer types and countries. We additionally conducted human development index (HDI)-stratified analyses and assessed the correlation between the obesity prevalence in younger populations and early-onset cancer incidence by country. To investigate the more recent trend of early-onset cancer mortality, we extended our mortality analysis after 2017 for cancer types and countries with statistically significant positive AAPCs in both incidence and mortality of early-onset cancers between 2000 and 2017. Results Our analysis showed that 10 early-onset cancer types (thyroid cancer, breast cancer, melanoma, uterine cancer, colorectal cancer, kidney cancer, cervical cancer, pancreatic cancer, multiple myeloma, Hodgkin lymphoma) in females and 7 early-onset cancer types (thyroid cancer, kidney cancer, testis cancer, prostate cancer, colorectal cancer, melanoma, leukemia) in males had statistically significant positive AAPCs in at least 10 countries. 
Among these, the following early-onset cancer types had significantly higher AAPCs than later-onset cancer types in females: colorectal cancer (6 countries; AAPC range: 1.8–3.8%), cervical cancer (6 countries; AAPC range: 1.2–3.3%), pancreatic cancer (5 countries; AAPC range: 2.3–13.0%), and multiple myeloma (5 countries; AAPC range: 3.1–9.8%); in males: prostate cancer (12 countries; AAPC range: 3.9–18.4%), colorectal cancer (8 countries; AAPC range: 1.8–3.2%), and kidney cancer (6 countries; AAPC range: 2.0–6.0%). We observed statistically significant positive AAPCs in both the incidence and mortality of the following early-onset cancer types: uterine cancer (5 countries) and colorectal cancer (3 countries in females and 5 countries in males). The steeper increases in early-onset cancers compared with later-onset cancers were mainly observed in the very high-HDI country group, including early-onset colorectal cancer (AAPC = 2.4%, 95% CI 2.1–2.6 in females; AAPC = 2.0%, 95% CI 1.7–2.4 in males) versus later-onset colorectal cancer (AAPC = −0.1%, 95% CI −0.2 to 0 in females; AAPC = −0.2%, 95% CI −0.3 to 0 in males). We observed strong positive correlations between the increasing obesity prevalence and the rising incidence of early-onset obesity-related cancers in several countries, including Australia (7 cancer types), United Kingdom (7 cancer types), Canada (7 cancer types), Republic of Korea (7 cancer types), and USA (6 cancer types) in females and United Kingdom (7 cancer types), Canada (6 cancer types), Australia (5 cancer types), Sweden (5 cancer types), and Republic of Korea (4 cancer types) in males. Although we did not observe an apparent spike after 2017 in many countries, we observed continued increases in the mortality of certain cancer types, such as uterine cancer (Japan, Republic of Korea, United Kingdom, USA, and Ecuador) in females and colorectal cancer (Argentina, Canada, United Kingdom, and USA) in males.
Conclusions The increase in many early-onset cancer types was significantly higher than that of later-onset cancers, and the incidence and mortality of certain early-onset cancer types (such as colorectal cancer) increased simultaneously. Our study highlights global differences in cancer incidence and mortality trends of early-onset and later-onset cancers.
</summary>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalizable MRI normative modelling to detect age-inappropriate neurodegeneration</title>
<link href="https://hdl.handle.net/1721.1/163738" rel="alternate"/>
<author>
<name>Parker, Thomas D.</name>
</author>
<author>
<name>Bethlehem, Richard A. I.</name>
</author>
<author>
<name>Seidlitz, Jakob</name>
</author>
<author>
<name>White, Simon R.</name>
</author>
<author>
<name>David, Michael C. B.</name>
</author>
<author>
<name>Kolanko, Magdalena A.</name>
</author>
<author>
<name>Bernstock, Joshua D.</name>
</author>
<author>
<name>Dorfschmidt, Lena</name>
</author>
<author>
<name>Bourke, Niall</name>
</author>
<author>
<name>Gailly de Taurines, Anastasia</name>
</author>
<author>
<name>Hain, Jessica A.</name>
</author>
<author>
<name>Del Giovane, Martina</name>
</author>
<author>
<name>Graham, Neil S. N.</name>
</author>
<id>https://hdl.handle.net/1721.1/163738</id>
<updated>2025-11-18T06:33:37Z</updated>
<published>2025-11-12T00:00:00Z</published>
<summary type="text">Generalizable MRI normative modelling to detect age-inappropriate neurodegeneration
Parker, Thomas D.; Bethlehem, Richard A. I.; Seidlitz, Jakob; White, Simon R.; David, Michael C. B.; Kolanko, Magdalena A.; Bernstock, Joshua D.; Dorfschmidt, Lena; Bourke, Niall; Gailly de Taurines, Anastasia; Hain, Jessica A.; Del Giovane, Martina; Graham, Neil S. N.
Background Determining whether MRI brain scans demonstrate atrophy that is beyond “normal for age” is challenging. Automated measurements of structural metrics in individual brain regions have shown promise as biomarkers of neurodegeneration, yet widely available reference standards that aid interpretation at the individual level are lacking. Normative modelling, enabling standardized “brain charts”, represents a significant step in addressing this challenge by generating individualized age- and sex-adjusted centile scores derived from large, aggregated datasets for MRI-derived quantitative metrics. Methods Using normative data from 56,173 participants across the life course, we have developed regional cortical thickness and amygdala/hippocampal volume brain charts (adjusted for total intracranial volume) that can be applied at the individual level. At the group level, we investigate whether regional centile scores relate to cognitive performance (mini-mental state examination) and discriminate individuals with neuropathological evidence of Alzheimer’s disease (n = 351) from propensity-matched controls from the National Alzheimer’s Coordinating Center (NACC) dataset. In addition, we explored the relationships between disease stage, cognition, regional tau deposition and regional centile scores in amyloid-β-PET-positive individuals with Alzheimer’s disease dementia (n = 39) and mild cognitive impairment (n = 71) from the Alzheimer’s Disease Neuroimaging Initiative-3 (ADNI-3). We then extended this approach to phenotypes of frontotemporal lobar degeneration using the Neuroimaging in Frontotemporal Dementia dataset (n = 113). Results We demonstrate BrainChart’s application to illustrative individual cases.
At the group level, we show that in Alzheimer’s disease, regional centile scores from brain charting predicted cognitive performance, temporal lobe tau PET tracer uptake, and discriminated disease groups from propensity-matched cognitively normal controls in independent cohorts. Distinct patterns of age-inappropriate cortical atrophy were also evident in different clinical phenotypes of frontotemporal lobar degeneration from the Neuroimaging in Frontotemporal Dementia dataset. Conclusions Regional centile scores derived from an extensive normative dataset represent a generalizable method for objectively identifying atrophy in neurodegenerative diseases and can be applied to determine neurodegenerative atrophy at the individual level.
</summary>
<dc:date>2025-11-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inclusive B-meson flavour-tagging algorithm at LHCb</title>
<link href="https://hdl.handle.net/1721.1/163737" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163737</id>
<updated>2025-11-18T06:33:39Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">Inclusive B-meson flavour-tagging algorithm at LHCb
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A new algorithm is developed to identify the flavour of neutral B mesons at production in pp collisions by utilising all tracks from the hadronisation process. The algorithm is calibrated separately for B⁰ and Bₛ⁰ mesons using B⁰ → J/ψK⁺π⁻ and Bₛ⁰ → Dₛ⁻π⁺ decays from pp collision data collected by the LHCb experiment at a centre-of-mass energy of 13 TeV. This new algorithm improves the tagging power by 35% for B⁰ mesons and 20% for Bₛ⁰ mesons when compared to the combined performance of the existing LHCb flavour-tagging algorithms.
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytical benchmark problems and methodological framework for the assessment and comparison of multifidelity optimization methods</title>
<link href="https://hdl.handle.net/1721.1/163736" rel="alternate"/>
<author>
<name>Mainini, Laura</name>
</author>
<author>
<name>Serani, Andrea</name>
</author>
<author>
<name>Pehlivan-Solak, Hayriye</name>
</author>
<author>
<name>Di Fiore, Francesco</name>
</author>
<author>
<name>Rumpfkeil, Markus P.</name>
</author>
<author>
<name>Minisci, Edmondo</name>
</author>
<author>
<name>Quagliarella, Domenico</name>
</author>
<author>
<name>Yildiz, Sihmehmet</name>
</author>
<author>
<name>Ficini, Simone</name>
</author>
<author>
<name>Pellegrini, Riccardo</name>
</author>
<author>
<name>Thelen, Andrew</name>
</author>
<author>
<name>Bryson, Dean</name>
</author>
<author>
<name>Nikbay, Melike</name>
</author>
<author>
<name>Diez, Matteo</name>
</author>
<author>
<name>Beran, Philip S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163736</id>
<updated>2025-11-18T06:33:34Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">Analytical benchmark problems and methodological framework for the assessment and comparison of multifidelity optimization methods
Mainini, Laura; Serani, Andrea; Pehlivan-Solak, Hayriye; Di Fiore, Francesco; Rumpfkeil, Markus P.; Minisci, Edmondo; Quagliarella, Domenico; Yildiz, Sihmehmet; Ficini, Simone; Pellegrini, Riccardo; Thelen, Andrew; Bryson, Dean; Nikbay, Melike; Diez, Matteo; Beran, Philip S.
As engineering systems increase in complexity and performance demands intensify, Multidisciplinary Design Optimization (MDO) methodologies are becoming essential for integrating models from multiple disciplines to optimize complex multi-physics systems. Within this context, major challenges remain in selecting appropriate disciplinary fidelity levels, and how to couple them effectively. Multifidelity methods offer a promising path forward by strategically combining information sources of varying fidelity - whether computational or experimental - to enable efficient and scalable design exploration and optimization. Despite the development of numerous multifidelity methods, their comparative performance remains difficult to assess due to the absence of standardized benchmark frameworks capable of evaluating performance across diverse optimization tasks. To address this gap, this paper introduces a comprehensive benchmarking framework that includes: (i) a suite of analytical benchmark optimization problems designed to stress-test and validate multifidelity methods; (ii) a set of assessment metrics for quantifying and comparing performance over measurable objectives; and (iii) the classification, evaluation, and comparison of several families of multifidelity optimization methods and frameworks using the proposed benchmarks to identify their respective strengths and weaknesses in real-world scenarios. The proposed benchmark problems are analytically defined functions carefully selected to capture mathematical challenges commonly encountered in real-world applications, including high dimensionality, multimodality, discontinuities, and noise. Their closed-form nature ensures computational efficiency, high reproducibility, and a clear separation of algorithmic behavior from numerical artifacts. The accompanying performance metrics support the systematic evaluation of multifidelity methods, measuring both optimization effectiveness and global approximation accuracy. 
By providing a rigorous, reproducible, and accessible benchmarking framework, this work aims to enable the broader community to understand, compare, and advance multifidelity optimization methods for complex problems in science and engineering.
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embodiment, Relationships, and Sexuality: An Ethical Analysis of Extended Reality Technologies</title>
<link href="https://hdl.handle.net/1721.1/163735" rel="alternate"/>
<author>
<name>Ramirez, Erick J.</name>
</author>
<author>
<name>Clark, Laura</name>
</author>
<author>
<name>Campbell, Sydney</name>
</author>
<author>
<name>Dreiman, Julian</name>
</author>
<author>
<name>Clay, Dorian</name>
</author>
<author>
<name>Gupta, Raghav</name>
</author>
<author>
<name>Jennett, Shelby</name>
</author>
<id>https://hdl.handle.net/1721.1/163735</id>
<updated>2025-11-18T06:33:30Z</updated>
<published>2025-11-14T00:00:00Z</published>
<summary type="text">Embodiment, Relationships, and Sexuality: An Ethical Analysis of Extended Reality Technologies
Ramirez, Erick J.; Clark, Laura; Campbell, Sydney; Dreiman, Julian; Clay, Dorian; Gupta, Raghav; Jennett, Shelby
Abstract Communication technologies change the way we relate to each other and ourselves. In this essay we analyze the effects that extended reality (XR) technologies are likely to have on conceptions of the self, romantic relationships, and other associated concepts like sexual orientation. While these technologies are in their infancy, key psychological and philosophical concepts are already being explored. We begin by defining extended reality and the family of technologies that make it possible. We pay special attention to the way these immersive technologies ground the experiences of presence which can become virtually real. These experiences provide a useful framework for understanding the phenomenon of XR embodiment. XR embodiment, the experience of one’s self as embodied in XR, opens up the possibility of blended physical and digital narrative selves which form the basis of new forms of relationships. In a future where XR is incorporated into the basic social and political structures of society, XR embodiment and virtually real experiences challenge normative concepts like sex and sexual orientation. Contemporary conceptions of the self, sex, consent, and love emerged in purely physical contexts to help us navigate the limitations of physical embodiment. XR embodiment requires a new ethical framework to make room for these possibilities. We end the paper by assessing ethical risks XR embodiment can introduce for XR developers and researchers.
</summary>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>IsoDAR@Yemilab: Preliminary design report—volume II (beam transport, neutrino source, and shielding)</title>
<link href="https://hdl.handle.net/1721.1/163734" rel="alternate"/>
<author>
<name>Spitz, Joshua</name>
</author>
<author>
<name>Alonso, Jose R.</name>
</author>
<author>
<name>Ameel, Jon</name>
</author>
<author>
<name>Barlow, Roger</name>
</author>
<author>
<name>Bartoszek, Larry</name>
</author>
<author>
<name>Bungau, Adriana</name>
</author>
<author>
<name>Shaevitz, Michael H.</name>
</author>
<author>
<name>Voirin, Erik A.</name>
</author>
<author>
<name>Winklehner, Daniel</name>
</author>
<author>
<name>Conrad, Janet M.</name>
</author>
<author>
<name>Engebretson, Samuel J.</name>
</author>
<author>
<name>Moon, Jarrett</name>
</author>
<author>
<name>Winkler, Eleanor</name>
</author>
<author>
<name>Adelmann, Andreas</name>
</author>
<author>
<name>Axani, Spencer N.</name>
</author>
<author>
<name>Barletta, William A.</name>
</author>
<author>
<name>Calabretta, Luciano</name>
</author>
<author>
<name>Calvo, Pedro</name>
</author>
<author>
<name>Chan, Andrew</name>
</author>
<author>
<name>Karagiorgi, Georgia</name>
</author>
<id>https://hdl.handle.net/1721.1/163734</id>
<updated>2025-11-18T06:33:29Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">IsoDAR@Yemilab: Preliminary design report—volume II (beam transport, neutrino source, and shielding)
Spitz, Joshua; Alonso, Jose R.; Ameel, Jon; Barlow, Roger; Bartoszek, Larry; Bungau, Adriana; Shaevitz, Michael H.; Voirin, Erik A.; Winklehner, Daniel; Conrad, Janet M.; Engebretson, Samuel J.; Moon, Jarrett; Winkler, Eleanor; Adelmann, Andreas; Axani, Spencer N.; Barletta, William A.; Calabretta, Luciano; Calvo, Pedro; Chan, Andrew; Karagiorgi, Georgia
This Preliminary Design Report (PDR) describes the IsoDAR electron-antineutrino source in two volumes which are mostly site-independent and describe the cyclotron driver providing a 60 MeV, 10 mA proton beam (Volume I); and the Medium Energy Beam Transport (MEBT) line and target (this Volume). The IsoDAR driver and target will produce about 1.15 × 10²³ electron-antineutrinos over 5 calendar years. Paired with a kton-scale liquid scintillator detector, this will enable a broad particle physics program including searches for new symmetries, new interactions and new particles. Here in Volume II, we describe the Medium Energy Beam Transport line, the antineutrino source beam-target and surrounding sleeve, shielding, and plans for monitoring and installation.
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Nontrivial Winning and Losing Parameters of Schmidt Games</title>
<link href="https://hdl.handle.net/1721.1/163733" rel="alternate"/>
<author>
<name>Neckrasov, Vasiliy</name>
</author>
<author>
<name>Zhan, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/163733</id>
<updated>2025-11-18T06:33:45Z</updated>
<published>2025-11-14T00:00:00Z</published>
<summary type="text">On Nontrivial Winning and Losing Parameters of Schmidt Games
Neckrasov, Vasiliy; Zhan, Eric
In this paper we study the classical Schmidt game on two families of sets: one related to frequencies of digits in base-2 expansions, and one connected to the set of badly approximable numbers. Namely, we describe some nontrivial winning and losing parameters (α, β) for these sets.
</summary>
<dc:date>2025-11-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>International bureaucrats under transparency: The case of the WTO TRIPS Council</title>
<link href="https://hdl.handle.net/1721.1/163732" rel="alternate"/>
<author>
<name>Park, Sojun</name>
</author>
<author>
<name>Kim, Minju</name>
</author>
<id>https://hdl.handle.net/1721.1/163732</id>
<updated>2025-11-18T06:33:35Z</updated>
<published>2025-11-11T00:00:00Z</published>
<summary type="text">International bureaucrats under transparency: The case of the WTO TRIPS Council
Park, Sojun; Kim, Minju
How does transparency affect the behavior of international bureaucrats tasked with facilitating negotiations? Existing theories offer opposing expectations—greater transparency might induce international bureaucrats to engage more with contentious issues that matter to the public or lead them to avoid those issues whenever possible. We assess these competing perspectives by analyzing the World Trade Organization (WTO)’s 2002 document de-restriction reform that enhanced transparency to the public. Specifically, we examine how prompt public disclosure of documents shapes the way the WTO Secretariat writes reports about the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). Using network statistics to estimate the state preference distributions on key topics, we find that, after the reform, the WTO Secretariat is more likely to issue reports on polarized topics in negotiations, using accountability-enhancing words. Our analysis at the country-year level shows that the reform led to greater national newspaper coverage of the WTO TRIPS, which in turn raised public awareness. The results suggest that transparency could empower international bureaucrats to tackle divisive issues in times of member-state gridlock.
</summary>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>AbsInt-AI: Language Models for Abstract Interpretation</title>
<link href="https://hdl.handle.net/1721.1/163731" rel="alternate"/>
<author>
<name>Wang, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163731</id>
<updated>2025-11-18T06:27:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AbsInt-AI: Language Models for Abstract Interpretation
Wang, Michael
Static program analysis is a foundational technique in software engineering for reasoning about program behavior. Traditional static analysis algorithms model programs as logical systems with well-defined semantics, enabling strong guarantees such as never missing a bug. However, traditional analyses almost always rely on uniform, hard-coded heap abstractions. While more adaptive abstractions are possible in theory, they are rarely implemented in practice due to their complexity and fragility. This limits their precision and flexibility, especially in dynamic languages like JavaScript, where heap structures are heterogeneous and difficult to analyze statically. In this work, we introduce AbsInt-AI, a language-model-guided static analysis framework based on abstract interpretation with adaptive, per-object heap abstractions for JavaScript. This enables the analysis to leverage high-level cues, such as naming conventions and access patterns, without requiring brittle, hand-engineered heuristics. Importantly, the LM agent operates within a bounded interface and never directly manipulates program state, preserving the soundness guarantees of abstract interpretation. AbsInt-AI reduces false positives by up to 34% for bug detection compared to traditional static analysis while maintaining soundness. Our ablations show that the LM’s interactions with the analysis environment are crucial, outperforming non-agentic direct LM predictions by 25%.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift</title>
<link href="https://hdl.handle.net/1721.1/163730" rel="alternate"/>
<author>
<name>Sharma, Harsha</name>
</author>
<id>https://hdl.handle.net/1721.1/163730</id>
<updated>2025-11-18T06:27:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift
Sharma, Harsha
Video-streaming platforms tune dozens of playback parameters across thousands of client devices. Our measurements from Prime Video show that device-specific tuning can enhance stream quality. Yet traditional blackbox optimization methods like Bayesian optimization become prohibitively expensive due to the large configuration space and the constant emergence of new device types. We introduce AZEEM, a scalable recommendation system leveraging few-shot prediction to rapidly identify promising configurations for new devices. The key insight behind AZEEM is that devices exhibit performance similarities that enable predictions from limited observations. Trained on offline data of device-playback configuration interactions, AZEEM efficiently narrows down the search space to a small set of configurations likely to contain optimal or near-optimal candidates. Additionally, AZEEM addresses temporal distribution shift—where the best-performing configurations change over time—by recommending a small, robust set of candidates rather than a single configuration. Evaluations using large-scale real-world datasets show that AZEEM reduces exploration cost by 5.8–13.6× and improves stream quality compared to state-of-the-art Bayesian optimization and multi-armed bandit approaches, enabling effective device-specific optimization at scale. The material in this thesis is primarily sourced from the paper "Predict, Prune, Play: Efficient Video Playback Optimization Under Device Diversity and Drift" authored by Harsha Sharma, Pouya Hamadanian, Arash Nasr-Esfahany, Zahaib Akhtar, Mohammad Alizadeh, which is currently under submission.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oreo: Protecting ASLR Against Microarchitectural Attacks</title>
<link href="https://hdl.handle.net/1721.1/163729" rel="alternate"/>
<author>
<name>Song, Shixin</name>
</author>
<id>https://hdl.handle.net/1721.1/163729</id>
<updated>2025-11-18T06:27:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Oreo: Protecting ASLR Against Microarchitectural Attacks
Song, Shixin
Address Space Layout Randomization (ASLR) is one of the most prominently deployed mitigations against memory corruption attacks. ASLR randomly shuffles program virtual addresses to prevent attackers from knowing the location of program contents in memory. Microarchitectural side channels have been shown to defeat ASLR through various hardware mechanisms. We systematically analyze existing microarchitectural attacks and identify multiple leakage paths. Given the vast attack surface exposed by ASLR, it is challenging to effectively prevent leaking the ASLR secret against microarchitectural attacks. Motivated by this, we present Oreo, a software-hardware co-design mitigation that strengthens ASLR against these attacks. Oreo uses a new memory mapping interface to remove secret randomized bits in virtual addresses before translating them to their corresponding physical addresses. This extra step hides randomized virtual addresses from microarchitecture structures, preventing side channels from leaking ASLR secrets. Oreo is transparent to user programs and incurs low overhead. We prototyped and evaluated our design on Linux using the hardware simulator gem5.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Counting Substructures with Graph Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/163728" rel="alternate"/>
<author>
<name>Tahmasebi, Behrooz</name>
</author>
<id>https://hdl.handle.net/1721.1/163728</id>
<updated>2025-11-18T06:27:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Counting Substructures with Graph Neural Networks
Tahmasebi, Behrooz
To achieve a graph representation, most Graph Neural Networks (GNNs) follow two steps: first, each graph is decomposed into a number of subgraphs (which we call the recursion step), and then the collection of subgraphs is encoded by several iterative pooling steps. While recently proposed higher-order networks show a remarkable increase in the expressive power through a single recursion on larger neighborhoods followed by iterative pooling, the power of deeper recursion in GNNs without any iterative pooling is still not fully understood. To make it concrete, we consider a pure recursion-based GNN which we call Recursive Neighborhood Pooling GNN (RNP-GNN). The expressive power of an RNP-GNN and its computational cost quantify the power of (pure) recursion for a graph representation network. We quantify the power by means of counting substructures, which is one main limitation of Message Passing Neural Networks (MPNNs), and show how RNP-GNN can exploit the sparsity of the underlying graph to achieve low-cost powerful representations. We also compare with recent lower bounds on time complexity and show how recursion-based networks are near-optimal.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Control of a Multi-Fingered Soft-Rigid Hybrid Robotic Hand</title>
<link href="https://hdl.handle.net/1721.1/163727" rel="alternate"/>
<author>
<name>Norton, Wil J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163727</id>
<updated>2025-11-18T06:27:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Control of a Multi-Fingered Soft-Rigid Hybrid Robotic Hand
Norton, Wil J.
In robot hands, compliance improves the quality of grasps and allows for robustness in contact with the environment, which is why soft robot hands, which are inherently compliant, generate such interest despite being complex to control and model. In prior work, our lab developed a soft-rigid hybrid architecture for a robot finger, with the intention of making a compliant finger that is as easy to control as a rigid robot. This thesis details the work done to take this architecture and develop it into a five-fingered dexterous gripper capable of highly compliant grasping — over several iterations, we create an integrated tendon-driven hand that is robust, maintainable, and inexpensive. We develop a precise controller for the soft-rigid hybrid finger, and extend it for both position and task space control of the hand — additionally we implement variable stiffness control within the controller without the need for additional hardware, via adjusting gain values in the control loop. We test the ability of the hand to complete the full set of human grasping postures, and demonstrate that the soft-rigid architecture enables a high degree of generalization, able to complete 28 of the 33 identified human grasp postures. Additionally, tests illustrate the hand’s advantages in completing traditionally difficult manipulation tasks such as picking up thin deformable objects (such as a dollar bill or folding cloth) as well as in interfacing with soft or delicate target objects. We adapt a teleoperation system to map the movements of the robot gripper to a glove worn by a human operator, and evaluate the usability of the hand as a teleoperation target for completing several tasks — we illustrate promising results that the compliance of the hand compensates for operator error and allows for fast completion of tasks requiring environmental or object contact, traditionally difficult tasks for existing rigid robots. 
Finally, we discuss the use of the teleoperation system to record demonstrations which we then use to train an imitation learning model, utilizing an implementation of denoising diffusion probabilistic models, to complete grasping tasks. We show that our soft-rigid fingers allow a dexterous hand to be trained to perform autonomous grasping with a relatively small set of expert demonstrations, and that the compliance of the physical structure allows for variance in the environment and object position to be compensated for by the physical properties of the hand.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Multi-Modality Imaging Cart for Barrett’s Esophagus</title>
<link href="https://hdl.handle.net/1721.1/163726" rel="alternate"/>
<author>
<name>Qu, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/163726</id>
<updated>2025-11-18T06:27:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Multi-Modality Imaging Cart for Barrett’s Esophagus
Qu, Ashley
Barrett’s Esophagus (BE) is a key precursor to esophageal adenocarcinoma (EAC), but current screening and risk assessment methods are ineffective and costly. Many BE cases remain undiagnosed due to asymptomatic patients, and existing risk algorithms rely on patient data rather than biomarkers. This work aims to start building a risk progression model by using a multi-modal imaging system combining autofluorescence spectroscopy, optical coherence tomography, and diffuse reflectance spectroscopy to perform label-free optical biopsies on ex-vivo tissue. These images will be co-registered and validated with histological biomarkers for BE. The ultimate goal is to develop a non-invasive endoscopic capsule and algorithm to better assess BE progression and enhance early detection of EAC.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Complexity of Basis-Restricted Local Hamiltonians</title>
<link href="https://hdl.handle.net/1721.1/163725" rel="alternate"/>
<author>
<name>Ma, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/163725</id>
<updated>2025-11-18T06:27:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Complexity of Basis-Restricted Local Hamiltonians
Ma, Henry
A major goal of quantum complexity theory is to understand which computational problems can be solved with access to certain quantum resources. The subfield of Hamiltonian complexity specifically considers computational problems that ask about properties of local Hamiltonians, which are of critical importance in quantum complexity because they can be viewed as quantum generalizations of classical constraint satisfaction problems. In this work, we study the complexity of certain restricted variants of the Quantum-k-Sat problem, a quantum analog of the NP-complete k-Sat problem. We introduce new variants of Quantum-k-Sat which place a basis restriction on the input Hamiltonian H = Σᵢ hᵢ. Each variant is defined by a fixed collection of bases B₁, . . . , Bᵣ of n-qubit space. We require that each Hamiltonian term hᵢ must be diagonal in one of these bases. Our results resolve the complexity of certain basis-restricted variants of Quantum-k-Sat. First, we show that the Quantum-6-Sat problem with Hamiltonian terms restricted to be diagonal in an X/Z mixed basis is QMA₁-complete. Second, we combine basis restriction with the restriction of commutativity, and show the following easiness result, which applies generally to higher-level quantum systems (qudits) and bases Q and R (which are real-valued and satisfy an overlap condition): the commuting Quantum-Sat problem on qudits, where Hamiltonian terms are either diagonal in the Q basis, the R basis, or a single mixed Q/R basis, is in NP.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future of Personalized, Aligned Language Models</title>
<link href="https://hdl.handle.net/1721.1/163724" rel="alternate"/>
<author>
<name>Han, Seungwook</name>
</author>
<id>https://hdl.handle.net/1721.1/163724</id>
<updated>2025-11-18T06:27:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Future of Personalized, Aligned Language Models
Han, Seungwook
Aligning Large Language Models (LLMs) to cater to different human preferences, learn new skills, and unlearn harmful behavior is an important problem. Search-based methods, such as Best-of-N or Monte-Carlo Tree Search, are effective but impractical for LLM adaptation due to their high inference cost. On the other hand, using Reinforcement Learning (RL) for adaptation is computationally efficient but performs worse due to the optimization challenges of co-training the value function and the policy. We present a new framework for reward optimization, Value Augmented Sampling (VAS), that can maximize different reward functions using data sampled from only the initial, frozen LLM. VAS solves for the optimal reward-maximizing policy without co-training the policy and the value function, making the optimization stable. It outperforms established baselines, such as PPO and DPO, on standard benchmarks and achieves results comparable to Best-of-128 with lower inference cost. Unlike existing RL methods that require changing the weights of the LLM, VAS does not require access to the weights of the pre-trained LLM. Thus, it can even adapt LLMs (e.g., ChatGPT) that are available only as APIs. In addition, our algorithm unlocks the new capability of composing several rewards and controlling the extent of each one during deployment time. By bringing together stability, flexibility, and efficiency, we explore the future of aligned, personalized language models that can be adapted seamlessly to meet a wide spectrum of human preferences.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock</title>
<link href="https://hdl.handle.net/1721.1/163723" rel="alternate"/>
<author>
<name>Ji, Yewon</name>
</author>
<id>https://hdl.handle.net/1721.1/163723</id>
<updated>2025-11-18T06:27:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock
Ji, Yewon
Seoul, South Korea, exhibits an exceptionally rapid residential demolition-reconstruction cycle of approximately 30–40 years, resulting in one of the world’s shortest apartment building lifespans. This entrenched status quo, fueled by post-war policies, real estate speculation, and finance models treating housing primarily as a short-term asset, contrasts sharply with other developed nations. This research critiques South Korea’s model of rapid demolition for its significant, often overlooked, environmental impacts and social costs. To evaluate alternatives, the methodology comprises three key stages: A) a comparative analysis of the financial frameworks and sustainability outcomes characterizing Western residential longevity versus the unique Korean housing model; B) the formulation of a novel alternative practice focused on adaptive reuse and retrofitting, specifically tailored to integrate within South Korea’s economic system and cultural context; and C) the practical demonstration and assessment of this practice through a design case study, incorporating strategies like phased interventions and low-carbon materials such as mass timber. The analysis reveals that this alternative extends building lifespan and achieves substantial carbon reductions by preserving the embodied carbon within existing structures. It offers long-term financial benefits, presenting a viable economic pathway that aligns key stakeholder interests through enduring value over speculative gains.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach</title>
<link href="https://hdl.handle.net/1721.1/163722" rel="alternate"/>
<author>
<name>Noorbakhsh, Kimia</name>
</author>
<id>https://hdl.handle.net/1721.1/163722</id>
<updated>2025-11-18T06:27:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach
Noorbakhsh, Kimia
Assessing and enhancing human learning through question-answering is vital, especially when dealing with large documents, yet automating this process remains challenging. While large language models (LLMs) excel at summarization and answering queries, their ability to generate meaningful questions from lengthy texts remains underexplored. We propose Savaal, a scalable question-generation system with three objectives: (i) scalability, enabling question generation from hundreds of pages of text; (ii) depth of understanding, producing questions that go beyond factual recall to test conceptual reasoning; and (iii) domain-independence, automatically generating questions across diverse knowledge areas. Instead of providing an LLM with large documents as context, Savaal improves results with a three-stage processing pipeline. Our evaluation with 76 human experts on 71 papers and PhD dissertations shows that Savaal generates questions that better test depth of understanding by 6.5× for dissertations and 1.5× for papers compared to a direct-prompting LLM baseline. Notably, as document length increases, Savaal’s advantages in higher question quality and lower cost become more pronounced.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New approaches to diagnostic imaging: Magnetic particle imaging for human functional neuroimaging and short mid-field MRI magnet design</title>
<link href="https://hdl.handle.net/1721.1/163721" rel="alternate"/>
<author>
<name>Barksdale, Alex Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163721</id>
<updated>2025-11-18T03:03:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">New approaches to diagnostic imaging: Magnetic particle imaging for human functional neuroimaging and short mid-field MRI magnet design
Barksdale, Alex Christopher
Part I: Magnetic Particle Imaging for Human Functional Neuroimaging
While Magnetic Resonance Imaging (MRI) has revolutionized diagnostic imaging since its clinical introduction in the 1980s — primarily focusing on hydrogen nuclei — it remains fundamentally limited by the weak nature of nuclear spin magnetism. For example, functional MRI (fMRI) provides valuable insights into brain activity through BOLD signaling, but its limited sensitivity and reliance on indirect physiological measures often necessitate large subject pools for meaningful analysis. In contrast, Magnetic Particle Imaging (MPI) utilizes the much stronger magnetism associated with superparamagnetic iron oxide nanoparticles (SPIONs), and by minimizing background signal levels which are not modulated by functional activity, it offers a promising alternative. However, there are no approved SPION tracers for human use that are well-suited to MPI, and we have little experience scaling this technology up to human-sized imagers. This thesis therefore demonstrates a human-scale MPI scanner, using it to perform functional MPI (fMPI) in non-human primates, and assesses its potential for future human studies. Additionally, we investigate safety aspects of MPI, specifically focusing on peripheral nerve stimulation (PNS) induced by the 25 kHz magnetic excitation fields used in MPI. Because this is a higher frequency than those used by MRI gradients, threshold data at this frequency are lacking. This thesis measures the PNS stimulation threshold in human subjects to better understand high-frequency magnetic PNS and ensure the safe implementation of human-scale MPI for future neuroimaging applications.
Part II: Short Mid-Field MRI Magnet Designs
Anxiety induced by the long, narrow tube of conventional 1.5T and 3T scanners is a common cause of incomplete patient examinations, leading to delays in diagnosis and reduced facility throughput. In contrast, the short aspect ratio of CT scanner bores is known to alleviate this anxiety. This thesis also addresses the need for a more patient-friendly MRI scanning option by introducing a new “hybrid” superconducting and permanent magnet concept applicable to mid-field (0.5T) superconducting solenoid magnets. While mid-field scanners offer lower sensitivity than high-field alternatives, recent advances in image reconstruction and denoising have significantly enhanced their utility, allowing them to deliver diagnostic information comparable to that of the previous generation of 1.5T scanners. Additionally, they increase the range of compatible metallic implants and offer hospitals a lower-cost, easier-to-site alternative to 1.5T and 3T scanners. They can also enhance patient comfort through shorter bore lengths and larger diameters, but their optimized winding designs still reach a limit in how short they can be made for a given homogeneity and diameter specification. This thesis introduces the use of rare-earth permanent magnets to enable further reductions in scanner length, aiming to match the aspect ratio of CT scanners.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ab initio modeling of superconducting nanowire single-photon detectors</title>
<link href="https://hdl.handle.net/1721.1/163720" rel="alternate"/>
<author>
<name>Simon, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/163720</id>
<updated>2025-11-18T06:27:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ab initio modeling of superconducting nanowire single-photon detectors
Simon, Alejandro
Single-photon detectors are widely used in modern communication, sensing, and computing technology. Among these detectors, superconducting nanowire single-photon detectors (SNSPDs) possess the highest detection efficiencies, the lowest timing jitter, and the lowest dark count rates. However, for several applications, including those in the biological, astronomical, and quantum computation fields, there remains a desire to push the capabilities of modern detectors even further. To realize these improvements, it is necessary to develop an understanding of the physical mechanisms underpinning single-photon detection in these devices. However, current models are phenomenological, requiring experimental data as input, or can only recover qualitative agreement, severely limiting their predictive ability. In this thesis, we begin by describing the existing theoretical frameworks used to model superconducting materials and devices, both in equilibrium and nonequilibrium. We then illustrate an example of a phenomenological approach to modeling superconducting devices by developing an electrothermal model for the superconducting nanowire cryotron and demonstrating its efficacy in predicting the DC behavior and power dissipation of the device. Finally, we expand upon the current state-of-the-art SNSPD theory by utilizing recent advances in density functional theory to develop an ab initio model for the photon detection mechanism of SNSPDs. We then validate the predictions of our model with experimental data from the literature. The resulting model requires no experimental input, provides quantitative predictions of SNSPD performance, and can be extended to describe other superconducting devices, thus enabling the possibility of conducting a systematic search of materials for enhanced device performance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference-Time Learning Algorithms of Language Models</title>
<link href="https://hdl.handle.net/1721.1/163719" rel="alternate"/>
<author>
<name>Akyurek, Ekin</name>
</author>
<id>https://hdl.handle.net/1721.1/163719</id>
<updated>2025-11-18T03:03:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inference-Time Learning Algorithms of Language Models
Akyurek, Ekin
Modern language models (LMs) can perform complex tasks through in-context learning (ICL)—they can adapt to a task via examples provided in their input without any parameter updates. However, fundamental questions remain about when this adaptation works, what algorithms underlie it, and how to improve it. This thesis studies the mechanisms and limitations of ICL and develops better methods for test-time adaptation of LMs on diverse benchmarks of language modeling and reasoning. I begin by evaluating the ICL capabilities of pre-trained LMs. I demonstrate that LMs can achieve strong compositional generalization when provided with few-shot examples. In a separate analysis, I show that their performance deteriorates significantly when faced with counterfactual variants of tasks they normally perform well on. Later, I develop "model problems" of ICL to test the ability of LMs to learn novel mathematical structures in-context, such as linear functions and probabilistic formal languages. I then interpret the algorithmic foundations of ICL. First, I prove that Transformer models with sufficient capacity can execute both iterative and closed-form solutions to linear regression problems, and demonstrate that these theoretical solutions manifest as interpretable intermediate variables. Then, I reveal how LMs develop specialized circuits that implement approximate n-gram learning algorithms for probabilistic languages. Building on these insights, I develop two approaches to enhance LMs. First, I demonstrate that explicitly incorporating n-gram computation into model architectures improves performance across multiple domains. Second, I introduce a test-time training method that enables rapid adaptation through gradient updates on input data, achieving significant improvements over standard few-shot learning on abstract reasoning tasks.
Together, these results advance our understanding of how LMs adapt to novel tasks and provide practical techniques for enhancing their test-time learning capabilities.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock</title>
<link href="https://hdl.handle.net/1721.1/163718" rel="alternate"/>
<author>
<name>Velez, Gustavo A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163718</id>
<updated>2025-11-18T06:27:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock
Velez, Gustavo A.
Optical lattice clocks require careful preparation of atomic ensembles in order to ensure homogeneous interactions with the clock laser. We demonstrate loading and laser cooling of an ensemble of ytterbium-171 atoms in a 2D optical dipole trap created by an optical cavity. Our loading method ensures that all atoms are located at the intersection of two perpendicular dipole traps, as verified through absorption imaging. Raman sideband cooling was used to cool the atomic ensemble from 15.7 μK to 6.3 μK, as measured through optical sideband spectroscopy on the 578 nm clock transition. Together, these steps improved the excitation fraction transferred from the ground to the clock state during a Rabi oscillation from approximately 45 percent to 80 percent. The final atomic ensemble preparation is now sufficient for running an atomic clock.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving and Analyzing Model Merging Methods for Adaptation</title>
<link href="https://hdl.handle.net/1721.1/163717" rel="alternate"/>
<author>
<name>Pari, Jyothish</name>
</author>
<id>https://hdl.handle.net/1721.1/163717</id>
<updated>2025-11-18T06:27:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving and Analyzing Model Merging Methods for Adaptation
Pari, Jyothish
In this work, we explore the limitations of combining models by averaging intermediate features, referred to as model merging, and propose a new direction for achieving collective model intelligence through what we call compatible specialization. Current methods for model merging, such as parameter and feature averaging, struggle to effectively combine specialized models due to representational divergence during fine-tuning. As models specialize to their individual domains, their internal feature representations become increasingly incompatible, leading to poor performance when attempting to merge them for new tasks. We analyze this phenomenon using centered kernel alignment (CKA) and show that as models specialize, the similarity in their feature space structure diminishes, hindering their capacity for collective use. To address these challenges, we investigate routing-based merging strategies, which offer more flexible methods for combining specialized models by dynamically routing across different layers. This allows us to improve on existing methods by combining features from multiple layers rather than relying on fixed, layer-wise combinations. However, we find that these approaches still face limitations when layers within models are representationally incompatible. Our findings highlight the importance of designing new approaches for model merging that operate on well-defined input and output spaces, similar to how humans communicate through language rather than intermediate neural activations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications</title>
<link href="https://hdl.handle.net/1721.1/163716" rel="alternate"/>
<author>
<name>Pan, Eileen</name>
</author>
<id>https://hdl.handle.net/1721.1/163716</id>
<updated>2025-11-18T06:27:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications
Pan, Eileen
LLMs already permeate medical settings, supporting patient messaging, medical scribing, and chatbots. While prior work has examined bias in medical LLMs, few studies focus on realistic use cases or analyze the source of the bias. To assess whether medical LLMs exhibit differential performance by gender, we audit their responses and investigate whether the disparities stem from implicit or explicit gender cues. We conduct a large-scale human evaluation of GPT-4 responses to medical questions, including counterfactual gender pairs for each question. Our findings reveal differential treatment based on the original patient gender. Specifically, responses for women more often recommend supportive resources, while those for men advise emergency care. Additionally, LLMs tend to downplay medical urgency for female patients and escalate it for male patients. Given rising interest in “LLM-as-a-judge” approaches, we also evaluate whether LLMs can serve as a proxy for human annotators in identifying disparities. We find that LLM-generated annotations diverge from human assessments in heterogeneous ways, particularly regarding error detection and relative urgency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications</title>
<link href="https://hdl.handle.net/1721.1/163715" rel="alternate"/>
<author>
<name>López Ángeles, Christian Emmanuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163715</id>
<updated>2025-11-18T06:27:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications
López Ángeles, Christian Emmanuel
Two-dimensional materials, such as graphene, hold promise for sensing applications. Graphene's remarkable surface-to-volume ratio, when employed as a transducer, enables the sensor channel to be readily modulated in response to chemical changes in proximity to its surface, effectively converting chemical signals into the electrical domain. However, their utilization has been constrained due to variations in device-to-device performance arising from synthesis and fabrication processes. To address this challenge, we employ Graphene Field Effect Transistors (GFETs) in developing a robust and multiplexed chemical sensing platform. This platform comprises a silicon chip with multiple arrays of sensing units distributed on its surface. This chip is coupled with custom-designed high-speed readout electronics for structural monitoring applications. For example, in harsh environmental conditions, structures constructed from reinforced concrete may experience degradation due to corrosion, a chemical process initiated by carbonation from atmospheric CO₂ and significant fluctuations in temperature and humidity. Under normal conditions, concrete maintains a pH level within the alkaline range of 13 to 14. However, when subjected to carbonation, its pH decreases to values between 8 and 9. Our platform excels in real-time pH monitoring. By conducting I-V sweep measurements in the sensor channel, we have established a correlation between [H⁺] concentration and the device transfer characteristics, i.e., the gate-source voltage (V_GS) at graphene's Dirac point, with an accuracy of roughly 97%. Additionally, we evaluate changes in graphene channel resistance induced by pH variations. This system and correlation allow for the prompt detection of any deviations induced by corrosion within a concrete environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards More Interpretable AI With Sparse Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/163714" rel="alternate"/>
<author>
<name>Engels, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/163714</id>
<updated>2025-11-18T06:26:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards More Interpretable AI With Sparse Autoencoders
Engels, Joshua
While large language models demonstrate remarkable capabilities across diverse domains, the specific representations and algorithms they learn remain largely unknown. The quest to understand these mechanisms holds dual significance: scientifically, it represents a fundamental inquiry into the principles underlying intelligence, while practically, and with growing urgency, it is vital for mitigating risks from these increasingly powerful systems. The initial section of this thesis tackles the challenge of interpreting internal language model representations (features) by employing sparse autoencoders (SAEs). An SAE decomposes neural network hidden states into a potentially more interpretable basis. In Chapter 2, we introduce an unsupervised, SAE-based methodology that successfully identifies inherently multi-dimensional features. Notably, we establish that language models causally represent concepts such as days of the week and months of the year using circular structures. This work provided the first definitive evidence of causal, multi-dimensional features, thereby refuting the one-dimensional linear representation hypothesis. Chapter 3 further assesses whether SAEs identify “true” atomic language model features. We compare the generalization performance and data efficiency of linear probes trained on SAE latents against those trained on the original hidden state basis. The negative outcomes of these experiments suggest limitations in SAEs for capturing the true ontology of language models. Motivated by these limitations, the second part of this thesis investigates sparse autoencoders themselves, exploring potential improvements and characterizing their failure modes.
Chapter 4 examines the portion of activations not reconstructed by SAEs, which we term “Dark Matter.” We find that a significant fraction of this dark matter is linearly predictable, and furthermore, that specific tokens poorly reconstructed by SAEs remain largely consistent across SAE sizes and sparsities. This suggests that SAEs may systematically fail to capture certain input subspaces, which we hypothesize to contain inherently dense features. Subsequently, Chapter 5 investigates a method to enhance SAE utility: freezing the learned SAE parameters and finetuning the surrounding language model components to minimize KL divergence with the original model’s output distribution. This technique results in a 30% to 55% decrease in the cross-entropy loss gap incurred by inserting the SAE into the model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems</title>
<link href="https://hdl.handle.net/1721.1/163713" rel="alternate"/>
<author>
<name>Lawson, Riley E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163713</id>
<updated>2025-11-18T06:27:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems
Lawson, Riley E.
In the analysis and operation of electric power systems, understanding the rates at which dynamic phenomena evolve is critical. Classically, power systems operate on multiple time scales, with slower mechanical dynamics from synchronous machines, faster electromechanical controls and protection, and very fast electrical dynamics from transmission networks. This time scale separation results in system modeling techniques which neglect certain component dynamics. However, in systems with significant penetration of power electronic devices and under fast time scale phenomena, the rates at which dynamics evolve become less separated, necessitating the modeling of all system dynamics. In large-scale systems, this becomes computationally challenging due to the high dimensionality of the interconnected system model. This work investigates the role transmission line dynamics play at very fast time scales in power systems. Theoretical results are presented to analyze which transmission line dynamics contribute significantly to power system dynamics, allowing for the intelligent incorporation of transmission line dynamics into computationally tractable models. For the first time, the use of control co-design techniques is demonstrated algorithmically to design fast power electronics-enabled control to stabilize unstable dynamics in electric power systems. This technique allows the design of controls, in an iterative way, to create stable interconnected systems. Finally, the impact of transmission line modeling on the design of protection at fast time scales is analyzed. This work presents techniques to protect against short circuits in response to load disconnections, and introduces DC circuit breaker configurations to force current commutation.
In the modern day, power system operators possess the technology to implement fast control of dynamics; however, due to insufficient information on how to model and prepare for these dynamics, operators instead rely on conventional, overly conservative control schemes. This work aims to bridge this gap by presenting methodologies to incorporate these dynamics into next-generation system models and to design control and protection that mitigate the risks these fast dynamics pose.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundation Models for Protein Phenotype Prediction</title>
<link href="https://hdl.handle.net/1721.1/163712" rel="alternate"/>
<author>
<name>Calef, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/163712</id>
<updated>2025-11-18T06:27:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundation Models for Protein Phenotype Prediction
Calef, Robert
Understanding the roles of human proteins remains a major challenge, with approximately 20% of human proteins lacking known functions and more than 40% missing context-specific functional insights. Even well-annotated proteins are often poorly characterized in diverse biological contexts, disease states, and perturbations. We present ProCyon, a foundation model for modeling, generating, and predicting protein phenotypes across five interrelated knowledge domains: molecular functions, therapeutic mechanisms, disease associations, functional protein domains, and molecular interactions. To support this, we created ProCyon-Instruct, a dataset of 33 million protein phenotype instructions, representing a comprehensive resource for multiscale protein phenotypes. By co-training a large language model with multimodal molecular encoders, ProCyon integrates phenotypic and protein data. A novel architecture and instruction tuning strategy allow ProCyon to process arbitrarily interleaved protein-and-phenotype inputs, achieve zero-shot task transfer, and generate free-form text phenotypes interleaved with retrieved protein sequence, structure, and drug modalities in a single unified model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Functionalization of CNFET arrays for chemical sensing</title>
<link href="https://hdl.handle.net/1721.1/163711" rel="alternate"/>
<author>
<name>Song, Jaekang</name>
</author>
<id>https://hdl.handle.net/1721.1/163711</id>
<updated>2025-11-18T06:26:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Functionalization of CNFET arrays for chemical sensing
Song, Jaekang
Practical deployment of gas sensors for general-purpose applications requires integrated chips that operate at room temperature. However, real-world implementation has been limited by challenges such as the integration of highly sensitive and selective sensors, as well as insufficient statistical validation. In this work, we present an integrated gas sensor array comprising 2048 carbon nanotube field-effect transistors (CNFETs), functionalized with conductive metal-organic frameworks (cMOFs) and metal nanoparticles. Our functionalization approach enhances sensor responses by up to two orders of magnitude and enables on-chip pattern generation. Furthermore, the large number of redundant sensors allows for statistically significant measurements. The improved sensitivity is attributed to increased Schottky barrier modulation. We also demonstrate the chip’s capability to classify bacteria and yeast based on the gas mixtures emitted from cultures grown on agar plates. This work highlights the potential of integrated gas sensors as a practical, rapid, and cost-effective approach for general gas sensing applications, including biomedical applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Single Cell RNA-Sequencing Data to Improve Clinical Oncology</title>
<link href="https://hdl.handle.net/1721.1/163710" rel="alternate"/>
<author>
<name>Boiarsky, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/163710</id>
<updated>2025-11-18T03:03:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Single Cell RNA-Sequencing Data to Improve Clinical Oncology
Boiarsky, Rebecca
Single-cell RNA sequencing (scRNA-seq) offers a detailed view of the cellular and phenotypic composition of healthy and diseased tissues. While machine learning (ML) methods are well-suited for the high-dimensional nature of scRNA-seq data, current computational tools face limitations, particularly when confronted with data from clinical oncology. This thesis presents the development and application of ML techniques for scRNA-seq data to address key computational challenges, with a focus on challenges in clinical oncology. It covers four key areas: identifying gene signatures and biomarkers in multiple myeloma, developing methods to account for somatic copy number variations in tumor samples, benchmarking large, pre-trained scRNA-seq foundation models, and creating a framework for predicting clinical outcomes using patient-level representations of single-cell data. Together, these studies aim to develop and evaluate novel ML algorithms for scRNA-seq data which can unlock actionable insights for personalized medicine.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-efficiency, low-loss Floquet Josephson Traveling Wave Parametric Amplifier</title>
<link href="https://hdl.handle.net/1721.1/163709" rel="alternate"/>
<author>
<name>Wang, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163709</id>
<updated>2025-11-18T06:26:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-efficiency, low-loss Floquet Josephson Traveling Wave Parametric Amplifier
Wang, Jennifer
Advancing error-corrected quantum computing and fundamental science necessitates quantum-limited amplifiers with near-ideal quantum efficiency and multiplexing capability. However, existing solutions achieve one at the expense of the other; for example, Josephson traveling wave parametric amplifiers (JTWPAs) are high-gain, broadband, and chip-based quantum amplifiers that conventionally incur a bandwidth-noise tradeoff. When operated at 20-dB gain and instantaneous bandwidths of a few GHz, JTWPAs typically reach near-quantum-limited intrinsic efficiencies of 70%–85% relative to that of an ideal phase-preserving quantum amplifier. This is due to information leakage to the sidebands of the JTWPA, which can be recovered by adiabatically transforming the input modes to Floquet modes of the system within the device. In this thesis, we experimentally demonstrate the first Floquet-mode traveling-wave parametric amplifier (Floquet TWPA). Fabricated in a superconducting qubit process, this Floquet TWPA achieves minimal dissipation, quantum-limited noise performance, and broadband operation. Our device exhibits &gt;20-dB amplification over a 3-GHz instantaneous bandwidth, &lt;0.5-dB average in-band insertion loss, and the highest reported intrinsic quantum efficiency for a TWPA of 92.1±7.6%, relative to an ideal phase-preserving amplifier. When measuring a superconducting qubit, our Floquet TWPA enables a system measurement efficiency of 65.1 ± 5.8%, the highest reported in a superconducting qubit readout experiment utilizing phase-preserving amplifiers, to the best of our knowledge. Finally, we discuss the noise limitations of our current experimental setup, as well as impedance matching strategies that will enable us to push towards ideal JTWPA performance. These general-purpose Floquet TWPAs are suitable for fast, high-fidelity multiplexed readout in large-scale quantum systems and future monolithic integration with quantum processors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Scalable Robot Learning without Physical Robots</title>
<link href="https://hdl.handle.net/1721.1/163708" rel="alternate"/>
<author>
<name>Park, Younghyo</name>
</author>
<id>https://hdl.handle.net/1721.1/163708</id>
<updated>2025-11-18T06:26:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Scalable Robot Learning without Physical Robots
Park, Younghyo
The development of generalist robots—capable of performing a wide range of tasks in diverse environments—requires large-scale datasets of robot interactions. Unlike language or vision domains, where data can be passively collected at scale, robotic data collection remains costly, labor-intensive, and constrained by physical hardware. This thesis explores two complementary directions to overcome this challenge. First, we examine the limitations of training robots from scratch using reinforcement learning (RL). While RL has achieved promising results in simulation, its scalability is hindered by a largely overlooked bottleneck: environment shaping. Designing suitable rewards, action and observation spaces, and task dynamics typically requires extensive human intervention. We formalize environment shaping as a critical optimization problem and introduce tools and benchmarks to study and eventually automate this process, a necessary step toward general-purpose RL. Second, we introduce an alternative paradigm for robot data collection that does not rely on real-world robots. Using the Apple Vision Pro, we develop DART, an augmented reality (AR) teleoperation platform that streams human hand motions to cloud-hosted robot simulations. This setup enables scalable, low-latency collection of high-quality robot demonstrations without the overhead of physical setup or maintenance. Our user studies show that DART more than doubles data collection throughput while reducing operator fatigue, and policies trained in simulation using this data successfully transfer to the real world. Together, these contributions address two key bottlenecks in robot learning: the human effort required for RL environment design, and the dependence on physical robots for data. They lay the groundwork for scalable, accessible approaches to training generalist robot models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications</title>
<link href="https://hdl.handle.net/1721.1/163707" rel="alternate"/>
<author>
<name>Golden, Courtney K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163707</id>
<updated>2025-11-18T06:27:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications
Golden, Courtney K.
Iterative sparse matrix computations lie at the heart of many scientific computing and graph analytics algorithms. On conventional systems, their irregular memory accesses and low arithmetic intensity create challenging memory bandwidth bottlenecks. To overcome such bottlenecks, distributed-SRAM architectures use tiled arrays of high-bandwidth local storage to achieve very high aggregate memory bandwidth. However, current distributed-SRAM architectures suffer from either poor programmability due to over-specialization or poor compute performance due to inefficient general-purpose hardware. This thesis proposes Quartz, a new architecture that uses short dataflow tasks and reconfigurable compute in a distributed-SRAM system to deliver both high performance and high programmability. Unlike traditional sparse CGRAs or on-die reconfigurable engines, Quartz allows reconfigurable compute to be highly utilized and scaled by (1) providing high memory bandwidth to each processing element and (2) introducing a task-level dataflow execution model that fits this new setting. Our execution model dynamically reconfigures tile hardware based on inter-tile messages to execute tasks on local data with fine-grained data partitioning across tiles. To make execution efficient, we explore novel data partitioning techniques that use graph and hypergraph partitioning to minimize network traffic and balance load. This is especially challenging for computations where one operand’s sparsity pattern (i.e., distribution of nonzeros) exhibits dynamic behavior across iterations, and we are the first to provide techniques to address this case. To ensure programmability, we show how a wide range of computations (expressed in an extended version of tensor algebra’s Einsum notation) and flexible data distributions can be systematically captured in small tasks for execution on Quartz.
We evaluate Quartz in simulation, using an 8-chiplet design with 2,048 tiles and 824 MB of SRAM per chiplet, running six different iterative sparse applications from scientific computing and graph analytics. Quartz’s architecture, data partitioning techniques, and programming model together achieve gmean 26.2× speedup over the prior state-of-the-art programmable distributed-SRAM architecture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dipole Contact Engineering for Field-Effect Transistors Based on Two-Dimensional Materials</title>
<link href="https://hdl.handle.net/1721.1/163706" rel="alternate"/>
<author>
<name>Gupta, Ayush Sagar</name>
</author>
<id>https://hdl.handle.net/1721.1/163706</id>
<updated>2025-11-18T06:27:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dipole Contact Engineering for Field-Effect Transistors Based on Two-Dimensional Materials
Gupta, Ayush Sagar
In the next several years and decades, the expanded use of artificial intelligence and edge computing will demand more powerful and energy-efficient electronics. Two-dimensional (2D) semiconductors, and in particular transition metal dichalcogenides (TMDs) such as molybdenum disulfide (MoS₂), are promising candidates for future field-effect transistors. TMDs can enable aggressive lateral and vertical device scaling, and they can add computing power density and new memory and sensing capabilities via 3D integration. However, several key challenges remain before 2D-channel transistors become commercially viable, including large contact resistances at the source and drain due to the van der Waals surface of 2D materials and the Fermi level pinning effect. A variety of methods have been explored to make ohmic contacts to MoS₂, the most promising of which so far is to use semimetals such as Bi and Sb; however, these materials suffer from thermal instability. This thesis addresses these challenges by (1) exploring the ultimate limit of contact metal workfunction scaling to better understand the metal-MoS₂ interface, and (2) introducing a new method of reducing contact resistance to 2D materials by inserting dipole layers at the contact interface. Initial work on ultralow-workfunction (ULWF) metal deposition on MoS₂ and subsequent device fabrication is presented, though further study is required to mitigate effects from deposition equipment and the reactive nature of these metals. In parallel, the Janus TMD MoSSe is explored as an example system for dipole contacts: extensive material characterization is performed, and the effect of a dipole layer on the contact properties of FETs is established. Together, these results are a significant step towards solving one of the major hurdles for the commercial introduction of 2D-channel transistors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Specialization of Vision Representations with Personalized Synthetic Data</title>
<link href="https://hdl.handle.net/1721.1/163705" rel="alternate"/>
<author>
<name>Chae, Nayoung (Julia)</name>
</author>
<id>https://hdl.handle.net/1721.1/163705</id>
<updated>2025-11-18T06:26:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Specialization of Vision Representations with Personalized Synthetic Data
Chae, Nayoung (Julia)
Modern vision models excel at general purpose downstream tasks. It is unclear, however, how they may be used for personalized vision tasks, which are both fine-grained and data-scarce. Recent works have successfully applied synthetic data to general-purpose representation learning, while advances in Text-to-Image (T2I) diffusion models have enabled the generation of personalized images from just a few real examples. Here, we explore a potential connection between these ideas, and formalize the challenge of using personalized synthetic data to learn personalized representations, which encode knowledge about an object of interest and may be flexibly applied to any downstream task relating to the target object. We introduce an evaluation suite for this challenge, including reformulations of two existing datasets and a novel dataset explicitly constructed for this purpose, and propose a contrastive learning approach that makes creative use of image generators. We show that our method improves personalized representation learning for diverse downstream tasks, from recognition to segmentation, and analyze characteristics of image generation approaches that are key to this gain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Microservice Design Parameters</title>
<link href="https://hdl.handle.net/1721.1/163704" rel="alternate"/>
<author>
<name>Chen, Qihang</name>
</author>
<id>https://hdl.handle.net/1721.1/163704</id>
<updated>2025-11-18T06:27:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Microservice Design Parameters
Chen, Qihang
Production-level cloud services are increasingly deployed as microservices. An important question is, given the application logic, how to design an effective microservice architecture. Existing studies have underscored the importance of microservice cohesiveness and coupling, using these metrics to drive automatic design optimizations. However, they have not accounted for the potential impact that such design changes may have on overall system performance, which is confirmed by our case study. In this work, we present a system that can automatically identify microservice designs that are well-balanced across performance, coupling, and cohesiveness to meet cloud providers’ requirements. The system uses a multi-round dynamic programming approach: it selectively identifies promising design candidates, generates the corresponding microservice code, and measures and compares the results to ultimately determine the optimal design. The designs produced by our system typically achieve over 20% throughput improvement under the same QoS with less than a 10% increase in average LCOM, and often outperform the original benchmark architectures across all evaluated metrics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River</title>
<link href="https://hdl.handle.net/1721.1/163703" rel="alternate"/>
<author>
<name>Martínez Chapa, Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/163703</id>
<updated>2025-11-18T06:27:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River
Martínez Chapa, Daniela
Full of dichotomies, the Santa Catarina River is both dry and wet, present but forgotten, central yet disconnected, valued yet feared. How should an intermittent river in a dense urban context be regenerated? This thesis reimagines its ecological, hydrological, and public potential. Set in Monterrey, Mexico, this research addresses the urgent need to rethink water management in the face of the intensifying climate crisis through different urban systems and regeneration strategies within the river basin. Focusing on the Santa Catarina River, long dismissed as a plot, void, or threat, this work proposes how an intermittent river might be re-understood not as an absence of activities or function but as a space of seasonal abundance, ecological possibility, and urban interaction. Historically engineered for control, the river has been used as a flood channel, market, sports complex, transportation corridor, and more. However, rarely has it been seen, treated, or protected as a river. Through the development of a pilot zone, this research suggests a replicable framework of regenerative strategies to slow down, retain, and absorb water flows, supporting both dry and wet season dynamics. These include restoring riparian ecologies, reintroducing soft edges, enabling groundwater recharge, and designing permeable, public, and accessible urban interventions that reconnect the city with the riverbed. This thesis is not a fixed proposal but a living toolkit, an adaptable model to be tested, expanded, and reimagined in the pilot as time and nature take over. At stake is not only the river’s future but also the city’s capacity to shift from resistance to relation, becoming one with it, becoming a city in the river.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Banjiha Stories (2025)</title>
<link href="https://hdl.handle.net/1721.1/163702" rel="alternate"/>
<author>
<name>Park, Habin</name>
</author>
<id>https://hdl.handle.net/1721.1/163702</id>
<updated>2025-11-18T06:27:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Banjiha Stories (2025)
Park, Habin
Banjiha are everywhere in Seoul. You don’t always see them—tucked below eye level, half-hidden underground—but they’re there. First built as military bunkers after the Korean War, later turned into last-resort housing, banjiha have become symbols of urban failure—spaces of neglect, flooding disasters, a problem to be erased. Both media portrayals and policy responses have advocated for their disappearance. But does removal truly protect the people who call these spaces home? This thesis moves beyond the idea that banjiha are simply failures of the city. Through three homes—three lives—it traces how these spaces are shaped, not only by policies and architecture but by the people who inhabit them. A home vulnerable to flooding, where protections exist—but not with the greatest risk. A place worn by time, held together by quiet repairs. A financial foothold in a city where affordable housing is disappearing. A space of temporary sacrifice. A shelter to return to, again and again. This is not just a story of risk or resilience, neglect or demolition. It is a story of how people live; how they adapt, negotiate, and make do in spaces that were never designed with them in mind. Rather than asking how to erase banjiha, this thesis asks: What can we learn by noticing them? What would it mean to shift the conversation—from removal to recognition, from assumption to understanding? To see these homes is to recognize not just their constraints, but the small interventions that could reshape them: a door that opens both ways so no one is trapped, policies that hold upstairs owners accountable for leaks, materials layered to prevent mold rather than mask it. Not grand reinventions, but deliberate shifts—openings for a different way forward. But before deciding what must change, we must first learn to see.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Inference for Inference Time Scaling of Language Models</title>
<link href="https://hdl.handle.net/1721.1/163701" rel="alternate"/>
<author>
<name>Puri, Isha</name>
</author>
<id>https://hdl.handle.net/1721.1/163701</id>
<updated>2025-11-18T06:26:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Probabilistic Inference for Inference Time Scaling of Language Models
Puri, Isha
Large language models (LLMs) have achieved significant performance gains via scaling up model sizes and/or data. However, recent evidence suggests diminishing returns from such approaches, motivating a pivot to scaling test-time compute. Existing deterministic inference-time scaling methods, usually with reward models, cast the task as a search problem, but suffer from a key limitation: early pruning. Due to inherently imperfect reward models, promising trajectories may be discarded prematurely, leading to suboptimal performance. We propose a novel inference-time scaling approach by adapting particle-based Monte Carlo methods. Our method maintains a diverse set of candidates and robustly balances exploration and exploitation. Our empirical evaluation demonstrates that our particle filtering methods achieve a 4–16x better scaling rate than deterministic search counterparts on various challenging mathematical tasks and more general reasoning tasks. Using our approach, we show that Qwen2.5-Math-1.5B-Instruct surpasses GPT-4o accuracy in only 4 rollouts, while Qwen2.5-Math-7B-Instruct scales to o1-level accuracy in only 32 rollouts. Our work not only presents an effective method for inference-time scaling, but also connects the rich literature in probabilistic inference with inference-time scaling of LLMs to develop more robust algorithms in future work. Code, videos, and further information available at probabilistic-inference-scaling.github.io/
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Systematic Integration of Inverter-Based Resources in Electricity Markets</title>
<link href="https://hdl.handle.net/1721.1/163700" rel="alternate"/>
<author>
<name>Pierre, Jordina</name>
</author>
<id>https://hdl.handle.net/1721.1/163700</id>
<updated>2025-11-18T06:26:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward Systematic Integration of Inverter-Based Resources in Electricity Markets
Pierre, Jordina
This thesis introduces a multi-layer control architecture for inverter-based resources (IBRs), separating fast local feedback control from slower self-dispatch and system-level market coordination. Existing integration methods for IBRs limit their control flexibility and completely restrict their market participation potential. Two common practices include treatment of IBRs as negative loads and setting a fixed power factor during grid commissioning. Modeling IBRs as negative loads excludes them from dispatch coordination in electricity markets, significantly limiting incentive for contribution to grid reliability and flexibility. Likewise, a fixed power factor prevents the IBR from providing voltage support through reactive power absorption/injection. With a fixed power factor, constant real and reactive power limits are imposed on the inverter, even during voltage transients, ignoring the fact that an inverter’s available capacity can vary significantly due to internal current constraints and the power provided by the renewable energy source. To address the need for reactive power adjustment in IBRs and pave the way for their active participation in electricity markets, this work presents a coordinated control approach that enables IBRs to transition into active, self-dispatching participants. The first layer, a hybrid PLL plus Q-V droop-based controller, governs millisecond-scale autonomous behavior, including low-voltage ride-through and real-time power adjustment based on voltage deviations at the point of common coupling and irradiance fluctuations from the renewable energy source, in this case solar. Given the implementation of the first layer and predicted irradiance, the second layer, to be implemented in future work, uses a model predictive controller to provide bid functions for both real and reactive power while keeping voltage at the point of common coupling within its limits.
Finally, the third layer performs centralized market clearing through a security-constrained optimization by the system operator. By advocating for self-dispatched, constraint aware control, this thesis challenges the prevailing passive modeling paradigm and offers a structured, physics-informed alternative. It demonstrates how IBRs can evolve into reliable, market-integrated assets, enabling smarter renewable integration and a more resilient, cost-effective and decarbonized grid.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximations to worst-case data dropping: unmasking failure modes</title>
<link href="https://hdl.handle.net/1721.1/163699" rel="alternate"/>
<author>
<name>Huang, Jenny Yijian</name>
</author>
<id>https://hdl.handle.net/1721.1/163699</id>
<updated>2025-11-18T06:28:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Approximations to worst-case data dropping: unmasking failure modes
Huang, Jenny Yijian
A data analyst might worry about generalization if dropping a very small fraction of data points from a study could change its substantive conclusions. Checking this non-robustness directly poses a combinatorial optimization problem and is intractable even for simple models and moderate data sizes. Recently, various authors have proposed a diverse set of approximations to detect this non-robustness. In the present work, we show that, even in a setting as simple as ordinary least squares (OLS) linear regression, many of these approximations can fail to detect (true) non-robustness in realistic data arrangements. We focus on OLS in the present work due to its widespread use and because some approximations work only for OLS. Of the approximations that do not fail our tests, we find not only that a simple recursive greedy algorithm is the most conceptually straightforward but also that it can be orders of magnitude faster to run than the others.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics</title>
<link href="https://hdl.handle.net/1721.1/163698" rel="alternate"/>
<author>
<name>Darmawi-Iskandar, Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163698</id>
<updated>2025-11-18T06:28:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics
Darmawi-Iskandar, Patrick
Rising global energy demands, driven by the advent of artificial intelligence (AI), cloud computing, and Internet of Things (IoT) devices, underscore the need for more efficient power electronics. In particular, power switches based on wide bandgap semiconductors such as gallium nitride (GaN) have emerged as promising alternatives to traditional silicon devices for low-voltage (10-100 V) applications. This work investigates the design, fabrication, and scaling of p-GaN-gate high-electron-mobility transistors (HEMTs). A p-GaN-gate epitaxial structure was developed with considerations for short channel effects. A self-aligned, gate-first process employing tungsten metallization was implemented to enable gate lengths as small as 100 nm. Device scaling was studied systematically, revealing the importance of gate aspect ratio and gate-to-drain spacing in managing short channel effects and maintaining breakdown voltage. Electrical characterization showed strong device performance, although contact resistance accounted for a substantial portion of total on-resistance. To address this, a modified fabrication approach incorporating regrown contacts was introduced, resulting in reduced contact resistance and improved overall device characteristics. The combined results highlight practical strategies for enhancing the performance and scalability of p-GaN-gate HEMTs for next-generation low-voltage power electronics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62</title>
<link href="https://hdl.handle.net/1721.1/163697" rel="alternate"/>
<author>
<name>Li, Tien Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/163697</id>
<updated>2025-11-18T06:28:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62
Li, Tien Yi
This thesis is a history of diary-writing in China from 1918 through 1961. Diaries are an increasingly popular but still inadequately understood primary source for historians of modern China. Previous scholars have suggested that, in the twentieth century, diary-writing became increasingly popular due to Japanese and Soviet influences, the increasing availability of manufactured blank diaries, and ruling governments that used diary-writing as a way of enforcing ideological conformity. This thesis traces an alternative history, starting from the popularization of published diaries in Shanghai in the long 1920s; to diaries’ emergence as a recognizable genre that could be discussed and theorized; to the moment the genre gained its reputation as a kind of self-expression par excellence; to its widespread inclusion into school curricula; to loosely connected attempts on the part of educators to delimit a normative way of diary-writing that, ironically, increasingly regimented self-expression. In doing so, this thesis contributes to the existing historiography by offering three correctives: I argue that 1) the initial proliferation of diaries was economically––not ideologically––motivated, 2) the popularization of diary-writing was not a concerted effort orchestrated by China’s political leaders but at best a loosely connected effort led by a middling class of educators, textbook writers, and intellectuals, and 3) diary-writing was regimented not only by communist ideology in the Maoist era but also by shifting moral principles and anxieties throughout the twentieth century. All in all, this thesis demonstrates the value of diaries for studying moral knowledge, epistemologies, and anxieties at the grassroots in midcentury China.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Image of the Tunnels: Mapping Perception of the MIT Underground</title>
<link href="https://hdl.handle.net/1721.1/163696" rel="alternate"/>
<author>
<name>Ravichandran, Shruthi</name>
</author>
<id>https://hdl.handle.net/1721.1/163696</id>
<updated>2025-11-18T06:29:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Image of the Tunnels: Mapping Perception of the MIT Underground
Ravichandran, Shruthi
Kevin Lynch’s influential book, The Image of the City, proposes five elements by which residents of a space create mental maps of their neighborhood and use these to define their spatial perception and navigation: paths, edges, districts, nodes, and landmarks. The MIT Tunnels are spaces utilized daily for a myriad of purposes: to reach labs and offices, to avoid slow-moving tourist traffic and biting Boston cold, and to explore MIT’s iconic hacking spots. This work explores whether Lynchian principles apply to these pseudourban underground spaces and culminates in a GeoGuessr-inspired virtual game where students can test and grow their knowledge of tunnel navigation. The hypotheses tested in this thesis project extend Lynch’s framework to relevant tunnel analogs: familiar paths, districts (clusters of buildings and departments), tunnel landmarks, and cross-level relationships between above- and underground mental maps. These hypotheses were first tested via preliminary surveys of MIT students. The subsequent experiments involved two games: one physically in the tunnels, one online with images of the tunnels gathered with a 360-camera. The games involved having participants navigate to a target building from a starting point. After the in-person game was completed, participants answered a series of questions about their route. These races offered information about familiar paths, landmarks, and strategies participants used to navigate the tunnels. Results from this game confirmed conclusions drawn from preliminary surveys that Lynchian principles do extend to the tunnels via relevant analogs, and above-ground knowledge and connection points offered even more information than Lynch’s five principles alone. Students consistently rely on heavily traveled paths, navigate through familiar districts, and use above-ground knowledge to traverse unknown underground buildings.
This work can be extended to help grow students’ understanding of these tunnels, fostering further creativity and student expression in this complex network of spaces.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners</title>
<link href="https://hdl.handle.net/1721.1/163695" rel="alternate"/>
<author>
<name>Koo, Jaehyun</name>
</author>
<id>https://hdl.handle.net/1721.1/163695</id>
<updated>2025-11-18T06:27:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners
Koo, Jaehyun
This thesis contributes to the burgeoning field of batch-dynamic parallel algorithms by presenting parallel batch-dynamic graph algorithms for coreness decomposition and spanners, as well as a number of other related problems. The first class of problems we consider involves approximating coreness decomposition and several closely related concepts, such as (subgraph) density estimation, arboricity estimation, and low out-degree orientations. These are extremely useful structures for organizing graphs based on their density. Our algorithms process any batch of edge insertions and deletions in polylogarithmic depth while using work that is linear in the batch size (up to logarithmic factors), in the worst case. The second class of problems we consider concerns graph spanners. Over the past two to three decades, graph sparsifications that approximately preserve key graph properties have become essential tools in algorithm design. In particular, spanners—reducing the number of edges while approximately preserving pairwise distances—have been widely studied. We present the first such algorithms for computing and maintaining spanners. These algorithms achieve near-optimal amortized runtime—processing each batch in polylogarithmic depth with work nearly linear in the batch size for any number of processors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163694" rel="alternate"/>
<author>
<name>Fey, Nolan</name>
</author>
<id>https://hdl.handle.net/1721.1/163694</id>
<updated>2025-11-18T06:27:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation
Fey, Nolan
Achieving athletic loco-manipulation on robots requires moving beyond traditional tracking rewards—which simply guide the robot along a reference trajectory—to task rewards that drive truly dynamic, goal-oriented behaviors. Commands such as “throw the ball as far as you can” or “lift the weight as quickly as possible” compel the robot to exhibit the agility and power inherent in athletic performance. However, training solely with task rewards introduces two major challenges: these rewards are prone to exploitation (reward hacking), and the exploration process can lack sufficient direction. To address these issues, we propose a two-stage training pipeline. First, we introduce the Unsupervised Actuator Net (UAN), which leverages real-world data to bridge the sim-to-real gap for complex actuation mechanisms without requiring access to torque sensing. UAN mitigates reward hacking by ensuring that the learned behaviors remain robust and transferable. Second, we use a pre-training and fine-tuning strategy that leverages reference trajectories as initial hints to guide exploration. With these innovations, our robot athlete learns to lift, throw, and drag with remarkable fidelity from simulation to reality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Wound Designates a Subject</title>
<link href="https://hdl.handle.net/1721.1/163693" rel="alternate"/>
<author>
<name>Lum, Luca E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163693</id>
<updated>2025-11-18T06:28:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Wound Designates a Subject
Lum, Luca E.
What haunts when haunting itself has been foreclosed? This thesis develops “ghostlessness” as a conceptual and aesthetic framework across my work in moving image, drawing, and writing. Ghostlessness refers to conditions that suppress haunting where it would otherwise emerge or be felt. Drawing from theoretical elaborations of hauntology, where the present is understood as structured by both suppressed pasts and unrealized futures, ghostlessness names the absence—or foreclosure—of that temporal disruption. It marks a contemporary condition in which systems oriented toward predictive governance and managed futurity preemptively neutralize rupture, sealing wounds before they can fester, reroute, or become sites of transformation. Through the works gathered here, I explore how ghostlessness functions not simply as absence but as affective and infrastructural suppression—rendering the spectral illegible, unaddressable, or unreal. Against this, my practice seeks to recapture the value of haunting in death-ridden, crisis-laden times where its presence is more prevalent than ever – hence its management, erasure, and suppression: ghostlessness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stylizing 3D Models With Generative AI for Fabrication</title>
<link href="https://hdl.handle.net/1721.1/163692" rel="alternate"/>
<author>
<name>Tejedor, Leandra</name>
</author>
<id>https://hdl.handle.net/1721.1/163692</id>
<updated>2025-11-18T06:28:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stylizing 3D Models With Generative AI for Fabrication
Tejedor, Leandra
This thesis presents two novel approaches for modifying 3D models using generative AI for stylization while ensuring the resulting models preserve the properties required for fabrication. The first method, Style2Fab, separates functional and stylistic sections of 3D models to enable targeted modifications that preserve the model's intended functionality. By distinguishing between these sections, Style2Fab allows for alterations that maintain the model's functional purpose while providing flexibility in its aesthetic design. This approach ensures that the modified models retain their original functionality after stylistic changes.&#13;
&#13;
The second method, MechStyle, incorporates finite element analysis (FEA) into the generative modeling pipeline to maintain the structural integrity of the modified models. By analyzing changes in stress values during a simulated drop test at various stages of the stylization process, MechStyle restricts changes to those that preserve the model's structural viability. This ensures that the resulting models are both stylistically accurate to the user's desired results and structurally sound for 3D printing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Recovering Planted Subgraphs</title>
<link href="https://hdl.handle.net/1721.1/163691" rel="alternate"/>
<author>
<name>Rajaraman, Amit</name>
</author>
<id>https://hdl.handle.net/1721.1/163691</id>
<updated>2025-11-18T06:28:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Limits of Recovering Planted Subgraphs
Rajaraman, Amit
Given an arbitrary subgraph H = Hₙ and p = pₙ ∈ (0, 1), the planted subgraph model is defined as follows. A statistician observes the union of the “signal,” which is a random “planted” copy H* of H, together with random noise in the form of an instance of an Erdős–Rényi graph G(n, p). Their goal is to then recover the planted H* from the observed graph. Our focus in this work is to understand the minimum mean squared error (MMSE), defined in terms of recovering the edges of H*, as a function of p and H, for large n. A recent paper [MNS⁺23] characterizes the graphs for which the limiting (as n grows) MMSE curve undergoes a sharp phase transition from 0 to 1 as p increases, a behavior known as the all-or-nothing phenomenon, up to a mild density assumption on H. However, their techniques fail to describe the MMSE curves for graphs that do not display such a sharp phase transition. In this paper, we provide a formula for the limiting MMSE curve for any graph H = Hₙ, up to the same mild density assumption. This curve is expressed in terms of a variational formula over pairs of subgraphs of H, and is inspired by the celebrated subgraph expectation thresholds from probabilistic combinatorics [KK07]. Furthermore, we give a polynomial-time description of the optimizers of this variational problem. This allows one to efficiently approximately compute the MMSE curve for any dense graph H when n is large. The proof relies on a novel graph decomposition of H as well as a new minimax theorem which may be of independent interest. Our results generalize to the setting of minimax rates of recovering arbitrary monotone boolean properties planted in random noise, where the statistician observes the union of a planted minimal element A ⊆ [N] of a monotone property and a random Ber(p)^⊗N vector.
In this setting, we provide a variational formula inspired by the so-called “fractional” expectation threshold [Tal10], again describing the MMSE curve (in this case up to a multiplicative constant) for large n.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Routing in the CityMesh Decentralized Fallback&#13;
Wireless Network</title>
<link href="https://hdl.handle.net/1721.1/163690" rel="alternate"/>
<author>
<name>Liu, Ziqian</name>
</author>
<id>https://hdl.handle.net/1721.1/163690</id>
<updated>2025-11-18T06:28:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Routing in the CityMesh Decentralized Fallback&#13;
Wireless Network
Liu, Ziqian
As modern communication systems increasingly rely on centralized network infrastructure, they become more vulnerable to disruptions caused by disasters, failures, or cyberattacks. To address this risk, CityMesh proposes a decentralized fallback wireless network that leverages existing Wi-Fi devices, such as access points (APs), in buildings to maintain essential connectivity during outages. However, achieving scalable and reliable message delivery in such a network, without introducing excessive overhead, poses significant challenges. This thesis presents a new routing protocol for CityMesh, designed to operate efficiently at city scale. We first identify the limitations of traditional shortest-path source routing in CityMesh’s context, including the use of unreliable links and overhead from redundant transmissions. To address these issues, we introduce a safer path selection metric that prioritizes link reliability, a waypoint-based routing compression scheme, and a conduit mechanism to increase robustness to local failures. Our protocol further supports compact routing tables through a grid-based addressing scheme, enabling constant-size packet headers and scalable routing decisions. Additionally, we propose a suppression strategy to reduce unnecessary transmissions both between and within buildings. Finally, to reconnect fragmented network segments, we develop a practical relay placement algorithm based on map data and geometric heuristics: it leverages convex hull optimization and reuses global map knowledge to ensure fast relay point computation in feasible locations such as roads and bridges. Simulations across 20 global cities show that our routing protocol achieves up to 2× higher packet delivery rates and reduces transmission overhead by up to 28× compared to GPSR under high packet loss and realistic localization error.
The routing table footprint, sampled across 4 randomly selected cities, averages under 2 KB of memory per device. Our fast relay placement algorithm also demonstrates that only a small number of relays is needed to achieve full network connectivity for most cities, validating CityMesh’s core premise that existing urban Wi-Fi infrastructure is sufficient to support a robust, scalable decentralized fallback network with minimal augmentation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GPU-accelerated Inference for Discrete Probabilistic Programs</title>
<link href="https://hdl.handle.net/1721.1/163689" rel="alternate"/>
<author>
<name>Ghavami, Matin</name>
</author>
<id>https://hdl.handle.net/1721.1/163689</id>
<updated>2025-11-18T06:28:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GPU-accelerated Inference for Discrete Probabilistic Programs
Ghavami, Matin
This thesis presents a comprehensive approach to GPU-accelerated inference for discrete probabilistic programs. We make two key contributions: (1) a factor graph IR implemented in JAX that supports variable elimination and Gibbs sampling, and (2) a modeling DSL with a compiler that lowers programs to the factor graph IR. Our system enables significant performance optimizations through static analysis of the factor graph structure. Variable elimination is optimized by reduction to tensor contraction with optimized contraction paths, while Gibbs sampling is automatically parallelized through graph coloring techniques. Empirical evaluations on standard benchmarks demonstrate orders-of-magnitude performance improvements over existing systems, with the parallelized Gibbs sampler showing speed-ups of up to 144x on Bayesian networks and even greater improvements for models with regular graph topologies such as Ising models and hidden Markov models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures</title>
<link href="https://hdl.handle.net/1721.1/163688" rel="alternate"/>
<author>
<name>Hernandez-Cornejo, Mark A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163688</id>
<updated>2025-11-18T06:27:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures
Hernandez-Cornejo, Mark A.
This thesis is concerned with DIY "off-the-cloud" networks as socio-technical models that can reinscribe a community's organizational processes, identity, and culture. It questions how these networks can break away from corporate and extractive services of "the cloud" in order to achieve digital sovereignty as well as resist the hegemonic understanding of Western universal technology. Rather than grafting an outside network onto a community, how might the nodes of a network emerge from the cultural ontologies and local knowledge systems, creating a "vernacular cloud," with political, epistemic, and ontological implications? The social practice of what I call 'net/work' involves the facilitation of local digital territories that create a grassroots politics of "organic internets." In Chapter One, recent attempts to break from monopolized services like Google and Facebook are examined, providing insight into why these networks are formed and how they “de-link” from “the cloud.” Drawing from Walter Mignolo's understanding of "de-linking," the thesis argues that this process is a political project that is also epistemologically and economically non-western. Chapter Two examines the notion of 'community' in community networks through the lens of grassroots organizing such as mutual aid, delving into the care and maintenance required for system administration. It builds on Geri Augusto's understanding of "re/trans" as a project that has developed new assemblages of knowledge and integrated them into different landscapes, examining community networks from the Global South, where network nodes have the potential to be cosmo-ontological. Chapter Three provides examples of the principles outlined in Chapters One and Two from my work in pursuit of technical autonomy within an organization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing Buildings: Environmental Impact of Sensor Technologies&#13;
and Data Infrastructure in Buildings</title>
<link href="https://hdl.handle.net/1721.1/163687" rel="alternate"/>
<author>
<name>Lesina-Debiasi, Simon</name>
</author>
<id>https://hdl.handle.net/1721.1/163687</id>
<updated>2025-11-18T06:27:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensing Buildings: Environmental Impact of Sensor Technologies&#13;
and Data Infrastructure in Buildings
Lesina-Debiasi, Simon
Building operations and the construction sector are among the largest contributors to global carbon emissions and energy consumption. While novel construction materials and insulation offer lower-embodied-carbon solutions, improved heating and cooling devices offer cost- and energy-effective building services. Above all, “smart” devices promise remote control, oversight, and optimization of building operations. With the rising implementation of AI solutions in every sector, it is important to see digital devices as an interface to the material machinery they are connected to. The way we are introduced to these systems as solutions to environmental problems leaves out the operational and infrastructural costs of the devices. From the mining operations that source rare earth minerals, to the pumping of oil for polymer coatings, to the chemical baths that separate metal from ore, all the way to the hard drives in server rigs that are cooled with water and driven by electricity, the cloud is nothing but materiality and resources. When building operations and construction techniques are evaluated for sustainability and environmental impact, connected services such as data networks and optimizations that rely on large server infrastructures and cloud computing are not part of the scope. This thesis reveals the missing components of energy evaluations for “smart” devices within the walls, floors, windows, doors, and roofs of our buildings, to create a framework through which building efficiency and sustainability can be reconsidered. Through historical research, literature reviews, and experiments, this work sheds light on the environmental impact of the data infrastructure to which our buildings are connected. The work presented in this thesis does not claim to be comprehensive nor to solve the problem of optimizing buildings for energy efficiency.
Instead, the goal is to build upon established research on data infrastructure, smart technology, and climate, showing that, while current efforts may improve a building's on-site efficiency, the off-site energy consumption these systems entail must also be taken into account.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization strategies for North American urban landscapes:&#13;
Evaluating pavements and vegetation across design typologies</title>
<link href="https://hdl.handle.net/1721.1/163686" rel="alternate"/>
<author>
<name>Ramirez Cuebas, Adriana</name>
</author>
<id>https://hdl.handle.net/1721.1/163686</id>
<updated>2025-11-18T06:27:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization strategies for North American urban landscapes:&#13;
Evaluating pavements and vegetation across design typologies
Ramirez Cuebas, Adriana
Urban landscapes are increasingly recognized as critical to climate mitigation, yet remain underrepresented in carbon accounting frameworks relative to buildings and infrastructure. This thesis advances landscape carbon assessment by introducing a typology-based Life Cycle Assessment (LCA) framework for landscape architecture. &#13;
The framework integrates anthropogenic emissions and natural carbon dynamics while addressing uncertainty. It proceeds through three layers of analysis: 1) developing landscape system and project categories for carbon footprint benchmarking; 2) benchmarking the performance of the proposed landscape systems and urban typologies; and 3) assessing the mitigation potential of decarbonization strategies across systems and project types.&#13;
Concrete pavers on reinforced concrete slabs and asphalt pavements (78 to 104 kgCO₂e/m²) are the most carbon intensive in the production-to-construction stage. Turfgrass and shrubs show wide variability, functioning as sources or sinks depending on species mix, maintenance, and flux magnitudes, underscoring the need for species-specific, ecologically dynamic modeling (-21 to 42 kgCO₂e/m² and -35 to 258 kgCO₂e/m²). Canopy systems act as consistent carbon sinks (-611 to -388 kgCO₂e/m² over 50 years) despite significant emissions from transportation and structural soil.&#13;
Landscape systems were used to benchmark four urban typologies—streetscapes, plazas, courtyards, and urban parks. Their 50-year carbon footprints range from –80 to 21 kgCO₂e/m² in urban parks, –13 to 63 in courtyards, 22 to 79 in plazas, and 3 to 80 in streetscapes. Applying decarbonization strategies makes all typologies achieve net carbon sink status at the high bound. Urban parks achieve neutrality immediately post-construction, courtyards in 13 years, plazas in 26 years, and streetscapes by year 33. At higher emission estimates, urban parks and courtyards deepen carbon sink performance, plazas cross into net sink territory, and streetscapes approach neutrality. The detailed findings highlight the influence of planting density, maintenance regimes, and land cover composition.&#13;
By structuring assessment around land covers and urban typologies, this thesis delivers a transferable carbon accounting framework aligned with design practice, offering actionable insights for embedding climate accountability into landscape architecture and public policy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/163685" rel="alternate"/>
<author>
<name>Pahl, David</name>
</author>
<id>https://hdl.handle.net/1721.1/163685</id>
<updated>2025-11-18T06:27:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction
Pahl, David
This thesis investigates the simulation and design of the hardware architecture required for large‑scale quantum error correction (QEC). Specifically, we design microwave circuits for fast and high‑fidelity readout and devise a long‑range coupler (LRC) that spans five qubit lattice sites, suitable for low‑overhead quantum low‑density parity‑check (qLDPC) codes [1]. We present a prototypical nine‑qubit qLDPC code incorporating two long‑range couplers and optimized readout circuits, achieving state‑of‑the‑art readout fidelities of up to 99.63% in 56 ns and demonstrating strong, well‑targeted couplings mediated by the LRC. Our simulations employ an efficient microwave abstraction based on ABCD transfer matrices, modeling complete qubit devices as networks of circuit elements. We use this formalism to develop a closed‑loop optimization algorithm that determines optimal readout parameters in seconds. The ABCD framework also accurately captures the multi‑mode behavior of the LRC, offering a valuable tool for developing large‑scale, low‑overhead QEC devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost-Based Optimization for Semantic Operator Systems</title>
<link href="https://hdl.handle.net/1721.1/163684" rel="alternate"/>
<author>
<name>Russo, Matthew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163684</id>
<updated>2025-11-18T06:27:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Cost-Based Optimization for Semantic Operator Systems
Russo, Matthew D.
Recently, AI developers have turned to modular AI systems in order to achieve state-of-the-art performance on challenging benchmarks and industry problems. New programming frameworks have enabled developers to build these systems by composing them out of semantic operators—i.e., LLM-powered maps, filters, joins, aggregations, etc.—inspired by relational operators from data management systems. While these systems of semantic operators can achieve strong performance on benchmarks, they can be difficult to optimize. For example, an optimizer may need to determine which model, prompting strategy, and retrieval mechanism to use for each operator. Existing optimizers are limited in the number of optimizations they can apply, and most (if not all) cannot optimize system quality, cost, or latency subject to constraint(s) on the other dimensions. In this thesis, we build an extensible, cost-based optimizer called Abacus, which searches for the best implementation of a semantic operator system given a (possibly constrained) optimization objective. The optimizer estimates operator performance by leveraging a minimal set of training examples and, if available, prior beliefs about operator performance. We evaluate the optimizer on a range of workloads including biomedical multi-label classification (BioDEX), information extraction from legal contracts (CUAD), and multi-modal question answering (MMQA). We demonstrate that systems optimized by our work achieve 18.7%-39.2% better quality and up to 23.6x lower cost and 4.2x lower latency than the next best system.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games</title>
<link href="https://hdl.handle.net/1721.1/163683" rel="alternate"/>
<author>
<name>Pipis, Charilaos</name>
</author>
<id>https://hdl.handle.net/1721.1/163683</id>
<updated>2025-11-18T06:27:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games
Pipis, Charilaos
We propose efficient no-regret learning dynamics and ellipsoid-based methods for computing linear correlated equilibria—a relaxation of correlated equilibria and a strengthening of coarse correlated equilibria—in general convex games. These are games where the number of pure strategies is potentially exponential in the natural representation of the game, such as extensive-form games. Our work identifies linear correlated equilibria as the tightest known notion of equilibrium that is computable in polynomial time and is efficiently learnable for general convex games. Our results are enabled by a generalization of the seminal framework of Gordon et al. [2008] for Φ-regret minimization, providing extensions to this framework that can be used even when the set of deviations Φ is intractable to separate/optimize over. Our polynomial-time algorithms are similarly enabled by extending the Ellipsoid-Against-Hope approach of Papadimitriou and Roughgarden [2008] and its generalization to games of non-polynomial type proposed by Farina and Pipis [2024a]. We provide an extension to these approaches when we do not have access to the separation oracles required by these works for the dual player. This work will appear in STOC 2025, [Daskalakis et al., 2025].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explaining Black-Box Classifiers by Implicitly Learning&#13;
Decision Trees</title>
<link href="https://hdl.handle.net/1721.1/163682" rel="alternate"/>
<author>
<name>Lange, Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/163682</id>
<updated>2025-11-18T06:27:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explaining Black-Box Classifiers by Implicitly Learning&#13;
Decision Trees
Lange, Jane
We present algorithms for finding two types of objects that explain the classification of a black-box model f : {±1}^d → {±1} on an instance x ∈ {±1}^d. The first is a certificate: a small set of x’s features that in conjunction essentially determines f(x). The second is a counterfactual: a nearest instance x′ for which f(x′) ≠ f(x). We obtain both algorithms via a connection to the problem of implicitly learning decision trees. The implicit nature of this learning task allows for efficient algorithms even when the complexity of f necessitates an intractably large surrogate decision tree. We solve the implicit learning task by bringing together techniques from learning theory, local computation algorithms, and complexity theory. Our approach of “explaining by implicit learning” shares elements of two previously disparate methods for post-hoc explanations, global and local explanations, and we make the case that it enjoys advantages of both. Our certification algorithm runs in time poly(d, C(f)) and outputs a certificate of size poly(C(f)), where C(f) is the “average certificate complexity” of f. Our counterfactual algorithm runs in time S(f)^{O(∆f(x))} · log d, where S(f) is the sensitivity of f (a discrete analogue of the Lipschitz constant) and ∆f(x) is the distance from x to its nearest counterfactual. We further prove a lower bound of S(f)^{Ω(∆f(x))} + Ω(log d) for finding counterfactuals, thereby showing that the guarantees of our algorithm are essentially optimal.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analog On-chip Training and Inference with Non-volatile&#13;
Memory Devices</title>
<link href="https://hdl.handle.net/1721.1/163681" rel="alternate"/>
<author>
<name>Lee, Jungsoo</name>
</author>
<id>https://hdl.handle.net/1721.1/163681</id>
<updated>2025-11-18T06:27:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analog On-chip Training and Inference with Non-volatile&#13;
Memory Devices
Lee, Jungsoo
As the demand for computation in neural networks continues to rise, conventional computing resources are increasingly constrained by their limited energy efficiency. One promising solution to this challenge is analog in-memory computing (AIMC), which enables efficient matrix-vector multiplications by encoding synaptic weights into the conductance of non-volatile memory devices structured into crossbar arrays. To explore the potential of non-volatile memory devices in AIMC, I simulate crossbar array operations using IBM’s AIHWKIT. With this tool, I investigate the implementation of various analog computing algorithms, including Tiki-Taka. AIMC is evaluated on simple MNIST classification tasks and on a more complex deep learning model, Long Short-Term Memory (LSTM) networks. I demonstrate that devices can be categorized based on their asymmetry and non-linear weight modulation behavior. Performance improvements through the Tiki-Taka algorithm are observed only when the device provides a sufficient converge-dragging force; otherwise, the algorithm may even degrade performance. I also investigate how pulse-to-pulse noise and device-to-device variability affect system performance, as well as how different peripheral circuit configurations influence the overall behavior. Finally, I propose an Analog Low-Rank Adapter (Analog LoRA) by applying analog computing to the fine-tuning of large language models. I explore the necessary conditions for Analog LoRA to achieve performance comparable to its digital counterpart. Based on these findings, I present design guidelines for effectively applying analog computing to various machine learning tasks on edge devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides</title>
<link href="https://hdl.handle.net/1721.1/163680" rel="alternate"/>
<author>
<name>Jiao, Yixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163680</id>
<updated>2025-11-18T06:26:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides
Jiao, Yixuan
Two-dimensional transition metal dichalcogenides (TMDs) such as monolayer MoS₂ offer great promise for next-generation nanoelectronics due to their atomic thickness, tunable bandgaps, and excellent electrostatic control. However, industrial semiconductor manufacturing demands CMOS-compatible, wafer-scale growth, yet conventional CVD methods often exceed thermal budgets and introduce contaminants, while achieving uniform, defect-free monolayers remains difficult. This thesis presents an in-depth discussion of low-temperature MOCVD system design and an optimization methodology for uniform monolayer TMD synthesis. We investigate the effect of alkali halide promoters (e.g., NaCl) and novel alkali-free promoters (e.g., NH₄Cl and crystal violet) on the synthesis of monolayer MoS₂. By optimizing the NaCl-promoted route, we achieve coalesced monolayer MoS₂ films with enlarged grain domains and demonstrate field-effect transistors with improved mobility. In parallel, we develop a CMOS-compatible crystal violet seeding method that avoids alkali metal contaminants and yields uniform monolayer coverage. To support process development, a rapid characterization pipeline was introduced: optical/SEM imaging combined with machine learning to quickly map thickness and grain size and infer electronic quality across the wafer. These contributions collectively advance the integration of 2D TMD materials into CMOS fabrication, enabling monolithic 3D integration in future electronics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating the Search for Artificial Life with Foundation Models</title>
<link href="https://hdl.handle.net/1721.1/163679" rel="alternate"/>
<author>
<name>Kumar, Akarsh</name>
</author>
<id>https://hdl.handle.net/1721.1/163679</id>
<updated>2025-11-18T06:27:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automating the Search for Artificial Life with Foundation Models
Kumar, Akarsh
With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields. Artificial Life (ALife) has not yet integrated FMs, thus presenting a major opportunity for the field to alleviate the historical burden of relying chiefly on manual design and trial-and-error to discover the configurations of lifelike simulations. This paper presents, for the first time, a successful realization of this opportunity using vision-language FMs. The proposed approach, called Automated Search for Artificial Life (ASAL), (1) finds simulations that produce target phenomena, (2) discovers simulations that generate temporally open-ended novelty, and (3) illuminates an entire space of interestingly diverse simulations. Because of the generality of FMs, ASAL works effectively across a diverse range of ALife substrates including Boids, Particle Life, Game of Life, Lenia, and Neural Cellular Automata. A major result highlighting the potential of this technique is the discovery of previously unseen Lenia and Boids lifeforms, as well as cellular automata that are open-ended like Conway’s Game of Life. Additionally, the use of FMs allows for the quantification of previously qualitative phenomena in a human-aligned way. This new paradigm promises to accelerate ALife research beyond what is possible through human ingenuity alone.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-aware Joint Physical Tracking and Prediction</title>
<link href="https://hdl.handle.net/1721.1/163678" rel="alternate"/>
<author>
<name>Dasgupta, Arijit</name>
</author>
<id>https://hdl.handle.net/1721.1/163678</id>
<updated>2025-11-18T06:26:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Uncertainty-aware Joint Physical Tracking and Prediction
Dasgupta, Arijit
Humans possess a remarkable capacity to track and predict the motion of objects even when visual information is temporarily absent. This thesis investigates how missing sensory evidence—such as during occlusion—alters current and future beliefs about object motion, and introduces an uncertainty-aware framework to model this process. A behavioral experiment was conducted in which participants continuously predicted the future destination of a ball moving in 2.5D environments with occlusion. Results demonstrate that participants dynamically updated their predictions throughout occlusion, exhibiting adaptive belief revision and physically grounded reasoning. To model this behavior, a structured Bayesian modeling and inference approach for joint tracking and prediction was developed that integrates perception, state estimation, and future prediction in a unified process. The approach, implemented via a Sequential Monte Carlo algorithm embedded within a GPU-accelerated and parallel probabilistic programming system, maintains time-varying beliefs over both present and future object states, conditioned on observed images. These belief states are explicitly represented in symbolic form, enabling interpretable, frame-by-frame introspection of uncertainty and prediction over time. When compared against human responses, the model closely matched the temporal evolution of time-aligned decisions and outperformed plausible alternative hypotheses that failed to reason during occlusion. These findings affirm that the absence of changing visual evidence does not engender a void in physical reasoning, but is evidence in itself—processed and revised through structured, probabilistic inference. 
By integrating probabilistic programming with human behavioral data through structured Bayesian modeling and inference, this thesis advances a computational account of intuitive physical reasoning and provides a foundation for building interpretable, uncertainty-aware AI systems that mirror human-like physical intelligence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward an Age-Ready Suburbia</title>
<link href="https://hdl.handle.net/1721.1/163677" rel="alternate"/>
<author>
<name>Du, Minghao</name>
</author>
<author>
<name>Zhuang, Kaicheng</name>
</author>
<id>https://hdl.handle.net/1721.1/163677</id>
<updated>2025-11-18T06:27:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward an Age-Ready Suburbia
Du, Minghao; Zhuang, Kaicheng
As America’s population ages, suburban neighborhoods face urgent challenges. Originally designed for young, car-dependent families, the suburban landscape today often presents barriers to aging in place, including poor walkability, inaccessible housing, and limited access to essential services and care. This thesis investigates these challenges and proposes a strategy for reimagining suburban environments through demographic analysis, spatial mapping, persona-driven research, architectural prototyping, and community planning. It traces the historical evolution of suburbia, critically evaluates existing senior housing typologies, and advances new frameworks for retrofitting residential neighborhoods to better support aging populations. Focusing on Sacramento, California, the research identifies high-priority areas where aging, affordability challenges, and mobility barriers intersect. Grounded by a pilot care home project, the study demonstrates how modest interventions, such as retrofitting single-family homes into small-scale residential care environments, can enhance both livability and care access. The first phase of the pilot project has been constructed, offering a demonstration of the proposed model’s feasibility. A phased development and financial strategy are also outlined to ensure broader applicability. While rooted in Sacramento, the thesis offers a framework relevant to many suburban contexts across the United States, particularly naturally occurring retirement communities (NORCs) where older adults are aging in place. Rather than creating isolated senior enclaves, the work promotes a distributed, community-integrated model that strengthens neighborhood resilience and supports intergenerational living. By combining design innovation with policy awareness and development feasibility, the thesis presents a scalable and adaptable approach to reshaping suburbs for an aging society.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calibration and Control of Superconducting Qubits for Low-Overhead Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/163676" rel="alternate"/>
<author>
<name>Pahl, Lukas</name>
</author>
<id>https://hdl.handle.net/1721.1/163676</id>
<updated>2025-11-18T06:27:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Calibration and Control of Superconducting Qubits for Low-Overhead Quantum Error Correction
Pahl, Lukas
The ability to coherently and reliably manipulate quantum information marks a fundamental technological leap—realizable through a universal, fault-tolerant quantum computer. Achieving this goal requires progress across all layers of the quantum computing stack, from physical qubits to theoretical algorithms. In this work, we address multiple layers of this stack. We develop a software architecture for scalable device calibration using modular calibration graphs. We introduce real-time frequency stabilization techniques, demonstrating improved single-qubit gate fidelities and progress toward multiqubit feedback. Finally, we explore how quantum error correction overhead can be reduced using low-density parity-check codes. We present logical protocols for a non-local nine-qubit code, which significantly outperforms comparable surface code implementations in both qubit efficiency and computational capability. These results represent practical steps toward overcoming key challenges in fault-tolerant quantum computing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ModelDiff: A Framework for Comparing Learning Algorithms</title>
<link href="https://hdl.handle.net/1721.1/163675" rel="alternate"/>
<author>
<name>Shah, Harshay</name>
</author>
<id>https://hdl.handle.net/1721.1/163675</id>
<updated>2025-11-18T06:27:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">ModelDiff: A Framework for Comparing Learning Algorithms
Shah, Harshay
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms. We begin by formalizing this goal as one of finding distinguishing feature transformations, i.e., input transformations that change the predictions of models trained with one learning algorithm but not the other. We then present ModelDiff, a method that leverages the datamodels framework (Ilyas et al., 2022) to compare learning algorithms based on how they use their training data. We demonstrate ModelDiff through three case studies, comparing models trained with/without data augmentation, with/without pre-training, and with different SGD hyperparameters. Our code is available at https://github.com/MadryLab/modeldiff.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prediction of Large Events in Directed Sandpiles</title>
<link href="https://hdl.handle.net/1721.1/163674" rel="alternate"/>
<author>
<name>Shah, Dhruv</name>
</author>
<id>https://hdl.handle.net/1721.1/163674</id>
<updated>2025-11-18T06:33:43Z</updated>
<published>2025-11-15T00:00:00Z</published>
<summary type="text">Prediction of Large Events in Directed Sandpiles
Shah, Dhruv
The degree of predictability of large avalanche events in the directed sandpile model is studied. This degree is defined in terms of how successfully a strategy can predict such events, as compared to a random guess. A waiting-time-based prediction strategy that exploits the local anticorrelation of large events is discussed. With this strategy we show analytically and numerically that large events are predictable, and that this predictability persists in the thermodynamic limit. We introduce another strategy which predicts large avalanches in the future based on the present excess density in the sandpile. We obtain the exact conditional probabilities for large events given an excess density, and use this to determine the exact form of the ROC predictability curves. We show that for this strategy, the model is predictable only for finite lattice sizes, and unpredictable in the thermodynamic limit. This behaviour is to be contrasted with previously established numerical studies carried out for Manna sandpiles.
</summary>
<dc:date>2025-11-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>What are the most informative data points for predicting extreme events?</title>
<link href="https://hdl.handle.net/1721.1/163673" rel="alternate"/>
<author>
<name>Champenois, Bianca</name>
</author>
<author>
<name>Sapsis, Themistoklis P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163673</id>
<updated>2025-11-18T06:33:26Z</updated>
<published>2025-09-22T00:00:00Z</published>
<summary type="text">What are the most informative data points for predicting extreme events?
Champenois, Bianca; Sapsis, Themistoklis P.
The growing availability of large datasets that describe complex dynamical systems, such as climate models and turbulence simulations, has made machine learning an increasingly popular tool for modeling and analysis, but the inherent low representation of extreme events poses a major challenge for model accuracy in the tails of the distribution. This raises a fundamental question: Given a large dataset, which data points should we use to train machine learning models that effectively learn extremes? To address this question, we study a likelihood-weighted active data selection framework that identifies the most informative data points for model training. The framework improves predictions of extreme values of a target observable, scales to high-dimensional systems, and is model-agnostic. Unlike traditional active learning, which assumes the ability to query new data, our method is designed for problems where the dataset is fixed but vast, focusing on selection rather than acquisition. Points are scored using a likelihood-weighted uncertainty sampling criterion that prioritizes samples expected to reduce model uncertainty and improve predictions in the tails of the distribution for systems with non-Gaussian statistics. When applied to a machine learning climate model with input dimensionality on the order of tens of thousands, we find that the likelihood-weighted active data selection algorithm most accurately captures the statistics of extreme events using only a fraction of the original dataset. We also introduce analysis techniques to further interpret the optimally selected points. Looking ahead, the approach can serve as a compression algorithm that preserves information associated with extreme events in vast datasets.
</summary>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coclique level structure for stochastic chemical reaction networks</title>
<link href="https://hdl.handle.net/1721.1/163672" rel="alternate"/>
<author>
<name>Bruno, Simone</name>
</author>
<author>
<name>Fu, Yi</name>
</author>
<author>
<name>Campos, Felipe A.</name>
</author>
<author>
<name>Del Vecchio, Domitilla</name>
</author>
<author>
<name>Williams, Ruth J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163672</id>
<updated>2025-11-18T06:33:51Z</updated>
<published>2025-11-10T00:00:00Z</published>
<summary type="text">Coclique level structure for stochastic chemical reaction networks
Bruno, Simone; Fu, Yi; Campos, Felipe A.; Del Vecchio, Domitilla; Williams, Ruth J.
Continuous-time Markov chains are commonly used as models for the stochastic behavior of chemical reaction networks. More precisely, these Stochastic Chemical Reaction Networks (SCRNs) are frequently used to gain a mechanistic understanding of how chemical reaction rate parameters impact the stochastic behavior of these systems. One property of interest is mean first passage times (MFPTs) between states. However, deriving explicit formulas for MFPTs can be highly complex. In order to address this problem, we first introduce the concept of a coclique level structure and develop theorems to determine whether certain SCRNs have this feature by studying associated graphs. Additionally, we develop an algorithm to identify, under specific assumptions, all possible coclique level structures associated with a given SCRN. Finally, we demonstrate how the presence of such a structure in an SCRN allows us to derive closed-form formulas for both upper and lower bounds for the MFPTs. Our methods can be applied to SCRNs taking values in a generic finite state space and can also be applied to models with non-mass-action kinetics. We illustrate our results with examples from the biological areas of epigenetics, neurobiology, and ecology.
</summary>
<dc:date>2025-11-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Psyche Light Elements Investigation</title>
<link href="https://hdl.handle.net/1721.1/163671" rel="alternate"/>
<author>
<name>Prettyman, Thomas H.</name>
</author>
<author>
<name>Mittlefehldt, David W.</name>
</author>
<author>
<name>Asphaug, Erik I.</name>
</author>
<author>
<name>Binzel, Richard P.</name>
</author>
<author>
<name>Courville, Samuel W.</name>
</author>
<author>
<name>Elkins-Tanton, Linda T.</name>
</author>
<author>
<name>Lawrence, David J.</name>
</author>
<author>
<name>Marchi, Simone</name>
</author>
<author>
<name>Merayo, José M. G.</name>
</author>
<author>
<name>McCoy, Timothy J.</name>
</author>
<author>
<name>Weiss, Benjamin P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163671</id>
<updated>2025-11-18T06:33:40Z</updated>
<published>2025-11-11T00:00:00Z</published>
<summary type="text">The Psyche Light Elements Investigation
Prettyman, Thomas H.; Mittlefehldt, David W.; Asphaug, Erik I.; Binzel, Richard P.; Courville, Samuel W.; Elkins-Tanton, Linda T.; Lawrence, David J.; Marchi, Simone; Merayo, José M. G.; McCoy, Timothy J.; Weiss, Benjamin P.
Light elements, such as C, S, Si, O, and H, are thought to be present in Earth’s liquid-Fe outer core. These elements lower melting temperatures, thereby allowing the core to remain in a liquid state at high pressure and influencing magnetic and geodynamic processes. However, the identity and abundance of the light elements in the cores of terrestrial planets, and how they were delivered to these cores, are not well known. The NASA Psyche mission will travel to and explore (16) Psyche, which may be the metal-rich core of a differentiated planetesimal exposed by collisional stripping. If so, the Psyche mission could provide a direct assessment of the light element content of an asteroidal core, allowing comparisons to the inferred composition of planetary cores and the parent bodies of the magmatic iron group meteorites. In particular, Earth’s high-pressure core formed gradually (over ∼100 Myr), in a multistage process, under increasingly oxidizing conditions, whereas the cores of planetesimals formed quickly (within 10 Myr) at low pressure, likely in chemical equilibrium with their mantles. The trace element systematics and mineral composition of magmatic iron meteorites indicate the presence of C, P, and S in planetesimal cores prior to solidification. Such elements would have played a role in core dynamics, including dynamo generation. Their low solubility combined with the immiscibility of their mineral precipitates would have resulted in their separation from Fe upon crystallization and their eruption onto the surface of a stripped core (via ferrovolcanism). The Psyche spacecraft will detect their elemental, mineral, and magnetic signatures with the payload instruments, which include a Gamma Ray and Neutron Spectrometer, a Multispectral Imager, and a Magnetometer. Additional constraints on interior composition and processes influenced by light elements will be provided by Psyche’s gravity and geomorphology investigations. 
We provide a brief introduction to the topic of light elements along with prospects for (16) Psyche. While we emphasize core formation processes, we also consider other possibilities for the origin and evolution of this metal-rich body.
</summary>
<dc:date>2025-11-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>A double exponential chirp waveform for noisy rheology</title>
<link href="https://hdl.handle.net/1721.1/163670" rel="alternate"/>
<author>
<name>Waeterloos, Jarno L.</name>
</author>
<author>
<name>McKinley, Gareth H.</name>
</author>
<author>
<name>Clasen, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/163670</id>
<updated>2025-11-18T06:33:47Z</updated>
<published>2025-09-23T00:00:00Z</published>
<summary type="text">A double exponential chirp waveform for noisy rheology
Waeterloos, Jarno L.; McKinley, Gareth H.; Clasen, Christian
In the search for faster rheometrical measurement techniques for fast time-evolving systems, optimally windowed chirps (OWCh) have recently been proposed for the determination of the complex modulus. However, such chirps are prone to artefacts at high frequencies because the input power is distributed over a range of frequencies, leading to reduced signal-to-noise ratios in noisy conditions. The Tukey window, which modulates the amplitude of the excitation disturbance and is required to avoid spectral leakage, directly reduces the signal-to-noise ratio at the edges of the signal, leading to a divergence of the measured moduli at high frequencies. A new double exponential chirp (DEC) signal is proposed to overcome these limitations. Its capabilities are demonstrated with orthogonal superposition rheometry as an example of a demanding high-noise environment. The S-shaped time-frequency history of the new chirp signal redistributes the input power over the frequency spectrum. Numerical simulations using the Maxwell and Giesekus models, along with orthogonal superposition measurements on wormlike micellar fluids, demonstrate the effectiveness of the DEC waveform. Parameter optimization with the Giesekus model identifies the ideal input configurations for achieving a maximum signal-to-noise ratio during rheological measurements.
</summary>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative BigSMILES: an extension for polymer informatics, computer simulations &amp; ML/AI</title>
<link href="https://hdl.handle.net/1721.1/163669" rel="alternate"/>
<author>
<name>Schneider, Ludwig</name>
</author>
<author>
<name>Walsh, Dylan</name>
</author>
<author>
<name>Olsen, Bradley</name>
</author>
<author>
<name>de Pablo, Juan</name>
</author>
<id>https://hdl.handle.net/1721.1/163669</id>
<updated>2026-03-08T03:30:38Z</updated>
<published>2023-11-17T00:00:00Z</published>
<summary type="text">Generative BigSMILES: an extension for polymer informatics, computer simulations &amp; ML/AI
Schneider, Ludwig; Walsh, Dylan; Olsen, Bradley; de Pablo, Juan
The BigSMILES notation, a concise tool for polymer ensemble representation, is augmented here by introducing an enhanced version called generative BigSMILES. G-BigSMILES is designed for generative workflows, and is complemented by tailored software tools for ease of use. This extension integrates additional data, including reactivity ratios (or connection probabilities among repeat units), molecular weight distributions, and ensemble size. An algorithm, interpretable as a generative graph, is devised that utilizes these data, enabling molecule generation from defined polymer ensembles. Consequently, the G-BigSMILES notation allows for efficient specification of complex molecular ensembles via a streamlined line notation, thereby providing a foundational tool for automated polymeric materials design. In addition, the graph interpretation of the G-BigSMILES notation sets the stage for robust machine learning methods capable of encapsulating intricate polymeric ensembles. The combination of G-BigSMILES with advanced machine learning techniques will facilitate straightforward property determination and in silico polymeric material synthesis automation. This integration has the potential to significantly accelerate materials design processes and advance the field of polymer science.
</summary>
<dc:date>2023-11-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calculating Pairwise Similarity of Polymer Ensembles via Earth Mover’s Distance</title>
<link href="https://hdl.handle.net/1721.1/163668" rel="alternate"/>
<author>
<name>Shi, Jiale</name>
</author>
<author>
<name>Walsh, Dylan</name>
</author>
<author>
<name>Zou, Weizhong</name>
</author>
<author>
<name>Rebello, Nathan J</name>
</author>
<author>
<name>Deagen, Michael E</name>
</author>
<author>
<name>Fransen, Katharina A</name>
</author>
<author>
<name>Gao, Xian</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<author>
<name>Audus, Debra J</name>
</author>
<id>https://hdl.handle.net/1721.1/163668</id>
<updated>2026-03-08T03:30:44Z</updated>
<published>2024-02-14T00:00:00Z</published>
<summary type="text">Calculating Pairwise Similarity of Polymer Ensembles via Earth Mover’s Distance
Shi, Jiale; Walsh, Dylan; Zou, Weizhong; Rebello, Nathan J; Deagen, Michael E; Fransen, Katharina A; Gao, Xian; Olsen, Bradley D; Audus, Debra J
Synthetic polymers, in contrast to small molecules and deterministic biomacromolecules, are typically ensembles composed of polymer chains with varying numbers, lengths, sequences, chemistry, and topologies. While numerous approaches exist for measuring pairwise similarity among small molecules and sequence-defined biomacromolecules, accurately determining the pairwise similarity between two polymer ensembles remains challenging. This work proposes the earth mover's distance (EMD) metric to calculate the pairwise similarity score between two polymer ensembles. EMD offers a greater resolution of chemical differences between polymer ensembles than the averaging method and provides a quantitative numeric value representing the pairwise similarity between polymer ensembles in alignment with chemical intuition. The EMD approach for assessing polymer similarity enhances the development of accurate chemical search algorithms within polymer databases and can improve machine learning techniques for polymer design, optimization, and property prediction.
</summary>
<dc:date>2024-02-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineered selective biotoxin-binding hydrogels for toxin sequestration</title>
<link href="https://hdl.handle.net/1721.1/163667" rel="alternate"/>
<author>
<name>Morris, Melody A</name>
</author>
<author>
<name>Yang, Yun Jung</name>
</author>
<author>
<name>Mai, Danielle J</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<id>https://hdl.handle.net/1721.1/163667</id>
<updated>2026-03-08T03:30:37Z</updated>
<published>2024-03-22T00:00:00Z</published>
<summary type="text">Engineered selective biotoxin-binding hydrogels for toxin sequestration
Morris, Melody A; Yang, Yun Jung; Mai, Danielle J; Olsen, Bradley D
The development of synthetic selective membranes that separate materials of similar sizes, charges, and/or polarities remains a difficult challenge, and looking towards biology provides inspiration for new designs. In this work, a series of cholera toxin binding peptides (CTBPs) are identified, spanning a range of binding inhibitions, and integrated into chemically cross-linked cholera toxin binding gels (CTBGs) via thiol-Michael polycondensation reactions. All gels demonstrate rheological profiles consistent with elastic solids. The CTBGs are probed via small-angle neutron scattering and exhibit a correlation length, ξ, smaller than most proteins (1.3–2.5 nm). Thus, an effective entropic mesh is formed to block non-targeted proteins. However, the CTBGs have a dynamic mesh size, Ξ, that is larger than cholera toxin (CT) to allow the transport of target proteins. The CTBGs with the highest binding inhibitions both show high selectivity and permeation of CT, rejecting all other tested proteins. In total, two new highly selective CTBGs are synthesized and validated for use in cholera toxin remediation. Together, this platform demonstrates the wide applicability of selectively-diffusive materials for difficult separations.
</summary>
<dc:date>2024-03-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerated small angle neutron scattering algorithms for polymeric materials</title>
<link href="https://hdl.handle.net/1721.1/163666" rel="alternate"/>
<author>
<name>Dai, Kexin</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<id>https://hdl.handle.net/1721.1/163666</id>
<updated>2026-03-08T03:30:49Z</updated>
<published>2025-10-10T00:00:00Z</published>
<summary type="text">Accelerated small angle neutron scattering algorithms for polymeric materials
Dai, Kexin; Olsen, Bradley D
Small-angle neutron scattering (SANS) is an extremely powerful technique for characterizing a wide variety of soft, biological, magnetic, and quantum materials, but it is often throughput-limited. This work proposes an algorithm to accelerate SANS experiments by estimating the minimum number of counts needed to perform parameter estimation and model differentiation tasks to a specified level of certainty. Three classes of model polymer materials were examined and analyzed, and time slices of SANS data were used to model a reduced number of counts. The scattering data with reduced numbers of counts were fitted to SANS model functions to perform parameter estimation and model differentiation tasks. For parameter estimation, estimators accurate to within 5–10% of the full-count estimator can be produced with only 1–50% of the full counts, depending upon the sample and parameter of interest. In order to project parameter uncertainties at lower numbers of counts prior to the completion of experiments, it is crucial to have a robust error quantification method that reflects the true uncertainty associated with each parameter. Uncertainties from Monte Carlo (MC) bootstrapping are shown, in general, to overestimate the error from fitting many experimental replicates. For most parameter estimation techniques, the weighted least squares estimator is unbiased; however, certain models yield biased estimators. To differentiate between models, both the Akaike information criterion (AIC) and Bayesian information criterion (BIC) can be used, and with either criterion, reduced numbers of counts can still identify the best model for our samples from a group of related candidate models for each material. 
The proposed algorithm can help SANS users optimize valuable beamtime and accelerate the use of SANS for structural characterization of libraries of materials while obtaining reasonable parameter estimation and model differentiation when scattering models are available.
</summary>
<dc:date>2025-10-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative study of conventional and process intensification by reactive distillation designs for glycerol carbonate production from glycerol and diethyl carbonate</title>
<link href="https://hdl.handle.net/1721.1/163665" rel="alternate"/>
<author>
<name>Chalermthai, Bushra</name>
</author>
<author>
<name>Sriharuethai, Chayanin</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<author>
<name>Ngaosuwan, Kanokwan</name>
</author>
<author>
<name>Soottitantawat, Apinan</name>
</author>
<author>
<name>Assabumrungrat, Suttichai</name>
</author>
<author>
<name>Charoensuppanimit, Pongtorn</name>
</author>
<id>https://hdl.handle.net/1721.1/163665</id>
<updated>2026-03-08T03:30:41Z</updated>
<published>2025-01-12T00:00:00Z</published>
<summary type="text">Comparative study of conventional and process intensification by reactive distillation designs for glycerol carbonate production from glycerol and diethyl carbonate
Chalermthai, Bushra; Sriharuethai, Chayanin; Olsen, Bradley D; Ngaosuwan, Kanokwan; Soottitantawat, Apinan; Assabumrungrat, Suttichai; Charoensuppanimit, Pongtorn
Glycerol carbonate (GC) can be produced from glycerol (GL), a low-value byproduct of the biodiesel industry. In this work, continuous processes for GC production via transesterification from crude GL and diethyl carbonate (DEC) were developed using Aspen Plus. Two cases were considered, and their process performances were compared. In Case I, a conventional design consisted of a continuously stirred tank reactor for the reaction section and a distillation column for the purification section. In Case II, a process intensification design consisted of a reactive distillation column that could accommodate both reaction and purification within a single column. In both cases, the process optimizations were carried out by connecting the process models in Aspen Plus to MATLAB, using the Genetic Algorithm as the optimizer. The results showed that Case II was superior to Case I in terms of energy utilization, CO2 emissions, and economics, with a specific energy consumption of 1.92 kWh/kg DEC, an internal rate of return of 274%, a payback period of 1.44 years, and CO2 emissions of 0.26 kg CO2/kg DEC. Lastly, the proposed process in Case II was compared with GC production using dimethyl carbonate (DMC). It was found that using DEC was superior to DMC due to easier separation and glycidol avoidance.
</summary>
<dc:date>2025-01-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for degradation: the importance of considering biotic and abiotic polymer degradation</title>
<link href="https://hdl.handle.net/1721.1/163664" rel="alternate"/>
<author>
<name>Tantawi, Omar</name>
</author>
<author>
<name>Joo, Wontae</name>
</author>
<author>
<name>Martin, Elijah E</name>
</author>
<author>
<name>Av-Ron, Sarah HM</name>
</author>
<author>
<name>Bannister, K'yal R</name>
</author>
<author>
<name>Prather, Kristala LJ</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<author>
<name>Plata, Desiree L</name>
</author>
<id>https://hdl.handle.net/1721.1/163664</id>
<updated>2026-03-08T03:30:42Z</updated>
<published>2025-04-10T00:00:00Z</published>
<summary type="text">Designing for degradation: the importance of considering biotic and abiotic polymer degradation
Tantawi, Omar; Joo, Wontae; Martin, Elijah E; Av-Ron, Sarah HM; Bannister, K'yal R; Prather, Kristala LJ; Olsen, Bradley D; Plata, Desiree L
Considering the increasing global plastic demand, there is a critical need to gain insight into environmental processes that govern plastic degradation in order to inform novel design of sustainable polymers. Current biological degradation testing standards focus on formation of CO2 (i.e., mineralization) alone as a diagnostic, ultimately limiting identification of structure–degradation relationships in a timely fashion. This work developed a sequential abiotic (i.e., photodegradation and hydrolysis) and biotic degradation test and applied it to a suite of 18 polymers, including ten lab-produced, novel polyhydroxyalkanoate polyesters, and eight commercially available, bio-based (i.e., polylactic acid and poly-3-hydroxybutyrate) and fossil-derived (i.e., polystyrene, polypropylene, low-density polyethylene, poly(ethylene terephthalate) and tire rubber) polymers. Biomineralization alone following standard methods (i.e., ASTM 6691-17, ISO 23977-1 2020) underestimated polymer degradation up to two-fold over 28 days. Simulated sunlight enhanced the overall polymer degradation by mobilizing dissolved organic carbon (DOC). After photoirradiation, up to 100% of released dissolved organic carbon was bioavailable for marine microbes over 14 days. Photodegradation and hydrolysis could be explained by structural drivers in the commodity polymers, and the lab-synthesized polymers illustrated a limit to total degradation beyond which no enhancements in degradation were achieved. Taken together, this workflow allows for relatively fast experimental determination of environmentally relevant stimuli to help support eventual elucidation of structure–property relationships for enhanced a priori design of degradable polymers.
</summary>
<dc:date>2025-04-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seroprevalence of COVID-19 neutralizing antibodies among multi-ethnic staff of an Asian primary healthcare institution: insights from point-of-care testing and implications for booster vaccination decisions</title>
<link href="https://hdl.handle.net/1721.1/163663" rel="alternate"/>
<author>
<name>Oka, Prawira</name>
</author>
<author>
<name>Jia, Huan</name>
</author>
<author>
<name>Kongsuphol, Patthara</name>
</author>
<author>
<name>Ng, Say Y.</name>
</author>
<author>
<name>Saravanan, Vivekanandan</name>
</author>
<author>
<name>Ng, Chirk J.</name>
</author>
<author>
<name>Moosa, Aminath S.</name>
</author>
<author>
<name>Xiong, Mengfei</name>
</author>
<author>
<name>Gun, Shih Y.</name>
</author>
<author>
<name>Tsang, Li P. M.</name>
</author>
<author>
<name>Lim, Jingyi</name>
</author>
<author>
<name>Vijaykumar, Kayshini</name>
</author>
<author>
<name>Ho, Cassandra X. Y.</name>
</author>
<author>
<name>Chua, Patrina W. L.</name>
</author>
<author>
<name>Ling, Sharon Y. H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163663</id>
<updated>2026-03-08T03:29:17Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Seroprevalence of COVID-19 neutralizing antibodies among multi-ethnic staff of an Asian primary healthcare institution: insights from point-of-care testing and implications for booster vaccination decisions
Oka, Prawira; Jia, Huan; Kongsuphol, Patthara; Ng, Say Y.; Saravanan, Vivekanandan; Ng, Chirk J.; Moosa, Aminath S.; Xiong, Mengfei; Gun, Shih Y.; Tsang, Li P. M.; Lim, Jingyi; Vijaykumar, Kayshini; Ho, Cassandra X. Y.; Chua, Patrina W. L.; Ling, Sharon Y. H.
Background COVID-19 vaccines have been crucial for establishing immunity; however, emerging data suggest vaccine efficacy is reduced within six months. Healthcare staff face an elevated COVID-19 risk and should make an informed decision to receive timely boosters to maintain their immunity. This study aims to determine the COVID-19 neutralizing antibody (nAb) seroprevalence among primary care staff and the impact of serological testing on their vaccination decision. Methods This cross-sectional study involved multidisciplinary primary healthcare professionals working in 10 public primary care clinics from December 2022 to July 2023. A questionnaire captured sociodemographic data, COVID-19 related history and attitudes toward serological testing. Their COVID-19 nAb levels were measured via point-of-care CoVIm™ Rapid SARS-CoV-2 nAb Test and laboratory cPass™ SARS-CoV-2 nAb Detection Kit. Results The study included 474 subjects, mostly female (88.8%), with a mean age of 40.6 years (SD = 12.3). All received at least two COVID-19 vaccinations, and 80.6% reported at least one infection. COVID-19 nAb seroprevalence was high (99.2%). Post-vaccination, 79.7% contracted COVID-19, with the median time to infection being 163 days. Most staff (93.9%) desired to know their COVID-19 immunity status through a finger prick test (77.0%) instead of venepuncture. Over two-thirds (68.1%) indicated the results would influence their booster vaccination decision. Conclusion The study revealed a high seroprevalence of COVID-19 nAb among the fully vaccinated participating staff. The necessity for timely boosters is underscored by 79.7% contracting COVID-19 post-vaccination. Most subjects were willing to undergo point-of-care testing, with results potentially influencing their decisions for booster vaccination.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Problem structuring in urban science education: Why, what, and how</title>
<link href="https://hdl.handle.net/1721.1/163662" rel="alternate"/>
<author>
<name>Lai, Yuan</name>
</author>
<author>
<name>Lavi, Rea</name>
</author>
<id>https://hdl.handle.net/1721.1/163662</id>
<updated>2026-03-08T03:27:20Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">Problem structuring in urban science education: Why, what, and how
Lai, Yuan; Lavi, Rea
Urban science is an emerging and transdisciplinary field that attracts deep interest in planning degree programs from educational institutions worldwide. Urban science education emphasizes the science of cities and urban information technology by integrating design, engineering, system science, spatial science, behavioral and social science, decision science, and other disciplines. The increasing complexity of urban systems creates significant pedagogical challenges for urban science education, particularly in problem structuring, which is the process of structuring, or defining, (a) the scope of the problem, (b) the potential ways for addressing the problem, and (c) suitable criteria for judging solutions to the problem. In this article, we describe the theoretical foundations of problem structuring in relation to urban science education and explain why it is difficult to teach. In response to this pedagogical challenge, we propose DIMES (Describe, Inquire, Model, Extract, and State), a novel domain-agnostic method combining design thinking and systems thinking developed for problem structuring in any level of higher education. We describe how the DIMES method can be integrated into urban science curricula with relation to critical considerations for teaching urban science problem structuring, the fast-evolving smart city development, and the disruptive impact of generative artificial intelligence on urban science education. Finally, we provide our thoughts on potential future studies with DIMES in urban science learning settings.
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>The power of fragmented elites: the role of inadvertent robust action</title>
<link href="https://hdl.handle.net/1721.1/163661" rel="alternate"/>
<author>
<name>Mizruchi, Mark S.</name>
</author>
<author>
<name>Chu, Johan S. G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163661</id>
<updated>2026-03-08T03:20:10Z</updated>
<published>2025-04-09T00:00:00Z</published>
<summary type="text">The power of fragmented elites: the role of inadvertent robust action
Mizruchi, Mark S.; Chu, Johan S. G.
It is broadly accepted among political scientists, political sociologists, and social movement theorists that a unified group will have a higher probability of success than a group that experiences internal divisions or fragmentation. Similarly, it has been assumed that in a society with a relatively unified elite, the elite will experience disproportionately higher benefits relative to the larger population. We take issue with this claim. In the mid-twentieth century, large American corporations exhibited a relatively high level of unity but the relative economic benefits accruing to the elite were at historic lows. In more recent years, American big business has become increasingly fragmented, yet the economic benefits that these elites have received have reached historic highs, and the average American’s standard of living has stagnated. Drawing on Padgett and Ansell, we introduce the concept of inadvertent robust action to explain how a relatively fragmented, disorganized elite can reap benefits that exceed those that its more unified counterparts experienced in an earlier era. We conclude with a discussion of the conditions under which our formulation can be expected to hold.
</summary>
<dc:date>2025-04-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Basic Elements of Strong Gravitational Lensing</title>
<link href="https://hdl.handle.net/1721.1/163660" rel="alternate"/>
<author>
<name>Schechter, Paul L.</name>
</author>
<author>
<name>Schnittman, Jeremy D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163660</id>
<updated>2026-03-08T03:20:09Z</updated>
<published>2025-05-30T00:00:00Z</published>
<summary type="text">Basic Elements of Strong Gravitational Lensing
Schechter, Paul L.; Schnittman, Jeremy D.
Even when used to describe the same phenomenon, equations, graphics and words each give different perspectives and lead to complementary insights. The basic elements of strong gravitational lensing are introduced here favoring words and graphics over equations whenever possible. Fermat’s principle is the fundamental driver of strong lensing. Three “D’s” encapsulate the essential effects of lensing: Delay, Deflection and Distortion. Gravity and geometry both contribute to the delay of photons from a lensed source. Their interplay determines how the images of a source are deflected and how they are stretched or compressed. Caustics and critical curves are explained. Images of doubly, triply, quadruply and quintuply lensed sources are displayed. A table of symbols, their definitions and distinctions provides a summary of the basic elements of strong lensing.
</summary>
<dc:date>2025-05-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for dark matter production in association with a single top quark in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163659" rel="alternate"/>
<author>
<name>Chekhovsky, V.</name>
</author>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>The CMS collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163659</id>
<updated>2026-03-08T03:27:17Z</updated>
<published>2025-09-17T00:00:00Z</published>
<summary type="text">Search for dark matter production in association with a single top quark in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; The CMS collaboration
A search for the production of a single top quark in association with invisible particles is performed using proton-proton collision data collected with the CMS detector at the LHC at $$\sqrt{s}=13$$ TeV, corresponding to an integrated luminosity of 138 fb−1. In this search, a flavor-changing neutral current produces a single top quark or antiquark and an invisible state nonresonantly. The invisible state consists of a hypothetical spin-1 particle acting as a new mediator and decaying to two spin-1/2 dark matter candidates. The analysis searches for events in which the top quark or antiquark decays hadronically. No significant excess of events compatible with that signature is observed. Exclusion limits at 95% confidence level are placed on the masses of the spin-1 mediator and the dark matter candidates, and are compared to constraints from the dark matter relic density measurements. In a vector (axial-vector) coupling scenario, masses of the spin-1 mediator are excluded up to 1.85 (1.85) TeV with an expectation of 2.0 (2.0) TeV, whereas masses of the dark matter candidates are excluded up to 0.75 (0.55) TeV with an expectation of 0.85 (0.65) TeV.
</summary>
<dc:date>2025-09-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the Ω_c^0 and Ξ_c^0 baryon lifetimes using hadronic b-baryon decays</title>
<link href="https://hdl.handle.net/1721.1/163658" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163658</id>
<updated>2026-03-08T03:27:11Z</updated>
<published>2025-09-18T00:00:00Z</published>
<summary type="text">Measurement of the Ω_c^0 and Ξ_c^0 baryon lifetimes using hadronic b-baryon decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
The lifetimes of the Ω_c^0 and Ξ_c^0 baryons are measured using a pp collision dataset collected by the LHCb experiment, corresponding to an integrated luminosity of 9 fb−1. The charm baryons are produced in the fully reconstructed decay chains Ω_b^− → Ω_c^0(→ pK−K−π+)π− and Ξ_b^− → Ξ_c^0(→ pK−K−π+)π−. The measurement uses topologically and kinematically similar B− → D0(→ K−K+π−π+)π− decays for normalisation. The measured lifetimes are τ(Ω_c^0) = 276.3 ± 19.4 (stat) ± 1.8 (syst) ± 0.7 (τ(D0)) fs and τ(Ξ_c^0) = 149.2 ± 2.5 (stat) ± 0.9 (syst) ± 0.4 (τ(D0)) fs, where the first uncertainty is statistical, the second systematic and the third due to the uncertainty of the D0 lifetime. These results are consistent with previous measurements performed by the LHCb experiment.
</summary>
<dc:date>2025-09-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wormholes, branes and finite matrices in sine dilaton gravity</title>
<link href="https://hdl.handle.net/1721.1/163657" rel="alternate"/>
<author>
<name>Blommaert, Andreas</name>
</author>
<author>
<name>Levine, Adam</name>
</author>
<author>
<name>Mertens, Thomas G.</name>
</author>
<author>
<name>Papalini, Jacopo</name>
</author>
<author>
<name>Parmentier, Klaas</name>
</author>
<id>https://hdl.handle.net/1721.1/163657</id>
<updated>2026-03-08T03:27:13Z</updated>
<published>2025-09-16T00:00:00Z</published>
<summary type="text">Wormholes, branes and finite matrices in sine dilaton gravity
Blommaert, Andreas; Levine, Adam; Mertens, Thomas G.; Papalini, Jacopo; Parmentier, Klaas
We compute the double trumpet in sine dilaton gravity via WdW quantization. The wormhole size is discretized. The wormhole amplitude matches the spectral correlation of a finite-cut matrix integral, where matrices have large but finite dimensions. This strongly suggests an identification of the sine dilaton gravity theory with the q-deformed JT gravity matrix integral. At the very least, it captures all universal content of that matrix model. The disk decomposes into the physical (gauge invariant) solutions of the WdW equation, which are trumpets with discrete sizes. This decomposition modifies the usual no-boundary wavefunction to a normalizable one in sine dilaton gravity. We furthermore present an exact quantization of sine dilaton gravity with open and closed end of the world branes. These EOW branes correspond with FZZT branes for the two Liouville theories that make up sine dilaton gravity. The WdW equation implies redundancies in this space of branes, leaving a one parameter family of gauge invariant branes. One gauge choice corresponds with branes discussed by Okuyama in the context of DSSYK. Legendre transforming the EOW brane amplitude reproduces the trumpet. One could read our work as fleshing out the Hilbert space of closed universes in sine dilaton gravity.
</summary>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precision e+e− hemisphere masses in the dijet region with power corrections</title>
<link href="https://hdl.handle.net/1721.1/163656" rel="alternate"/>
<author>
<name>Hoang, André H.</name>
</author>
<author>
<name>Mateu, Vicent</name>
</author>
<author>
<name>Schwartz, Matthew D.</name>
</author>
<author>
<name>Stewart, Iain W.</name>
</author>
<id>https://hdl.handle.net/1721.1/163656</id>
<updated>2026-03-08T03:27:15Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Precision e+e− hemisphere masses in the dijet region with power corrections
Hoang, André H.; Mateu, Vicent; Schwartz, Matthew D.; Stewart, Iain W.
We derive high-precision results for the e+e− heavy jet mass (HJM) dσ/dρ and dihemisphere mass (DHM) d^2σ/(ds1 ds2) distributions, for s1 ~ s2, in the dijet region. New results include: i) the N3LL resummation for HJM of large logarithms ln^n(ρ) at small ρ including the exact two-loop non-global hemisphere soft function, the 4-loop cusp anomalous dimension and the 3-loop hard and jet functions, ii) N3LL results for DHM with resummation of logarithms ln(s1,2/Q^2) when there is no large separation between s1 and s2, iii) profile functions for HJM to give results simultaneously valid in the peak and tail regions, iv) a complete two-dimensional basis of non-perturbative functions which can be used for double differential observables, that are needed for both HJM and DHM in the peak region, and v) an implementation of renormalon subtractions for large-angle soft radiation to O(α_s^3) together with a resummation of the additional large ln(Qρ/ΛQCD) logarithms. Here Q is the e+e− center-of-mass energy. Our resummation results are combined with known fixed-order O(α_s^3) results and we discuss the convergence and remaining perturbative uncertainty in the cross section. We also prove that, at order 1/Q, the first moment of the HJM distribution involves an additional non-perturbative parameter compared to the power correction that shifts the tail of the spectrum (where 1 ≫ ρ ≫ ΛQCD/Q). This differs from thrust where a single non-perturbative parameter at order 1/Q describes both the first moment and the tail, and it disfavors models of power corrections employing a single non-perturbative parameter, such as the low-scale effective coupling model. In this paper we focus only on the dijet region, not the far-tail distribution for ρ ≳ 0.2 beyond which the trijet factorization and resummation become important.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular chaos, operator algebras, and the Berry phase</title>
<link href="https://hdl.handle.net/1721.1/163655" rel="alternate"/>
<author>
<name>de Boer, Jan</name>
</author>
<author>
<name>Najian, Bahman</name>
</author>
<author>
<name>van der Heijden, Jeremy</name>
</author>
<author>
<name>Zukowski, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/163655</id>
<updated>2026-03-08T03:27:15Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Modular chaos, operator algebras, and the Berry phase
de Boer, Jan; Najian, Bahman; van der Heijden, Jeremy; Zukowski, Claire
Modular Berry transport associates a geometric phase to a zero mode ambiguity in a family of modular operators. In holographic settings, this phase was shown to encode nontrivial information about the emergent spacetime geometry. We reformulate modular Berry transport for arbitrary von Neumann algebras, including giving a precise definition of the zero mode projection in terms of a conditional expectation. For a certain class of state perturbations, we demonstrate that the modular Berry phase gives rise to an emergent symplectic form in the large N limit, extending related results in the context of subregion/subalgebra duality. We also show that the vanishing of the Berry curvature for modular scrambling modes signals the emergence of a local Poincaré algebra, which plays a key role in the quantum ergodic hierarchy. These results provide an intriguing relation between geometric phases, modular chaos and the local structure of spacetime.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Builtsphere: A Broken Geological Paradigm</title>
<link href="https://hdl.handle.net/1721.1/163654" rel="alternate"/>
<author>
<name>Parreño Alonso, Cristina</name>
</author>
<id>https://hdl.handle.net/1721.1/163654</id>
<updated>2026-03-08T03:30:40Z</updated>
<published>2022-10-07T00:00:00Z</published>
<summary type="text">The Builtsphere: A Broken Geological Paradigm
Parreño Alonso, Cristina
This essay discusses the role that architecture plays as a new geological paradigm. Similar to the way geologist Peter K. Haff conceived the technosphere as “the proliferation of technology across the globe,” this essay defines the builtsphere as the proliferation of everything built across the planet and proposes both—the technosphere and the builtsphere—as subsystems of the anthroposphere. This essay illustrates this way of thinking architecture with a pedagogical experiment developed as a design studio that takes issue with the various ways in which the builtsphere has caused the breakdown of the Earth cycles.
</summary>
<dc:date>2022-10-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Brinkmanship Game: Bargaining Under the Mutual Risk of Escalation*</title>
<link href="https://hdl.handle.net/1721.1/163653" rel="alternate"/>
<author>
<name>Haun, Phil</name>
</author>
<author>
<name>O’Hara, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163653</id>
<updated>2026-03-08T03:30:36Z</updated>
<published>2022-02-14T00:00:00Z</published>
<summary type="text">The Brinkmanship Game: Bargaining Under the Mutual Risk of Escalation*
Haun, Phil; O’Hara, Michael
This article describes a simple two-player game which illustrates basic concepts of brinkmanship, to include calculations of probability and expected outcomes, and risk-taking profiles. The game befits a single 50-minute class period with introduction, gameplay, and discussion. The game can supplement the study of conflict from classic Cold War case studies of crisis bargaining, to arms control, or negotiating international protocols for global climate change such as the Paris Agreement. The Brinkmanship Game was developed for the seventh week of a 10-week graduate course called Game Theory and Decisionmaking: Exploring Strategic Situations. The course features a flipped classroom with class time devoted to experimentation, gameplay, and discussion of readings and games; lectures are online. The Brinkmanship Game would be appropriate for students in any advanced undergraduate or graduate level course in international relations, security studies, negotiation, or game theory. The Brinkmanship Game provides an active learning opportunity that can be valuable for encouraging students to come to their own understanding of concepts of mutual risk-taking. The authors have found the game to be effective in the classroom and hope it may prove valuable to those searching for ways to motivate students and to help them learn.
</summary>
<dc:date>2022-02-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>From the square to the shopping mall: new social media, state surveillance, and the evolving geographies of urban protest</title>
<link href="https://hdl.handle.net/1721.1/163652" rel="alternate"/>
<author>
<name>Stokols, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163652</id>
<updated>2026-03-08T03:30:38Z</updated>
<published>2022-06-22T00:00:00Z</published>
<summary type="text">From the square to the shopping mall: new social media, state surveillance, and the evolving geographies of urban protest
Stokols, Andrew
Despite the rise of social media as a major factor in protests since the early 2010s, scholars have documented the continued importance of urban space and “place-based networks” for social movements. However, the 2019–2020 Hong Kong Anti-ELAB protests saw a shift from occupying symbolic public space to a more variegated use of urban spaces in the city. Combining network analysis of Telegram channels and georeferencing of protest events, this study shows how new digital media platforms such as Telegram enabled a diverse array of protest activities, as well as a shift from formal centrally located civic spaces to a wider range of everyday spaces including malls, offices, and industrial buildings. This study also asks why this occurred, situating the shifting geography of protests as a response to several factors: new social media technologies, strengthening of state surveillance of physical and digital space, and collective learning from the perceived failures of past movements. The implications of these shifts for the future of urban social movements and the “public sphere” are discussed.
</summary>
<dc:date>2022-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interdependence of driver and pedestrian behavior in naturalistic roadway negotiations</title>
<link href="https://hdl.handle.net/1721.1/163651" rel="alternate"/>
<author>
<name>Noonan, T Zach</name>
</author>
<author>
<name>Gershon, Pnina</name>
</author>
<author>
<name>Domeyer, Josh</name>
</author>
<author>
<name>Mehler, Bruce</name>
</author>
<author>
<name>Reimer, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/163651</id>
<updated>2026-03-08T03:30:51Z</updated>
<published>2022-08-26T00:00:00Z</published>
<summary type="text">Interdependence of driver and pedestrian behavior in naturalistic roadway negotiations
Noonan, T Zach; Gershon, Pnina; Domeyer, Josh; Mehler, Bruce; Reimer, Bryan
OBJECTIVE: This paper characterizes the actions of pedestrian-driver dyads by examining their interdependence across intersection types (e.g., zebra crossings, stop signs). Additionally, the analysis of interdependence captures other external factors, such as other vehicles or pedestrians, that may influence the interaction.
METHODS: A 228-epoch vehicle-pedestrian interaction dataset was extracted from a large naturalistic driving data collection effort, which included vehicle, pedestrian, and contextual information (e.g., intersection type, jaywalking, vehicle maneuver, and lead vehicle presence). An expanded Actor-Partner Interdependence Model (APIM) was used to analyze driver-pedestrian dyads using driver and pedestrian standard deviations of velocity as the independent variables and wait times as dependent variables. APIM structural equation models were augmented to include driver effects (i.e., lead vehicle and maneuver type) and pedestrian effects (i.e., lead pedestrian, crossing group size, crossing direction).
RESULTS: The level of protection afforded by an intersection had an effect on the extent of driver-pedestrian dyadic behavior. Interactions in undesignated crossings (i.e., jaywalking) were associated with interdependent behavior whereas interactions in designated crossings (i.e., crosswalks and parking lots) showed a partner effect on the driver's wait time but no significant corresponding partner effect on the pedestrian. Finally, protected intersection interactions (i.e., traffic lights and stop signs) demonstrated no significant partner effects.
CONCLUSIONS: The difference in behavior patterns associated with the intersection type and level of protection shows that context can mediate the level of negotiation required between drivers and pedestrians. These findings inform how context and driver-pedestrian interactions should be incorporated in future modeling efforts which may, ultimately, support design of automated systems that are able to interact more safely, efficiently, and socially.
</summary>
<dc:date>2022-08-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>When do systematic strategies decay?</title>
<link href="https://hdl.handle.net/1721.1/163650" rel="alternate"/>
<author>
<name>Falck, Antoine</name>
</author>
<author>
<name>Rej, Adam</name>
</author>
<author>
<name>Thesmar, David</name>
</author>
<id>https://hdl.handle.net/1721.1/163650</id>
<updated>2026-03-08T03:30:50Z</updated>
<published>2022-08-08T00:00:00Z</published>
<summary type="text">When do systematic strategies decay?
Falck, Antoine; Rej, Adam; Thesmar, David
Published anomalies evaluated outside the data sample deliver about 50% of in-sample performance.
</summary>
<dc:date>2022-08-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>When AI Is Wrong: Addressing Liability Challenges in Women’s Healthcare</title>
<link href="https://hdl.handle.net/1721.1/163649" rel="alternate"/>
<author>
<name>Marotta, Angelica</name>
</author>
<id>https://hdl.handle.net/1721.1/163649</id>
<updated>2026-03-08T03:30:43Z</updated>
<published>2022-06-20T00:00:00Z</published>
<summary type="text">When AI Is Wrong: Addressing Liability Challenges in Women’s Healthcare
Marotta, Angelica
Healthcare professionals can leverage artificial intelligence (AI) to provide better care for their patients. However, it is also necessary to consider that AI algorithms operate according to historical diagnostic data, which often include evidence gathered from men. The biases of prior practices and the perpetuation of exclusionary processes toward women can lead to inaccurate medical decisions. The ramifications of such errors show that the incorrect use of AI raises several critical questions regarding who should be responsible for potential incidents. This study aims to provide an analysis of the role of AI in affecting women’s healthcare and an overview of the liability implications caused by AI mistakes. Finally, this work presents a framework for algorithmic auditing to ensure that AI data are collected and stored according to secure, legal, and fair practices.
</summary>
<dc:date>2022-06-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toughening and Imparting Deconstructability to 3D‐Printed Glassy Thermosets with “Transferinker” Additives</title>
<link href="https://hdl.handle.net/1721.1/163646" rel="alternate"/>
<author>
<name>Qin, K Peter</name>
</author>
<author>
<name>Herzog‐Arbeitman, Abraham</name>
</author>
<author>
<name>Zou, Weizhong</name>
</author>
<author>
<name>Chakraborty, Saswata</name>
</author>
<author>
<name>Kristufek, Samantha L</name>
</author>
<author>
<name>Husted, Keith EL</name>
</author>
<author>
<name>Joly, Guy D</name>
</author>
<author>
<name>Craig, Stephen L</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<author>
<name>Johnson, Jeremiah A</name>
</author>
<id>https://hdl.handle.net/1721.1/163646</id>
<updated>2026-03-08T03:30:52Z</updated>
<published>2024-09-11T00:00:00Z</published>
<summary type="text">Toughening and Imparting Deconstructability to 3D‐Printed Glassy Thermosets with “Transferinker” Additives
Qin, K Peter; Herzog‐Arbeitman, Abraham; Zou, Weizhong; Chakraborty, Saswata; Kristufek, Samantha L; Husted, Keith EL; Joly, Guy D; Craig, Stephen L; Olsen, Bradley D; Johnson, Jeremiah A
Thermoset toughness and deconstructability are often opposing features; simultaneously improving both without sacrificing other mechanical properties (e.g., stiffness and tensile strength) is difficult, but, if achieved, could enhance the usage lifetime and end‐of‐life options for these materials. Here, a strategy that addresses this challenge in the context of photopolymer resins commonly used for 3D printing of glassy, acrylic thermosets is introduced. It is shown that incorporating bis‐acrylate “transferinkers,” which are cross‐linkers capable of undergoing degenerative chain transfer and new strand growth, as additives (5–25 mol%) into homemade or commercially available photopolymer resins leads to photopolymer thermosets with substantially improved tensile toughness and triggered chemical deconstructability with minimal impacts on Young's moduli, tensile strengths, and glass transition temperatures. These properties result from a transferinker‐driven topological transition in network structure from the densely cross‐linked long, heterogeneous primary strands of traditional photopolymer networks to more uniform, star‐like networks with few dangling ends; the latter structure more effectively bears stress yet is also more easily depercolated via solvolysis. Thus, transferinkers represent a simple and effective strategy for improving the mechanical properties of photopolymer thermosets and providing a mechanism for their triggered deconstructability.
</summary>
<dc:date>2024-09-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT.nano</title>
<link href="https://hdl.handle.net/1721.1/163645" rel="alternate"/>
<author>
<name>Bulovic, Vladimir</name>
</author>
<id>https://hdl.handle.net/1721.1/163645</id>
<updated>2025-11-14T03:11:17Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT.nano
Bulovic, Vladimir
This report contains the following sections: Catalyzing Discovery; User Base; Infrastructure, Tools, and Capabilities; Cultivating a Community; Financial Sustainability and Programs; Operational Model and Governance; and Looking Forward.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>DustNet++: Deep Learning-Based Visual Regression for Dust Density Estimation</title>
<link href="https://hdl.handle.net/1721.1/163644" rel="alternate"/>
<author>
<name>Michel, Andreas</name>
</author>
<author>
<name>Weinmann, Martin</name>
</author>
<author>
<name>Kuester, Jannick</name>
</author>
<author>
<name>AlNasser, Faisal</name>
</author>
<author>
<name>Gomez, Tomas</name>
</author>
<author>
<name>Falvey, Mark</name>
</author>
<author>
<name>Schmitz, Rainer</name>
</author>
<author>
<name>Middelmann, Wolfgang</name>
</author>
<author>
<name>Hinz, Stefan</name>
</author>
<id>https://hdl.handle.net/1721.1/163644</id>
<updated>2026-03-08T03:19:42Z</updated>
<published>2025-02-24T00:00:00Z</published>
<summary type="text">DustNet++: Deep Learning-Based Visual Regression for Dust Density Estimation
Michel, Andreas; Weinmann, Martin; Kuester, Jannick; AlNasser, Faisal; Gomez, Tomas; Falvey, Mark; Schmitz, Rainer; Middelmann, Wolfgang; Hinz, Stefan
Detecting airborne dust in standard RGB images presents significant challenges. Nevertheless, the monitoring of airborne dust holds substantial potential benefits for climate protection, environmentally sustainable construction, scientific research, and various other fields. To develop an efficient and robust algorithm for airborne dust monitoring, several hurdles have to be addressed. Airborne dust can be opaque or translucent, exhibit considerable variation in density, and possess indistinct boundaries. Moreover, distinguishing dust from other atmospheric phenomena, such as fog or clouds, can be particularly challenging. To meet the demand for a high-performing and reliable method for monitoring airborne dust, we introduce DustNet++, a neural network designed for dust density estimation. DustNet++ leverages feature maps from multiple resolution scales and semantic levels through window and grid attention mechanisms to maintain a sparse, globally effective receptive field with linear complexity. To validate our approach, we benchmark the performance of DustNet++ against existing methods from the domains of crowd counting and monocular depth estimation using the Meteodata airborne dust dataset and the URDE binary dust segmentation dataset. Our findings demonstrate that DustNet++ surpasses comparative methodologies in terms of regression and localization capabilities.
</summary>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>The first half-century of empirical capital markets research in accounting in pictures</title>
<link href="https://hdl.handle.net/1721.1/163643" rel="alternate"/>
<author>
<name>Kothari, S. P.</name>
</author>
<author>
<name>Schonberger, Bryce</name>
</author>
<author>
<name>Wasley, Charles</name>
</author>
<author>
<name>Xiao, Jason J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163643</id>
<updated>2025-11-14T03:10:09Z</updated>
<published>2025-06-07T00:00:00Z</published>
<summary type="text">The first half-century of empirical capital markets research in accounting in pictures
Kothari, S. P.; Schonberger, Bryce; Wasley, Charles; Xiao, Jason J.
Seminal papers by Ball and Brown (1968) and Beaver (1968) spawned a vast literature on the role of accounting numbers in capital markets. This literature, often referred to as capital markets research in accounting (CMRA), is now more than a half-century old. In light of numerous changes to the economic and financial reporting environments over this time, we estimate CMRA’s major relations using a comprehensive sample period. We illustrate each relation using plots, allowing us to efficiently present CMRA’s first half-century consistent with the adage “a picture is worth a thousand words.” The aims of our study are to document the extent of time-series variation in CMRA’s major relations and to provide evidence on market-level determinants of that variation. In doing so, our study provides a natural starting point for future research designed to develop and test additional causal explanations for time-series variation in the properties of CMRA’s major relations.
</summary>
<dc:date>2025-06-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>SeCOM-B: an integrated model for understanding human behaviour change in wicked socio-ecological problems</title>
<link href="https://hdl.handle.net/1721.1/163642" rel="alternate"/>
<author>
<name>Nguyen-Trung, Kien</name>
</author>
<author>
<name>Saeri, Alexander K.</name>
</author>
<author>
<name>Zhao, Kun</name>
</author>
<author>
<name>Boulet, Mark</name>
</author>
<author>
<name>Kaufman, Stefan</name>
</author>
<id>https://hdl.handle.net/1721.1/163642</id>
<updated>2026-03-08T03:29:30Z</updated>
<published>2025-09-19T00:00:00Z</published>
<summary type="text">SeCOM-B: an integrated model for understanding human behaviour change in wicked socio-ecological problems
Nguyen-Trung, Kien; Saeri, Alexander K.; Zhao, Kun; Boulet, Mark; Kaufman, Stefan
The COM-B model, widely adopted in behaviour change research, systematically explores and categorises the behavioural barriers and facilitators to inform intervention design. The model highlights that where the right mix of barriers and facilitators, in the broad categories of capability, motivation, and opportunity, exists, a given behaviour is more likely to be enacted. However, for wicked problems, applying the COM-B model becomes difficult due to complexity, uncertainty, manageability challenges, and the interpretative opacity of the systems that influence behaviour. This paper introduces a combined framework (SeCOM-B) that integrates the Socio-ecological model (SEM) and the COM-B model, highlighting its potential application in co-designing behaviour change interventions to address wicked problems, which often involve non-scientific stakeholders and interdisciplinary team members. Drawing on three case studies of practical behaviour change projects taking place in Australia (2) and Vietnam (1) between March 2022 and July 2023, the paper further illustrates the application of the SeCOM-B model in analysing the drivers and barriers of behaviours, and exploring the implications for intervention design.
</summary>
<dc:date>2025-09-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Horizontal transfer of matrix metalloproteinase genes links early animal and microbial evolution</title>
<link href="https://hdl.handle.net/1721.1/163641" rel="alternate"/>
<author>
<name>Parsons, Chris</name>
</author>
<author>
<name>Fournier, Gregory P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163641</id>
<updated>2026-03-08T03:29:35Z</updated>
<published>2025-11-05T00:00:00Z</published>
<summary type="text">Horizontal transfer of matrix metalloproteinase genes links early animal and microbial evolution
Parsons, Chris; Fournier, Gregory P.
Background The early evolution of animals is characterized by the emergence of complex tissues, organs, and integument, made possible in part by the diversification of groups of structural proteins. The abundance of this new kind of organic material in the environment would have provided novel nutrient opportunities for microbes, as part of the beginnings of animal-microbial coevolution. Indeed, a diverse ensemble of extant microbial groups appear to possess the enzymatic ability to cleave collagen, the most abundant animal-specific protein, through the use of matrix metalloproteinases (MMPs). In animals, MMPs serve to reshape the extracellular matrix in the course of development, but their prevalence in the microbial world has been largely overlooked. Results MMPs have extensive diversity in Bacteria, Eumetazoa, and Streptophyta. We show that in marine metagenomes, MMP abundance is highly correlated with chitinase abundance, implying that even microbial MMPs are associated with animal-derived substrates. Reconstructing the phylogeny of MMP proteins reveals a history of rapid diversification, as well as multiple interkingdom and interdomain horizontal gene transfers. Included among these is a transfer to the ancestral lineage of the archaeal family Methanosarcinaceae, constraining this group to postdate the evolution of collagen, and therefore animal diversification. Conclusions MMPs have an unusual genetic history, marked by multiple instances of gene transfer between bacteria and multicellular eukaryotes, a smoking gun for some of the earliest coevolution between prokaryotes and metazoans. By calculating an end-Permian divergence of Methanosarcina, we demonstrate that the phylogenies of substrate-specific enzymes can provide valuable older-bound age calibrations for improving molecular clock age estimates across the Tree of Life.
</summary>
<dc:date>2025-11-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>New physics versus quenching factors in Coherent Neutrino Scattering</title>
<link href="https://hdl.handle.net/1721.1/163640" rel="alternate"/>
<author>
<name>Li, Yulun</name>
</author>
<author>
<name>Herrera, Gonzalo</name>
</author>
<author>
<name>Huber, Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163640</id>
<updated>2026-03-08T03:29:34Z</updated>
<published>2025-11-05T00:00:00Z</published>
<summary type="text">New physics versus quenching factors in Coherent Neutrino Scattering
Li, Yulun; Herrera, Gonzalo; Huber, Patrick
Recent results on Coherent Elastic Neutrino-Nucleus Scattering (CEνNS) on germanium present significant discrepancies among experiments. We perform a combined analysis of the Dresden-II, CONUS+ and COHERENT data, quantifying the impact of quenching factor uncertainties on their CEνNS cross section measurements. No choice of quenching factor can bring these three data sets into mutual agreement, whereas the combination of COHERENT with either Dresden-II or CONUS+ agrees well, albeit with very different quenching factors. We further study the dependence of these experiments' sensitivity to a large neutrino magnetic moment on the quenching factor, finding that the constraints can vary by up to an order of magnitude. Our work highlights the importance of reducing the uncertainty on quenching factors in order to probe new physics from neutrinos at the low-energy frontier.
</summary>
<dc:date>2025-11-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Skydiving to bootstrap islands</title>
<link href="https://hdl.handle.net/1721.1/163639" rel="alternate"/>
<author>
<name>Liu, Aike</name>
</author>
<author>
<name>Simmons-Duffin, David</name>
</author>
<author>
<name>Su, Ning</name>
</author>
<author>
<name>van Rees, Balt C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163639</id>
<updated>2026-03-08T03:29:33Z</updated>
<published>2025-11-04T00:00:00Z</published>
<summary type="text">Skydiving to bootstrap islands
Liu, Aike; Simmons-Duffin, David; Su, Ning; van Rees, Balt C.
We study families of semidefinite programs (SDPs) that depend nonlinearly on a small number of “external” parameters. Such families appear universally in numerical bootstrap computations. The traditional method for finding an optimal point in parameter space works by first solving an SDP with fixed external parameters, then moving to a new point in parameter space and repeating the process. Instead, we unify solving the SDP and moving in parameter space in a single algorithm that we call “skydiving”. We test skydiving on some representative problems in the conformal bootstrap, finding significant speedups compared to traditional methods.
</summary>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for top squarks in final states with many light-flavor jets and 0, 1, or 2 charged leptons in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163638" rel="alternate"/>
<author>
<name>Chekhovsky, V.</name>
</author>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>The CMS collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163638</id>
<updated>2026-03-08T03:29:31Z</updated>
<published>2025-10-29T00:00:00Z</published>
<summary type="text">Search for top squarks in final states with many light-flavor jets and 0, 1, or 2 charged leptons in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; The CMS collaboration
Several new physics models including versions of supersymmetry (SUSY) characterized by R-parity violation (RPV) or with additional hidden sectors predict the production of events with top quarks, low missing transverse momentum, and many additional quarks or gluons. The results of a search for top squarks decaying to two top quarks and six additional light-flavor quarks or gluons are reported. The search employs a novel machine learning method for background estimation from control samples in data using decorrelated discriminators. The search is performed using events with 0, 1, or 2 electrons or muons in conjunction with at least six jets. No requirement is placed on the magnitude of the missing transverse momentum. The result is based on a sample of proton-proton collisions at $$\sqrt{s}=13$$ TeV corresponding to 138 fb−1 of integrated luminosity collected with the CMS detector at the LHC in 2016–2018. With no statistically significant excess of events observed beyond the expected contributions from the standard model, the data are used to determine upper limits on the top squark pair production cross section in the frameworks of RPV and stealth SUSY. Models with top squark masses less than 700 (930) GeV are excluded at 95% confidence level for RPV (stealth) SUSY scenarios.
</summary>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of same-sign W boson scattering and anomalous couplings in events with one tau lepton from pp collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163637" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<author>
<name>Schwarz, D.</name>
</author>
<author>
<name>Sonawane, M.</name>
</author>
<author>
<name>The CMS collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163637</id>
<updated>2026-03-08T03:29:30Z</updated>
<published>2025-10-27T00:00:00Z</published>
<summary type="text">Study of same-sign W boson scattering and anomalous couplings in events with one tau lepton from pp collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.; The CMS collaboration
A first study is presented of the cross section for the scattering of same-sign W boson pairs via the detection of a τ lepton. The data from proton-proton collisions at the center-of-mass energy of 13 TeV were collected by the CMS detector at the LHC, and correspond to an integrated luminosity of 138 fb−1. Events were selected that contain two jets with large pseudorapidity and large invariant mass, one τ lepton, one light lepton (e or μ), and significant missing transverse momentum. The measured cross section for electroweak same-sign WW scattering is $${1.44}_{-0.56}^{+0.63}$$ times the standard model prediction. In addition, a search is presented for the indirect effects of processes beyond the standard model via the effective field theory framework, in terms of dimension-6 and dimension-8 operators.
</summary>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for dark matter produced in association with a Higgs boson decaying to a τ lepton pair in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163636" rel="alternate"/>
<author>
<name>Chekhovsky, V.</name>
</author>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>The CMS Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163636</id>
<updated>2026-03-08T03:29:29Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">Search for dark matter produced in association with a Higgs boson decaying to a τ lepton pair in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; The CMS Collaboration
A search for dark matter particles produced in association with a Higgs boson decaying into a pair of τ leptons is performed using data collected in proton-proton collisions at a center-of-mass energy of 13 TeV with the CMS detector. The analysis is based on a data set corresponding to an integrated luminosity of 101 fb−1 collected in 2017–2018. No significant excess over the expected standard model background is observed. This result is interpreted within the frameworks of the 2HDM+a and baryonic Z′ benchmark simplified models. The 2HDM+a model is a type-II two-Higgs-doublet model featuring a heavy pseudoscalar with an additional light pseudoscalar. Upper limits at 95% confidence level are set on the product of the production cross section and the branching fraction for each of these two simplified models. Heavy pseudoscalar boson masses between 400 and 700 GeV are excluded for a light pseudoscalar mass of 100 GeV. For the baryonic Z′ model, a statistical combination is made with an earlier search based on a data set of 36 fb−1 collected in 2016. In this model, Z′ boson masses up to 1050 GeV are excluded for a dark matter particle mass of 1 GeV.
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Massively parallel enrichment of low-frequency alleles enables duplex sequencing at low depth</title>
<link href="https://hdl.handle.net/1721.1/163635" rel="alternate"/>
<author>
<name>Gydush, Gregory</name>
</author>
<author>
<name>Nguyen, Erica</name>
</author>
<author>
<name>Bae, Jin H</name>
</author>
<author>
<name>Blewett, Timothy</name>
</author>
<author>
<name>Rhoades, Justin</name>
</author>
<author>
<name>Reed, Sarah C</name>
</author>
<author>
<name>Shea, Douglas</name>
</author>
<author>
<name>Xiong, Kan</name>
</author>
<author>
<name>Liu, Ruolin</name>
</author>
<author>
<name>Yu, Fangyan</name>
</author>
<author>
<name>Leong, Ka Wai</name>
</author>
<author>
<name>Choudhury, Atish D</name>
</author>
<author>
<name>Stover, Daniel G</name>
</author>
<author>
<name>Tolaney, Sara M</name>
</author>
<author>
<name>Krop, Ian E</name>
</author>
<author>
<name>Christopher Love, J</name>
</author>
<author>
<name>Parsons, Heather A</name>
</author>
<author>
<name>Mike Makrigiorgos, G</name>
</author>
<author>
<name>Golub, Todd R</name>
</author>
<author>
<name>Adalsteinsson, Viktor A</name>
</author>
<id>https://hdl.handle.net/1721.1/163635</id>
<updated>2026-03-08T03:30:50Z</updated>
<published>2022-03-17T00:00:00Z</published>
<summary type="text">Massively parallel enrichment of low-frequency alleles enables duplex sequencing at low depth
Gydush, Gregory; Nguyen, Erica; Bae, Jin H; Blewett, Timothy; Rhoades, Justin; Reed, Sarah C; Shea, Douglas; Xiong, Kan; Liu, Ruolin; Yu, Fangyan; Leong, Ka Wai; Choudhury, Atish D; Stover, Daniel G; Tolaney, Sara M; Krop, Ian E; Christopher Love, J; Parsons, Heather A; Mike Makrigiorgos, G; Golub, Todd R; Adalsteinsson, Viktor A
Assaying for large numbers of low-frequency mutations requires sequencing at extremely high depth and accuracy. Increasing sequencing depth aids the detection of low-frequency mutations yet limits the number of loci that can be simultaneously probed. Here we report a method for the accurate tracking of thousands of distinct mutations that requires substantially fewer reads per locus than conventional hybrid-capture duplex sequencing. The method, which we named MAESTRO (for minor-allele-enriched sequencing through recognition oligonucleotides), combines massively parallel mutation enrichment with duplex sequencing to track up to 10,000 low-frequency mutations, with up to 100-fold fewer reads per locus. We show that MAESTRO can be used to test for chimaerism by tracking donor-exclusive single-nucleotide polymorphisms in sheared genomic DNA from human cell lines, to validate whole-exome sequencing and whole-genome sequencing for the detection of mutations in breast-tumour samples from 16 patients, and to monitor the patients for minimal residual disease via the analysis of cell-free DNA from liquid biopsies. MAESTRO improves the breadth, depth, accuracy and efficiency of mutation testing by sequencing.
</summary>
<dc:date>2022-03-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peanut oral immunotherapy differentially suppresses clonally distinct subsets of T helper cells</title>
<link href="https://hdl.handle.net/1721.1/163634" rel="alternate"/>
<author>
<name>Monian, Brinda</name>
</author>
<author>
<name>Tu, Ang A</name>
</author>
<author>
<name>Ruiter, Bert</name>
</author>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Petrossian, Patrick M</name>
</author>
<author>
<name>Smith, Neal P</name>
</author>
<author>
<name>Gierahn, Todd M</name>
</author>
<author>
<name>Ginder, Julia H</name>
</author>
<author>
<name>Shreffler, Wayne G</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163634</id>
<updated>2026-03-08T03:30:39Z</updated>
<published>2021-11-23T00:00:00Z</published>
<summary type="text">Peanut oral immunotherapy differentially suppresses clonally distinct subsets of T helper cells
Monian, Brinda; Tu, Ang A; Ruiter, Bert; Morgan, Duncan M; Petrossian, Patrick M; Smith, Neal P; Gierahn, Todd M; Ginder, Julia H; Shreffler, Wayne G; Love, J Christopher
Food allergy affects an estimated 8% of children in the United States. Oral immunotherapy (OIT) is a recently approved treatment, with outcomes ranging from sustained tolerance to food allergens to no apparent benefit. The immunological underpinnings that influence clinical outcomes of OIT remain largely unresolved. Using single-cell RNA-Seq and paired T cell receptor α/β (TCRα/β) sequencing, we assessed the transcriptomes of CD154+ and CD137+ peanut-reactive T helper (Th) cells from 12 patients with peanut allergy longitudinally throughout OIT. We observed expanded populations of cells expressing Th1, Th2, and Th17 signatures that further separated into 6 clonally distinct subsets. Four of these subsets demonstrated a convergence of TCR sequences, suggesting antigen-driven T cell fates. Over the course of OIT, we observed suppression of Th2 and Th1 gene signatures in effector clonotypes but not T follicular helper-like (Tfh-like) clonotypes. Positive outcomes were associated with stronger suppression of Th2 signatures in Th2A-like cells, while treatment failure was associated with the expression of baseline inflammatory gene signatures that were present in Th1 and Th17 cell populations and unmodulated by OIT. These results demonstrate that differential clinical responses to OIT are associated with both preexisting characteristics of peanut-reactive CD4+ T cells and suppression of a subset of Th2 cells.
</summary>
<dc:date>2021-11-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitochondrial variant enrichment from high-throughput single-cell RNA sequencing resolves clonal populations</title>
<link href="https://hdl.handle.net/1721.1/163633" rel="alternate"/>
<author>
<name>Miller, Tyler E</name>
</author>
<author>
<name>Lareau, Caleb A</name>
</author>
<author>
<name>Verga, Julia A</name>
</author>
<author>
<name>DePasquale, Erica AK</name>
</author>
<author>
<name>Liu, Vincent</name>
</author>
<author>
<name>Ssozi, Daniel</name>
</author>
<author>
<name>Sandor, Katalin</name>
</author>
<author>
<name>Yin, Yajie</name>
</author>
<author>
<name>Ludwig, Leif S</name>
</author>
<author>
<name>El Farran, Chadi A</name>
</author>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Satpathy, Ansuman T</name>
</author>
<author>
<name>Griffin, Gabriel K</name>
</author>
<author>
<name>Lane, Andrew A</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Bernstein, Bradley E</name>
</author>
<author>
<name>Sankaran, Vijay G</name>
</author>
<author>
<name>van Galen, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/163633</id>
<updated>2026-03-08T03:30:47Z</updated>
<published>2022-02-24T00:00:00Z</published>
<summary type="text">Mitochondrial variant enrichment from high-throughput single-cell RNA sequencing resolves clonal populations
Miller, Tyler E; Lareau, Caleb A; Verga, Julia A; DePasquale, Erica AK; Liu, Vincent; Ssozi, Daniel; Sandor, Katalin; Yin, Yajie; Ludwig, Leif S; El Farran, Chadi A; Morgan, Duncan M; Satpathy, Ansuman T; Griffin, Gabriel K; Lane, Andrew A; Love, J Christopher; Bernstein, Bradley E; Sankaran, Vijay G; van Galen, Peter
The combination of single-cell transcriptomics with mitochondrial DNA variant detection can be used to establish lineage relationships in primary human cells, but current methods are not scalable to interrogate complex tissues. Here, we combine common 3′ single-cell RNA-sequencing protocols with mitochondrial transcriptome enrichment to increase coverage by more than 50-fold, enabling high-confidence mutation detection. The method successfully identifies skewed immune-cell expansions in primary human clonal hematopoiesis.
</summary>
<dc:date>2022-02-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>SARS-CoV-2 receptor binding domain displayed on HBsAg virus–like particles elicits protective immunity in macaques</title>
<link href="https://hdl.handle.net/1721.1/163632" rel="alternate"/>
<author>
<name/>
</author>
<id>https://hdl.handle.net/1721.1/163632</id>
<updated>2026-03-08T03:30:47Z</updated>
<published>2022-03-16T00:00:00Z</published>
<summary type="text">SARS-CoV-2 receptor binding domain displayed on HBsAg virus–like particles elicits protective immunity in macaques
Authorized vaccines against SARS-CoV-2 remain less available in low- and middle-income countries due to insufficient supply, high costs, and storage requirements. Global immunity could still benefit from new vaccines using widely available, safe adjuvants, such as alum and protein subunits, suited to low-cost production in existing manufacturing facilities. Here, a clinical-stage vaccine candidate comprising a SARS-CoV-2 receptor binding domain–hepatitis B surface antigen virus–like particle elicited protective immunity in cynomolgus macaques. Titers of neutralizing antibodies (&gt;104) induced by this candidate were above the range of protection for other licensed vaccines in nonhuman primates. Including CpG 1018 did not significantly improve the immunological responses. Vaccinated animals challenged with SARS-CoV-2 showed reduced median viral loads in bronchoalveolar lavage (~3.4 log10) and nasal mucosa (~2.9 log10) versus sham controls. These data support the potential benefit of this design for a low-cost modular vaccine platform for SARS-CoV-2 and other variants of concern or betacoronaviruses.
</summary>
<dc:date>2022-03-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Searching for exotic scalars at fusion reactors</title>
<link href="https://hdl.handle.net/1721.1/163631" rel="alternate"/>
<author>
<name>Baruch, Chaja</name>
</author>
<author>
<name>Fitzpatrick, Patrick J.</name>
</author>
<author>
<name>Menzo, Tony</name>
</author>
<author>
<name>Soreq, Yotam</name>
</author>
<author>
<name>Trifinopoulos, Sokratis</name>
</author>
<author>
<name>Zupan, Jure</name>
</author>
<id>https://hdl.handle.net/1721.1/163631</id>
<updated>2026-03-08T03:29:22Z</updated>
<published>2025-10-27T00:00:00Z</published>
<summary type="text">Searching for exotic scalars at fusion reactors
Baruch, Chaja; Fitzpatrick, Patrick J.; Menzo, Tony; Soreq, Yotam; Trifinopoulos, Sokratis; Zupan, Jure
Part of the energy created in deuterium-tritium fusion reactors is carried away from the plasma by a high-intensity neutron flux, which is then absorbed by the reactor’s inner walls. The neutron flux can be used to sustain the reaction by the following mechanism: the walls are coated with lithium-rich breeding blankets, in which a fraction of the neutrons interacts with lithium, creating tritium, which can, in turn, be used as fuel for the main reaction. The interactions of neutrons with the materials within the breeding blanket can also result in the production of dark sector particles, i.e., feebly interacting light scalars or pseudoscalars, via nuclear transitions. We estimate the potential size of such a dark sector flux outside the reactor and consider possible detection methods at current and future thermonuclear fusion reactors. In our analysis, we take into account all other current bounds, also recasting the SNO axion bound for a CP-even scalar. We find that year-long searches at current and future reactors can set leading constraints on dark scalar- and dark pseudoscalar-nucleon couplings.
</summary>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurements of charmed meson and antimeson production asymmetries at √s = 13.6 TeV</title>
<link href="https://hdl.handle.net/1721.1/163630" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>The LHCb collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163630</id>
<updated>2026-03-08T03:29:26Z</updated>
<published>2025-10-07T00:00:00Z</published>
<summary type="text">Measurements of charmed meson and antimeson production asymmetries at √s = 13.6 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb collaboration
This article presents doubly differential measurements of the asymmetries in production rates between mesons containing a charm quark and those containing an anti-charm quark in proton-proton collisions at a centre-of-mass energy of √s = 13.6 TeV, using data recorded by the LHCb experiment. The asymmetries of D⁰, D⁺ and Ds⁺ mesons are measured in two-dimensional intervals of transverse momentum and pseudorapidity, within the range 2.5 &lt; pT &lt; 25.0 GeV/c and 2.0 &lt; η &lt; 4.5. No significant production asymmetries are observed. Comparisons to the Pythia 8 and Herwig 7 event generators are also presented, and their agreement with the data is evaluated. These are the first measurements of production asymmetries at this centre-of-mass energy of colliding beams, and the first with the LHCb Run 3 detector.
</summary>
<dc:date>2025-10-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban Planning for Health Equity Must Employ an Intersectionality Framework</title>
<link href="https://hdl.handle.net/1721.1/163629" rel="alternate"/>
<author>
<name>Williams, Patrice C</name>
</author>
<author>
<name>Binet, Andrew</name>
</author>
<author>
<name>Alhasan, Dana M</name>
</author>
<author>
<name>Riley, Nyree M</name>
</author>
<author>
<name>Jackson, Chandra L</name>
</author>
<id>https://hdl.handle.net/1721.1/163629</id>
<updated>2026-03-08T03:30:37Z</updated>
<published>2022-07-12T00:00:00Z</published>
<summary type="text">Urban Planning for Health Equity Must Employ an Intersectionality Framework
Williams, Patrice C; Binet, Andrew; Alhasan, Dana M; Riley, Nyree M; Jackson, Chandra L
Urban planning for health equity should be guided by an intersectional approach. Intersectionality is an essential framework for understanding the multiple overlapping factors, such as social and economic inequalities, that produce health disparities. We offer four strategies that planning researchers and practitioners can use to develop and integrate an intersectional approach into planning for health equity: challenging implicit and explicit assumptions, building cross-sectoral coalitions united by a shared vision for social and environmental justice, applying transdisciplinary and co-design approaches throughout the planning process, and using existing tools to evaluate the impact of programs and policies on advancing health equity.
</summary>
<dc:date>2022-07-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing driver speeding behavior when using partial-automation in real-world driving</title>
<link href="https://hdl.handle.net/1721.1/163628" rel="alternate"/>
<author>
<name>Haus, Samantha H</name>
</author>
<author>
<name>Gershon, Pnina</name>
</author>
<author>
<name>Mehler, Bruce</name>
</author>
<author>
<name>Reimer, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/163628</id>
<updated>2026-03-08T03:30:42Z</updated>
<published>2022-07-12T00:00:00Z</published>
<summary type="text">Characterizing driver speeding behavior when using partial-automation in real-world driving
Haus, Samantha H; Gershon, Pnina; Mehler, Bruce; Reimer, Bryan
Objective: Speeding is a prevalent and complex risky behavior that can be affected by many factors. Understanding how drivers speed is important for developing countermeasures, especially as new automation features emerge. The current study seeks to identify and describe types of real-world speeding behaviors with and without the use of partial automation.
Methods: This study used a combination of supervised and unsupervised data analysis techniques to assess relevant factors in real-world speeding epochs, extracted from the MIT Advanced Vehicle Technology Naturalistic Driving Study, and classified them into distinct speeding behaviors. Speeding epochs were defined as traveling at least 5 mph over the speed limit for a minimum duration of 3 s. Vehicle speed-exceedance profiles were characterized over time using Dynamic Time Warping and included in multivariate models that evaluated the associations between different features of the speeding epochs, such as speeding duration and magnitude. Finally, the identified features were used to cluster speeding behaviors using the Gower dissimilarity measure.
Results: The analysis yielded four types of behaviors in both partially automated and manual driving: (i) Incidental speeding (low duration, low magnitude), (ii) Moderate speeding (low duration, moderate magnitude), (iii) Elevated speeding (moderate duration, high magnitude), and (iv) Extended speeding (long duration, high magnitude). When comparing the behaviors with and without partial-automation use, both Incidental and Moderate speeding were found to have significantly longer durations with partial automation than with manual driving. Elevated speeding was found to be more prevalent and associated with higher magnitudes during manual than during partially automated driving. Finally, although Extended speeding was more prevalent during automation use, it was associated with a lower mean and maximum speed magnitude compared to Extended speeding during manual driving.
Conclusions: This work highlights the variability in speeding behavior between and within partially automated and manual driving. The design of systems that mitigate risky speeding behaviors should consider targeting the divergent behaviors observed between manual and automated driving as a mechanism to reduce the prevalence of the different behaviors associated with each state.
</summary>
<dc:date>2022-07-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Validation and Uncertainty Quantification of Transient Reflood Models Using COBRA-TF and Machine Learning Techniques Based on the NRC/PSU RBHT Benchmark</title>
<link href="https://hdl.handle.net/1721.1/163627" rel="alternate"/>
<author>
<name>Jin, Yue</name>
</author>
<author>
<name>Bajorek, Stephen M</name>
</author>
<author>
<name>Cheung, Fan-Bill</name>
</author>
<id>https://hdl.handle.net/1721.1/163627</id>
<updated>2026-03-08T03:30:45Z</updated>
<published>2022-07-28T00:00:00Z</published>
<summary type="text">Validation and Uncertainty Quantification of Transient Reflood Models Using COBRA-TF and Machine Learning Techniques Based on the NRC/PSU RBHT Benchmark
Jin, Yue; Bajorek, Stephen M; Cheung, Fan-Bill
The accurate prediction of fluid flow, mass, and heat transfer processes, as well as the system response during reflood transients, has long been a critical and challenging issue for reactor system safety analyses. Accurate characterization of the flow and energy transport can also significantly facilitate various system and component design and optimization tasks. In the current study, based on the U.S. Nuclear Regulatory Commission/Pennsylvania State University Rod Bundle Heat Transfer (RBHT) reflood experimental data, a comprehensive uncertainty analysis framework is developed using DAKOTA. The developed framework is used to perform an in-depth reflood model validation and verification for the subchannel analysis code COBRA-TF. In addition, an artificial intelligence (AI)–based machine learning (ML) model for rod cladding temperature prediction during reflood is developed and evaluated using the same framework. Key input parameters for reflood thermal-hydraulic prediction include the system pressure, inlet liquid temperature/enthalpy, inlet mass flow rate, and average bundle power input. The figure of merit under consideration is the peak cladding temperature variation. The current study finds that, while further model improvement is needed, COBRA-TF predicts the correct parametric trends when compared with the RBHT data. By contrast, the pure AI-based ML models struggle to correctly reflect the parametric trends. Suggestions for future ML model development are provided at the end.
</summary>
<dc:date>2022-07-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote language revitalisation efforts during COVID-19</title>
<link href="https://hdl.handle.net/1721.1/163626" rel="alternate"/>
<author>
<name>Wiley-Camacho, Grahm</name>
</author>
<author>
<name>Hillaire, Garron</name>
</author>
<author>
<name>Buttimer, Christopher J</name>
</author>
<author>
<name>Colwell, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/163626</id>
<updated>2026-03-08T03:30:35Z</updated>
<published>2022-06-13T00:00:00Z</published>
<summary type="text">Remote language revitalisation efforts during COVID-19
Wiley-Camacho, Grahm; Hillaire, Garron; Buttimer, Christopher J; Colwell, Richard
As schools shift to online instruction during the COVID-19 pandemic, it is important to support disenfranchised populations and keep issues of equity at the centre of our response. In this study, the authors focus on supporting one of the few urban-based Indigenous language schools in the United States because language revitalisation is critical for Native American communities. The authors explore the extent to which video conferencing and flipped classrooms support the development of a community of speakers. The study focuses on a single classroom of 16 students in first through third grade. The authors use a digital decolonisation framework focused on empowering local communities in conjunction with design-based research methodology to explore contextualised remote instruction solutions. They report on benefits for the development of a community of speakers from remote instruction that come with costs in reduced efficacy of language learning. Finally, they distil those results into preliminary design principles.
</summary>
<dc:date>2022-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cohomogeneity Two Ricci Solitons with Sub-Euclidean Volume</title>
<link href="https://hdl.handle.net/1721.1/163625" rel="alternate"/>
<author>
<name>Firester, Benjy</name>
</author>
<author>
<name>Tsiamis, Raphael</name>
</author>
<id>https://hdl.handle.net/1721.1/163625</id>
<updated>2026-03-08T03:29:24Z</updated>
<published>2025-10-27T00:00:00Z</published>
<summary type="text">Cohomogeneity Two Ricci Solitons with Sub-Euclidean Volume
Firester, Benjy; Tsiamis, Raphael
We introduce new families of four-dimensional Ricci solitons of cohomogeneity two with volume collapsing ends. In a local presentation of the metric conformal to a product, we reduce the soliton equation to a degenerate Monge-Ampère equation for the conformal factor coupled with ODEs. We obtain explicit complete expanding solitons as well as abstract existence results for shrinking and steady solitons with boundary. These families of Ricci solitons specialize to classical examples of Einstein and soliton metrics. We also classify local solutions of this Monge-Ampère equation to prove rigidity for these solitons.
</summary>
<dc:date>2025-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Many Sexes? How Many Genders?</title>
<link href="https://hdl.handle.net/1721.1/163624" rel="alternate"/>
<author>
<name>Byrne, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/163624</id>
<updated>2026-03-08T03:26:32Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">How Many Sexes? How Many Genders?
Byrne, Alex
The British philosopher and public intellectual C. E. M. Joad was a regular panelist on the BBC radio show The Brains Trust during and after the Second World War. He often began an answer to listeners’ questions with his catchphrase “It all depends what you mean by…,” which caught on throughout the country (Ayto &amp; Crofton, 2011). If any question deserves Joad’s catchphrase, it is “How many genders?”
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Is There Super-Normal Profit in Real Estate Development?*</title>
<link href="https://hdl.handle.net/1721.1/163623" rel="alternate"/>
<author>
<name>Geltner, David</name>
</author>
<author>
<name>Kumar, Anil</name>
</author>
<author>
<name>Van de Minne, Alex M</name>
</author>
<id>https://hdl.handle.net/1721.1/163623</id>
<updated>2026-03-08T03:30:46Z</updated>
<published>2022-07-11T00:00:00Z</published>
<summary type="text">Is There Super-Normal Profit in Real Estate Development?*
Geltner, David; Kumar, Anil; Van de Minne, Alex M
This paper explores whether real estate development (RED) projects systematically present positive net present value (NPV) and therefore provide super-normal profit. Such projects are the products of a business operation that governs the exercise of the real call option on development represented by developable land. We present a framework for considering super-normal profit in the RED industry, and in light of that framework we examine RED projects produced by publicly traded equity real estate investment trusts (REITs). We find strong evidence of a positive correlation between REITs’ Tobin’s Q ratios, indicative of positive NPV, and the ratio of development assets to total assets in the firm, controlling for other factors. The nature of the firm’s Tobin’s Q metric is such that the implied added firm value is net of land cost and net of the overhead and search costs associated with the RED business operation. While our findings do not prove a direction of causality between REITs’ RED activity and positive NPV, the robust positive correlation, controlling for other factors, raises interesting implications which are discussed in the paper.
</summary>
<dc:date>2022-07-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Countervailing Effects of Extreme Maximum and Minimum Temperature Days on Conflict in Mainland Southeast Asia</title>
<link href="https://hdl.handle.net/1721.1/163622" rel="alternate"/>
<author>
<name>Gasser, André Tashi</name>
</author>
<author>
<name>Lanz, Bruno</name>
</author>
<id>https://hdl.handle.net/1721.1/163622</id>
<updated>2026-03-08T03:29:28Z</updated>
<published>2025-11-03T00:00:00Z</published>
<summary type="text">Countervailing Effects of Extreme Maximum and Minimum Temperature Days on Conflict in Mainland Southeast Asia
Gasser, André Tashi; Lanz, Bruno
We exploit 0.5° × 0.5° raster data to document how exceedances of the local 90th-percentile thresholds for daily maximum and minimum temperatures affect conflict in mainland Southeast Asia. We show that conflict incidence increases with extreme high maximum temperature days and decreases with extreme high minimum temperature days. This implies that failing to control for extreme minimums understates the effects of extreme maximums. Moreover, as the frequency of extreme maximums and minimums is expected to increase together with average temperatures, the countervailing effects at both tails of the temperature distribution offset one another in mean-temperature regressions, helping to explain earlier inconclusive findings for the region. We also show that the effects of extreme maximums and minimums differ by conflict type, actors involved and affected populations. Thus, even in the absence of an aggregate mean-temperature effect, a rising frequency of extreme temperature days may generate complex distributional conflict incidence.
</summary>
<dc:date>2025-11-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Optimization-Based Construction Procedure for Function Space-Based Summation-by-Parts Operators on Arbitrary Grids</title>
<link href="https://hdl.handle.net/1721.1/163621" rel="alternate"/>
<author>
<name>Glaubitz, Jan</name>
</author>
<author>
<name>Nordström, Jan</name>
</author>
<author>
<name>Öffner, Philipp</name>
</author>
<id>https://hdl.handle.net/1721.1/163621</id>
<updated>2026-03-08T03:29:27Z</updated>
<published>2025-11-06T00:00:00Z</published>
<summary type="text">An Optimization-Based Construction Procedure for Function Space-Based Summation-by-Parts Operators on Arbitrary Grids
Glaubitz, Jan; Nordström, Jan; Öffner, Philipp
We introduce a novel construction procedure for one-dimensional function space summation-by-parts (FSBP) operators. Existing construction procedures for FSBP operators of the form D = P⁻¹Q proceed as follows: given a boundary operator B, the norm matrix P is first determined, and then, in a second step, the complementary matrix Q is calculated to obtain the FSBP operator D. In contrast, the approach proposed here determines the norm and complementary matrices, P and Q, simultaneously by solving an optimization problem. The proposed construction procedure applies to classical summation-by-parts (SBP) operators based on polynomial approximation as well as to the broader class of FSBP operators. In our experiments, the presented approach yields a numerically stable construction procedure and FSBP operators with higher accuracy at the boundaries for diagonal-norm difference operators than the traditional approach. Through numerical simulations, we highlight the advantages of the proposed technique.
</summary>
<dc:date>2025-11-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations</title>
<link href="https://hdl.handle.net/1721.1/163620" rel="alternate"/>
<author>
<name>Lowe, Matthew X</name>
</author>
<author>
<name>Mohsenzadeh, Yalda</name>
</author>
<author>
<name>Lahner, Benjamin</name>
</author>
<author>
<name>Charest, Ian</name>
</author>
<author>
<name>Oliva, Aude</name>
</author>
<author>
<name>Teng, Santani</name>
</author>
<id>https://hdl.handle.net/1721.1/163620</id>
<updated>2026-03-08T03:30:48Z</updated>
<published>2022-06-21T00:00:00Z</published>
<summary type="text">Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations
Lowe, Matthew X; Mohsenzadeh, Yalda; Lahner, Benjamin; Charest, Ian; Oliva, Aude; Teng, Santani
How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.
</summary>
<dc:date>2022-06-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rapid and automated alloy design with graph neural network-powered large language model-driven multi-agent AI</title>
<link href="https://hdl.handle.net/1721.1/163619" rel="alternate"/>
<author>
<name>Ghafarollahi, Alireza</name>
</author>
<author>
<name>Buehler, Markus J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163619</id>
<updated>2026-03-08T03:29:32Z</updated>
<published>2025-11-06T00:00:00Z</published>
<summary type="text">Rapid and automated alloy design with graph neural network-powered large language model-driven multi-agent AI
Ghafarollahi, Alireza; Buehler, Markus J.
A multi-agent artificial intelligence (AI) model is developed to automate the discovery of new metallic alloys, integrating multimodal data and external knowledge, including insights from physics via atomistic simulations. The system consists of (a) large language models (LLMs) for tasks such as reasoning and planning, (b) AI agents with distinct roles collaborating dynamically, and (c) a newly developed graph neural network (GNN) model for rapid retrieval of physical properties. We chose the ternary NbMoTa body-centered-cubic alloy as our model system and developed the GNN to predict two fundamental materials properties: the Peierls barrier and the solute/screw dislocation interaction energy. Our GNN model efficiently predicts these properties, reducing reliance on costly brute-force calculations and alleviating the computational demands on the multi-agent system. By combining the predictive capabilities of GNNs with the collaborative intelligence of LLM-driven reasoning agents, the system autonomously explores vast alloy design spaces, identifies trends in atomic-scale properties, and predicts macroscale mechanical strength, as demonstrated by several computational experiments. This synergistic approach accelerates the discovery of advanced alloys and holds promise for broader applications in other complex systems, marking a step forward in automated materials discovery and design.
Impact statement: Traditional deep learning models, such as graph neural networks and convolutional neural networks, operate within the confines of their training data sets, making single-step inferences for regression or classification. Our work introduces a multi-agent strategy that transcends these limitations by integrating deep learning with reasoning and decision-making capabilities. This intelligent system actively interprets results, determines subsequent actions, and iteratively refines predictions, accelerating the materials design process.
We demonstrate its effectiveness in exploring the vast compositional space of a ternary alloy, where the model dynamically solicits data, analyzes trends, generates visualizations, and derives insights into materials behavior. By enabling accurate predictions of key alloy characteristics, our approach advances the discovery of novel metallic systems and underscores the critical role of solid-solution alloying. More broadly, it represents a major step toward integrating artificial intelligence with scientific reasoning, moving closer to artificial general intelligence in engineering. This paradigm shift has profound implications for materials science, enabling more efficient, autonomous, and intelligent exploration of complex materials spaces.
</summary>
<dc:date>2025-11-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Latent Space Alignment Using Adversarially Guided Self-Play</title>
<link href="https://hdl.handle.net/1721.1/163618" rel="alternate"/>
<author>
<name>Tucker, Mycal</name>
</author>
<author>
<name>Zhou, Yilun</name>
</author>
<author>
<name>Shah, Julie A</name>
</author>
<id>https://hdl.handle.net/1721.1/163618</id>
<updated>2026-03-08T03:30:44Z</updated>
<published>2022-08-26T00:00:00Z</published>
<summary type="text">Latent Space Alignment Using Adversarially Guided Self-Play
Tucker, Mycal; Zhou, Yilun; Shah, Julie A
We envision a world in which robots serve as capable partners in heterogeneous teams composed of other robots or humans. A crucial step towards such a world is enabling robots to learn to use the same representations as their partners; with a shared representation scheme, information may be passed among teammates. We define the problem of learning a fixed partner’s representation scheme as that of latent space alignment and propose metrics for evaluating the quality of alignment. While techniques from prior art in other fields may be applied to the latent space alignment problem, they often require interaction with partners during training time or large amounts of training data. We developed a technique, Adversarially Guided Self-Play (ASP), that trains agents to solve the latent space alignment problem with little training data and no access to their pre-trained partners. Simulation results confirmed that, despite using less training data, agents trained by ASP aligned better with other agents than agents trained by other techniques. Subsequent human-participant studies involving hundreds of Amazon Mechanical Turk workers showed how laypeople understood our machines enough to perform well on team tasks and anticipate their machine partner’s successes or failures.
</summary>
<dc:date>2022-08-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>7.02 / 10.702 Experimental Biology &amp; Communication, Spring 2005</title>
<link href="https://hdl.handle.net/1721.1/152546.2" rel="alternate"/>
<author>
<name>King, Jonathan</name>
</author>
<author>
<name>Guarente, Leonard</name>
</author>
<author>
<name>Steiner, Lisa</name>
</author>
<author>
<name>RajBhandary, Uttam</name>
</author>
<id>https://hdl.handle.net/1721.1/152546.2</id>
<updated>2025-11-17T22:08:38Z</updated>
<published>2005-06-01T00:00:00Z</published>
<summary type="text">7.02 / 10.702 Experimental Biology &amp; Communication, Spring 2005
King, Jonathan; Guarente, Leonard; Steiner, Lisa; RajBhandary, Uttam
This introductory biology laboratory course covers the application of experimental techniques in microbiology, biochemistry, and cell and developmental biology. Emphasis is placed on integrating factual knowledge with an understanding of experimental design and data analysis in order to prepare students for future research projects. Development of skills critical for writing about scientific findings in modern biology is also covered in the Scientific Communications portion of the curriculum, 7.02CI. Additional faculty: Dr. Katherine Bacon Schneider, Dr. Jean-Francois Hamel, Ms. Deborah Kruzel, and Dr. Megan Rokop.
</summary>
<dc:date>2005-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing Lights: The Challenges of Transforming Street Lights into an Urban Intelligence Platform</title>
<link href="https://hdl.handle.net/1721.1/163617" rel="alternate"/>
<author>
<name>Alvarez, Ricardo</name>
</author>
<author>
<name>Duarte, Fabio</name>
</author>
<author>
<name>Frenchman, Dennis</name>
</author>
<author>
<name>Ratti, Carlo</name>
</author>
<id>https://hdl.handle.net/1721.1/163617</id>
<updated>2025-11-11T03:09:31Z</updated>
<published>2022-08-22T00:00:00Z</published>
<summary type="text">Sensing Lights: The Challenges of Transforming Street Lights into an Urban Intelligence Platform
Alvarez, Ricardo; Duarte, Fabio; Frenchman, Dennis; Ratti, Carlo
The technological transformation behind intelligent infrastructure systems requires institutional and stakeholder realignment in their development. In this article, we evaluate the challenges for the production of smart infrastructure through an in-depth analysis of the development of smart street lighting strategies. We conducted surveys and semi-structured interviews with key stakeholders and industry leaders in public illumination, as well as with public officials from cities on three continents, to understand the challenges they face and the strategies being developed to meet those challenges, and to reflect on the lessons provided for the design, creation, and operation of public smart infrastructure systems. We find three key barriers. First, differences in vision that reflect a lack of fit between operators of the current infrastructure and the new possibilities afforded by digital technologies. Second, a lack of policies that would facilitate the adoption of these new technologies, particularly with regard to privacy and data operationalization. Third, difficulties in public engagement. These barriers to innovation hinder the capacity of cities to maximize the possibilities, as well as the social value, of intelligent street lights as a future-proof platform for urban knowledge and urban applications.
</summary>
<dc:date>2022-08-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intratumorally anchored cytokine therapy</title>
<link href="https://hdl.handle.net/1721.1/163616" rel="alternate"/>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Kaufman, Howard L</name>
</author>
<author>
<name>Schmidt, Michael M</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<id>https://hdl.handle.net/1721.1/163616</id>
<updated>2025-11-11T03:09:36Z</updated>
<published>2022-06-02T00:00:00Z</published>
<summary type="text">Intratumorally anchored cytokine therapy
Wittrup, K Dane; Kaufman, Howard L; Schmidt, Michael M; Irvine, Darrell J
INTRODUCTION: On-target, off-tumor toxicity severely limits systemic dosing of cytokines and agonist antibodies for cancer. Intratumoral administration is increasingly being explored to mitigate this problem. Full exploitation of this mode of administration must include a mechanism for sustained retention of the drug; otherwise, rapid diffusion out of the tumor eliminates any advantage.&#13;
&#13;
AREAS COVERED: We focus here on strategies for anchoring immune agonists in accessible formats. Such anchoring may utilize extracellular matrix components, cell surface receptor targets, or exogenously administered particulate materials. Promising alternative strategies not reviewed here include slow release from the interior of a material depot, expression following local transfection, and conditional proteolytic activation of masked molecules.&#13;
&#13;
EXPERT OPINION: An effective mechanism for tissue retention is a critical component of intratumorally anchored cytokine therapy, as leakage leads to decreased tumor drug exposure and increased systemic toxicity. Matching variable drug release kinetics with receptor-mediated cellular uptake is an intrinsic requirement for the alternative strategies mentioned above. Bioavailability of an anchored form of the administered drug is key to obviating this balancing act.
</summary>
<dc:date>2022-06-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Generative Dialogue Framework and the Pursuit of Better Listening by Journalists: A Design-Centered Approach for More Constructive Conversations with Audiences</title>
<link href="https://hdl.handle.net/1721.1/163615" rel="alternate"/>
<author>
<name>Dimitrakopoulou, Dimitra</name>
</author>
<author>
<name>Lewis, Seth C</name>
</author>
<id>https://hdl.handle.net/1721.1/163615</id>
<updated>2025-11-11T03:09:21Z</updated>
<published>2022-05-18T00:00:00Z</published>
<summary type="text">The Generative Dialogue Framework and the Pursuit of Better Listening by Journalists: A Design-Centered Approach for More Constructive Conversations with Audiences
Dimitrakopoulou, Dimitra; Lewis, Seth C
This article introduces the Generative Dialogue Framework (GDF) and explores its potential as a pedagogical intervention, one that could help reimagine the future of engaged journalism by bringing design-thinking practices, creativity, and deep-listening modalities into play. The framework is developed through design thinking and builds around principles from the field of design. It uses virtual meeting technologies to organize small-group conversations, allows for creative and playful activities to help people share stories and feelings, and aims to create an ambient atmosphere of mutual understanding and co-creative problem-solving. With this article, we aspire to initiate a conversation around the value of “pollinating” journalism studies with concepts and principles from design thinking and facilitation so that journalists could become empowered to connect with their audiences with greater empathy and compassion and thereby surface diverse and rich lived experiences using more active and reflective listening skills. To test the framework’s potential for enhancing engaged journalism curricula, we collaborated with 17 journalism students at a U.S. university in a series of activities, from initial training on the platform to hosting a conversation using the GDF to ultimately producing a news story based on the insights acquired through this design-centered approach.
</summary>
<dc:date>2022-05-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Continuous Electrowetting of Liquid Metal for Reconfigurable Electronics</title>
<link href="https://hdl.handle.net/1721.1/163614" rel="alternate"/>
<author>
<name>Babatain, Wedyan</name>
</author>
<author>
<name>Park, Christine</name>
</author>
<author>
<name>Harraz, Deiaa M</name>
</author>
<author>
<name>Kilic Afsar, Ozgun</name>
</author>
<author>
<name>Honnet, Cedric</name>
</author>
<author>
<name>Lov, Sarah</name>
</author>
<author>
<name>Labrune, Jean‐Baptiste</name>
</author>
<author>
<name>Dickey, Michael D</name>
</author>
<author>
<name>Ishii, Hiroshi</name>
</author>
<id>https://hdl.handle.net/1721.1/163614</id>
<updated>2025-11-11T03:09:24Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">Programmable Continuous Electrowetting of Liquid Metal for Reconfigurable Electronics
Babatain, Wedyan; Park, Christine; Harraz, Deiaa M; Kilic Afsar, Ozgun; Honnet, Cedric; Lov, Sarah; Labrune, Jean‐Baptiste; Dickey, Michael D; Ishii, Hiroshi
Dynamic manipulation of the shape and position of liquid metal (LM), a deformable and highly conductive material, presents new opportunities for reconfigurable electronics, fluidic logic, and soft-actuation systems. This study combines continuous electrowetting (CEW) with electrochemical modulation of the LM–electrolyte interface to achieve tunable and directional LM manipulation in 2D spaces. A key finding is that under a fixed external electric field, the LM moves in a direction that depends on its electrochemical potential. The LM potential is controlled using a substrate featuring patterns of laser-induced graphene (LIG), which is non-wetting to LM and electrically conductive. This strategy enables a range of functionalities, including “valves” for on-demand LM control, LM droplet sorting, feedback sensing, and fluidic logic gates. The strategy can also control the motion of LM droplets across 2D spaces. Finally, it is utilized within a reconfigurable circuit platform where the LM functions as a dynamic interconnect for sequential activation, parallel switching, and self-healing circuits. By coupling the electrically driven motion of LM with the versatility of LIG patterning, this work establishes a versatile framework for reconfigurable electronics, programmable fluidic systems, and adaptive systems.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of CEST MRI Reporter Protein Design Using Cation‐Pi Networks</title>
<link href="https://hdl.handle.net/1721.1/163613" rel="alternate"/>
<author>
<name>Korenchan, David E.</name>
</author>
<author>
<name>French, Ethan J.</name>
</author>
<author>
<name>Runco, Emerenziana</name>
</author>
<author>
<name>Dhakan, Chetan B.</name>
</author>
<author>
<name>Yan, Jinwu</name>
</author>
<author>
<name>Nakashima, Hiroshi</name>
</author>
<author>
<name>McMahon, Michael T.</name>
</author>
<author>
<name>Gilad, Assaf A.</name>
</author>
<author>
<name>Farrar, Christian T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163613</id>
<updated>2025-11-11T03:09:13Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Optimization of CEST MRI Reporter Protein Design Using Cation‐Pi Networks
Korenchan, David E.; French, Ethan J.; Runco, Emerenziana; Dhakan, Chetan B.; Yan, Jinwu; Nakashima, Hiroshi; McMahon, Michael T.; Gilad, Assaf A.; Farrar, Christian T.
Nucleic acid-based therapeutics, such as oncolytic virotherapy or gene therapy, would benefit greatly from a reporter gene that induces endogenous production of a protein biomarker to noninvasively track the delivery, persistence, and spread with imaging. Several chemical exchange saturation transfer (CEST) reporter proteins detectable by magnetic resonance imaging (MRI) have been demonstrated to have high sensitivity. However, to date none can provide strong CEST contrast at a distinct resonance from that of endogenous proteins, limiting their specificity. We investigated proteins and peptides containing tyrosine (Tyr), tryptophan (Trp), and lysine (Lys) residues that demonstrate CEST contrast shifted far downfield (4–10 ppm) from water. Although Tyr, Trp, and Lys exchangeable protons are typically not detectable under physiological conditions, those in our tested molecules are, having exchange rates of 400–2500 s−1. The large chemical shift dispersion and rapid exchange rates are attributed to unique hydrogen bonding and cation-π network interactions. These discoveries set the stage for designing a stable reporter protein with high detection specificity and sensitivity that can facilitate the in vivo monitoring of viral and gene therapies using MRI.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D‐Printed Mixed Ionic‐Electronic Conductive Polymer Composites for Long‐Term Bioelectronic Sensing</title>
<link href="https://hdl.handle.net/1721.1/163612" rel="alternate"/>
<author>
<name>Bagatella, Simone</name>
</author>
<author>
<name>Roh, Heejung</name>
</author>
<author>
<name>Cavallaro, Marco</name>
</author>
<author>
<name>Suriano, Raffaella</name>
</author>
<author>
<name>Levi, Marinella</name>
</author>
<author>
<name>Gumyusenge, Aristide</name>
</author>
<id>https://hdl.handle.net/1721.1/163612</id>
<updated>2025-11-11T03:09:34Z</updated>
<published>2025-09-07T00:00:00Z</published>
<summary type="text">3D‐Printed Mixed Ionic‐Electronic Conductive Polymer Composites for Long‐Term Bioelectronic Sensing
Bagatella, Simone; Roh, Heejung; Cavallaro, Marco; Suriano, Raffaella; Levi, Marinella; Gumyusenge, Aristide
Reliable, long-term monitoring of health data is becoming increasingly essential in modern healthcare. While computational and machine learning capabilities continue to advance, the lack of lightweight, conformable, and customizable hardware remains a key limitation. In the context of heart health, traditional electrocardiogram (ECG) electrodes are rigid and often uncomfortable for continuous wear. Existing soft electrodes tend to be either cost-prohibitive or unreliable over extended use. In this work, all-polymer, 3D-printed, highly stable, and conformable ECG patches are developed for long-term signal acquisition. Through material optimization, composite materials with electrical conductivity up to 1.7 S cm−1 are developed, maintaining over 85% of their conductivity after 60 days of exposure to open air. These materials also exhibit remarkable stretchability (strain at break up to 253%) and high mechanical strength (tensile strength of 25 MPa). The formulated inks are fully compatible with 3D material extrusion techniques, significantly reducing manufacturing costs. The printed electrodes are flexible, stretchable, and capable of recording high-quality ECG signals, performing comparably to state-of-the-art metal electrodes, even after more than a month of use and storage in open air.
</summary>
<dc:date>2025-09-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>A causal inference framework to compare the effectiveness of life-sustaining ICU therapies—using the example of cancer patients with sepsis</title>
<link href="https://hdl.handle.net/1721.1/163611" rel="alternate"/>
<author>
<name>Matos, João</name>
</author>
<author>
<name>Struja, Tristan</name>
</author>
<author>
<name>Woite, Naira Link</name>
</author>
<author>
<name>Restrepo, David</name>
</author>
<author>
<name>Waschka, Andre Kurepa</name>
</author>
<author>
<name>Celi, Leo A</name>
</author>
<author>
<name>Sauer, Christopher M</name>
</author>
<id>https://hdl.handle.net/1721.1/163611</id>
<updated>2025-11-11T03:09:33Z</updated>
<published>2025-09-08T00:00:00Z</published>
<summary type="text">A causal inference framework to compare the effectiveness of life-sustaining ICU therapies—using the example of cancer patients with sepsis
Matos, João; Struja, Tristan; Woite, Naira Link; Restrepo, David; Waschka, Andre Kurepa; Celi, Leo A; Sauer, Christopher M
The rise in cancer patients could lead to an increase in intensive care unit (ICU) admissions. We explored differences in treatment practices and outcomes of invasive therapies between sepsis patients with and without cancer. Adults admitted to the ICU for sepsis from 2008 to 2019 were extracted from the MIMIC-IV and eICU-CRD databases. Using Extreme Gradient Boosting, we estimated the odds of receiving invasive mechanical ventilation (IMV) or vasopressors. Targeted maximum likelihood estimation (TMLE) models estimated treatment effects of IMV and vasopressors on in-hospital mortality and 28 hospital-free days. 58,988 adult septic patients were included, of which 6145 had cancer. In-hospital mortality was higher for cancer patients (30.3% vs. 16.1%). Patients with cancer had lower odds of receiving IMV (aOR [95%CI], 0.94 [0.90–0.97]); this was pronounced for hematologic patients (aOR 0.89 [0.84–0.93]). Odds of receiving vasopressors were also lower for hematologic patients (aOR 0.89 [0.84–0.94]). TMLE models found IMV to be associated with higher in-hospital mortality for both solid and hematologic patients (ATE 3% [1%–5%] and 6% [3%–9%], respectively), while vasopressors were associated with higher in-hospital mortality for patients with solid and metastatic cancer (ATE 6% [4%–8%] and 3% [1%–6%], respectively). We utilized US-wide ICU data to estimate the relationship between mortality and the use of common therapies. With the exception of hematologic patients being less likely to receive IMV, we did not find differential treatment patterns. We did not demonstrate an average survival benefit for these therapies, underscoring the need for a more granular analysis to identify subgroups who benefit from these interventions.
</summary>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerated Navigator for Rapid ∆B0 Field Mapping for Real-Time Shimming and Motion Correction of Human Brain MRI</title>
<link href="https://hdl.handle.net/1721.1/163610" rel="alternate"/>
<author>
<name>Jayadev, Nutandev Bikkamane</name>
</author>
<author>
<name>Stockmann, Jason</name>
</author>
<author>
<name>Frost, Robert</name>
</author>
<author>
<name>Arango, Nicolas</name>
</author>
<author>
<name>Chang, Yulin</name>
</author>
<author>
<name>van der Kouwe, André</name>
</author>
<author>
<name>Andronesi, Ovidiu C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163610</id>
<updated>2025-11-11T03:09:28Z</updated>
<published>2025-09-04T00:00:00Z</published>
<summary type="text">Accelerated Navigator for Rapid ∆B0 Field Mapping for Real-Time Shimming and Motion Correction of Human Brain MRI
Jayadev, Nutandev Bikkamane; Stockmann, Jason; Frost, Robert; Arango, Nicolas; Chang, Yulin; van der Kouwe, André; Andronesi, Ovidiu C.
∆B0 shim optimization performed at the beginning of an MR scan is unable to correct for ∆B0 field inhomogeneities caused by patient motion or hardware instability during scans. Navigator-based methods have previously been demonstrated to be effective for motion and shim correction. The purpose of this work was to accelerate volumetric navigators (vNavs) to allow fast acquisition of the parent navigated sequence with short real-time feedback time and high spatial resolution of the ∆B0 field mapping. A GRAPPA-accelerated 3D dual-echo EPI vNav was implemented on a 3 T Prisma MRI scanner. Testing was performed on an anthropomorphic head phantom and 11 human participants. vNav-derived ∆B0 field maps with various spatial resolutions were compared to Cartesian-encoded gold-standard 3D gradient-echo (3D-GRE) ∆B0 field mapping. ∆B0 shimming was evaluated for the scanner's spherical harmonics shims and a custom-made AC/DC RF-receive/∆B0-shim array. The performance of dual-echo and single-echo accelerated navigators was compared for tracking and updating ∆B0 field maps during motion. Real-time motion and shim corrections for 2D MRI and 3D MRSI sequences were assessed in vivo with controlled head movement. Up to 8-fold acceleration of vNavs significantly reduced geometric distortions and signal dropouts near air-tissue interfaces and metal implants. Acceleration allowed a flexible tradeoff between spatial resolution (2.5–7.5 mm) and acquisition time (242–1302 ms). Notably, an accelerated high-resolution (5 mm) vNav was faster (378 ms) than an unaccelerated low-resolution (7.5 mm) vNav (700 ms) and showed better agreement with 3D-GRE ∆B0 field mapping, with 5.5 Hz RMSE, 1 Hz bias, and a [−10%, +10%] confidence interval. Accelerated vNavs improved 3D MRSI and 2D MRI in real-time motion and shim correction applications. Advanced shimming with spherical harmonics and the shim array showed superior ∆B0 correction, especially with joint shim optimization.
GRAPPA-accelerated vNavs provide fast, robust, and high-quality ∆B0 field mapping and shimming over the whole brain. The accelerated vNavs enable rapid correction of ∆B0 field inhomogeneities and faster acquisition of the navigated parent sequence. This methodology can be used for real-time motion and shim correction to enhance data quality in various MRI applications.
</summary>
<dc:date>2025-09-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wirelessly Powered Ingestible Capsule for Optical Stimulation of the Gastrointestinal Tract in Rodents</title>
<link href="https://hdl.handle.net/1721.1/163609" rel="alternate"/>
<author>
<name>Elsherif, Mohamed</name>
</author>
<author>
<name>El‐Din, Rawan Badr</name>
</author>
<author>
<name>Makhambetova, Zhansaya</name>
</author>
<author>
<name>Naser, Heba</name>
</author>
<author>
<name>Boitet, Maylis</name>
</author>
<author>
<name>Singh, Rahul</name>
</author>
<author>
<name>Oh, Keonghwan</name>
</author>
<author>
<name>Sukesan, Revathi</name>
</author>
<author>
<name>Ha, Sohmyung</name>
</author>
<author>
<name>Ramadi, Khalil B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163609</id>
<updated>2025-11-11T03:09:29Z</updated>
<published>2025-08-20T00:00:00Z</published>
<summary type="text">Wirelessly Powered Ingestible Capsule for Optical Stimulation of the Gastrointestinal Tract in Rodents
Elsherif, Mohamed; El‐Din, Rawan Badr; Makhambetova, Zhansaya; Naser, Heba; Boitet, Maylis; Singh, Rahul; Oh, Keonghwan; Sukesan, Revathi; Ha, Sohmyung; Ramadi, Khalil B.
Optogenetics enables cell-specific activation and inhibition of neurons. The gut contains intricate networks of enteric and central neurons, but in vivo investigation is difficult due to its motile and harsh environment. This work reports an ingestible electronic capsule for non-invasive optical gut stimulation (ICOPS) in rodents. ICOPS is wirelessly powered via a transmitter coil, delivered by oral gavage, and excreted safely without obstruction within 20 h. The device integrates a micro-light-emitting diode (µLED) operating at 470 nm—a standard wavelength for channelrhodopsin-2 activation—together with a 460-turn ferrite-core coil and a shunt capacitor. Optimized circuits enable efficient power transfer at low frequencies (45–140 kHz), addressing weak coupling and misalignment. ICOPS operates effectively up to 14 cm longitudinally, 9 cm laterally, and at 75° rotation relative to the magnetic field. Specific absorption rate (SAR) analysis confirms exposure within safe occupational limits at 6 A and 45/63 kHz. In vivo validation using an in vivo imaging system (IVIS) and micro-computed tomography (µCT) confirms functionality and safety. ICOPS is the first rodent-scale ingestible capsule fabricated entirely in-house using 3D printing, without the need for cleanroom facilities, providing a compact, scalable platform for non-invasive optogenetic modulation of enteric circuits.
</summary>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>RBD-VLP Vaccines Adjuvanted with Alum or SWE Protect K18-hACE2 Mice against SARS-CoV-2 VOC Challenge</title>
<link href="https://hdl.handle.net/1721.1/163608" rel="alternate"/>
<author>
<name>Wong, Ting Y</name>
</author>
<author>
<name>Russ, Brynnan P</name>
</author>
<author>
<name>Lee, Katherine S</name>
</author>
<author>
<name>Miller, Olivia A</name>
</author>
<author>
<name>Kang, Jason</name>
</author>
<author>
<name>Cooper, Melissa</name>
</author>
<author>
<name>Winters, Michael T</name>
</author>
<author>
<name>Rodriguez-Aponte, Sergio A</name>
</author>
<author>
<name>Dalvie, Neil C</name>
</author>
<author>
<name>Johnston, Ryan S</name>
</author>
<author>
<name>Rader, Nathaniel A</name>
</author>
<author>
<name>Wong, Zeriel Y</name>
</author>
<author>
<name>Cyphert, Holly A</name>
</author>
<author>
<name>Martinez, Ivan</name>
</author>
<author>
<name>Shaligram, Umesh</name>
</author>
<author>
<name>Batwal, Saurabh</name>
</author>
<author>
<name>Lothe, Rakesh</name>
</author>
<author>
<name>Chandrasekaran, Rahul</name>
</author>
<author>
<name>Nagar, Gaurav</name>
</author>
<author>
<name>Rajurkar, Meghraj</name>
</author>
<author>
<name>Rao, Harish</name>
</author>
<author>
<name>Bevere, Justin R</name>
</author>
<author>
<name>Barbier, Mariette</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Damron, F Heath</name>
</author>
<id>https://hdl.handle.net/1721.1/163608</id>
<updated>2026-03-08T03:29:25Z</updated>
<published>2022-08-15T00:00:00Z</published>
<summary type="text">RBD-VLP Vaccines Adjuvanted with Alum or SWE Protect K18-hACE2 Mice against SARS-CoV-2 VOC Challenge
Wong, Ting Y; Russ, Brynnan P; Lee, Katherine S; Miller, Olivia A; Kang, Jason; Cooper, Melissa; Winters, Michael T; Rodriguez-Aponte, Sergio A; Dalvie, Neil C; Johnston, Ryan S; Rader, Nathaniel A; Wong, Zeriel Y; Cyphert, Holly A; Martinez, Ivan; Shaligram, Umesh; Batwal, Saurabh; Lothe, Rakesh; Chandrasekaran, Rahul; Nagar, Gaurav; Rajurkar, Meghraj; Rao, Harish; Bevere, Justin R; Barbier, Mariette; Love, J Christopher; Damron, F Heath
The ongoing COVID-19 pandemic has contributed largely to the global vaccine disparity. Development of protein subunit vaccines can help alleviate shortages of COVID-19 vaccines delivered to low-income countries. Here, we evaluated the efficacy of a three-dose virus-like particle (VLP) vaccine composed of hepatitis B surface antigen (HBsAg) decorated with the receptor binding domain (RBD) from the Wuhan or Beta SARS-CoV-2 strain, adjuvanted with either aluminum hydroxide (alum) or squalene in water emulsion (SWE). RBD HBsAg vaccines were compared to the standard two doses of Pfizer mRNA vaccine. Alum-adjuvanted vaccines were composed of either HBsAg conjugated with Beta RBD alone (b RBD HBsAg1Al) or a combination of both Beta RBD HBsAg and Wuhan RBD HBsAg (b/Wu RBD HBsAg1Al). RBD vaccines adjuvanted with SWE were formulated with Beta RBD HBsAg (b RBD HBsAg1SWE) or without HBsAg (b RBD1SWE). Both alum-adjuvanted RBD HBsAg vaccines generated functional RBD IgG against multiple SARS-CoV-2 variants of concern (VOC), decreased viral RNA burden, and lowered inflammation in the lung against Alpha or Beta challenge in K18-hACE2 mice. However, only b/Wu RBD HBsAg1Al afforded 100% survival to mice challenged with Alpha or Beta VOC. Furthermore, mice immunized with b RBD HBsAg1SWE developed cross-reactive neutralizing antibodies against major SARS-CoV-2 VOC, had lowered viral RNA burden in the lung and brain, and were protected from Alpha or Beta challenge similarly to mice immunized with Pfizer mRNA. However, RBD1SWE immunization failed to protect mice from VOC challenge. Our findings demonstrate that RBD HBsAg VLP vaccines provided protection profiles similar to the approved Pfizer mRNA vaccines used worldwide and may offer protection against SARS-CoV-2 VOC.
</summary>
<dc:date>2022-08-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Monoid Algebras Having Every Nonempty Subset of N ≥ 2 as a Length Set</title>
<link href="https://hdl.handle.net/1721.1/163607" rel="alternate"/>
<author>
<name>Geroldinger, Alfred</name>
</author>
<author>
<name>Gotti, Felix</name>
</author>
<id>https://hdl.handle.net/1721.1/163607</id>
<updated>2026-03-08T03:20:09Z</updated>
<published>2025-04-12T00:00:00Z</published>
<summary type="text">On Monoid Algebras Having Every Nonempty Subset of N ≥ 2 as a Length Set
Geroldinger, Alfred; Gotti, Felix
We construct monoid algebras that satisfy the ascending chain condition on principal ideals and have the property that every nonempty subset of N ≥ 2 occurs as a length set.
</summary>
<dc:date>2025-04-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Psyche Multispectral Imager Investigation: Characterizing the Geology, Topography, and Multispectral Properties of a Metal-Rich World</title>
<link href="https://hdl.handle.net/1721.1/163606" rel="alternate"/>
<author>
<name>Bell, J. F.</name>
</author>
<author>
<name>Ravine, M. A.</name>
</author>
<author>
<name>Caplinger, M. A.</name>
</author>
<author>
<name>Schaffner, J. A.</name>
</author>
<author>
<name>Brylow, S. M.</name>
</author>
<author>
<name>Clark, M. J.</name>
</author>
<author>
<name>Peckham, D. A.</name>
</author>
<author>
<name>Otjens, P. T.</name>
</author>
<author>
<name>Price, G. J.</name>
</author>
<author>
<name>Rowell, T.</name>
</author>
<author>
<name>Ravine, J. W.</name>
</author>
<author>
<name>Laramee, J. D.</name>
</author>
<author>
<name>Juergens, R. C.</name>
</author>
<author>
<name>Morgan, W.</name>
</author>
<author>
<name>Parker, A. G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163606</id>
<updated>2026-03-08T03:20:06Z</updated>
<published>2025-05-21T00:00:00Z</published>
<summary type="text">The Psyche Multispectral Imager Investigation: Characterizing the Geology, Topography, and Multispectral Properties of a Metal-Rich World
Bell, J. F.; Ravine, M. A.; Caplinger, M. A.; Schaffner, J. A.; Brylow, S. M.; Clark, M. J.; Peckham, D. A.; Otjens, P. T.; Price, G. J.; Rowell, T.; Ravine, J. W.; Laramee, J. D.; Juergens, R. C.; Morgan, W.; Parker, A. G.
The Psyche Multispectral Imager (“the Imager”) is a payload system designed to directly achieve or to indirectly enable the key scientific goals and optical navigation requirements of NASA’s Psyche mission, which will conduct the first up-close orbital investigation of the metal-rich Main Belt asteroid (16) Psyche. The Imager consists of a pair of block redundant cameras and electronics that are mounted inside the thermally controlled spacecraft body, with a view out the spacecraft −X panel that will be nadir-pointed during nominal asteroid orbital mapping operations. The two identical Camera Heads are connected to a separate Digital Electronics Assembly (DEA) box that interfaces to the spacecraft avionics and that provides power, commanding, data processing, and onboard image storage. The Imager system shares significant heritage with imaging instruments flown on the Mars Climate Orbiter, the Mars Science Laboratory and Mars 2020 rovers, and Juno. Each camera consists of a 1600 × 1200 photosensitive pixel charge-coupled device (CCD) detector and its associated electronics, a 9-position filter wheel assembly, a compact catadioptric f/2.9 telescope with a fixed focal length of 148 mm, and a sunshade to minimize stray and scattered light. The Imager CCD, filters, and optics enable broadband polychromatic (∼540 ± 250 nm) imaging plus narrowband imaging in 7 colors centered from 439 to 1015 nm. An additional neutral density filter enables protection of the CCD from direct solar illumination. Each camera has a field of view of 4.6° × 3.4° and an instantaneous field of view of 50 μrad/pixel that enables imaging of the asteroid at scales ranging from ∼35 m/pix from 700 km altitude to ∼4 m/pix at 75 km altitude.
The primary camera (“Imager A”) is pointed along the spacecraft −X axis, and the backup camera (“Imager B”) is toed-out by 3.7° to potentially enable greater surface area coverage per unit time if both Imagers are operated simultaneously during some mission phases. Stereoscopic mapping is performed by observing the same surface regions with either camera over a range of off-nadir pointing angles.
</summary>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanobiochemical finite element model to analyze impact-loading-induced cell damage, subsequent proteoglycan loss, and anti-oxidative treatment effects in articular cartilage</title>
<link href="https://hdl.handle.net/1721.1/163605" rel="alternate"/>
<author>
<name>Kosonen, Joonas P.</name>
</author>
<author>
<name>Eskelinen, Atte S. A.</name>
</author>
<author>
<name>Orozco, Gustavo A.</name>
</author>
<author>
<name>Coleman, Mitchell C.</name>
</author>
<author>
<name>Goetz, Jessica E.</name>
</author>
<author>
<name>Anderson, Donald D.</name>
</author>
<author>
<name>Grodzinsky, Alan J.</name>
</author>
<author>
<name>Tanska, Petri</name>
</author>
<author>
<name>Korhonen, Rami K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163605</id>
<updated>2026-03-08T03:20:07Z</updated>
<published>2025-05-10T00:00:00Z</published>
<summary type="text">Mechanobiochemical finite element model to analyze impact-loading-induced cell damage, subsequent proteoglycan loss, and anti-oxidative treatment effects in articular cartilage
Kosonen, Joonas P.; Eskelinen, Atte S. A.; Orozco, Gustavo A.; Coleman, Mitchell C.; Goetz, Jessica E.; Anderson, Donald D.; Grodzinsky, Alan J.; Tanska, Petri; Korhonen, Rami K.
Joint trauma often leads to articular cartilage degeneration and post-traumatic osteoarthritis (PTOA). Pivotal determinants include trauma-induced excessive tissue strains that damage cartilage cells. As a downstream effect, these damaged cells can trigger cartilage degeneration via oxidative stress, cell death, and proteolytic tissue degeneration. N-acetylcysteine (NAC) has emerged as an antioxidant capable of inhibiting oxidative stress, cell death, and cartilage degeneration post-impact. However, the temporal effects of NAC are not fully understood and remain difficult to assess solely by physical experiments. Thus, we developed a computational finite element analysis framework to simulate a drop-tower impact of cartilage in Abaqus, and subsequent oxidative stress-related cell damage, and NAC treatment upon cartilage proteoglycan content in Comsol Multiphysics, based on prior ex vivo experiments. Model results provide evidence that immediate NAC treatment can reduce proteoglycan loss by mitigating oxidative stress, cell death (improved proteoglycan biosynthesis), and enzymatic proteoglycan depletion. Our simulations also indicate that delayed NAC treatment may not inhibit cartilage proteoglycan loss despite reduced cell death after impact. These results enhance understanding of the temporal effects of impact-related cell damage and treatment that are critical for the development of effective treatments for PTOA. In the future, our modeling framework could increase understanding of time-dependent mechanisms of oxidative stress and downstream effects in injured cartilage and aid in developing better treatments to mitigate PTOA progression.
</summary>
<dc:date>2025-05-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>The three-point energy correlator in the coplanar limit</title>
<link href="https://hdl.handle.net/1721.1/163604" rel="alternate"/>
<author>
<name>Gao, Anjie</name>
</author>
<author>
<name>Yang, Tong-Zhi</name>
</author>
<author>
<name>Zhang, Xiaoyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163604</id>
<updated>2026-03-08T03:26:34Z</updated>
<published>2025-08-04T00:00:00Z</published>
<summary type="text">The three-point energy correlator in the coplanar limit
Gao, Anjie; Yang, Tong-Zhi; Zhang, Xiaoyuan
Energy correlators are a class of observables that measure how energy is distributed across multiple detectors as a function of the angles between pairs of detectors. In this paper, we study the three-point energy correlator (EEEC) at lepton colliders in the three-particle near-to-plane (coplanar) limit. The leading-power contribution in this limit is governed by the three-jet (trijet) configuration. We introduce a new approach by projecting the EEEC onto the volume of the parallelepiped formed by the unit vectors aligned with the three detected final-state particles. Analogous to the back-to-back limit of the two-point energy correlator probing the dijet configuration, the small-volume limit of the EEEC probes the trijet configuration. We derive a transverse-momentum-dependent (TMD) based factorization theorem that captures the soft and collinear logarithms in the coplanar limit, which enables us to achieve next-to-next-to-next-to-leading logarithm (N3LL) resummation. To our knowledge, this is the first N3LL result for a trijet event shape. Additionally, we demonstrate that a similar factorization theorem can be applied to the fully differential EEEC in the three-particle coplanar limit, which provides a clean environment for studying different coplanar trijet shapes.
</summary>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduction of Plane Quartics and Cayley Octads</title>
<link href="https://hdl.handle.net/1721.1/163603" rel="alternate"/>
<author>
<name>van Bommel, Raymond</name>
</author>
<author>
<name>Docking, Jordan</name>
</author>
<author>
<name>Dokchitser, Vladimir</name>
</author>
<author>
<name>Lercier, Reynald</name>
</author>
<author>
<name>Lorenzo García, Elisa</name>
</author>
<id>https://hdl.handle.net/1721.1/163603</id>
<updated>2026-03-08T03:20:02Z</updated>
<published>2025-06-02T00:00:00Z</published>
<summary type="text">Reduction of Plane Quartics and Cayley Octads
van Bommel, Raymond; Docking, Jordan; Dokchitser, Vladimir; Lercier, Reynald; Lorenzo García, Elisa
We give a conjectural characterisation of the stable reduction of plane quartics over local fields in terms of their Cayley octads. This results in p-adic criteria that efficiently give the stable reduction type amongst the 42 possible types, and whether the reduction is hyperelliptic or not. These criteria are in the vein of the machinery of “cluster pictures” for hyperelliptic curves. We also construct explicit families of quartic curves that realise all possible stable types, against which we test these criteria. We give numerical examples that illustrate how to use these criteria in practice.
</summary>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Exceptional Is the Ear?</title>
<link href="https://hdl.handle.net/1721.1/163602" rel="alternate"/>
<author>
<name>Bergevin, Christopher</name>
</author>
<author>
<name>Freeman, Dennis M.</name>
</author>
<author>
<name>Coffin, Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/163602</id>
<updated>2026-03-08T03:20:01Z</updated>
<published>2025-05-12T00:00:00Z</published>
<summary type="text">How Exceptional Is the Ear?
Bergevin, Christopher; Freeman, Dennis M.; Coffin, Allison
Studies of hearing often conclude that the ear is “remarkable” or that its performance is “exceptional.” Some common examples include the following: ▹  the ears of mammals are encased in the hardest bone in the body; ▹  the ear contains the most vascularized tissue in the body; ▹  the ear has the highest resting potential in the body; ▹  ears have a unique “fingerprint”; ▹  the ear can detect signals below the thermal noise floor; and ▹  the ear is highly nonlinear (or highly linear, depending upon who you ask). Some claims hold up to further scrutiny, while others do not. Additionally, several claims hold for animals in one taxon, while others are shared across taxa. Most frequently, our sense of wonder results from the differences between ears as products of natural selection (over eons) and artificial systems as products of engineering design. Our goal in analyzing claims of remarkable or exceptional performance is to deepen our appreciation of these differences.
</summary>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Somato‐Cognitive Action Network in Focal Dystonia</title>
<link href="https://hdl.handle.net/1721.1/163601" rel="alternate"/>
<author>
<name>Wang, Yuchao</name>
</author>
<author>
<name>Huynh, Baothy</name>
</author>
<author>
<name>Ren, Jianxun</name>
</author>
<author>
<name>Chen, Mo</name>
</author>
<author>
<name>Zhang, Wei</name>
</author>
<author>
<name>Hu, Dan</name>
</author>
<author>
<name>Li, Shasha</name>
</author>
<author>
<name>Liu, Hesheng</name>
</author>
<author>
<name>Kimberley, Teresa J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163601</id>
<updated>2026-03-08T03:29:21Z</updated>
<published>2025-08-28T00:00:00Z</published>
<summary type="text">Somato‐Cognitive Action Network in Focal Dystonia
Wang, Yuchao; Huynh, Baothy; Ren, Jianxun; Chen, Mo; Zhang, Wei; Hu, Dan; Li, Shasha; Liu, Hesheng; Kimberley, Teresa J.
Background
The central pathology causing idiopathic focal dystonia remains unclear. The recently identified somato-cognitive action network (SCAN) has been implicated.

Objective
We tested whether the effector-agnostic SCAN may constitute a central pathology shared across dystonia subtypes, whereas the effector-specific regions in the primary sensorimotor cortex may show distinct functional changes specific to the dystonic body part.

Methods
We collected functional magnetic resonance imaging (MRI) from patients with focal dystonia (laryngeal dystonia [LD], N = 24; focal hand dystonia [FHD], N = 18) and healthy control participants (N = 21). Regions of interest were selected a priori within the basal ganglia-thalamo-cortical and cerebello-thalamo-cortical sensorimotor pathways. We investigated dystonia-dependent resting-state connectivity changes: between SCAN and related cortical regions, between cortical and noncortical regions, and among noncortical regions. Cortical network boundaries were individualized based on resting-state data. Separately, individualized hand and mouth/larynx regions were also generated from task-based MRI (finger-tapping and phonation, respectively) for comparison.

Results
Both focal dystonia subtypes showed significant functional changes (P = 0.048 for LD, P = 0.017 for FHD) compared to controls, driven by SCAN's higher functional connectivity to task-based mouth/larynx region and concomitantly lower connectivity to the cingulo-opercular network. No significant subcortical or cerebellar changes were observed when LD and FHD were modeled as independent groups. However, exploratory analysis combining LD and FHD suggested a dystonia-dependent asynchronization between SCAN and sensorimotor cerebellum (P = 0.010) that may indicate a pathological rather than compensatory process.

Conclusions
We demonstrate that SCAN is uniquely associated with focal dystonia dysfunction beyond the dystonic effector regions, offering insights into pathophysiology and treatments. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
</summary>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Practical and Optimal First-Order Method for Large-Scale Convex Quadratic Programming</title>
<link href="https://hdl.handle.net/1721.1/163600" rel="alternate"/>
<author>
<name>Lu, Haihao</name>
</author>
<author>
<name>Yang, Jinwen</name>
</author>
<id>https://hdl.handle.net/1721.1/163600</id>
<updated>2026-03-08T03:20:04Z</updated>
<published>2025-07-02T00:00:00Z</published>
<summary type="text">A Practical and Optimal First-Order Method for Large-Scale Convex Quadratic Programming
Lu, Haihao; Yang, Jinwen
Convex quadratic programming (QP) is an important class of optimization problems with wide applications in practice. Classic QP solvers are based on either the simplex or the barrier method, both of which suffer from scalability issues because their computational bottleneck is solving linear equations. In this paper, we design and analyze a first-order method for QP, called restarted accelerated primal-dual hybrid gradient (rAPDHG), whose computational bottleneck is matrix-vector multiplication. We show that rAPDHG has a linear convergence rate to an optimal solution when solving QP, and the obtained linear rate is optimal among a wide class of primal-dual methods. Furthermore, we connect the linear rate with a sharpness constant of the KKT system of QP, which is a standard quantity to measure the hardness of a continuous optimization problem. Numerical experiments demonstrate that both restarts and acceleration can significantly improve the performance of the algorithm. Lastly, we present PDQP.jl, an open-source solver based on rAPDHG that can be run on both GPU and CPU. With a numerical comparison with SCS and OSQP on standard QP benchmark sets and large-scale synthetic QP instances, we demonstrate the effectiveness of rAPDHG for solving QP.
</summary>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effect of Iron Oxidation State on Solvent Extraction Scandium Extraction Process from Bauxite Residue and Life Cycle Assessment</title>
<link href="https://hdl.handle.net/1721.1/163599" rel="alternate"/>
<author>
<name>Braz, Vitor M. P.</name>
</author>
<author>
<name>Vaccari, Mentore</name>
</author>
<author>
<name>Espinosa, Denise C. R.</name>
</author>
<author>
<name>Tenório, Jorge A. S.</name>
</author>
<author>
<name>Botelho Junior, Amilton B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163599</id>
<updated>2025-11-08T04:12:10Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">Effect of Iron Oxidation State on Solvent Extraction Scandium Extraction Process from Bauxite Residue and Life Cycle Assessment
Braz, Vitor M. P.; Vaccari, Mentore; Espinosa, Denise C. R.; Tenório, Jorge A. S.; Botelho Junior, Amilton B.
The extraction of Sc from bauxite residue (also known as red mud) is promising but challenging, as the residue's high Fe content reduces extraction efficiency. This study investigated the impact of Fe on Sc recovery by solvent extraction and evaluated the environmental impact of the process. A hydrometallurgical route was chosen for Sc extraction, involving leaching with H2SO4 followed by solvent extraction with Cyanex 923 and Alamine 336. A synergistic combination of these extractants was tested to increase selectivity. Results showed that Cyanex 923 extracted nearly 100% of the Sc, but the co-extraction of Fe (25–80%) remained a significant challenge. A combination of Cyanex 923 and Alamine 336 improved Sc selectivity by minimizing Fe extraction (&lt; 20%) at pH 0.5–1.0. LCA indicated that leaching had the greatest environmental impact due to high energy consumption, while solvent extraction also contributed considerably because of kerosene use for dilution. The highest environmental impact is on ozone depletion in all steps of the process (leaching and solvent extraction). Synergistic use of Cyanex 923 and Alamine 336 is an efficient strategy for Sc extraction with low Fe co-extraction. Further optimization is needed for the industrial scale, particularly concerning environmental impacts.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Membrane Application in Hydrometallurgical Processing</title>
<link href="https://hdl.handle.net/1721.1/163598" rel="alternate"/>
<author>
<name>Botelho Junior, Amilton B.</name>
</author>
<author>
<name>Peng, Hong</name>
</author>
<author>
<name>Kim, Jihye</name>
</author>
<id>https://hdl.handle.net/1721.1/163598</id>
<updated>2026-03-08T03:28:28Z</updated>
<published>2025-02-18T00:00:00Z</published>
<summary type="text">Membrane Application in Hydrometallurgical Processing
Botelho Junior, Amilton B.; Peng, Hong; Kim, Jihye
Critical minerals are crucial for the energy transition and for the successful commercialization of hydropower, wind turbines, and photovoltaic panels. The increasing demand puts pressure on the search for new sources, including new mining sites, tailings, and urban solid wastes. Membrane-based separation is well established for water desalination and wastewater treatment. Recently, the search for new processes to recover critical minerals in aqueous processing has shed light on its potential application. Electrodialysis has been demonstrated to be a mature electrochemical separation technique, while supported liquid membranes hold great potential for future developments. Membrane cost represents the main drawback, and for this reason new materials are under development, including membranes synthesized for a specific critical mineral, such as Li and the rare earth elements.
</summary>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Review of The Rhetoricity of Philosophy: Audience in Perelman and Ricoeur After the Badiou-Cassin Debate</title>
<link href="https://hdl.handle.net/1721.1/163597" rel="alternate"/>
<author>
<name>Schiappa, Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/163597</id>
<updated>2025-11-08T04:12:21Z</updated>
<published>2025-09-06T00:00:00Z</published>
<summary type="text">Review of The Rhetoricity of Philosophy: Audience in Perelman and Ricoeur After the Badiou-Cassin Debate
Schiappa, Edward
In this well-written and superbly researched book, Blake D. Scott uses the “debate” between Alain Badiou and Barbara Cassin as a point of departure to revisit the longstanding tension between philosophy and rhetoric. Through substantial exegeses of the work of Chaïm Perelman and Lucie Olbrechts‑Tyteca, as well as selected writings by Paul Ricœur, Scott rejects the conventional view that philosophy and rhetoric are separate disciplines. He argues instead for their asymmetrical interdependence: rhetoric is constitutive of philosophical practice. Central to his thesis is the concept of rhetoricity—the rhetorical dimension inherent in all discourse by virtue of the human “rhetorical capacity,” our ability to reflect on audiences and the potential for persuasion.
</summary>
<dc:date>2025-09-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Household Portfolios and Retirement Saving over the Life Cycle</title>
<link href="https://hdl.handle.net/1721.1/163596" rel="alternate"/>
<author>
<name>PARKER, JONATHAN A</name>
</author>
<author>
<name>SCHOAR, ANTOINETTE</name>
</author>
<author>
<name>COLE, ALLISON</name>
</author>
<author>
<name>SIMESTER, DUNCAN</name>
</author>
<id>https://hdl.handle.net/1721.1/163596</id>
<updated>2026-03-08T03:29:20Z</updated>
<published>2025-08-12T00:00:00Z</published>
<summary type="text">Household Portfolios and Retirement Saving over the Life Cycle
PARKER, JONATHAN A; SCHOAR, ANTOINETTE; COLE, ALLISON; SIMESTER, DUNCAN
Using account-level data on millions of U.S. middle-class investors over 2006 to 2018, we characterize the share of investable wealth that they hold in the stock market over their working lives. Relative to the 1990s, this share has both risen by 10% and become age-dependent. The Pension Protection Act (PPA)—which allowed target date funds (TDFs) to be default options in retirement plans—played an important role: younger (older) workers starting at a firm after TDFs became the default option post-PPA invested more (less) in stocks, in line with the TDF glidepath. In contrast, contribution rates changed little following the PPA.
</summary>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Many Americans Work Remotely? A Survey of Surveys and Their Measurement Issues</title>
<link href="https://hdl.handle.net/1721.1/163595" rel="alternate"/>
<author>
<name>Brynjolfsson, Erik</name>
</author>
<author>
<name>Horton, John</name>
</author>
<author>
<name>Makridis, Christos</name>
</author>
<author>
<name>Mas, Alex</name>
</author>
<author>
<name>Ozimek, Adam</name>
</author>
<author>
<name>Rock, Daniel</name>
</author>
<author>
<name>TuYe, Hong‐Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/163595</id>
<updated>2026-03-08T03:29:23Z</updated>
<published>2025-10-28T00:00:00Z</published>
<summary type="text">How Many Americans Work Remotely? A Survey of Surveys and Their Measurement Issues
Brynjolfsson, Erik; Horton, John; Makridis, Christos; Mas, Alex; Ozimek, Adam; Rock, Daniel; TuYe, Hong‐Yi
Remote work surged during the COVID-19 pandemic, but estimates vary widely. To address this, we field the Remote Life Survey (RLS), a nationally representative survey. In October 2020, we find that 31.6% of continuously employed workers always worked from home (WFH), and 21.9% did so sometimes or rarely, totaling 53.5%. We compare our results with government surveys and assess four factors contributing to measurement differences: (a) web versus mail-based respondents, (b) inclusion of self-employed workers, (c) occupation mix, and (d) exclusion of pre-pandemic remote workers. We find that (d) explains most of the discrepancy between the Current Population Survey (CPS) and other measures. Policymakers and researchers relying on CPS data should note that it may underestimate remote work prevalence by up to 25 percentage points. Our preferred estimates suggest that about half of the U.S. workforce worked remotely at least one day per week as of December 2020.
</summary>
<dc:date>2025-10-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Internal Variability on Benchmarking Deep Learning Climate Emulators</title>
<link href="https://hdl.handle.net/1721.1/163594" rel="alternate"/>
<author>
<name>Lütjens, Björn</name>
</author>
<author>
<name>Ferrari, Raffaele</name>
</author>
<author>
<name>Watson‐Parris, Duncan</name>
</author>
<author>
<name>Selin, Noelle E</name>
</author>
<id>https://hdl.handle.net/1721.1/163594</id>
<updated>2026-03-08T03:29:23Z</updated>
<published>2025-08-26T00:00:00Z</published>
<summary type="text">The Impact of Internal Variability on Benchmarking Deep Learning Climate Emulators
Lütjens, Björn; Ferrari, Raffaele; Watson‐Parris, Duncan; Selin, Noelle E
Full-complexity Earth system models (ESMs) are computationally very expensive, limiting their use in exploring the climate outcomes of multiple emission pathways. More efficient emulators that approximate ESMs can directly map emissions onto climate outcomes, and benchmarks are being used to evaluate their accuracy on standardized tasks and data sets. We investigate a popular benchmark in data-driven climate emulation, ClimateBench, on which deep learning-based emulators are currently achieving the best performance. We compare these deep learning emulators with a linear regression-based emulator, akin to pattern scaling, and show that it outperforms the incumbent 100M-parameter deep learning foundation model, ClimaX, on 3 out of 4 regionally resolved climate variables, notably surface temperature and precipitation. While emulating surface temperature is expected to be predominantly linear, this result is surprising for emulating precipitation. Precipitation is a much more noisy variable, and we show that deep learning emulators can overfit to internal variability noise at low frequencies, degrading their performance in comparison to a linear emulator. We address the issue of overfitting by increasing the number of climate simulations per emission pathway (from 3 to 50) and updating the benchmark targets with the respective ensemble averages from the MPI-ESM1.2-LR model. Using the new targets, we show that linear pattern scaling continues to be more accurate on temperature, but can be outperformed by a deep learning-based technique for emulating precipitation. We publish our code and data at https://github.com/blutjens/climate-emulator.
</summary>
<dc:date>2025-08-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Don’t Just Send in the Chiefs</title>
<link href="https://hdl.handle.net/1721.1/163593" rel="alternate"/>
<author>
<name>Wright, Randall S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163593</id>
<updated>2026-03-08T03:29:17Z</updated>
<published>2022-04-19T00:00:00Z</published>
<summary type="text">Don’t Just Send in the Chiefs
Wright, Randall S.
A few years ago, my wife and I visited the aircraft carrier Midway on a vacation to San Diego. The Midway is one of the largest aircraft carriers ever built. It was to be deployed in World War II, but the war ended before the Midway could be commissioned. It was the largest ship in the US Navy until 1955 and the first aircraft carrier too big to pass through the Panama Canal. The ship was in service for 47 years, including the Vietnam War and Operation Desert Storm. It’s now a floating museum (Wikimedia Foundation 2022).
</summary>
<dc:date>2022-04-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for the year ended June 30, 2025, Center for Clinical and Translational Research</title>
<link href="https://hdl.handle.net/1721.1/163592" rel="alternate"/>
<author>
<name>Edelman, Elazer</name>
</author>
<id>https://hdl.handle.net/1721.1/163592</id>
<updated>2025-11-07T03:14:39Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for the year ended June 30, 2025, Center for Clinical and Translational Research
Edelman, Elazer
This report contains the following sections:  Goals, Objectives and Priorities; Funding; Personnel; and Accomplishments.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Department of Architecture</title>
<link href="https://hdl.handle.net/1721.1/163591" rel="alternate"/>
<author>
<name>de Monchaux, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/163591</id>
<updated>2025-11-07T03:14:38Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Department of Architecture
de Monchaux, Nicholas
This report contains the following sections: Priorities (Climate, Community, and Design); Administration; Finance; and Staff and Transitions.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Updated measurement of CP violation and polarisation in Bs0 → J/ψK̄*(892)0 decays</title>
<link href="https://hdl.handle.net/1721.1/163590" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>The LHCb Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163590</id>
<updated>2026-03-08T03:29:18Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">Updated measurement of CP violation and polarisation in Bs0 → J/ψK̄*(892)0 decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb Collaboration
A time-integrated angular analysis of the decay Bs0 → J/ψK̄*(892)0, with J/ψ → μ+μ− and K̄*(892)0 → K−π+, is presented. The analysis employs a sample of proton-proton collision data collected by the LHCb experiment during 2015–2018 at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 6 fb−1. A simultaneous maximum-likelihood fit is performed to the angular distributions in bins of the K−π+ mass. This fit yields measurements of the CP-averaged polarisation fractions and CP asymmetries for the P-wave component of the K−π+ system. The longitudinal and parallel polarisation fractions are determined to be f0 = 0.534 ± 0.012 ± 0.009 and f|| = 0.211 ± 0.014 ± 0.005, respectively, where the first uncertainty is statistical and the second is systematic. The CP asymmetries are measured with 3–7% precision and are found to be consistent with zero. These measurements, along with an updated determination of the branching fraction relative to the B0 → J/ψK*0 decay, are combined with previous LHCb results, providing the most precise values for these observables to date.
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurements of inclusive and differential Higgs boson production cross sections at √s = 13.6 TeV in the H → γγ decay channel</title>
<link href="https://hdl.handle.net/1721.1/163589" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Giordano, C.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>The CMS Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163589</id>
<updated>2026-03-08T03:27:21Z</updated>
<published>2025-09-08T00:00:00Z</published>
<summary type="text">Measurements of inclusive and differential Higgs boson production cross sections at √s = 13.6 TeV in the H → γγ decay channel
Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Damanakis, K.; Dragicevic, M.; Giordano, C.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; The CMS Collaboration
Inclusive and differential cross sections for Higgs boson production in proton-proton collisions at a centre-of-mass energy of 13.6 TeV are measured using data collected with the CMS detector at the LHC in 2022, corresponding to an integrated luminosity of 34.7 fb−1. Events with the diphoton final state are selected, and the measured inclusive fiducial cross section is σfid = 74 ± 11 (stat) +5/−4 (syst) fb, in agreement with the standard model prediction of 67.8 ± 3.8 fb. Differential cross sections are measured as functions of several observables: the Higgs boson transverse momentum and rapidity, the number of associated jets, and the transverse momentum of the leading jet in the event. Within the uncertainties, the differential cross sections agree with the standard model predictions.
</summary>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the structure of multiple stable equilibria in competitive ecological systems</title>
<link href="https://hdl.handle.net/1721.1/163588" rel="alternate"/>
<author>
<name>Taylor, Washington</name>
</author>
<author>
<name>O’Dwyer, James</name>
</author>
<id>https://hdl.handle.net/1721.1/163588</id>
<updated>2026-03-08T03:27:23Z</updated>
<published>2025-10-06T00:00:00Z</published>
<summary type="text">On the structure of multiple stable equilibria in competitive ecological systems
Taylor, Washington; O’Dwyer, James
For some ecological systems with a large pool of possible species, there can be multiple stable equilibria with different species composition. Natural or anthropogenic disruption can induce a shift between different such equilibria. While some work has been done on ecological systems with multiple equilibria, there is no general theory governing the distribution of equilibria or characterizing the basins of attraction of different equilibria. This article addresses these questions in a simple class of Lotka-Volterra models. We focus on competitive systems of species on a niche axis with multiple equilibria. We find that basins of attraction are generally larger for equilibria with greater biomass; in many cases, the basin of attraction size scales roughly exponentially with the net biomass of equilibria. This is illustrated in two ecologically relevant limits. In a continuous limit with species spaced arbitrarily closely on the niche axis, equilibria with different numbers of species provide a new perspective on the notion of limiting similarity. In another limit, akin to a statistical mechanical model, the niche axis becomes infinite while the range of interactions remains fixed; in this limit, we prove the exponential relation between basin size and biomass using the Markov chain central limit theorem.
</summary>
<dc:date>2025-10-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of branching fractions and CP asymmetries in Λb0(Ξb0) → pKS0h− decays</title>
<link href="https://hdl.handle.net/1721.1/163587" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>The LHCb Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163587</id>
<updated>2026-03-08T03:29:19Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">Measurement of branching fractions and CP asymmetries in Λb0(Ξb0) → pKS0h− decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb Collaboration
A study of Λb0 and Ξb0 baryon decays to the final states pKS0π− and pKS0K− is performed using pp collision data collected by the LHCb experiment, corresponding to an integrated luminosity of 9 fb−1. The decays Λb0 → pKS0K− and Ξb0 → pKS0K− are observed for the first time, with significances reaching eight standard deviations. The branching fractions and integrated CP asymmetries are measured for the Λb0 → pKS0π−, Λb0 → pKS0K−, and Ξb0 → pKS0K− decays. For the decay Λb0 → pKS0π−, the CP asymmetries are measured in different regions of the Dalitz plot. No evidence of CP violation is observed.
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>(Not so) universal literacy screening: a survey of educators reveals variability in implementation</title>
<link href="https://hdl.handle.net/1721.1/163586" rel="alternate"/>
<author>
<name>Ozernov-Palchik, Ola</name>
</author>
<author>
<name>Elizee, Zoe</name>
</author>
<author>
<name>Catania, Fabio</name>
</author>
<author>
<name>Hacikamiloglu, Meral</name>
</author>
<author>
<name>Shattuck-Hufnagel, Stefanie</name>
</author>
<author>
<name>Petscher, Yaacov</name>
</author>
<author>
<name>Ghosh, Satrajit</name>
</author>
<author>
<name>Gabrieli, John D. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163586</id>
<updated>2026-03-08T03:29:21Z</updated>
<published>2025-10-29T00:00:00Z</published>
<summary type="text">(Not so) universal literacy screening: a survey of educators reveals variability in implementation
Ozernov-Palchik, Ola; Elizee, Zoe; Catania, Fabio; Hacikamiloglu, Meral; Shattuck-Hufnagel, Stefanie; Petscher, Yaacov; Ghosh, Satrajit; Gabrieli, John D. E.
Currently, most states in the United States have enacted legislation mandating universal screening for literacy risk in kindergarten through 3rd grade. However, the degree to which these policies translate into consistent, high-quality screening practices remains unclear. In this survey study, we collected responses from a diverse sample of K–3 educators (N = 251) across 39 states, representing varied school types, professional roles, and experience levels, to examine the real-world implementation of universal screening. Guided by the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework, we analyzed quantitative and qualitative data to identify real-world factors that could impede the fidelity and effectiveness of screening implementation. We found substantial variability across multiple dimensions of literacy screening implementation. Educators described considerable variation in screener selection, administration practices, testing environments, training quality, scoring accuracy, and the use of results to guide intervention. Notably, many indicated insufficient training and professional development, expressing uncertainty about administering and interpreting screeners, particularly for English language learners. Nearly half also reported the absence of systematic procedures for developing intervention plans, suggesting that many students identified as at risk do not receive appropriate follow-up support. These implementation challenges occurred despite widespread recognition among educators of screening’s importance for early literacy intervention. Educators from lower-socioeconomic status schools reported significantly greater time burdens in conducting screenings and more technology-related challenges compared to their higher-SES counterparts. Without systematic improvements to implementation support and training, current screening initiatives may fail to achieve their intended goal of early identification and intervention for struggling readers.
</summary>
<dc:date>2025-10-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Institute Office of Communications</title>
<link href="https://hdl.handle.net/1721.1/163585" rel="alternate"/>
<author>
<name>Ironside, Alfred</name>
</author>
<id>https://hdl.handle.net/1721.1/163585</id>
<updated>2025-11-07T03:14:31Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Institute Office of Communications
Ironside, Alfred
This report contains the following sections: Brand, Digital, and Internal Communications; Media Relations and Crisis Communications; Strategic Communications and MIT News; MIT Copytech; and IOC Staff News.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Laboratory for Nuclear Science</title>
<link href="https://hdl.handle.net/1721.1/163584" rel="alternate"/>
<author>
<name>Wyslouch, Boleslaw</name>
</author>
<id>https://hdl.handle.net/1721.1/163584</id>
<updated>2025-11-07T03:14:40Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Laboratory for Nuclear Science
Wyslouch, Boleslaw
This report contains the following sections: Experimental Nuclear Physics, Experimental Particle Physics, Institute for Artificial Intelligence and Fundamental Interactions, Theoretical Particle and Nuclear Physics, MIT-Bates Research and Engineering Center, MIT Central Machine Shop, and Education.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Institute Events</title>
<link href="https://hdl.handle.net/1721.1/163583" rel="alternate"/>
<author>
<name>Johnson, Ted E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163583</id>
<updated>2025-11-07T03:14:37Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Institute Events
Johnson, Ted E.
This report contains the following sections: Events, Institute Events Office, Community Services Office, and Institute Events Personnel.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Compton imager setup</title>
<link href="https://hdl.handle.net/1721.1/163582" rel="alternate"/>
<author>
<name>Arya, Anuraag</name>
</author>
<author>
<name>Bilkhu, Harmanjeet S.</name>
</author>
<author>
<name>Vishwakarma, Sandeep</name>
</author>
<author>
<name>Belatikar, Hrishikesh</name>
</author>
<author>
<name>Bhalerao, Varun</name>
</author>
<author>
<name>Ghodgaonkar, Abhijeet</name>
</author>
<author>
<name>Koyande, Jayprakash G.</name>
</author>
<author>
<name>Marathe, Aditi</name>
</author>
<author>
<name>Mithun, N. P. S.</name>
</author>
<author>
<name>Narang, Sanjoli</name>
</author>
<author>
<name>Nimbalkar, Sudhanshu</name>
</author>
<author>
<name>Page, Pranav</name>
</author>
<author>
<name>Palit, Sourav</name>
</author>
<author>
<name>Patel, Arpit</name>
</author>
<author>
<name>Shetye, Amit</name>
</author>
<author>
<name>Tallur, Siddharth</name>
</author>
<id>https://hdl.handle.net/1721.1/163582</id>
<updated>2025-11-07T03:12:26Z</updated>
<published>2025-11-04T00:00:00Z</published>
<summary type="text">Development of a Compton imager setup
Arya, Anuraag; Bilkhu, Harmanjeet S.; Vishwakarma, Sandeep; Belatikar, Hrishikesh; Bhalerao, Varun; Ghodgaonkar, Abhijeet; Koyande, Jayprakash G.; Marathe, Aditi; Mithun, N. P. S.; Narang, Sanjoli; Nimbalkar, Sudhanshu; Page, Pranav; Palit, Sourav; Patel, Arpit; Shetye, Amit; Tallur, Siddharth
Hard X-ray photons with energies in the range of hundreds of keV typically undergo Compton scattering when they are incident on a detector. In this process, an incident photon deposits a fraction of its energy at the point of incidence and continues onwards with a change in direction that depends on the amount of energy deposited. By using a pair of detectors to detect the point of incidence and the direction of the scattered photon, we can calculate the scattering direction and angle. The position of a source in the sky can be reconstructed using many Compton photon pairs from a source. We demonstrate this principle in the laboratory by using a pair of Cadmium Zinc Telluride (CZT) detectors sensitive in the energy range of 20–200 keV, similar to those used in AstroSat/CZT Imager (CZTI). The laboratory setup consists of two detectors placed perpendicular to each other in a lead-lined box. The detectors are read out by a custom-programmed Xilinx PYNQ-Z2 FPGA board, and data are then transferred to a personal computer (PC). There are two key updates from CZTI: the detectors are read concurrently rather than serially, and the time resolution has been improved from 20 to 7.5 μs. We irradiated the detectors with a collimated ¹³³Ba source and identified Compton scattering events for the 356 keV line. We ran a Compton reconstruction algorithm to correctly infer the location of the source in the detector frame, with a location-dependent angular response measure of 16°–30°. This constitutes a successful technology demonstration for a Compton imaging camera in the hard X-ray regime. We present the details of our setup, the data acquisition process, and software algorithms, and showcase our results. We also quantify the limitations of this setup and discuss ways of improving the performance in future experiments.
</summary>
<dc:date>2025-11-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sanctuary for Who?</title>
<link href="https://hdl.handle.net/1721.1/163581" rel="alternate"/>
<author>
<name>Salazar, Juan</name>
</author>
<id>https://hdl.handle.net/1721.1/163581</id>
<updated>2025-11-06T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sanctuary for Who?
Salazar, Juan
Philadelphia, often recognized as the poorest major city in the United States, became a Sanctuary City in 2014. The designation committed the region to policies limiting cooperation with federal law enforcement in the persecution of undocumented communities. Policies have ranged from refusing to detain individuals without judicial warrants to prohibiting Immigration and Customs Enforcement (ICE) from accessing municipal databases or facilities for detention purposes. At the community level, the notion of the Sanctuary City sought to promote organizing against unlawful persecution of residents. Over the past eleven years, however, the framework of protection it promised has faltered under mounting federal pressure. The Sanctuary City's symbolic authority and limited scope have failed to shield residents from persecution or restrict ICE's intensifying operations within the area. In 2019, Juntos, the city's foremost immigrant advocacy organization, criticized Philadelphia's Sanctuary status as inadequate. Citing the ongoing persecution of its communities and the declining quality of life for all residents, the organization urged the city to abandon the term "Sanctuary" and to focus instead on meaningfully protecting all residents of Philadelphia, stating, "Let us instead work together to build the kind of city we all want to live in." Juntos's critique forms the basis of this thesis, which takes it as an invitation to reimagine the Sanctuary City, shifting from a policy framework toward a general ethic and design sensibility. This thesis proposes that Philadelphia's crux, like that of all cities, lies in its ability to sustain communities' pursuit of a dignified life. As a primary agent in the formation of cities, the architect must then make this struggle their own and deploy the tools of their discipline to protect life and inspire dignity.
By framing Philadelphia as a city shaped by deindustrialization, disinvestment, and policing, the thesis explores how architecture can respond to these forces by reviving the city's industrial character and establishing new boundaries able to safeguard community rights. Integrating legal, spatial, and semantic insights from federal authorities' rules of engagement will provide novel typologies and programs for the city that address its systemic inequities while fostering environments where life and dignity can flourish. By inscribing meaningful boundaries and re-equipping the city to make for itself, the thesis suggests that architecture can become a tool for collective protection and urban regeneration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts</title>
<link href="https://hdl.handle.net/1721.1/163580" rel="alternate"/>
<author>
<name>Hirt, Natasha K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163580</id>
<updated>2025-11-06T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts
Hirt, Natasha K.
To meet the needs of growing populations, rates of new construction are increasing at a record pace worldwide. The built environment, already one of the single largest contributors to global CO₂e emissions, will become a significant environmental challenge in the coming decades. To mitigate the anticipated environmental impact of future construction, we need to rethink how we build.&#13;
&#13;
One strategy, which is the subject of this work, is improving the material efficiency of flexural systems like floors. Floors are among the most materially wasteful structural components in buildings, and while decades of research have explored optimal floor system design, the complexity of proposed solutions has limited their practical implementation. Furthermore, the industrial tools available to structural designers do not lend themselves to flexible experimentation or large-scale analysis. As a result, most flexural systems today rely on approximations and rules of thumb rather than mathematically optimal designs, data-driven decision making, or iterative design processes.&#13;
&#13;
This thesis bridges the gap between practical engineering, material efficiency, and design freedom. It presents novel, code-compliant tools for the computational analysis and optimization of flat slabs supported by a network, or grillage, of beams, using a model system of reinforced concrete supported by steel W-sections. The method is used to perform a large-scale analysis of 24,192 unique combinations of beam topologies and assembly design decisions. The results of this analysis find improvements in structural embodied carbon of up to 53.4% over the business-as-usual design case, and also yield generalizable takeaways about the key factors influencing material efficiency in floor slabs. &#13;
&#13;
One of the advantages of the method is its flexibility in taking on a range of complex design challenges. These are presented as extensions to the method, and include designing with a constrained inventory for a series of real-world case studies, and automatically deriving novel structural geometries from dense ground structures.&#13;
&#13;
The method and results shown in this thesis expand the range of analysis tools that engineers have access to, enabling a wide range of creative designs and explicitly linking design decisions to environmental impact.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inscrutability: An Epistemological Experiment</title>
<link href="https://hdl.handle.net/1721.1/163579" rel="alternate"/>
<author>
<name>Huang, Brian Hudson</name>
</author>
<id>https://hdl.handle.net/1721.1/163579</id>
<updated>2025-11-06T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inscrutability: An Epistemological Experiment
Huang, Brian Hudson
Through four different projects, this thesis explores the idea of dimensions of representation, a concept introduced by the 20th-century French philosopher Michel Foucault in his book The Order of Things. Foucault argues that the Classical episteme, which he defines as the discourse surrounding knowledge-making that lasted from the 17th century to the 19th century, was determined by the idea of dimensions of representation. This idea holds that during the Classical episteme, knowledge was formulated through representations of the external world, such as systems of classification, ordering, and relations, rather than through resemblance. The first project, Holes in the Sieve (2023), addresses the problematics of classification through an infamous case in the history of paleoanthropology: the Piltdown Man. The second project, Contrapposto in Space (2024), addresses how representation has been instrumentalized in technoscience through space research. Finally, the last two projects, the Poem Box (2024) and Micropoetry (2025), posit a way forward at the limits of representation by engaging with semiotic theory. By engaging with language games, poetry opens up the possibility to deny the position of being knowable, allowing one to disappear into inscrutability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan</title>
<link href="https://hdl.handle.net/1721.1/163578" rel="alternate"/>
<author>
<name>El Haq, Haidar</name>
</author>
<id>https://hdl.handle.net/1721.1/163578</id>
<updated>2025-11-06T03:07:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan
El Haq, Haidar
Throughout Indonesia’s colonial and postcolonial histories, the peatlands of Kalimantan have been not only politically contested spaces but also sites of ontological struggle. From transmigrasi programs to Suharto’s Mega-Rice Project and most notably today’s carbon offset regimes, peat has been transformed into a paradoxical ecology: degraded yet investible, conserved yet profitable. These transformations enclose land, force communities to choose between extraction or restoration, criminalize fire, and abandon regenerative forms of cultivation. These are histories of ontological occupation institutionalized: the marginalization of both peat’s inhabitants and the soil itself as world-making agents, shaped by speculative regimes of governance, rooted in planetary imaginaries of climate salvation and fantasies of productivity. This thesis proposes Koalisi Lahan–Gambut (Peat–Land Coalition), a speculative parainstitution that explores how coalitional spatial practices might reclaim inhabitation in peat ecologies. Situated in a Ngaju village within the buffer zone of one of the world’s largest carbon offset territories—between deep peat and riverine edges, between restoration enclosures and plantation areas—the coalition works through the murkiness of peat, the heterogeneity of its inhabitants, and the crowded terrain of overlapping institutional claims. It foregrounds the frictions between gambut (peat) and lahan (land). Structured across three inquiries, the document presents a Living Glossary that assembles field terms and relational epistemologies drawn from Kalimantan’s peatlands; a genealogy of Governance, Carbon Fix, and Buffer Zone that traces the historical and institutional processes that rendered peatlands governable; and Landing in the Buffer Zone, which turns to the coalition’s situated experiments in becoming-with, inhabiting, and reclaiming the space between peat and land.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering TEV Protease Specificity: An Exploration of Machine Learning and High-Throughput Experimentation for Protein Design</title>
<link href="https://hdl.handle.net/1721.1/163577" rel="alternate"/>
<author>
<name>Sundar, Vikram</name>
</author>
<id>https://hdl.handle.net/1721.1/163577</id>
<updated>2025-11-06T03:04:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Engineering TEV Protease Specificity: An Exploration of Machine Learning and High-Throughput Experimentation for Protein Design
Sundar, Vikram
Engineering sequence-specific proteases would enable a wide variety of therapeutic applications in diseases ranging from cancer to Parkinson’s disease. However, many previous experimental and physics-based attempts at protease engineering have failed to engineer specificity in cleaving alternative substrates, rendering them useless. In this thesis, we aim to engineer TEV (tobacco etch virus) protease, a highly sequence-specific protease, to cleave alternative substrates. We incorporate novel high-throughput assays and powerful machine learning (ML) methods for highly effective protein engineering. The first portion of this thesis focuses on generating fitness landscapes from high-throughput experiments. Most machine learning models do not account for experimental noise, harming model performance and changing model rankings in benchmarking studies. Here we develop FLIGHTED, a Bayesian method of accounting for uncertainty by generating probabilistic fitness landscapes from noisy high-throughput experiments. We demonstrate how FLIGHTED can improve model performance on two categories of experiments: single-step selection assays, such as phage display, and a novel high-throughput assay called DHARMA that ties activity to base editing. FLIGHTED can be used to generate robust, well-calibrated fitness landscapes, and when combined with DHARMA, our methods enable us to generate fitness landscapes of millions of variants. We then evaluate how to model protein fitness given a fitness dataset of millions of variants. Accounting for noise via FLIGHTED significantly improves model performance, especially of high-performing models. Data size, not model scale, is the most important factor in improving model performance. Furthermore, the choice of top model architecture matters more than the protein language model embedding. The best way to generate sufficient data scale is via error-prone PCR libraries; models trained on these landscapes achieve high accuracy. 
Using these methods, we successfully engineer both activity on an alternative substrate and specificity when compared to the wild-type. The ML-designed variants outperform anything found in the training set, demonstrating the value of machine learning even with experimental libraries of millions of variants. However, our results are limited to relatively close substrates. How best to improve model performance on distant substrates remains an open question.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine</title>
<link href="https://hdl.handle.net/1721.1/163576" rel="alternate"/>
<author>
<name>Tamburro, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/163576</id>
<updated>2025-11-06T03:07:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine
Tamburro, Alexandra
Reducing lubricating oil consumption (LOC) in reciprocating engines is an increasingly important objective in the pursuit of lower greenhouse gas emissions, longer maintenance intervals, and compliance with tightening environmental regulations. In 2022, the U.S. transportation sector alone was responsible for 29% of national greenhouse gas emissions, 87% of which originated from systems powered by reciprocating engines [1]. While significant progress has been made in fuel efficiency, oil consumption remains a key contributor to carbon emissions. This research investigates the impact of design parameters in three-piece oil control rings (TPOCRs) and liner surface finish on oil consumption behavior.&#13;
&#13;
Utilizing a hydrogen-fueled engine—where the only source of CO₂ emissions is from consumed lubricating oil—this study develops a high-fidelity, FTIR-based method for direct LOC measurement. A derivation of oil consumption based on air and fuel mass flow rates and measured CO₂ emissions is presented, alongside a sensitivity analysis that identified FTIR measurement uncertainty and ambient CO₂ variation as dominant error sources. All experiments were conducted at 2000 RPM under medium load (4 bar IMEP). The experimental results showed that under the tested condition, (1) increasing liner roughness increases LOC and (2) reorienting any rail with an asymmetrical profile to favor up-scraping elevates LOC. Analyses applying liner vaporization and TPOCR models showed that the changes in liner oil film thickness brought about by the TPOCR changes have a negligible effect on LOC from oil evaporation. Increases in the upper rail's up-scraping ability and oil accumulation inside the TPOCR groove can both elevate LOC, although further investigation is needed to understand the oil transport paths leading to LOC.&#13;
&#13;
This work provides a foundation for future optimization of TPOCR design by highlighting key ring-liner interactions and oil transport mechanisms. Further study of asymmetric geometries and surface characteristics will provide further insights for reducing oil consumption in engine platforms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape</title>
<link href="https://hdl.handle.net/1721.1/163575" rel="alternate"/>
<author>
<name>Bhupathi, Hari Raghavendran</name>
</author>
<id>https://hdl.handle.net/1721.1/163575</id>
<updated>2025-11-06T03:07:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape
Bhupathi, Hari Raghavendran
In 2021, the United States committed to achieving net-zero greenhouse gas emissions by 2050, requiring a fundamental transformation of its energy infrastructure. This thesis develops a nationwide optimization model to minimize capital expenditures and understand the trade-off between renewable capacity, storage, and transmission networks. The results show that the least-cost configuration, achieved when nuclear and battery capital costs fall by 50%, requires approximately $3.25 trillion in new investment - a 37% reduction relative to the baseline scenario. Comparative scenario analysis reveals a marked shift toward centralized storage when nuclear costs decline, which improves reliability and reduces contingency requirements - mirroring inventory pooling dynamics in supply chains. Concurrently, wind capacity additions fall sharply, with each 10% reduction in nuclear cost halving the predicted wind capacity addition. Transmission infrastructure evolves accordingly: 765 kV lines decline as nuclear becomes more decentralized, while 230 kV lines expand modestly to manage increased intermittency. By quantifying trade-offs across technologies and identifying system tipping points, this work offers a framework for policymakers and long-horizon investors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Barometer-Based Tactile Sensing: Characterization, Processing, and Applications for Dynamic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163574" rel="alternate"/>
<author>
<name>Shah, Sharmi</name>
</author>
<id>https://hdl.handle.net/1721.1/163574</id>
<updated>2025-11-06T03:06:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Barometer-Based Tactile Sensing: Characterization, Processing, and Applications for Dynamic Manipulation
Shah, Sharmi
Reliable tactile feedback is essential for robotic systems to interact effectively with their environments, especially in dynamic manipulation tasks where detecting contact onset, direction, and force is critical for control and planning. This thesis advances the development of barometer-based tactile sensors for low-force interactions, building upon prior work from the Biomimetic Robotics Lab. Previous work demonstrated that neural networks could infer contact location and three-axis contact force from barometers embedded within an elastomer. However, these models did not account for the viscoelastic behavior of the elastomer, which degrades sensor repeatability and bandwidth. To address these limitations, this thesis introduces a recurrent neural network (RNN) architecture that captures viscoelastic transients in the sensor response. The proposed methods are evaluated on two sensor geometries: a spherical sensor and a slimmer ellipsoid variant. An automated data collection pipeline is developed to generate temporally continuous, uniformly sampled datasets across the sensor surface. RNN models trained on this data show that temporal modeling improves force prediction accuracy across both designs. To improve angle prediction accuracy, a binning strategy is used to enforce a uniform prior over contact orientations. The resulting "Binned RNN" neural networks are small-scale and demonstrate high sensitivity, enabling responsive tactile feedback. The utility of these tactile sensors is demonstrated by integrating the sensors onto a dexterous two-finger gripper and performing light grasping and estimation of object reorientation using solely tactile measurements. This work shows that accounting for viscoelastic effects through informed sampling and temporal modeling enhances the practical performance of elastomer-based tactile sensors in robotic systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can diffusion models capture extreme event statistics?</title>
<link href="https://hdl.handle.net/1721.1/163573" rel="alternate"/>
<author>
<name>Stamatelopoulos, Stamatios</name>
</author>
<id>https://hdl.handle.net/1721.1/163573</id>
<updated>2025-11-06T03:07:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Can diffusion models capture extreme event statistics?
Stamatelopoulos, Stamatios
For many important problems it is essential to be able to accurately quantify the statistics of extremes for specific quantities of interest, such as extreme atmospheric weather events or ocean-related quantities. While there are many classical approaches to perform such modeling tasks, recent interest has been increasing in the usage of generative models trained on available data. Despite the sporadic success of such methods, it is not clear for what systems or datasets a system-agnostic generative AI tool is capable of generating previously ‘unseen’ extreme events in a manner that accurately extrapolates the tails for the observable of interest. Here, we propose an apriori criterion, which based on the geometry of the training dataset, it can predict whether a generative AI tool will be able to extrapolate the tails, i.e. generate previously unseen extreme events. The idea is to quantify whether existing extreme events lie in the interior of the dataset or its boundary. In the former case it is shown that generative AI tools can work in an ‘interpolation’ mode and generate new extreme events. On the other hand, if the topology of the dataset is such that extremes live in the boundary of the domain then the generative AI algorithm needs to operate in an extrapolation mode which does not lead to accurate results. We illustrate our findings on a specific class of Diffusion Models (DMs) called Denoising Diffusion Probabilistic Models (DDPMs) and we test on three datasets, a simple on-hyperball dataset following a Weibull distribution for the radii of the data points of dimensionality 2 • 10³, a dataset sampled from the so-called Majda-McLaughlin-Tabak Wave Model (MMT), of dimensionality 8.1 • 10³ and a dataset consisting of Lagrangian turbulence trajectories, of dimensionality 2 • 10³.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings</title>
<link href="https://hdl.handle.net/1721.1/163572" rel="alternate"/>
<author>
<name>Ajienka, Soala Lolia</name>
</author>
<id>https://hdl.handle.net/1721.1/163572</id>
<updated>2025-11-06T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings
Ajienka, Soala Lolia
This thesis proposes the weaving together of two lost traditions - the practice of primary glassmaking in southern Nigeria and the U-shaped bungalow typology of multi-family housing - as a means to address both the qualitative and quantitative housing deficits in Port Harcourt and to support the broader requisites of macroeconomic productivity in Nigeria. The thesis frames the argument that the materiality and application of glass can reconnect the inhabitation and construction of Face Me, I Face You (FMIFY) housing to Nigerian history, culture, and identity. By charting a blueprint for localized material production and engaging questions of affordability, cost structure, and financing, this work positions design as a technical solution and an act of cultural authorship. As an architect, builder, and member of the community, I advocate for a new practice in which the bond between local craftsmanship and housing development is re-established - through material choices, construction systems, economic benchmarking, and spatial design strategies. This body of work braids together three interconnected narratives: First, it traces the historical evolution of the U-shaped bungalow typology, revealing its roots as a colonial adaptation of the rural compound house and the economic conditions that have led to its physical obsolescence yet sustained market relevance, and examining how its cultural significance was gradually diluted through climate-insensitive design and the introduction of imported materials. Second, this body of work rediscovers Nigeria’s precolonial glassmaking traditions, with a focus on artisanal production methods that offer environmental efficiency, energy intelligence, and deep cultural resonance - qualities in stark contrast to the high-energy, standardized imported glass that dominates today’s housing.
Third, it integrates these two recoveries through built interventions: redesigning roof structures to support artisanal glass rondels, optimizing daylighting, ventilation, and thermal comfort, and reorganizing courtyards to revive their role as culturally vibrant, socially essential spaces. By leveraging indigenous glassmaking practices and small-batch production models, this thesis advocates for the creation of a circular economy, generating local employment, reducing embodied energy, and restoring cultural resilience - while delivering environmentally sensitive and economically viable housing solutions that demonstrate comparable return on costs for their owners. Foregrounding opacity as a design value, the project seeks to balance communal life with cultural and spatial notions of privacy, challenging the hegemony of imported transparency. Through the strategic curation of apertures, the careful modulation of light and shadow, and the integration of locally crafted glass rondels, the thesis re-envisions the Face Me, I Face You typology. Ultimately, this work positions artisanal glass not only as a building material, but as a medium for recalibrating housing production in southern Nigeria toward systemic resilience and self-determination.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163571" rel="alternate"/>
<author>
<name>Ulloa, Gabriella E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163571</id>
<updated>2025-11-06T03:06:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation
Ulloa, Gabriella E.
DexWrist is a compliant robotic wrist designed to advance robotic manipulation in highly constrained environments, enable dynamic tasks, and speed up data collection. DexWrist is designed to be close to the functional capabilities of the human wrist and achieves mechanical compliance and a greater workspace compared to existing robotic wrist designs. The DexWrist can supercharge policy learning by (i) enabling faster teleoperation and therefore making data collection more scalable; (ii) completing tasks in fewer steps, which reduces trajectory lengths and can therefore ease policy learning; (iii) being torque transparent with easily simulatable kinematics for simulated data collection; and, most importantly, (iv) expanding the workspace of manipulation for approaching highly cluttered scenes and tasks. More details about the wrist can be found at: https://sites.google.com/view/dexwrist/home.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Labor: Sensable Instructions through Digital Jigs</title>
<link href="https://hdl.handle.net/1721.1/163570" rel="alternate"/>
<author>
<name>Griffin, Danny</name>
</author>
<id>https://hdl.handle.net/1721.1/163570</id>
<updated>2025-11-06T03:07:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Guiding Labor: Sensable Instructions through Digital Jigs
Griffin, Danny
Contemporary architects find themselves at a juncture, navigating the transition from traditional modes of instruction to an asymmetrical integration of digital technologies. Drawings remain central to architectural practice, yet a widening gap persists between tools for making drawings and tools for interpreting them. Since Alberti’s division between intellectual and productive labor, architectural instructions have been generated in remote offices and executed on distant construction sites. Digital tools have expanded the information density of drawings, yet the process of interpretation remains predominantly analog. Graphical conventions, though precise, are abstract, and so paper instructions alone lack spatial meaning. Builders ultimately rely on the aid of analog locating techniques to translate these abstractions into actions. Tools as simple as strings and squares have long been present on construction sites, enabling this translation. Over time, the shape and function of such devices have evolved in response to different pressures of location, from the Gothic template which left room for the builder to improvise, to the industrial jig that constrained movement to ensure replicability. The limitations of analog locating became clear when the plumb bob, long trusted to mark which direction was vertical, proved inadequate for navigating trajectories of flying objects. The solution was to embed physical devices with memory, marking a transition from tools which measure where they are to those that know where they are going. This shift from stateless to stateful devices gradually entered construction sites, and though we might distrust the devices that make possible the steering of missiles, this paradigm shift offers a productive challenge to the field of architecture. If simplifying complex construction is worthwhile, then communication pathways which more faithfully transfer information from digital model to physical destination must be explored.
Central to this transformation are the tools which anchor instructions on site: interfaces already mediating between architect and builder, which must now evolve to interpret digital signals from afar. Digital jigs will be the conduits of paperless instruction on physical sites, enabling what this thesis terms sensable instructions: instructions receivable by both machines and humans.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface</title>
<link href="https://hdl.handle.net/1721.1/163569" rel="alternate"/>
<author>
<name>Bei, Yining</name>
</author>
<id>https://hdl.handle.net/1721.1/163569</id>
<updated>2025-11-06T03:06:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface
Bei, Yining
Designers often rely on keyboard and mouse for 3D modeling, a method that can feel unintuitive or restrictive—especially in collaborative or spatially immersive settings. This thesis explores how multimodal interaction, specifically the combination of hand gestures and voice commands, can support more natural, efficient, and accessible 3D modeling in virtual reality (VR). Built on a custom Unity-based system integrating Meta Quest hand tracking and Wit.ai voice recognition, the study investigates how these two input modes—gesture and speech—can be used together to manipulate and modify 3D geometry in real time. The research proceeds in three phases: (1) a formative study analyzing how users intuitively deploy gestures, revealing common preferences, task breakdown strategies, and limitations in gesture inputs; (2) system design and implementation of both gesture-only and gesture + speech interfaces for navigation and object manipulation (e.g., translation, scaling, duplication); and (3) a comparative user study evaluating gesture-only, gesture + speech, and keyboard + mouse workflows in terms of learning curve, task efficiency, and user satisfaction. Results show that gesture + speech enables smoother transitions across modeling subtasks and allows users to offload certain parameters (e.g., numeric values, distances) to voice while using gestures for spatial control. Participants reported higher engagement and lower cognitive load compared to keyboard-based workflows, especially in tasks involving spatial scale and collaboration. This thesis demonstrates the feasibility and design potential of multimodal interaction for immersive modeling workflows and offers insights for future XR design tools that seek to blend precision with embodied interaction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning-Guided Optimization for Intelligent Mobility Systems</title>
<link href="https://hdl.handle.net/1721.1/163568" rel="alternate"/>
<author>
<name>Li, Sirui</name>
</author>
<id>https://hdl.handle.net/1721.1/163568</id>
<updated>2025-11-06T03:04:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Learning-Guided Optimization for Intelligent Mobility Systems
Li, Sirui
Efficient and reliable mobility systems are essential to modern-day society, with broad impacts ranging from day-to-day commuting, public transportation, and emergency response to last-mile package delivery and freight logistics. Autonomous vehicles have the potential to improve mobility efficiency and convenience but also raise questions about the reliability and feasibility of deployment. The first contribution of this thesis is a set of novel, principled control-theoretical analyses that provide strong stability and reliability guarantees for autonomous vehicles and human-compatible driving, further covering emergent traffic behaviors in mixed-autonomy systems. While these theoretical guarantees offer valuable insights, mobility systems are inherently complex, and their overall performance often relies on solving difficult optimization problems, many of which are combinatorial, thus presenting significant scalability challenges. Overcoming these challenges requires innovative approaches that extend beyond traditional control techniques. This thesis further contributes a set of machine learning-guided optimization algorithms that significantly enhance the efficiency and scalability of solving combinatorial optimization problems. These algorithms have proven effective across a wide range of mobility-related applications. Compared to state-of-the-art solvers, they achieve 10× to 100× speed-ups in large-scale vehicle routing problems, 35% to 70% solve-time improvements in various mixed-integer linear programming problems, and up to 54% acceleration in long-horizon scheduling problems. These advancements open new possibilities for efficient decision-making in large-scale transportation systems, enabling smarter, faster, and more adaptive mobility solutions.
Combining learning, optimization, and control, this thesis demonstrates the potential of learning-guided optimization and principled control-theoretical analysis to address the increasing complexity of modern mobility systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Top-down and bottom-up interactions for cortical bursting</title>
<link href="https://hdl.handle.net/1721.1/163567" rel="alternate"/>
<author>
<name>Tang, Vincent D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163567</id>
<updated>2025-11-06T03:04:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Top-down and bottom-up interactions for cortical bursting
Tang, Vincent D.
High-frequency burst firing occurs throughout the mammalian cortex in vivo, yet both the underlying mechanisms and functional roles of bursts are unclear. Burst firing in brain slices is strongly modulated by the activity of apical dendrites, which branch extensively in layer 1 (L1) and receive long-range inputs from higher-order cortical and thalamic areas. These properties suggest a powerful subcellular substrate by which single pyramidal neurons could multiplex bottom-up and top-down information via L1-independent tonic spikes and L1-dependent bursts, respectively, and have provided a basis for emerging theoretical models of cortical computation and learning. However, our understanding of burst firing and subcellular processing remains critically limited by a lack of evidence in awake animals. It is unclear whether burst firing a) is preferentially recruited by bottom-up versus top-down inputs, and b) requires apical dendritic engagement. To answer these questions, we performed high-density extracellular recordings in primary visual cortex of awake mice while presenting a battery of Gabor (bottom-up) and inverse (top-down) visual stimuli. We report widespread high-frequency bursts in L2/3 and L5 pyramidal neurons. Contrary to expectation, bursts exhibited extremely short response latencies, and were most strongly recruited by Gabor stimuli. We further tested the causal contribution(s) of apical dendrites to burst firing and top-down visual tuning via two optogenetic manipulations: direct L5 apical tuft inhibition and NDNF interneuron activation. Strikingly, L1 inhibition only modestly reduced the burst fraction, and did not differentially affect Gabor vs inverse responses. Taken together, these results challenge prevailing theories of apical dendritic involvement in burst spike generation and feedback visual tuning, and provide new biological constraints for future theoretical and experimental work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices</title>
<link href="https://hdl.handle.net/1721.1/163566" rel="alternate"/>
<author>
<name>Stamler, Natasha Lia</name>
</author>
<id>https://hdl.handle.net/1721.1/163566</id>
<updated>2025-11-06T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices
Stamler, Natasha Lia
Access to clean water is a serious challenge around the world, with almost two-thirds of the global population experiencing water scarcity at some point during the year, especially in dry regions. One solution to this problem is sorption-based atmospheric water harvesting (SAWH), owing to its ability to produce drinking water in a range of environments, including at low humidity. SAWH device operation is composed of adsorption and desorption phases. During adsorption, moist air flows into the device and water is adsorbed onto the sorbent bed. This is followed by the desorption phase, during which the sorbent is heated to desorb the water as vapor, which is then transported to a colder condenser surface on which it is condensed as liquid water. Finally, the condensed water can be collected outside the device. However, current state-of-the-art SAWH devices are inefficient, with less than 70% of their adsorbed water being collected. This means the adsorbed water is either not condensed or condensed but not collected. This work discusses the impact of the coupling between desorption and condensation on the efficiency of SAWH devices. In general, SAWH systems can suffer from three scenarios of inefficient desorption-condensation: flux-limited, when the desorption rate in the device is insufficient to fully utilize the condenser’s condensation capacity; transport-limited, when the time scale of vapor transport from the sorbent bed to the condenser is slow compared to the desorption operation time; and condenser-limited, when the condenser has a poor thermal design relative to the vapor flux. We developed a system-level model of a SAWH device to inform design strategies to mitigate these three bottlenecks and optimize device performance. Additionally, we quantified hydrocarbons, common airborne contaminants, as a mechanism for slowing water collection.
Experimental findings are used to develop a model for the impact of airborne hydrocarbon adsorption on surface wettability and water retention for six metals commonly used as condenser materials. The findings from these models can inform design recommendations for SAWH devices as well as various other industrial applications in which water condenses on metal surfaces such as refrigeration and power generation. Future work will focus on continued experimental validation of the models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior</title>
<link href="https://hdl.handle.net/1721.1/163565" rel="alternate"/>
<author>
<name>Rodriguez, Camille Dyani</name>
</author>
<id>https://hdl.handle.net/1721.1/163565</id>
<updated>2025-11-06T03:06:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior
Rodriguez, Camille Dyani
Vimentin, a type III intermediate filament, is an understudied component of the cytoskeletal system. However, recent studies show that its structural and mechanical properties aid in a cell's survival and migration. It forms a hyperelastic network and works synergistically with actin and microtubules to protect against large deformations. Despite the critical role of vimentin intermediate filaments in many biological processes, there are limited studies of their role in collective migration in 3D in vitro. To elucidate vimentin’s role in a collective cell cluster, single MCF-7 cells are embedded in a Matrigel-alginate gel, where they grow into multicellular systems. The MCF-7 cells utilized are vimentin null and chemically inducible to form vimentin networks that interact with the other components of the cytoskeleton. These MCF-7 cells allow for controlled expression of mature vimentin intermediate filaments (VIFs), which then form networks. We study these multicellular clusters over the course of 14 days. We demonstrate that there are key differences in morphology and mechanics in the presence of vimentin. Our results suggest VIFs create more irregular cell clusters with more visible dynamic interplay with the environment. Uninduced (no VIFs) clusters were overall less dynamic and exhibited spherical morphology and minimal protrusions. Clusters with mature VIFs tended to form more elongated multicellular clusters with an increased number of projections into the surrounding gel. In these induced (with VIFs) clusters, the projections are shown to be constantly protruding and retracting while the nuclei continually reorganize. Our results show that these projections are accompanied by increased protrusive and contractile gel displacements. These results indicate that vimentin networks generate a dynamic and functional morphology, along with mechanically perturbing the environment, in the early stages of cell cluster collective behavior.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for Dynamic Nonprehensile Object Transport</title>
<link href="https://hdl.handle.net/1721.1/163564" rel="alternate"/>
<author>
<name>Wang, Eric K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163564</id>
<updated>2025-11-06T03:06:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Planning for Dynamic Nonprehensile Object Transport
Wang, Eric K.
Generalized planning methods for dynamic manipulation struggle to efficiently handle kinodynamic constraints. Gradient-based methods suffer from initialization sensitivity, convergence to local optima, and a lack of feasibility guarantees, while sampling-based methods can require large computation times when boundary conditions are challenging. Iterative Time-Optimal Path Parameterization, or iTOPP, guarantees a feasible local minimum for a dynamic grasping problem by iteratively decreasing the transit time of a trajectory initially generated to satisfy kinodynamic contact constraints. We demonstrate solutions that can handle initial or final goal states that are quasistatically infeasible, in which purely quasistatic motions cannot generate a warm-start trajectory. We also design an indirect adaptive controller that can track a desired dynamic grasping trajectory assuming unknown object mass and location parameters.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites</title>
<link href="https://hdl.handle.net/1721.1/163563" rel="alternate"/>
<author>
<name>Webb, Alisa Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/163563</id>
<updated>2025-11-06T03:08:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites
Webb, Alisa Nicole
Throughout the aerospace industry, carbon fiber reinforced polymer (CFRP) laminated composites are used extensively in spacecraft and aircraft vehicles due to their high specific strength and stiffness and other properties. Processing these advanced structural CFRP composites, especially in prepreg form, is often completed via autoclaves where elevated temperatures and pressures of typically 180 °C (350 °F) and 0.7 MPa (7 bar), respectively, are applied to cure the polymer matrix and compress the constituent laminae together. However, autoclaves are energy intensive, expensive, and impose geometrical constraints on components due to thermal gradients within the chamber. Thus, there exists a need to find alternative manufacturing techniques. Throughout this thesis, an alternative method to autoclave processing is presented using vacuum-bag-only (VBO) techniques with nanoporous networks (NPNs) in the interlaminar regions of autoclave-required epoxy prepreg CFRP composites. Nanoporous materials are defined as materials containing pores in the mid-nanometer to low-micrometer range. Once placed in the interlaminar region of the laminate, voids are reduced by the induced capillary pressures of the NPNs, and trapped gas evacuates through the NPN. By utilizing capillary flow porometry, capillary pressure and through-thickness permeability are quantified for various NPNs, along with other porous materials. Capillary pressure and permeability exhibit an inversely proportional relationship for all tested materials, with CNT-based and polymer aerogel NPNs providing capillary pressures higher than an autoclave pressure of 0.7 MPa. Accordingly, an Ashby-type plot is presented as an aid for NPN selection for composites manufacturing.
Previous studies of unidirectional glass fiber reinforced polymer (GFRP) composites and unidirectional CFRP composites show success with NPN-enabled VBO manufacturing using aligned carbon nanotubes (A-CNTs) and electrospun polymer nanofiber (EPN) mats. However, success with woven prepreg had not been consistently achieved before this thesis. Autoclave-required woven epoxy CFRP laminates of IM7/8552 are manufactured using EPN and polymer aerogel NPNs with a VBO procedure. Once manufactured, these laminates were characterized for quality through void content analysis. A void content of 0.11 vol% was achieved, well within the 1 vol% requirement for aerospace-grade composite components. To aid in the understanding of NPNs, in situ experiments utilizing microcomputed tomography are developed to investigate the (presumed Newtonian) flow of resin throughout the NPN as a function of temperature, which varies throughout a typical manufacturer-recommended cure cycle (MRCC), along with the void evolution throughout the cure cycle. Based on this new in situ understanding, a manufacturing process modification is devised to produce void-free woven laminates at the 152.4 mm laminate scale. Through manufacturing, material characterization, and designed in situ experiments, this thesis demonstrates the use of NPNs for VBO manufacturing of low-void-content aerospace-grade CFRP composites to replace autoclaves for energy and cost savings.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/163562" rel="alternate"/>
<author>
<name>Gao, Jin</name>
</author>
<id>https://hdl.handle.net/1721.1/163562</id>
<updated>2025-11-06T03:08:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making
Gao, Jin
Cities are dynamic and evolving organisms shaped through the check-and-balance of interest exchange. As cities gain complexity and more stakeholders become involved in decision-making, reaching consensus becomes the core challenge and the essence of the urbanism process. This thesis introduces a computational framework for AI-augmented collective decision-making in urban settings. Based on real-world case studies, the core decision-making process is abstracted as a multiplayer board game modeling the check-and-balance dynamics among stakeholders with differing values. Players are encouraged to balance short-term interests and long-term resilience, and to evaluate the risks and benefits of collaboration. The system is implemented as a physical interactive play-table with digital interfaces, enabling two use cases: simulating potential outcomes via AI self-play, and human–agent co-play via human-in-the-loop interactions. Technically, the framework integrates multi-agent reinforcement learning (MARL) for agent strategy training, multi-agent large language model (LLM) discussions to enable natural language negotiation, and retrieval-augmented generation (RAG) to ground decisions in contextual knowledge. Together, these components form a full-stack pipeline for simulating collective decision-making enriched by human participation. This research offers a novel participatory tool for planners, policymakers, architects, and the public to examine how differing values shape development trajectories. It also demonstrates an integrated approach to collective intelligence, combining numerical optimization, language-based reasoning, and human participation, to explore how AI–AI and AI–human collaboration can emerge within complex multi-stakeholder environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating space for HVAC systems: A new, intuition-building approach to HVAC system integration in architectural education and practice</title>
<link href="https://hdl.handle.net/1721.1/163561" rel="alternate"/>
<author>
<name>Irani, Ali</name>
</author>
<id>https://hdl.handle.net/1721.1/163561</id>
<updated>2025-11-06T03:04:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Creating space for HVAC systems: A new, intuition-building approach to HVAC system integration in architectural education and practice
Irani, Ali
Heating, Ventilation, and Air Conditioning (HVAC) systems are vital to ensuring a healthy indoor environment in buildings. They are essential to the global shift toward a decarbonized, all-electric future. While integrated design practice has promised cost, energy, and space savings due to earlier and more frequent collaboration between design disciplines, remaining missed opportunities in the HVAC system design and coordination process often lead to spatial conflicts, performance tradeoffs, and uncomfortable spaces. This dissertation aims to understand current coordination practices to identify the root causes of existing problems, timeline issues, and knowledge gaps. Then, it proposes a series of enhancements to address these shortcomings, focusing on National Architectural Accrediting Board (NAAB) accredited architectural education programs that train the next generation of practicing architects. The proposed research hypotheses are validated in a three-part research approach: (1) releasing architecture industry surveys and conducting interviews, (2) designing and testing an early-stage design tool, and (3) developing, implementing, and evaluating a comprehensive HVAC curriculum for architecture students. The dissertation demonstrates that with the right tools and educational resources, architecture students can make informed, intuition-based HVAC system selections and integrate them into their building design, with students who studied the comprehensive curriculum demonstrating a 13% improvement in understanding and application of HVAC concepts compared to a control group of students. This work helps bridge the knowledge gap regarding HVAC systems, empowering designers to coordinate more effectively and prioritizing the role of HVAC systems in building performance simulation education.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World</title>
<link href="https://hdl.handle.net/1721.1/163560" rel="alternate"/>
<author>
<name>Apostolopoulou, Katerina</name>
</author>
<id>https://hdl.handle.net/1721.1/163560</id>
<updated>2025-11-06T03:08:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World
Apostolopoulou, Katerina
With over 86,000 kilometers of crude oil pipelines—and more than 2.13 million kilometers of total oil and gas pipelines in the United States as of 2024—many segments are already corroded and aging, deeply embedded within urban and ecological systems that are increasingly endangered. As the global energy transition accelerates, this thesis investigates the future of these infrastructures, reconsidering the vast network of decommissioned and declining legacy pipelines not as obsolete relics, but as latent spatial assets for ecological repair, climate resilience, and socio-environmental justice. Moving beyond narratives of extraction and decay, the project repositions pipelines as linear territories of opportunity—capable of being retrofitted into new civic, ecological, and infrastructural frameworks. Central to the project is the transformation of the pipeline’s linear, extractive logic into a circular and connective one: a loop that is both finite and infinite, territorial and experiential. Focusing on a strategically selected loop of crude oil pipelines spanning 14 states, the thesis constructs a cartographic and architectural framework to reimagine these lines as sites of ecological repair, social infrastructure, and alternative energy distribution—where design, much like a biological scaffold, acts as a catalyst for regeneration along landscapes shaped by extraction. Through spatial analysis, typological classification, and mapping, five territorial conditions are defined along the pipeline loop, each offering distinct opportunities for intervention. These are tested through speculative design prototypes that transform the pipeline through operations of repurpose, renewable energy distribution, or ecological remediation. The interventions reframe invasive infrastructures into public and environmental assets—generating new spaces for inhabitation, production, and collective memory. 
Ultimately, the thesis proposes a post-carbon design paradigm rooted in ecological reciprocity, collective agency, and infrastructural care—revealing hidden energy landscapes and inscribing them with new values: resilience, equity, and repair.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces</title>
<link href="https://hdl.handle.net/1721.1/163559" rel="alternate"/>
<author>
<name>Salmon, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/163559</id>
<updated>2025-11-06T03:08:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces
Salmon, Jason
The automobile industry is critical to modern society. Simultaneously, the constant release of toxic emissions such as greenhouse gases into the atmosphere is detrimental to health and the environment. Vehicles which exploit cleaner energy sources would be preferable to reduce the scale of human-initiated damage such as climate change. However, solar road vehicles—though designed and fabricated by some—have not reached a sufficient level of maturity to be production-worthy. The low efficiency of solar cells and the high energy demands of the average land vehicle are irreconcilable for most manufacturers using industry methods and design precedent. Therefore, this work centres on the design and control of a solar road vehicle which fundamentally breaks from the mould of typical road vehicle design—a vehicle which employs extensive articulated surfaces (dubbed "solar wings") which can be angled to directly face the sun, thereby maximising solar irradiation. A solar tracker using Bayesian inference is presented, achieving promising results in both convergence and accuracy. Additionally, a systematic method for optimising a solar road vehicle with solar wings is developed and documented.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanisms of Multi-Object Working Memory and Motion Prediction in the Primate Brain</title>
<link href="https://hdl.handle.net/1721.1/163558" rel="alternate"/>
<author>
<name>Watters, Nick</name>
</author>
<id>https://hdl.handle.net/1721.1/163558</id>
<updated>2025-11-06T03:04:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mechanisms of Multi-Object Working Memory and Motion Prediction in the Primate Brain
Watters, Nick
Sample-efficient learning and flexible generalization are hallmarks of intelligent behavior. Both sample-efficient learning and flexible generalization rely on re-using a mental model of the world in new contexts. For many decades, researchers in cognitive science, neuroscience, and machine learning have studied competing theories about the structure of our mental model of the world. One set of theories concerns the structure of multi-object representations in the brain. Some studies claim the brain represents multiple objects by allocating them to disjoint “slots” in working memory, others claim that the brain flexibly distributes a common pool of resources across objects, and yet others claim the brain represents multiple objects by rapidly switching between them through time. Another set of theories concerns the nature of predicting object motion. Some claim that the mind has an internal model of physics in the world that it uses to simulate the motion of objects through time, whereas others claim the mind relies on priors and heuristics to predict object motion without explicit simulation. Both of these sets of competing theories are long-standing and unresolved. In this work, we tackle these two open questions using primate neurophysiology and computational modeling. We trained monkeys to perform multi-object memory and motion prediction tasks, recorded large-scale single-unit activity from frontal cortex brain areas, and rigorously compared different hypotheses for the neural mechanisms of multi-object working memory and motion prediction. In the case of multi-object working memory, we found that the neural activity we recorded is more consistent with a model that flexibly distributes attentional resources across objects than with models that use object slots or temporal switching representations. In the case of motion prediction, we found that the neural activity is not consistent with the monkeys simulating an occluded moving object in real-time. 
Instead, the monkeys’ neural activity is driven largely by an anticipation of the position of the object at a future point in time. Both of these findings call into question long-standing cognitive theories and imply that the brain’s model of the world incorporates attentional mechanisms, priors, and heuristics. Lastly, we introduce a neural data preprocessing method for stabilizing electrophysiology recordings. This method improves spike-sorting results, helped us recover more neurons from our data, and, we hope, may help others make the most of their electrophysiology data as well.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/163557" rel="alternate"/>
<author>
<name>Romero, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/163557</id>
<updated>2025-11-06T03:07:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy
Romero, Catalina
Raman spectroscopy is a powerful optical technique that enables rapid, label-free molecular analysis, offering significant potential across pharmaceutical development, microbiome research, and food diagnostics. However, the utility of Raman spectroscopy in high-throughput applications has been limited by the lack of cost-effective, modular automation platforms capable of handling large volumes of samples with precision and repeatability. Conventional Raman workflows are constrained by manual sample handling, slow throughput, and high user variability, limiting their applicability in high-volume testing environments. To address these challenges, this thesis presents the development and initial validation of a custom two-axis (XY) gantry and a robotic well plate stacker automation platform designed to streamline the sample handling workflow in Raman spectroscopy systems, facilitating high-throughput, precise, and reproducible positioning of microplate samples under a Raman microscope. This thesis also provides a commercialization framework for the system as a standalone automation product, targeting pharmaceutical high-throughput screening, microbiome analysis, and food safety testing. The platform serves the unmet needs in these industries, where labor-intensive and inconsistent sample positioning limits scalability. The commercialization analysis includes an evaluation of market sizing, competitive benchmarking, pricing models, and go-to-market strategies. The modular platform has the potential to enable broader adoption of Raman-based analysis tools by reducing labor intensity and improving repeatability in sample positioning workflows. This work lays the foundation for the future integration of optical feedback and automated analysis, with the goal of transforming how Raman-based diagnostics are conducted at scale.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dishing It Out: Reimagining Multicultural College Dining Through Student-Centered Design</title>
<link href="https://hdl.handle.net/1721.1/163556" rel="alternate"/>
<author>
<name>Dong, Annie</name>
</author>
<id>https://hdl.handle.net/1721.1/163556</id>
<updated>2025-11-06T03:08:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dishing It Out: Reimagining Multicultural College Dining Through Student-Centered Design
Dong, Annie
Dining halls are central spaces in colleges, fostering not only nourishment but also cultural connection and community. However, when dining centers fall short in catering to the needs of their multicultural student body, students are often left feeling isolated and even further from home. Using MIT as a case study, this thesis employs user research and digital storytelling to explore how collecting student perspectives can inform college dining centers on better supporting the diverse cultural backgrounds and dietary needs of their students. The research and findings highlight the critical gaps and strengths in cultural representation within MIT’s dining halls. Through surveys and user research, this thesis gathers student perspectives on food authenticity, comfort, and identity, which inform the design of an interactive website prototype exploring student culinary backgrounds and preferences. This project serves as both a resource for dining services and a digital cookbook curated by the student body. By centering student voices through a culinary lens, this project aims to reimagine dining spaces as inclusive, representative, and comforting shared spaces within college campuses.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors</title>
<link href="https://hdl.handle.net/1721.1/163555" rel="alternate"/>
<author>
<name>Spino III, Pascal</name>
</author>
<id>https://hdl.handle.net/1721.1/163555</id>
<updated>2025-11-06T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors
Spino III, Pascal
This thesis investigates how intelligent robot behavior can emerge from physical interactions rather than sensing, computation, and actuation in the traditional sense. Two robotic systems are presented to explore this concept in different domains. The first is a swarm of simple rolling robots whose collective morphology is shaped by distributed control laws and magnetic interactions, enabling decentralized construction-like behaviors such as bridge formation. The second is a soft underwater robot inspired by anguilliform swimming, which achieves efficient locomotion through a single actuator that leverages fluid–structure interactions in a compliant silicone tail. Useful behavior arises in both systems from the physical design and the dynamics of environmental interaction, rather than from algorithmic or computational complexity. These results demonstrate that physical intelligence can serve as a powerful design principle for building adaptive, robust, and minimal robotic systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Revenue Management to Satellite Communications</title>
<link href="https://hdl.handle.net/1721.1/163554" rel="alternate"/>
<author>
<name>Eiskowitz, Skylar</name>
</author>
<id>https://hdl.handle.net/1721.1/163554</id>
<updated>2025-11-06T03:04:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Application of Revenue Management to Satellite Communications
Eiskowitz, Skylar
As the demand for satellite Internet continues to grow, satellite communication (SatCom) operators are faced with the challenge of effectively managing their capacity sales. While Revenue Management (RM) techniques have been widely used in other industries such as airline, hotel, and car rental services, the application of these methods in the context of SatCom is still scarce. This thesis aims to bridge this gap by developing RM concepts, techniques, and optimization algorithms specifically tailored to the unique operational characteristics of SatCom capacity management and sales. The proposed SatCom RM method guides operators with quantitative recommendations of the amount of capacity to sell to different products in time and in different regions to maximize revenues.&#13;
&#13;
 Though SatCom has characteristics that favor the use of RM concepts (perishable inventory, fixed capacity with a low variable cost, the possibility to segment demand), there are unique structural characteristics that complicate the development of SatCom RM models. The primary challenge is that different products consume varying amounts of capacity, with larger terminal size products utilizing less power on a satellite than smaller terminal size products. Moreover, the selling practices in SatCom are complex because products may be sold in one period and consumed across multiple periods in which additional sales are made. This requires rolling both the selling and consumption periods. Lastly, the SatCom RM problem poses a multidimensional network problem, as products can consume bundles of resources in both space and time. &#13;
&#13;
We extend two commonly used airline RM algorithms, the Expected Marginal Seat Revenue (EMSRb) heuristic and Displacement Adjusted Virtual Nesting (DAVN), to the SatCom problem to create booking limits. The booking limits recommend a threshold amount of capacity an operator should sell of each product. The contribution of this thesis is the modification of established airline RM algorithms to handle products with variable capacity uptakes. Further, these algorithms typically account for displacement costs of products, but only in one dimension of space or time (e.g., selling an airline flight that uses multiple spatial legs may displace capacity away from flights that only use one leg). Our modifications allow for the consideration of displacement costs in both dimensions of space and time.&#13;
 &#13;
In order to evaluate the effectiveness of our inventory control approach, we conduct simulations of various demand scenarios and compare the revenue gains to a baseline scenario with no controls, as well as to a simpler method that does not consider product duration. In a large-scale simulation spanning three years and encompassing thousands of product requests, we observe revenue gains ranging from 15% to 30% depending on the demand scenario. Then, we extend the model to multiple zones and achieve a 2%-10% revenue improvement using our Multi-Zone DAVN method compared to the DAVN method applied to each zone separately.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hidden Monuments</title>
<link href="https://hdl.handle.net/1721.1/163553" rel="alternate"/>
<author>
<name>Lee, Sesil</name>
</author>
<id>https://hdl.handle.net/1721.1/163553</id>
<updated>2025-11-06T03:08:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hidden Monuments
Lee, Sesil
Jeju Island’s burial culture is embedded in the island’s distinct landscape, where sandam burial mounds are not isolated monuments but quietly coexist with fields, ranches, and forests. These sites are living records of intangible heritage—ancestral beliefs, Beolcho rituals, and vernacular stone-stacking practices—manifested not through formalized memory, but through their modest yet persistent presence in the landscape. Today, however, these spaces are under threat: policies favoring cremation, rapid urbanization, and shifting land values render them increasingly invisible or obsolete. In the past few decades, two-thirds of sandam have been displaced, and with fewer than six out of over 100,000 burial sites designated as cultural heritage, traditional models of conservation are inadequate—unable to engage with the dispersed, landscape-bound nature of these burial grounds. This project reimagines Jeju’s burial mounds not as relics to be preserved, but as spatial anchors for cultural and communal expressions. Through a series of small-scale architectural interventions—gates, stages, passages, and shelters—deployed along paths tracing sandam clusters, the work explores how memory can be practiced rather than displayed. By offering ways to engage with the buried, the forgotten, and the living simultaneously, the project expands the idea of heritage: not as a static record, but as a participatory and evolving relationship between people, land, and memory.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-invasive tuning of experience-dependent plasticity in the primary visual cortex</title>
<link href="https://hdl.handle.net/1721.1/163552" rel="alternate"/>
<author>
<name>Reilly-Andújar, Francis</name>
</author>
<id>https://hdl.handle.net/1721.1/163552</id>
<updated>2025-11-06T03:04:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Non-invasive tuning of experience-dependent plasticity in the primary visual cortex
Reilly-Andújar, Francis
The cerebral cortex exhibits a remarkable capacity for experience-dependent plasticity, a feature that is predominantly confined to critical periods (CPs) during early postnatal development. In the mouse primary visual cortex (V1), ocular dominance plasticity (ODP) has served as a premier model for investigating the cellular and molecular mechanisms that underlie the formation and stabilization of cortical circuits. During the CP, short-term monocular deprivation (MD) induces both functional and anatomical changes in binocular V1, characterized by a weakening of deprived-eye responsiveness via mechanisms of synaptic long-term depression. As the critical period closes, increased inhibitory drive and the emergence of perineuronal nets (PNNs) stabilize neural circuits and restrict further experience-dependent plasticity. In Chapter 1, I review the key literature on ODP and provide a survey of interventions that have been shown to enhance ODP in adulthood. In Chapter 2, I present our findings that repeated anesthetic ketamine treatment can reinstate ‘juvenile-like’ plasticity in the adult mouse V1. Importantly, I demonstrate that this effect relies on the microglia-mediated depletion of PNNs, and that interfering with microglial purinergic P2Y12 receptor activation blocks the ketamine-induced enhancement of ODP. Building on these insights, Chapter 3 investigates the use of non-invasive light-flicker stimulation at different temporal frequencies as a means to unlock different forms of ODP in the adult mouse V1. Our results reveal that 60 Hz light-flicker stimulation reduces PNN levels and promotes a depression of deprived-eye responses following short-term MD, whereas 40 Hz stimulation – without altering PNN levels – enhances an adult form of ODP characterized by the strengthening of non-deprived eye responses following short-term MD. 
Furthermore, we show that in mice subjected to long-term MD initiated early in life, 40 Hz light-flicker treatment promotes recovery of visual function, as evidenced through physiological and behavioral assays. Finally, Chapter 4 outlines a series of future experiments designed to further elucidate the mechanisms by which light-flicker stimulation promotes enhanced ODP in adult V1. Together, the findings presented in this thesis introduce novel, minimally invasive (ketamine) and non-invasive (light-flicker) interventions that show promise as therapeutic strategies for ameliorating deficits arising from early life sensory deprivation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems</title>
<link href="https://hdl.handle.net/1721.1/163551" rel="alternate"/>
<author>
<name>Wucherer, Abigail</name>
</author>
<id>https://hdl.handle.net/1721.1/163551</id>
<updated>2025-11-06T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems
Wucherer, Abigail
In the drive towards a globally decarbonized energy economy, rapid swap battery packs provide a potential means to improve electric vehicle adoption in high utilization industrial vehicles where lengthy charge times are a barrier to electrification. High voltage, high current battery connectors are a critical component for coupling the pack to the electric vehicle, distributing power from the battery to the drivetrain. Most state-of-the-art connections require precision alignment of contact surfaces and bolted preload or retention mechanisms, hindering the implementation of rapid swap battery systems. The need for robust, high life cycle, high-power contacts motivates a new approach to connector design. The integration of electrical connectors with the battery mount’s structural loop creates a new design space where preload, geometry, and contact resistance may be optimized. This co-design approach enables mechanical and electrical functional requirements to be considered in conjunction to ensure reliable fulfillment in both areas while reducing the time for battery pack swaps. This work introduces two distinct approaches for aligning the pack to the vehicle, locking the battery in place, and engaging electrical contact with geometry unique to the system design. These approaches offer higher reliability, mechanical and electrical longevity, and automatic alignment capabilities during loading of the battery pack. Across both designs, the contact resistance is the primary metric for evaluating the electrical performance, and the contact pressure is used to evaluate the risk of mechanical wear. The first approach integrates a quasi-kinematic coupling-based connector with integrated electrical contacts, allowing for repeatable and accurate positioning of the battery pack to the vehicle. A slotted ball and socket design approach is considered to accommodate angular misalignment and establish repeatable contact area through elastic averaging.
The second approach proposes a planar contact to further reduce the contact pressure and increase contact longevity without the need for expensive and rare hardened coatings. This system relies on a rail and flat system for guiding the battery pack into a locked position and engaging the planar contacts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)</title>
<link href="https://hdl.handle.net/1721.1/163550" rel="alternate"/>
<author>
<name>Hakemy, Arezo</name>
</author>
<id>https://hdl.handle.net/1721.1/163550</id>
<updated>2025-11-06T03:08:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)
Hakemy, Arezo
Early Afghan war rugs delineate place through their pictorial design, embedding spatial memory into the tactile surface of the woven field. Emerging in the wake of the Soviet invasion in the late 1970s, these rugs integrate modern war iconography of tanks, helicopters, and maps into a medium historically tied to regional identity, spiritual practice, and craft. While earlier scholarship has often read these rugs as commodities of war tourism, this thesis moves beyond this interpretation to foreground the rug as a placemaking device, one that asserts territory and agency through mapping techniques. Afghan war rugs frame and define space on a land that has largely been considered placeless, at times porous and seemingly unknown. Through their borders, these rugs resist the geopolitical narratives that have long reduced Afghanistan to a war-torn frontier. The border serves as a framing device, structuring the rug’s design while simultaneously asserting territorial presence. Whether following a prescribed cartoon or improvising patterns, the weaver actively engages in “border-ing,” exercising cartographic agency by embedding personal, traditional, and political motifs into the rug. This research interrogates how early Afghan war rugs engage in spatial representation against the backdrop of the Soviet-Afghan war from 1979-1989. From historical colonial mapping projects to Soviet and American cartographic investigations, Afghanistan’s borders have long been sites of surveillance, resource extraction, and imperial ambition. Yet, in contrast to these external mapping practices, the war rug’s design is a resistant act of placemaking. Examining the rug as both artifact and map, this study explores how Afghan weavers reclaim their landscapes through rug making, embedding memory and materiality into woven form.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The role of texture in auditory scene analysis</title>
<link href="https://hdl.handle.net/1721.1/163549" rel="alternate"/>
<author>
<name>Hicks, Jarrod M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163549</id>
<updated>2025-11-06T03:04:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The role of texture in auditory scene analysis
Hicks, Jarrod M.
Everyday auditory scenes contain sounds from many sources. For example, when crossing the street, you might hear sounds produced from the rumble of passing cars, the chatter of pedestrians, and the rapid tick of crosswalk signals. To make sense of this complex mixture of sounds, the auditory system must separate the mixture into coherent perceptual representations that are likely to correspond to the underlying sources in the world. This process is known as auditory scene analysis. Although a rich body of work has probed auditory scene analysis with simple synthetic stimuli and revealed principles of perceptual organization, the extent to which these principles apply to real-world scenes with natural sounds remains unclear. This thesis empirically examines auditory scene analysis with realistic sounds. In particular, we study the perception of scenes containing a common class of environmental sounds known as “textures”, investigating how the auditory system makes use of statistical structure to separate textures from other sources and how the underlying statistical representation both constrains and enables scene analysis. We first investigated the mechanisms of hearing in noise using real-world background “noise” textures. The results show that the auditory system estimates the properties of “noise” textures and stores them over time, using the resulting internal model to estimate other concurrent sounds. We then considered how concurrent sound texture sources are separated from each other. We found that auditory scene analysis with textures involves some principles identified in classical scene analysis work with simple sounds, but that these principles apply to the higher-order statistical representations that define natural textures. Together, the results reveal new aspects of auditory scene analysis with real-world sounds and clarify the role texture plays in everyday hearing. 
Our findings provide a bridge between the simple, synthetic stimuli studied historically and the rich complexity of real-world sounds.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving Temporally Precise Learning in Individual Premotor Neurons using Closed-Loop Neurofeedback</title>
<link href="https://hdl.handle.net/1721.1/163548" rel="alternate"/>
<author>
<name>Scherrer, Josefa R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163548</id>
<updated>2025-11-06T03:04:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Driving Temporally Precise Learning in Individual Premotor Neurons using Closed-Loop Neurofeedback
Scherrer, Josefa R.
Much of human existence is based on our ability to learn complex sequences of motor movements. Speech, writing, and tool use all require activating a series of different muscles in a precisely timed pattern, and these patterns are learned through a long process of trial and error. How does the neural circuitry in our motor system learn to generate the activity patterns that drive these sequences? This question can be explored by studying a similarly precise learned motor pattern in a different organism, the learned song of the songbird zebra finch.&#13;
&#13;
Zebra finches learn to sing a stereotyped song through a process of vocal experimentation and comparison to an internal template. Every time a bird sings, it varies the acoustic parameters of its song and determines whether each variation brings the song closer to its internal template. Variations that result in a better match are then repeated in subsequent renditions of the song, in a trial and error process suggestive of reinforcement learning. The learning process requires a basal ganglia-thalamocortical loop called the anterior forebrain pathway (AFP) that is similar to basal ganglia-thalamocortical circuitry in mammals. Existing evidence suggests that the AFP learns a time-dependent bias signal that steers the motor pathway to avoid vocal errors. This bias signal is known to be dependent on the cortical output of the AFP known as LMAN (lateral magnocellular nucleus of the anterior nidopallium). However, little is known about the neural code in LMAN that underlies this bias signal, or how this neural code is learned and generated.&#13;
&#13;
We address these questions by building a neural feedback system that allows us to impose correlations between the activity of individual LMAN neurons and a dopaminergic reward signal. We designed a low-latency feedback system that records neural activity from a chronic Neuropixels 2.0 implant, extracts the activity of specific neurons, and plays noise bursts to the bird contingent on the activity of those neurons. We used this system to perform feedback based on the activity of an arbitrarily chosen neuron in LMAN within a given 10 ms window in songs. All birds responded to the feedback by learning to bias the activity of the chosen LMAN neuron up or down within the chosen time window, transiently driving firing rates up by as much as 200 Hz. We observed a remarkable degree of timing precision in the learned bias, with birds able to control the activity of the chosen neuron at single millisecond levels of rise time and jitter. This high degree of precision informs models of the basal ganglia circuit architecture thought to drive learning. We also found the learned bias to be specific to the LMAN neurons correlated with reward, with neighboring uncorrelated neurons exhibiting no change in firing rate during learning. This single-neuron specificity strongly constrains the spatial precision of axonal targeting from thalamic regions that are thought to propagate the learned bias signal from the basal ganglia to LMAN. Finally, we demonstrated that fluctuations in neural activity of a given LMAN neuron drive transient and predictable changes in vocal output approximately 25 milliseconds later, consistent with what is known about signal propagation speeds in the song system. This fact and the results of our feedback experiments combine to confirm our central hypothesis that LMAN drives song learning by independently activating LMAN neurons at precise points in time in order to bias vocal output and avoid vocal errors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Social Sensory Somatic Scores for Species, Spaces, Soils, and&#13;
Structures of Steep Slopes</title>
<link href="https://hdl.handle.net/1721.1/163547" rel="alternate"/>
<author>
<name>Bondarenko, Lina</name>
</author>
<id>https://hdl.handle.net/1721.1/163547</id>
<updated>2025-11-06T03:07:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Social Sensory Somatic Scores for Species, Spaces, Soils, and&#13;
Structures of Steep Slopes
Bondarenko, Lina
Modern knowledge systems have physically and conceptually “flattened” the world, erasing the ecological, political, and sensory complexities inherent to sloped terrain. By attending closely to the slope—as both a material condition and a generative metaphor—this thesis foregrounds movement as a form of resistance to regimes of exploitation, abstraction, and estrangement that have historically transformed land into data and place into property. Weaving together interdisciplinary methodologies from performance studies, landscape architecture theory, feminist geography, ecological theology, environmental history, sensory ethnography, and media studies, SSSSSSSSSS dances an inclined methodological structure, oscillating deliberately between critical systemic analysis and situated sensory experience. Ch. 1 sets the stage among steep slopes and introduces the discipline to movement as pedagogy, enacting the urgency for new methodologies into schemes of the project’s medium and the book’s format. Ch. 2 is a feminist investigation of the ways modern infrastructures and spaces have been designed to reinforce land abstraction and commodification in the name of improvement, severing embodied relationality and contributing to societal apathy toward ecological and social crises. Imperial post-enlightenment statecraft, the suppression of wildness, and the standardization of level form have flattened our upright movements to enact a state of senselessness. Contradicting Ch. 2’s straight critique, Ch. 3 attempts to reweave the sinuous nuance of symbiogenesis between soils and species, revealing that humans are but one among many sloped organisms moving, and inclining, and co-evolving as the lithosphere; we have been slorgs all along. Slorgs belong to divine mythologies of terrain’s elevations and have reciprocated in admiration, mimicking topographic spatial functions and adorning the summits with artistic interventions, some inadvertently contributing to the damaging regimes of Ch. 2.
Interwoven through both chapters, outliers resisting those forces of governance and exploitation are often those displaced by them, those moving in ways the system polices and erases from comprehension: refugees, queers, witches, tricksters, artists, herbalists, and healers. The intended medium of SSSSSSSSSS coalesces in Ch. 4: inviting the general public to participatory happenings with hills, composing scores, coaxing their inner slorgs to slither askew, sloping themselves as moving loci for sympoietic becoming. Multi-species attune to a social, sensed, somatic experience, co-composing spatial relations among local steep soils. Slorgs challenge the abstractions of dominant epistemologies in the temporal, situated act of trusting their own proprioception in collective balance, affirming the multidimensional value of embodied, ecological geo-choreography. Social Sensory Somatic Scores for Soils, Structures, Spaces, and Species of Steep Slopes are presented through photographs in Ch. 4 and in moving image, available as supplemental material.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Spiritual Curation of American Modernism</title>
<link href="https://hdl.handle.net/1721.1/163546" rel="alternate"/>
<author>
<name>Saha, Indrani</name>
</author>
<id>https://hdl.handle.net/1721.1/163546</id>
<updated>2025-11-06T03:04:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Spiritual Curation of American Modernism
Saha, Indrani
Where do the spiritual go? In this study of late-nineteenth- and early-twentieth-century seekers, they join séances in Vermont farmhouses, attend Theosophical lectures on Karma, get lost in copies of Jnana-Yoga, journey to Buddhist temples in China, and consume spiritual manuals on Mentalphysics. But where do they go after those encounters? And, more importantly, what do they do? In this dissertation, they build modern art institutions. A cadre of artist-writers, museum curators, and public intellectuals found their power in early-twentieth-century America by building institutions to introduce a new, spiritually grounded modern art to a mercantile nation. In the US, beyond European sources for "the spiritual" were flirtations with vaguely "Eastern" ones by way of Theosophy. Those who sought to institutionally manifest Wassily Kandinsky's "spiritual" in art believed themselves to provide the assistance necessary to cultivate and preserve these spiritual impulses in modern art. Alfred Stieglitz's Intimate Gallery (1925-1929), Katherine Sophie Dreier's Société Anonyme (1920-1950), and Hilla Rebay's Museum of Non-Objective Painting (1939-1952), all in New York City, served as intermediaries in translating predominantly Eastern spiritual ideas into productive ways of being. Each curator believed it necessary to cultivate these spiritual protocols just to survive in a material world they held to be detrimentally bankrupt of spirit. In other words, the American institutionalization of modernism built its canon around spiritual systems of national aesthetic welfare. Crucial to these spiritual curators' respective operations would be the promotion of not just any abstraction but a radically non-objective art thought to use the inner expressions of the artist to elevate the spectator. This dissertation takes the turn-of-the-century claims of spirituality by the founders of key art institutions seriously.
In doing so, I argue that esoteric forms of Eastern spirituality infused formerly Protestant centers of culture to propel a twentieth-century embrace of radically abstract modern art.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Mud: 3D Printing earth to achieve low-carbon, low-cost construction automation</title>
<link href="https://hdl.handle.net/1721.1/163545" rel="alternate"/>
<author>
<name>Curth, Alexander (Sandy) McCormick</name>
</author>
<id>https://hdl.handle.net/1721.1/163545</id>
<updated>2025-11-06T03:04:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Programmable Mud: 3D Printing earth to achieve low-carbon, low-cost construction automation
Curth, Alexander (Sandy) McCormick
Large-scale additive manufacturing (LSAM) with locally sourced materials, such as earth, presents a promising approach to addressing the urgent challenges of rapid urbanization and construction-related carbon emissions. &#13;
This dissertation establishes a comprehensive framework for integrating low-carbon materials, particularly minimally processed earth, with computational design methodologies and robotic fabrication processes for architectural-scale applications. Through systematic material characterization, novel testing protocols, and case studies across multiple building systems, the research demonstrates that minimally processed earthen materials can be transformed into high-performance building elements uniquely suited to local environmental conditions and design considerations. The developed computational framework employs multi-objective optimization and material-aware toolpath generation to balance structural performance, thermal comfort, embodied carbon, and construction time. &#13;
Four case studies validate this approach: (1) toolpath optimization for shell structures, (2) a hybrid floor system combining shape-optimized concrete beams with 3D-printed ceramic blocks, (3) zero-waste earthen formwork for reinforced concrete, and (4) thermally optimized wall systems for passive climate control. Life cycle assessment reveals that 3D-printed earth structures have approximately one-fifth the embodied carbon of conventional concrete and one-fiftieth that of industry-standard 3D-printed mortar. This research bridges the gap between additive computational design and material circularity, offering scalable approaches to sustainable construction that can be implemented across diverse environmental and economic contexts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooling Machines:&#13;
Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision</title>
<link href="https://hdl.handle.net/1721.1/163544" rel="alternate"/>
<author>
<name>Klimenko, Nikita</name>
</author>
<id>https://hdl.handle.net/1721.1/163544</id>
<updated>2025-11-06T03:07:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Cooling Machines:&#13;
Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision
Klimenko, Nikita
As the impacts of climate change on cities become more pronounced, urban authorities are under pressure to prepare existing streetscapes for increased levels of heat stress. While many aspects of existing urban morphology have an impact on heat exposure (e.g. sky view factor, glazing levels, facade materials), they cannot be rapidly changed at scale across existing urban infrastructures. Urban authorities across the world increasingly turn to planting trees as a way of cooling urban streetscapes. Urban vegetation is indeed known to have a cooling effect, primarily because trees provide shade and prevent urban materials from heating up, and because they maintain their own internal temperature through evapotranspiration. Even though the positive impacts of urban trees on thermal comfort are long known and well-studied, little work is dedicated to how these impacts vary across trees of different species and morphology. This is due both to the complexity of studying vegetation life cycles at sufficient scale and to the dispersed nature of the issue across the disciplines of biology, urban climate, design, and data science. Nevertheless, this specific knowledge is vital to urban planners for deciding which trees have the most cooling effect in specific parts of the city. This thesis embraces the notion of trees as ‘cooling machines’ and dissects the diverse morphological and contextual factors that affect the role of individual trees in the local urban heatscape. Leveraging a set of computer vision methodologies, including species recognition, context-aware segmentation, and photogrammetry, the thesis examines a large dataset of thermal imagery of urban trees collected in Los Angeles and Dubai to describe the impact of individual tree species, height and form, as well as spatial context on the cooling effect.
Building on this approach, the thesis proposes a prototyping framework for architects to cure urban heatscapes via targeted curation of tree planting schemes, tying the visual and thermal aspects of urban greenery. This approach will allow cities to leverage the power of urban vegetation in the most efficient way, and tame urban heat in a scalable and globally affordable manner.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence</title>
<link href="https://hdl.handle.net/1721.1/163543" rel="alternate"/>
<author>
<name>Dundar Arifoglu, Nasibe Nur</name>
</author>
<id>https://hdl.handle.net/1721.1/163543</id>
<updated>2025-11-06T03:07:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence
Dundar Arifoglu, Nasibe Nur
This thesis reconsiders architectural authorship and the extended processes through which the built environment is shaped, using a series of playful, participatory interventions to expose the human-centric assumptions embedded in spatial decision-making. Presented as a collection of games and booklets, the work invites participants to engage with a wide spectrum of architectural processes—from site understanding and planning to permitting, construction, and post-occupancy—through the perspectives of multiple agents entangled in shared environments. These agents include beings, materials, living organisms, legal frameworks, and other forces typically excluded from spatial authorship, challenging conventional boundaries and expanding the discourse around the entangled forces and relations that shape the spaces we inhabit. A series of playful explorations opens space for friction, misalignment, and shared authorship. Each booklet engages a distinct stage of the architectural process through participatory formats that make visible the biases, exclusions, and regulatory fictions often treated as neutral. By gamifying these systems, the work reveals how architectural decision-making tends to privilege hierarchy, human control, and speed—often at the expense of multispecies co-existence. This thesis positions play as a critical lens: a way to rehearse alternative futures, to listen differently, to embody other perspectives, and to surface the black-box logics embedded in architectural norms. It invites readers and players to participate in unbuilding these assumptions. And the games evolve—with each use, each misreading, each encounter, and each agent who joins the conversation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Objectiles Guide to Time Travel:&#13;
Re-Envisioning Building Materials as Narrative-Collecting&#13;
Object-Projectiles on a Trajectory Through Space-Time</title>
<link href="https://hdl.handle.net/1721.1/163542" rel="alternate"/>
<author>
<name>Chaussabel, Celia Quynh-Mai</name>
</author>
<id>https://hdl.handle.net/1721.1/163542</id>
<updated>2025-11-06T03:06:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Objectiles Guide to Time Travel:&#13;
Re-Envisioning Building Materials as Narrative-Collecting&#13;
Object-Projectiles on a Trajectory Through Space-Time
Chaussabel, Celia Quynh-Mai
As the architectural discipline grapples with its role in resource depletion, carbon emissions, and waste generation, there is a growing urgency to stop sourcing new materials and to reuse materials from existing buildings instead. One challenge to integrating reused materials into current building practices is technical: inventorying, deconstructing, reconditioning, and designing with reused materials is slower and more labor-intensive than with new ones. But another challenge is cultural: the materials that make up architecture are currently perceived as unmoving and single-use, with little consideration for their trajectories from raw resource to landfill. This thesis is focused on developing an aesthetic sensibility and design methodology that helps us re-envision materials as objects on a trajectory instead: Objectiles, or object-projectiles. Objectiles are objects on an adventure across space-time to collect as many uses as possible. Rather than remaining associated with one primary use, Objectiles are impressionable, bearing ambiguous traces of all the uses they encounter as they re-circulate. Through the aesthetic qualities that hint at their many uses, Objectiles invite us to time travel: to imagine the potential past and future narratives that may precede or follow their present physical state. Embedding the aesthetics of Objectiles into architecture can lead to the development of a new collective consciousness of the materials that surround us. They can make us aware that all the objects around us have trajectories that extend beyond their present state, and lead to an alternative material culture of greater care in how we use, re-circulate, and dispose of all objects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits</title>
<link href="https://hdl.handle.net/1721.1/163541" rel="alternate"/>
<author>
<name>Ai, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/163541</id>
<updated>2025-11-06T03:07:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits
Ai, Rui
The independence axiom (IA) proposed by Von Neumann and Morgenstern [50] is the cornerstone of expected utility theory. However, some empirical experiments show that the IA is often violated in the real world. We propose a new kind of multi-armed bandit problem in which the expectation of outcomes may influence the agent’s utility, which we call expectation-dependent multi-armed bandits, and rationalize the choices of agents in Machina’s paradox, where the IA fails. We design provably efficient algorithms with low minimax regrets and show that their dependence on the time horizon T matches the corresponding regret lower bounds, revealing statistical optimality. Furthermore, as we are the first to consider bandits whose utility depends on both reality and expectation, this work provides a bridge between machine learning and economic behavior theory, shedding light on how to interpret some counterintuitive economic scenarios, like the bounded rationality explored by Zhang et al. [54].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Differentially Private Synthetic Data Generation for Relational Databases</title>
<link href="https://hdl.handle.net/1721.1/163540" rel="alternate"/>
<author>
<name>Alimohammadi, Kaveh</name>
</author>
<id>https://hdl.handle.net/1721.1/163540</id>
<updated>2025-11-06T03:06:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Differentially Private Synthetic Data Generation for Relational Databases
Alimohammadi, Kaveh
Existing differentially private (DP) synthetic data generation mechanisms typically assume a single-source table. In practice, data is often distributed across multiple tables with relationships across tables. This study presents a first-of-its-kind algorithm that can be combined with any existing DP mechanism to generate synthetic relational databases. The algorithm iteratively refines the relationships between individual synthetic tables to minimize their approximation errors in terms of low-order marginal distributions while maintaining referential integrity; consequently, it eliminates the need to flatten a relational database into a master table (saving space), operates efficiently (saving time), and scales effectively to high-dimensional data. We provide both DP and theoretical utility guarantees for our algorithm. Through numerical experiments on real-world datasets, we demonstrate the effectiveness of our method in preserving fidelity to the original data.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications</title>
<link href="https://hdl.handle.net/1721.1/163539" rel="alternate"/>
<author>
<name>Zhang, Chenhui</name>
</author>
<id>https://hdl.handle.net/1721.1/163539</id>
<updated>2025-11-06T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications
Zhang, Chenhui
Large Vision-Language Models (VLMs) have demonstrated impressive performance on complex tasks involving visual input with natural language instructions. However, it remains unclear to what extent capabilities on natural images transfer to Earth observation (EO) data, which are predominantly satellite and aerial images less common in VLM training data. In this work, we propose VLEO-Bench, a comprehensive evaluation framework to quantify the progress of VLMs toward being useful tools for EO data by assessing their abilities on scene understanding, localization and counting, and change detection tasks. Motivated by real-world applications, our framework includes scenarios like urban monitoring, disaster relief, land use, and conservation. We discover that, although state-of-the-art VLMs like GPT-4V possess extensive world knowledge that leads to strong performance on open-ended tasks like location understanding and image captioning, their poor spatial reasoning limits usefulness on object localization and counting tasks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art</title>
<link href="https://hdl.handle.net/1721.1/163538" rel="alternate"/>
<author>
<name>Feng, Haozhen</name>
</author>
<id>https://hdl.handle.net/1721.1/163538</id>
<updated>2025-11-06T03:06:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art
Feng, Haozhen
This thesis investigates the collective lives of Chinese women sent to Xinjiang in state-led migration after 1949 and the erasure of their gendered narratives. Drawing on a unique family history and archival evidence, the thesis reveals how the personal identities of these female “Aid to Xinjiang” participants were stripped away and subsumed under the grand socialist nation-building myth. Through practice-based artistic research, the project attempts to restore their lost voices and unacknowledged suffering and labor, framing the exhibition as a form of praxis. By analyzing the exhibition alongside case studies and critical analysis, the thesis, inspired by Bernard Stiegler’s theory of the “history of representational forms” and interwoven with ideas from philosophers like Judith Butler and Nicholas Mirzoeff, interrogates the gendered silences in official history and highlights the tension between state mythologies and personal memories. In doing so, the exhibition as an interdisciplinary form of research not only restores agency to a silenced group of women, but also demonstrates how artistic practice can serve as an alternative historiography to challenge dominant narratives and recover marginalized voices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Universal Docking Solutions for Autonomous&#13;
Underwater Vehicles</title>
<link href="https://hdl.handle.net/1721.1/163537" rel="alternate"/>
<author>
<name>Pryal, Erik Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/163537</id>
<updated>2025-11-06T03:06:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluation of Universal Docking Solutions for Autonomous&#13;
Underwater Vehicles
Pryal, Erik Jeffrey
Due to their energy-constrained nature, Autonomous Underwater Vehicles (AUVs) need effective docking and charging stations to extend their mission durations. However, diverse AUV designs challenge the universal compatibility of docking stations. This study provides a framework for understanding what makes a docking station universal and offers two potential solutions: the Tapered Funnel Docking Station and the Magnetic Hub Docking Station. The Tapered Funnel features a conical entry that progressively narrows to accommodate various AUV diameters. The Magnetic Hub passively secures the AUV using magnetic forces and an external appendage guided into position by a square duct. MATLAB simulations evaluate these two charging station designs for compatibility with AUVs, alignment capabilities, and docking efficacy under realistic conditions. Both designs are tested through Monte Carlo simulations to address varying AUV approach conditions, showcasing their potential as universally feasible solutions. Future exploration into material durability, sensor integration, and power transfer efficiency will refine these designs for field applicability. This research lays the groundwork for universal docking standards and proposes adaptable solutions to alleviate operational limitations in underwater missions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dowel-laminated timber from waste lumber offcuts: &#13;
Towards structural component circularity</title>
<link href="https://hdl.handle.net/1721.1/163536" rel="alternate"/>
<author>
<name>Blowes, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/163536</id>
<updated>2025-11-06T03:06:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dowel-laminated timber from waste lumber offcuts: &#13;
Towards structural component circularity
Blowes, Rachel
In the context of the global climate crisis, there is a need to develop low embodied carbon building systems. Moreover, construction and demolition generate substantial amounts of waste. The use of salvaged materials for structural applications presents the opportunity to divert this waste while reducing the embodied carbon of new structural components. This thesis proposes a typology for dowel-laminated timber (DLT) slabs built up from waste lumber offcuts. A mechanical model for a segmented DLT system composed of geometrically heterogeneous offcuts is developed. Prototypes of this mass timber system are fabricated and tested to observe their failure behavior and to evaluate the mechanical model. A computational workflow is introduced which employs algorithmic methods for inventory assignment and structural optimization to design slabs which meet deflection requirements under loading. These approaches are undertaken to evaluate whether DLT systems can leverage the irregularity of salvaged lumber dimensions to produce structurally efficient forms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time</title>
<link href="https://hdl.handle.net/1721.1/163535" rel="alternate"/>
<author>
<name>Aubry, Vinzenz</name>
</author>
<id>https://hdl.handle.net/1721.1/163535</id>
<updated>2025-11-06T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time
Aubry, Vinzenz
This thesis proposes a conceptual lens for understanding contemporary generative arts by introducing the terms Allopoietics and Liquid Media. Building on generative and participatory art, it focuses on the real-time processes among artworks, publics, spaces, and time through which meaning dynamically emerges. Drawing on the author’s artistic works—Conjunktion, Looking at the Sun, and Public Eyes—as well as critical engagement with hermeneutics, process philosophy, and media theory, this thesis explores how agency is distributed across these processes, offering a means to reconsider all elements as equally generative. Allopoietics, derived from cybernetics, describes the generative capacity of systems to produce outcomes beyond the sum of their actants, emphasizing collective unfolding over isolated creation. Liquid Media expands the notion of interfacing beyond traditional media to include publics, space, and time, conceptualizing these as mutable and entangled actants. These concepts outline an Aesthetics of Real Time that evaluates the dynamic relations among increasingly immediate systems. By proposing these new terms, the thesis invites a shift in perspective from object to process: viewing artworks not as stable materializations but as parts of real-time systems of collective meaning-making. While emerging from an artistic practice, this conceptual framework resonates with insights from contemporary sociology and cultural studies, where notions of fluidity, distributed agency, and relationality increasingly shape our understanding of complex systems and realities.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Longevity</title>
<link href="https://hdl.handle.net/1721.1/163534" rel="alternate"/>
<author>
<name>Rodriguez, Christopher W.</name>
</author>
<id>https://hdl.handle.net/1721.1/163534</id>
<updated>2025-11-06T03:04:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Limits of Longevity
Rodriguez, Christopher W.
Do all animals age? Although aging seems to be a widespread phenomenon, some demographic studies have failed to find evidence of aging in certain species, including some highly regenerative species of planarians and Hydra that reproduce through asexual fission. However, all demographic studies have limits on observation times and sample sizes, so it is unknown if these failures were because of an actual absence of aging or these inherent study limitations. Some argue that these species must be ageless. Because of pressures that result from the lack of a clean division between the germ line and the soma in fissiparous organisms, agelessness becomes necessary as a prerequisite of this kind of reproductive strategy. Others argue that fundamental theories of the evolutionary biology of aging absolutely preclude agelessness. Even putting evolutionary arguments aside, some mathematical models of cellular competition and senescence argue that agelessness is impossible mechanistically in multicellular organisms. In this work, I address evolutionary and mechanistic arguments for and against agelessness. I develop mathematical models of the Disposable Soma Theory that incorporate facets of the arguments for agelessness in asexual fissioning organisms. I construct models of mutation accumulation and drift within an individual and explore how this genetic decay could manifest in the mortality rates. I use these models to understand if aging is inevitable generally and apply them to planarians and Hydra to seek to estimate the likelihood of aging more narrowly in those specific cases. Contrary to other work, I find that agelessness (defined as non-increasing mortality rates in a population) is indeed possible as the optimal evolutionary strategy for multicellular organisms. However, the evolution and mechanistic realization of agelessness requires conditions that are unlikely to be met in any existing species. 
In the case of planarians and Hydra, they likely do not face the right kind of evolutionary pressure to completely avoid aging. Even if they do face necessary evolutionary pressure, intraindividual genetic decay will almost certainly induce increasing mortality on the population with little recourse. Therefore, these species likely do age, although they could have median lifespans on the order of hundreds or perhaps even thousands of years, which would make detecting aging in any given population study quite difficult indeed.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fundamental representations of regions and interactions in spatial&#13;
transcriptomics</title>
<link href="https://hdl.handle.net/1721.1/163533" rel="alternate"/>
<author>
<name>Maher, Kamal M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163533</id>
<updated>2025-11-06T03:04:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fundamental representations of regions and interactions in spatial&#13;
transcriptomics
Maher, Kamal M.
While cells are often considered the fundamental unit of biology, it is their spatial coordination that gives rise to the tissue architectures underlying both health and disease. Spatial transcriptomics technologies offer a unique window into this coordination by simultaneously capturing the spatial and molecular identities of individual cells, providing unprecedented insight into tissue organization. However, the computational landscape for analyzing tissue structure remains fragmented, with a wide array of disparate methods. In this work, we aim to distill these approaches into a unified quantitative framework for analyzing tissue architecture. Tissue structure can be represented in terms of anatomical regions as well as the cell-cell interactions that occur within them. For regional tissue organization, many existing methods—including those based on probabilistic models and graph neural networks—ultimately perform a form of smoothing, or local averaging of gene expression across neighboring cells. This process emphasizes large-scale spatial variation and enables standard single-cell analysis workflows, such as clustering and trajectory inference, to be applied in spatial contexts. However, we find that naive smoothing introduces artifacts that obscure meaningful spatial features. To address this, we introduce a minimal but powerful modification: subsampling within each neighborhood prior to averaging. This approach enhances spatial feature resolution and generalizes conventional analyses to spatial features: clustering identifies multicellular regions; data integration aligns spatial regions across samples and technologies; and trajectory inference captures spatial gradients. We also show that this subsampling strategy improves the performance of more complex downstream methods.
To further generalize our framework, we formalize the joint analysis of tissue regions and multiscale cell-cell interactions using signal processing over graphs: low-frequency components represent regional gene expression patterns across a tissue mesh; high-frequency components capture fine-scale, cell-cell interactions; and mid-frequency signals correspond to boundaries between regions and diffusive signaling. By interpreting spatial gene expression in this spectral framework, we provide a principled way to bridge conceptual and computational perspectives on tissue structure. Ultimately, this work serves as both a theoretical foundation to understand existing methods and a roadmap for developing future approaches to quantitatively describe molecular tissue architecture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Functional in Vitro Model of the Neuromuscular Interface</title>
<link href="https://hdl.handle.net/1721.1/163532" rel="alternate"/>
<author>
<name>Schwendeman, Laura A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163532</id>
<updated>2025-11-06T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Developing a Functional in Vitro Model of the Neuromuscular Interface
Schwendeman, Laura A.
The neuromuscular system is responsible for the coordination of movement throughout the body, and while research has revealed many of the mechanisms involved in its function, there are still many gaps in our understanding of how all of the components of the system work and how they are affected by environmental factors and disease. This work focuses on developing methods and an in vitro model for studying a subsystem of the neuromuscular system known as the neuromuscular junction (NMJ), which is the connection between skeletal muscle and motor neurons and is relevant in many neuromuscular degenerative diseases. This work identifies that current in vitro NMJ models collectively lack the ability to support long-term, functionally contractile muscle tissue while providing compartmentalization and clear optical access for live imaging of muscle and motor neuron co-cultures. This work therefore presents STAMP, a microgroove patterning method for creating aligned, more physiologically relevant, functional, and optically accessible skeletal muscle tissue cultures on top of fibrin hydrogels. Through investigating a series of different sizing parameters, STAMP is shown to effectively align mouse and human skeletal muscle monolayers in vitro and influence the direction of muscle contraction under electrical and optogenetic stimulation while preserving skeletal muscle tissue integrity and viability. The STAMP approach provides a way to mold hydrogels and the morphology of muscle tissue and will be beneficial for addressing the need for compliant and optically clear substrates in modeling the neuromuscular junction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer</title>
<link href="https://hdl.handle.net/1721.1/163531" rel="alternate"/>
<author>
<name>Sonner, Jessica E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163531</id>
<updated>2025-11-06T03:06:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer
Sonner, Jessica E.
Female soccer players demonstrate high levels of agility but remain underrepresented in research and experience anterior cruciate ligament (ACL) tears two to eight times more frequently than their male counterparts [1]. These injuries are often associated with high-torsion movements at the knee, such as quick change-of-direction maneuvers in soccer [2]. To examine gender-based differences in agility, this study introduces an in-game metric based on change-of-direction speeds, derived from center-of-mass tracking data from the 2022 Men’s and 2023 Women’s FIFA World Cups. Results show that across positions, ball proximity, and game segments, female athletes tend to change direction both faster and more frequently than male athletes—supporting current injury hypotheses and informing gender-specific cleat design considerations. Beyond individual movement, this study also examines collective team behavior through a fluid mechanics lens. No significant gender differences were found in power spectral densities or second-order structure functions, suggesting symmetry in the underlying coordination dynamics. A direct cascade was observed in the 0–15m range, indicating a consistent transfer of energy across spatial scales. Team dispersion and the Area-Dominant Spread Index correlated with structure function slopes, bridging spatial metrics with turbulence-based models of group behavior.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deciphering Features of Protective or Maladaptive Cellular Immunity in the Airways Following Primary and Repeated Pathogen Exposure</title>
<link href="https://hdl.handle.net/1721.1/163530" rel="alternate"/>
<author>
<name>Bromley, Joshua David</name>
</author>
<id>https://hdl.handle.net/1721.1/163530</id>
<updated>2025-11-06T03:04:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deciphering Features of Protective or Maladaptive Cellular Immunity in the Airways Following Primary and Repeated Pathogen Exposure
Bromley, Joshua David
The human respiratory tract is constantly subject to environmental stressors and perturbations that cause deviations from homeostatic conditions. The airway’s cellular constituents – epithelial, stromal, and immune cells – maintain local and global homeostasis by facilitating gas exchange and providing a barrier against noxious environmental agents (e.g., xenobiotics, allergens, toxins, and microbes). Infection with viral, microbial, and eukaryotic pathogens can disrupt airway homeostasis, leading to local and systemic inflammation, which can contribute to either the clearance or the persistence of the pathogen. Prior antigenic exposure, whether prophylactic or from a previous infection, can promote transient and long-lived changes in cellular epigenetics, gene expression networks, and cell type composition that may contribute to protective (or maladaptive) immunity; however, we lack a complete understanding of the pathogen and cellular determinants that modulate immunity upon reinfection. In this thesis, we employed single-cell RNA-seq (scRNA-seq), computational methods, and microbial assays to discover the host and pathogen determinants governing airway homeostasis during primary infection and reinfection at barrier sites where the infection begins and may persist: the nasopharynx, airways, and lung parenchyma. First, we leveraged scRNA-seq to identify the cellular and molecular features of mild, moderate, and severe COVID-19, revealing that persons with severe COVID-19 have blunted anti-viral immunity in the nasopharynx. We further extended these findings by profiling nasopharyngeal swabs from vaccinated and unvaccinated individuals across three waves of SARS-CoV-2 variants, revealing shifts in viral tropism and that intramuscular COVID-19 vaccines promote the recruitment of putative antigen-presenting macrophages to the nasal mucosa.
Next, we used rhesus macaques to interrogate temporal host-pathogen interactions during SARS-CoV-2 infection and reinfection in the lower respiratory tract. This work identified innate training-like gene programs among myeloid populations that provided enhanced protection against SARS-CoV-2 reinfection. Finally, we used cynomolgus macaques as a model to study Mtb infection and reinfection, demonstrating that CD4+ T cells are required to restrict bacterial growth and induce protective immunomodulatory gene programming and cell-cell interaction networks in pulmonary granulomas formed following Mtb reinfection. These findings extend beyond long-held paradigms of protective TB immunity, revealing that CD4+ T cells regulate pro- and anti-inflammatory granuloma equilibria. Collectively, the work presented in this thesis highlights the utility of single-cell genomics for studying respiratory infection- and immuno-biology and provides a framework for contextualizing pathogen-induced deviations from biological homeostasis in the airways, which has implications for the development of prophylactics and therapeutics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expansion Microscopy of Extracellular Space for Light Microscopy-Based Connectomic Analysis</title>
<link href="https://hdl.handle.net/1721.1/163529" rel="alternate"/>
<author>
<name>Emenari, Amauche</name>
</author>
<id>https://hdl.handle.net/1721.1/163529</id>
<updated>2025-11-06T03:04:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expansion Microscopy of Extracellular Space for Light Microscopy-Based Connectomic Analysis
Emenari, Amauche
In this dissertation, we present an exploratory methodology, termed expansion microscopy of extracellular space (ExECS), designed to enhance the visualization of the extracellular space (ECS) within aldehyde-fixed tissue. This technique leverages the principles of expansion microscopy (ExM), a method that facilitates nanoscale imaging on conventional microscopes through physical magnification of specimens, thereby supporting improved visualization of various cellular and tissue components including proteins, nucleic acids, and lipids [1]. The ECS forms a continuous environment between cells [2]. Its presence throughout neural tissue makes it an attractive target for contrast-based techniques such as shadow imaging, where the ECS is selectively labeled to produce negative contrast, revealing cell shapes and boundaries as unlabeled silhouettes within a labeled background. Although ECS delineation in fixed tissue is limited by the fidelity of fixation and may not fully reflect its live-state structure, the resulting contrast with the intracellular environment may be valuable for investigating neural morphology and connectivity, offering a useful approximation of network organization. A key component of the ExECS methodology is the introduction of a custom-engineered ECS Filler solution. This formulation, detailed later, includes a macromolecular probe intended to serve as a proxy for the ECS. When applied to aldehyde-fixed tissue, the filler is designed to diffuse throughout the sample, preferentially occupying extracellular compartments while remaining largely excluded from intracellular regions. This selective distribution is expected to persist even in areas where aldehyde fixation may have increased membrane permeability. This diffusion behavior is presumed to result from a combination of size-based exclusion and intermolecular interactions between the hyaluronan polymers, which form the main component of the filler solution, and the plasma membrane.
The constituent hyaluronan is functionalized with amine groups to enable covalent crosslinking and with azide groups to allow fluorescent tagging via click chemistry. These modifications are intended to enable the ECS filler to act as a contrast agent by labeling the extracellular space, providing a foundation for a shadow-based imaging strategy to delineate the morphology of cellular structures. In parallel, we introduce a lipid-targeted form of ExM, termed membrane expansion microscopy (mExM). This approach employs a custom chemical tag that enables nanoscale optical imaging of lipid membranes using a lipid-optimized expansion protocol. mExM, via a novel post-expansion antibody labeling protocol, enables protein-lipid relationships to be imaged in intracellular organelles. This technique may offer new opportunities to examine aspects of neural circuitry by linking cellular morphology with molecular identity. Together, ExECS and mExM offer a potential basis for a light microscopy-based framework for connectomic reconstructions. Unlike traditional electron microscopy approaches, which are labor-intensive and low-throughput [3], this strategy aims to improve throughput in mapping of neuronal morphology with enhanced resolution that surpasses diffraction limitations. With the aim of bridging the gap between tissue ultrastructure and optical accessibility, this work may contribute to efforts toward scalable, high-resolution analysis of neural tissue organization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decoding Disease Drivers Through Single-Cell Omics and Scalable Phenotypic Screens</title>
<link href="https://hdl.handle.net/1721.1/163528" rel="alternate"/>
<author>
<name>Liu, Nuo</name>
</author>
<id>https://hdl.handle.net/1721.1/163528</id>
<updated>2025-11-06T03:02:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decoding Disease Drivers Through Single-Cell Omics and Scalable Phenotypic Screens
Liu, Nuo
At the heart of any human disease is an imbalance between normal and aberrant physiological processes—a disproportion between hypo-immunity and hyper-immunity—a lack of homeostasis. In many cases, a more comprehensive understanding of the molecular basis underlying disease progression and therapeutic failure is still required to devise new strategies for improving patient outcomes. Technological advancements in biomedical research, especially in single-cell omics (e.g., single-cell RNA sequencing and single-cell spatial profiling), have given us unprecedented power to decipher the intricate cellular and molecular features that maintain—or disrupt—this balance. However, validating the causality of these features remains a major challenge, as the wealth of data often yields a considerable number of hypotheses to test. In this thesis, I explore applications of single-cell genomics tools to understand cellular features associated with disease, with a particular focus on tuberculosis (TB). I then present a potential solution for performing phenotypic screens at scale. In the first part, I applied single-cell RNA sequencing and analysis to human lung samples from a TB-endemic region in South Africa. Using contrastive analysis, I identified key cell populations that are differentially abundant between TB-diseased and TB-negative lungs, including several neutrophil, macrophage, and fibroblast subsets. I discovered a de novo gene program highly enriched in MMP1+CXCL5+ fibroblasts that correlates with TB burden in a non-human primate (NHP) granuloma dataset, supporting the importance of this subset in TB. In a collaborative effort, we validated that this MMP1+CXCL5+ fibroblast population localizes to TB granulomas in independent TB-diseased lung tissues using immunohistochemistry assays and recapitulated the induction of this population from lung-derived fibroblasts through in vitro stimulation experiments with M. tb.
I further report an SPP1+ macrophage population, identified through single-cell analysis, that is enriched in TB-diseased lungs. Moreover, I identified prominent crosstalk between SPP1+ macrophages and fibroblasts in TB-diseased lungs that mimics similar observations in cancer and fibrosis, supporting an important role for this axis in TB. These distinctive cell populations could serve as potential targets for novel host-directed therapies in tuberculosis. In the second part, I developed a method to compress small-molecule phenotypic screens by designing randomized drug pools with replicates of distinct candidates across different drug pools. Our team demonstrated that linear regression models can be applied to computationally deconvolute the individual hits, enabling the identification of top effectors for downstream validation. We benchmarked and demonstrated the efficacy of this approach on a cost-effective imaging platform and then moved into applications on pancreatic ductal adenocarcinoma (PDAC), where we discovered a new perturbation-response signature to IL-4/IL-13 with prognostic value for patient survival. We also showcased the utility of this tool for understanding immunomodulation effects in heterogeneous mixtures of primary blood cells. Together, this thesis describes novel cellular features important to TB in human lungs, offering new insights that complement existing knowledge from animal models. It also presents a bold yet effective strategy to scale up phenotypic screens across different biological systems, providing a much-needed solution that bridges the translational gap between human disease and experimental models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leaky Vessels</title>
<link href="https://hdl.handle.net/1721.1/163527" rel="alternate"/>
<author>
<name>Cong, Frank (Haotian)</name>
</author>
<id>https://hdl.handle.net/1721.1/163527</id>
<updated>2025-11-06T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Leaky Vessels
Cong, Frank (Haotian)
This thesis serves as a written synthesis of my art practice. It starts with Louis Pasteur’s swan neck flask, Robert Boyle’s air pump, the theater of proof, and cabinets of natural historians to discuss the intentional gesture of containment, exclusion, and controlled permeability in scientific containers and the knowledge production paradigm behind them. I argue that these containers possess another intrinsic gesture – to leak – that opens space for social and cultural dimensions to engage. I propose “leaky vessels” as an analytical tool and a methodology that foregrounds the tension between intentional and unintentional in order to attend to the issues of care, belief, and labor that arise within this dynamic. Chapter 2 develops the concept of “leaky” in three aspects – aesthetic intervention, historical residue, institutional sabotage – by analyzing art practices by Eve Andrée Laramée, Oron Catts and Ionat Zurr, Candice Lin, Maria Thereza Alves, Critical Art Ensemble, and Claire Pentecost. Each case demonstrates how alternative approaches to apparatuses can expose and unsettle the systems of control that govern knowledge authority, allowing seepage, contamination, and embodied histories to return to spaces designed to exclude them. Chapters 3 and 4 turn inward to examine my own art practice, Guardian and The Guarded (2024), RapidRise (2024), and Sweat Dough (2025). In Chapter 3, I discuss the experience of entering the biomaker space at MIT and cultivating animal cells in a pendant, interrogating how care, proximity, and cosmology might challenge the lab’s sterile and utilitarian logic. Chapter 4 discusses the other two projects that operate outside the lab, where I investigate how bodily entanglement with dough fermentation can leak into the broader context of food cultures, labor histories, and symbolic inheritance. Together, these chapters propose a practice that embraces contamination and relationality. 
Those that leak in and leak out are precisely where new layers of meaning reside.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven and Dynamically Feasible Trajectory Generation for Real-Time Powered Descent Guidance and Robotic Exploration</title>
<link href="https://hdl.handle.net/1721.1/163526" rel="alternate"/>
<author>
<name>Briden, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/163526</id>
<updated>2025-11-06T03:04:07Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Data-Driven and Dynamically Feasible Trajectory Generation for Real-Time Powered Descent Guidance and Robotic Exploration
Briden, Julia
Increasingly complex and high-mass planetary missions require autonomous long-horizon trajectory generation to achieve dynamically feasible powered descent guidance. While analytical and indirect methods are computationally efficient, significant simplifications of the dynamics and constraints are required for both problem formulations. Numerical optimization algorithms enable minimum-energy trajectory generation subject to system dynamics and safety constraints but currently remain computationally infeasible on flight-grade processors, taking seconds to minutes to compute a single trajectory. The objective of this dissertation is to develop new algorithms to advance the state of the art in trajectory optimization and planning for autonomous systems. Due to the limited computational abilities of radiation-hardened processors and an increased need for spacecraft and robotic autonomy, specialized algorithms capable of running in real time constitute enabling technologies for space exploration. Three major contributions are developed in this dissertation. First, a transformer neural network-based algorithm is created to predict the tight constraints that recover the solution and parameter sets for constrained optimization problems. By training on prior runs of the numerical optimization solver, the learned mapping can construct a reduced problem formulation that recovers the optimal solution while reducing runtime by up to an order of magnitude. Second, a method to embed problem-specific information into the neural network training process was developed. By embedding the Lagrangian and Lagrangian gradient merit functions into the training process, neural network-generated control policies are biased toward constraint satisfaction. Third, an autonomous hybrid targeting and guidance algorithm was designed to utilize probabilistic risk maps and numerical optimization to select and navigate to minimum-risk landing sites.
Applications in planetary powered descent and landing, as well as rover path planning, are used to benchmark algorithm performance.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems</title>
<link href="https://hdl.handle.net/1721.1/163525" rel="alternate"/>
<author>
<name>Sanneman, Lindsay</name>
</author>
<author>
<name>Shah, Julie A</name>
</author>
<id>https://hdl.handle.net/1721.1/163525</id>
<updated>2026-03-08T03:29:16Z</updated>
<published>2022-06-22T00:00:00Z</published>
<summary type="text">The Situation Awareness Framework for Explainable AI (SAFE-AI) and Human Factors Considerations for XAI Systems
Sanneman, Lindsay; Shah, Julie A
Recent advances in artificial intelligence (AI) have drawn attention to the need for AI systems to be understandable to human users. The explainable AI (XAI) literature aims to enhance human understanding and human-AI team performance by providing users with necessary information about AI system behavior. Simultaneously, the human factors literature has long addressed important considerations that contribute to human performance, including how to determine human informational needs, human workload, and human trust in autonomous systems. Drawing from the human factors literature, we propose the Situation Awareness Framework for Explainable AI (SAFE-AI), a three-level framework for the development and evaluation of explanations about AI system behavior. Our proposed levels of XAI are based on the informational needs of human users, which can be determined using the levels of situation awareness (SA) framework from the human factors literature. Based on our levels of XAI framework, we also suggest a method for assessing the effectiveness of XAI systems. We further detail human workload considerations for determining the content and frequency of explanations as well as metrics that can be used to assess human workload. Finally, we discuss the importance of appropriately calibrating user trust in AI systems through explanations along with other trust-related considerations for XAI, and we detail metrics that can be used to evaluate user trust in these systems.
</summary>
<dc:date>2022-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remove hydrogen and store it too: an acid-in-clay based electro-chemical solution</title>
<link href="https://hdl.handle.net/1721.1/163524" rel="alternate"/>
<author>
<name>Kim, Kyung-Shik</name>
</author>
<author>
<name>Park, Jin-Sung</name>
</author>
<author>
<name>Yoon, Young-Chul</name>
</author>
<author>
<name>Kim, Jinwoo</name>
</author>
<author>
<name>Li, Ju</name>
</author>
<author>
<name>Yildiz, Bilge</name>
</author>
<author>
<name>Tasan, Cemal Cem</name>
</author>
<id>https://hdl.handle.net/1721.1/163524</id>
<updated>2026-03-08T03:29:15Z</updated>
<published>2024-11-14T00:00:00Z</published>
<summary type="text">Remove hydrogen and store it too: an acid-in-clay based electro-chemical solution
Kim, Kyung-Shik; Park, Jin-Sung; Yoon, Young-Chul; Kim, Jinwoo; Li, Ju; Yildiz, Bilge; Tasan, Cemal Cem
Extracting hydrogen from metallic components can open up a new pathway for preventing hydrogen embrittlement. To this end, we propose an electrochemically driven, all-solid method for hydrogen control, capable of both extracting and storing hydrogen simultaneously. In this approach, we employ acid-in-clay as a proton conducting electrolyte at room temperature. Through this electrochemical treatment, hydrogen is efficiently extracted from pre-charged steels, thereby restoring their tensile properties and preventing embrittlement. Moreover, it has been confirmed that the extracted hydrogen can be efficiently collected at the counter electrode, demonstrating the significant advantages of the process.
</summary>
<dc:date>2024-11-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contrasting interchain order and mixed ionic–electronic conduction in conjugated polymers: an isoindigo case study</title>
<link href="https://hdl.handle.net/1721.1/163523" rel="alternate"/>
<author>
<name>Meacham, Rebecca F</name>
</author>
<author>
<name>Roh, Heejung</name>
</author>
<author>
<name>Cunin, Camille E</name>
</author>
<author>
<name>Lee, Eric R</name>
</author>
<author>
<name>Li, Wenhao</name>
</author>
<author>
<name>Zhao, Yan</name>
</author>
<author>
<name>Samal, Sanket</name>
</author>
<author>
<name>Gumyusenge, Aristide</name>
</author>
<id>https://hdl.handle.net/1721.1/163523</id>
<updated>2026-03-08T03:29:16Z</updated>
<published>2024-10-22T00:00:00Z</published>
<summary type="text">Contrasting interchain order and mixed ionic–electronic conduction in conjugated polymers: an isoindigo case study
Meacham, Rebecca F; Roh, Heejung; Cunin, Camille E; Lee, Eric R; Li, Wenhao; Zhao, Yan; Samal, Sanket; Gumyusenge, Aristide
In mixed ionic–electronic conductive polymers, electronic conduction is optimal in tightly packed flat chains, while ionic conduction benefits from free volume accommodating large ions. For this reason, polymers with high crystallinity are often excluded from structure–property studies of high-performing mixed conductors due to their unbalanced transport, which favors electronic charges over ionic ones. Herein, we investigated how mixed conduction can be achieved in ordered conjugated polymers by systematically combining interchain order with side chain engineering. We synthesized a series of isoindigo (IID)-based copolymers with varying amounts of aliphatic and hydrophilic side chains and examined the impact of interchain order on mixed conduction. Through crystallographic, spectro-electrochemical, and molecular dynamics studies, we demonstrated that systematically introducing hydrophilic side chains reduces the bulk order and long-range aggregation by increasing chain flexibility while preserving the interchain stacking distances within crystalline domains. Testing these IID polymers in transistor devices revealed that ion insertion and device transconductance strongly depend on the amount of hydrophilic side chains. We demonstrated that glycol side chains can enhance mixed conduction while maintaining interchain order. Our findings suggest that the IID system is promising for designing polymers that can accommodate ionic species without compromising the chain ordering required for electronic conduction.
</summary>
<dc:date>2024-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>The simulation of a multi-product, multi-department factory</title>
<link href="https://hdl.handle.net/1721.1/163522" rel="alternate"/>
<author>
<name>Levy, Donald Stephen.</name>
</author>
<id>https://hdl.handle.net/1721.1/163522</id>
<updated>2025-11-05T05:32:12Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The simulation of a multi-product, multi-department factory
Levy, Donald Stephen.
Thesis: B.S., Massachusetts Institute of Technology, School of Industrial Management, 1964
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative elastic and plastic analysis and design of steel frames</title>
<link href="https://hdl.handle.net/1721.1/163521" rel="alternate"/>
<author>
<name>Padilla Valenzuela, Rodolfo Augusto.</name>
</author>
<id>https://hdl.handle.net/1721.1/163521</id>
<updated>2025-11-05T05:32:09Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">Comparative elastic and plastic analysis and design of steel frames
Padilla Valenzuela, Rodolfo Augusto.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1960
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of torpedo depth control</title>
<link href="https://hdl.handle.net/1721.1/163520" rel="alternate"/>
<author>
<name>Carleton, John Thomas.</name>
</author>
<id>https://hdl.handle.net/1721.1/163520</id>
<updated>2025-11-05T05:14:46Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Dynamics of torpedo depth control
Carleton, John Thomas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1992; Includes bibliographical references (leaf 72).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum theory of mode locking.</title>
<link href="https://hdl.handle.net/1721.1/163519" rel="alternate"/>
<author>
<name>Lang, W. R.
            (W. Roy)</name>
</author>
<id>https://hdl.handle.net/1721.1/163519</id>
<updated>2025-11-05T04:05:42Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Quantum theory of mode locking.
Lang, W. R.
            (W. Roy)
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Bibliography: leaves 88-90.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the engineering aspects of a wind tunnel magnetic suspension system</title>
<link href="https://hdl.handle.net/1721.1/163518" rel="alternate"/>
<author>
<name>Chrisinger, John Edvil.</name>
</author>
<id>https://hdl.handle.net/1721.1/163518</id>
<updated>2025-11-05T05:14:10Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">An investigation of the engineering aspects of a wind tunnel magnetic suspension system
Chrisinger, John Edvil.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 62).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extrageniculate and extrastriate affiliates of the geniculocortical pathway in the cat</title>
<link href="https://hdl.handle.net/1721.1/163517" rel="alternate"/>
<author>
<name>Berson, David Matthew.</name>
</author>
<id>https://hdl.handle.net/1721.1/163517</id>
<updated>2025-11-05T04:06:26Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">Extrageniculate and extrastriate affiliates of the geniculocortical pathway in the cat
Berson, David Matthew.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Psychology, 1980; Vita.; Bibliography: leaves 114-126.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New theoretical methods for the study of the electronic structure of solids.</title>
<link href="https://hdl.handle.net/1721.1/163516" rel="alternate"/>
<author>
<name>Mele, Eugene John.</name>
</author>
<id>https://hdl.handle.net/1721.1/163516</id>
<updated>2025-11-05T04:06:18Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">New theoretical methods for the study of the electronic structure of solids.
Mele, Eugene John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A financial history of the Boston elevated</title>
<link href="https://hdl.handle.net/1721.1/163515" rel="alternate"/>
<author>
<name>Stallman, Edward B.</name>
</author>
<author>
<name>Bush, Horace McM.</name>
</author>
<id>https://hdl.handle.net/1721.1/163515</id>
<updated>2025-11-05T05:31:53Z</updated>
<published>1926-01-01T00:00:00Z</published>
<summary type="text">A financial history of the Boston elevated
Stallman, Edward B.; Bush, Horace McM.
Thesis: B.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1926; Includes bibliographical references (leaf 34).
</summary>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular engineering of a cryptic epitope in Spike RBD improves manufacturability and neutralizing breadth against SARS-CoV-2 variants</title>
<link href="https://hdl.handle.net/1721.1/163514" rel="alternate"/>
<author>
<name>Rodriguez-Aponte, Sergio A</name>
</author>
<author>
<name>Dalvie, Neil C</name>
</author>
<author>
<name>Wong, Ting Y</name>
</author>
<author>
<name>Johnston, Ryan S</name>
</author>
<author>
<name>Naranjo, Christopher A</name>
</author>
<author>
<name>Bajoria, Sakshi</name>
</author>
<author>
<name>Kumru, Ozan S</name>
</author>
<author>
<name>Kaur, Kawaljit</name>
</author>
<author>
<name>Russ, Brynnan P</name>
</author>
<author>
<name>Lee, Katherine S</name>
</author>
<author>
<name>Cyphert, Holly A</name>
</author>
<author>
<name>Barbier, Mariette</name>
</author>
<author>
<name>Rao, Harish D</name>
</author>
<author>
<name>Rajurkar, Meghraj P</name>
</author>
<author>
<name>Lothe, Rakesh R</name>
</author>
<author>
<name>Shaligram, Umesh S</name>
</author>
<author>
<name>Batwal, Saurabh</name>
</author>
<author>
<name>Chandrasekaran, Rahul</name>
</author>
<author>
<name>Nagar, Gaurav</name>
</author>
<author>
<name>Kleanthous, Harry</name>
</author>
<author>
<name>Biswas, Sumi</name>
</author>
<author>
<name>Bevere, Justin R</name>
</author>
<author>
<name>Joshi, Sangeeta B</name>
</author>
<author>
<name>Volkin, David B</name>
</author>
<author>
<name>Damron, F Heath</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163514</id>
<updated>2025-11-04T03:07:58Z</updated>
<published>2023-01-27T00:00:00Z</published>
<summary type="text">Molecular engineering of a cryptic epitope in Spike RBD improves manufacturability and neutralizing breadth against SARS-CoV-2 variants
Rodriguez-Aponte, Sergio A; Dalvie, Neil C; Wong, Ting Y; Johnston, Ryan S; Naranjo, Christopher A; Bajoria, Sakshi; Kumru, Ozan S; Kaur, Kawaljit; Russ, Brynnan P; Lee, Katherine S; Cyphert, Holly A; Barbier, Mariette; Rao, Harish D; Rajurkar, Meghraj P; Lothe, Rakesh R; Shaligram, Umesh S; Batwal, Saurabh; Chandrasekaran, Rahul; Nagar, Gaurav; Kleanthous, Harry; Biswas, Sumi; Bevere, Justin R; Joshi, Sangeeta B; Volkin, David B; Damron, F Heath; Love, J Christopher
There is a continued need for sarbecovirus vaccines that can be manufactured and distributed in low- and middle-income countries (LMICs). Subunit protein vaccines are manufactured at large scales at low costs, have less stringent temperature requirements for distribution in LMICs, and several candidates have shown protection against SARS-CoV-2. We previously reported an engineered variant of the SARS-CoV-2 Spike protein receptor binding domain antigen (RBD-L452K-F490W; RBD-J) with enhanced manufacturability and immunogenicity compared to the ancestral RBD. Here, we report a second-generation engineered RBD antigen (RBD-J6) with two additional mutations to a hydrophobic cryptic epitope in the RBD core, S383D and L518D, that further improved expression titers and biophysical stability. RBD-J6 retained binding affinity to human convalescent sera and to all tested neutralizing antibodies except antibodies that target the class IV epitope on the RBD core. K18-hACE2 transgenic mice immunized with three doses of a Beta variant of RBD-J6 displayed on a virus-like particle (VLP) generated neutralizing antibodies (nAb) to nine SARS-CoV-2 variants of concern at similar levels as two doses of Comirnaty. The vaccinated mice were also protected from challenge with Alpha or Beta SARS-CoV-2. This engineered antigen could be useful for modular RBD-based subunit vaccines to enhance manufacturability and global access, or for further development of variant-specific or broadly acting booster vaccines.
</summary>
<dc:date>2023-01-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Immunotherapy-induced neutralizing antibodies disrupt allergen binding and sustain allergen tolerance in peanut allergy</title>
<link href="https://hdl.handle.net/1721.1/163513" rel="alternate"/>
<author>
<name>LaHood, Nicole A</name>
</author>
<author>
<name>Min, Jungki</name>
</author>
<author>
<name>Keswani, Tarun</name>
</author>
<author>
<name>Richardson, Crystal M</name>
</author>
<author>
<name>Amoako, Kwasi</name>
</author>
<author>
<name>Zhou, Jingjia</name>
</author>
<author>
<name>Marini-Rapoport, Orlee</name>
</author>
<author>
<name>Bernard, Hervé</name>
</author>
<author>
<name>Hazebrouck, Stéphane</name>
</author>
<author>
<name>Shreffler, Wayne G</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Pomes, Anna</name>
</author>
<author>
<name>Pedersen, Lars C</name>
</author>
<author>
<name>Mueller, Geoffrey A</name>
</author>
<author>
<name>Patil, Sarita U</name>
</author>
<id>https://hdl.handle.net/1721.1/163513</id>
<updated>2025-11-04T03:08:00Z</updated>
<published>2023-01-17T00:00:00Z</published>
<summary type="text">Immunotherapy-induced neutralizing antibodies disrupt allergen binding and sustain allergen tolerance in peanut allergy
LaHood, Nicole A; Min, Jungki; Keswani, Tarun; Richardson, Crystal M; Amoako, Kwasi; Zhou, Jingjia; Marini-Rapoport, Orlee; Bernard, Hervé; Hazebrouck, Stéphane; Shreffler, Wayne G; Love, J Christopher; Pomes, Anna; Pedersen, Lars C; Mueller, Geoffrey A; Patil, Sarita U
In IgE-mediated food allergies, exposure to the allergen activates systemic allergic responses. Oral immunotherapy (OIT) treats food allergies through incremental increases in oral allergen exposure. However, OIT only induces sustained clinical tolerance and decreased basophil sensitivity in a subset of individuals despite increases in circulating allergen-specific IgG in all treated individuals. Therefore, we examined the allergen-specific antibodies from 2 OIT cohorts of patients with sustained and transient responses. Here, we compared antibodies from individuals with sustained or transient responses and discovered specific tolerance-associated conformational epitopes of the immunodominant allergen Ara h 2 recognized by neutralizing antibodies. First, we identified what we believe to be previously unknown conformational, intrahelical epitopes using x-ray crystallography with recombinant antibodies. We then identified epitopes only recognized in sustained tolerance. Finally, antibodies recognizing tolerance-associated epitopes effectively neutralized allergen to suppress IgE-mediated effector cell activation. Our results demonstrate the molecular basis of antibody-mediated protection in IgE-mediated food allergy, by defining how these antibodies disrupt IgE-allergen interactions to prevent allergic reactions. Our approach to studying the structural and functional basis for neutralizing antibodies demonstrates the clinical relevance of specific antibody clones in antibody-mediated tolerance. We anticipate that our findings will form the foundation for treatments of peanut allergy using neutralizing antibodies and hypoallergens.
</summary>
<dc:date>2023-01-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tissue-specific abundance of interferon-gamma drives regulatory T cells to restrain DC1-mediated priming of cytotoxic T cells against lung cancer</title>
<link href="https://hdl.handle.net/1721.1/163512" rel="alternate"/>
<author>
<name>Zagorulya, Maria</name>
</author>
<author>
<name>Yim, Leon</name>
</author>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Edwards, Austin</name>
</author>
<author>
<name>Torres-Mejia, Elen</name>
</author>
<author>
<name>Momin, Noor</name>
</author>
<author>
<name>McCreery, Chloe V</name>
</author>
<author>
<name>Zamora, Izabella L</name>
</author>
<author>
<name>Horton, Brendan L</name>
</author>
<author>
<name>Fox, James G</name>
</author>
<author>
<name>Wittrup, K Dane</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Spranger, Stefani</name>
</author>
<id>https://hdl.handle.net/1721.1/163512</id>
<updated>2025-11-04T03:07:54Z</updated>
<published>2023-02-14T00:00:00Z</published>
<summary type="text">Tissue-specific abundance of interferon-gamma drives regulatory T cells to restrain DC1-mediated priming of cytotoxic T cells against lung cancer
Zagorulya, Maria; Yim, Leon; Morgan, Duncan M; Edwards, Austin; Torres-Mejia, Elen; Momin, Noor; McCreery, Chloe V; Zamora, Izabella L; Horton, Brendan L; Fox, James G; Wittrup, K Dane; Love, J Christopher; Spranger, Stefani
Local environmental factors influence CD8+ T cell priming in lymph nodes (LNs). Here, we sought to understand how factors unique to the tumor-draining mediastinal LN (mLN) impact CD8+ T cell responses toward lung cancer. Type 1 conventional dendritic cells (DC1s) showed a mLN-specific failure to induce robust cytotoxic T cell responses. Using regulatory T (Treg) cell depletion strategies, we found that Treg cells suppressed DC1s in a spatially coordinated manner within tissue-specific microniches in the mLN. Treg cell suppression required MHC II-dependent contact between DC1s and Treg cells. Elevated levels of IFN-γ drove differentiation of Treg cells into Th1-like effector Treg cells in the mLN. In patients with cancer, Treg cell Th1 polarization, but not CD8+/Treg cell ratios, correlated with poor responses to checkpoint blockade immunotherapy. Thus, IFN-γ in the mLN skews Treg cells toward a Th1-like effector phenotype, driving their close interaction with DC1s and subsequent suppression of cytotoxic T cell responses.
</summary>
<dc:date>2023-02-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimal purification method enables developability assessment of recombinant proteins</title>
<link href="https://hdl.handle.net/1721.1/163511" rel="alternate"/>
<author>
<name>Rodriguez‐Aponte, Sergio A</name>
</author>
<author>
<name>Naranjo, Christopher A</name>
</author>
<author>
<name>Johnston, Ryan S</name>
</author>
<author>
<name>Dalvie, Neil C</name>
</author>
<author>
<name>Crowell, Laura E</name>
</author>
<author>
<name>Bajoria, Sakshi</name>
</author>
<author>
<name>Kumru, Ozan S</name>
</author>
<author>
<name>Joshi, Sangeeta B</name>
</author>
<author>
<name>Volkin, David B</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163511</id>
<updated>2025-11-04T03:07:45Z</updated>
<published>2023-03-17T00:00:00Z</published>
<summary type="text">Minimal purification method enables developability assessment of recombinant proteins
Rodriguez‐Aponte, Sergio A; Naranjo, Christopher A; Johnston, Ryan S; Dalvie, Neil C; Crowell, Laura E; Bajoria, Sakshi; Kumru, Ozan S; Joshi, Sangeeta B; Volkin, David B; Love, J Christopher
Analytical characterization of proteins is a critical task for developing therapeutics and subunit vaccine candidates. Assessing candidates with a battery of biophysical assays can inform the selection of one that exhibits properties consistent with a given target product profile (TPP). Such assessments, however, require several milligrams of purified protein, and ideal assessments of the physicochemical attributes of the proteins should not include unnatural modifications like peptide tags for purification. Here, we describe a fast two‐stage minimal purification process for recombinant proteins secreted by the yeast host Komagataella phaffii from a 20 mL culture supernatant. This method comprises a buffer exchange and filtration with a Q‐membrane filter, and we demonstrate sufficient removal of key supernatant impurities including host‐cell proteins (HCPs) and DNA with yields of 1–2 mg and &gt;60% purity. This degree of purity enables characterizing the resulting proteins using affinity binding, mass spectrometry, and differential scanning calorimetry. We first evaluated this method to purify an engineered SARS‐CoV‐2 subunit protein antigen and compared the purified protein to a conventional two‐step chromatographic process. We then applied this method to compare several SARS‐CoV‐2 RBD sequences. Finally, we show this simple process can be applied to a range of other proteins, including a single‐domain antibody, a rotavirus protein subunit, and a human growth hormone. This simple and fast developability methodology obviates the need for genetic tagging or full chromatographic development when assessing and comparing early‐stage protein therapeutics and vaccine candidates produced in K. phaffii.
</summary>
<dc:date>2023-03-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interstellar Mapping And Acceleration Probe: The NASA IMAP Mission</title>
<link href="https://hdl.handle.net/1721.1/163510" rel="alternate"/>
<author>
<name>McComas, D. J.</name>
</author>
<author>
<name>Christian, E. R.</name>
</author>
<author>
<name>Schwadron, N. A.</name>
</author>
<author>
<name>Gkioulidou, M.</name>
</author>
<author>
<name>Allegrini, F.</name>
</author>
<author>
<name>Baker, D. N.</name>
</author>
<author>
<name>Bzowski, M.</name>
</author>
<author>
<name>Clark, G.</name>
</author>
<author>
<name>Cohen, C. M. S.</name>
</author>
<author>
<name>Cohen, I.</name>
</author>
<id>https://hdl.handle.net/1721.1/163510</id>
<updated>2025-11-04T03:07:34Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Interstellar Mapping And Acceleration Probe: The NASA IMAP Mission
McComas, D. J.; Christian, E. R.; Schwadron, N. A.; Gkioulidou, M.; Allegrini, F.; Baker, D. N.; Bzowski, M.; Clark, G.; Cohen, C. M. S.; Cohen, I.
NASA’s Interstellar Mapping and Acceleration Probe (IMAP) mission provides extensive and well-coordinated new observations of the inner and outer heliosphere and scientific closure on two of the most important topics in Heliophysics: 1) the acceleration of charged particles and 2) the interaction of the solar wind with the local interstellar medium. These topics are intimately coupled because particles accelerated in the inner heliosphere propagate outward through the solar wind and mediate its interaction with the very local interstellar medium (VLISM). The IMAP mission is designed to address these topics, provide extensive new real-time measurements critical to Space Weather observations and predictions, and much more. IMAP’s ten instruments are mounted on a simple, spinning spacecraft that orbits about the first Sun-Earth Lagrange point, L1, and repoints its Sun-facing solar arrays and spin axis toward the Sun each day. The instruments provide complete and synergistic observations that examine particle energization processes at 1 au while simultaneously probing the global heliospheric interaction with the VLISM. The 1 au in-situ observations include solar wind electrons and ions from solar wind through suprathermal energies, pickup and energetic ions, as well as the interplanetary magnetic field. IMAP provides Energetic Neutral Atom (ENA) global imaging of the outer heliosphere via ENAs from tens of eV up through hundreds of keV, as well as observations of interstellar neutral atoms traversing the heliosphere. IMAP also directly measures interstellar dust that enters the heliosphere and the solar-wind-modulated ultraviolet glow. This paper provides the mission overview for the full IMAP mission, acts as a roadmap to the other papers in this IMAP collection and provides the citable reference for the overall IMAP mission going forward.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Passive Regolith Sampler: From Concept to Delivery to the Lunar Surface</title>
<link href="https://hdl.handle.net/1721.1/163509" rel="alternate"/>
<author>
<name>Stober, Keith J.</name>
</author>
<author>
<name>Dorrington, Scott</name>
</author>
<author>
<name>Rupasinghe, Dinuri</name>
</author>
<author>
<name>Mao, Claire</name>
</author>
<author>
<name>Romero, Elizabeth</name>
</author>
<author>
<name>Moswane, Rethabile</name>
</author>
<author>
<name>Zhang, Jackson</name>
</author>
<author>
<name>Mahfouth AlShehhi, Abdulla</name>
</author>
<author>
<name>Els, Sebastian G.</name>
</author>
<author>
<name>Wood, Danielle</name>
</author>
<id>https://hdl.handle.net/1721.1/163509</id>
<updated>2025-11-04T03:07:40Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">The Passive Regolith Sampler: From Concept to Delivery to the Lunar Surface
Stober, Keith J.; Dorrington, Scott; Rupasinghe, Dinuri; Mao, Claire; Romero, Elizabeth; Moswane, Rethabile; Zhang, Jackson; Mahfouth AlShehhi, Abdulla; Els, Sebastian G.; Wood, Danielle
This paper outlines the development and testing of two light-weight, low-cost, passive sensors developed by the MIT Space Enabled Research Group that were delivered to the Moon in 2023 onboard the Rashid-1 rover as part of the Emirates Lunar Mission. The Passive Regolith Sampler (PRS) is a simple device mounted to the wheels of the rover, containing an aluminum tray with a cover plate of perforated holes of varying size and spacing. The device uses the motion of the rover wheel to press the device into the lunar surface, capturing small samples of lunar regolith in the holes. The Passive Wax Thermometer (PWT) is a collection of 10 wax samples, contained in individual capsules covered with sapphire windows. Each wax sample is an alkane with a different melting temperature determined by its chemical formula. Each wax sample undergoes temperature-dependent changes in opacity, providing a method for inferring temperature via image analysis. In preparation for lunar surface operations, the Space Enabled team performed a series of laboratory experiments and analytical studies aiming to replicate conditions expected to be encountered during the mission. These experiments and analyses explored the physical mechanisms of the rover/regolith interaction, the lighting and thermal conditions at the landing site, and the quality of images captured from the rover mast camera. This paper outlines the results of these experiments and analyses, and their influence on the design and operations planning for the two payloads. Due to landing anomalies, the 2023 mission did not complete lunar surface operations; further work is planned to explore future operational opportunities.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Von Neumann-Morgenstern stability and internal closedness in matching theory</title>
<link href="https://hdl.handle.net/1721.1/163508" rel="alternate"/>
<author>
<name>Faenza, Yuri</name>
</author>
<author>
<name>Stein, Cliff</name>
</author>
<author>
<name>Wan, Jia</name>
</author>
<id>https://hdl.handle.net/1721.1/163508</id>
<updated>2025-11-04T03:07:41Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Von Neumann-Morgenstern stability and internal closedness in matching theory
Faenza, Yuri; Stein, Cliff; Wan, Jia
Gale and Shapley’s stability criterion enjoys a rich mathematical structure, which propelled its application in various settings. Although immensely popular, the approach by Gale and Shapley cannot encompass all the different features that arise in applications, motivating the search for alternative solution concepts. We investigate alternatives that rely on the concept of internal stability, a notion introduced for abstract games by von Neumann and Morgenstern and motivated by the need to find a set of mutually compatible solutions. The set of stable matchings is internally stable. However, the class of internally stable sets is much richer, for an internally stable set of matchings may also include unstable matchings and/or exclude stable ones. In this paper, we focus on two families of internally stable sets of matchings: von Neumann-Morgenstern stable and internally closed. We study algorithmic questions around these concepts in both the marriage and the roommate models. One of our results implies that, in the marriage model, internally closed sets are an alternative to stable matchings that is as tractable as stable matchings themselves, a fairly rare occurrence in the area. Both our positive and negative results rely on new structural insights and extensions of classical algebraic structures associated with sets of matchings, which we believe to be of independent interest.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Order-forcing in Neural Codes</title>
<link href="https://hdl.handle.net/1721.1/163507" rel="alternate"/>
<author>
<name>Jeffs, R. A.</name>
</author>
<author>
<name>Lienkaemper, Caitlin</name>
</author>
<author>
<name>Youngs, Nora</name>
</author>
<id>https://hdl.handle.net/1721.1/163507</id>
<updated>2025-11-04T03:07:38Z</updated>
<published>2025-10-28T00:00:00Z</published>
<summary type="text">Order-forcing in Neural Codes
Jeffs, R. A.; Lienkaemper, Caitlin; Youngs, Nora
Convex neural codes are subsets of the Boolean lattice that record the intersection patterns of convex sets in Euclidean space. Much work in recent years has focused on finding combinatorial criteria on codes that can be used to classify whether or not a code is convex. In this paper we introduce order-forcing, a combinatorial tool which recognizes when certain regions in a realization of a code must appear along a line segment between other regions. We use order-forcing to construct novel examples of non-convex codes, and to expand existing families of examples. We also construct a family of codes which shows that a dimension bound of Cruz, Giusti, Itskov, and Kronholm (referred to as monotonicity of open convexity) is tight in all dimensions.
</summary>
<dc:date>2025-10-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-evolution of alpha-helical transmembrane protein residues: large-scale variant profiling and complete mutational landscape of 2277 known PDB entries representing 504 unique human protein sequences</title>
<link href="https://hdl.handle.net/1721.1/163506" rel="alternate"/>
<author>
<name>Karagöl, Taner</name>
</author>
<author>
<name>Karagöl, Alper</name>
</author>
<author>
<name>Zhang, Shuguang</name>
</author>
<id>https://hdl.handle.net/1721.1/163506</id>
<updated>2025-11-04T03:07:31Z</updated>
<published>2025-09-24T00:00:00Z</published>
<summary type="text">Co-evolution of alpha-helical transmembrane protein residues: large-scale variant profiling and complete mutational landscape of 2277 known PDB entries representing 504 unique human protein sequences
Karagöl, Taner; Karagöl, Alper; Zhang, Shuguang
Membrane proteins play fundamental roles in cellular function, yet the evolutionary dynamics of their amino acid composition remain poorly understood. Our current study investigates the substitutional landscape and evolutionary patterns of hydrophilic and hydrophobic residues in membrane α-helical proteins, addressing a significant gap in our knowledge of protein evolution. We analyzed 2277 high-resolution protein structures from the RCSB Protein Data Bank corresponding to 458 unique PDB structures, 504 UniProt transmembrane entries and their AlphaMissense predicted mutational libraries including more than 5.8 million amino acid substitutions, focusing on known transmembrane α-helical proteins in Homo sapiens. Our analysis showed that the pathological outcome of the substitutions is diverse, as nonpolar to polar changes showed higher pathological scores in general. Notably, F &lt;=&gt; Y substitutions showed significantly lower pathological scores. Our further analysis revealed a significant asymmetry in the evolutionary frequencies of polar and nonpolar amino acids. We identified key residue pairs driving this asymmetry, with F &lt;=&gt; Y, A &lt;=&gt; T, V &lt;=&gt; T and A &lt;=&gt; S co-evolution diverging from the expected negative correlations (Spearman’s rho &gt; 0.20, p &lt; 0.001). The V &lt;=&gt; T substitution via an alanine intermediate and the G &lt;=&gt; N substitution via a serine intermediate lower their statistical barrier, which would otherwise require two sequential base changes. We propose two evolutionary game theory (EGT) based models to explain their diversification, with partial correlation analysis on residue frequencies in homolog sequences. These mathematical insights suggest a previously unrecognized evolutionary pressure, potentially linked to functional diversification, which could be targeted to combat drug resistance. 
Our results offer insights into membrane protein evolution and may inform improved methods for protein structure prediction and design.
</summary>
<dc:date>2025-09-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perforation of the host cell plasma membrane during Toxoplasma invasion requires rhoptry exocytosis</title>
<link href="https://hdl.handle.net/1721.1/163505" rel="alternate"/>
<author>
<name>Male, Frances</name>
</author>
<author>
<name>Kegawa, Yuto</name>
</author>
<author>
<name>Blank, Paul S.</name>
</author>
<author>
<name>Jiménez-Munguía, Irene</name>
</author>
<author>
<name>Sidik, Saima M.</name>
</author>
<author>
<name>Valleau, Dylan</name>
</author>
<author>
<name>Lourido, Sebastian</name>
</author>
<author>
<name>Lebrun, Maryse</name>
</author>
<author>
<name>Zimmerberg, Joshua</name>
</author>
<author>
<name>Ward, Gary E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163505</id>
<updated>2025-11-04T03:07:28Z</updated>
<published>2025-09-19T00:00:00Z</published>
<summary type="text">Perforation of the host cell plasma membrane during Toxoplasma invasion requires rhoptry exocytosis
Male, Frances; Kegawa, Yuto; Blank, Paul S.; Jiménez-Munguía, Irene; Sidik, Saima M.; Valleau, Dylan; Lourido, Sebastian; Lebrun, Maryse; Zimmerberg, Joshua; Ward, Gary E.
Toxoplasma gondii is an obligate intracellular parasite. Proteins released during host cell invasion from apical secretory organelles known as rhoptries are delivered into the host cell cytosol to perform functions critical for parasite survival and virulence. How these effector proteins move across the host cell plasma membrane is unknown but may involve a previously noted temporary loss of host cell plasma membrane barrier integrity. Here, we use high-speed, multi-wavelength fluorescence imaging to spatially monitor the barrier integrity of the host cell plasma membrane, in real time, during invasion. The data reveal that early in invasion the parasite creates a transient perforation in the host cell membrane. The perforation occurs at the point on the host membrane in contact with the parasite’s apical end. Parasites depleted of any of five proteins known to be required for rhoptry exocytosis are unable to perforate the host cell membrane. These data suggest a model in which perforating agents stored within rhoptries are released onto the host cell at the initiation of invasion to create a conduit for the delivery of rhoptry effector proteins.
</summary>
<dc:date>2025-09-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unknottedness of free boundary minimal surfaces and self-shrinkers</title>
<link href="https://hdl.handle.net/1721.1/163504" rel="alternate"/>
<author>
<name>Chu, Sabine</name>
</author>
<author>
<name>Franz, Giada</name>
</author>
<id>https://hdl.handle.net/1721.1/163504</id>
<updated>2025-11-04T03:07:21Z</updated>
<published>2025-09-08T00:00:00Z</published>
<summary type="text">Unknottedness of free boundary minimal surfaces and self-shrinkers
Chu, Sabine; Franz, Giada
We study unknottedness for free boundary minimal surfaces in a three-dimensional Riemannian manifold with nonnegative Ricci curvature and strictly convex boundary, and for self-shrinkers in three-dimensional Euclidean space. To do so, we introduce the concepts of the boundary graph for free boundary minimal surfaces and of the graph at infinity for self-shrinkers. We prove that these surfaces are unknotted in the sense that any two such surfaces with isomorphic boundary graphs or graphs at infinity are smoothly isotopic.
</summary>
<dc:date>2025-09-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>The wavefront set: bounds for the Langlands parameter</title>
<link href="https://hdl.handle.net/1721.1/163503" rel="alternate"/>
<author>
<name>Ciubotaru, Dan</name>
</author>
<author>
<name>Kim, Ju-Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/163503</id>
<updated>2025-11-04T03:07:12Z</updated>
<published>2025-09-09T00:00:00Z</published>
<summary type="text">The wavefront set: bounds for the Langlands parameter
Ciubotaru, Dan; Kim, Ju-Lee
For an irreducible smooth representation of a connected reductive p-adic group, two important associated invariants are the wavefront set and the (partly conjectural) Langlands parameter. While a wavefront set consists of p-adic nilpotent orbits, one constituent of the Langlands parameter is a complex nilpotent orbit in the dual Lie algebra. For unipotent representations in the sense of Lusztig, the corresponding nilpotent orbits on the two sides are related via the Lusztig–Spaltenstein duality (Ciubotaru et al. in Am J Math arXiv:2112.14354v4, J Reine Angew Math (Crelles J) 823:191–253, 2025). In this paper, we formulate a general upper-bound conjecture and several variants relating the nilpotent orbits that appear in the wavefront set and in the Langlands parameter. We also verify these expectations in some cases, including the depth-zero supercuspidal representations of classical groups and all the irreducible representations of G2.
</summary>
<dc:date>2025-09-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>An open-source and low-cost dual-extruder 3D printer for macroscale biotic materials</title>
<link href="https://hdl.handle.net/1721.1/163502" rel="alternate"/>
<author>
<name>de Alva, Jesse P.</name>
</author>
<author>
<name>Buehler, Markus</name>
</author>
<id>https://hdl.handle.net/1721.1/163502</id>
<updated>2025-11-04T03:07:27Z</updated>
<published>2025-10-22T00:00:00Z</published>
<summary type="text">An open-source and low-cost dual-extruder 3D printer for macroscale biotic materials
de Alva, Jesse P.; Buehler, Markus
This work presents the design and fabrication of a novel, dual-extruder biotic 3D printer, tailored for precise deposition of natural biomaterials such as pectin, chitosan, and cellulose. Moving beyond the limitations of traditional thermoplastic extrusion which relies on non-renewable plastics and produces significant waste, this printer utilizes a syringe-based mechanical extruder to deposit viscous biotic material hydrogels. The integration of a dual-extruder system enables the creation of multi-material prints, offering new possibilities for sustainable and biotic manufacturing. Designed with accessibility and versatility in mind, the system features user-friendly operation suitable for non-experts with open-source hardware and software. By providing a robust, customizable, and open-source platform, this work aims to empower researchers, educators, and innovators to advance biomaterials research and expand the reach of sustainable additive manufacturing. The printer fosters a collaborative community and lays the groundwork for further exploration of biological designs and materials.
</summary>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Observation of the doubly-charmed-baryon decay Ξcc++ → Ξc0π+π+</title>
<link href="https://hdl.handle.net/1721.1/163501" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>The LHCb Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163501</id>
<updated>2025-11-04T03:07:25Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Observation of the doubly-charmed-baryon decay Ξ cc + + → Ξ c 0 π + π +
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; The LHCb Collaboration
A search for the doubly-charmed-baryon decay Ξcc++ → Ξc0π+π+ is performed using proton-proton collision data collected by the LHCb experiment at a centre-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 5.4 fb−1. A significant structure consistent with the Ξcc++ baryon is observed in the Ξc0π+π+ invariant-mass spectrum. Using the Ξcc++ → Λc+K−π+π+ decay as the normalisation channel, the branching fraction ratio B(Ξcc++ → Ξc0π+π+)/B(Ξcc++ → Λc+K−π+π+) is measured to be 1.37 ± 0.18 (stat) ± 0.09 (syst) ± 0.35 (ext). This measurement provides critical input for testing QCD factorisation methods in the weak decays of doubly-heavy baryons, particularly in quantifying nonperturbative effects such as final-state interactions and resonance contributions to the hadronisation process.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental evidence for nodal superconducting gap in moiré graphene</title>
<link href="https://hdl.handle.net/1721.1/163500" rel="alternate"/>
<author>
<name>Park, Jeong Min</name>
</author>
<author>
<name>Sun, Shuwen</name>
</author>
<author>
<name>Watanabe, Kenji</name>
</author>
<author>
<name>Taniguchi, Takashi</name>
</author>
<author>
<name>Jarillo-Herrero, Pablo</name>
</author>
<id>https://hdl.handle.net/1721.1/163500</id>
<updated>2025-11-06T21:55:26Z</updated>
<published>2025-11-06T00:00:00Z</published>
<summary type="text">Experimental evidence for nodal superconducting gap in moiré graphene
Park, Jeong Min; Sun, Shuwen; Watanabe, Kenji; Taniguchi, Takashi; Jarillo-Herrero, Pablo
Understanding the nature of superconductivity in magic-angle graphene remains challenging. A key difficulty lies in discerning the different energy scales in this strongly interacting system, particularly the superconducting gap. Here, we report simultaneous tunneling spectroscopy and transport measurements of magic-angle twisted trilayer graphene. This approach allows us to identify two coexisting V-shaped tunneling gaps with different energy scales: a distinct low-energy superconducting gap that vanishes at the superconducting critical temperature and magnetic field, and a higher-energy pseudogap. The superconducting tunneling spectra display a linear gap-filling behavior with temperature and magnetic field and exhibit the Volovik effect, consistent with a nodal order parameter. Our work suggests an unconventional nature of the superconducting gap and establishes an experimental framework for multidimensional investigation of tunable quantum materials.
</summary>
<dc:date>2025-11-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating matrix effects in oil and gas wastewater analysis: LC-MS/MS method for ethanolamines</title>
<link href="https://hdl.handle.net/1721.1/163499" rel="alternate"/>
<author>
<name>de Vera, Glen Andrew D</name>
</author>
<author>
<name>Caldiero, Loredana</name>
</author>
<author>
<name>Conte, Giovanni</name>
</author>
<author>
<name>Plata, Desirée L</name>
</author>
<id>https://hdl.handle.net/1721.1/163499</id>
<updated>2025-11-04T03:07:56Z</updated>
<published>2024-12-26T00:00:00Z</published>
<summary type="text">Mitigating matrix effects in oil and gas wastewater analysis: LC-MS/MS method for ethanolamines
de Vera, Glen Andrew D; Caldiero, Loredana; Conte, Giovanni; Plata, Desirée L
The high salinity and organic content in oil and gas wastewaters can cause ion suppression during liquid chromatography mass spectrometry (LC/MS) analysis, diminishing the sensitivity and accuracy of measurements in available methods. This suppression is severe for low molecular weight organic compounds such as ethanolamines (e.g., monoethanolamine (MEA), diethanolamine (DEA), triethanolamine (TEA), N-methyldiethanolamine (MDEA), and N-ethyldiethanolamine (EDEA)). Here, we deployed solid phase extraction (SPE), mixed-mode LC, triple quadrupole MS with positive electrospray ionization (ESI), and a suite of stable isotope standards (i.e., one per target compound) to correct for ion suppression by salts and organic matter, SPE losses, and instrument variability. The method was evaluated in produced water samples from Italy (NaCl salinity from 8110–18 100 mg L−1; diesel range organic compounds from 5.1–7.9 mg L−1). After correcting for matrix effects, ethanolamines in produced water samples were quantified. The first batch of samples (March 2019) had 37–646 μg L−1 total ethanolamines. The second batch of samples (September 2019) had greater ethanolamine content of 77–3976 μg L−1, which was attributed to a reduced water cut during oil production, enhancing the proportionate abundance of these compounds in the aqueous phase. In all samples, DEA and MEA were the dominant ethanolamine species. Possible sources (e.g., corrosion inhibitor and biotransformation) and natural attenuation potential during storage (e.g., at different temperatures, acidification, and addition of sodium azide) were investigated. The developed analytical method enables further investigation of the fate of low molecular weight organic additives in oil and gas development and provides an enhanced ability to evaluate risks associated with chemical release to the environment.
</summary>
<dc:date>2024-12-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitivity analysis of aromatic chemistry to gas-phase kinetics in a dark molecular cloud model</title>
<link href="https://hdl.handle.net/1721.1/163498" rel="alternate"/>
<author>
<name>Byrne, Alex N</name>
</author>
<author>
<name>Xue, Ci</name>
</author>
<author>
<name>Van Voorhis, Troy</name>
</author>
<author>
<name>McGuire, Brett A</name>
</author>
<id>https://hdl.handle.net/1721.1/163498</id>
<updated>2025-11-04T03:07:57Z</updated>
<published>2024-10-21T00:00:00Z</published>
<summary type="text">Sensitivity analysis of aromatic chemistry to gas-phase kinetics in a dark molecular cloud model
Byrne, Alex N; Xue, Ci; Van Voorhis, Troy; McGuire, Brett A
The increasingly large number of complex organic molecules detected in the interstellar medium necessitates robust kinetic models that can be relied upon for investigating the involved chemical processes. Such models require rate coefficients for each of the thousands of reactions; the values of these are often estimated or extrapolated, leading to large uncertainties that are rarely quantified. We have performed a global Monte Carlo and a more local one-at-a-time sensitivity analysis on the gas-phase rate coefficients in a 3-phase dark cloud model. Time-dependent sensitivities have been calculated using four metrics to determine key reactions for the overall network as well as for the cyanonaphthalene molecule in particular, an important interstellar species that is severely under-produced by current models. All four metrics find that reactions involving small, reactive species that initiate hydrocarbon growth have large effects on the overall network. Cyanonaphthalene is most sensitive to a number of these reactions as well as ring-formation of the phenyl cation (C6H5+) and aromatic growth from benzene to naphthalene. Future efforts should prioritize constraining rate coefficients of key reactions and expanding the network surrounding these processes. These results highlight the strength of sensitivity analysis techniques to identify critical processes in complex chemical networks, such as those often used in astrochemical modeling.
</summary>
<dc:date>2024-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated electrochemical oxygen sensing using a 3D-printed microfluidic lab-on-a-chip system</title>
<link href="https://hdl.handle.net/1721.1/163497" rel="alternate"/>
<author>
<name>Kaufman, Daniel</name>
</author>
<author>
<name>Winkler, Steffen</name>
</author>
<author>
<name>Heuer, Christopher</name>
</author>
<author>
<name>Shibli, Ahed</name>
</author>
<author>
<name>Snezhko, Alexander</name>
</author>
<author>
<name>Livshits, Gideon I</name>
</author>
<author>
<name>Bahnemann, Janina</name>
</author>
<author>
<name>Ben-Yoav, Hadar</name>
</author>
<id>https://hdl.handle.net/1721.1/163497</id>
<updated>2025-11-04T03:07:51Z</updated>
<published>2024-12-28T00:00:00Z</published>
<summary type="text">Automated electrochemical oxygen sensing using a 3D-printed microfluidic lab-on-a-chip system
Kaufman, Daniel; Winkler, Steffen; Heuer, Christopher; Shibli, Ahed; Snezhko, Alexander; Livshits, Gideon I; Bahnemann, Janina; Ben-Yoav, Hadar
Dissolved oxygen is crucial for metabolism, growth, and other complex physiological and pathological processes; however, standard physiological models (such as organ-on-chip systems) often use ambient oxygen levels, which do not reflect the lower levels that are typically found in vivo. Additionally, the local generation of reactive oxygen species (ROS; a key factor in physiological systems) is often overlooked in biology-mimicking models. Here, we present a microfluidic system that integrates electrochemical dissolved oxygen sensors with lab-on-a-chip technology to monitor the physiological oxygen concentrations and generate hydrogen peroxide (H2O2; a specific ROS). This microfluidic lab-on-a-chip system was fabricated using high-resolution 3D printing technology in a one-step process. It incorporates a micromixer, an on-chip bubble-trap, an electrochemical cell with fabricated gold or platinum black-coated working electrodes as well as an Ag/AgCl reference electrode, and a commercial optical oxygen sensor for validation. This device enables an automated variation of the oxygen levels as well as sensitive electrochemical oxygen monitoring (limit of detection = 11.9 ± 0.3 μM), with a statistically significant correlation with the optical sensor. The proposed system can serve as a tool to characterize and evaluate custom-made electrodes. Indeed, we envision that in the future it will be used to regulate dissolved oxygen levels and oxygen species in real time in organ-on-chip systems.
</summary>
<dc:date>2024-12-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>A critical review on Li-ion transport, chemistry and structure of ceramic–polymer composite electrolytes for solid state batteries</title>
<link href="https://hdl.handle.net/1721.1/163496" rel="alternate"/>
<author>
<name>Sand, Sara Catherine</name>
</author>
<author>
<name>Rupp, Jennifer LM</name>
</author>
<author>
<name>Yildiz, Bilge</name>
</author>
<id>https://hdl.handle.net/1721.1/163496</id>
<updated>2025-11-04T03:07:53Z</updated>
<published>2024-11-18T00:00:00Z</published>
<summary type="text">A critical review on Li-ion transport, chemistry and structure of ceramic–polymer composite electrolytes for solid state batteries
Sand, Sara Catherine; Rupp, Jennifer LM; Yildiz, Bilge
In the transition to safer, more energy-dense solid state batteries, polymer–ceramic composite electrolytes may offer a potential route to achieve simultaneously high Li-ion conductivity and enhanced mechanical stability. Despite numerous studies of polymer–ceramic composite electrolytes, disagreements persist on whether it is the polymer or the ceramic whose constituent ionic conductivity is positively impacted in such composites, and even on whether the interface is a blocking layer or a highly conductive lithium-ion path. This lack of understanding limits the design of effective composite solid electrolytes. By thorough and critical analysis of the data collected in the field over the last three decades, we present arguments for lithium conduction through the bulk of the polymer, ceramic, or their interface. From this analysis, we can conclude that the unexpectedly high conductivity reported for some ceramic–polymer composites cannot be accounted for by the ceramic phase alone. There is evidence to support the theory that the Li-ion conductivity in the polymer phase increases along this interface in contact with the ceramic. The potential mechanisms for this include increased free volume, decreased crystallinity, and modulated Lewis acid–base effects in the polymer, with the former two being the more likely mechanisms. Future work in this field requires understanding these factors more quantitatively, and tuning of the ceramic surface chemistry and morphology in order to obtain targeted structural modifications in the polymer phase.
</summary>
<dc:date>2024-11-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Na vs. Li metal anodes for batteries: unraveling thermodynamic and electronic origins of voids and developing descriptors for artificial surface coatings</title>
<link href="https://hdl.handle.net/1721.1/163495" rel="alternate"/>
<author>
<name>Venturi, Victor</name>
</author>
<author>
<name>Freitas, Rodrigo</name>
</author>
<author>
<name>Abate, Iwnetim Iwnetu</name>
</author>
<id>https://hdl.handle.net/1721.1/163495</id>
<updated>2025-11-04T03:07:49Z</updated>
<published>2024-09-24T00:00:00Z</published>
<summary type="text">Na vs. Li metal anodes for batteries: unraveling thermodynamic and electronic origins of voids and developing descriptors for artificial surface coatings
Venturi, Victor; Freitas, Rodrigo; Abate, Iwnetim Iwnetu
Techno-economic, humanitarian, and safety concerns limit the possible uses of conventional lithium-ion and lithium-metal batteries. Sodium-based batteries constitute a promising alternative to address these issues; however, due to the similarities between the two alkali metals, they present failure modes similar to those of their lithium counterparts. In this work, we focus on one such failure mechanism: the thermodynamically-driven accumulation of vacancies on the surface of the metallic anode, which leads to the formation of voids and pits, detrimental to battery performance and cycle life. We investigate the differences in behavior between anode/coating interfaces of both lithium and sodium. Adhesion energy, a descriptor previously argued to be a reliable design principle for lithium metal anodes, is found to not exhibit the same predictive power for sodium metal architectures: in cases where vacancy congregation is not thermodynamically favorable for isolated sodium slabs, we find strong interfacial interactions to have adverse effects on void formation. By studying select coating materials, we also reveal that these material interactions at alkali/coating interfaces are highly nuanced, and that the field of surface science and engineering is ripe with opportunities for further discovery and tuning of surface properties via coating selection.
</summary>
<dc:date>2024-09-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interface‐Induced Stability of Nontrivial Topological Spin Textures: Unveiling Room‐Temperature Hopfions and Skyrmions</title>
<link href="https://hdl.handle.net/1721.1/163494" rel="alternate"/>
<author>
<name>Katmis, Ferhat</name>
</author>
<author>
<name>Lauter, Valeria</name>
</author>
<author>
<name>Yagan, Rawana</name>
</author>
<author>
<name>Brandt, Iuri S</name>
</author>
<author>
<name>Cheghabouri, Arash M</name>
</author>
<author>
<name>Zhou, Hua</name>
</author>
<author>
<name>Freeland, John W</name>
</author>
<author>
<name>de Araujo, Clodoaldo IL</name>
</author>
<author>
<name>Jamer, Michelle E</name>
</author>
<author>
<name>Heiman, Don</name>
</author>
<author>
<name>Onbasli, Mehmet C</name>
</author>
<author>
<name>Moodera, Jagadeesh S</name>
</author>
<id>https://hdl.handle.net/1721.1/163494</id>
<updated>2025-11-04T03:07:47Z</updated>
<published>2025-08-18T00:00:00Z</published>
<summary type="text">Interface‐Induced Stability of Nontrivial Topological Spin Textures: Unveiling Room‐Temperature Hopfions and Skyrmions
Katmis, Ferhat; Lauter, Valeria; Yagan, Rawana; Brandt, Iuri S; Cheghabouri, Arash M; Zhou, Hua; Freeland, John W; de Araujo, Clodoaldo IL; Jamer, Michelle E; Heiman, Don; Onbasli, Mehmet C; Moodera, Jagadeesh S
Topological spin configurations, such as soliton-like spin texture and Dirac electron assemblies, have recently emerged in fundamental science and technology. Achieving stable topological spin textures at room temperature is crucial for their use as long-range information carriers. However, their creation and manipulation are hindered by multi-step field training and competing interactions. Thus, a spontaneous ground state for multidimensional topological spin textures is desirable, with skyrmions forming swirling, hedgehog-like spin structures in two dimensions and hopfions as their twisted 3D counterparts. Here, the first observation of robust and reproducible topological spin textures of hopfions and skyrmions at room temperature and in zero magnetic field is reported; these textures are stabilized by geometric confinement and protected by interfacial magnetism in a ferromagnet/topological insulator/ferromagnet trilayer heterostructure. These skyrmion-hopfion configurations are directly observed at room temperature with Lorentz transmission electron microscopy. Using micromagnetic modeling, the experimental observations of hopfion-skyrmion assemblies are reproduced. This model reveals a complete picture of how spontaneously organized skyrmion lattices encircled by hopfion rings are controlled by surface electrons, uniaxial anisotropy, and Dzyaloshinskii-Moriya interaction. This study provides evidence that topological chiral spin textures can facilitate the development of magnetic topological carriers, paving the way for ultralow-power and high-density information processing.
</summary>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Genetic Surfaceome E. coli Reprogramming Enables Selective Water Oxidation</title>
<link href="https://hdl.handle.net/1721.1/163493" rel="alternate"/>
<author>
<name>Sedenho, Graziela C</name>
</author>
<author>
<name>Pacheco, Jéssica C</name>
</author>
<author>
<name>Gut, Melanie</name>
</author>
<author>
<name>Lima, Filipe CDA</name>
</author>
<author>
<name>Dey, Sunanda</name>
</author>
<author>
<name>Crespilho, Frank N</name>
</author>
<author>
<name>Furst, Ariel L</name>
</author>
<id>https://hdl.handle.net/1721.1/163493</id>
<updated>2025-11-04T03:07:36Z</updated>
<published>2025-08-15T00:00:00Z</published>
<summary type="text">Genetic Surfaceome E. coli Reprogramming Enables Selective Water Oxidation
Sedenho, Graziela C; Pacheco, Jéssica C; Gut, Melanie; Lima, Filipe CDA; Dey, Sunanda; Crespilho, Frank N; Furst, Ariel L
Programming catalytic behavior at the microbial genome level is a frontier in synthetic biology with direct impact on bioelectrocatalysis. A key challenge is the coordinated control of gene expression, localization, folding, and cofactor maturation required to achieve proper bioelectrocatalytic activity. Here, a synthetic operon in Escherichia coli is engineered to reprogram its surfaceome for selective water oxidation. Using orthogonal IPTG-inducible control and codon-optimized expression, a fungal bilirubin oxidase (BOD) displayed at the cell surface is produced by ice nucleation protein anchoring (BOD-E. coli). Post-overexpression copper catalytic site reconstitution provides an active holoenzyme. The developed engineered living material performs water oxidation at near-zero overpotential (27 mV at pH 9.1), with complete suppression of the oxygen reduction reaction. These results show how regenerable microbial platforms can be designed for selective catalysis and artificial photosynthesis.
</summary>
<dc:date>2025-08-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Surprises From the Basal Ganglia: Stop and Go Have New Meaning</title>
<link href="https://hdl.handle.net/1721.1/163492" rel="alternate"/>
<author>
<name>Graybiel, Ann M</name>
</author>
<id>https://hdl.handle.net/1721.1/163492</id>
<updated>2025-11-04T03:07:44Z</updated>
<published>2025-08-14T00:00:00Z</published>
<summary type="text">Surprises From the Basal Ganglia: Stop and Go Have New Meaning
Graybiel, Ann M
This perspective highlights new work suggesting the need for revision of the canonical direct–indirect model of the basal ganglia’s influence on movement, with fresh evidence that there is a formerly unappreciated pair of direct and indirect pathways that parallel the standard model’s canonical direct and indirect pathways, and promising evidence pointing toward improved clinical treatments for Parkinson’s disease. As a working hypothesis, it is suggested that the non-canonical direct and indirect pathways, which arise in striosomes, might act as homeostatic circuits that can rein in or amplify the activity of the canonical pathways in the face of their imbalance, including that occurring in hyperkinetic or hypokinetic disorders.
</summary>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating the Potential for Invasive Grass Expansion to Alter Wildfire Behavior in Southern California With WRF‐Fire</title>
<link href="https://hdl.handle.net/1721.1/163491" rel="alternate"/>
<author>
<name>Wang, Bowen</name>
</author>
<author>
<name>Madakumbura, Gavin D</name>
</author>
<author>
<name>Juliano, Timothy W</name>
</author>
<author>
<name>Williams, A Park</name>
</author>
<id>https://hdl.handle.net/1721.1/163491</id>
<updated>2025-11-04T03:07:43Z</updated>
<published>2025-08-13T00:00:00Z</published>
<summary type="text">Simulating the Potential for Invasive Grass Expansion to Alter Wildfire Behavior in Southern California With WRF‐Fire
Wang, Bowen; Madakumbura, Gavin D; Juliano, Timothy W; Williams, A Park
Invasion by non‐native annual grasses poses a serious threat to native vegetation in California, facilitated through interaction with wildfires. Our work is the first attempt to use the coupled fire‐atmosphere model, WRF‐Fire, to investigate how shifts from native, shrub‐dominated vegetation to invasive grasses could have affected a known wildfire event in southern California. We simulate the Mountain Fire, which burned &gt;11,000 ha in July 2013, under idealized fuel conditions representing varying extents of grass invasion. Expanding grass to double its observed coverage causes fire to spread faster due to the lower fuel load in grasses and increased wind speed. Beyond this, further grass expansion reduces the simulated spread rate because lower heat release partially offsets the positive effects. Our simulations suggest that grass expansion may generally promote larger, faster‐spreading wildfires in southern California, motivating continued efforts to contain and reduce the spread of invasive annual grasses in this region.
</summary>
<dc:date>2025-08-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vaccine-boosted CAR T crosstalk with host immunity to reject tumors with antigen heterogeneity</title>
<link href="https://hdl.handle.net/1721.1/163490" rel="alternate"/>
<author>
<name>Ma, Leyuan</name>
</author>
<author>
<name>Hostetler, Alexander</name>
</author>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Maiorino, Laura</name>
</author>
<author>
<name>Sulkaj, Ina</name>
</author>
<author>
<name>Whittaker, Charles A</name>
</author>
<author>
<name>Neeser, Alexandra</name>
</author>
<author>
<name>Pires, Ivan Susin</name>
</author>
<author>
<name>Yousefpour, Parisa</name>
</author>
<author>
<name>Gregory, Justin</name>
</author>
<author>
<name>Qureshi, Kashif</name>
</author>
<author>
<name>Dye, Jonathan</name>
</author>
<author>
<name>Abraham, Wuhbet</name>
</author>
<author>
<name>Suh, Heikyung</name>
</author>
<author>
<name>Li, Na</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<id>https://hdl.handle.net/1721.1/163490</id>
<updated>2026-03-08T03:29:10Z</updated>
<published>2023-07-20T00:00:00Z</published>
<summary type="text">Vaccine-boosted CAR T crosstalk with host immunity to reject tumors with antigen heterogeneity
Ma, Leyuan; Hostetler, Alexander; Morgan, Duncan M; Maiorino, Laura; Sulkaj, Ina; Whittaker, Charles A; Neeser, Alexandra; Pires, Ivan Susin; Yousefpour, Parisa; Gregory, Justin; Qureshi, Kashif; Dye, Jonathan; Abraham, Wuhbet; Suh, Heikyung; Li, Na; Love, J Christopher; Irvine, Darrell J
Chimeric antigen receptor (CAR) T cell therapy effectively treats human cancer, but the loss of the antigen recognized by the CAR poses a major obstacle. We found that in vivo vaccine boosting of CAR T cells triggers the engagement of the endogenous immune system to circumvent antigen-negative tumor escape. Vaccine-boosted CAR T promoted dendritic cell (DC) recruitment to tumors, increased tumor antigen uptake by DCs, and elicited the priming of endogenous anti-tumor T cells. This process was accompanied by shifts in CAR T metabolism toward oxidative phosphorylation (OXPHOS) and was critically dependent on CAR-T-derived IFN-γ. Antigen spreading (AS) induced by vaccine-boosted CAR T enabled a proportion of complete responses even when the initial tumor was 50% CAR antigen negative, and heterogeneous tumor control was further enhanced by the genetic amplification of CAR T IFN-γ expression. Thus, CAR-T-cell-derived IFN-γ plays a critical role in promoting AS, and vaccine boosting provides a clinically translatable strategy to drive such responses against solid tumors.
</summary>
<dc:date>2023-07-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Early cellular and molecular signatures correlate with severity of West Nile virus infection</title>
<link href="https://hdl.handle.net/1721.1/163489" rel="alternate"/>
<author>
<name>Lee, Ho-Joon</name>
</author>
<author>
<name>Zhao, Yujiao</name>
</author>
<author>
<name>Fleming, Ira</name>
</author>
<author>
<name>Mehta, Sameet</name>
</author>
<author>
<name>Wang, Xiaomei</name>
</author>
<author>
<name>Wyk, Brent Vander</name>
</author>
<author>
<name>Ronca, Shannon E</name>
</author>
<author>
<name>Kang, Heather</name>
</author>
<author>
<name>Chou, Chih-Hung</name>
</author>
<author>
<name>Fatou, Benoit</name>
</author>
<author>
<name>Smolen, Kinga K</name>
</author>
<author>
<name>Levy, Ofer</name>
</author>
<author>
<name>Clish, Clary B</name>
</author>
<author>
<name>Xavier, Ramnik J</name>
</author>
<author>
<name>Steen, Hanno</name>
</author>
<author>
<name>Hafler, David A</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Shalek, Alex K</name>
</author>
<author>
<name>Guan, Leying</name>
</author>
<author>
<name>Murray, Kristy O</name>
</author>
<author>
<name>Kleinstein, Steven H</name>
</author>
<author>
<name>Montgomery, Ruth R</name>
</author>
<id>https://hdl.handle.net/1721.1/163489</id>
<updated>2026-03-08T03:29:07Z</updated>
<published>2023-12-15T00:00:00Z</published>
<summary type="text">Early cellular and molecular signatures correlate with severity of West Nile virus infection
Lee, Ho-Joon; Zhao, Yujiao; Fleming, Ira; Mehta, Sameet; Wang, Xiaomei; Wyk, Brent Vander; Ronca, Shannon E; Kang, Heather; Chou, Chih-Hung; Fatou, Benoit; Smolen, Kinga K; Levy, Ofer; Clish, Clary B; Xavier, Ramnik J; Steen, Hanno; Hafler, David A; Love, J Christopher; Shalek, Alex K; Guan, Leying; Murray, Kristy O; Kleinstein, Steven H; Montgomery, Ruth R
Infection with West Nile virus (WNV) drives a wide range of responses, from asymptomatic to flu-like symptoms/fever or severe cases of encephalitis and death. To identify cellular and molecular signatures distinguishing WNV severity, we employed systems profiling of peripheral blood from asymptomatic and severely ill individuals infected with WNV. We interrogated immune responses longitudinally from acute infection through convalescence employing single-cell protein and transcriptional profiling complemented with matched serum proteomics and metabolomics as well as multi-omics analysis. At the acute time point, we detected both elevation of pro-inflammatory markers in innate immune cell types and reduction of regulatory T cell activity in participants with severe infection, whereas asymptomatic donors had higher expression of genes associated with anti-inflammatory CD16+ monocytes. Therefore, we demonstrated the potential of systems immunology using multiple cell-type and cell-state-specific analyses to identify correlates of infection severity and host cellular activity contributing to an effective anti-viral response.
</summary>
<dc:date>2023-12-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Full-length single-cell BCR sequencing paired with RNA sequencing reveals convergent responses to pneumococcal vaccination</title>
<link href="https://hdl.handle.net/1721.1/163488" rel="alternate"/>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Zhang, Yiming J</name>
</author>
<author>
<name>Kim, Jin-Hwan</name>
</author>
<author>
<name>Murillo, MaryAnn</name>
</author>
<author>
<name>Singh, Suddham</name>
</author>
<author>
<name>Loschko, Jakob</name>
</author>
<author>
<name>Surendran, Naveen</name>
</author>
<author>
<name>Sekulovic, Ognjen</name>
</author>
<author>
<name>Feng, Ellie</name>
</author>
<author>
<name>Shi, Shuting</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<author>
<name>Patil, Sarita U</name>
</author>
<author>
<name>Kanevsky, Isis</name>
</author>
<author>
<name>Chorro, Laurent</name>
</author>
<author>
<name>Christopher Love, J</name>
</author>
<id>https://hdl.handle.net/1721.1/163488</id>
<updated>2026-03-08T03:29:12Z</updated>
<published>2024-09-28T00:00:00Z</published>
<summary type="text">Full-length single-cell BCR sequencing paired with RNA sequencing reveals convergent responses to pneumococcal vaccination
Morgan, Duncan M; Zhang, Yiming J; Kim, Jin-Hwan; Murillo, MaryAnn; Singh, Suddham; Loschko, Jakob; Surendran, Naveen; Sekulovic, Ognjen; Feng, Ellie; Shi, Shuting; Irvine, Darrell J; Patil, Sarita U; Kanevsky, Isis; Chorro, Laurent; Christopher Love, J
Single-cell RNA sequencing (scRNA-seq) can resolve transcriptional features from individual cells, but scRNA-seq techniques capable of resolving the variable regions of B cell receptors (BCRs) remain limited, especially from widely-used 3′-barcoded libraries. Here, we report a method that can recover paired, full-length variable region sequences of BCRs from 3′-barcoded scRNA-seq libraries. We first verify this method (B3E-seq) can produce accurate, full-length BCR sequences. We then apply this method to profile B cell responses elicited against the capsular polysaccharide of Streptococcus pneumoniae serotype 3 (ST3) by glycoconjugate vaccines in five infant rhesus macaques. We identify BCR features associated with specificity for the ST3 antigen which are present in multiple vaccinated monkeys, indicating a convergent response to vaccination. These results demonstrate the utility of our method to resolve key features of the B cell repertoire and profile antigen-specific responses elicited by vaccination.
</summary>
<dc:date>2024-09-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rapidity and multiplicity dependence of charged-particle flow in pPb collisions at √s_NN = 8.16 TeV</title>
<link href="https://hdl.handle.net/1721.1/163487" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>LHCB Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163487</id>
<updated>2026-03-08T03:28:51Z</updated>
<published>2025-10-15T00:00:00Z</published>
<summary type="text">Rapidity and multiplicity dependence of charged-particle flow in pPb collisions at √s_NN = 8.16 TeV
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; LHCB Collaboration
The elliptic and triangular flow of charged particles are measured using two-particle angular correlations in pPb collisions in the pseudorapidity range 2.0 &lt; |η| &lt; 4.8. The data sample was collected by the LHCb experiment in 2016 at a centre-of-mass energy per nucleon pair of √s_NN = 8.16 TeV, containing in total approximately 1.5 billion collision events. Non-flow contributions are obtained in low-multiplicity collisions and subtracted to extract the flow harmonics. The results are presented as a function of event multiplicity and hadron transverse momentum. Comparisons with a full (3+1)D dynamic model indicate that it overestimates the measured elliptic flow. A comparison between the forward and backward regions reveals no significant differences in flow parameters, suggesting that final-state effects may dominate over initial-state effects in the origin of flow in small systems.
</summary>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Truth and perspective</title>
<link href="https://hdl.handle.net/1721.1/163486" rel="alternate"/>
<author>
<name>Ricciardi, Giuseppe</name>
</author>
<author>
<name>Reuter, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/163486</id>
<updated>2026-03-08T03:28:46Z</updated>
<published>2025-10-23T00:00:00Z</published>
<summary type="text">Truth and perspective
Ricciardi, Giuseppe; Reuter, Kevin
Several studies in experimental philosophy and semantics have shown that a substantial number of English speakers consider a statement true even if it does not align with the facts, as long as it is justified from the speaker's perspective. These findings challenge the prevailing view among philosophers that truth in the empirical domain is uniformly based on a statement's correspondence to reality. In this study, we explore how perspective-taking influences truth assessments by showing that this influence depends on how the critical question assessing the statement’s truth is phrased. Our results show that when the question targets only the proposition (e.g., “Is it true that [the uttered proposition]?”), participants typically apply a correspondence view of truth—consistent with philosophical convention. But when the question also highlights the speaker (e.g., “Is [the speaker]’s answer true?”), many participants shift toward judging the statement from the speaker’s perspective. We discuss four possible explanations for this behavior and examine the implications of the findings for other philosophical discussions concerning truth and lying, the theory of reference, and norms of assertion.
</summary>
<dc:date>2025-10-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Curtain Model for CAT(0) Spaces and Isometries</title>
<link href="https://hdl.handle.net/1721.1/163485" rel="alternate"/>
<author>
<name>Chen, Yutong</name>
</author>
<id>https://hdl.handle.net/1721.1/163485</id>
<updated>2026-03-08T03:28:53Z</updated>
<published>2025-07-30T00:00:00Z</published>
<summary type="text">Curtain Model for CAT(0) Spaces and Isometries
Chen, Yutong
This paper studies the dynamics of isometries in the curtain model, which is used to capture the hyperbolicity in a fixed CAT(0) space. We establish several fundamental properties and fully classify the behavior of semisimple isometries of a CAT(0) space in the associated curtain model. In the nonsemisimple case, we restrict the behavior of parabolic actions with positive translation length in the curtain model in most cases of interest, allowing the use of ping-pong-like techniques on the curtain model to provide insights into the study of CAT(0) groups.
</summary>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Algorithm for Estimating the Crossing Number of Dense Graphs, and Continuous Analogs of the Crossing and Rectilinear Crossing Numbers</title>
<link href="https://hdl.handle.net/1721.1/163484" rel="alternate"/>
<author>
<name>Solé-Pi, Oriol</name>
</author>
<id>https://hdl.handle.net/1721.1/163484</id>
<updated>2026-03-08T03:28:55Z</updated>
<published>2025-10-21T00:00:00Z</published>
<summary type="text">An Algorithm for Estimating the Crossing Number of Dense Graphs, and Continuous Analogs of the Crossing and Rectilinear Crossing Numbers
Solé-Pi, Oriol
We present a deterministic n^(2+o(1))-time algorithm that approximates the crossing number of any graph G of order n up to an additive error of o(n^4). We also provide a randomized polynomial-time algorithm that constructs a drawing of G with cr(G) + o(n^4) crossings. These results yield a (1+o(1))-approximation algorithm for the crossing number of dense graphs. Our work complements a paper of Fox, Pach and Suk [20], who obtained similar results for the rectilinear crossing number. The results in [20] and in this paper imply that the (normalized) crossing and rectilinear crossing numbers are estimable parameters. Motivated by this, we introduce two graphon parameters, the crossing density and the rectilinear crossing density, and we prove that, in a precise sense, these are the correct continuous analogs of the crossing and rectilinear crossing numbers of graphs.
</summary>
<dc:date>2025-10-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved measurement of η/η′ mixing in B_s^0 → J/ψη′ decays</title>
<link href="https://hdl.handle.net/1721.1/163483" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<author>
<name>Aleksiejunas, R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163483</id>
<updated>2026-03-08T03:28:44Z</updated>
<published>2025-10-14T00:00:00Z</published>
<summary type="text">Improved measurement of η/η′ mixing in B_s^0 → J/ψη′ decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.; Aleksiejunas, R.
Branching fraction ratios between the decays B_s^0 → J/ψη′ are measured using proton-proton collision data collected by the LHCb experiment at centre-of-mass energies of 7, 8 and 13 TeV, corresponding to an integrated luminosity of 9 fb−1. The measured ratios of these branching fractions are B(B^0 → J/ψη′)/B(B^0 → J/ψη) = 0.48 ± 0.06 ± 0.02 ± 0.01 and B(B_s^0 → J/ψη′)/B(B_s^0 → J/ψη) = 0.80 ± 0.02 ± 0.02 ± 0.01, where the uncertainties are statistical, systematic and related to the precision of the η(′) branching fractions, respectively. They are used to constrain the η/η′ mixing angle, ϕP, and to probe the presence of a possible glueball component in the η′ meson, described by the gluonic mixing angle ϕG. The obtained results are ϕP = (41.6 +1.0 −1.2)° and ϕG = (28.1 +3.9 −4.0)°, where the uncertainties are statistically dominated. While the value of ϕP is compatible with existing experimental determinations and theoretical calculations, the angle ϕG differs from zero by more than four standard deviations, which points to a substantial glueball component in the η′ meson and/or unexpectedly large contributions from gluon-mediated processes in these decays. The absolute branching fractions are also measured relative to that of the well-established B_s^0 → J/ψϕ decay, which serves as the normalisation channel. These results supersede the previous LHCb measurements and are the most precise to date.
</summary>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Zero carbon challenges in supply chain management to achieve sustainability</title>
<link href="https://hdl.handle.net/1721.1/163482" rel="alternate"/>
<author>
<name>Derse, O.</name>
</author>
<author>
<name>Yontar, E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163482</id>
<updated>2025-11-01T04:06:55Z</updated>
<published>2025-08-19T00:00:00Z</published>
<summary type="text">Zero carbon challenges in supply chain management to achieve sustainability
Derse, O.; Yontar, E.
Reducing carbon emissions amid growing climate concerns has become important at every stage of the supply chain, as it has in every sector. Supply chain processes involve many activities, and aligning them with a net zero carbon strategy takes serious work. This paper addresses the challenges that prevent the supply chain from achieving its net zero carbon target. The challenges addressed are categorized as environmental; financial and economic; organizational; social and consumer; technical and technological; and administrative. Within these six main categories, 24 sub-challenges are identified, and the network structure, relations and rankings of these challenges are determined with the Analytical Network Process (ANP), one of the Multi-Criteria Decision Making methods. The risks of the identified challenges are also ranked using an ANP-based Failure Mode and Effect Analysis (FMEA). According to the ANP and the ANP-based FMEA, the riskiest and most important challenge categories are the Financial and Economic challenges and the Technical and Technological challenges, respectively. According to the ANP, the most important sub-challenges are, in order, "Lack of technical competence and field experts", "Lack of resources", and "High initial investment cost". According to the ANP-based FMEA, the most important sub-challenges are "Lack of resources", "Lack of technical competence and field experts" and "Uncertain long-term economic return/payback periods and investment risks", respectively. The relationships and rankings determined in this study are intended to serve as a roadmap for reaching net zero carbon targets in supply chains.
</summary>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust longitudinal and lateral control for mixed-vehicular platoons with string stability guarantees</title>
<link href="https://hdl.handle.net/1721.1/163481" rel="alternate"/>
<author>
<name>Chen, Qien</name>
</author>
<author>
<name>Wang, Shimin</name>
</author>
<author>
<name>Gao, Bolin</name>
</author>
<author>
<name>Zhan, Zhi</name>
</author>
<author>
<name>Zhong, Renxin</name>
</author>
<id>https://hdl.handle.net/1721.1/163481</id>
<updated>2025-11-01T04:06:56Z</updated>
<published>2025-07-16T00:00:00Z</published>
<summary type="text">Robust longitudinal and lateral control for mixed-vehicular platoons with string stability guarantees
Chen, Qien; Wang, Shimin; Gao, Bolin; Zhan, Zhi; Zhong, Renxin
Integrating longitudinal and lateral controls for vehicular platoons mixed with Connected and Autonomous Vehicles (CAVs) and Level-2 Automated Vehicles (L2AVs) to guarantee string stability against model uncertainty and external disturbances is essential yet challenging. This paper tackles this challenge by introducing a novel integrated longitudinal and lateral control (ILLC) strategy that guarantees input-to-state string stability (ISSS) for heterogeneous vehicular platoons. The proposed ILLC strategy significantly enhances the robustness of vehicular platoons by maintaining the desired headway and ensuring the ISSS against disturbances. By incorporating a disturbance observer, we directly address the disturbance estimation error within the string stability analysis. We validate the effectiveness of our method through simulations of various traffic scenarios. Compared to conventional cooperative adaptive cruise control (CACC) techniques, the proposed method achieves faster convergence to the desired states and exhibits bounded state fluctuations. Furthermore, our method can effectively attenuate external disturbances and dissipate stop-and-go waves.
</summary>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finite Rank Perturbation of Non-Hermitian Random Matrices: Heavy Tail and Sparse Regimes</title>
<link href="https://hdl.handle.net/1721.1/163480" rel="alternate"/>
<author>
<name>Han, Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/163480</id>
<updated>2025-11-01T04:06:58Z</updated>
<published>2025-09-29T00:00:00Z</published>
<summary type="text">Finite Rank Perturbation of Non-Hermitian Random Matrices: Heavy Tail and Sparse Regimes
Han, Yi
In this work we investigate spectral properties of square random matrices with independent entries that have only two finite moments. We revisit the problem of perturbing a large i.i.d. random matrix by a finite rank error. We prove that, under a mere second moment condition, for a large class of perturbation matrices with bounded rank and bounded operator norm, the outlier eigenvalues of the perturbed matrix still converge to those of the perturbation; this was previously known only when the matrix entries have a finite fourth moment. We then show that the same perturbation result holds for very sparse random matrices with i.i.d. entries, all the way down to a constant number of nonzero entries per row and column.
</summary>
<dc:date>2025-09-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Arene extrusion as an approach to reductive elimination at boron: implication of carbene-ligated haloborylene as a transient reactive intermediate</title>
<link href="https://hdl.handle.net/1721.1/163479" rel="alternate"/>
<author>
<name>Zhang, Chonghe</name>
</author>
<author>
<name>Gilliard, Robert J</name>
</author>
<author>
<name>Cummins, Christopher C</name>
</author>
<id>https://hdl.handle.net/1721.1/163479</id>
<updated>2026-03-08T03:29:10Z</updated>
<published>2024-10-03T00:00:00Z</published>
<summary type="text">Arene extrusion as an approach to reductive elimination at boron: implication of carbene-ligated haloborylene as a transient reactive intermediate
Zhang, Chonghe; Gilliard, Robert J; Cummins, Christopher C
Herein, we report boron-centered arene extrusion reactions to afford putative cyclic(alkyl)(amino) carbene (CAAC)-ligated chloroborylene and bromoborylene intermediates. The borylene precursors, chloro-boranorbornadiene (ClB(C6Me6), 2Cl) and bromo-boranorbornadiene (BrB(C6Me6), 2Br) were synthesized through the reaction of the corresponding 1-halo-2,3,4,5-tetramethylborole dimer (XBC4Me4)2 (X = Cl, 1Cl; X = Br, 1Br) with 2-butyne. Treatment of 2Cl with CAACs resulted in the release of di-coordinate chloro-borylene (CAAC)BCl from hexamethylbenzene (C6Me6) at room temperature. In contrast, the reaction of 2Br with CAAC led to the formation of a boronium species [(CAAC)BC6Me6]+Br− (7) at room temperature. Heating 7 in toluene promoted the release of di-coordinate bromo-borylene (CAAC)BBr as a transient species. Surprisingly, heating 7 in dichloromethane resulted in the C–H activation of hexamethylbenzene. The conversion of a CAAC-stabilized bromo-borepin to a borylene, a boron-centered retro Büchner reaction, was also investigated.
</summary>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering in typical unit-distance avoiding sets</title>
<link href="https://hdl.handle.net/1721.1/163478" rel="alternate"/>
<author>
<name>Cohen, A.</name>
</author>
<author>
<name>Mani, N.</name>
</author>
<id>https://hdl.handle.net/1721.1/163478</id>
<updated>2025-11-01T04:06:57Z</updated>
<published>2025-09-22T00:00:00Z</published>
<summary type="text">Clustering in typical unit-distance avoiding sets
Cohen, A.; Mani, N.
In the 1960s Moser asked how dense a subset of R^d can be if no pair of points in the subset is exactly distance 1 apart. There has been a long line of work showing upper bounds on this density. One curious feature of dense unit-distance avoiding sets is that they appear to be "clumpy," i.e. forbidding unit distances comes hand in hand with having more than the expected number of distance ≈ 2 pairs. In this work we rigorously establish this phenomenon in R^2. We show that dense unit-distance avoiding sets have over-represented distance ≈ 2 pairs, and that this clustering extends to typical unit-distance avoiding sets. To do so, we build off of the linear programming approach used previously to prove upper bounds on the density of unit-distance avoiding sets.
</summary>
<dc:date>2025-09-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structure of Lower Tails in Sparse Random Graphs</title>
<link href="https://hdl.handle.net/1721.1/163477" rel="alternate"/>
<author>
<name>Chin, Byron</name>
</author>
<id>https://hdl.handle.net/1721.1/163477</id>
<updated>2026-03-08T03:29:08Z</updated>
<published>2025-08-11T00:00:00Z</published>
<summary type="text">Structure of Lower Tails in Sparse Random Graphs
Chin, Byron
We study the typical structure of a sparse Erdős–Rényi random graph conditioned on the lower tail subgraph count event. We show that in certain regimes, a typical graph sampled from the conditional distribution resembles the entropy minimizer of the mean field approximation in the sense of both subgraph counts and cut norm. The main ingredients are an adaptation of an entropy increment scheme of Kozma and Samotij, and a new stability for the solution of the associated entropy variational problem. The proof can be interpreted as a structural application of the new probabilistic hypergraph container lemma for sparser than average sets, and suggests a more general framework for establishing such typical behavior statements.
</summary>
<dc:date>2025-08-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>t-channel dark matter models – a whitepaper</title>
<link href="https://hdl.handle.net/1721.1/163476" rel="alternate"/>
<author>
<name>Arina, Chiara</name>
</author>
<author>
<name>Fuks, Benjamin</name>
</author>
<author>
<name>Panizzi, Luca</name>
</author>
<author>
<name>Baker, Michael J.</name>
</author>
<author>
<name>Cornell, Alan S.</name>
</author>
<author>
<name>Heisig, Jan</name>
</author>
<author>
<name>Maier, Benedikt</name>
</author>
<author>
<name>Pedro, Rute</name>
</author>
<author>
<name>Trischuk, Dominique</name>
</author>
<author>
<name>Agin, Diyar</name>
</author>
<author>
<name>Arbey, Alexandre</name>
</author>
<author>
<name>Arcadi, Giorgio</name>
</author>
<author>
<name>Bagnaschi, Emanuele</name>
</author>
<author>
<name>Bai, Kehang</name>
</author>
<author>
<name>Bhatia, Disha</name>
</author>
<author>
<name>Becker, Mathias</name>
</author>
<author>
<name>Belyaev, Alexander</name>
</author>
<author>
<name>Benoit, Ferdinand</name>
</author>
<author>
<name>Blanke, Monika</name>
</author>
<author>
<name>Burzynski, Jackson</name>
</author>
<id>https://hdl.handle.net/1721.1/163476</id>
<updated>2026-03-08T03:26:36Z</updated>
<published>2025-09-12T00:00:00Z</published>
<summary type="text">t-channel dark matter models – a whitepaper
Arina, Chiara; Fuks, Benjamin; Panizzi, Luca; Baker, Michael J.; Cornell, Alan S.; Heisig, Jan; Maier, Benedikt; Pedro, Rute; Trischuk, Dominique; Agin, Diyar; Arbey, Alexandre; Arcadi, Giorgio; Bagnaschi, Emanuele; Bai, Kehang; Bhatia, Disha; Becker, Mathias; Belyaev, Alexander; Benoit, Ferdinand; Blanke, Monika; Burzynski, Jackson
This report, summarising work achieved in the context of the LHC Dark Matter Working Group, investigates the phenomenology of t-channel dark matter models, spanning minimal setups with a single dark matter candidate and mediator to more complex constructions closer to UV-complete models. For each considered class of models, we examine collider, cosmological and astrophysical implications. In addition, we explore scenarios with either promptly decaying or long-lived particles, as well as featuring diverse dark matter production mechanisms in the early universe. By providing a unified analysis framework, numerical tools and guidelines, this work aims to support future experimental and theoretical efforts in exploring t-channel dark matter models at colliders and in cosmology.
</summary>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organic aerosol formation from 222 nm germicidal light: ozone-initiated vs. non-ozone pathways</title>
<link href="https://hdl.handle.net/1721.1/163475" rel="alternate"/>
<author>
<name>Goss, Matthew B</name>
</author>
<author>
<name>Kroll, Jesse H</name>
</author>
<id>https://hdl.handle.net/1721.1/163475</id>
<updated>2026-03-08T03:29:13Z</updated>
<published>2024-10-17T00:00:00Z</published>
<summary type="text">Organic aerosol formation from 222 nm germicidal light: ozone-initiated vs. non-ozone pathways
Goss, Matthew B; Kroll, Jesse H
Germicidal ultraviolet lamps outputting 222 nm light (GUV222) have the potential to reduce the airborne spread of disease through effective inactivation of pathogens, while remaining safe for direct human exposure. However, recent studies have identified these lamps as a source of ozone and other secondary pollutants such as secondary organic aerosol (SOA), and the health effects of these pollutants must be balanced against the benefits of pathogen inactivation. While ozone reactions are likely to account for much of this secondary indoor air pollution, 222 nm light may initiate additional non-ozone chemical processes, including the formation of other oxidants and direct photolytic reactions, which are not as well understood. This work examines the impacts of GUV222 on SOA formation and composition by comparing limonene oxidation under GUV222 and O3-only control conditions in a laboratory chamber. Differences between these experiments enable us to distinguish patterns in aerosol formation driven by ozone chemistry from those driven by other photolytic processes. These experiments also examine the influence of the addition of NO2 and nitrous acid (HONO), and investigate SOA formation in sampled outdoor air. SOA composition and yield vary only slightly with respect to GUV222 vs. ozone-only conditions; NO2 and HONO photolysis do not appreciably affect the observed chemistry. In contrast, we observe consistent new particle formation under high-fluence 222 nm light (45 μW cm−2) that differs substantially from ozone-only experiments. This observed new particle formation represents an additional reason to keep GUV222 fluence rates to the lowest effective levels.
</summary>
<dc:date>2024-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Electrons for Electrochemical CO2 Capture Using a Hemi-Labile Iron Complex</title>
<link href="https://hdl.handle.net/1721.1/163474" rel="alternate"/>
<author>
<name>Seo, Hyowon</name>
</author>
<author>
<name>Chen, Ying</name>
</author>
<author>
<name>Walter, Eric</name>
</author>
<author>
<name>Abdinejad, Maryam</name>
</author>
<author>
<name>Hatton, T Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/163474</id>
<updated>2026-03-08T03:29:13Z</updated>
<published>2025-08-04T00:00:00Z</published>
<summary type="text">Leveraging Electrons for Electrochemical CO2 Capture Using a Hemi-Labile Iron Complex
Seo, Hyowon; Chen, Ying; Walter, Eric; Abdinejad, Maryam; Hatton, T Alan
Climate change, driven by anthropogenic carbon emissions, demands urgent action to prevent a 2050 tipping point. With CO2 levels at 427 ppm (50% above pre-industrial levels), deploying energy-efficient carbon capture technologies is crucial. Electrochemical carbon capture processes that have been touted to have the potential to meet these needs rely on the applied cell voltage, and electron utilization (CO2 molecules separated per electron), which has generally been asserted to have a theoretical limit of one. Here, we introduce an electron-leveraging strategy to enhance electron utilization beyond this limit to 1.43 by employing Fe-EDDHA, a redox-active coordination complex having a ligand with multiple hemi-labile coordination sites. The reversibility and robustness of the system were enabled by the efficient prevention of CO2 reduction upon the introduction of nicotinamide as a guardian of the iron(2+) center. The proof-of-concept cyclic system exhibits a minimum operational energy of 22.6 kJe mol−1 and an average of 63.7 kJe mol−1 over 29 cycles, using a simulated flue gas (15% CO2). Our electron-leveraging strategy holds promise for advancing energy-efficient electrochemical carbon capture technologies, and offers an alternative to prevalent redox potential shifting methods proposed to mitigate undesired electron transfer reactions in redox-active materials across diverse operational conditions.
</summary>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Programmable Nanovaccine Platform Based on M13 Bacteriophage for Personalized Cancer Vaccine and Therapy</title>
<link href="https://hdl.handle.net/1721.1/163473" rel="alternate"/>
<author>
<name>Huang, Shengnan</name>
</author>
<author>
<name>He, Yanpu</name>
</author>
<author>
<name>Madow, Allison</name>
</author>
<author>
<name>Peng, Huaiyao</name>
</author>
<author>
<name>Griffin, Mirielle</name>
</author>
<author>
<name>Qi, Jifa</name>
</author>
<author>
<name>Huang, Mantao</name>
</author>
<author>
<name>Amoroso, Heather</name>
</author>
<author>
<name>Abrashoff, Riley</name>
</author>
<author>
<name>Heldman, Nimrod</name>
</author>
<author>
<name>Belcher, Angela M</name>
</author>
<id>https://hdl.handle.net/1721.1/163473</id>
<updated>2026-03-08T03:29:09Z</updated>
<published>2025-08-27T00:00:00Z</published>
<summary type="text">A Programmable Nanovaccine Platform Based on M13 Bacteriophage for Personalized Cancer Vaccine and Therapy
Huang, Shengnan; He, Yanpu; Madow, Allison; Peng, Huaiyao; Griffin, Mirielle; Qi, Jifa; Huang, Mantao; Amoroso, Heather; Abrashoff, Riley; Heldman, Nimrod; Belcher, Angela M
Nanovaccines co-assemble antigens and adjuvants to elicit robust immune responses but often require complex synthesis and post-modification procedures. Here, a programmable nanovaccine platform based on the M13 bacteriophage is developed for the scalable production of vaccines and single-step modular engineering of adjuvanticity, length, and antigen density. By reprogramming the sequence and size of the noncoding phage genome, the Toll-like receptor 9 activation and the length of the phage are precisely controlled. With a novel molecular engineering approach, the antigen density is tuned from 13.6% to 70.3%. A systematic modulation reveals an optimal adjuvanticity at a constant antigen density for maximum anti-tumor CD8+ T cell response, and vice versa, using the model antigen SIINFEKL. The M13 phage-based nanovaccine induces durable memory immunity lasting over a year. In addition, a 24-fold increase in neoantigen-specific CD8+ T cell frequency is achieved when increasing both the adjuvanticity and antigen density. Furthermore, when combined with anti-PD-1 therapy, the M13 phage-based personalized vaccine eradicates established MC-38 tumors in 75% of treated animals and they develop 100% resistance against tumor invasion when challenged 5 months after treatment. These findings establish M13 phage as a powerful and versatile nanovaccine platform with transformative potential for personalized cancer immunotherapy.
</summary>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Approximate Unitary Designs from Random Pauli Rotations</title>
<link href="https://hdl.handle.net/1721.1/163472" rel="alternate"/>
<author>
<name>Haah, Jeongwan</name>
</author>
<author>
<name>Liu, Yunchao</name>
</author>
<author>
<name>Tan, Xinyu</name>
</author>
<id>https://hdl.handle.net/1721.1/163472</id>
<updated>2025-11-01T04:07:10Z</updated>
<published>2025-10-30T00:00:00Z</published>
<summary type="text">Efficient Approximate Unitary Designs from Random Pauli Rotations
Haah, Jeongwan; Liu, Yunchao; Tan, Xinyu
We construct random walks on simple Lie groups that quickly converge to the Haar measure for all moments up to order t. Specifically, a step of the walk on the unitary or orthogonal group of dimension 2^n is a random Pauli rotation e^(iθP/2). The spectral gap of this random walk is shown to be Ω(1/t), which coincides with the best previously known bound for a random walk on the permutation group on {0,1}^n. This implies that the walk gives an ε-approximate unitary t-design in depth O(nt^2 + t log(1/ε))·d, where d = O(log n) is the circuit depth to implement e^(iθP/2). Our simple proof uses quadratic Casimir operators of Lie algebras.
</summary>
<dc:date>2025-10-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>How FDI reshapes host markets’ trade profile and politics</title>
<link href="https://hdl.handle.net/1721.1/163471" rel="alternate"/>
<author>
<name>Kim, In Song</name>
</author>
<author>
<name>Liao, Steven</name>
</author>
<author>
<name>Miyano, Sayumi</name>
</author>
<id>https://hdl.handle.net/1721.1/163471</id>
<updated>2026-03-08T03:29:12Z</updated>
<published>2025-09-12T00:00:00Z</published>
<summary type="text">How FDI reshapes host markets’ trade profile and politics
Kim, In Song; Liao, Steven; Miyano, Sayumi
A fast-growing literature indicates that firms' engagement in foreign direct investment (FDI) and trade is key to understanding deepening global value chains and their political implications. However, existing studies have mainly focused on the ramifications for FDI home countries while often overlooking the firm-product level interactions between FDI and trade, where their interdependencies manifest. This study examines how firms' FDI reshapes host countries' trade profiles at this level, empowering new political coalitions for trade liberalization. Analyzing greenfield FDI projects globally since 2003, we find that hosts experienced an average increase of over 45 export products in the following year. To overcome the challenges of connecting firms to products, we link FDI data with Vietnamese customs records. We find that Vietnamese export (import) volumes of FDI-related products increased by 90% (30%) within 4 years of initial investments. Importantly, these products also benefited from more substantial tariff cuts in bilateral Free Trade Agreements.
</summary>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating metabolic scaling and coexistence theories</title>
<link href="https://hdl.handle.net/1721.1/163470" rel="alternate"/>
<author>
<name>Saavedra, Serguei</name>
</author>
<author>
<name>Arroyo, José Ignacio</name>
</author>
<author>
<name>Deng, Jie</name>
</author>
<author>
<name>Marquet, Pablo A</name>
</author>
<author>
<name>Kempes, Christopher P</name>
</author>
<id>https://hdl.handle.net/1721.1/163470</id>
<updated>2026-03-08T03:29:11Z</updated>
<published>2025-08-05T00:00:00Z</published>
<summary type="text">Integrating metabolic scaling and coexistence theories
Saavedra, Serguei; Arroyo, José Ignacio; Deng, Jie; Marquet, Pablo A; Kempes, Christopher P
Metabolic scaling theory has been pivotal in formalizing the expected energy expenditures across populations as a function of body size. Coexistence theory has provided a mathematization of the environmental conditions compatible with multispecies coexistence. Yet, it has been challenging to explain how observed community-wide patterns, such as the inverse relationship between population abundance density and body size, can be unified under both theories. Here, we provide the foundation for a tractable, scalable, and extendable framework to study the coexistence of resource-mediated competing populations as a function of their body size. For a given thermal domain and response, this integration reveals that the metabolically predicted 1/4 power dependence of carrying capacity of biomass density on body size can be understood as the average distribution of carrying capacities across feasible environmental conditions, especially for large communities. In line with empirical observations, our integration predicts that such average distribution leads to communities in which population biomass densities at equilibrium are independent of body size, and consequently, population abundance densities are inversely related to body size. This integration opens new opportunities to increase our understanding of how metabolic scaling relationships at the population level can shape processes at the community level under changing environments.
</summary>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Covariant phase space and L∞ algebras</title>
<link href="https://hdl.handle.net/1721.1/163469" rel="alternate"/>
<author>
<name>Bernardes, Vinícius</name>
</author>
<author>
<name>Erler, Theodore</name>
</author>
<author>
<name>Fırat, Atakan H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163469</id>
<updated>2026-03-08T03:26:08Z</updated>
<published>2025-09-05T00:00:00Z</published>
<summary type="text">Covariant phase space and L∞ algebras
Bernardes, Vinícius; Erler, Theodore; Fırat, Atakan H.
We propose a symplectic structure for the phase space of a generic Lagrangian field theory expressed in the framework of L∞ algebras. The symplectic structure does not require explicit knowledge of the derivative content of the Lagrangian, and therefore is applicable to nonlocal models, such as string field theory, where traditional constructions are difficult to apply. We test our proposal in a number of examples ranging from general relativity to p-adic string theory.
</summary>
<dc:date>2025-09-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deciphering the origins of the elements through galactic archeology</title>
<link href="https://hdl.handle.net/1721.1/163468" rel="alternate"/>
<author>
<name>Farouqi, Khalil</name>
</author>
<author>
<name>Frebel, Anna</name>
</author>
<author>
<name>Thielemann, Friedrich-Karl</name>
</author>
<id>https://hdl.handle.net/1721.1/163468</id>
<updated>2026-03-08T03:26:38Z</updated>
<published>2025-09-12T00:00:00Z</published>
<summary type="text">Deciphering the origins of the elements through galactic archeology
Farouqi, Khalil; Frebel, Anna; Thielemann, Friedrich-Karl
Low-metallicity stars preserve the signatures of the first stellar nucleosynthesis events in the Galaxy, as their surface abundances reflect the composition of the interstellar medium from the time when they were born. Aside from primordial Big Bang nucleosynthesis, massive stars, due to their short lifetimes, dominate the wind and explosive ejecta into the interstellar medium of the early Galaxy. Most of them will end as core-collapse supernova (CCSN) explosions, and typical ejected abundance distributions, e.g. in terms of the α-element-to-Fe ratios, reflect these contributions. Essentially all CCSNe contribute 56Fe (decaying from radioactive 56Ni). Therefore, low-metallicity stars can be used to test whether the abundances of any other elements are correlated with those of Fe, i.e. whether these elements have been co-produced in the progenitor sources or if they require either a different or additional astrophysical origin(s). The present analysis focuses on stars with [Fe/H] &lt; −2, as they probe the earliest formation phase of the Galaxy when only one or very few nucleosynthesis events had contributed their ejecta to the gas from which the lowest metallicity stars form. This was also the era before low and intermediate mass stars (or type Ia supernovae) could contribute any additional heavy elements. Following earlier work on the origin of heavy r-process elements [1], we extend the present study to examine Pearson and Spearman correlations of Fe with Li, Be, C, N, O, Na, Mg, Si, S, K, Ca, Ti, Cr, Ni, Zn, Ge, Se, Sr, Y, Zr, Mo, Ba, La, Ce, Sm, Eu, Gd, Dy, Yb, Lu, Hf, Os, Ir, Pb, and Th, using high-resolution stellar abundance data from the SAGA [2] and JINA [3] databases. The main goal is to identify which of the observed elements (i) may have been co-produced with Fe in (possibly a variety of) CCSNe, and which elements require (ii) either a completely different, or (iii) at least an additional astrophysical origin.
</summary>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Absolute Security with Multiple-Slit Diffraction in Terahertz Communication Links</title>
<link href="https://hdl.handle.net/1721.1/163467" rel="alternate"/>
<author>
<name>Shiri, Yaseman</name>
</author>
<author>
<name>Yeh, Chia-Yi</name>
</author>
<author>
<name>Fang, Zhaoji</name>
</author>
<author>
<name>Shrestha, Rabi</name>
</author>
<author>
<name>Guerboukha, Hichem</name>
</author>
<author>
<name>Médard, Muriel</name>
</author>
<author>
<name>Malowicki, John</name>
</author>
<author>
<name>Overrocker, David</name>
</author>
<author>
<name>Fanelli, Paul</name>
</author>
<author>
<name>Thawdar, Ngwe</name>
</author>
<author>
<name>Mittleman, Daniel M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163467</id>
<updated>2025-10-31T03:08:32Z</updated>
<published>2025-08-18T00:00:00Z</published>
<summary type="text">Absolute Security with Multiple-Slit Diffraction in Terahertz Communication Links
Shiri, Yaseman; Yeh, Chia-Yi; Fang, Zhaoji; Shrestha, Rabi; Guerboukha, Hichem; Médard, Muriel; Malowicki, John; Overrocker, David; Fanelli, Paul; Thawdar, Ngwe; Mittleman, Daniel M.
Many widely used antennas in terahertz (THz) directional communications (including horn antennas) are not fully compatible with the recently proposed absolute security approach due to the absence of strong frequency-dependent minima in the intrinsic antenna pattern. To this end, we propose to use a multiple-slit aperture to modify these non-suitable radiation patterns in a non-intrusive manner. Based on the principle of diffraction, the multi-slit aperture creates frequency-varying minima critical for absolute security. We show that improved security performance, quantified by the size of the secure region in space (termed blind region), can be achieved by employing a wider diffraction aperture with a wider slit opening. We further characterize how the non-uniform wavefront, which is typical in practical transmission and results in varying amplitude and phase at different slit openings, affects the size of the blind region. This diffraction-based scheme is experimentally demonstrated with a horn antenna operating near 200 GHz. We demonstrate that, while the intrinsic horn antenna yields no blind region for angles within 16° from the intended user, the modified antenna configuration produces strong minima sufficient to create blind regions at angles as small as 4° and an expanding blind region with increasing transmission bandwidth, thus validating the security gain with this approach.
</summary>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>A gravity-based mounting approach for large-scale cryogenic calorimeter arrays</title>
<link href="https://hdl.handle.net/1721.1/163466" rel="alternate"/>
<author>
<name>CUPID Collaboration</name>
</author>
<id>https://hdl.handle.net/1721.1/163466</id>
<updated>2026-03-08T03:26:28Z</updated>
<published>2025-09-02T00:00:00Z</published>
<summary type="text">A gravity-based mounting approach for large-scale cryogenic calorimeter arrays
CUPID Collaboration
Cryogenic calorimeters are among the leading technologies for searching for rare events. The CUPID experiment is exploiting this technology to deploy a tonne-scale detector to search for neutrinoless double-beta decay of ¹⁰⁰Mo. The CUPID collaboration proposed an innovative approach to assembling cryogenic calorimeters in a stacked configuration, held in position solely by gravity. This gravity-based assembly method is unprecedented in the field of cryogenic calorimeters and offers several advantages, including relaxed mechanical tolerances and simplified construction. To assess and optimize its performance, we constructed a medium-scale prototype hosting 28 Li₂MoO₄ crystals and 30 Ge light detectors, both operated as cryogenic calorimeters at the Laboratori Nazionali del Gran Sasso (Italy). Despite an unexpected excess of noise in the light detectors, the results of this test proved (i) a thermal stability better than ±0.5 mK at 10 mK, (ii) a good energy resolution of Li₂MoO₄ cryogenic calorimeters, (6.6 ± 2.2) keV FWHM at 2615 keV, and (iii) a Li₂MoO₄ light yield measured by the closest light detector of 0.36 keV/MeV, sufficient to guarantee the particle identification requested by CUPID.
</summary>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Higher Spin-Statistics Theorem for Invertible Quantum Field Theories</title>
<link href="https://hdl.handle.net/1721.1/163465" rel="alternate"/>
<author>
<name>Krulewski, Cameron</name>
</author>
<author>
<name>Stehouwer, Luuk</name>
</author>
<author>
<name>Müller, Lukas</name>
</author>
<id>https://hdl.handle.net/1721.1/163465</id>
<updated>2025-10-31T03:08:24Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Higher Spin-Statistics Theorem for Invertible Quantum Field Theories
Krulewski, Cameron; Stehouwer, Luuk; Müller, Lukas
We prove that every unitary invertible quantum field theory satisfies a generalization of the famous spin-statistics theorem. To formulate this extension, we define a higher spin action of the stable orthogonal group O on appropriate spacetime manifolds, which extends both the reflection involution and spin flip. On the algebraic side, we define a higher statistics action of O on the universal target for invertible field theories, IZ, which extends both complex conjugation and fermion parity (−1)^F. We prove that every unitary invertible quantum field theory intertwines these actions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational metrology for materials</title>
<link href="https://hdl.handle.net/1721.1/163464" rel="alternate"/>
<author>
<name>Warren, James</name>
</author>
<author>
<name>Read, Jake</name>
</author>
<author>
<name>Seppala, Jonathan</name>
</author>
<author>
<name>Strand, Erik</name>
</author>
<author>
<name>Gershenfeld, Neil</name>
</author>
<id>https://hdl.handle.net/1721.1/163464</id>
<updated>2026-03-08T03:26:27Z</updated>
<published>2025-07-31T00:00:00Z</published>
<summary type="text">Computational metrology for materials
Warren, James; Read, Jake; Seppala, Jonathan; Strand, Erik; Gershenfeld, Neil
Advanced materials hold great promise, but their adoption is impeded by the challenges of developing, characterizing, and modeling them, then of designing, processing, and producing something with them. Even if the results are open, the means to do each of these steps are typically proprietary and segregated. We show how principles of open-source software and hardware can be used to develop open instrumentation for materials science, so that a measurement can be accompanied by a complete computational description of how to reproduce it. And then we show how this approach can be extended to effectively measure predictive computational models rather than just model parameters. We refer to these interrelated concepts as “computational metrology.” These are illustrated with examples including a 3D printer that can do rheological characterization of unfamiliar and variable materials.
</summary>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2024, Music and Theater Arts</title>
<link href="https://hdl.handle.net/1721.1/163463" rel="alternate"/>
<author>
<name>Makan, Keeril</name>
</author>
<id>https://hdl.handle.net/1721.1/163463</id>
<updated>2025-10-31T03:09:53Z</updated>
<published>2024-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2024, Music and Theater Arts
Makan, Keeril
This report contains the following sections: Current Goals, Objectives, Priorities; Accomplishments; Administrative Initiatives; Finances and Funding; Personnel Information; and Teaching and Curriculum.
</summary>
<dc:date>2024-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Science Robustness in Uncertain Environments: Application to a Uranus Flagship Mission</title>
<link href="https://hdl.handle.net/1721.1/163462" rel="alternate"/>
<author>
<name>Gentgen, Chloe</name>
</author>
<author>
<name>Landau, Damon</name>
</author>
<author>
<name>Weiss, Benjamin P.</name>
</author>
<author>
<name>Jasinski, Jamie M.</name>
</author>
<author>
<name>De Weck, Olivier</name>
</author>
<id>https://hdl.handle.net/1721.1/163462</id>
<updated>2026-03-08T03:29:02Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Assessing Science Robustness in Uncertain Environments: Application to a Uranus Flagship Mission
Gentgen, Chloe; Landau, Damon; Weiss, Benjamin P.; Jasinski, Jamie M.; De Weck, Olivier
Defining science objectives for missions to unexplored bodies can be difficult when the underlying processes and mechanisms are not well understood. This uncertainty presents a challenge when attempting to determine mission requirements to address these objectives. Additionally, uncertainties in the environment may present risks to the system and mission operations. To this end, uncertainty quantification is increasingly used to inform and validate mission design. However, a framework has yet to be developed to support trajectory tradespace exploration of missions targeting uncertain environments through science modeling. The proposed methodology develops a science systems engineering framework integrating a science representation with trajectory designs to compute quantitative science value metrics. The science model is established by identifying relevant physical models (such as governing equations and assumptions) and input variables from the literature, simulation data, as well as past mission results. Variables are defined with probability distributions, and Monte Carlo simulations are used to quantify the uncertainties. For a given trajectory, the analysis outputs predictive probability distributions of the science value metrics, highlighting the trajectory's science performance and its robustness to uncertainty in the physical processes. The framework is applicable to any mission targeting highly dynamic and uncertain processes. This paper demonstrates its application to a future Uranus Flagship mission, focusing on magnetosphere science objectives. Listed as the highest priority Flagship mission by the latest Decadal Survey, a mission to the Uranian system aims to answer science questions regarding Uranus's interior and atmosphere, its satellites and rings, and its magnetosphere. 
Analytic and numerical models have been developed to understand Uranus's magnetosphere; however, significant uncertainties remain, leading to challenges when defining magnetosphere science investigations. By applying the proposed methodology, this paper shows a significant variation in predicted science metrics of interest (e.g., number of magnetopause crossings) that can be expected from similar trajectories due to varying environment conditions (solar wind and interplanetary magnetic field) or different arrival times at Uranus. These results should inform the flow-down of measurement requirements to mission design requirements for magnetosphere science.
2025 IEEE Aerospace Conference, 1-8 March, Big Sky, MT, USA
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Evaluation of Skill-Based Imitation Learning&#13;
Policies for Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163461" rel="alternate"/>
<author>
<name>Palleiko, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163461</id>
<updated>2025-10-30T03:24:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Evaluation of Skill-Based Imitation Learning&#13;
Policies for Robotic Manipulation
Palleiko, Andrew
Imitation learning is a popular approach for obtaining intelligent robotic policies by learning from human demonstrations. Within this field, there is significant interest in the development of multi-task architectures that can efficiently learn diverse sets of tasks. Skill-based imitation learning methods, which abstract action sequences into "skill" representations for planning, offer structural advantages for handling the challenges of multi-task imitation learning that make them an attractive option for this problem. This work presents a novel skill-based imitation learning architecture formulation, with a causal transformer VAE skill-abstraction network paired with an autoregressive transformer planning policy. We find that our skill-abstraction network shows promise in identifying meaningful skills, but that the chosen planning architecture is poorly suited for predicting these skills due to multimodality in the resulting latent space. This is followed by a set of evaluations applied to an existing skill-based method with comparisons to a non-skill-based network on a multi-task dataset. We systematically investigate the performance impacts of six different policy and dataset conditions: data quantity, task variety, retry behavior, control precision, goal representations, and zero-shot transfer. Our experiments reveal limited increases in skill-based policy performance with more demonstrations or task variety, but improvements across architectures through exposure to demonstration retry behavior. Overall, the skill-based architecture demonstrates superior robustness to goal representation variations and low-level process noise than the non-skill-based policy, while neither architecture achieves meaningful zero-shot generalization to novel task combinations. These findings provide insights into the current state of IL methods, with the additional goal of establishing a framework for the evaluation of future multi-task IL architectures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Multimodal Streaming Perception: A Real-Time&#13;
Perception Scheduling Framework Based on Relevance</title>
<link href="https://hdl.handle.net/1721.1/163460" rel="alternate"/>
<author>
<name>Huang, Dingcheng</name>
</author>
<id>https://hdl.handle.net/1721.1/163460</id>
<updated>2025-10-30T03:24:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Multimodal Streaming Perception: A Real-Time&#13;
Perception Scheduling Framework Based on Relevance
Huang, Dingcheng
In modern human-robot collaboration (HRC) applications, multiple perception modules jointly extract visual, auditory, and contextual cues to achieve comprehensive scene understanding, enabling the robot to provide appropriate assistance to human agents intelligently. While executing multiple perception modules on a frame-by-frame basis enhances perception quality and information gains in offline settings, it inevitably accumulates latency, leading to a substantial decline in system performance in streaming perception scenarios. Recent work in scene understanding, termed Relevance, has established a solid foundation for developing efficient methodologies in HRC. However, modern perception pipelines still face challenges related to information redundancy and suboptimal allocation of computational resources. Drawing inspiration from the relevance concept and the inherent sparsity of information in HRC events, we propose a novel lightweight perception scheduling framework that efficiently leverages output from previous frames to estimate and schedule necessary perception modules in real-time. Our experimental results demonstrate that the proposed perception scheduling framework effectively reduces computational latency by up to 27.52% compared to conventional parallel perception pipelines, while also achieving a 72.73% improvement in MMPose accuracy and comparable YOLO accuracy. Additionally, the framework demonstrates high keyframe accuracy, achieving rates of up to 98% in dynamic scenes. The results validate the framework’s capability to enhance real-time perception efficiency without significantly compromising accuracy. Additionally, the framework shows potential as a scalable and systematic solution for multi-modal streaming perception systems in human-robot collaboration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fracture Mechanics of Networks</title>
<link href="https://hdl.handle.net/1721.1/163459" rel="alternate"/>
<author>
<name>Hartquist, Chase M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163459</id>
<updated>2025-10-30T03:21:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fracture Mechanics of Networks
Hartquist, Chase M.
Networks of interconnected materials permeate throughout nature, biology, and technology due to exceptional mechanical performance. Despite the importance of failure resistance in network design and utility, no existing physical model effectively reconciles strand mechanics and connectivity to predict fracture in diverse networks that constitute polymeric, architected, and biological materials. While traditional models predict that the intrinsic fracture energy – the minimum energy to propagate a crack per unit area – of a polymer network is the energy to rupture a layer of chains, they can underestimate experiments by up to two orders of magnitude. In Part I, we show that the intrinsic fracture energy of polymer-like networks stems from nonlocal energy dissipation. We then reveal a general scaling law that captures nonlocal energetic contributions and connects strand mechanics with topological connectivity to universally predict the intrinsic fracture energy of stretchable networks. We measure intrinsic fracture energy using experiments and simulations of 2D and 3D networks with various strand constitutive behaviors, defect densities, strand length distributions, lattice topologies, and length scales. Results show that local strand rupture and nonlocal energy release contribute synergistically to the measured intrinsic fracture energy in networks. These effects align such that the intrinsic fracture energy scales independently of the energy to rupture a strand; it instead depends on the strand rupture force, breaking length, and connectivity. In Part II, we present a model for real polymer fracture and design elastomers with highly regular connectivity. End-linking then deswelling star polymers produces a class of elastomers with low defects and no trapped entanglements, enabling ultrahigh strain-induced crystallinity of up to 50% and stretchability that scales beyond the saturated limit. 
These features promote a pronounced elastocaloric cooling effect and enable reversible two-way tuning of thermal conductivity by strain or temperature modulation. The mechanical and thermal properties of these polymer networks offer promise in addressing challenges in clean energy, thermal management, and biomedicine. Our findings establish a physical basis for understanding network fracture and design principles for fabricating tough polymeric, biological, and architected materials across multiple length scales for advanced applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Sit-to-Stand Transition using Koopman Lifting Linearization and Human State Estimation</title>
<link href="https://hdl.handle.net/1721.1/163458" rel="alternate"/>
<author>
<name>Bell IV, John H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163458</id>
<updated>2025-10-30T03:21:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling the Sit-to-Stand Transition using Koopman Lifting Linearization and Human State Estimation
Bell IV, John H.
The Sit-to-Stand (STS) transition is one of the most dangerous daily activities for the elderly population, as it is one of the situations in which falls occur most often. Despite its risks, STS dynamics remain poorly understood, and current STS assistance devices fail to utilize knowledge of STS dynamics to effect their support. This thesis presents contributions to the dynamic modeling of STS and to human-robot collaboration for improving robotic assistance of STS. To coherently capture the multi-phase nature of STS, lifting linearization, a dynamic modeling methodology inspired by Koopman operator theory, is used to subsume segmented local dynamics in a globally linear dynamic model. A novel class of lifting linearization basis functions, termed “State-Membership Product (SMP)” observables, enables both the seamless blending of local dynamics into a global model, and the direct extraction of phase-specific behaviors from the global model. It is shown that an SMP-Koopman linear model tuned to published data of STS experiments is capable of reproducing the multi-phase STS dynamics with a single linear model. Building on this framework, STS is additionally modeled as a lifted linear feedback control system, composed of an SMP-Koopman-based open-loop biomechanical model of the human body and a linear quadratic regulator (LQR) which guides the body to stand up. The LQR controller, tuned to replicate STS motion, guides the human body model through the phases of STS without explicit phase-switches, improving system robustness. To enhance human-robot collaboration in STS assistance, a framework for estimating patient cooperativeness is also introduced, leveraging a simplified dynamic model and an Extended Kalman Filter. By analyzing a human’s initial response to applied physical and verbal cues, the estimation framework assesses willingness to engage in assisted STS. 
Together, these contributions advance both the modeling and estimation of STS, offering insights crucial for the development of safe, effective robotic assistance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verse and Reversal: The Poetic Return to the Inner Child as Black Revolutionary Praxis</title>
<link href="https://hdl.handle.net/1721.1/163457" rel="alternate"/>
<author>
<name>Dunnell, Kaelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/163457</id>
<updated>2025-10-30T03:25:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Verse and Reversal: The Poetic Return to the Inner Child as Black Revolutionary Praxis
Dunnell, Kaelyn
Black revolutionary movements historically have centered the role of the Black child—as either foundation, visionary, or representation of Black liberation. The identity of any given revolutionary movement is characterized by three tenets: resistance, imagination, and love. In order for the individual to uncover the origin of these three tenets for themselves, they must return to their inner child. This thesis is about the poiesis of the revolutionary—the making and re-making of the revolutionary—and in it I argue that the very process of forming revolutionary identity is poetic. I coin the phrase poetic revolutionary to capture that process, which involves tapping into the font of revolutionary soulfulness, which is one’s inner child or the voice and experience of the Black child. The literature guiding this analysis is from June Jordan’s archive hosted at Schlesinger Library, with Voice of the Children, a children’s publication edited by Jordan, as one of the most notable works. I examine June Jordan as the model of the Black revolutionary who has uncovered the language of her child, and I also examine the works of the children she worked with (whose 13–15-year age ranges, notably, are on the cusp of the definition of childhood that I adopt in this thesis—more in Section I). I gather evidence from workshop diary entries written by Jordan and by her students, poetry excerpts from Voice of the Children, and Jordan’s own writing from her childhood and beyond to support my theory of the poetic revolutionary.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Human Balance Performance and Control to Inform Therapy</title>
<link href="https://hdl.handle.net/1721.1/163456" rel="alternate"/>
<author>
<name>Shiozawa, Kaymie S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163456</id>
<updated>2025-10-30T03:21:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Quantifying Human Balance Performance and Control to Inform Therapy
Shiozawa, Kaymie S.
Maintaining balance is essential for daily activities and overall health. However, balance capability often declines with age or due to health conditions such as stroke, increasing fall risk. Falls among older adults are a major public health concern, affecting 14 million older adults annually in the US and directly causing over 40,000 deaths. Timely and accurate assessment of balance impairment is crucial to prevent falls and promote independence. Current assessments rely heavily on subjective therapist evaluations, underscoring the need for objective, quantitative methods. With the growing strain on healthcare systems due to an aging population, continuous at-home balance monitoring is also increasingly important. Additionally, a comprehensive understanding of the motor control mechanisms that deteriorate with aging or disease is crucial for informing therapy methods and technologies. &#13;
&#13;
The goal of this thesis was to develop and validate methods that quantify quiet balance ability and control in unimpaired and impaired human participants. The first part focuses on assessing balance ability, the capacity to maintain upright posture during quiet stance that is currently often quantified by measures of body sway. A review of the strengths and limitations of current clinical and instrumented balance assessments highlighted a critical need for continuous assessment methods that enable objective monitoring of balance function outside of clinical settings. Addressing this need, a novel algorithm that quantifies balance ability using only force and motion sensors embedded in an instrumented cane was developed. Well-established balance measures were successfully estimated in both younger and older adults, demonstrating the proposed method's potential to facilitate continuous balance monitoring in real-world environments.&#13;
&#13;
The next part focuses on identifying balance control strategies. The novel intersection-point analysis, based on foot-force direction and point of application, was used in conjunction with a simple biomechanical model and an optimal controller to quantify balance control. The first study demonstrated that unimpaired quiet balance in a challenging environment was best described by a controller that maintained minimal effort by adjusting relative ankle and hip joint torques. Applying this method to aging populations in a subsequent study revealed that older adults rely more on neural feedback, possibly to compensate for muscle strength deficiency. This study also quantified individual balance controllers, highlighting the method's potential as a diagnostic tool for aging populations. Finally, the model was extended to describe balance control after stroke. The results suggest that the non-paretic limb compensated for the paretic limb's abnormal coordination pattern by strongly favoring neural feedback. As one of the first studies to model quiet balance after stroke, this work lays the foundation for future efforts on studying balance impairments. The contributions of this thesis are instrumental to enhancing at-home monitoring, advancing clinical practices, and reducing fall-related injuries, ultimately improving quality of life for aging and neurologically impaired populations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deformable Object Manipulation with a Tactile Reactive Gripper</title>
<link href="https://hdl.handle.net/1721.1/163455" rel="alternate"/>
<author>
<name>Sunil, Neha</name>
</author>
<id>https://hdl.handle.net/1721.1/163455</id>
<updated>2025-10-30T03:21:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deformable Object Manipulation with a Tactile Reactive Gripper
Sunil, Neha
Manipulating deformable objects remains a fundamental challenge in robotics, as techniques developed for rigid objects often fail to generalize. Deformable objects exhibit infinite-dimensional configuration spaces, frequent self-occlusion, and high model uncertainty, making global state estimation and predictive modeling unreliable. To address these challenges, we propose a perception-driven framework that combines global visual understanding with local tactile feedback. Rather than modeling the full configuration of the object, we leverage local constraints, grounded in modular visual and tactile representations, to enable robust, reactive, and generalizable manipulation. The primary contributions of this work include:
• Chapter 2: Cable Following. A tactile control strategy for in-hand cable manipulation that decouples contact regulation from object pose control, enabling fast, reactive sliding and closed-loop plug insertion using only local tactile feedback.
• Chapter 3: Towel Edge Tracing. An extension of contact-based control to fabric edge following and the learned tactile perception networks to support this capability.
• Chapter 4: Visuotactile Grasp Affordance. A grasp affordance model trained in simulation and refined with tactile self-supervision, enabling high-confidence edge grasping on towels.
• Chapter 5: Dense Object Correspondence. A confidence-aware dense descriptor representation. Supports correspondence across crumpled and symmetric garments in air and on a table.
• Chapter 6: Behavior Architecture and Planning Interfaces. Integration of perception modules into a reactive, confidence-based folding system and an exploration of how dense descriptors can interface with demonstrations, language, and task and motion planning.
Collectively, these contributions show that global state estimation and dynamics prediction are not required for reliable deformable manipulation. 
Instead, semantically meaningful local interactions, guided by modular visual and tactile representations, can drive scalable, long-horizon behaviors across varied objects, configurations, and tasks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems</title>
<link href="https://hdl.handle.net/1721.1/163454" rel="alternate"/>
<author>
<name>Lindberg, Ian G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163454</id>
<updated>2025-10-30T03:24:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems
Lindberg, Ian G.
This thesis explores the design and development of several mechanical elements relevant to two technologies important to a global transition to green energy: hydrogen and electric vehicles. The portion of the thesis relating to hydrogen focuses on preloading mechanisms and high-temperature seals, two design spaces crucial to the implementation of solid oxide hydrogen generation. Due to the high operating temperatures (600°C - 800°C), seal materials commonly used in other applications are inadequate and glass- or vermiculite-based seals must be used. The delicateness of these seals makes them a common failure point, and consistent application of a preloading force is key to mitigating this. The concept of a variable-bypass piston is proposed as a preloading mechanism suitable for the high temperatures present inside solid oxide electrolyzer systems, and the development of seal geometries as well as flow characterization of porous steel wool seals to enable parametric design is documented. As an alternative to current sealing methods, initial development of a composite seal utilizing materials and manufacturing methods originating in the semiconductor industry was also conducted. The final section of the thesis proposes the concept and covers initial testing of fluid transfer through a kinematic coupling, a topic of potential interest for implementing liquid pack cooling in a system of rapidly swappable batteries for electric vehicles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the Performance of Skeletal Muscle Powered Biohybrid Robots</title>
<link href="https://hdl.handle.net/1721.1/163453" rel="alternate"/>
<author>
<name>Bawa, Maheera</name>
</author>
<id>https://hdl.handle.net/1721.1/163453</id>
<updated>2025-10-30T03:24:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing the Performance of Skeletal Muscle Powered Biohybrid Robots
Bawa, Maheera
Skeletal muscle powers all voluntary motion in many living creatures, enabling behaviors such as walking, jumping, swimming, and flying. The field of biohybrid robotics aims to use biological actuators, such as skeletal muscle, to power adaptable robots that respond to their environment. Previous work in this field has focused on deploying 3D skeletal muscle tissues to power robotic function. In natural systems, muscles can also be organized in 2D formats to power a range of movements such as fish-like swimming and peristaltic pumping. However, long-lasting 2D cultures of skeletal muscle have been precluded by force-generating cells delaminating from their underlying substrate. Building on previous work from our lab demonstrating a method to culture contractile skeletal muscle in 2D formats, this work aims to enhance the performance of these systems by tuning substrate stiffness and topography. We show that optimizing system parameters prolongs actuator lifetime and enhances force by 100x.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production</title>
<link href="https://hdl.handle.net/1721.1/163452" rel="alternate"/>
<author>
<name>Fillon, Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/163452</id>
<updated>2025-10-30T03:24:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production
Fillon, Marie
This thesis presents the development and production of FrED (Fiber Extrusion Device), an educational manufacturing system designed to bridge the gap between theoretical instruction and hands-on practice in process control, computer vision, and smart manufacturing. Building on an existing prototype, this work focused on transitioning FrED from a proof-of-concept into a production-ready system by designing scalable workflows, improving hardware and software integration, and developing tools to ensure traceability and repeatability across builds. A major contribution of this thesis was the enhancement and implementation of a smart factory environment capable of supporting batch production. This included designing and deploying applications using Tulip Interfaces to manage inventory, guide subassembly processes, and monitor production metrics in real time. A modular SKU system and structured bin labeling framework were introduced to reduce errors, maintain version control, and support future growth. Station-specific apps were developed and refined to ensure consistent assembly and simplify onboarding across a rotating team of users. In parallel, this thesis contributed to the evaluation and refinement of a vision-based diameter measurement system using a low-cost USB camera. The system was analyzed under various operating conditions and its limitations under motion and variable lighting were quantified. Multiple image processing strategies were explored and robustness metrics were developed to inform future improvements. To ensure pedagogical relevance, the system was tested in user-facing workshops and public demo sessions. Feedback informed updates to both the assembly process and instructional content. By the end of the development cycle, the system supported the successful production of 35 complete FrED units, establishing a replicable model for small-scale manufacturing. 
This thesis demonstrates how modular digital infrastructure can enable scalable hardware deployment. It also highlights the practical challenges of transitioning from prototype to production and proposes tools and methods that can support broader adoption of smart manufacturing principles in learning environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection</title>
<link href="https://hdl.handle.net/1721.1/163451" rel="alternate"/>
<author>
<name>Sanghai, Rohan S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163451</id>
<updated>2025-10-30T03:24:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection
Sanghai, Rohan S.
Omni-wheels, known for enabling holonomic motion in robotic systems, often introduce vibration due to their complex geometry and multiple contact points. Unlike caster wheels with established testing standards, omni-wheels lack comprehensive characterization methods. While parallel studies by Ilkbahar [1] and Donnellan [2] explore their rolling resistance and static load capacity, a systematic analysis of vibration characteristics remains absent from the literature. This thesis presents an investigation of the vibration behavior of various omni-wheel designs using a Design of Experiments (DOE) approach. A full factorial experimental design was developed, considering factors such as wheel type, rotational speed, applied load, and wheel orientation angle. Individual regression models were developed for each of six wheel types, treating operational parameters as continuous variables. Vibration levels were measured using root mean square (RMS) acceleration, derived from Fast Fourier Transform (FFT) and Power Spectral Density (PSD) analyses of accelerometer data. Results show that rotational speed consistently increased vibration across all wheel designs, while lateral motion (90° angle) consistently reduced vibration compared to forward motion. The effect of applied load varied significantly between wheel designs, with some wheels showing reduced vibration under load while others remained unaffected. Wheels DZ(1) and Vex(5) demonstrated the lowest average vibration levels, though post-test inspection revealed trade-offs with durability, including roller deformation and material degradation. Interaction effects, particularly between angle and speed, were statistically significant for all wheel types, indicating that the benefits of lateral motion are enhanced at higher speeds. 
This research provides a framework for optimizing omni-wheel selection to minimize vibration by developing wheel-specific predictive models that quantify sensitivities and interaction effects across various designs and conditions, improving system performance and stability. The findings highlight that wheel selection must consider not only vibration performance but also trade-offs with durability and rolling resistance, establishing vibration characteristics as a critical consideration alongside other performance metrics when selecting omni-wheels.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Model-Based Planning and Control Framework for Parkour-Style Legged Locomotion</title>
<link href="https://hdl.handle.net/1721.1/163450" rel="alternate"/>
<author>
<name>Chignoli, Matthew T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163450</id>
<updated>2025-10-30T03:21:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Model-Based Planning and Control Framework for Parkour-Style Legged Locomotion
Chignoli, Matthew T.
Legged robots have long been envisioned as a means of expanding robotic capabilities beyond structured environments, yet achieving high-agility locomotion remains a fundamental challenge. This thesis presents a model-based framework for parkour-style locomotion, enabling robots to execute highly dynamic maneuvers such as jumps, rolls, and flips with precision and robustness. A key challenge in planning these motions is selecting an appropriate dynamic model that balances computational efficiency with physical accuracy. To address this, a model assessment strategy is introduced to determine the simplest model capable of capturing task-relevant dynamics. Even with well-chosen models, solving long-horizon trajectory optimization problems for dynamic motions is computationally demanding. This thesis introduces graduated optimization techniques, which improve solver efficiency and reliability by generating high-quality initial guesses through progressively refined problem formulations. Additionally, a novel formulation of rigid-body dynamics algorithms for systems with kinematic loops accelerates trajectory optimization and simulation. Finally, two control strategies are proposed to execute planned motions on hardware: a model-based tracking controller for real-time adjustments and an imitation learning policy trained on optimal trajectories to enhance robustness. Extensive experiments on hardware validate the framework, demonstrating the successful execution of complex, high-impact locomotion behaviors. By integrating advanced planning, optimization, and control techniques, this work establishes a foundation for high-agility legged locomotion, pushing beyond conventional automation toward real-world, dynamic robotic movement.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester</title>
<link href="https://hdl.handle.net/1721.1/163449" rel="alternate"/>
<author>
<name>Scali, William T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163449</id>
<updated>2025-10-30T03:24:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester
Scali, William T.
Magnetohydrodynamic (MHD) power generation presents a promising approach for harvesting energy from marine environments, offering a sustainable alternative for powering naval assets and coastal infrastructure. While energy harvesting technologies are widely used in terrestrial and aerial applications, their implementation in marine environments remains limited. This thesis explores the feasibility of an MHD Inductive Marine Energy Harvester, optimizing its design for undersea naval applications to enhance energy efficiency and reduce carbon emissions with minimized construction costs. A theoretical 2D model was developed based on Maxwell’s equations and Fourier analysis to characterize the physics governing MHD power generation in seawater. This model was extended to multiple concentric gaps on one device, refining predictions of power output under varying flow regimes. Numerical simulations using MATLAB enabled the evaluation of key parameters, including fluid conductivity, magnetic field strength, and shroud design, to optimize energy conversion efficiency. Furthermore, geographical and coastal tide analyses were conducted to determine optimal deployment locations, maximizing power extraction from natural marine currents. Economic viability was assessed through a cost-benefit analysis, comparing the energy yield per unit cost of the harvester against existing renewable energy technologies and other maritime power sources. Results indicate that under specific conditions, MHD generators can effectively supplement energy demands, reducing reliance on conventional fuel or other electrical power sources. The findings of this research contribute to the advancement of marine renewable energy technologies, demonstrating the potential of MHD induction-based harvesting as a scalable solution for sustainable power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation</title>
<link href="https://hdl.handle.net/1721.1/163448" rel="alternate"/>
<author>
<name>Hall, Jeff</name>
</author>
<id>https://hdl.handle.net/1721.1/163448</id>
<updated>2025-10-30T03:24:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation
Hall, Jeff
Over the last 50 years, the leading global environmental hazard has not been hurricanes, lightning, tornadoes, floods, or earthquakes, but extreme heat events. With climate models projecting an increase in the frequency, intensity, and duration of heatwaves in the coming decades, this threat to life is expected only to increase. Air conditioning has been demonstrated to reduce mortality during heatwaves, yet it uses an order of magnitude more energy than necessary to keep a human cool. Using principles of similitude to extrapolate the capability of existing vapor compression equipment, an objective function to maintain energy balance in a human exposed to extreme heat is developed across a design space. The function shows that in a standard forced-convection air conditioning system, there is no opportunity to provide emergency cooling of a human due to the slow mass flow rate needed to cool air in a single stream. As such, status-quo attempts to cool humans with general-purpose air conditioning will always be an inefficient use of energy. By focusing on keeping people cool, not spaces, we propose three paths forward for critical human cooling that appropriately match the energy needs of humans: radiative cooling, liquid cooling devices, and low-mass flow air conditioning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fractured Practices: How Schooling Norms Limit Modeling Practices in Traditional Technical Thermal-Fluids Engineering Courses -- And the Possibilities Emerging through the Cracks</title>
<link href="https://hdl.handle.net/1721.1/163447" rel="alternate"/>
<author>
<name>Huffman, Sandra</name>
</author>
<id>https://hdl.handle.net/1721.1/163447</id>
<updated>2025-10-30T03:21:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fractured Practices: How Schooling Norms Limit Modeling Practices in Traditional Technical Thermal-Fluids Engineering Courses -- And the Possibilities Emerging through the Cracks
Huffman, Sandra
In professional science and engineering contexts, modeling practices are frequent and diverse. To understand, analyze, and communicate, scientists and engineers simplify and distort the complex systems with which they work. This practice is known as modeling. Typically, scientists create models to predict and explain phenomena while engineers develop them to analyze and test systems, make design decisions, and predict the performance of built systems. Models can include verbal (ex. analogy, story), visual (ex. diagrams, graphs, images), and symbolic (ex. equations) representations. When scientists and engineers model, they do so expansively: pulling from different resources, combining modeling strategies, engaging in critique and iteration, and contextualizing their claims in the work of their field. This is not the case for students in technical engineering classes who are attempting to learn these skills. Traditional, lecture-based courses are the norm for introducing technical material to undergraduate engineering students. These courses typically consist of lectures, recitations, problem sets, and exams. In this type of class, students report homework and test problems as having an outsized influence on their learning approach. These problems tend to be narrow and prescribed. Colloquially known as ‘Textbook-Style’ problems, well-defined, single-solution problems are not sufficient to prepare students to successfully tackle the ill-defined, multifaceted engineering problems they will face in their careers. These problems do not elicit student engagement in scientific or engineering modeling practices. Instead, they lead to inauthentic, bounded learning where students develop strategies adequate for groups of similar problems, but too narrow for use outside of the classroom. There has been significant research on innovative educational interventions and alternative problem types shown to improve classroom learning. 
However, educators work within established structures that resist change, leading to the perpetuation of insufficient practices. The gap between textbook-style problems and the problems engineers face, therefore, exists not just in the problem type, but in the context surrounding the task. In this work, I describe and characterize the norms and practices of the classroom environment through three qualitative studies, each centered on traditional technical thermal-fluids courses. Specifically, I investigate the ways in which the development of student modeling practices are supported or undermined. I do this, in part, by adapting the theoretical framework of Figured Worlds. Originally developed by Dorothy Holland and later used in Engineering Education research, figured worlds is a situative framework that allows researchers to look at distinct, sometimes contradictory cultural worlds within the same group and activity. In the first study, I look at individual student approaches to classroom tasks in a think-aloud study, comparing their problem solving approaches and analyzing prompt-student interactions. In the second study, I analyze small groups’ modeling practices and how they are limited by the cultural practices of schooling. In the third study, through semi-structured interviews, I document instructor perceptions of their research and teaching, and discuss the misalignments within and between these contexts. Together, these works outline the mechanisms by which school practices can inhibit the development of student modeling capabilities and the role of students and instructors in perpetuating these practices. In describing student and instructor behavior and contextualizing practices that may otherwise be ascribed to misconceptions, carelessness, or ignorance, I hope to build a foundation for future research into pragmatic educational interventions for enhanced learning outcomes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly</title>
<link href="https://hdl.handle.net/1721.1/163446" rel="alternate"/>
<author>
<name>Almquist, Ethan T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163446</id>
<updated>2025-10-30T03:24:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly
Almquist, Ethan T.
Design requirements on modern naval platforms are increasing the complexity and criticality of onboard electric plants. They form the backbone of warship operational capability and are at the heart of maritime decarbonization. Tasks such as assessing the ship's capacity in a damaged state, optimizing the mission profile of a fleet of vehicles, and evaluating broad design spaces in an efficient manner are increasingly difficult as electric network complexity increases. Traditional modeling techniques are either too computationally expensive, or lack the fidelity necessary to produce meaningful insights into the electric network's operation. Behavioral modeling bridges this gap, but is underdeveloped to support the system architectures of tomorrow's ships. This work details the advancement of behavioral modeling of electrical systems to incorporate hybrid AC/DC and ring bus architectures, the development of parallelization techniques, and SPARCS: a software package offering Shipboard Parallelized Analytics with a Rapid Configuration Simulator.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for Longevity: Service and System Innovation</title>
<link href="https://hdl.handle.net/1721.1/163445" rel="alternate"/>
<author>
<name>Lee, Sheng-Hung</name>
</author>
<id>https://hdl.handle.net/1721.1/163445</id>
<updated>2025-10-30T03:21:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design for Longevity: Service and System Innovation
Lee, Sheng-Hung
The global demographic shift toward an aging population presents complex social, economic, and systemic challenges, necessitating innovative approaches to service design, systems thinking, and financial planning. This dissertation, Design for Longevity: Service and System Innovation, examines these transformations and proposes strategies to foster a “longevity society”, a new era in society necessitating a fundamental rethinking of age and ageing to effectively harness the opportunities afforded by increased life expectancy (Scott, 2021). This research is built upon five relevant paradigm shifts: 1. from age-based to stage-based mindsets, 2. from product-driven to service-driven solutions, 3. from human-centered to humanity-centered design, 4. from circular to longevity economics, and 5. from an aging society to a longevity society. These shifts redefine the role of designers and researchers in creating adaptive, inclusive, and sustainable systems for the future. This dissertation explores how tangible artifacts, Longevity Planning Blocks (LPBs), can be employed to create effective service encounters. The research questions explore 1. how to use boundary objects (BOs) to uncover and define latent user needs, 2. how to use a mixed-method approach to analyze experiment data, 3. data-driven persona creation, and 4. the design of longevity planning services across financial planning, service innovation, and system thinking. Central to the research is a study of LPBs, BOs designed to facilitate collaborative engagement between a facilitator and 69 Boston-based participants, stratified by age, gender, pre-tax annual income, and assets. LPBs, employed in experiments, help investigate participants’ needs and concerns across various life transitions and stages. 
These tangible BOs facilitated informal yet insightful discussions, uncovering how individuals navigate ambiguity, make complex decisions, manage their evolving physical, mental, and social health, and perceptions about living solo. Data from in-person longevity planning experiments provided nuanced insights into the interplay of individual, societal, and systemic factors shaping longevity planning services. A mixed-methods approach integrates qualitative and quantitative techniques, including expert and user interviews, co-creation workshops, pre- and post-experiment surveys, hierarchical cluster analysis, K-means clustering for persona development, and causal loop diagrams for longevity planning service system modeling. Constructivist grounded theory and exploratory factor analysis uncover emerging themes and systemic interconnections, emphasizing the importance of adaptive services that align with changing needs and broader social infrastructures. The study introduces the notion of Design for Longevity (D4L), expanding on longevity economics and circular economy principles to address the complexities of extended lifespans. D4L highlights how evolving resources, transformative needs, and systems integrate life stages into the design of products, services, and experiences. This dissertation contributes to service innovation, financial planning, and system design by proposing actionable insights for longevity planning services. It emphasizes multi-stage life planning, intergenerational collaboration, and systemic thinking as foundational to a longevity society. This dissertation contributes a mixed-method approach, offering design practitioners a replicable, data-driven framework for persona creation applicable beyond longevity planning. Concluding with reflections on social infrastructure, community, and culture, the study calls for cross-disciplinary collaboration to address longevity planning challenges. 
By advancing the understanding of longevity planning and its systemic implications, this work lays a foundation for designing a future where extended lifespans are inclusive and socially engaged.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Prosthetic Leg Design Frameworks for People with an Above-Knee Amputation</title>
<link href="https://hdl.handle.net/1721.1/163444" rel="alternate"/>
<author>
<name>Petelina, Nina T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163444</id>
<updated>2025-10-30T03:21:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrated Prosthetic Leg Design Frameworks for People with an Above-Knee Amputation
Petelina, Nina T.
A well-fitting, high-performance prosthesis for people with a lower limb amputation can greatly improve users’ mobility and quality of life. Still, many amputees lack access to high-performance prosthetic components due to the cost and availability of continuous care. This thesis aims to design low-cost, high biomechanical performance above-knee prosthetic leg components (prosthetic foot and knee) that will result in a walking motion likely to be perceived as able-bodied after minimal acclimation time. Above-knee amputees have two common gait deviations from able-bodied and below-knee amputee gait: lack of early stance knee flexion (ESF) and delayed initiation of knee flexion (IOF) during late stance phase. These deviations are likely a result of prioritization of stability at the expense of other functions such as shock absorption and progression through stance. A preliminary perception study was conducted to investigate the acceptable bounds of gait deviation that can be incorporated into a prosthetic leg design without compromising the perception of "typical" walking. Using these results, I created the Hip Trajectory Error (HTE) framework for designing prosthetic feet specifically for people with an above-knee amputation. The HTE framework takes into account the lack of ESF by incorporating the shock absorption function of ESF within the prosthetic foot design. This is achieved by targeting able-bodied hip center motion, which is correlated with sufficient shock absorption during the stance phase. This thesis presents an optimization and performance evaluation process that resulted in a prosthetic foot structure that not only closely replicates able-bodied hip center motion but also could be manufactured for a low cost. An experimental study successfully demonstrated that the Hip Trajectory Error (HTE) framework can be used to predictively design prosthetic feet for above-knee amputees. 
HTE-designed prosthetic feet enable comparable biomechanical performance to daily-use tuned and prescribed prosthetic feet within 10-15 minutes of acclimation time and without iterative multi-day fittings. Next, I proposed a method to recommend a damping coefficient for the prosthetic knee to achieve able-bodied peak knee flexion during swing phase. A range of recommended damping coefficients to achieve target peak knee flexion angle in transfemoral amputees was determined using a simple three-step framework. This framework incorporates effects from common transfemoral prosthetic gait deviations, such as slower self-selected walking speeds and delayed initiation of knee flexion during late stance. The calculated range of recommended damping coefficients was experimentally investigated and found to enable a peak knee flexion angle within two standard deviations of able-bodied peak knee flexion angle. Lastly, I created the Full Leg Optimization (FLO) framework to design the prosthetic foot and knee concurrently based on minimal inputs from the user and the prosthetist. The framework anticipates the lack of ESF and the delayed initiation of late stance knee flexion and uses the HTE framework to predict the orientation and location of the knee mechanism. Using this prediction, the rotational axes of the prosthetic knee can be positioned to start knee flexion at a point in late stance chosen by the prosthetist to provide sufficient stability to the user. A proof-of-concept study demonstrated the accuracy of the prediction for one user after minimal acclimation time, confirming the ability to predictively design prosthetic leg components in tandem. The FLO framework can therefore be used to predictively design a passive prosthetic leg for above-knee amputees while considering common gait deviations due to stability needs. 
This doctoral work demonstrates that the presented frameworks can be used to quantitatively design prosthetic feet and knees based on the needs of above-knee amputees, which could reduce fitting time and manufacturing cost and improve mobility.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video</title>
<link href="https://hdl.handle.net/1721.1/163443" rel="alternate"/>
<author>
<name>Chityat, Inbar</name>
</author>
<id>https://hdl.handle.net/1721.1/163443</id>
<updated>2025-10-30T03:24:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video
Chityat, Inbar
Preterm neonates are a vulnerable population for which traditional contact-based monitoring devices are not optimized, given their small size and complicated physiology. Adhesive sensors and wires can cause infections and discomfort and can impair the delivery of clinical care. Therefore, these most fragile patients could significantly benefit from remote health monitoring. This thesis establishes the foundation for a multimodal device designed for noncontact monitoring of neonates in the Neonatal Intensive Care Unit (NICU) that integrates a video camera and a radar. The device is used to estimate vital signs such as respiratory rate (RR), using both unimodal (solely video or radar) and multimodal fusion approaches that combine data from both sensors. Preliminary testing was conducted on neonatal simulator mannequins, followed by a clinical study at Tufts Medical Center NICU, which has collected data from 16 neonates so far (with the goal of reaching 20). The collected data was processed, labeled, and organized using image processing techniques and manual review, and then analyzed using a Video Vision Transformer (ViViT) architecture, incorporating early, intermediate, and late fusion strategies. Initial analysis was conducted on the mannequin data and the first neonatal subject. The results show that for estimating RR in neonates, the early fusion approach outperformed the unimodal methods. In movement detection, compared to human labeling, the fusion techniques achieved high accuracy and precision. To conclude, this study demonstrates that multimodal analysis has the potential to outperform unimodal approaches by improving accuracy against gold standard monitoring, particularly in challenging real-life conditions, including motion artifacts and poor lighting. This work represents a step toward more robust, non-invasive monitoring solutions for neonatal care, with implications for broader applications in remote health monitoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures</title>
<link href="https://hdl.handle.net/1721.1/163442" rel="alternate"/>
<author>
<name>Finlason, Katana R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163442</id>
<updated>2025-10-30T03:24:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures
Finlason, Katana R.
As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 occurring within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined by the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtle (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings, helping to rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build, and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials of the incubator were conducted in Jamaica with conservationist community partners at the Oracabessa Bay Sea Turtle Project. Results showed that the incubator is not only easy to manufacture and use but also successfully regulates the temperature range in favor of more male hatchlings, while increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes, the incubator quintupled this value.
Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products</title>
<link href="https://hdl.handle.net/1721.1/163441" rel="alternate"/>
<author>
<name>Edington, David J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163441</id>
<updated>2025-10-30T03:24:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products
Edington, David J.
In the electrification of heavy industry, rapidly swappable batteries provide an effective means to minimize vehicle downtime and the cost of operation. However, for this technology to take hold, electrical contacts that can both carry high amperage and withstand a high cycle life must be further developed. The development of these electrical contacts is a highly experimental process, so establishing a method and test equipment to determine the physical and electrical characteristics of these contacts over their lifetime will accelerate the development of these products. This body of work serves as a design guide for establishing a physical testing mechanism to assess contact resistance degradation and physical wear over the lifespan of an electric connector. Data will then be collected on initial contact prototypes to characterize their performance. With these data, designs may be iterated and improved upon in pursuit of a universal standard for battery-swap technology on electric vehicles in heavy industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Hierarchical Reflexive Control Framework for Autonomous Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163440" rel="alternate"/>
<author>
<name>SaLoutos, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163440</id>
<updated>2025-10-30T03:21:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of a Hierarchical Reflexive Control Framework for Autonomous Robotic Manipulation
SaLoutos, Andrew
Within the field of robotic manipulation, much research focus has been placed on improving perception and planning algorithms, assuming that the actions output by these high-level planners will be easily achieved by the robot systems. However, to surpass human manipulation performance, fast and robust execution of manipulation plans is just as critical as improved perception and planning methods. In this thesis, we introduce the last centimeter problem, which states that the most difficult part of grasp execution is when less than a centimeter remains between fingertips and an object, and contact is imminent. To solve this problem, we propose a reflexive control framework, which is a manipulation control architecture that decouples low-level, high-bandwidth behaviors, which we call reflexes, from broad high-level plans. The reflexes are fast, autonomous reactions to local sensing information that are designed to add robustness to high-level manipulation plans while also reducing the necessary complexity of manipulation planning problems. To deploy our reflexes, we design hardware platforms that incorporate high-bandwidth actuation and low-latency tactile sensing, allowing us to maximize the reactive capabilities of the overall manipulation system. We validate our approach through studies on teleoperated grasping and autonomous planar grasping, which show that our reflexive controllers increase manipulation speed and robustness. Then, we perform extensive simulation studies for autonomous grasping in SE(3), conducting experiments with single objects as well as cluttered scenes, using a variety of state-of-the-art grasp planners. Our results show greatly improved grasp robustness with our reflexive controllers, across all object types and grasp planners. Further experiments show that the benefits of our reflexes persist across sets of objects that are larger, heavier, and more slippery, and with increasing magnitudes of errors in the executed grasp poses. 
While this thesis demonstrates that the reflexive control framework is effective at increasing grasp robustness during picking, our framework is constructed in a way that is amenable to extension to other tasks, like in-hand manipulation or constrained object placement, as well as application to more complex grippers, such as those with three or more dexterous fingers and more diverse sensing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distinct roles for energy storage and transmission infrastructure in a renewables-based electric power system</title>
<link href="https://hdl.handle.net/1721.1/163439" rel="alternate"/>
<author>
<name>Kim, Beomjun</name>
</author>
<id>https://hdl.handle.net/1721.1/163439</id>
<updated>2025-10-30T03:24:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Distinct roles for energy storage and transmission infrastructure in a renewables-based electric power system
Kim, Beomjun
Due to the intermittency of renewable resources, achieving high coverage of renewable generation at low cost is one of the main hurdles to realizing zero-carbon electricity generation. In this study, we analyze the roles of energy storage systems (ESS) and transmission infrastructure in the cost-optimal deployment of a renewable electricity grid in the United States. We find that storage and transmission serve distinctly different functions: transmission is useful for addressing hours-long resource lows, but plays only a supplementary role in mitigating long-duration resource lows. Conversely, storage can handle both short-duration and long-duration resource lows. These different functions are driven in part by the large spatial footprints of the most extreme long-duration resource lows. Furthermore, the total cost of renewable energy in the system and the cost-determining technological components of the system depend on the penetration of renewables relative to total demand, known as the energy availability factor (EAF). When the EAF is sufficiently low, the cost of a cost-optimized system is driven solely by generation costs. For low to intermediate EAF, both generation and transmission costs are dominant factors. At high EAF, generation and storage costs become the dominant factors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wedged Vortex Generator Applications for Marine Vessels</title>
<link href="https://hdl.handle.net/1721.1/163438" rel="alternate"/>
<author>
<name>Kimmeth, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/163438</id>
<updated>2025-10-30T03:24:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wedged Vortex Generator Applications for Marine Vessels
Kimmeth, Jack
This thesis investigates the effectiveness of vortex generators (VGs) in reducing viscous drag in hydrodynamic applications. Initial experimental and computational fluid dynamics analyses identified wedge-shaped VGs as the optimal design for flow manipulation. Comparative testing of three wedge-shaped VG sizes at 1.3 m/s revealed the most effective configuration, which was subsequently evaluated across speeds ranging from 1.0 m/s to 1.6 m/s. The results showed a viscous drag reduction of 7.9% at 1.4 m/s. These findings were extrapolated to a full-scale bulk carrier using appropriate geometric and dynamic scaling factors. Total resistance was partitioned using Holtrop-Mennen approximations, allowing the drag reduction to be realistically applied to operational conditions on a trans-Pacific route. Material and installation cost estimates were also developed. Finally, implications for propulsion efficiency, flow-induced vibrations, and cavitation are discussed, with recommendations for future self-propelled model testing to further explore these effects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prosody in Kichwa</title>
<link href="https://hdl.handle.net/1721.1/163437" rel="alternate"/>
<author>
<name>Chango Masaquiza, Soledad</name>
</author>
<id>https://hdl.handle.net/1721.1/163437</id>
<updated>2025-10-30T03:24:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prosody in Kichwa
Chango Masaquiza, Soledad
This thesis investigates the prosodic system of Salasaka Kichwa, focusing on the interaction between pitch, morphosyntactic structure, and word order in both elicited and spontaneous speech. Based on data from ten native speakers of the Salasaka community, the study analyzes approximately 150 utterances using Praat and ToBI-style prosodic annotation. The findings reveal a consistent alignment between the nuclear pitch accent and the leftmost constituent of the verb phrase in neutral declarative sentences, supporting the hypothesis that Salasaka Kichwa exhibits a head-final syntactic structure. This default prosodic alignment is disrupted by the presence of focus-sensitive or interrogative morphemes such as -mi and -chu, which reliably attract the pitch peak regardless of their position in the clause. In ditransitive constructions, pitch prominence consistently targets the dative-marked argument. Accusative-marked objects also receive prominence, but only when modified; in such cases, it is typically the modifying adjective or contrastive element that bears the highest pitch. Overall, the study demonstrates that prosodic prominence in Salasaka Kichwa is not governed by syntactic structure alone. Instead, it emerges from a layered interaction between morphology, information structure, and pragmatic marking, offering new insights into how prosody encodes grammatical and communicative functions in underdescribed head-final languages.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model Predictive Control Approaches for Dynamic Table Tennis Swinging</title>
<link href="https://hdl.handle.net/1721.1/163436" rel="alternate"/>
<author>
<name>Nguyen, David H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163436</id>
<updated>2025-10-30T03:24:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Model Predictive Control Approaches for Dynamic Table Tennis Swinging
Nguyen, David H.
This thesis presents three model predictive control (MPC) formulations for robotic table tennis swinging, addressing the challenge of generating precise, real-time paddle trajectories for dynamic ball interactions. We explore key differences in optimization structure, solver strategy, and real-time implementation, evaluating each approach through hardware experiments that measure strike condition tracking and hit success. The final controller integrates the full task of a table tennis possession by planning the return ball trajectory through the contact dynamics and generating a swing to achieve it. This controller improves the hit rate of the system from 88.3% to 97.6% and significantly enhances strike condition accuracy and smoothness, enabling control over the landing location and spin of the ball.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Certification of Deep Learning-based Dynamical System Identification</title>
<link href="https://hdl.handle.net/1721.1/163435" rel="alternate"/>
<author>
<name>Zhang, Wang</name>
</author>
<id>https://hdl.handle.net/1721.1/163435</id>
<updated>2025-10-30T03:21:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On the Certification of Deep Learning-based Dynamical System Identification
Zhang, Wang
Dynamical system identification, the reconstruction of a system's governing equations from observations, has been studied for decades. With the recent emergence of deep learning techniques, neural network-based parameterization enriches this classical field by offering new capabilities in modeling complex systems. While promising advances have been made, these black-box models face significant challenges due to their limited interpretability and lack of physical guarantees, raising concerns about their applicability in scenarios where trustworthiness is critical.&#13;
&#13;
In this thesis, we develop a comprehensive framework to analyze, understand, and learn dynamical systems. We start with a contrastive learning method to capture system invariants (i.e., conserved quantities) from trajectory observations of dynamical systems. Building on these learned invariants or known priors, we introduce a projection layer for neural networks that guarantees the preservation of physics constraints in the learned dynamics models. This two-step approach significantly improves the trustworthiness and interpretability of traditional black-box models. We then extend this methodology to learn physically meaningful embeddings corresponding to inter-system characteristics, enabling zero-shot meta-learning capabilities for dynamical system models. Finally, we reduce the bias gap in classical neural network-based aleatoric uncertainty estimators. We identify overestimation issues in existing variance attenuation methods and propose a novel denoising-based approach that provides more accurate estimates of data uncertainty. This method not only applies to regression tasks but also extends to dynamical system observations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots</title>
<link href="https://hdl.handle.net/1721.1/163434" rel="alternate"/>
<author>
<name>Johnston, Julie E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163434</id>
<updated>2025-10-30T03:24:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots
Johnston, Julie E.
The UH-60, used for troop transport, MEDVAC, and mission control, has evolved over the last 45 years from the Alpha Model to the Lima and Mike models that are currently utilized. Previous studies investigated the impact of Whole-Body Vibrations (WBV) on aviators and the resulting musculoskeletal injury, but none have investigated the efficacy of the Mike model’s Active Vibration Control System (AVCS) on reducing the impact of helicopter vibrations on musculoskeletal health.&#13;
Computational analyses of a biomechanical model, using OpenSim and motion capture at varying levels of vibration, were conducted. These quantify the response of the spine and the surrounding muscles when vibratory loads are applied while positioned to manipulate the flight controls. A musculoskeletal model was developed to represent the aviator in the seated posture required to effectively manipulate the flight controls. To develop the model, the team recorded motion capture data from a pilot in a preliminary test for concept validation. These data were then processed and input into the OpenSim inverse kinematics tool to determine joint angles and the muscle-tendon lengths of several back muscles. Contrary to initial predictions, the muscles on the right side of the back were not consistently longer than those on the left side. &#13;
A survey was also developed that builds upon previous efforts, seeking to understand the aviator’s perspective on musculoskeletal injury and prevention, with a focus on the back. Aviators are asked to describe the cause of their injury, methods of injury prevention, and recovery techniques, encompassing several subpopulations of flight experience: Lima-majority, Mike-only, Mike-majority, and an even mixture of L/M. The data attempt to characterize the impact of the AVCS on aviator spine health. The AVCS should decrease the rate of injury by reducing the vibratory loads experienced by the aviator. This survey differs from previous questionnaires in that it focuses on the user’s perspective of differences between the two models and on the injury or pain felt by each service member.&#13;
While a trend of reduced injury occurrence was expected amongst Mike-only aviators versus those with Lima-majority flight hours, this was not the case. Injury prevalence was consistent across most populations, indicating the potential inefficacy of the AVCS. Analysis of open-ended responses, particularly from the hybrid group, provides some context for the perceived impacts of using the AVCS. Some population demographics were not represented in this survey due to the nature of the unit being surveyed, which may affect the validity of some results.&#13;
By quantifying the perceived efficacy of the AVCS as it relates to chronic musculoskeletal injury using a survey of pilot experience factors (flight hours, airframes, operating theatres, etc.), and by representing the maladaptive posture of the pilots with a computational simulation based on experimental pilot data, a full picture is developed of the risks to the near- and long-term health of US Army aviators. The aim is to expand the overall understanding of how vibration impacts the musculoskeletal health of aviators and of the profession's perceived effects on their lifelong health. The ultimate goal is to aid the design of additional countermeasures to improve aviator spine health and to serve as a platform for the optimization of systems like the AVCS.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Theories for Compact, Low-energy, Clog-resistant Drip Irrigation Emitters</title>
<link href="https://hdl.handle.net/1721.1/163433" rel="alternate"/>
<author>
<name>Ghodgaonkar, Aditya</name>
</author>
<id>https://hdl.handle.net/1721.1/163433</id>
<updated>2025-10-30T03:21:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design Theories for Compact, Low-energy, Clog-resistant Drip Irrigation Emitters
Ghodgaonkar, Aditya
This thesis presents the derivation, experimental validation, and demonstration of new design theories for compact, low-pressure, clog-resistant drip emitters that can make drip irrigation affordable, reliable, and easier for farmers to adopt. Broad adoption of water-efficient irrigation methods such as drip irrigation is imperative to sustainably meet projected global food demand against the backdrop of diminishing freshwater resources, constrained arable land, and climate change. In drip irrigation systems, emitters are passive flow-regulating devices that are inserted into the drip tube to align with every plant. They are designed to provide a constant flow rate once they are pressurized to at least their activation pressure, thus ensuring uniform, localized irrigation of plants. However, conventional emitters directly contribute to three barriers that have limited drip irrigation adoption – high raw material-driven equipment costs, high pumping power costs associated with pressurizing all emitters in the field to their activation pressure, and gradual loss of reliability due to clogging. Compact, low-pressure, clog-resistant emitters can address these challenges, but to design them, we must model and tune their operating physics, which is centered around two complex features – a millimeter-scale tortuous passage called the labyrinth, and fluid-structure interaction (FSI) involving a flexible silicone rubber diaphragm and a micro-duct. This makes conventional design approaches relying on high-fidelity simulation software or empirical trial-and-error too expensive and time-consuming to use for the development of compact, low-pressure, clog-resistant emitters on competitive industrial timelines. This thesis addresses these challenges through three contributions. &#13;
&#13;
The first contribution presents an empirically derived hydraulic model of emitter labyrinths, which are typically the most volume-intensive feature of emitters. The model relates labyrinth flow rate to select material volume agnostic parameters, allowing designers to create compact labyrinths with desired hydraulic performance. The compact labyrinths can enable up to 10% reduction in the raw material-driven cost of drip equipment. &#13;
&#13;
The second contribution presents a 1-dimensional model of the FSI in emitters that can predict their flow rate-pressure performance in 2-3 minutes and within 8-14% error, cutting down on design cycle times by orders of magnitude. This facilitated the rapid synthesis of low-pressure emitter designs having 50-60% less activation pressure than conventional emitters, cutting pumping power costs by an estimated 18-23%. &#13;
&#13;
Together, the first two contributions can enable an estimated 18% reduction in the lifetime costs of drip irrigation, but long-term adoption requires that the emitters be clog-resistant and compatible with the current maintenance practices of farmers. To that end, the third contribution presents an experimental investigation of clogging in low-pressure emitters. The results of the investigation directly correlated the geometry of emitter hydraulic features to the critical particle size that would clog them. As a result, compact, low-pressure emitters could be designed to be compatible with the same filters and maintenance practices as current state-of-the-art products that have higher activation pressures. This was confirmed by field testing the compact, low-pressure, clog-resistant (MIT) emitters alongside commercial reference designs with their prescribed filters for nearly 1200 hours. At the end of the field test, the MIT emitters still held 90-94% of their initial flow rate, putting them on par with or better than the reference products in terms of irrigation reliability. The collective contributions of this thesis present the knowledge needed to design emitters that can make drip irrigation more affordable to adopt by farmers and demonstrate that substantial capital and operating cost reductions can be realized without sacrificing product reliability or requiring expensive changes to current farmer maintenance practices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices</title>
<link href="https://hdl.handle.net/1721.1/163432" rel="alternate"/>
<author>
<name>Hoo, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/163432</id>
<updated>2025-10-30T03:24:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices
Hoo, Stephanie
Pop-up Satellite Archival Tags (PSATs) are a combination of satellite and archival tags used by marine biologists to collect large-scale movement and behavioral data on large pelagic life for up to two years [1]. However, current commercial PSATs have an unusually high failure rate when attached to tuna and cost upwards of $4000, making it both difficult and expensive to collect data [14]. Upon investigation, the top two failure modes of tuna-affixed PSATs have been identified as drag from movement/tissue healing and pressure cycling [14]. Current commercial PSAT manufacturers do not account for the vortices shed by fish when testing their designs, a large oversight that could account for their high failure rate [15]. The work herein determined the effects of vortex shedding on PSAT hydrodynamic behavior, used these results to inform the design of novel PSAT body shapes, and conducted a head-to-head comparison of these designs with existing commercial PSATs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats</title>
<link href="https://hdl.handle.net/1721.1/163431" rel="alternate"/>
<author>
<name>Buchanan, Maxwell Calvin</name>
</author>
<id>https://hdl.handle.net/1721.1/163431</id>
<updated>2025-10-30T03:24:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats
Buchanan, Maxwell Calvin
Marine corrosion presents a persistent threat to the reliable operation of U.S. Coast Guard Fast Response Cutters (FRCs). This thesis investigates hybrid cathodic protection strategies combining impressed current cathodic protection (ICCP) systems and sacrificial zinc anodes to combat corrosion on such vessels. Drawing on over 550 cumulative months of ICCP system data across 46 FRCs, this thesis identifies operational trends, failure modes, and unique regional behaviors. To validate observed patterns and explore failure scenarios, the study implements finite element modeling using COMSOL Multiphysics. These simulations replicate normal operation, reference electrode failure, propeller passivation, localized zinc loss, and hull coating failure for both a generic 35 m hull and the FRC hull. These models emphasize how system behavior responds to material variations, temperature, and system health, offering a diagnostic framework for optimizing ICCP configurations. Field and laboratory experiments further ground the computational findings. These include shipboard hull potential surveys and analysis of zinc anode wastage across multiple cutters. Controlled experiments on nickel aluminum bronze (NAB) passivation using miniaturized ICCP test systems are explored for further study. Initial results show variation in zinc consumption and corrosion behavior depending on ICCP setpoints, with higher protection levels (-1050 mV) often correlating with reduced zinc depletion. The thesis also explores energy diagnostics onboard FRCs via non-intrusive load monitoring (NILM). A case study on the USCGC WILLIAM CHADWICK describes monitoring auxiliary machinery loads through NILM signatures and suggests expansion to critical panels and DC systems. By integrating fleet data, physical experimentation, and simulation, this thesis advances future efforts in patrol boat corrosion monitoring, ICCP optimization, and resilient microgrid management.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design</title>
<link href="https://hdl.handle.net/1721.1/163430" rel="alternate"/>
<author>
<name>Burgess, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163430</id>
<updated>2025-10-30T03:24:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design
Burgess, Michael
In robotics, replicating the natural proficiency with which humans perform manipulation tasks has proven challenging. Modern control schemes are predominantly learning-based and thus depend heavily on data collected via teleoperated demonstrations. Humans rely on our tactile perception to perform contact-rich and dynamic manipulation tasks. By more seamlessly incorporating high-resolution tactile sensing and haptic feedback into teleoperation interfaces, we can work to create stronger demonstration data to support the development of more effective learned control policies. In this thesis, we present two contributions toward this goal. First, we develop an algorithm to estimate the compliance of grasped objects in real-time from tactile images to provide haptic feedback to remote users. This algorithm combines both analytical and learning-based approaches to better generalize across both object shapes and materials. Second, we create a 1-DoF robotic gripper design with integrated tactile sensing. Inspired by the principle of self-similarity, this gripper is designed to better conform to complex object geometries than traditional designs and more securely grasp objects of many shapes and sizes. Together, these contributions can be utilized to create robust, tactile-aware teleoperation platforms. These platforms would facilitate more effective data collection and thereby promote the development of more performative autonomous action in generalized robotic manipulation scenarios.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailoring Complexity of Model-Based Controllers for Legged Robots</title>
<link href="https://hdl.handle.net/1721.1/163429" rel="alternate"/>
<author>
<name>Khazoom, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/163429</id>
<updated>2025-10-30T03:21:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Tailoring Complexity of Model-Based Controllers for Legged Robots
Khazoom, Charles
Humanoid robots promise human-like mobility, but must manage complex and often conflicting control objectives. While model-based controllers can address these challenges using online optimization, they have high computational demands. Model predictive control (MPC) provides closed-loop stability with online trajectory optimization, but achieving real-time rates is difficult for high-dimensional systems. To mitigate this limitation, most MPC implementations rely on reduced-order models (ROMs) that simplify planning but fail to capture whole-body constraints like joint limits and self-collisions. Reactive whole-body controllers (WBCs) partially address this limitation by projecting ROM trajectories onto some whole-body constraints, but these are restricted to acceleration-level constraints like friction cones and torque limits. This thesis advances humanoid planning and control through a renewed focus on model fidelity, solution accuracy, and solve times, with three key contributions. First, we propose the CBF-WBC, which augments reactive WBCs with position constraints using control barrier functions (CBFs), enabling the MIT Humanoid to avoid self-collisions with minimal computational overhead. As a result, the robot can reactively deviate from infeasible trajectories produced by a reduced-order MPC. Despite fast solve times below 100 microseconds, conflicts can arise between the reduced-order MPC and the CBF-WBC. To address this, we enable real-time whole-body MPC using the alternating direction method of multipliers (ADMM) to provide low-accuracy solutions at high feedback rates. The controller is reliably deployed on hardware and enables the MIT Humanoid to walk robustly on rough terrain and plan complex crossed-leg and arm motions that enhance stability when recovering from significant disturbances. While low-accuracy solutions often suffice for real-time control, we found that higher accuracy could still improve closed-loop performance if computational speed allows.
Building on this insight, we propose a framework to simultaneously optimize solution accuracy and model complexity to maximize closed-loop performance. Instead of planning with a single model that is too complex or too simple, solve times can be reduced by planning over a sequence of models of decreasing complexity. We extract ROMs from whole-body dynamics equations and optimize their horizons, discretization timesteps, and solution accuracy using black-box optimization. The optimizer can sacrifice model complexity for additional ADMM iterations, reducing falls nine-fold and enabling a 2 m/s walking speed on hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits</title>
<link href="https://hdl.handle.net/1721.1/163428" rel="alternate"/>
<author>
<name>Turliuk, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163428</id>
<updated>2025-10-30T03:23:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits
Turliuk, Jennifer
What is the net impact of artificial intelligence on climate change? Existing studies focus on AI's footprint, but few analyze AI's trade-offs. This paper develops a framework to quantify both the Greenhouse Gas (GHG) emissions and the climate change costs and benefits of AI systems, addressing the time value of carbon and the installed base of existing AI infrastructure. We examine the energy demands of AI, which are growing rapidly and threatening companies' net-zero commitments, while also analyzing AI's potential to enable emissions reductions through applications such as optimized energy systems, demand response, grid management, and electrification acceleration. This research introduces the Net Climate Impact Score (NCIS) of AI, a novel equation to calculate the net climate impact of AI technologies that considers both immediate emissions and potential future benefits, and provides a methodology for assessing AI projects holistically. We demonstrate that while current AI applications are predominantly emissions-intensive, strategic deployment focused on energy system transformation could potentially deliver net climate benefits within specific time frames and applications. However, improvements in energy efficiency and emissions reductions resulting from AI are, absent climate policy, likely to generate both direct and indirect rebound effects that could undermine the emissions reductions and reduce the climate benefits of AI. The research concludes with policy and industry recommendations that propose technological pathways that could maximize AI's positive impact while minimizing its environmental footprint.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wide Range Switched Mode RF Power Amplifiers and&#13;
their applications</title>
<link href="https://hdl.handle.net/1721.1/163427" rel="alternate"/>
<author>
<name>Pressel, Adam Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/163427</id>
<updated>2025-10-30T03:24:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wide Range Switched Mode RF Power Amplifiers and&#13;
their applications
Pressel, Adam Jay
Switched-mode power amplifiers (SMPAs) are sought that can operate across a wide range of power levels and load impedances with fast response speed while maintaining high efficiency. Such designs would be valuable for many applications, including plasma generation and wireless power transfer. We introduce a new wide-range SMPA architecture that provides direct output voltage modulation, enabling it to modulate output power and compensate for resistive load variations. Dynamic frequency modulation is leveraged to address reactive load variations. The new architecture enables all the semiconductor switches to maintain zero-voltage switching across all operating conditions. Experimental results show that the wide-range half-bridge power amplifier delivered a wide power range of 25 W - 95 W across each individual resistive load in the range of 5 Ω - 20 Ω with up to j15 Ω reactance. The maximum dc-ac efficiency is 86% with a 20 Ω load and 110.5 W load power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tension-Leg Platform for Offshore Diffuser-Augmented Hydrokinetic Turbine</title>
<link href="https://hdl.handle.net/1721.1/163426" rel="alternate"/>
<author>
<name>Mannier, Robert B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163426</id>
<updated>2025-10-30T03:24:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Tension-Leg Platform for Offshore Diffuser-Augmented Hydrokinetic Turbine
Mannier, Robert B.
Harnessing marine energy offers significant potential for advancing clean and sustainable power generation. This thesis focuses on the design and optimization of a diffuser-augmented hydrokinetic turbine, supported by a tension-leg platform, to harness ocean and tidal currents for renewable energy production. By incorporating diffuser technology, the turbine’s efficiency is enhanced, increasing the coefficient of power and enabling effective energy capture even in environments with lower current speeds.&#13;
The research involves 2D and 2D axisymmetric modeling of the diffuser and turbine using Actuator Disk Theory (ADT), with tools such as Rhino and Star CCM+. Mounted on a floating tension-leg platform anchored to the seabed, the turbine is designed to exceed the Betz limit, maximizing power output and advancing offshore energy harvesting capabilities.&#13;
This thesis is solely focused on the design and optimization of the hydrokinetic turbine, providing an in-depth analysis of diffuser performance. The findings contribute to the development&#13;
of marine renewable energy technologies, promoting sustainable and efficient power generation from ocean and tidal currents.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimized Sustainable Hydrogen Generation from Liquid Metal Activated Aluminum-Water Reactions</title>
<link href="https://hdl.handle.net/1721.1/163425" rel="alternate"/>
<author>
<name>Kombargi, Aly</name>
</author>
<id>https://hdl.handle.net/1721.1/163425</id>
<updated>2025-10-30T03:21:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimized Sustainable Hydrogen Generation from Liquid Metal Activated Aluminum-Water Reactions
Kombargi, Aly
This study presents a sustainable and cost-effective method for hydrogen generation using aluminum waste, addressing both energy and environmental challenges. Activated aluminum reacts with water to produce hydrogen, heat, and aluminum oxyhydroxide (boehmite), a commercially valuable byproduct. As a safe, efficient, and cost-effective energy carrier with an energy density exceeding 20 kWh/L (8 kWh/kg), aluminum enables on-demand hydrogen production for diverse applications, including maritime transport and off-grid power systems. This research optimizes reaction kinetics to enhance hydrogen yield and rate while minimizing costs and carbon emissions.&#13;
&#13;
Activation involves coating aluminum with a gallium-indium eutectic (eGaIn) liquid metal, which disrupts the oxide layer and enables spontaneous reaction in aqueous environments. The study investigates seawater as an ionic medium for eGaIn eutectic agglomeration and reuse. However, chlorine binding slows the reaction, which was countered using high-temperature operation and catalytic enhancement. Adding 0.02 M imidazole accelerated the reaction 60-fold, enabled 92% eutectic recovery, and achieved 99% of the theoretical hydrogen yield.&#13;
&#13;
Environmental conditions significantly influence reaction efficiency. Increasing seawater temperature from 20°C to 90°C enhanced reaction rates 44-fold, aligning with Arrhenius Law. Isochoric reactions at high pressure were tested to simulate deep-sea vehicle environments using onboard hydrogen reactors fueled by aluminum and surrounding seawater. Results showed a 33% yield increase at 6 MPa (586 m depth) compared to atmospheric pressure, primarily due to surface tension effects that reduce hydrogen bubble size, improving aluminum-water contact at higher pressures.&#13;
&#13;
A life cycle and cost analysis identified an optimized production scenario with a carbon footprint of 1.45 kgCO2eq/kg H2, meeting green hydrogen standards. Major contributors include recycled aluminum use and processing, and the eGaIn alloy; but eutectic recovery and thermal energy reuse further reduce emissions. Using scrap aluminum and recovering byproducts, hydrogen production costs are estimated at $9.2/kg. Additionally, reselling boehmite (market price $2.5/kg) could generate revenue 5.6 times greater than input costs, significantly improving economic viability.&#13;
&#13;
To demonstrate scalability, a modular hydrogen reactor was developed and directly integrated with a commercial generator, reliably producing 400 W of power from on-demand, lab-tested hydrogen of 99% purity. The envisioned application is a fully integrated aluminum recycling system that utilizes aluminum waste and seawater to generate hydrogen, thermal energy, and boehmite. This approach advances clean energy technology by providing a scalable and economically viable hydrogen production pathway.&#13;
&#13;
Beyond its direct application in underwater technologies, this optimized reaction can support energy-intensive operations such as heating, desalination, transportation, industrial hydrogen production for refining and fertilizer synthesis, stationary energy systems for off-grid power, and renewable energy storage. Its versatility strengthens energy security and decarbonization efforts while offering a cost-competitive alternative to conventional fuels, positioning it as a key enabler of a sustainable energy future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Essays in Venture Capital and Corporate Finance</title>
<link href="https://hdl.handle.net/1721.1/163424" rel="alternate"/>
<author>
<name>Paine, Fiona</name>
</author>
<id>https://hdl.handle.net/1721.1/163424</id>
<updated>2025-10-30T03:21:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Essays in Venture Capital and Corporate Finance
Paine, Fiona
This thesis comprises three chapters. In the first chapter, I study the impact of restricting foreign venture capital investments for national security reasons. Countries have increasingly been using economic policies to further geopolitical and national security goals. Thus far, economists have focused on studying tariffs and subsidies despite a broader range of economic tools actually being implemented. How costly are these other policies and what are their effects on capital markets, investment, and the economy more broadly? In this paper, I examine a 2018 U.S. law (FIRRMA), which expanded the government’s ability to review and block transactions on national security grounds to include venture capital (VC) investments by foreign investors. I use the passage of FIRRMA, its differential impact on specific VC industries, and the role of Chinese investors in U.S. venture capital to study whether foreign investment screening impacts capital supply. I find that FIRRMA had a negative effect on capital supply in impacted industries due to two factors: 1) the specialization of VC investing (such that the substitution of outside capital into impacted industries is low) and 2) networks in VC investing (there are spillovers to domestic syndication partners in impacted industries). I further find that the change in capital supply is costly, leading to lower innovation by startups. I introduce a novel way of measuring innovation early in the life of a startup using text from startup websites. I use this measure to show there is a selection effect where VCs give first-round funding to less innovative startups after FIRRMA. Finally, in a case study of the biotechnology industry, I show that impacted startups suspend drug projects at higher rates, and in particular their risky projects. In the second chapter, joint with Johnathan Jensen, we study municipal cyber risk. Cyber attacks are estimated to cost billions of dollars per year.
However, cyber risk is hard to study since companies rarely disclose hacks and don’t share information on cyber security investment. This paper takes a novel approach by looking at municipal hacking. We use a dataset of municipal ransomware attacks merged with hand-collected IT investment data and municipal bond data. We find that lower IT investment predicts hacking. Furthermore, following a ransomware attack, municipal bond yields fall by 13 basis points and IT investment as a share of total town expenditure increases by 23 basis points. We investigate potential channels leading to decreased yields after hacking. We find evidence that being hacked reduces cyber risk by disciplining municipalities to move closer to the optimal level of IT spending. The third chapter investigates the impact of firm data collection and analysis of collected data on the riskiness of firm cash flows. I use a scraped data set of the third-party resources loaded on firms’ websites as a measure of firm data collection and analysis practices. I find that firm use of less effective web analytics is associated with an increase in the variance of sales, inventory, and both fixed and variable costs. This effect is despite a lack of change in the level of these variables. Looking at the effect of treatment on the treated, there is higher profit and sales variance during times of higher uncertainty. I use differences in web analytics technology and a change in their relative effectiveness as my identification strategy. As a case study of a large negative demand shock, I look at differences in firm reactions to COVID-19 based on their web analytics usage.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coevolution of Small Business Strategy and Regulation: A Mixed-Methods Study of United States Craft Breweries</title>
<link href="https://hdl.handle.net/1721.1/163423" rel="alternate"/>
<author>
<name>Rixey V, Eppa</name>
</author>
<id>https://hdl.handle.net/1721.1/163423</id>
<updated>2025-10-30T03:21:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Coevolution of Small Business Strategy and Regulation: A Mixed-Methods Study of United States Craft Breweries
Rixey V, Eppa
This dissertation asks: how do small firms overcome regulatory constraints despite powerful opposition? Significant research has documented the nonmarket strategies of large, multinational firms seeking to benefit from and capture regulatory systems. However, despite the historically important role of small and medium-sized enterprises (SMEs) in the economic and civic structures of the US, there is much we do not know about whether and how they attempt to exert their own influence in regulatory environments. To explore this, the US beer industry was selected as a strategic research site where SMEs have had a range of successes and failures in developing policy influence. In the late 1970s, the US beer industry rapidly consolidated to fewer than 100 breweries, but today, with the rise of small, craft breweries, there are over 9,000 breweries in the US. Over 7,000 of these focus on direct-to-consumer (DTC) sales, which were explicitly or practically illegal in all 50 states in 1980. How did this market and regulatory transformation take place, and why did some states significantly change their policies to support small brewers while others did not? Two studies were conducted to explore this: an in-depth qualitative study of a single state and a mixed-methods comparative study of six states. The single state was selected for variation in policy outcomes over time and at local levels. Through interviews and archival research, it was revealed that craft breweries engaged in a bottom-up approach, through which individual firms shift venues downward, from state to local regulators, to successfully ease state-level constraints. In local public hearings, individual entrepreneurs blended local corporate social responsibility (CSR) with an experimental approach to corporate political activity (CPA) that motivated city-based regulators to challenge state-level restrictions on DTC business models.
To understand how this process of developing policy influence unfolds in the absence of local regulators, the national trade associations in the beer industry were analyzed and six states where the state has near exclusive control over alcohol regulations were selected for further analysis. Controlling for a range of factors through a cross-sectional database led to a geographically proximate sample of six comparable states with wide variation in the favorability of policies and the number of breweries per capita. A unique dataset of over 5,000 legislative updates on proposed and enacted federal and state policy changes was supplemented with archival and interview data to assess policy influence. The conventional approach described in the literature, collective action via a trade association, was important but often insufficient. Each state had a functioning trade association representing most craft breweries, but sustained policy influence was observed only in states where full-time leaders of these associations understood the political landscape and developed policy partnerships to tilt the odds in their favor. Policy partnerships entailed legislation alleviating regulatory constraints while also including new provisions that ensured long-term alignment among the partners. Taken together, these studies reveal the vital importance of collective action extending beyond the focal industry for SMEs to develop policy influence at the local or state level.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation</title>
<link href="https://hdl.handle.net/1721.1/163422" rel="alternate"/>
<author>
<name>Trono Figueras, Renato</name>
</author>
<id>https://hdl.handle.net/1721.1/163422</id>
<updated>2025-10-30T03:23:48Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation
Trono Figueras, Renato
The reduction of sonic boom loudness to within acceptable limits is a crucial factor for the viability of supersonic aircraft. This thesis presents a computational framework for simulating sonic boom propagation using an output-based adaptive, higher-order finite element method. The research employs the Variational Multiscale with Discontinuous Subscales (VMSD) method, integrating Continuous Galerkin (CG) and Discontinuous Galerkin (DG) features, referred to as VMSD-BR2. This approach leverages static condensation to manage computational cost while utilizing DG stabilization techniques for enhanced stability and adjoint consistency. A key component of this work is the application of the dual weighted residual (DWR) method for output error estimation, which in turn drives the mesh optimization process. The method’s efficacy is validated using smooth solutions for the viscous Burgers equation and the adjoint PDE for a volume output functional. Additionally, artificial viscosity is incorporated via a shock sensor PDE approach to handle shock presence, with necessary corrections applied to the DWR error estimate. The VMSD-BR2 method is then applied to a real-world scenario solving the augmented Burgers equation, which models the propagation of sonic booms. The results include the pressure perturbation field, adapted meshes, ground-level B-SEL filtered pressure, and perceived loudness at ground level, demonstrating the method’s practical application.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>C. elegans as a Platform for Multimodal Neural Data Integration</title>
<link href="https://hdl.handle.net/1721.1/163421" rel="alternate"/>
<author>
<name>Simeon, Quilee</name>
</author>
<id>https://hdl.handle.net/1721.1/163421</id>
<updated>2025-10-30T03:24:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">C. elegans as a Platform for Multimodal Neural Data Integration
Simeon, Quilee
Systems neuroscience has traditionally been fragmented into investigations at discrete levels of organization, creating methodological and conceptual gaps that hinder unified understanding of neural function. This thesis examines the nematode Caenorhabditis elegans as a platform for integrating diverse neural data modalities, offering a pathway to bridge these gaps. The hermaphrodite C. elegans, with its completely mapped connectome, optical transparency, genetic tractability, and stereotyped nervous system of only 302 neurons, presents an opportunity for comprehensive measurements across multiple dimensions of neural function. The review is organized around three fundamental neural data modalities accessible in C. elegans: (1) molecular genetic profiles, (2) network connectivity, and (3) neural activity dynamics. Historically studied in isolation, these complementary data types are increasingly being bridged through technological and computational innovations. We examine experimental advances enabling whole-nervous-system measurements of these modalities, as well as data standardization efforts and computational frameworks for cross-modal integration. While understanding the relationship between neural activity and behavior remains a fundamental goal of systems neuroscience, this thesis focuses on neural data acquisition and integration rather than behavioral analysis, which has been extensively covered elsewhere. We conclude with some original proposals to overcome current limitations in multimodal data acquisition and synthesis, and suggest future directions toward a holistic understanding of how molecular components, network connectivity, and cellular physiology collectively give rise to neural function in C. elegans. These integrative approaches establish a roadmap that may eventually scale to more complex nervous systems and advance our understanding of neural computation across species.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World</title>
<link href="https://hdl.handle.net/1721.1/163420" rel="alternate"/>
<author>
<name>Sutcliffe, Douglas</name>
</author>
<id>https://hdl.handle.net/1721.1/163420</id>
<updated>2025-10-30T03:24:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World
Sutcliffe, Douglas
Fusion energy presents a promising solution for current global decarbonization goals. This thesis presents an adaptable model for evaluating mineral sufficiency in the global deployment of fusion power. Using the ARC Magnetic Confinement (MC) Deuterium-Tritium (D-T) fusion concept as a framework, this research integrates mineral usage estimates from the International Energy Agency (IEA) with MIT Energy Initiative’s (MITEI) energy production forecasts by generation technology. Using MITEI’s $2,800/kW cost scenario for fusion power generation, the model situates the demand for fusion-critical minerals within the broader context of growing mineral needs driven by the clean energy transition, and offers specific, quantitative insights into mineral sufficiency risks. The study finds that beryllium will face significant shortages solely due to fusion demand, with resource exhaustion projected to occur within 40 years. When accounting for additional demands from Electric Vehicles (EVs), battery storage, and transmission infrastructure, chromium and nickel are projected to exhaust economically extractable reserves within 21 to 35 years at current prices. The research further reveals that for nine of the thirty elements evaluated, over 50% of production is concentrated in a single country, and for half of the minerals China is the largest producer, introducing geopolitical risks. Notably, at just 13 kg per reactor, the demand for Rare Earth Elements (REEs) is not exposed to a significant risk, even without the top producing country. The research also surfaces current reactor designs and strategies which could help mitigate each identified risk.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 being within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined based on the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtles (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at the Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes the incubator quintupled this value.
Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.</title>
<link href="https://hdl.handle.net/1721.1/163419" rel="alternate"/>
<author>
<name>Espinal, Michael A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163419</id>
<updated>2025-10-30T03:23:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 being within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined based on the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtles (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at the Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes the incubator quintupled this value.
Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.
Espinal, Michael A.
Foams, widely used in packaging, insulation, protective gear, and medical implants, are versatile materials but mechanically inefficient due to their bending-dominated microstructure, leading to an exponential loss of stiffness and strength at low relative densities. Architected materials address this limitation through engineered microstructures that achieve near-linear scaling of properties with relative density. However, truss- and plate-based designs suffer from stress concentrations, while shell-based architectures, though more mechanically efficient, remain highly sensitive to defects and are challenging to fabricate at scale via additive manufacturing. Spinodal architected materials, derived from scalable spinodal decomposition processes, offer a promising alternative with aperiodic, double-curvature microstructures that enhance mechanical efficiency at low relative densities. Nevertheless, their behavior beyond the elastic regime remains largely unexplored. This thesis investigates the nonlinear mechanics of spinodal architected materials by combining a comprehensive experimental dataset with computational modeling. A total of 107 unique morphologies were fabricated and subjected to uniaxial compression along three principal directions, resulting in a dataset of 321 stress-strain curves. Morphologies were generated via simulated spinodal decomposition, allowing controlled variation of anisotropy. Explicit finite element simulations, validated against experimental data, revealed that plastic energy dissipation dominates the large-strain mechanical response. To quantitatively link local morphology to global mechanical behavior, we introduce the Normal Participation Factor (NPF) — a scalar geometric parameter that captures the alignment between surface normals and the loading direction. We demonstrate that the NPF is a material-agnostic proxy for equivalent plastic strain and is linearly correlated with the total energy dissipated during deformation. 
Combining insights from both experiments and simulations, we establish the NPF as a first-order predictive tool for mechanical behavior under large strains, enabling structure-property predictions without reliance on costly simulations or extensive experimental testing. Altogether, this work lays the foundation for developing finite-strain structure-property relationships in spinodal architected materials, advancing their potential for real-world applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Tendon-Driven Robotic Systems: From Climbing Robots to String Actuators</title>
<link href="https://hdl.handle.net/1721.1/163418" rel="alternate"/>
<author>
<name>Poon, Ryan Joseph Mar</name>
</author>
<id>https://hdl.handle.net/1721.1/163418</id>
<updated>2025-10-30T03:21:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Advancing Tendon-Driven Robotic Systems: From Climbing Robots to String Actuators
Poon, Ryan Joseph Mar
Tendon-driven mechanisms provide a range of benefits for robotic systems, particularly by allowing actuators to be mounted at the base of a manipulator and reducing its inertia. This thesis explores two projects that exploit and advance tendon-driven mechanisms: a wheeled-grasping hybrid climbing robot with modular tendon-driven grasping arms and a hybrid twisted-winching string actuator. Called CLIMR (Cabled Limb Interlocking Modular Robot), the novel climbing robot adapts to columns of varying diameters by adding or removing modular arm links. CLIMR also features capabilities like self-locking (the ability of the robot to stay on the column without power), autonomous grasping, and rotation around the column axis. Mathematical models describe conditions for self-locking, vertical wheeled climbing, and complete grasping of a column. Simulations and experimental results validate the proposed models. The insights from CLIMR are then extended into general design strategies for future developments of similar hybrid climbing robots, focusing on methods to inform design decisions and assess metrics such as adaptability. Ultimately, this work provides a comprehensive framework for designing hybrid climbing robots, highlighting the potential of autonomous solutions for environments where climbing tall structures is critical. Stemming from this climbing robot work is a novel actuator system combining a twisted string actuator (TSA) with a winch mechanism. Relative to traditional hydraulic and pneumatic systems, TSAs are compact but face limitations in stroke length and velocity. This TSA-winch system overcomes these constraints without risking overtwisting by providing both high displacement winching and high force twisting modes. The design features a rotating turret that houses a winch and a worm gear transmission driven by a through-hole drive shaft. Models are developed for the combined displacement and velocity control of this system. 
Experiments validate the open loop model as well as the closed loop model, which uses a conductive string feedback controller with a gain scheduling and control effort allocation scheme. For specific cases that require large displacement winching followed by high force twisting over several repeatable cycles, an alternate design sacrifices complete string state control and replaces a motor with passive automatic clutches to achieve a seamless transition between modes triggered by the string load. The models of the clutch torque thresholds for this version of the actuator are verified by experiments. Overall, this research contributes to the development of more versatile and efficient actuation systems for tendon-driven robotic applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coordination of distributed energy resources for a reliable,&#13;
resilient, and affordable decarbonized grid</title>
<link href="https://hdl.handle.net/1721.1/163417" rel="alternate"/>
<author>
<name>Jagadeesan Nair, Vineet</name>
</author>
<id>https://hdl.handle.net/1721.1/163417</id>
<updated>2025-10-30T03:20:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Coordination of distributed energy resources for a reliable,&#13;
resilient, and affordable decarbonized grid
Jagadeesan Nair, Vineet
Rapid decarbonization of the power grid is essential to meet climate goals by reducing emissions and enabling sustainable electrification of sectors like transport and heating. This requires shifting from centralized fossil-fuel generation to variable renewables like wind and solar. The grid must also adapt to a growing number of small-scale, distributed energy resources (DERs) at the edge, such as rooftop solar, batteries, electric vehicles, and heat pumps. This thesis focuses on modeling, optimizing, and coordinating DERs to enable a flexible, resilient, and affordable grid. First, it proposes a novel hierarchical local electricity market for low and medium-voltage distribution grids. This structure enables DER participation through decentralized and distributed optimization, respecting grid physics while preserving privacy and scalability. The market is applicable to both balanced and unbalanced radial grids using two different convex relaxations and power flow models. Grid services are also priced based on duality theory. Numerical simulations show improved dispatch efficiency, reliability, voltage regulation, and lower retail electricity rates. Second, the thesis applies game theory and mechanism design to extract flexibility from autonomous, strategic DER owners. A repeated Stackelberg game with incomplete information and intertemporal constraints yields equilibrium pricing with closed-form solutions. Third, a distributed decision-making framework is developed to coordinate DERs for grid resilience. It mitigates cyber-physical attacks and outages, ranging from 5 to 40% of peak load, using local flexibility and grid reconfiguration, extensively validated through both software and hardware-in-the-loop simulations. Finally, the thesis addresses DER hosting capacity. 
New algorithms are developed that co-optimize the siting and sizing of diverse DERs under uncertainty using Monte Carlo sampling, stochastic programming, and k-means clustering for scenario reduction. Results show that intelligent DER coordination can defer grid infrastructure upgrades and support greater renewable integration and electrified demand growth. Together, these contributions provide analytical and simulation tools to improve the planning and real-time operation of future distributed, low-carbon power grids.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Development and Utilization of Tandem Fluency in Human-Exoskeleton Interaction</title>
<link href="https://hdl.handle.net/1721.1/163416" rel="alternate"/>
<author>
<name>Koo, Bon H. (Brandon)</name>
</author>
<id>https://hdl.handle.net/1721.1/163416</id>
<updated>2025-10-30T03:20:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Development and Utilization of Tandem Fluency in Human-Exoskeleton Interaction
Koo, Bon H. (Brandon)
There is strong demand for portable technologies that enhance human power output while maintaining safety and range, not only in defense and industry but also in aerospace. Exoskeletons and other wearable powered devices have been proposed as solutions, but a major barrier to adoption is the issue of “fluency”: a combination of metrics representing the seamlessness of human-robot interaction. Most current exoskeleton systems, especially for non-cyclic motions, disrupt user intent and movement, often offering no benefit, or even causing harm by increasing discomfort and injury risk. This lack of fluency is frequently linked to poor intent recognition and absence of predictive control. To address this, we propose developing a human motion prediction system and studying its impact on fluency in exoskeleton-like devices and related human-centered technologies in real-world applications. We introduce an expanded metric “tandem fluency” based on conventional fluency, tailored for evaluating human-robot interaction (HRI) systems where human and robot agents are kinematically synchronized to perform functional tasks. We then develop a proof-of-concept and a functional deep neural network (DNN) capable of detecting human motion intent and predicting motion trajectories in advance using biosignals such as surface electromyography (sEMG). In parallel, we build and test prototype exoskeleton hardware with both single and multiple degrees of freedom. Finally, we conduct human trials with the full closed-loop tandem human-exoskeleton system to evaluate the impact of motion prediction-based control on tandem fluency. 
The results show that classification- and regression-based prediction of human motion prior to the initiation of physical movement is possible, with performance sufficient for practical application of this information. The prediction can be generated not only before physical motion begins, but often even before the full electrical activation of the primary agonist in many motions. The DNN is robust to variations in sensor hardware and input formatting, and the use of this prediction in the controls of a tandem robot system has the potential to improve tandem fluency by positively affecting both subjective experience and objective/metabolic results.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Informality: An Approach to Classifying Sidewalk Informal&#13;
Practices and Elements Through Street View Imagery</title>
<link href="https://hdl.handle.net/1721.1/163415" rel="alternate"/>
<author>
<name>Co, Dominic Lim</name>
</author>
<id>https://hdl.handle.net/1721.1/163415</id>
<updated>2025-10-30T03:23:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mapping Informality: An Approach to Classifying Sidewalk Informal&#13;
Practices and Elements Through Street View Imagery
Co, Dominic Lim
By 2050, the United Nations estimates that 68 percent of the world’s population will live in cities, with 90 percent of that growth concentrated in rapidly urbanizing informal communities across Africa, Latin America, and Asia. In these contexts, informality, defined as unregulated commerce, adaptive reuse of space, incremental construction, and self-organized infrastructure, shapes the everyday choreography Jane Jacobs called the “sidewalk ballet.” Yet because governments rarely collect census-grade data on such activity, informality remains poorly documented and weakly understood. This thesis introduces a transferable computational framework to formalize informality by transforming street imagery into an auditable taxonomy of informal street-level elements, activities, and practices. The framework is tested in two contrasting districts, i.e. District 1 and District 5 of Ho Chi Minh City, where sidewalks are highly contested by vendors, pedestrians, and regulators. The contribution of this thesis is two-fold. First, this thesis contributes a three-stage pipeline for classifying sidewalk informality. Using Seesaw (Moll et al., 2022), a CLIP-based feedback loop retrieves and soft-labels candidate scenes. This is followed by manual verification and fine-tuning a lightweight ResNet on binary categories (e.g. stationary vs mobile vendors, etc.). Compared to the zero-shot model Qwen-VL-Max, the fine-tuned ResNet delivered more balanced performance (precision/recall: 0.62– 0.78) and better handled nuanced, context-sensitive distinctions. In contrast, Qwen-VL-Max favored recall and object salience but struggled with subtle or spatial cues like mobile vs. stationary setups. Second, this thesis also developed a taxonomy and annotated dataset of informality which was used to reveal spatial inequities in sidewalk use. 
By converting curbside complexity into structured, updateable categories, the framework enables planners to recognize the adaptive value of informal practices, target genuine hazards, and design interventions for more equitable urban planning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/163414" rel="alternate"/>
<author>
<name>Dickerman, Matthew F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163414</id>
<updated>2025-10-30T03:23:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization
Dickerman, Matthew F.
The maritime shipping industry, responsible for approximately 3% of global greenhouse gas emissions, faces growing pressure to achieve net-zero emissions by 2050 under the International Maritime Organization (IMO) framework. Alternative fuels such as liquefied natural gas, ammonia, and methanol present challenges related to energy density, infrastructure, safety, and cost. Nuclear microreactors offer high energy density, zero operational emissions, and multi-year endurance, but require coordinated regulatory development and stakeholder engagement for commercial adoption.&#13;
&#13;
This thesis evaluates the feasibility of integrating microreactors into container ship designs employing electric propulsion and standardized intermodal logistics. Holos-Quad microreactors are selected based on their modular architecture, transportability, and compatibility with marine operations. Detailed ship concepts are developed for Feeder, Panamax, and New-Panamax classes, accompanied by a phased fleet development strategy.&#13;
&#13;
Economic modeling compares the lifecycle costs of conventional and microreactor-powered ships, incorporating capital expenditures, operating costs, financing assumptions, and carbon pricing. Fleet-level analysis indicates that microreactor-powered ships can achieve comparable or improved profitability while eliminating nearly 44 million metric tons of CO2e emissions across a ten-ship fleet. Sensitivity analyses confirm the robustness of these results across a wide range of future scenarios.&#13;
&#13;
By integrating stakeholder analysis, technical feasibility assessments, and economic modeling, this research establishes a commercially viable framework for zero-emission nuclear-powered shipping, offering a scalable pathway toward sustainable maritime operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A magnetic levitation testbed for development of real-time control frameworks applied in fusion</title>
<link href="https://hdl.handle.net/1721.1/163413" rel="alternate"/>
<author>
<name>Lee, Yehoon</name>
</author>
<id>https://hdl.handle.net/1721.1/163413</id>
<updated>2025-10-30T03:23:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A magnetic levitation testbed for development of real-time control frameworks applied in fusion
Lee, Yehoon
This thesis presents the development of a magnetic levitation device as a hardware-in-the-loop platform to be used for research in Control and Data Acquisition frameworks applied to fusion experiments. Specifically, the testbed is intended to demonstrate distributed, modular control using a plasma control system framework being developed at the Plasma Science and Fusion Center at MIT. This framework integrates a real-time control framework, MARTe2, and a data management framework, MDSplus, to provide platform flexibility and robust data management for rapid prototyping of control systems. Both frameworks are widely used individually in fusion experiments worldwide. The magnetic levitation setup is centered around a single electromagnet coil which levitates a permanent disk magnet from above. Implemented with the integrated MARTe2/MDSplus framework, the controller, actuator, and sensors are distributed over the network. With the magnetic levitation testbed, this thesis achieves three objectives: 1. formulation of a physics-based model of the system, 2. development of a controller in a modular, networked framework, and 3. training and implementation of learning-based methods within the framework. First, a state-space model for single-axis magnetic levitation is formulated based on theory and refined with magnetic field measurements. A feedback controller is then developed and implemented with MATLAB Simulink. Afterwards, a vision-based observer is developed to estimate position and tilt of the levitated magnet. Pose-image datasets are auto-labeled using fiducial markers and are used to train a convolutional neural network. Finally, the trained network will be applied in system identification of the final controlled system. 
Through the process of system development, this thesis proposes that the integrated MARTe2/MDSplus framework is robust in performing real-time control of a networked system, and its structural modularity is advantageous for developing and testing learning-based models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fast Assay of Bacteria Cell Permeability for Genetic&#13;
Transformation</title>
<link href="https://hdl.handle.net/1721.1/163412" rel="alternate"/>
<author>
<name>Nieves, Charmaine</name>
</author>
<id>https://hdl.handle.net/1721.1/163412</id>
<updated>2025-10-30T03:23:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Fast Assay of Bacteria Cell Permeability for Genetic&#13;
Transformation
Nieves, Charmaine
Bacterial cell genetic engineering is fundamental for research aiming to learn more about bacterial species for a broad range of applications. One method of intracellular delivery of foreign DNA during the genetic engineering process is the use of electroporation to create pores along the bacterial cell membrane. Current methods for assessing pore formation do not directly measure cell permeabilization or enable same-day assessment. In this thesis, a novel fast-screening protocol combining SYTOX green, microfluidics, and fluorescence imaging is evaluated for its capability to assess multiple conditions for cell permeabilization within a single day. By imaging bulk suspensions of post-electroporated cells stained with intracellularly delivered SYTOX, multiple electroporation conditions can be rapidly screened for cell permeabilization. This fast-screening protocol utilizes standard microbiology equipment and low-cost microfluidic imaging chambers, lowering the barrier to adoption and significantly reducing experimental time compared to conventional protocols involving foreign DNA delivery. Importantly, by decoupling permeabilization assessment from foreign DNA uptake, this method isolates the effect of membrane permeabilization from confounding factors such as restriction-modification systems. As a result, it provides a more accurate qualitative and quantitative assessment of bacterial membrane disruption. This approach enables same-day evaluation of electroporation conditions regardless of bacterial growth rate, potentially accelerating the optimization process for intracellular delivery in gene editing applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes</title>
<link href="https://hdl.handle.net/1721.1/163411" rel="alternate"/>
<author>
<name>Chong, Jinger S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163411</id>
<updated>2025-10-30T03:23:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes
Chong, Jinger S.
Accurate human motion prediction with uncertainty estimation is essential for safe and efficient human-robot collaboration, where robots must anticipate and react to human movements in real-time. Existing methods either rely on sophisticated techniques that demand extensive training data and sacrifice interpretability, or use simpler approaches like conventional Gaussian Processes (GPs) that fall short in performance. To address this gap, we propose a novel structured multitask variational GP framework that explicitly incorporates joint dependencies to reflect human kinematics. We further enhance this framework by integrating angular velocity constraints, which improve the physical plausibility of predictions. The addition of constraints alone yields up to a 66% reduction in mean angle error (MAE) and an 84% improvement in the likelihood of predicting ground truth (NLL), outperforming standard GP baselines across a wide range of motion types and prediction horizons. Among model variants, our structured GP with constraints offers the best tradeoff—achieving MAE within 1.1–2.6% and NLL within 0.001–0.012 of the best-performing model, while maintaining significantly lower overconfidence rates (OCR), particularly at short horizons where the independent GP model OCR reaches nearly 45%. These results underscore the importance of incorporating structure and context in human motion prediction, demonstrating that even simpler probabilistic models like GPs can achieve substantial performance gains when augmented with such information.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics</title>
<link href="https://hdl.handle.net/1721.1/163410" rel="alternate"/>
<author>
<name>Roy, Ronak</name>
</author>
<id>https://hdl.handle.net/1721.1/163410</id>
<updated>2025-10-30T03:23:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics
Roy, Ronak
The high-level control algorithms that are responsible for achieving dynamic locomotion in legged robots depend on accurate torque production for matching real-life performance with simulated performance. To achieve accurate torque production, actuators must run high-bandwidth, low-level torque control. Developing high-performance low-level controllers requires accurate actuator models. This thesis covers the physical model of the Permanent Magnet Synchronous Motor (PMSM), a very common type of actuator in dynamic robotics. This thesis details the derivation of the PMSM linear model, how to adapt the model dependent on the physical construction of a real motor, and the implementation of Field-Oriented Control (FOC) to achieve torque control. This thesis also describes a novel design of a high-precision dynamometer, which allows a motor to be coupled with an impedance and a torque sensor in order to accurately characterize the torque production characteristics of the motor. Using this dynamometer and other experimental setups, this thesis validates the model and determines parameters for multiple different actuators. Finally, this thesis proposes an augmented PMSM model that considers the nonlinear saturation behavior of the motor, validating the principle with hardware experiments, and demonstrates a nonlinear torque model and gain-scheduled current controller that improve torque tracking performance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels</title>
<link href="https://hdl.handle.net/1721.1/163409" rel="alternate"/>
<author>
<name>Ilkbahar, Kayra B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163409</id>
<updated>2025-10-30T03:23:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels
Ilkbahar, Kayra B.
Omnidirectional wheels (omni wheels) are a type of wheel technology similar to caster wheels but capable of simultaneous longitudinal and lateral motion, making them suitable for holonomic motion applications. In recent years, their popularity has grown substantially in areas such as educational robotics, autonomous vehicles, and industrial automation. Despite their similarity to caster wheels in both function and application, omni wheels are a much less mature technology and few agreed-upon standards exist for their design and testing. This thesis covers the design of a test procedure and its requisite test apparatus to characterize the rolling resistance of omni wheels across various test conditions, and focuses specifically on the mechanical and electrical design of an apparatus which can measure the rolling resistance coefficient of omni wheels while modulating their load weight, travel angle, and travel speed.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Magnetotelluric Study of Mantle Heterogeneities Beneath the Northeastern United States</title>
<link href="https://hdl.handle.net/1721.1/163408" rel="alternate"/>
<author>
<name>Kim, Jae Deok</name>
</author>
<author>
<name>Evans, Rob. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163408</id>
<updated>2026-03-08T03:29:06Z</updated>
<published>2025-10-25T00:00:00Z</published>
<summary type="text">A Magnetotelluric Study of Mantle Heterogeneities Beneath the Northeastern United States
Kim, Jae Deok; Evans, Rob. L.
Analysis of magnetotelluric (MT) data across the northern Appalachian region reveals significant mantle heterogeneity. By inverting a subset of long-period EarthScope USArray MT data, we constructed a three-dimensional electrical resistivity model that provides new insights into the seismic low-velocity Northern Appalachian Anomaly (NAA). Comparison with empirical conductivity models indicates that the low-resistivity anomalies along the northern and western edges of the NAA cannot be explained by temperature alone and likely require the presence of volatiles, such as CO2-rich or hydrous melts, or other volatile-bearing phases, to reduce mantle resistivity to the observed levels. In addition, our modeling suggests that certain alternative lithologies, particularly hydrous clinopyroxenites, may also contribute to the observed conductivity, implying that compositional heterogeneity plays a role alongside fluids or melt. These conductive features may reflect partial melting or metasomatic enrichment of carbonated and hydrated mantle domains introduced during past subduction or plume interactions, potentially mobilized by edge-driven convection at lithospheric boundaries. We also resolve a deep resistive feature in western New England, interpreted as a dry and depleted lithospheric block, though its nature remains uncertain due to limited seismic expression and the relatively low sensitivity of MT to resistive structures. Our results suggest that the upper mantle beneath New England is both compositionally and thermally heterogeneous, shaped by a complex tectonic history involving subduction, metasomatism, lithospheric thinning, and ongoing asthenospheric processes.
</summary>
<dc:date>2025-10-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Reason” En Masse</title>
<link href="https://hdl.handle.net/1721.1/163407" rel="alternate"/>
<author>
<name>Watkins, Eliot</name>
</author>
<id>https://hdl.handle.net/1721.1/163407</id>
<updated>2026-03-08T03:29:05Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">“Reason” En Masse
Watkins, Eliot
We can use “reason,” with its normative sense, as both a count noun (“there is a reason for her to Φ”) and a mass noun (“there is plenty of reason for her to Φ”). How are the count and mass senses of “reason” related? Daniel Fogal argues that the mass sense is fundamental: Just as lights are merely those things that give light and anxieties are merely those things that give anxiety, reasons are merely those things that give reason. In this article, I develop an opposing analysis of the mass noun “reason” that puts reasons first. Just as the detail on the Mona Lisa is composed of particular details (brushstrokes and colors) and the crime in L.A. is composed of particular crimes (pickpocketings and speeding offenses), so the reason for you to go to the dentist is composed of your reasons to go. Reasons stand to reason as parts to a whole. Such a picture makes reasons fundamental once more, but it has a cost of entry. In order to accommodate the behavior of “reason” in comparative constructions, you need to abandon the idea that reasons are facts we can count up. On the contrary: They're not facts, and you can't count them.
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battle in the Clouds</title>
<link href="https://hdl.handle.net/1721.1/163406" rel="alternate"/>
<author>
<name>Moran‐Thomas, Amy</name>
</author>
<id>https://hdl.handle.net/1721.1/163406</id>
<updated>2026-03-08T03:29:03Z</updated>
<published>2025-07-18T00:00:00Z</published>
<summary type="text">Battle in the Clouds
Moran‐Thomas, Amy
This narrative experiment brings together scenes from my family histories in western Pennsylvania coal country, alongside ongoing visits to learn about rising health issues in the region today. Increasing numbers of residents express concerns about chronic problems such as young cancers, and many people worry about potential exposures coming from past and present energy infrastructures. These growing health concerns, some of them my own, also brought me to revisit Rachel Carson’s medical writings from her family home in western Pennsylvania. Looking out from her childhood bedroom with my mother and returning to Carson’s archival notes on “transmissible cancers” and her childhood essay, “A Battle in the Clouds,” these descriptions circle long-accumulating debates about chronic diseases and their causes and effects over time. Returning to varieties of changing clouds today, this essay reflects on how chronic exposures—unevenly accumulating in bodies and landscapes and across generations—show “undone sciences” of many kinds in need of collective attention. It traces how families are grappling with the sense of needing to connect their own dots; the ways local communities are coming together to process displaced responsibilities; and the implications for health, public trust, and care when so much is left in clouds.
</summary>
<dc:date>2025-07-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leaf Stripping on Uniform Attachment Trees</title>
<link href="https://hdl.handle.net/1721.1/163405" rel="alternate"/>
<author>
<name>Addario‐Berry, Louigi</name>
</author>
<author>
<name>Brandenberger, Anna</name>
</author>
<author>
<name>Briend, Simon</name>
</author>
<author>
<name>Broutin, Nicolas</name>
</author>
<author>
<name>Lugosi, Gábor</name>
</author>
<id>https://hdl.handle.net/1721.1/163405</id>
<updated>2026-03-08T03:29:05Z</updated>
<published>2025-08-04T00:00:00Z</published>
<summary type="text">Leaf Stripping on Uniform Attachment Trees
Addario‐Berry, Louigi; Brandenberger, Anna; Briend, Simon; Broutin, Nicolas; Lugosi, Gábor
In this note, we analyze the performance of a simple root-finding algorithm in uniform attachment trees. The leaf-stripping algorithm recursively removes all leaves of the tree for a carefully chosen number of rounds. We show that, with probability 1 − &#120576;, the set of remaining vertices contains the root and has a size only depending on &#120576; but not on the size of the tree.
</summary>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unnatural Wills: Inheritance Disputes and Inequality</title>
<link href="https://hdl.handle.net/1721.1/163404" rel="alternate"/>
<author>
<name>O'Brien, Shay</name>
</author>
<id>https://hdl.handle.net/1721.1/163404</id>
<updated>2026-03-08T03:28:58Z</updated>
<published>2025-07-23T00:00:00Z</published>
<summary type="text">Unnatural Wills: Inheritance Disputes and Inequality
O'Brien, Shay
Within the conceptual frame of relational economic sociology, inheritance disputes are a canonical form of relational mismatch. But the social patterning of relational mismatches, and their various ties to inequality, remain murky. In this paper, I examine all known inheritance disputes in Dallas from 1895–1945 within their social context to generate hypotheses about the relationship between inequality and mismatches more broadly. Inheritance disputes were usually resolved by increasing the spread of fortunes; in this sense, they moderated wealth inequality between individuals. But not everyone was equally able to make their preferred estate distribution a reality. Using a series of case studies, I argue that dispute resolutions tended to reify normative family structures and naturalize sharp, moralized distinctions between fuzzy social categories. The legal resolutions to this class of relational mismatches may marginally mitigate individual-level wealth inequality and simultaneously produce categorical inequalities by race, class, gender, sexuality, and family structure. I conclude with a set of hypotheses and questions for future studies.
</summary>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralization, Blockchain, Artificial Intelligence (AI): Challenges and Opportunities</title>
<link href="https://hdl.handle.net/1721.1/163403" rel="alternate"/>
<author>
<name>Hui, Xiang</name>
</author>
<author>
<name>Tucker, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/163403</id>
<updated>2026-03-08T03:29:07Z</updated>
<published>2025-07-22T00:00:00Z</published>
<summary type="text">Decentralization, Blockchain, Artificial Intelligence (AI): Challenges and Opportunities
Hui, Xiang; Tucker, Catherine
New technologies like blockchain allow firms to decentralize core functions, forcing managers to reconsider the trade-off between closed, proprietary control and open strategies that involve external contributors. While proponents often advocate for full decentralization, we argue this view overlooks important economic trade-offs. We propose that the better strategy is selective decentralization: a disciplined approach to choosing where to centralize for efficiency and where to decentralize for innovation. We propose a three-level framework—Infrastructure, Decision-Making, and Operational Control—to guide this choice, helping managers analyze the specific costs and benefits at each layer. We apply this framework to the strategic adoption of Artificial Intelligence (AI), where the technology's powerful pull toward centralization provides a stark test case. Our analysis shows that an “open source AI” strategy—decentralizing operations to foster innovation while keeping infrastructure centralized for efficiency—is more pragmatic than full decentralization. Selective decentralization therefore emerges as a key managerial capability for capturing blockchain's benefits without sacrificing scale efficiencies.
</summary>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the response time of unpumped oxygen optodes for profiling applications</title>
<link href="https://hdl.handle.net/1721.1/163402" rel="alternate"/>
<author>
<name>Park, Ellen</name>
</author>
<author>
<name>Nicholson, David</name>
</author>
<author>
<name>Dever, Mathieu</name>
</author>
<author>
<name>Atamanchuk, Dariia</name>
</author>
<author>
<name>Richards, Clark</name>
</author>
<id>https://hdl.handle.net/1721.1/163402</id>
<updated>2026-03-08T03:29:01Z</updated>
<published>2025-07-26T00:00:00Z</published>
<summary type="text">Characterizing the response time of unpumped oxygen optodes for profiling applications
Park, Ellen; Nicholson, David; Dever, Mathieu; Atamanchuk, Dariia; Richards, Clark
The response times of the Aanderaa 4330, Aanderaa 4330 WTW, RBRcoda T.ODO|slow, and PyroScience PICO-O2-SUB were evaluated in the laboratory over a range of profiling speeds at two temperatures. The PyroScience PICO-O2-SUB had the fastest response time (1–4 s), followed by the RBRcoda T.ODO|slow (~ 15–35 s), Aanderaa 4330 (~ 30–60 s), and Aanderaa 4330 WTW (~ 50–100 s). This study provides recommendations on improving the quality of oxygen data from optodes in profiling applications by additionally assessing the impact of response time testing setups, thermal inertia effects, and foil types on sensor response times. It also provides a new response time function based on physical principles to predict response time for these four optode types.
</summary>
<dc:date>2025-07-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tapping ressentiment: pharmakeus and the sublime poisons of white supremacy</title>
<link href="https://hdl.handle.net/1721.1/163401" rel="alternate"/>
<author>
<name>Ruffin, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/163401</id>
<updated>2026-03-08T03:28:33Z</updated>
<published>2025-04-14T00:00:00Z</published>
<summary type="text">Tapping ressentiment: pharmakeus and the sublime poisons of white supremacy
Ruffin, Jessica
This auto-philosophical essay takes up Nietzsche’s concept of ressentiment; the archival record of Mark and Phillis; and Derrida’s engagement with pharmakon as a means of working through the question of what is to be done with the poisons of white supremacy, which persist in present worldly environments as well as our bodies and histories. Engaging aesthetics, Black thought, and phenomenology of race, the work aims for an embodied therapeutic movement that might open the way for ethical receptivity within the white supremacist world. Eschewing a universalizing tone while recognizing the ahistoricities of white supremacist cultural techniques, the essay enlists autobiography and practices of the self to give voice to the reservoirs of white supremacist poison permeating a worldly body.
</summary>
<dc:date>2025-04-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>SLAM Handbook: From Localization and Mapping to Spatial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/163400" rel="alternate"/>
<author>
<name>Carlone, Luca</name>
</author>
<author>
<name>Kim, Ayoung</name>
</author>
<author>
<name>Barfoot, Timothy</name>
</author>
<author>
<name>Cremers, Daniel</name>
</author>
<author>
<name>Dellaert, Frank</name>
</author>
<id>https://hdl.handle.net/1721.1/163400</id>
<updated>2026-03-08T03:27:42Z</updated>
<summary type="text">SLAM Handbook: From Localization and Mapping to Spatial Intelligence
Carlone, Luca; Kim, Ayoung; Barfoot, Timothy; Cremers, Daniel; Dellaert, Frank
Simultaneous Localization and Mapping—better known as SLAM—refers to the fundamental problem of building spatial models of an environment while simultaneously determining the position of a robot within that environment. The term itself was first coined in 1995 by Hugh Durrant-Whyte and John Leonard, marking the formalization of a problem that sits at the intersection of robotics, geometry, controls, and probabilistic inference.
SLAM is as elegant as it is formidable. At its core, it addresses the challenge of reasoning over high-dimensional, uncertain, and dynamic systems. The process demands precise spatial inference and robust probabilistic modeling to build coherent maps of the world—maps that must be constructed in real time, often under conditions of noise and ambiguity.
What makes SLAM particularly compelling is its universality. In computer vision, it is mirrored in the problem of Structure from Motion; in robotics, it underpins everything from indoor autonomous navigation to planetary exploration and self-driving cars. Since its inception, SLAM has inspired tens of thousands of research papers, drawing deeply from disciplines as diverse as physics, statistics, computer vision, geometry, controls, and machine learning. Its evolution has catalyzed the development of increasingly capable autonomous systems, able to operate at scale in complex, open-world environments.
This volume brings together contributions from some of the field’s foremost experts and rising stars. The chapters represent the state of the art in SLAM today, reflecting both the depth of theoretical innovations and the breadth of practical applications. From its early formulations based on Kalman filters and Bayesian estimation, SLAM has matured into a rich tapestry of mathematical frameworks—encompassing graph-based optimization, factor graphs, nonlinear least squares, and deep learning-based techniques. Beyond introducing the mathematical foundations of SLAM, this volume provides valuable guidance to the practitioner by discussing real-world use cases ranging from vision-based and LiDAR-based SLAM systems to legged locomotion. It also covers recent developments in Spatial AI, showing how advances in deep learning, differentiable rendering, and large vision and language models point the way toward representations that provide robots with a rich spatial and semantic understanding of their environment.
</summary>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Music and Theater Arts</title>
<link href="https://hdl.handle.net/1721.1/163399" rel="alternate"/>
<author>
<name>Scheib, Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/163399</id>
<updated>2025-10-29T03:08:36Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Music and Theater Arts
Scheib, Jay
This report contains the following sections: Accomplishments; Personnel Information; Teaching and Curriculum; Research Activities; Awards and Recognition; School and Institute Service; and Development.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Science Exocortex</title>
<link href="https://hdl.handle.net/1721.1/163398" rel="alternate"/>
<author>
<name>Yager, Kevin G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163398</id>
<updated>2026-03-08T03:21:31Z</updated>
<published>2024-08-15T00:00:00Z</published>
<summary type="text">Towards a Science Exocortex
Yager, Kevin G.
Artificial intelligence (AI) methods are poised to revolutionize intellectual work, with generative AI enabling automation of text analysis, text generation, and simple decision making or reasoning. The impact on science is only just beginning, but the opportunity is significant since scientific research relies fundamentally on extended chains of cognitive work. Here, we review the state of the art in agentic AI systems, and discuss how these methods could be extended to have even greater impact on science. We propose the development of an exocortex, a synthetic extension of a person's cognition. A science exocortex could be designed as a swarm of AI agents, with each agent individually streamlining specific researcher tasks, and whose inter-communication leads to emergent behavior that greatly extends the researcher's cognition and volition.
</summary>
<dc:date>2024-08-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domestic groundwater wells in Appalachia show evidence of low-dose, complex mixtures of legacy pollutants</title>
<link href="https://hdl.handle.net/1721.1/163397" rel="alternate"/>
<author>
<name>Bugher, Nicolette Anna</name>
</author>
<author>
<name>Xiong, Boya</name>
</author>
<author>
<name>Gentles, Runako I.</name>
</author>
<author>
<name>Glist, Lukas D.</name>
</author>
<author>
<name>Siegel, Helen G.</name>
</author>
<author>
<name>Johnson, Nicholaus P.</name>
</author>
<author>
<name>Clark, Cassandra J.</name>
</author>
<author>
<name>Deziel, Nicole</name>
</author>
<author>
<name>Saiers, James E.</name>
</author>
<author>
<name>Plata, Desiree</name>
</author>
<id>https://hdl.handle.net/1721.1/163397</id>
<updated>2026-03-08T03:21:27Z</updated>
<published>2024-06-20T00:00:00Z</published>
<summary type="text">Domestic groundwater wells in Appalachia show evidence of low-dose, complex mixtures of legacy pollutants
Bugher, Nicolette Anna; Xiong, Boya; Gentles, Runako I.; Glist, Lukas D.; Siegel, Helen G.; Johnson, Nicholaus P.; Clark, Cassandra J.; Deziel, Nicole; Saiers, James E.; Plata, Desiree
Lack of water quality data for private drinking water sources prevents robust evaluation of exposure risk for communities co-located with historically contaminated sites and ongoing industrial activity. Areas of the Appalachian region of the United States (i.e., Pennsylvania, Ohio and West Virginia) contain extensive hydraulic fracturing activity, as well as other extractive and industrial technologies, in close proximity to communities reliant on private drinking water sources, creating concern over potential groundwater contamination. In this study, we characterized volatile organic compound (VOC) occurrence at 307 private groundwater well sites within Pennsylvania, Ohio, and West Virginia. The majority (97%) of water samples contained at least one VOC, while the average number of VOCs detected at a given site was 5 ± 3. The majority of individual VOC concentrations fell below applicable U.S. Environmental Protection Agency (EPA) Maximum Contaminant Levels (MCLs), except for chloroform (MCL of 80 μg L⁻¹; n = 1 at 98 μg L⁻¹), 1,2-dibromoethane (MCL of 0.05 μg L⁻¹; n = 3 ranging from 0.05 to 0.35 μg L⁻¹), and 1,2-dibromo-3-chloropropane (MCL of 0.2 μg L⁻¹; n = 7 ranging from 0.20 to 0.58 μg L⁻¹). To evaluate well susceptibility to VOCs from industrial activity, distance to hydraulic fracturing site was used to assess correlations with contaminant occurrences. Proximity to the closest hydraulic fracturing well-site revealed no statistically significant linear relationships with either individual VOC concentrations or the frequency of VOC detections. Evaluation of other known industrial contamination sites (e.g., US EPA Superfund sites) revealed elevated levels of three VOCs (chloroform, toluene, benzene) in groundwaters within 10 km of those Superfund sites in West Virginia and Ohio, illuminating possible point source influence.
Lack of correlation between VOC concentrations and proximity to specific point sources indicates complex geochemical processes governing trace VOC contamination of private drinking water sources. While individual concentrations of VOCs fell well below recommended human health levels, the low dose exposure to multiple VOCs occurring in drinking supplies for Appalachian communities was noted, highlighting the importance of groundwater well monitoring.
</summary>
<dc:date>2024-06-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Opportunity for Utilizing End‐of‐Life Scrap to Meet Growing Copper Demand</title>
<link href="https://hdl.handle.net/1721.1/163396" rel="alternate"/>
<author>
<name>Diersen, Isabel</name>
</author>
<author>
<name>Bhuwalka, Karan</name>
</author>
<author>
<name>Olivetti, Elsa</name>
</author>
<id>https://hdl.handle.net/1721.1/163396</id>
<updated>2026-03-08T03:29:04Z</updated>
<published>2025-07-11T00:00:00Z</published>
<summary type="text">The Opportunity for Utilizing End‐of‐Life Scrap to Meet Growing Copper Demand
Diersen, Isabel; Bhuwalka, Karan; Olivetti, Elsa
As electrification trends and clean energy deployment drive up copper demand, there will be pressure on copper supply chains. With annual copper demand expected to grow by 50% and reach 49 Mt by 2035, the world will continue to need additional sources of copper supply. While expanding mining projects could increase copper production, given the significant stock of material, secondary copper can play a vital role in meeting demand. We analyze the opportunity to meet growing copper demand via increased scrap collection and improved technical recycling efficiencies. We use an economic model of the global copper system—with China analyzed separately from the rest of the world—to quantify supply evolution by incorporating price feedback between demand and supply. The model quantifies the impact of the increased collection on the displacement of mining production and demonstrates how increasing recycling can modulate supply risks and copper prices. Aligned with recent literature on future copper flows, we find that there is an opportunity to increase scrap supply in 2040 by 46% (6.3 Mt) compared with the baseline.
</summary>
<dc:date>2025-07-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topographic Stress as a Mechanical Weathering Mechanism on Titan</title>
<link href="https://hdl.handle.net/1721.1/163395" rel="alternate"/>
<author>
<name>Seltzer, Cassandra</name>
</author>
<author>
<name>Martel, Stephen J</name>
</author>
<author>
<name>Perron, J Taylor</name>
</author>
<id>https://hdl.handle.net/1721.1/163395</id>
<updated>2026-03-08T03:29:01Z</updated>
<published>2025-07-29T00:00:00Z</published>
<summary type="text">Topographic Stress as a Mechanical Weathering Mechanism on Titan
Seltzer, Cassandra; Martel, Stephen J; Perron, J Taylor
Titan is unique among icy moons for its active surface processes and extensive erosional features. The presence of coarse sediment suggests that mechanical weathering breaks down Titan's surface material, but the exact processes of mechanical weathering are unknown. We tested the idea that topographic features perturb ambient crustal stresses enough to generate or enhance fractures. We used a two‐dimensional boundary element model to predict the likely stress state within hypothetical erosional landforms on Titan, including river valleys and isolated ridges, and to model the locations and types of resulting fractures. Our results suggest that topographic stress perturbations are indeed sufficient to generate fractures and drive mechanical weathering, with little sensitivity to the density of the material making up Titan's crust and landforms and no dependence on its elastic moduli. For material density of 800 to 1,200 kg/m³, opening‐mode failure is predicted to occur within hypothetical Titan landforms with a width of hundreds of meters, relief of tens of meters or more, and horizontal tidal or tectonic stresses up to 1 MPa of compression, which encompasses typical predicted tidal stresses ranging between 10 kPa of compression and 10 kPa of tension. Under the same conditions, shear fracture is predicted to occur if the cohesion of the material is less than 100 kPa or if pore fluid pressures reduce local effective normal stresses. We therefore suggest that Titan's crust may be highly fractured and permeable, and that the predicted fractures could help generate sediment and provide pathways for subsurface transport of fluids.
</summary>
<dc:date>2025-07-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>The existence of subspace designs</title>
<link href="https://hdl.handle.net/1721.1/163394" rel="alternate"/>
<author>
<name>Keevash, Peter</name>
</author>
<author>
<name>Sah, Ashwin</name>
</author>
<author>
<name>Sawhney, Mehtaab</name>
</author>
<id>https://hdl.handle.net/1721.1/163394</id>
<updated>2026-03-08T03:29:00Z</updated>
<published>2025-07-17T00:00:00Z</published>
<summary type="text">The existence of subspace designs
Keevash, Peter; Sah, Ashwin; Sawhney, Mehtaab
We prove the existence of subspace designs with any given parameters, provided that the dimension of the underlying space is sufficiently large in terms of the other parameters of the design and satisfies the obvious necessary divisibility conditions. This settles an open problem from the 1970s. Moreover, we also obtain an approximate formula for the number of such designs.
</summary>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verbal disputes, social totality, and trans politics</title>
<link href="https://hdl.handle.net/1721.1/163393" rel="alternate"/>
<author>
<name>Zhou, Katie</name>
</author>
<id>https://hdl.handle.net/1721.1/163393</id>
<updated>2026-03-08T03:29:04Z</updated>
<published>2025-07-22T00:00:00Z</published>
<summary type="text">Verbal disputes, social totality, and trans politics
Zhou, Katie
A puzzling feature about the dispute over whether trans women are women is its apparent verbality: gender-critical theorists assert a biological fact about trans women, and trans-inclusionary theorists respond by asserting a social/psychological fact about trans women. But plausibly, both theorists’ assertions are compatible, and so there is no real disagreement. In this paper, I argue that the two theorists are not talking past each other. But I also argue that extant accounts of the dispute fail to adequately explain why the dispute is not merely verbal. Indeed, clarifying the dispute requires us to ask what it is for something to be a gender concept, as opposed to a merely biological or social/psychological concept. After developing a questions-based account of concepts and conceptual roles, I suggest that a necessary feature of gender concepts is that we use them to construct unified and portable narratives about how we will stand in relation to one another as social individuals, regardless of the particular social context we are in. This allows us to understand the trans woman dispute as a dispute about whether we should prioritize biological or social/psychological facts when interpreting our relations to one another.
</summary>
<dc:date>2025-07-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microscale Metal Additive Manufacturing by Solid‐State Impact Bonding of Shaped Thin Films</title>
<link href="https://hdl.handle.net/1721.1/163392" rel="alternate"/>
<author>
<name>Reiser, Alain</name>
</author>
<author>
<name>Schuh, Christopher A</name>
</author>
<id>https://hdl.handle.net/1721.1/163392</id>
<updated>2026-03-08T03:28:57Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Microscale Metal Additive Manufacturing by Solid‐State Impact Bonding of Shaped Thin Films
Reiser, Alain; Schuh, Christopher A
The deposition of device-grade inorganic materials is one key challenge toward the implementation of additive manufacturing (AM) in microfabrication, and to that end, a broad range of physico-chemical principles has been explored for 3D fabrication with micro- and nanoscale resolution. Yet, for metals, a process that achieves material quality rivalling that of established thin-film deposition methods and, at the same time, has the potential to combine high-throughput production with a broad palette of processable materials is still lacking. Here, the kinetic, solid-state bonding of metal thin films for the additive assembly of high-purity, high-density metals with micrometer-scale precision is introduced. Indirect laser ablation accelerates micrometer-thick gold films to hundreds of meters per second without their heating or ablation. Their subsequent impact on the substrate above a critical velocity forms a permanent, metallic bond in the solid state. Stacked layers are of high density (&gt;99%). By defining thin-film layers with established lithographic methods prior to launch, a variable feature size (2–50 µm), arbitrary shape of bonded layers, and parallel transfer of up to 36 independent film units in a single shot are demonstrated. Thus, the solid-state kinetic bonding principle is established as a viable and potentially versatile route for micro-scale AM of metals.
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Persistent Disruptions in Prefrontal Connectivity Despite Behavioral Rescue by Environmental Enrichment in a Mouse Model of Rett Syndrome</title>
<link href="https://hdl.handle.net/1721.1/163391" rel="alternate"/>
<author>
<name>Ährlund‐Richter, Sofie</name>
</author>
<author>
<name>Harpe, Jonathan</name>
</author>
<author>
<name>Fernandes, Giselle</name>
</author>
<author>
<name>Lam, Ruby</name>
</author>
<author>
<name>Sur, Mriganka</name>
</author>
<id>https://hdl.handle.net/1721.1/163391</id>
<updated>2026-03-08T03:28:53Z</updated>
<published>2025-07-17T00:00:00Z</published>
<summary type="text">Persistent Disruptions in Prefrontal Connectivity Despite Behavioral Rescue by Environmental Enrichment in a Mouse Model of Rett Syndrome
Ährlund‐Richter, Sofie; Harpe, Jonathan; Fernandes, Giselle; Lam, Ruby; Sur, Mriganka
Rett syndrome, a neurodevelopmental disorder caused by loss-of-function mutations in the MECP2 gene, is characterized by severe motor, cognitive, and emotional impairments. Some of the deficits may result from changes in cortical connections, especially downstream projections of the prefrontal cortex (PFC), which may also be targets of restoration following rearing conditions such as environmental enrichment that alleviate specific symptoms. Here, using a heterozygous Mecp2+/− female mouse model closely analogous to human Rett syndrome, we investigated the impact of early environmental enrichment on behavioral deficits and PFC connectivity. Behavioral analyses revealed that enriched housing rescued fine motor deficits and reduced anxiety, with enrichment-housed Mecp2+/− mice performing comparably to wild-type (WT) controls in rotarod and open field assays. Anatomical mapping of top-down anterior cingulate cortex (ACA) projections demonstrated altered PFC connectivity in Mecp2+/− mice, with increased axonal density in the somatosensory cortex and decreased density in the motor cortex compared to WT controls. ACA axons revealed shifts in hemispheric distribution, particularly in the medial network regions, with Mecp2+/− mice exhibiting reduced ipsilateral dominance. These changes were unaffected by enriched housing, suggesting that structural abnormalities in PFC connectivity persist despite behavioral improvements. Enriched housing rescued brain-derived neurotrophic factor (BDNF) levels in the hippocampus but failed to restore BDNF levels in the PFC, consistent with the persistent deficits observed in prefrontal axonal projections. These findings highlight the focal nature of changes induced by reduction of MeCP2 and by exposure to environmental enrichment and suggest that environmental enrichment starting in adolescence can alleviate behavioral deficits in Mecp2+/− mice without reversing abnormalities in large-scale cortical connectivity.
</summary>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Negotiating With an Aggressive Competitive Negotiator (ACN)</title>
<link href="https://hdl.handle.net/1721.1/163390" rel="alternate"/>
<author>
<name>Rowe, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/163390</id>
<updated>2025-10-26T03:01:19Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Negotiating With an Aggressive Competitive Negotiator (ACN)
Rowe, Mary
Note: This is a condensed version of material also contained in Mary Rowe's longer-form teaching note, "Notes on Dealing with an Aggressive Competitive Negotiator (ACN)," which is also available via DSpace.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Notes on Dealing with an Aggressive Competitive Negotiator (ACN) (Especially If You Are Cooperative)</title>
<link href="https://hdl.handle.net/1721.1/163389" rel="alternate"/>
<author>
<name>Rowe, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/163389</id>
<updated>2025-10-26T03:01:58Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Notes on Dealing with an Aggressive Competitive Negotiator (ACN) (Especially If You Are Cooperative)
Rowe, Mary
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agentic deep graph reasoning yields self-organizing knowledge networks</title>
<link href="https://hdl.handle.net/1721.1/163388" rel="alternate"/>
<author>
<name>Buehler, Markus J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163388</id>
<updated>2026-03-08T03:26:25Z</updated>
<published>2025-07-31T00:00:00Z</published>
<summary type="text">Agentic deep graph reasoning yields self-organizing knowledge networks
Buehler, Markus J.
We present an agentic, autonomous graph expansion framework that iteratively structures and refines knowledge in situ. Unlike conventional knowledge graph construction methods relying on static extraction or single-pass learning, our approach couples a reasoning-native large language model with a continually updated graph representation. At each step, the system actively generates new concepts and relationships, merges them into a global graph, and formulates subsequent prompts based on its evolving structure. Through this feedback-driven loop, the model organizes information into a scale-free network characterized by hub formation, stable modularity, and bridging nodes that link disparate knowledge clusters. Over hundreds of iterations, new nodes and edges continue to appear without saturating, while centrality measures and shortest path distributions evolve to yield increasingly distributed connectivity. Applied to materials design problems, we present compositional reasoning experiments to foster knowledge synthesis, yielding cross-domain ideas that transcend rote summarization.
</summary>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>EvenQuads Game and Error-Correcting Codes</title>
<link href="https://hdl.handle.net/1721.1/163387" rel="alternate"/>
<author>
<name>Byrapuram, Nikhil</name>
</author>
<author>
<name>Choi, Hwiseo</name>
</author>
<author>
<name>Ge, Adam</name>
</author>
<author>
<name>Ge, Selena</name>
</author>
<author>
<name>Lee, Sylvia Z.</name>
</author>
<author>
<name>Liang, Evin</name>
</author>
<author>
<name>Mandal, Rajarshi</name>
</author>
<author>
<name>Oki, Aika</name>
</author>
<author>
<name>Wu, Daniel</name>
</author>
<author>
<name>Yang, Michael</name>
</author>
<author>
<name>Khovanova, Tanya</name>
</author>
<id>https://hdl.handle.net/1721.1/163387</id>
<updated>2026-03-08T03:26:26Z</updated>
<published>2025-08-22T00:00:00Z</published>
<summary type="text">EvenQuads Game and Error-Correcting Codes
Byrapuram, Nikhil; Choi, Hwiseo; Ge, Adam; Ge, Selena; Lee, Sylvia Z.; Liang, Evin; Mandal, Rajarshi; Oki, Aika; Wu, Daniel; Yang, Michael; Khovanova, Tanya
EvenQuads is a new card game that is a generalization of the SET game, where each card is characterized by three attributes, each taking four possible values. Four cards form a quad when, for each attribute, the values are the same, all different, or half and half. For any ℓ cards selected from the deck of EvenQuads, it is possible to construct an error-correcting linear binary code of length ℓ and Hamming distance 4, where quads correspond to codewords of weight 4. Using error-correcting codes, we calculate the number of possible quads that can be formed with up to 8 cards. We also estimate the number of cards that do not contain quads for decks of different sizes. In addition, we discuss properties of error-correcting codes built on semimagic, magic, and strongly magic quad squares. This highlights a rich interplay between recreational mathematics games and coding theory and encourages others to explore similar combinatorial games for hidden connections!
</summary>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Applications Enabling Fusion Energy: Recent Developments</title>
<link href="https://hdl.handle.net/1721.1/163386" rel="alternate"/>
<author>
<name>Rea, Cristina</name>
</author>
<id>https://hdl.handle.net/1721.1/163386</id>
<updated>2025-10-25T03:09:08Z</updated>
<published>2025-09-03T00:00:00Z</published>
<summary type="text">Machine Learning Applications Enabling Fusion Energy: Recent Developments
Rea, Cristina
Over the last few years, machine learning helped to develop advanced capabilities for fusion energy over a broad range of domains. This includes advanced algorithms to extract information from fusion diagnostics, enhanced algorithms for plasma state estimation and control, accelerated simulation tools to improve predictive capabilities, and expanded modeling capabilities for fusion materials design. This topical collection covers recent developments in machine learning applied research further enabling the path to fusion energy; in particular it covers a wide breadth of fusion subfields – from inertial confinement fusion, to magnetically confined plasma, including high temperature superconducting magnet design and optimization. This editorial summarizes the collection while also providing a critical outlook on how machine learning can be used in the future to accelerate the development of fusion energy as a reliable energy source.
</summary>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis‐Related Nanoscale Defects in Mo‐Based Janus Monolayers Revealed by Cross‐Correlated AFM and TERS Imaging</title>
<link href="https://hdl.handle.net/1721.1/163385" rel="alternate"/>
<author>
<name>Zhang, Tianyi</name>
</author>
<author>
<name>Krayev, Andrey</name>
</author>
<author>
<name>Yang, Tilo H</name>
</author>
<author>
<name>Mao, Nannan</name>
</author>
<author>
<name>Hoang, Lauren</name>
</author>
<author>
<name>Wang, Zhien</name>
</author>
<author>
<name>Liu, Hongwei</name>
</author>
<author>
<name>Peng, Yu‐Ren</name>
</author>
<author>
<name>Zhu, Yunyue</name>
</author>
<author>
<name>Zheng, Xudong</name>
</author>
<author>
<name>Isotta, Eleonora</name>
</author>
<author>
<name>Kira, Maria E</name>
</author>
<author>
<name>Righi, Ariete</name>
</author>
<author>
<name>Pimenta, Marcos A</name>
</author>
<author>
<name>Chueh, Yu‐Lun</name>
</author>
<author>
<name>Pop, Eric</name>
</author>
<author>
<name>Mannix, Andrew J</name>
</author>
<author>
<name>Kong, Jing</name>
</author>
<id>https://hdl.handle.net/1721.1/163385</id>
<updated>2026-03-08T03:28:43Z</updated>
<published>2025-08-08T00:00:00Z</published>
<summary type="text">Synthesis‐Related Nanoscale Defects in Mo‐Based Janus Monolayers Revealed by Cross‐Correlated AFM and TERS Imaging
Zhang, Tianyi; Krayev, Andrey; Yang, Tilo H; Mao, Nannan; Hoang, Lauren; Wang, Zhien; Liu, Hongwei; Peng, Yu‐Ren; Zhu, Yunyue; Zheng, Xudong; Isotta, Eleonora; Kira, Maria E; Righi, Ariete; Pimenta, Marcos A; Chueh, Yu‐Lun; Pop, Eric; Mannix, Andrew J; Kong, Jing
2D Janus transition metal dichalcogenides (TMDs) are promising candidates for various applications including non-linear optics, energy harvesting, and catalysis. These materials are usually synthesized via chemical conversion of pristine TMDs. Nanometer-scale characterization of the obtained Janus materials’ morphology and local composition is crucial for both the synthesis optimization and the future device applications. In this work, we present the results of cross-correlated atomic force microscopy (AFM) and tip-enhanced Raman spectroscopy (TERS) study of Janus monolayers synthesized by the hydrogen plasma-assisted chemical conversion of MoSe2 and MoS2. We demonstrate that the choice of both the growth substrate and the starting TMD influences the residual strain, thereby shaping the nanoscale morphology of the resulting Janus material. Furthermore, by employing TERS imaging, we show the presence of nanoscale islands (≈20 nm across) of MoSe2-MoSSe (MoS2-MoSeS) vertical heterostructures originating from the bilayer nanoislands in the precursor monolayer crystals. The understanding of the origins of nanoscale defects in Janus TMDs revealed in this study can help with further optimization of the Janus conversion process towards uniform and wrinkle-/crack-free Janus materials. Moreover, this work shows that cross-correlated AFM and TERS imaging is a powerful and accessible method for studying nanoscale composition and defects in Janus TMD monolayers.
</summary>
<dc:date>2025-08-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Kigali story, the Singapore model, and rights to the city</title>
<link href="https://hdl.handle.net/1721.1/163384" rel="alternate"/>
<author>
<name>Fischer, Michael MJ</name>
</author>
<id>https://hdl.handle.net/1721.1/163384</id>
<updated>2026-03-08T03:28:46Z</updated>
<published>2025-08-05T00:00:00Z</published>
<summary type="text">The Kigali story, the Singapore model, and rights to the city
Fischer, Michael MJ
Three recent ethnographies of Kigali's urban planning and development provide a welcome addition to a long tradition of such ethnographies, including Lisa Redfield Peattie's famous fieldwork in the planning of Ciudad Guayana (1968; 1987), Grace Goodell's ethnographic account of the disjunction between planning offices in Tehran and the urban settlements (sharaks) of the Khuzistan Development Project modelled on the Tennessee Valley Authority (1986), and Gökçe Günel's ethnographic analysis of the disjunction between plans for, and implementation of, Masdar City and Masdar Institute in Abu Dhabi (2019).
</summary>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pressurized plankton observatory offers a new window into deep‐sea larval behavior</title>
<link href="https://hdl.handle.net/1721.1/163383" rel="alternate"/>
<author>
<name>Zúñiga Mouret, Rodrigo</name>
</author>
<author>
<name>Hourdez, Stéphane</name>
</author>
<author>
<name>Curran, Molly</name>
</author>
<author>
<name>DiBenedetto, Michelle H.</name>
</author>
<author>
<name>Mills, Susan W.</name>
</author>
<author>
<name>Vetriani, Costantino</name>
</author>
<author>
<name>Arellano, Shawn M.</name>
</author>
<author>
<name>Weston, Johanna N. J.</name>
</author>
<author>
<name>Dykman, Lauren N.</name>
</author>
<author>
<name>Best, Ayinde C.</name>
</author>
<author>
<name>Pires, Anthony</name>
</author>
<author>
<name>Mullineaux, Lauren S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163383</id>
<updated>2026-03-08T03:28:42Z</updated>
<published>2025-07-23T00:00:00Z</published>
<summary type="text">Pressurized plankton observatory offers a new window into deep‐sea larval behavior
Zúñiga Mouret, Rodrigo; Hourdez, Stéphane; Curran, Molly; DiBenedetto, Michelle H.; Mills, Susan W.; Vetriani, Costantino; Arellano, Shawn M.; Weston, Johanna N. J.; Dykman, Lauren N.; Best, Ayinde C.; Pires, Anthony; Mullineaux, Lauren S.
The High-Pressure Plankton Observatory (HiPPO) is designed to quantify motions of zooplankton for behavioral study, including swimming and metabolic responses to environmental perturbations. It builds on prior chamber designs while filling gaps in capability for resolving orientation of small (&lt; 1 mm) plankton, tracking their movements over ecologically relevant spatial scales, and recording in flow-through conditions on a vessel at sea. The HiPPO chamber has a direct light path for silhouette imaging of zooplankton as they move vertically and horizontally across a 3.56 cm diameter viewing area. Seawater forced by a high-performance liquid chromatography pump is exchanged continuously through the chamber, but flushing of zooplankton is prevented by fine mesh at the ports. A high-resolution camera/computer setup enables sustained imaging of plankton motions for quantitative analysis. Application of HiPPO to an investigation of larval behavior of deep-sea hydrothermal vent species revealed swimming behaviors similar to those of shallow-water species, including upward and downward helices, meandering, and short hovers. In conditions with microbial biofilm (a potential settlement cue) on a 2024 expedition, vent larvae unexpectedly swam rapidly upward in tight helices at velocities (0.15 cm s−1) higher than those observed in prior experiments with no biofilm (0.03 cm s−1). Many factors varied between the 2024 and earlier trials, so the difference cannot be attributed with certainty to a cue response. This study describes key new features of HiPPO and demonstrates the system's ability to document novel zooplankton behavior.
</summary>
<dc:date>2025-07-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Social-ecological system approaches for water resources management</title>
<link href="https://hdl.handle.net/1721.1/163382" rel="alternate"/>
<author>
<name>Gain, Animesh K.</name>
</author>
<author>
<name>Hossain, Sarwar</name>
</author>
<author>
<name>Benson, David</name>
</author>
<author>
<name>Di Baldassarre, Giuliano</name>
</author>
<author>
<name>Giupponi, Carlo</name>
</author>
<author>
<name>Huq, Nazmul</name>
</author>
<id>https://hdl.handle.net/1721.1/163382</id>
<updated>2026-03-08T03:28:45Z</updated>
<published>2020-06-18T00:00:00Z</published>
<summary type="text">Social-ecological system approaches for water resources management
Gain, Animesh K.; Hossain, Sarwar; Benson, David; Di Baldassarre, Giuliano; Giupponi, Carlo; Huq, Nazmul
In the era of the Anthropocene, understanding the dynamic interactions between humans and water is crucial for supporting both human well-being and the sustainable management of resources. The current water management challenges are inherently unpredictable and difficult to control. Social-ecological systems (SESs) approaches explicitly recognize the connections and feedbacks between human and natural systems. For addressing the complex challenges of the Anthropocene, consideration of SES attributes such as causality (or interdependence), feedback, non-linearity, heterogeneity, and cross-scale dynamics is important. In addition, innovative qualitative and quantitative methods such as Bayesian networks, agent-based modelling, system dynamics, network analysis, multicriteria analysis, integrated assessment and role-play games have recently been used in SES research. The overall goal of this review is to gauge the extent to which SES attributes and methods are considered within the current interdisciplinary water paradigm. The paper therefore develops the normative theoretical characteristics of SES in terms of its key attributes (i.e. causality, feedback, heterogeneity, nonlinearity, and cross-scale dynamics) incorporated in the water paradigm approaches. The paper then compares the methods applied in the interdisciplinary water paradigm and examines how they can complement each other. Finally, the paper reflects back on the usefulness of SES attributes and methods for assessing the interdisciplinary water paradigm and makes recommendations for future research.
</summary>
<dc:date>2020-06-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying and improving the optical performance of the laser ablation aerosol particle time of flight mass spectrometer (LAAPToF) instrument</title>
<link href="https://hdl.handle.net/1721.1/163381" rel="alternate"/>
<author>
<name>Zawadowicz, Maria A</name>
</author>
<author>
<name>Lance, Sara</name>
</author>
<author>
<name>Jayne, John T</name>
</author>
<author>
<name>Croteau, Philip</name>
</author>
<author>
<name>Worsnop, Douglas R</name>
</author>
<author>
<name>Mahrt, Fabian</name>
</author>
<author>
<name>Leisner, Thomas</name>
</author>
<author>
<name>Cziczo, Daniel J</name>
</author>
<id>https://hdl.handle.net/1721.1/163381</id>
<updated>2026-03-08T03:28:42Z</updated>
<published>2020-02-21T00:00:00Z</published>
<summary type="text">Quantifying and improving the optical performance of the laser ablation aerosol particle time of flight mass spectrometer (LAAPToF) instrument
Zawadowicz, Maria A; Lance, Sara; Jayne, John T; Croteau, Philip; Worsnop, Douglas R; Mahrt, Fabian; Leisner, Thomas; Cziczo, Daniel J
Single particle mass spectrometer (SPMS) instruments have been used for in-situ chemical characterization of atmospheric aerosols, both in the field and laboratory, for over two decades. SPMSs typically combine precise optical particle sizing with laser desorption and ionization followed by time of flight mass spectrometry. Among the advantages of SPMSs over other aerosol chemistry measurement techniques are their single particle resolution and high sensitivity to trace chemical species. The AeroMegt Laser Ablation Aerosol Particle Time of Flight Mass Spectrometer (LAAPToF) is a commercially available member of this instrument class, aiming for a compact size and simplicity for the end user. This article quantifies the performance of LAAPToF with an emphasis on optical counting efficiency. Recommendations for improving detection compared to the base LAAPToF hardware are described. Our results show that changes to the optical detection scheme can lead to over two orders of magnitude improvement in optical counting efficiency in the size range 500–2000 nm vacuum aerodynamic diameter. We also present mass spectral performance for characterizing atmospherically relevant particles in a comparison to a current SPMS design, the Particle Analysis by Laser Mass Spectrometry.
</summary>
<dc:date>2020-02-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expression of endogenous Anopheles gambiae microRNAs using an Anopheles gambiae densovirus (AgDNV) intronic expression system</title>
<link href="https://hdl.handle.net/1721.1/163380" rel="alternate"/>
<author>
<name>Johnson, Rebecca M.</name>
</author>
<author>
<name>Metz, Hillery C.</name>
</author>
<author>
<name>Suzuki, Yasutsugu</name>
</author>
<author>
<name>McLean, Kyle J.</name>
</author>
<author>
<name>Rasgon, Jason L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163380</id>
<updated>2026-03-08T03:26:21Z</updated>
<published>2025-08-19T00:00:00Z</published>
<summary type="text">Expression of endogenous Anopheles gambiae microRNAs using an Anopheles gambiae densovirus (AgDNV) intronic expression system
Johnson, Rebecca M.; Metz, Hillery C.; Suzuki, Yasutsugu; McLean, Kyle J.; Rasgon, Jason L.
Background Anopheles gambiae densovirus (AgDNV) is a highly species-specific parvovirus that reaches high titers in adult Anopheles gambiae mosquitoes with few transcriptomic effects and minimal significant fitness effects. Given these characteristics, AgDNV has been proposed as a viral vector for basic research and mosquito control. Previous work created an AgDNV co-expression system with a wild-type AgDNV helper plasmid and a transducing plasmid expressing enhanced green fluorescent protein (EGFP) that can be used to co-transfect cells to generate infectious recombinant transducing AgDNV virions. Generated virions infect the An. gambiae midgut, fat body, and ovaries, yet this viral vector system is limited in the size of transgenes that can be expressed due to capsid packaging limitations. Methods Considering these size constraints, we created an artificial intron within the EGFP gene of the transducing construct that can express small pieces of genetic material such as microRNAs (miRNAs), microRNA sponges, or other small sequences. Placement of this intron in EGFP created a fluorescent reporter such that incorrect splicing produces a frameshift mutation in EGFP and an early stop codon, whereas correct splicing results in normal EGFP expression and co-transcription of the intronic genetic cargo. A selection of miRNAs with predicted or demonstrated importance in mosquito immunity and reproduction with expression localized to the fat body or ovaries were chosen as intronic cargo. Construct expression and splicing was evaluated, and the impact of miRNA expression on putative miRNA targets was measured in vitro and in vivo. Results The created intron was correctly spliced in cells and mosquitoes; however, miRNA delivery resulted in inconsistent changes to miRNA and predicted target gene transcript levels—possibly due to organ-specific miRNA expression or inaccurate putative target predictions leading to miRNA–target gene sequence mismatch. 
Conclusions Although our results on target gene expression were inconsistent, with optimization this viral vector and developed intron have potential as an expression tool within An. gambiae mosquitoes or cell lines.
</summary>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advanced Modeling and Microstructural Insights into the Hot Deformation Behavior of Fe–11Al–5Mn–1Nb–1C Low-Density Steel</title>
<link href="https://hdl.handle.net/1721.1/163379" rel="alternate"/>
<author>
<name>Mahanta, Bashista K.</name>
</author>
<author>
<name>Rawat, Pankaj</name>
</author>
<author>
<name>Bhan, Sumit</name>
</author>
<author>
<name>Roy, Swagata</name>
</author>
<id>https://hdl.handle.net/1721.1/163379</id>
<updated>2026-03-08T03:26:43Z</updated>
<published>2025-05-18T00:00:00Z</published>
<summary type="text">Advanced Modeling and Microstructural Insights into the Hot Deformation Behavior of Fe–11Al–5Mn–1Nb–1C Low-Density Steel
Mahanta, Bashista K.; Rawat, Pankaj; Bhan, Sumit; Roy, Swagata
The hot deformation behavior of Fe–11Al–5Mn–1Nb–1C low-density steel was investigated using a GLEEBLE 3800R thermomechanical simulator across a temperature range of 900–1200 ℃ and strain rates of 1–0.001 s−1. An Arrhenius-type constitutive model was developed to predict flow stress during deformation, alongside a bilayer evolutionary neural network (EvoNN) model based on an artificial neural network (ANN) approach. The EvoNN model demonstrated higher prediction accuracy than the constitutive model. Microstructural analysis revealed a ferritic matrix with kappa carbide as a secondary phase at 900 and 1000 ℃, while at 1100 and 1200 ℃, a dual-phase structure (ferrite + austenite) with fine kappa carbides at the phase interface was observed. NbC particles were consistently present in all hot compressed samples. Partial dynamic recrystallization (DRX) occurred at 900 and 1000 ℃, whereas more extensive DRX was observed at 1100 and 1200 ℃. Grain coarsening was evident at lower strain rates, increasing as the strain rate decreased. Fine NbC particles and kappa carbides pinned grain boundaries, potentially delaying DRX onset, while coarse NbC particles appeared to enhance particle-stimulated nucleation (PSN), introducing complexity to DRX dynamics and contributing to model discrepancies in the constitutive and EvoNN model.
</summary>
<dc:date>2025-05-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Three-pion Bose-Einstein correlations measured in proton-proton collisions</title>
<link href="https://hdl.handle.net/1721.1/163378" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163378</id>
<updated>2026-03-08T03:26:22Z</updated>
<published>2025-08-21T00:00:00Z</published>
<summary type="text">Three-pion Bose-Einstein correlations measured in proton-proton collisions
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
A study on the Bose-Einstein correlations for triplets of same-sign pions is presented. The analysis is performed using proton-proton collisions at a centre-of-mass energy of √s = 7 TeV, recorded by the LHCb experiment, corresponding to an integrated luminosity of 1.0 fb−1. For the first time, the results are interpreted in the core-halo model. The parameters of the model are determined in regions of charged-particle multiplicity. This measurement provides insight into the nature of hadronisation in terms of coherence, being consistent with the presence of coherent emission of pions.
</summary>
<dc:date>2025-08-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for dark matter produced in association with one or two top quarks in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163377" rel="alternate"/>
<author>
<name>Chekhovsky, V.</name>
</author>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<id>https://hdl.handle.net/1721.1/163377</id>
<updated>2026-03-08T03:26:23Z</updated>
<published>2025-08-12T00:00:00Z</published>
<summary type="text">Search for dark matter produced in association with one or two top quarks in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.
A search is performed for dark matter (DM) produced in association with a single top quark or a pair of top quarks using the data collected with the CMS detector at the LHC from proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to 138 fb−1 of integrated luminosity. An excess of events with a large imbalance of transverse momentum is searched for across 0, 1 and 2 lepton final states. Novel multivariate techniques are used to take advantage of the differences in kinematic properties between the two DM production mechanisms. No significant deviations with respect to the standard model predictions are observed. The results are interpreted considering a simplified model in which the mediator is either a scalar or pseudoscalar particle and couples to top quarks and to DM fermions. Axion-like particles that are coupled to top quarks and DM fermions are also considered. Expected exclusion limits of 410 and 380 GeV for scalar and pseudoscalar mediator masses, respectively, are set at the 95% confidence level. A DM particle mass of 1 GeV is assumed, with mediator couplings to fermions and DM particles set to unity. A small signal-like excess is observed in data, with the largest local significance observed to be 1.9 standard deviations for the 150 GeV pseudoscalar mediator hypothesis. Because of this excess, mediator masses are only excluded below 310 (320) GeV for the scalar (pseudoscalar) mediator. The results are also translated into model-independent 95% confidence level upper limits on the visible cross section of DM production in association with top quarks, ranging from 1 pb to 0.02 pb.
</summary>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>A parametric approach to plot-based urban design: A climate-responsive algorithmic control for the generation of urban block</title>
<link href="https://hdl.handle.net/1721.1/163376" rel="alternate"/>
<author>
<name>Çalışkan, Olgu</name>
</author>
<author>
<name>Akay, Mert</name>
</author>
<id>https://hdl.handle.net/1721.1/163376</id>
<updated>2026-03-08T03:28:19Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">A parametric approach to plot-based urban design: A climate-responsive algorithmic control for the generation of urban block
Çalışkan, Olgu; Akay, Mert
In modern urbanism, (re)production of urban land predominantly relies on large parcels through intensive capital investments. This mainstream practice significantly shapes the overall urban form, subsequently influencing the quality of life through the perceived characteristics of the form and program of the planned districts. Consequently, critical urban design theory increasingly prioritizes the plot as the fundamental unit of future urban development. While ‘plot-based urbanism’ presents a responsive approach to this issue, there remains a notable gap in systematic methodologies that can be universally applied across different contexts. In this paper, the authors propose an algorithmic framework that would be employed as a design control tool based on the associative logic of plot-based urban formation. The model framework comprises three steps: (1) plot layout generation, (2) building configuration, and (3) incremental formation of the block fabric. The applied model demonstrates the compositional variation and coherence within the urban block while concurrently optimizing the climatic performance of the emerging fabric.
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing uncertainties in parton showers at double logarithmic accuracy for jet quenching studies</title>
<link href="https://hdl.handle.net/1721.1/163375" rel="alternate"/>
<author>
<name>Andres, Carlota</name>
</author>
<author>
<name>Apolinário, Liliana</name>
</author>
<author>
<name>Armesto, Néstor</name>
</author>
<author>
<name>Cordeiro, André</name>
</author>
<author>
<name>Dominguez, Fabio</name>
</author>
<author>
<name>Milhano, José G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163375</id>
<updated>2026-03-08T03:26:24Z</updated>
<published>2025-08-20T00:00:00Z</published>
<summary type="text">Assessing uncertainties in parton showers at double logarithmic accuracy for jet quenching studies
Andres, Carlota; Apolinário, Liliana; Armesto, Néstor; Cordeiro, André; Dominguez, Fabio; Milhano, José G.
We present a systematic study of how different choices of ordering and phase-space constraints in parton showers affect the space-time structure of vacuum parton cascades and their interface with jet quenching models. Using a simplified Monte Carlo shower implemented at double logarithmic accuracy, we analyse variations in emission patterns and resulting phase-space arising from three ordering variables: inverse formation time, invariant mass, and opening angle. These are coupled with two kinematic reconstruction schemes defined by different phase-space constraints. We show that, while global features are relatively stable, differences emerge in the temporal evolution of the cascade. To probe the impact of these differences, we introduce a simplified model for in-medium energy loss based on formation time and colour decoherence, enabling us to evaluate the sensitivity of quenching observables to the underlying space-time structure of the vacuum shower. We further quantify the role of time-ordering violations and propose strategies to preserve a consistent space-time interpretation. Lastly, we explore a range of alternative quenching models confirming the robustness of our conclusions. Our findings highlight the importance of maintaining a coherent space-time structure in parton shower algorithms when modelling jet propagation in an extended QCD medium, as this structure becomes a physically meaningful and testable component of the jet itself.
</summary>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Caribbean Creep meets Chesapeake Creep: marine bioinvasions and community shifts along the Mid-Atlantic Coast, USA</title>
<link href="https://hdl.handle.net/1721.1/163373" rel="alternate"/>
<author>
<name>Fowler, Amy E.</name>
</author>
<author>
<name>Blakeslee, April M. H.</name>
</author>
<author>
<name>Davinack, Andrew</name>
</author>
<author>
<name>Aguilar, Robert</name>
</author>
<author>
<name>Andersen, Miranda</name>
</author>
<author>
<name>Benadon, Clara</name>
</author>
<author>
<name>Choong, Henry H. C.</name>
</author>
<author>
<name>Green-Gavrielidis, Lindsay</name>
</author>
<author>
<name>Greenberg, Sarah R.</name>
</author>
<author>
<name>Hartshorn, El</name>
</author>
<author>
<name>Hobbs, Niels-Viggo</name>
</author>
<author>
<name>Labbe, Sara</name>
</author>
<author>
<name>Larson, Kristen</name>
</author>
<id>https://hdl.handle.net/1721.1/163373</id>
<updated>2026-03-08T03:28:41Z</updated>
<published>2025-08-18T00:00:00Z</published>
<summary type="text">Caribbean Creep meets Chesapeake Creep: marine bioinvasions and community shifts along the Mid-Atlantic Coast, USA
Fowler, Amy E.; Blakeslee, April M. H.; Davinack, Andrew; Aguilar, Robert; Andersen, Miranda; Benadon, Clara; Choong, Henry H. C.; Green-Gavrielidis, Lindsay; Greenberg, Sarah R.; Hartshorn, El; Hobbs, Niels-Viggo; Labbe, Sara; Larson, Kristen
The Mid-Atlantic waters of North America are warming faster than &gt; 90% of other global oceans, leading to significant increases in bottom water temperatures and influencing shifts in marine community structure. Given this modern-day scenario of significant community shifts over space and time, baseline surveys of species diversity are increasingly valuable. Therefore, we performed the first-ever marine bioinvasions Rapid Assessment Survey (RAS) along the Mid-Atlantic waters of the United States in June 2023, focused on marina floating pontoons in Virginia, Maryland, Delaware, and New Jersey. We recorded 29 non-indigenous, 16 cryptogenic, and 10 species that have expanded their ranges in the mid-Atlantic. Seven of these 10 species have expanded northwards from southern locations in the Caribbean (“Caribbean Creep”) or the western Atlantic (“Chesapeake Creep”), and three have expanded southwards. Five non-indigenous species (NIS) were found at more than 60% of the 10 sampled sites: the bryozoans Bugula neritina, Schizoporella pungens, Tricellaria inopinata, macroalgae Codium fragile subsp. fragile, and the sea anemone Aiptasiogeton eruptaurantia. We did not document any new nonindigenous species not already recorded on the Western Atlantic coast. All 10 communities were distinctly different, and species dominance varied by latitude and by site. This first-ever RAS of the Mid-Atlantic waters of the United States provides critical insight into how marine communities have been and are changing as a result of colonization by NIS, including those that have expanded their ranges as a result of human-induced climate change.
</summary>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Observation of the distribution of nuclear magnetization in a molecule</title>
<link href="https://hdl.handle.net/1721.1/163372" rel="alternate"/>
<author>
<name>Wilkins, S. G.</name>
</author>
<author>
<name>Udrescu, S. M.</name>
</author>
<author>
<name>Athanasakis-Kaklamanakis, M.</name>
</author>
<author>
<name>Garcia Ruiz, R. F.</name>
</author>
<author>
<name>Belosevic, I.</name>
</author>
<author>
<name>Berger, R.</name>
</author>
<author>
<name>Bissell, M. L.</name>
</author>
<author>
<name>Breier, A. A.</name>
</author>
<author>
<name>Brinson, A. J.</name>
</author>
<author>
<name>Chrysalidis, K.</name>
</author>
<author>
<name>Cocolios, T. E.</name>
</author>
<author>
<name>de Groote, R. P.</name>
</author>
<author>
<name>Dorne, A.</name>
</author>
<author>
<name>Flanagan, K. T.</name>
</author>
<author>
<name>Franchoo, S.</name>
</author>
<author>
<name>Gaul, K.</name>
</author>
<author>
<name>Geldhof, S.</name>
</author>
<author>
<name>Giesen, T. F.</name>
</author>
<author>
<name>Hanstorp, D.</name>
</author>
<author>
<name>Heinke, R.</name>
</author>
<author>
<name>Isaev, T.</name>
</author>
<author>
<name>Koszorus, A.</name>
</author>
<author>
<name>Kujanpa, S.</name>
</author>
<author>
<name>Lalanne, L.</name>
</author>
<author>
<name>Neyens, G.</name>
</author>
<author>
<name>Nichols, M.</name>
</author>
<author>
<name>Perrett, H.A.</name>
</author>
<author>
<name>Reilly, J.R.</name>
</author>
<author>
<name>Skripnikov, L. V.</name>
</author>
<author>
<name>Rothe, S.</name>
</author>
<author>
<name>van den Borne, B.</name>
</author>
<author>
<name>Wang, W.</name>
</author>
<author>
<name>Wessolek, J.</name>
</author>
<author>
<name>Yang, X.F.</name>
</author>
<author>
<name>Zulch, C.Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/163372</id>
<updated>2026-03-08T03:28:47Z</updated>
<published>2025-10-23T00:00:00Z</published>
<summary type="text">Observation of the distribution of nuclear magnetization in a molecule
Wilkins, S. G.; Udrescu, S. M.; Athanasakis-Kaklamanakis, M.; Garcia Ruiz, R. F.; Belosevic, I.; Berger, R.; Bissell, M. L.; Breier, A. A.; Brinson, A. J.; Chrysalidis, K.; Cocolios, T. E.; de Groote, R. P.; Dorne, A.; Flanagan, K. T.; Franchoo, S.; Gaul, K.; Geldhof, S.; Giesen, T. F.; Hanstorp, D.; Heinke, R.; Isaev, T.; Koszorus, A.; Kujanpa, S.; Lalanne, L.; Neyens, G.; Nichols, M.; Perrett, H.A.; Reilly, J.R.; Skripnikov, L. V.; Rothe, S.; van den Borne, B.; Wang, W.; Wessolek, J.; Yang, X.F.; Zulch, C.Z.
Rapid progress in the experimental control and interrogation of molecules, combined with developments in precise calculations of their structure, is enabling new opportunities in the investigation of nuclear and particle physics phenomena. Molecules containing heavy, octupole-deformed nuclei such as radium are of particular interest for such studies, offering an enhanced sensitivity to the properties of fundamental particles and interactions. Here, we report precision laser spectroscopy measurements and theoretical calculations of the structure of the radioactive radium monofluoride molecule, 225Ra19F. Our results allow fine details of the short-range electron-nucleus interaction to be revealed, indicating the high sensitivity of this molecule to the distribution of magnetization, currently a poorly constrained nuclear property, within the radium nucleus. These results provide a direct and stringent test of the description of the electronic wavefunction inside the nuclear volume, highlighting the suitability of these molecules to investigate subatomic phenomena.
</summary>
<dc:date>2025-10-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Wafold: A Theory of Spacetime Termination Inside Black Holes</title>
<link href="https://hdl.handle.net/1721.1/163371" rel="alternate"/>
<author>
<name>Viaña, Javier</name>
</author>
<id>https://hdl.handle.net/1721.1/163371</id>
<updated>2025-10-23T03:01:55Z</updated>
<published>2025-10-22T00:00:00Z</published>
<summary type="text">The Wafold: A Theory of Spacetime Termination Inside Black Holes
Viaña, Javier
This article proposes a novel conceptual interpretation of black holes in which spacetime can terminate on a curvature-triggered hypersurface. When curvature reaches a critical limit, the three-dimensional spatial geometry is proposed to undergo a dimensional compression into a thin, curved boundary identified as the wafold. Beyond this, spacetime no longer continues; the manifold itself comes to an end. All mass-energy and information would then be confined to the wafold, forming a structure consistent with the external Schwarzschild geometry and the Bekenstein-Hawking entropy law. We outline a possible Dimensional Conversion Law that could govern this phenomenon, and discuss the conservation, causal, and thermodynamic implications of the wafold at a conceptual level. This work should be regarded as a hypothesis-generating perspective, not a complete theory. Its purpose is to motivate further mathematical and physical inquiry.
</summary>
<dc:date>2025-10-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Performance of Metal Hydride Composite Neutron Shields for Compact, High-Power Fusion Reactors</title>
<link href="https://hdl.handle.net/1721.1/163370" rel="alternate"/>
<author>
<name>Fletcher, Jack W</name>
</author>
<author>
<name>Peterson, Ethan E</name>
</author>
<author>
<name>Trelewicz, Jason R</name>
</author>
<author>
<name>Snead, Lance L</name>
</author>
<id>https://hdl.handle.net/1721.1/163370</id>
<updated>2026-03-08T03:28:33Z</updated>
<published>2025-08-04T00:00:00Z</published>
<summary type="text">Design and Performance of Metal Hydride Composite Neutron Shields for Compact, High-Power Fusion Reactors
Fletcher, Jack W; Peterson, Ethan E; Trelewicz, Jason R; Snead, Lance L
We present the process and results of neutronics-driven shielding design using metal and ceramic matrix metal hydride neutron shields within the context of compact, high-power tokamaks. In particular, hafnium hydrides were considered within a matrix of stainless steel or magnesium oxide and contrasted with established and novel fast neutron shielding materials. These shielding materials are found to substantially increase the lifetime of toroidal field magnets made of high-temperature superconductors by a factor of up to 14.5. Specifically, a stainless steel–20% HfH1.7 thermal shield and outer neutron shield, paired with an inner tungsten carbide (WC) shield and toroidal field magnet case and winding pack both doped with 40% HfH1.7 by volume, were found to achieve a 93.1% reduction in peak fast neutron flux to high-temperature superconductor tapes. Simultaneously, this configuration reduced the total mass (and cost) of the neutron shield, as well as the nuclear heating rate of the magnet coil, in comparison to monolithic shields of WC and boron carbide.
</summary>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>GeoConformal Prediction: A Model-Agnostic Framework for Measuring the Uncertainty of Spatial Prediction</title>
<link href="https://hdl.handle.net/1721.1/163369" rel="alternate"/>
<author>
<name>Lou, Xiayin</name>
</author>
<author>
<name>Luo, Peng</name>
</author>
<author>
<name>Meng, Liqiu</name>
</author>
<id>https://hdl.handle.net/1721.1/163369</id>
<updated>2026-03-08T03:28:52Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">GeoConformal Prediction: A Model-Agnostic Framework for Measuring the Uncertainty of Spatial Prediction
Lou, Xiayin; Luo, Peng; Meng, Liqiu
Spatial prediction is a fundamental task in geography, providing essential data support for various scenarios. Recent advancements, empowered by the development of geospatial artificial intelligence (GeoAI), have primarily focused on improving prediction accuracy while overlooking reliable measurements of prediction uncertainty. Such measures are crucial for enhancing model trustworthiness and supporting responsible decision-making. To address this issue, we propose a model-agnostic uncertainty assessment method called GeoConformal Prediction (GeoCP). First, a simulation study is conducted to validate the usefulness of GeoCP. Then, we applied GeoCP to two classic spatial prediction cases, spatial regression and spatial interpolation, to evaluate its reliability. For the case of spatial regression, we used XGBoost to predict housing prices, followed by GeoCP to calculate uncertainty. Our results show that GeoCP achieved a coverage rate of 93.67 percent, whereas bootstrapping methods reached a maximum coverage of 81.00 percent after 2,000 runs. We then applied GeoCP for the case of spatial interpolation models. By comparing a GeoAI-based geostatistical model with a traditional geostatistical model (Kriging), we found that the uncertainty obtained from GeoCP aligned closely with the variance in Kriging. Finally, using GeoCP, we analyzed the sources of uncertainty in spatial prediction. We found that explicitly including local features in AI models can significantly reduce prediction uncertainty, especially in areas with strong local dependence. Our findings suggest that GeoCP holds substantial potential not only for geographic knowledge discovery but also for guiding the design of future GeoAI models, paving the way for more reliable and interpretable spatial prediction frameworks. The method is implemented in an open-source Python package named geoconformal. Key Words: conformal prediction, GeoAI, Kriging, spatial regression, spatial uncertainty.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Belief revision revised</title>
<link href="https://hdl.handle.net/1721.1/163368" rel="alternate"/>
<author>
<name>Pearson, Joshua Edward</name>
</author>
<id>https://hdl.handle.net/1721.1/163368</id>
<updated>2026-03-08T03:28:32Z</updated>
<published>2025-07-27T00:00:00Z</published>
<summary type="text">Belief revision revised
Pearson, Joshua Edward
I outline a novel counterexample to the principle of belief revision, Anticipation: if both learning e and learning not-e would render belief in p unjustified, you cannot now be justified in believing p. If I am right, not only is the leading theory of belief revision false, so are various recently proposed weakenings. I develop and defend a new theory that correctly predicts the failures of Anticipation I argue for, predicated on the simple idea that one is justified in ruling out a possibility just in case that possibility is sufficiently improbable.
</summary>
<dc:date>2025-07-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incorporating Deep Learning Into System Dynamics: Amortized Bayesian Inference for Scalable Likelihood‐Free Parameter Estimation</title>
<link href="https://hdl.handle.net/1721.1/163367" rel="alternate"/>
<author>
<name>Rahmandad, Hazhir</name>
</author>
<author>
<name>Akhavan, Ali</name>
</author>
<author>
<name>Jalali, Mohammad S</name>
</author>
<id>https://hdl.handle.net/1721.1/163367</id>
<updated>2026-03-08T03:28:50Z</updated>
<published>2025-01-21T00:00:00Z</published>
<summary type="text">Incorporating Deep Learning Into System Dynamics: Amortized Bayesian Inference for Scalable Likelihood‐Free Parameter Estimation
Rahmandad, Hazhir; Akhavan, Ali; Jalali, Mohammad S
Estimating parameters and their credible intervals for complex system dynamics models is challenging but critical to continuous model improvement and reliable communication with an increasing fraction of audiences. The purpose of this study is to integrate Amortized Bayesian Inference (ABI) methods with system dynamics. Utilizing Neural Posterior Estimation (NPE), we train neural networks using synthetic data (pairs of ground truth parameters and outcome time series) to estimate parameters of system dynamics models. We apply this method to two example models: a simple Random Walk model and a moderately complex SEIRb model. We show that the trained neural networks can output the posterior for parameters instantly given new, unseen time series data. Our analysis highlights the potential of ABI to facilitate a principled, scalable, and likelihood-free inference workflow that enhances the integration of models of complex systems with data. Accompanying code streamlines application to diverse system dynamics models.
</summary>
<dc:date>2025-01-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Influences of Non‐Oberbeck–Boussinesq Effects on Tracer Transport in Icy Ocean Worlds</title>
<link href="https://hdl.handle.net/1721.1/163366" rel="alternate"/>
<author>
<name>Wang, Shuang</name>
</author>
<author>
<name>Kang, Wanying</name>
</author>
<id>https://hdl.handle.net/1721.1/163366</id>
<updated>2026-03-08T03:28:55Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Influences of Non‐Oberbeck–Boussinesq Effects on Tracer Transport in Icy Ocean Worlds
Wang, Shuang; Kang, Wanying
The subsurface oceans on icy satellites are potentially habitable. To understand their habitability, we need to know how tracers with various lifetimes are distributed. Convection is the main vehicle for tracer transport, and we expect convection on icy satellites to differ from regular rotating convection because, as pressure increases, water's thermal expansivity can vary by orders of magnitude or even reverse sign near the freezing point. Any variation of fluid properties would break the Oberbeck–Boussinesq approximation, leading to non-Oberbeck–Boussinesq (NOB) effects, measured by a coefficient ϵ. In this work, we identify two competing impacts of NOB effects on tracer transport. The first promotes overall upward tracer transport at ϵ²-order, while the second enhances transport near the bottom source but inhibits transport further up at ϵ³-order. In the weakly nonlinear regime, the former effect dominates, causing more tracers to reach the ice shell; in the strongly nonlinear regime, the latter effect dominates, reducing tracer concentrations near the ice shell. By varying particle lifetimes, we find that NOB corrections are most pronounced when the particle lifetime is comparable to the timescale of upward tracer transport. Additionally, when NOB effects are strong enough to create a stratified layer in the upper part of the ocean, tracer transport into the stratified layer is set by energetics. These effects are expected to prolong the transport timescale of chemical tracers or biosignatures from the seafloor to the ice shell on icy satellites.
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single Word Change Is All You Need: Using LLMs to Create Synthetic Training Examples for Text Classifiers</title>
<link href="https://hdl.handle.net/1721.1/163365" rel="alternate"/>
<author>
<name>Xu, Lei</name>
</author>
<author>
<name>Alnegheimish, Sarah</name>
</author>
<author>
<name>Berti‐Equille, Laure</name>
</author>
<author>
<name>Cuesta‐Infante, Alfredo</name>
</author>
<author>
<name>Veeramachaneni, Kalyan</name>
</author>
<id>https://hdl.handle.net/1721.1/163365</id>
<updated>2026-03-08T03:28:50Z</updated>
<published>2025-07-07T00:00:00Z</published>
<summary type="text">Single Word Change Is All You Need: Using LLMs to Create Synthetic Training Examples for Text Classifiers
Xu, Lei; Alnegheimish, Sarah; Berti‐Equille, Laure; Cuesta‐Infante, Alfredo; Veeramachaneni, Kalyan
In text classification, creating an adversarial example means subtly perturbing a few words in a sentence without changing its meaning, causing it to be misclassified by a classifier. A concerning observation is that a significant portion of adversarial examples generated by existing methods change only one word. This single-word perturbation vulnerability represents a significant weakness in classifiers, which malicious users can exploit to efficiently create a multitude of adversarial examples. This paper studies this problem and makes the following key contributions: (1) We introduce a novel metric ρ to quantitatively assess a classifier's robustness against single-word perturbation. (2) We present the SP-Attack, designed to exploit the single-word perturbation vulnerability, achieving a higher attack success rate and better preserving sentence meaning while reducing computation costs compared to state-of-the-art adversarial methods. (3) We propose SP-Defence, which aims to improve ρ by applying data augmentation in learning. Experimental results on 4 datasets and 2 masked language models show that SP-Defence improves ρ by 14.6% and 13.9% and decreases the attack success rate of SP-Attack by 30.4% and 21.2% on the two classifiers respectively, and decreases the attack success rate of existing attack methods that involve multiple-word perturbation.
</summary>
<dc:date>2025-07-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation and Spatial Optimization Model of Urban Medical Resource Distribution Considering Equity and Efficiency</title>
<link href="https://hdl.handle.net/1721.1/163364" rel="alternate"/>
<author>
<name>Yao, Yao</name>
</author>
<author>
<name>Wang, Yujia</name>
</author>
<author>
<name>Liang, Lin</name>
</author>
<author>
<name>Yan, Xiaoqin</name>
</author>
<author>
<name>Dong, Anning</name>
</author>
<author>
<name>Guan, Qingfeng</name>
</author>
<author>
<name>Luo, Peng</name>
</author>
<id>https://hdl.handle.net/1721.1/163364</id>
<updated>2026-03-08T03:28:54Z</updated>
<published>2025-07-05T00:00:00Z</published>
<summary type="text">Evaluation and Spatial Optimization Model of Urban Medical Resource Distribution Considering Equity and Efficiency
Yao, Yao; Wang, Yujia; Liang, Lin; Yan, Xiaoqin; Dong, Anning; Guan, Qingfeng; Luo, Peng
The rapidly increasing demand for medical resources in countries undergoing accelerating urbanization faces the challenge of unequal resource distribution. Despite numerous studies on the siting of medical resources aimed at improving public accessibility and efficiency, there is comparatively little research focusing on the equity of access to medical resources. This study establishes a framework that optimizes the distribution of medical resources by considering both equity and efficiency. We introduce an optimization allocation model for both equity and efficiency based on the location set coverage problem (LSCP). The model combines a region-growing algorithm with a genetic algorithm to optimize site selection for hospitals. Taking Wuhan as the study area, the results demonstrate that the optimized service coverage increases by 21.2%, and the proportion of people served reaches 87.3%. The hospital bed utilization rate in downtown areas reaches 92.89%, while it exceeds 99% at suburban hospitals. The optimized site selection significantly enhances medical resource utilization efficiency, effectively addressing the resource distribution inequity between urban and rural areas. This study offers a novel approach to optimizing medical resource allocation, effectively balancing equity and efficiency, and provides valuable theoretical underpinnings for enhancing medical service systems in emerging urban areas.
</summary>
<dc:date>2025-07-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the use of high‐density polyethylene bottles for long‐term storage of total alkalinity samples</title>
<link href="https://hdl.handle.net/1721.1/163363" rel="alternate"/>
<author>
<name>Woosley, Ryan J</name>
</author>
<author>
<name>Neithardt, Daina</name>
</author>
<author>
<name>Bruno, Jessica A</name>
</author>
<author>
<name>Lahn, Lou</name>
</author>
<id>https://hdl.handle.net/1721.1/163363</id>
<updated>2026-03-08T03:28:48Z</updated>
<published>2025-06-25T00:00:00Z</published>
<summary type="text">On the use of high‐density polyethylene bottles for long‐term storage of total alkalinity samples
Woosley, Ryan J; Neithardt, Daina; Bruno, Jessica A; Lahn, Lou
Total alkalinity (TA) plays an important role in buffering seawater and determines how much anthropogenic carbon dioxide the oceans can absorb to mitigate the rise in atmospheric concentrations. Total alkalinity varies with location, depth, and time, making it an important variable for quantifying and monitoring ocean acidification, and potentially for ocean alkalinity enhancement interventions. Currently, best practice is to use expensive, high-quality borosilicate glass bottles for collecting and storing these samples. However, unlike other carbon system variables, TA is not affected by gas exchange, meaning plastic bottles may be suitable for TA sample storage. Plastic bottles are lighter, cheaper, and less prone to breakage, making them easier to handle and ship. Here, we test the suitability of high-density polyethylene (HDPE) bottles for the collection and long-term storage of TA samples. In two sets of experiments, we determined that HDPE is not suitable for long-term storage of TA samples: there were large changes in TA over time, and the precision of duplicate samples was very poor. We hypothesize that HDPE plastic is slightly porous, allowing alkalinity to leach either into or out of the bottle over time and altering the value of the sample. Use of HDPE bottles is not recommended for long-term storage of TA samples.
</summary>
<dc:date>2025-06-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adapting temporal preference to scarcity: A role for emotion?</title>
<link href="https://hdl.handle.net/1721.1/163362" rel="alternate"/>
<author>
<name>Blain, Bastien</name>
</author>
<author>
<name>Globig, Laura K.</name>
</author>
<author>
<name>Sharot, Tali</name>
</author>
<id>https://hdl.handle.net/1721.1/163362</id>
<updated>2026-03-08T03:26:15Z</updated>
<published>2025-06-20T00:00:00Z</published>
<summary type="text">Adapting temporal preference to scarcity: A role for emotion?
Blain, Bastien; Globig, Laura K.; Sharot, Tali
A critical optimization problem is how to distribute resource consumption over time. Humans tend to value immediate rewards over equivalent future rewards, a phenomenon called temporal discounting. Such imbalance can lead to poor health, education, and financial decisions. It is also a hurdle for implementing sustainability policies. A major research goal is to identify factors that influence temporal discounting, so that policymakers can develop interventions to correct for this imbalance. One such factor is available resources; scarcity may increase temporal discounting. Another potential factor is emotion; negative emotions may lead to high temporal discounting. However, emotion and resources are not independent. For example, losing a large sum of money will lead to negative affect. Here, we take advantage of one of the largest global ‘income shocks’ in history to tease apart the roles of emotion and income in temporal discounting. We tested 1,145 individuals as the market was crashing and unemployment was rising in late March 2020, and then retested 200 of those individuals as the market was recovering in June 2020. We found that income shock was strongly related to an increase in delay discounting in both cross-sectional and longitudinal data. Importantly, this relationship was independent of the negative impact on affect. These findings suggest that, contrary to widely held assumptions, people directly adapt delay discounting to environmental constraints, without the need for input from the affective system. This independence may be adaptive, as affect is a noisy reflection of environmental constraints, which may lead to suboptimal choice.
</summary>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shattering in the Ising p-spin glass model</title>
<link href="https://hdl.handle.net/1721.1/163361" rel="alternate"/>
<author>
<name>Gamarnik, David</name>
</author>
<author>
<name>Jagannath, Aukosh</name>
</author>
<author>
<name>Kızıldağ, Eren C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163361</id>
<updated>2026-03-08T03:26:16Z</updated>
<published>2025-09-11T00:00:00Z</published>
<summary type="text">Shattering in the Ising p-spin glass model
Gamarnik, David; Jagannath, Aukosh; Kızıldağ, Eren C.
We study the Ising p-spin glass model for large p. We show that for any inverse temperature ln 2 &lt; β &lt; 2 ln 2 and any large p, the model exhibits shattering: w.h.p. as n → ∞, there exist exponentially many well-separated clusters such that (a) each cluster has exponentially small Gibbs mass, and (b) the clusters collectively contain all but a vanishing fraction of the Gibbs mass. Moreover, these clusters consist of configurations with energy near β. The range of temperatures for which shattering occurs lies within the replica symmetric region. To the best of our knowledge, this is the first shattering result for the Ising p-spin glass model. Furthermore, we show that for any γ &gt; 0 and any large enough p, the model exhibits an intricate geometrical property known as the multi Overlap Gap Property above the energy value γ 2 ln 2. Our proofs are elementary and, in particular, are based on simple applications of the first and second moment methods.
</summary>
<dc:date>2025-09-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combined mechanical ventilatory and mechanical circulatory support aids pulmonary vascular state in cardiogenic shock</title>
<link href="https://hdl.handle.net/1721.1/163360" rel="alternate"/>
<author>
<name>Lamberti, Kimberly K.</name>
</author>
<author>
<name>Edelman, Elazer R.</name>
</author>
<author>
<name>Keller, Steven P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163360</id>
<updated>2026-03-08T03:28:18Z</updated>
<published>2025-10-15T00:00:00Z</published>
<summary type="text">Combined mechanical ventilatory and mechanical circulatory support aids pulmonary vascular state in cardiogenic shock
Lamberti, Kimberly K.; Edelman, Elazer R.; Keller, Steven P.
Background: Percutaneous ventricular assist devices (pVADs) support patients in circulatory failure and, increasingly, concomitant respiratory failure. The presence of co-existent lung disease creates a management challenge due to cardiopulmonary interactions, especially when mechanical ventilation and mechanical circulatory support are applied simultaneously. Enhanced understanding of the combined effects of these devices is necessary to better inform care for circulatory failure patients. Methods: A porcine model of titratable acute cardiogenic shock was used to quantify the effect of pVAD support on cardiac loading states in five intubated animals with positive pressure ventilation and varied intrathoracic pressure. Cardiovascular hemodynamics were assessed across positive end-expiratory pressure (PEEP) ramps in animals in health, in health with pVAD, and in pVAD-supported cardiogenic shock induced via coronary microembolization. Results: This study employed invasive physiological metrics and assessment of right and left ventricular pressure-volume loops to recreate classic Frank-Starling curves. Increased intrathoracic pressure altered transmural pressure in the ventricles and the pulmonary vasculature and resulted in decreased venous return and stroke volume while increasing end-diastolic pressure, consistent with decreased ventricular compliance. In pVAD-supported cardiogenic shock, elevated PEEP enhanced left ventricular output and increased pulmonary vascular compliance in several animals, contrary to the traditional decrements observed with elevated PEEP. The right ventricular functional response aligned with these varied responses in pulmonary vascular state. Conclusions: These results demonstrate that combined use of cardiopulmonary support devices in cardiogenic shock can produce responses that differ from classic physiological understanding. In pVAD-supported cardiogenic shock, an increase in ventilatory PEEP increased unloading of the heart and improved right ventricular function, counter to traditional findings. This demonstrates that combined use of these technologies could be leveraged to optimize a patient’s volume status in complex shock and provides promise for management of patients with cardiopulmonary failure requiring simultaneous use of mechanical circulatory support and mechanical ventilation.
</summary>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Chip-Firing on Undirected Binary Trees</title>
<link href="https://hdl.handle.net/1721.1/163359" rel="alternate"/>
<author>
<name>Inagaki, Ryota</name>
</author>
<author>
<name>Khovanova, Tanya</name>
</author>
<author>
<name>Luo, Austin</name>
</author>
<id>https://hdl.handle.net/1721.1/163359</id>
<updated>2026-03-08T03:26:07Z</updated>
<published>2025-08-18T00:00:00Z</published>
<summary type="text">On Chip-Firing on Undirected Binary Trees
Inagaki, Ryota; Khovanova, Tanya; Luo, Austin
Chip-firing is a combinatorial game played on an undirected graph in which we place chips on vertices and disperse them. We study chip-firing on an infinite binary tree in which we add a self-loop to the root so that every vertex has degree 3. A vertex can fire if the number of chips placed on it is at least its degree; in our case, a vertex can fire if it has at least three chips, and it fires by dispersing one chip to each neighbor. Motivated by a 2023 paper by Musiker and Nguyen on this setting of chip-firing, we give an upper bound for the number of stable configurations when we place 2^ℓ - 1 labeled chips at the root. When starting with N chips at the root, where N is a positive integer not necessarily of the form 2^ℓ - 1, we determine the number of times each vertex fires. We also calculate the total number of fires in this case.
</summary>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of charged hadron multiplicity in Au+Au collisions at √s_NN = 200 GeV with the sPHENIX detector</title>
<link href="https://hdl.handle.net/1721.1/163358" rel="alternate"/>
<author>
<name>Abdulhamid, M. I.</name>
</author>
<author>
<name>Acharya, U.</name>
</author>
<author>
<name>Adams, E. R.</name>
</author>
<author>
<name>Adawi, G.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Akiba, Y.</name>
</author>
<author>
<name>Alfred, M.</name>
</author>
<author>
<name>Ali, S.</name>
</author>
<author>
<name>Alsayegh, A.</name>
</author>
<author>
<name>Altaf, S.</name>
</author>
<author>
<name>Amedi, H.</name>
</author>
<author>
<name>Anderson, D. M.</name>
</author>
<author>
<name>Andrieux, V. V.</name>
</author>
<author>
<name>Angerami, A.</name>
</author>
<author>
<name>Applegate, N.</name>
</author>
<author>
<name>Aso, H.</name>
</author>
<author>
<name>Aune, S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163358</id>
<updated>2026-03-08T03:26:19Z</updated>
<published>2025-08-12T00:00:00Z</published>
<summary type="text">Measurement of charged hadron multiplicity in Au+Au collisions at √s_NN = 200 GeV with the sPHENIX detector
Abdulhamid, M. I.; Acharya, U.; Adams, E. R.; Adawi, G.; Aidala, C. A.; Akiba, Y.; Alfred, M.; Ali, S.; Alsayegh, A.; Altaf, S.; Amedi, H.; Anderson, D. M.; Andrieux, V. V.; Angerami, A.; Applegate, N.; Aso, H.; Aune, S.
The pseudorapidity distribution of charged hadrons produced in Au+Au collisions at a center-of-mass energy of √s_NN = 200 GeV is measured using data collected by the sPHENIX detector. Charged hadron yields are extracted by counting cluster pairs in the inner and outer layers of the Intermediate Silicon Tracker, with corrections applied for detector acceptance, reconstruction efficiency, combinatorial pairs, and contributions from secondary decays. The measured distributions cover |η| &lt; 1.1 across various centralities, and the average pseudorapidity density of charged hadrons at mid-rapidity is compared to predictions from Monte Carlo heavy-ion event generators. This result, featuring full azimuthal coverage at mid-rapidity, is consistent with previous experimental measurements at the Relativistic Heavy Ion Collider, thereby supporting the broader sPHENIX physics program.
</summary>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for a heavy pseudoscalar Higgs boson decaying to a 125 GeV Higgs boson and a Z boson in final states with two tau and two light leptons in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163357" rel="alternate"/>
<author>
<name>Chekhovsky, V.</name>
</author>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<id>https://hdl.handle.net/1721.1/163357</id>
<updated>2026-03-08T03:28:14Z</updated>
<published>2025-10-09T00:00:00Z</published>
<summary type="text">Search for a heavy pseudoscalar Higgs boson decaying to a 125 GeV Higgs boson and a Z boson in final states with two tau and two light leptons in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.
A search for a heavy pseudoscalar Higgs boson, A, decaying to a 125 GeV Higgs boson h and a Z boson is presented. The h boson is identified via its decay to a pair of tau leptons, while the Z boson is identified via its decay to a pair of electrons or muons. The search targets the production of the A boson via the gluon-gluon fusion process, gg → A, and in association with bottom quarks, bb̄A. The analysis uses a data sample corresponding to an integrated luminosity of 138 fb⁻¹ collected with the CMS detector at the CERN LHC in proton-proton collisions at a centre-of-mass energy of √s = 13 TeV. Constraints are set on the product of the cross sections of the A production mechanisms and the A → Zh decay branching fraction. The observed (expected) upper limit at 95% confidence level ranges from 0.049 (0.060) pb to 1.02 (0.79) pb for the gg → A process and from 0.053 (0.059) pb to 0.79 (0.61) pb for the bb̄A process in the probed range of the A boson mass, mA, from 225 GeV to 1 TeV. The results of the search are used to constrain parameters within the M_h,EFT^125 benchmark scenario of the minimal supersymmetric extension of the standard model. Values of tan β below 2.2 are excluded in this scenario at 95% confidence level for all mA values in the range from 225 to 350 GeV.
</summary>
<dc:date>2025-10-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurements of inclusive and differential cross sections for top quark production in association with a Z boson in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163356" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<author>
<name>Schwarz, D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163356</id>
<updated>2026-03-08T03:28:13Z</updated>
<published>2025-02-26T00:00:00Z</published>
<summary type="text">Measurements of inclusive and differential cross sections for top quark production in association with a Z boson in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
Measurements are presented of inclusive and differential cross sections for Z boson associated production of top quark pairs ( t t ¯ Z ) and single top quarks (tZq or tWZ). The data were recorded in proton-proton collisions at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb−1. Events with three or more leptons (electrons or muons) are selected, and a multiclass deep neural network is used to separate three event categories: the t t ¯ Z and tWZ processes, the tZq process, and the backgrounds. A profile likelihood approach is used to unfold the differential cross sections, to account for systematic uncertainties, and to determine the correlations between the two signal categories in one global fit. The inclusive cross sections for a dilepton invariant mass between 70 and 110 GeV are measured to be 1.14 ± 0.07 pb for the sum of t t ¯ Z and tWZ, and 0.81 ± 0.10 pb for tZq, in good agreement with theoretical predictions.
</summary>
<dc:date>2025-02-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Priming agents transiently reduce the clearance of cell-free DNA to improve liquid biopsies</title>
<link href="https://hdl.handle.net/1721.1/163355" rel="alternate"/>
<author>
<name>Martin-Alonso, Carmen</name>
</author>
<author>
<name>Tabrizi, Shervin</name>
</author>
<author>
<name>Xiong, Kan</name>
</author>
<author>
<name>Blewett, Timothy</name>
</author>
<author>
<name>Sridhar, Sainetra</name>
</author>
<author>
<name>Crnjac, Andjela</name>
</author>
<author>
<name>Patel, Sahil</name>
</author>
<author>
<name>An, Zhenyi</name>
</author>
<author>
<name>Bekdemir, Ahmet</name>
</author>
<author>
<name>Shea, Douglas</name>
</author>
<author>
<name>Wang, Shih-Ting</name>
</author>
<author>
<name>Rodriguez-Aponte, Sergio</name>
</author>
<author>
<name>Naranjo, Christopher A</name>
</author>
<author>
<name>Rhoades, Justin</name>
</author>
<author>
<name>Kirkpatrick, Jesse D</name>
</author>
<author>
<name>Fleming, Heather E</name>
</author>
<author>
<name>Amini, Ava P</name>
</author>
<author>
<name>Golub, Todd R</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Bhatia, Sangeeta N</name>
</author>
<author>
<name>Adalsteinsson, Viktor A</name>
</author>
<id>https://hdl.handle.net/1721.1/163355</id>
<updated>2026-03-08T03:28:38Z</updated>
<published>2024-01-19T00:00:00Z</published>
<summary type="text">Priming agents transiently reduce the clearance of cell-free DNA to improve liquid biopsies
Martin-Alonso, Carmen; Tabrizi, Shervin; Xiong, Kan; Blewett, Timothy; Sridhar, Sainetra; Crnjac, Andjela; Patel, Sahil; An, Zhenyi; Bekdemir, Ahmet; Shea, Douglas; Wang, Shih-Ting; Rodriguez-Aponte, Sergio; Naranjo, Christopher A; Rhoades, Justin; Kirkpatrick, Jesse D; Fleming, Heather E; Amini, Ava P; Golub, Todd R; Love, J Christopher; Bhatia, Sangeeta N; Adalsteinsson, Viktor A
Liquid biopsies enable early detection and monitoring of diseases such as cancer, but their sensitivity remains limited by the scarcity of analytes such as cell-free DNA (cfDNA) in blood. Improvements to sensitivity have primarily relied on enhancing sequencing technology ex vivo. We sought to transiently augment the level of circulating tumor DNA (ctDNA) in a blood draw by attenuating its clearance in vivo. We report two intravenous priming agents given 1 to 2 hours before a blood draw to recover more ctDNA. Our priming agents consist of nanoparticles that act on the cells responsible for cfDNA clearance and DNA-binding antibodies that protect cfDNA. In tumor-bearing mice, they greatly increase the recovery of ctDNA and improve the sensitivity for detecting small tumors.
</summary>
<dc:date>2024-01-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vaccine targeting to mucosal lymphoid tissues promotes humoral immunity in the gastrointestinal tract</title>
<link href="https://hdl.handle.net/1721.1/163354" rel="alternate"/>
<author>
<name>Kocabiyik, Ozgun</name>
</author>
<author>
<name>Amlashi, Parastoo</name>
</author>
<author>
<name>Vo, A Lina</name>
</author>
<author>
<name>Suh, Heikyung</name>
</author>
<author>
<name>Rodriguez-Aponte, Sergio A</name>
</author>
<author>
<name>Dalvie, Neil C</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Andrabi, Raiees</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<id>https://hdl.handle.net/1721.1/163354</id>
<updated>2026-03-08T03:28:39Z</updated>
<published>2024-05-29T00:00:00Z</published>
<summary type="text">Vaccine targeting to mucosal lymphoid tissues promotes humoral immunity in the gastrointestinal tract
Kocabiyik, Ozgun; Amlashi, Parastoo; Vo, A Lina; Suh, Heikyung; Rodriguez-Aponte, Sergio A; Dalvie, Neil C; Love, J Christopher; Andrabi, Raiees; Irvine, Darrell J
Viruses, bacteria, and parasites frequently cause infections in the gastrointestinal tract, but traditional vaccination strategies typically elicit little or no mucosal antibody responses. Here, we report a strategy to effectively concentrate immunogens and adjuvants in gut-draining lymph nodes (LNs) to induce gut-associated mucosal immunity. We prepared nanoemulsions (NEs) based on biodegradable oils commonly used as vaccine adjuvants, which encapsulated a potent Toll-like receptor agonist and displayed antigen conjugated to their surface. Following intraperitoneal administration, these NEs accumulated in gut-draining mesenteric LNs, priming strong germinal center responses and promoting B cell class switching to immunoglobulin A (IgA). Optimized NEs elicited 10- to 1000-fold higher antigen-specific IgG and IgA titers in the serum and feces, respectively, compared to free antigen mixed with NE, and strong neutralizing antibody titers against severe acute respiratory syndrome coronavirus 2. Thus, robust gut humoral immunity can be elicited by exploiting the unique lymphatic collection pathways of the gut with a lymph-targeting vaccine formulation.
</summary>
<dc:date>2024-05-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expansion of tumor-reactive CD8+ T cell clonotypes occurs in the spleen in response to immune checkpoint blockade</title>
<link href="https://hdl.handle.net/1721.1/163353" rel="alternate"/>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Horton, Brendan L</name>
</author>
<author>
<name>Bhandarkar, Vidit</name>
</author>
<author>
<name>Van, Richard</name>
</author>
<author>
<name>Dinter, Teresa</name>
</author>
<author>
<name>Zagorulya, Maria</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Spranger, Stefani</name>
</author>
<id>https://hdl.handle.net/1721.1/163353</id>
<updated>2026-03-08T03:28:40Z</updated>
<published>2024-09-13T00:00:00Z</published>
<summary type="text">Expansion of tumor-reactive CD8+ T cell clonotypes occurs in the spleen in response to immune checkpoint blockade
Morgan, Duncan M; Horton, Brendan L; Bhandarkar, Vidit; Van, Richard; Dinter, Teresa; Zagorulya, Maria; Love, J Christopher; Spranger, Stefani
Immune checkpoint blockade (ICB) enhances T cell responses against cancer, leading to long-term survival in a fraction of patients. CD8+ T cell differentiation in response to chronic antigen stimulation is highly complex, and it remains unclear precisely which T cell differentiation states at which anatomic sites are critical for the response to ICB. We identified an intermediate-exhausted population in the white pulp of the spleen which underwent significant expansion in response to ICB and gave rise to the majority of tumor-infiltrating clonotypes. Increased systemic antigen perturbed differentiation of this population towards a more circulatory exhausted_KLR state, while a lack of cross-presented tumor antigen blunted its differentiation in the spleen. An analogous population of exhausted_KLR CD8+ T cells in human blood samples exhibited diminished tumor-trafficking ability. Collectively, our data demonstrate the critical role of antigen density within the spleen for the differentiation and expansion of T cell clonotypes in response to ICB.
</summary>
<dc:date>2024-09-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drell-Yan transverse-momentum spectra at N3LL′ and approximate N4LL with SCETlib</title>
<link href="https://hdl.handle.net/1721.1/163352" rel="alternate"/>
<author>
<name>Billis, Georgios</name>
</author>
<author>
<name>Michel, Johannes K. L.</name>
</author>
<author>
<name>Tackmann, Frank J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163352</id>
<updated>2026-03-08T03:28:17Z</updated>
<published>2025-02-25T00:00:00Z</published>
<summary type="text">Drell-Yan transverse-momentum spectra at N3LL′ and approximate N4LL with SCETlib
Billis, Georgios; Michel, Johannes K. L.; Tackmann, Frank J.
We provide state-of-the-art precision QCD predictions for the fiducial W and Z boson transverse momentum spectra at the LHC at N3LL′ and approximate N4LL in resummed perturbation theory, matched to available O(αs3) fixed-order results. Our predictions consistently combine all information from across the spectrum in a unified way, ranging from the nonperturbative region of small transverse momenta to the fixed-order tail, with an emphasis on estimating the magnitude of residual perturbative uncertainties, and in particular of those related to the matching. Parametric uncertainties related to the strong coupling, the collinear PDFs, and the nonperturbative transverse-momentum-dependent (TMD) dynamics are studied in detail. To assess the latter, we explicitly demonstrate how the full complexity of flavor and Bjorken x-dependent TMD dynamics can be captured by a single, effective nonperturbative function for the resonant production of any given vector boson at a given collider. We point out that the cumulative p T Z cross section at the level of precision enabled by our predictions provides strong constraining power for PDF determinations at full N3LO.
</summary>
<dc:date>2025-02-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for a heavy resonance decaying into a Z and a Higgs boson in events with an energetic jet and two electrons, two muons, or missing transverse momentum in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163351" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163351</id>
<updated>2026-03-08T03:28:16Z</updated>
<published>2025-02-13T00:00:00Z</published>
<summary type="text">Search for a heavy resonance decaying into a Z and a Higgs boson in events with an energetic jet and two electrons, two muons, or missing transverse momentum in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.
A search is presented for a heavy resonance decaying into a Z boson and a Higgs (H) boson. The analysis is based on data from proton-proton collisions at a centre-of-mass energy of 13 TeV corresponding to an integrated luminosity of 138 fb−1, recorded with the CMS experiment in the years 2016–2018. Resonance masses between 1.4 and 5 TeV are considered, resulting in large transverse momenta of the Z and H bosons. Final states that result from Z boson decays to pairs of electrons, muons, or neutrinos are considered. The H boson is reconstructed as a single large-radius jet, recoiling against the Z boson. Machine-learning flavour-tagging techniques are employed to identify decays of a Lorentz-boosted H boson into pairs of charm or bottom quarks, or into four quarks via the intermediate H → WW* and ZZ* decays. The analysis targets H boson decays that were not generally included in previous searches using the H → b b ¯ channel. Compared with previous analyses, the sensitivity for high resonance masses is improved significantly in the channel where at most one b quark is tagged.
</summary>
<dc:date>2025-02-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the CKM angle γ in B± → DK*(892)± decays</title>
<link href="https://hdl.handle.net/1721.1/163350" rel="alternate"/>
<author>
<name>Aaij, R.</name>
</author>
<author>
<name>Abdelmotteleb, A. S. W.</name>
</author>
<author>
<name>Abellan Beteta, C.</name>
</author>
<author>
<name>Abudinén, F.</name>
</author>
<author>
<name>Ackernley, T.</name>
</author>
<author>
<name>Adefisoye, A. A.</name>
</author>
<author>
<name>Adeva, B.</name>
</author>
<author>
<name>Adinolfi, M.</name>
</author>
<author>
<name>Adlarson, P.</name>
</author>
<author>
<name>Agapopoulou, C.</name>
</author>
<author>
<name>Aidala, C. A.</name>
</author>
<author>
<name>Ajaltouni, Z.</name>
</author>
<author>
<name>Akar, S.</name>
</author>
<author>
<name>Akiba, K.</name>
</author>
<author>
<name>Albicocco, P.</name>
</author>
<author>
<name>Albrecht, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163350</id>
<updated>2026-03-08T03:28:15Z</updated>
<published>2025-02-18T00:00:00Z</published>
<summary type="text">Measurement of the CKM angle γ in B± → DK*(892)± decays
Aaij, R.; Abdelmotteleb, A. S. W.; Abellan Beteta, C.; Abudinén, F.; Ackernley, T.; Adefisoye, A. A.; Adeva, B.; Adinolfi, M.; Adlarson, P.; Agapopoulou, C.; Aidala, C. A.; Ajaltouni, Z.; Akar, S.; Akiba, K.; Albicocco, P.; Albrecht, J.
Measurements of CP observables and the CKM angle γ are performed in B± → DK*(892)± decays, where D represents a superposition of D0 and D ¯ 0 states, using the LHCb dataset collected during Run 1 (2011–2012) and Run 2 (2015–2018). A study of this channel is presented with the D meson reconstructed in two-body final states K±π∓, K+K− and π+π−; four-body final states K±π∓π±π∓ and π+π−π+π−; and three-body final states K S 0 π + π − and K S 0 K + K − . This analysis includes the first observation of the suppressed B± → [π±K∓]DK*± and B± → [π±K∓π±π∓]DK*± decays. The combined result gives γ = (63 ± 13)°.
</summary>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the tt¯H and tH production rates in the H → bb¯ decay channel using proton-proton collision data at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163349" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<author>
<name>Schwarz, D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163349</id>
<updated>2026-03-08T03:28:11Z</updated>
<published>2025-02-14T00:00:00Z</published>
<summary type="text">Measurement of the tt¯H and tH production rates in the H → bb¯ decay channel using proton-proton collision data at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.
An analysis of the production of a Higgs boson (H) in association with a top quark-antiquark pair ( t t ¯ H ) or a single top quark (tH) is presented. The Higgs boson decay into a bottom quark-antiquark pair (H → b b ¯ ) is targeted, and three different final states of the top quark decays are considered, defined by the number of leptons (electrons or muons) in the event. The analysis utilises proton-proton collision data collected at the CERN LHC with the CMS experiment at √s = 13 TeV in 2016–2018, which correspond to an integrated luminosity of 138 fb−1. The observed t t ¯ H production rate relative to the standard model expectation is 0.33 ± 0.26 = 0.33 ± 0.17 (stat) ± 0.21 (syst). Additionally, the t t ¯ H production rate is determined in intervals of Higgs boson transverse momentum. An upper limit at 95% confidence level is set on the tH production rate of 14.6 times the standard model prediction, with an expectation of 19.3 +9.2 −6.0. Finally, constraints are derived on the strength and structure of the coupling between the Higgs boson and the top quark from simultaneous extraction of the t t ¯ H and tH production rates, and the results are combined with those obtained in other Higgs boson decay channels.
</summary>
<dc:date>2025-02-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Differential cross section measurements for the production of top quark pairs and of additional jets using dilepton events from pp collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163348" rel="alternate"/>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Escalante Del Valle, A.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Lechner, L.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Paulitsch, P.</name>
</author>
<author>
<name>Pitters, F. M.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163348</id>
<updated>2026-03-08T03:28:09Z</updated>
<published>2025-02-11T00:00:00Z</published>
<summary type="text">Differential cross section measurements for the production of top quark pairs and of additional jets using dilepton events from pp collisions at √s = 13 TeV
Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Escalante Del Valle, A.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Lechner, L.; Liko, D.; Mikulec, I.; Paulitsch, P.; Pitters, F. M.; Schieck, J.; Schöfbeck, R.
Differential cross sections for top quark pair ( t t ¯ ) production are measured in proton-proton collisions at a center-of-mass energy of 13 TeV using a sample of events containing two oppositely charged leptons. The data were recorded with the CMS detector at the CERN Large Hadron Collider and correspond to an integrated luminosity of 138 fb−1. The differential cross sections are measured as functions of kinematic observables of the t t ¯ system, the top quark and antiquark and their decay products, as well as of the number of additional jets in the event. The results are presented as functions of up to three variables and are corrected to the parton and particle levels. When compared to standard model predictions based on quantum chromodynamics at different levels of accuracy, it is found that the calculations do not always describe the observed data. The deviations are found to be largest for the multi-differential cross sections.
</summary>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for dark matter produced in association with a pair of bottom quarks in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163347" rel="alternate"/>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Schieck, J.</name>
</author>
<author>
<name>Schöfbeck, R.</name>
</author>
<author>
<name>Schwarz, D.</name>
</author>
<author>
<name>Sonawane, M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163347</id>
<updated>2026-03-08T03:28:03Z</updated>
<published>2025-02-11T00:00:00Z</published>
<summary type="text">Search for dark matter produced in association with a pair of bottom quarks in proton-proton collisions at √s = 13 TeV
Hayrapetyan, A.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.; Mikulec, I.; Schieck, J.; Schöfbeck, R.; Schwarz, D.; Sonawane, M.
A search for dark matter (DM) particles produced in association with bottom quarks is presented. The analysis uses proton-proton collision data at a center-of-mass energy of √s = 13 TeV, corresponding to an integrated luminosity of 138 fb−1. The search is performed in a final state with large missing transverse momentum and a pair of jets originating from bottom quarks. No significant excess of data is observed with respect to the standard model expectation. Results are interpreted in the context of a type-II two-Higgs-doublet model with an additional light pseudoscalar (2HDM+a). An upper limit is set on the mass of the lighter pseudoscalar, probing masses up to 260 GeV at 95% confidence level. Sensitivity to the parameter space with the ratio of the vacuum expectation values of the two Higgs doublets, tan β, greater than 15 is achieved, capitalizing on the enhancement of couplings between pseudoscalars and bottom quarks at high tan β.
</summary>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boundary terms in string field theory</title>
<link href="https://hdl.handle.net/1721.1/163346" rel="alternate"/>
<author>
<name>Fırat, Atakan H.</name>
</author>
<author>
<name>Mamade, Raji A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163346</id>
<updated>2026-03-08T03:28:02Z</updated>
<published>2025-02-11T00:00:00Z</published>
<summary type="text">Boundary terms in string field theory
Fırat, Atakan H.; Mamade, Raji A.
We supplement the string field theory action with boundary terms to make its variational principle well-posed. Central to our considerations is the violation of the stress-energy tensor conservation in non-compact CFTs due to the boundary terms. This manifests as the failure of the cyclicity of the BRST operator, which encodes the target space integration by parts identities at the level of the worldsheet. Using this failure, we argue that the free closed string field theory action admits a well-posed variational principle upon including an additional boundary contribution. We explicitly work out the resulting action up to the massless level and show that it is related to the expansion of the low-energy effective string action endowed with the Gibbons-Hawking-York term on a flat background. We also discuss the structure of the boundary terms in the interacting theory.
</summary>
<dc:date>2025-02-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Structural Approach to Measuring Time-varying Risk Aversion</title>
<link href="https://hdl.handle.net/1721.1/163345" rel="alternate"/>
<author>
<name>von Turkovich, Nick</name>
</author>
<id>https://hdl.handle.net/1721.1/163345</id>
<updated>2025-10-22T03:34:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Structural Approach to Measuring Time-varying Risk Aversion
von Turkovich, Nick
Non-homothetic preferences have the potential to rationalize important asset pricing facts including time-varying risk premia and business cycle movements in asset prices (e.g., Campbell and Cochrane (1999)). This paper offers a structural approach to measuring time-varying risk aversion. Motivated by the literature on consumption commitments (e.g., Flavin and Nakagawa (2008), Chetty and Szeidl (2016), Chetty, Sandor, and Szeidl (2017)), I develop a model in which investors have nonseparable preferences over housing and non-housing consumption, and investors must consume a minimum amount of housing each period. Non-housing consumption is assumed to be flexibly chosen. The key insight is that the intratemporal optimality condition between the two goods reveals information about the surplus consumption ratio, a key variable driving risk aversion. A cointegrating relationship between relative quantities and prices allows us to identify the elasticity of intratemporal substitution and measure surplus housing consumption. Using aggregate U.S. consumption data from 1959 to the present, the measured surplus consumption ratio demonstrates clear business cycle fluctuations, rising during expansions and falling during recessions. Consistent with the theory, this measure also predicts future excess returns.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities</title>
<link href="https://hdl.handle.net/1721.1/163344" rel="alternate"/>
<author>
<name>Epstein, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163344</id>
<updated>2025-10-22T03:34:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities
Epstein, Andrew
The Commonwealth of Massachusetts has ambitious decarbonization goals enshrined in law and has been establishing the regulations to achieve them. Through its Department of Public Utilities regulatory rulings, the state has required local gas and electric utilities to pursue decarbonization not only by reducing the emissions of their electric supply but also by actively supporting gas load reduction. The residential heating sector dominates this effort, with programs like MassSave incentivizing customer adoption and now MA DPU 20-80-B requiring gas utilities to demonstrate, for all future gas investments, that they have sufficiently evaluated the possibility of non-pipeline alternatives, including but not limited to electrifying customers instead of reinvesting in the gas system.

This paper looks at a single Massachusetts utility, National Grid, and evaluates where its customers are switching to electric heat and which mechanisms are driving current adoption. It further evaluates where geographically National Grid could invest in electrification instead of replacing gas investments under the new 20-80-B order. In doing so, it establishes a model for cost-benefit calculations related to prospective NPA projects. This paper then examines the degree to which ongoing electrification efforts are aligned with one another. Finally, it explores concerns that the process of electrification might be regressive, leaving behind those who cannot afford to electrify their systems and requiring them to pay ever-increasing prices as the full gas system is paid for through rates from a shrinking population of consumers. To evaluate such concerns, it determines the geographic correlation between ongoing decarbonization efforts and communities already facing housing burden.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables</title>
<link href="https://hdl.handle.net/1721.1/163343" rel="alternate"/>
<author>
<name>Salata, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/163343</id>
<updated>2025-10-22T03:34:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables
Salata, Elizabeth
Electrical connection errors arise frequently during manufacturing. It is optimal to repair these errors at General Assembly Trim Line stations, when the wiring harnesses are still exposed and easily accessible. However, the time required to locate the cause of the errors often exceeds Trim station cycle times, so most repairs are delayed until after General Assembly. Due to the implications of shutting down the line, this results in significantly higher repair times, scrap costs, and resource use. To overcome these challenges, there is clear evidence supporting the use of Augmented Reality (AR) tools to innovate and streamline manufacturing processes. This master's thesis identified deficiencies in the current standard operating procedure for addressing errors and used a human-centered design approach to develop a novel error diagnostic process using an AR overlay technique to pinpoint on the vehicle where the problem lies. This thesis also conducted an experiment to assess the performance, success rate, and perceived cognitive load of the two processes. The data collected from the experiment provided sufficient evidence that the diagnostic process developed for this thesis reduces the elapsed time to locate the connection error by 75%, with a statistically significant reduction in overall perceived cognitive load. The likelihood of widespread adoption of the AR overlay process was assessed from an estimate of further AR hardware development, safety considerations in automotive manufacturing environments, and the level of enthusiasm of all stakeholders who were consulted for this research project.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers</title>
<link href="https://hdl.handle.net/1721.1/163342" rel="alternate"/>
<author>
<name>Sirgo, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/163342</id>
<updated>2025-10-22T03:34:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers
Sirgo, Alex
As the demand for data centers continues to grow, so does their energy consumption, making it increasingly important to develop sustainable and cost-effective strategies for powering them with carbon-free electricity. This thesis explores a techno-economic modeling framework that evaluates combinations of solar, wind, and battery energy storage systems to assess their ability to meet a data center’s electricity demand with on-site renewable generation. The model fills a gap in current literature by focusing on real-time energy matching using co-located infrastructure, rather than traditional off-site procurement methods like power purchase agreements and renewable energy credits.&#13;
&#13;
Using real-world weather and price data, the simulation calculates hourly generation, storage behavior, and grid interactions across a 20-year period. A financial model then calculates the levelized cost of energy (LCOE) for each system configuration. Results show that wind energy generally provides the lowest-cost renewable supply option, while hybrid solar and wind configurations improve renewable penetration. Battery storage plays a key role in shifting excess generation to periods of undersupply, but its economic viability depends on system sizing. Across different system configurations, renewable penetration ranged from 31.3% to 97.8%, while LCOE varied from $27.5/MWh to over $100/MWh, illustrating the trade-offs between cost and grid independence.&#13;
&#13;
By providing a structured analysis of the trade-offs between renewable penetration and cost, this research offers insight into how data centers and other energy-intensive facilities can design dedicated carbon-free energy systems. The findings underscore the importance of balancing resource diversity and storage investment to achieve decarbonization goals while maintaining economic viability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diagnostics in Additive Manufacturing Using Image-Based Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/163341" rel="alternate"/>
<author>
<name>Varma, Arun Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/163341</id>
<updated>2025-10-22T03:34:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Diagnostics in Additive Manufacturing Using Image-Based Machine Learning
Varma, Arun Alejandro
Additive Manufacturing (AM) is a vital capability in the aerospace industry. Blue Origin manufactures a substantial share of engine parts via metal AM. To meet growing customer demand, the company must dramatically increase engine throughput and, thus, 3D prints. Blue Origin has identified non-destructive testing (NDT) – particularly, Computed Tomography (CT) scanning – as an unsustainable bottleneck to expanding AM capacity. Not only is this process expensive, but, critically, there are not enough aerospace-grade CT machines in the world to support projected throughput. Without process change, meeting customer demand will soon become impossible. Yet, these scans provide important quality control, and any reduction in NDT must be accompanied by assurances of engine part integrity. This thesis introduces a diagnostic system that safely alleviates the bottleneck, and further yields insights that end-stage NDT alone cannot provide. The proposal is a machine learning system that evaluates the manufacturing process itself, examining layer-by-layer photographs captured during printing. It is predicated on two hypotheses: (1) These images, considered together, provide a synthetic 3D illustration of the build process; and (2) Machines can be taught to assess these process signatures dependably. The resulting system provides rich diagnostics. It achieves near-perfect anomaly recognition – 100% when using conservative defect thresholds. Operationally, the system can (at minimum) safely enable a 37-54% reduction in NDT, translating to millions of dollars in annual cost savings. In practice, this reduction will likely be higher. The system further enables early process intervention and a more data-driven approach to manufacturing intelligence. This work turns what began as an unsustainable bottleneck into an opportunity for enhanced quality control, process intelligence, and long-term manufacturing resilience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Substitution among Social Media Platforms: Evidence from App Tracking Panel Data</title>
<link href="https://hdl.handle.net/1721.1/163340" rel="alternate"/>
<author>
<name>Lagutina, Rina</name>
</author>
<id>https://hdl.handle.net/1721.1/163340</id>
<updated>2025-10-22T03:34:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Substitution among Social Media Platforms: Evidence from App Tracking Panel Data
Lagutina, Rina
This thesis explores a novel approach to competitive intelligence in the social media ecosystem by leveraging external mobile panel data to study substitution dynamics. It focuses on context-specific behavioral patterns to identify which platforms compete for user attention in given situations. Using mobile app session data from April 2023 for approximately 5,000 users, the analysis segments usage into three behavioral contexts – morning, evening, and at-home sessions – and characterizes user-app interactions through descriptive statistics. K-means clustering is applied to identify archetypes of usage behavior across these contexts, revealing distinct patterns such as quick-check habits, deep content consumption, and intensive texting. By comparing app usage profiles across contexts, the study uncovers shifts in how and when platforms are used, highlighting subtle substitution dynamics. To validate the findings, the study analyzes app usage during service outages, testing if potential substitutes see increased engagement when a competing platform is unavailable. These insights offer a richer, context-aware framework for product managers to uncover indirect competition and tailor platform strategies to specific user behaviors. Limitations include reliance on behavioral data without content-level detail, mobile-only focus, and demographic skew in the panel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh</title>
<link href="https://hdl.handle.net/1721.1/163339" rel="alternate"/>
<author>
<name>Bari, Md Mustabeen Ul</name>
</author>
<id>https://hdl.handle.net/1721.1/163339</id>
<updated>2025-10-22T03:34:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh
Bari, Md Mustabeen Ul
This thesis develops a systems-based policy framework for Generative Artificial Intelligence (GenAI) implementation in developing economies, with specific application to Bangladesh. While GenAI's potential productivity and labor market impacts are well-studied in developed economies, limited research addresses the challenges faced by developing countries positioned primarily as technology consumers rather than producers. The research employs causal loop diagramming to map interactions between five critical policy domains: human capital development, digital infrastructure, data sovereignty, sectoral stimulus, and governance.&#13;
&#13;
The resulting framework identifies four primary reinforcing mechanisms that can accelerate adoption and three balancing mechanisms related to labor displacement. To validate the framework, the research analyzes contrasting implementation approaches from India and Egypt, demonstrating the importance of cross-domain synergies in effective policy design.&#13;
&#13;
Applied to Bangladesh, the framework yields a dual-entry strategy focusing on healthcare and education sectors as initial implementation domains, leveraging the country's strategic advantages while addressing resource constraints through a consortia-based implementation model that creates institutional resilience. The thesis contributes both a reusable conceptual toolkit for analyzing GenAI policy in resource-constrained settings and an initial context-anchored roadmap for Bangladesh. Future research should refine the framework through longitudinal case studies while developing more detailed, stakeholder-engaged implementation plans for Bangladesh that include concrete budget allocations, institutional responsibilities, and measurable outcomes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Value of Digitizing Manufacturing Environments</title>
<link href="https://hdl.handle.net/1721.1/163338" rel="alternate"/>
<author>
<name>Briggi, Conor S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163338</id>
<updated>2025-10-22T03:34:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Value of Digitizing Manufacturing Environments
Briggi, Conor S.
There is significant variability and dispute around the value of digitally transformed manufacturing environments, and no single methodology is broadly accepted. The variability stems from time-dependencies, implementation effectiveness, and the dynamic environments digital solutions are deployed in. However, an accurate accounting of this value is essential to company strategic planning. The research outlines how to approach this variability, cost parameters to consider, primary sources of value generation, and best practices for implementing Smart Factories. A tool that addresses these issues was successfully developed and deployed at Stanley Black &amp; Decker, helping the company to assess the performance of its digitization efforts and tailor the delivered solution to optimize manufacturing performance. Results from this tool showed a positive expected return on investment and are provided to contextualize efforts in similar areas.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance</title>
<link href="https://hdl.handle.net/1721.1/163337" rel="alternate"/>
<author>
<name>Lorente Anon, Carla</name>
</author>
<id>https://hdl.handle.net/1721.1/163337</id>
<updated>2025-10-22T03:34:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance
Lorente Anon, Carla
Predictive maintenance plays a critical role in industrial operations by enabling organizations to detect potential equipment failures before they occur. However, while sensor data can identify anomalies such as excessive vibration or temperature fluctuations, technicians often struggle to efficiently diagnose and resolve the root causes of these alarms. This research presents a generative AI-powered chatbot designed to enhance the root cause diagnosis process in predictive maintenance by leveraging multimodal retrieval-augmented generation (RAG) and advanced AI-driven troubleshooting capabilities.&#13;
&#13;
The chatbot integrates multiple functionalities to support maintenance teams in resolving alarms quickly and accurately. Its time series analysis module processes real-time sensor data, identifying abnormal patterns and guiding users through a structured troubleshooting workflow. The retrieval-augmented generation (RAG) engine allows the chatbot to retrieve and synthesize relevant troubleshooting information from technical manuals, historical maintenance records, and structured knowledge bases, ensuring that technicians receive precise, grounded outputs. Additionally, the chatbot supports multimodal interactions, enabling users to upload images, audio, and video for more comprehensive diagnostics. By analyzing uploaded images of damaged components, transcribing spoken maintenance reports, and processing video footage of equipment malfunctions, the chatbot enhances problem identification and resolution.&#13;
&#13;
Another key feature of the chatbot is its interactive guided conversation system, which enables multi-turn dialogues that refine diagnostics dynamically based on technician input. Instead of providing static troubleshooting steps, the chatbot continuously adapts its responses to ensure that users receive the most relevant recommendations as the diagnostic process unfolds. To maintain safety and reliability, the system incorporates AI guardrails, filtering inappropriate or irrelevant inputs while ensuring that generated responses align with best practices for industrial maintenance.&#13;
&#13;
An evaluation framework is proposed to assess the chatbot’s effectiveness, focusing on retrieval accuracy, response relevance, and diagnostic efficiency. Initial results demonstrate approximately 30% reduction in diagnostic time, highlighting the chatbot’s potential to improve maintenance workflows, reduce downtime, and enhance technician productivity. This research underscores the transformative role of multimodal generative AI in predictive maintenance and lays the foundation for broader industrial applications. As a result of this work, a patent has been filed to protect the novel architecture and methods developed. Future work could focus on expanding retrieval capabilities to include video, integrating intelligent task automation for dynamic work order generation, and refining alarm prioritization using adaptive risk-based assessments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying over Individual Concepts</title>
<link href="https://hdl.handle.net/1721.1/163336" rel="alternate"/>
<author>
<name>Kobayashi, Filipe Hisao</name>
</author>
<id>https://hdl.handle.net/1721.1/163336</id>
<updated>2025-10-22T03:31:15Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Quantifying over Individual Concepts
Kobayashi, Filipe Hisao
Since Montague (1973), it has been assumed that quantificational DPs must, at least sometimes, be analyzed as quantifiers over individual concepts (i.e., functions from indices of evaluation to individuals). Because the domain of individual concepts is significantly greater than that of individuals, the challenge has always been how to properly constrain quantification over these objects. This dissertation proposes a solution to this problem by developing a novel theory as to how NPs are shifted from predicates of individuals into predicates of individual concepts. The idea is that, since NPs are interpreted as restrictors, the nature of this shifting mechanism will constrain quantification. The proposal bears a strong resemblance to the analysis of interrogative clauses of Karttunen (1977): suitable predicates of individual concepts are built from the interaction of a type-shifting operation and existential quantifiers. In three case studies, I show how this theory can solve old and new puzzles: (i) the different interpretations of sentences of the form ‘[Det NP] changed’ (Nathan 2006); (ii) two ambiguities in the interpretation of concealed questions (Heim 1979); and (iii) question intruders, a novel puzzle concerning the interpretation of both embedded interrogative clauses and concealed questions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Within ‘Reason’: A Study of Normative Language</title>
<link href="https://hdl.handle.net/1721.1/163335" rel="alternate"/>
<author>
<name>Watkins, Eliot</name>
</author>
<id>https://hdl.handle.net/1721.1/163335</id>
<updated>2025-10-22T03:31:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Within ‘Reason’: A Study of Normative Language
Watkins, Eliot
What do we mean when we say that someone ought to do something? What do we mean when we say that someone has a reason to do something? What do we mean when we say that someone has more reason to do one thing rather than another? The primary goal of this project is to shed light on these semantic questions.&#13;
&#13;
The picture of normative talk that I develop across this thesis has a distinctive feature: the notion of a reason (roughly, a fact that counts in favour of something) isn’t given any fundamental role to play. Instead, the meanings of ‘ought’, ‘must’ and ‘is a reason for…’ are all understood in terms of something gradable – they’re understood in terms of facts about how much reason there is for something to be done.&#13;
&#13;
Chapter One focuses on deontic modals like ‘ought’ and ‘must’. I argue that the standard semantics for these expressions is incompatible with the idea that facts about what you ought to do are connected with facts about what you have reason to do. I develop a new semantics for deontic modals which builds in the connections between ought and reasons from the ground up.&#13;
&#13;
Chapter Two centres on ‘reason’. We use ‘reason’ as both a count noun (as in “there is a reason for you to read my dissertation”) and a mass noun (as in “there is some reason for you to read my dissertation”). I argue that the best semantics for ‘reason’ will treat the mass form as fundamental. ‘Reason’ is a predicate of a particular kind of state – the state someone is in when they have reason to do something. I turn this result into an argument against the enduringly popular idea that count noun reasons are normatively fundamental.&#13;
&#13;
Chapter Three stays with reasons. According to a standard picture, normative reasons do not extend beyond the boundaries of agency. If something isn’t an agent – if it can’t do rudimentary practical reasoning – then there can’t be normative reasons for it to do one thing rather than another. I argue that this standard picture gets things totally wrong: there are reasons for non-agents to be certain ways and do certain things. We must not analyse what it is to be a reason by appealing to distinctively agential capacities.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lessons from CP in Passamaquoddy and beyond</title>
<link href="https://hdl.handle.net/1721.1/163334" rel="alternate"/>
<author>
<name>Grishin, Peter Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/163334</id>
<updated>2025-10-22T03:31:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Lessons from CP in Passamaquoddy and beyond
Grishin, Peter Nicholas
This thesis explores various aspects of CP morphosyntax in Passamaquoddy-Wolastoqey and other Algonquian languages and their consequences for broader generative syntactic theory. It consists of two parts: one investigates clause typing and clause size in Passamaquoddy, and the other investigates the properties of a CP-layer agreement marker, the peripheral suffix, across Algonquian. In addition, a lengthy background chapter offers new data and insight on the correct analysis of the inverse and obviation in Passamaquoddy and across Algonquian.&#13;
&#13;
Part I studies the distribution of the three morphologically-distinguished non-imperative clause types in Passamaquoddy: the independent, the conjunct, and the subordinative. I argue that their distribution in complementation and coordination structures falls out naturally from their structural size, following the work of Wurmbrand and Lohninger (2023) and Bjorkman (2012, 2013). I support this conclusion by carefully investigating how each clause type interacts with Ā phenomena like wh movement and long distance agreement, showing that various complex interactions between these syntactic processes are derivative of clause size: independent clauses and conjunct clauses under epistemic attitudes are large, phasal CPs, conjunct clauses under direct perception predicates are smaller, non-phasal CPs, and subordinative clauses are bare TPs.&#13;
&#13;
Part II studies two unexpected properties of peripheral agreement across Algonquian: (i) its preference for agreeing with third persons, no matter their syntactic role (found in all Algonquian languages); and (ii) its preference for agreeing with the least local goal (found in languages like Passamaquoddy, Ojibwe, and Wampanoag). I explore the consequences of these typologically unusual properties for the theory of φ agreement and provide an analysis of the cross-Algonquian variation we find in peripheral agreement (building on Xu 2021, 2022). I argue that Algonquian third person preference forces us to accept Nevins (2007) and Trommer’s (2008) conclusion that third person cannot be underspecified relative to first and second person, even in the syntax (contra Preminger 2019a and van Alem 2023). Additionally, I show that Algonquian lowest preference doesn’t force us to give up on standard locality properties of Agree, and argue for an analysis under which C agrees with all matching accessible goals, but only spells out the last Agree relation—Expone Outermost—building a parallel with similar ideas in the domain of multiple case assignment. Finally, I capture cross-Algonquian variation in peripheral agreement by varying the specification of the peripheral agreement probe and varying which arguments are able to shift out of the VP phase.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>*ABA in Multidimensional Paradigms: A MAX/DEP-based account</title>
<link href="https://hdl.handle.net/1721.1/163333" rel="alternate"/>
<author>
<name>Zompì, Stanislao</name>
</author>
<id>https://hdl.handle.net/1721.1/163333</id>
<updated>2025-10-22T03:31:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">*ABA in Multidimensional Paradigms: A MAX/DEP-based account
Zompì, Stanislao
The last decade and a half has witnessed intensive research into *ABA universals—generalizations such as “If a nominative and the corresponding dative have the same exponent, then the corresponding accusative has that exponent, too” (Caha 2009; Smith et al. 2019). Most existing work on these universals has only focused on one ‘paradigm column’ at a time, by checking a given paradigm’s nominative singular, accusative singular, and dative singular, for example, with no heed to whether any of the relevant exponents would also show up in that paradigm’s nominative plural, accusative plural, or dative plural. However, some recent literature has pointed out that inspecting full paradigms is crucial to our understanding of *ABA, because some classic accounts that derive *ABA column-internally turn out to also make predictions as to what may or may not happen across columns, and those predictions are often incorrect (cf., among others, Christopoulos &amp; Zompì 2022). In this dissertation, I review those incorrect predictions and replace them with a novel generalization specifically concerning *ABA-like effects in multidimensional paradigms. I then set out to derive this generalization by setting up an exponent-selection system wherein exponents may both be underspecified and be overspecified with respect to their exponenda, with each of these departures from a perfect match being penalized but not necessarily fatal. In particular, I explicitly implement this intuition in optimality-theoretic terms, via a strict-domination ranking of violable Max and Dep constraints (cf. in particular Ackema &amp; Neeleman 2005; Wolf 2008; Müller 2020), and I show that the resulting system, while restrictive enough to derive the desired generalization, is also powerful enough to afford a natural account of some notoriously unnatural (‘morphomic’) exponent distributions in the inflection of Germanic pronouns and Romance verbs.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intangible Investments and the Accrual-Cash Flow Relationship</title>
<link href="https://hdl.handle.net/1721.1/163332" rel="alternate"/>
<author>
<name>Soares, Fabio</name>
</author>
<id>https://hdl.handle.net/1721.1/163332</id>
<updated>2025-10-22T03:34:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Intangible Investments and the Accrual-Cash Flow Relationship
Soares, Fabio
This paper investigates whether the weakening negative relationship between accruals and operating cash flows can be attributed to the immediate expensing of intangible investments under current accounting standards. Building on the framework proposed by Green et al. (2022), I examine how the mechanical capitalization of intangible investments affects the accrual-cash flow relationship across firms with varying R&amp;D intensities. I show that the capitalization impacts the relationship in unexpected ways, indicating that the proposed rationale cannot fully explain the observed trend. I further exploit differences in accounting treatments under IFRS and US GAAP to test whether increased capitalization of intangible investments through development costs strengthens the relationship. I find that the relationship is significantly more negative under IFRS than US GAAP, independently of R&amp;D expenditure, suggesting that increased capitalization alone does not explain the differences. Additionally, the positive trend observed for high R&amp;D firms in both standards highlights that increased capitalization is insufficient to reverse the weakening trend. These results challenge the view that current accounting practices are the primary cause of the weakening accrual-cash flow relationship and underscore the need for further exploration of alternative explanations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits</title>
<link href="https://hdl.handle.net/1721.1/163331" rel="alternate"/>
<author>
<name>Zeng, Arnaud</name>
</author>
<id>https://hdl.handle.net/1721.1/163331</id>
<updated>2025-10-22T03:34:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits
Zeng, Arnaud
This thesis examines how sports leagues and media companies are evolving to better connect with Generation Z, a generation whose changing expectations and habits – on-demand and socially driven – are reshaping the landscape of sports consumption. With fewer Gen Z fans watching full games on traditional mediums, the industry is being pushed to rethink its approach, adapting not just how content is delivered, but also what kind of content is created. Through a combination of expert interviews and industry data, this paper looks at the rise of short-form content, the importance of digital-first platforms, and the growing influence of storytelling through influencers and behind-the-scenes content. It also explores how new competition formats are reshaping what it now means to be a fan. The goal is to understand how the sports ecosystem is adjusting to remain relevant to its youngest audience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation</title>
<link href="https://hdl.handle.net/1721.1/163330" rel="alternate"/>
<author>
<name>Xi, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/163330</id>
<updated>2025-10-22T03:34:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation
Xi, Tiffany
In the footwear industry, the speed at which footwear designs reach the market affects a company's ability to accurately meet the demands of its customers, as the probability of consumer preferences changing increases with time. This research investigates the impact of incorporating metal additive manufacturing capabilities into the product creation process of a major athletic footwear company. The study aims to determine whether and under which applications metal additive manufacturing can increase the speed at which footwear designs reach the market, while maintaining or improving the desired product quality.&#13;
A case study approach was employed, focusing on the development of rubber outsole molds using metal additive manufacturing technology. The study compared two process flows that excluded and included metal additive manufacturing. The case study evaluated these processes based on the speed of the development process and the quality of the produced footwear samples. The footwear sample quality was measured against production-equivalent samples obtained from the company’s manufacturing partner. The results demonstrated that incorporating metal additive manufacturing capabilities led to a reduction in the time required for mold design and fabrication. This speed advantage was primarily attributed to the ability to directly fabricate detailed textures into the mold, eliminating the need for outsourced etching processes.&#13;
The visual quality of the samples produced did not fully match that of samples created by the company's manufacturing partners but was sufficient for initial sample development. Importantly, the traction properties were comparable to those of the manufacturing partner's samples, indicating that the functional quality of the samples is adequate for product development purposes.&#13;
This research provides valuable insights into the potential of metal additive manufacturing in accelerating footwear product development. Future work recommendations include exploring advanced modeling and design software and examining the impact of machine parameters on build quality. The findings of this study have implications for both the footwear industry and other sectors considering the integration of metal additive manufacturing technologies into their product development processes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach</title>
<link href="https://hdl.handle.net/1721.1/163329" rel="alternate"/>
<author>
<name>Zhang, Yu (Sherry)</name>
</author>
<id>https://hdl.handle.net/1721.1/163329</id>
<updated>2025-10-22T03:34:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach
Zhang, Yu (Sherry)
As impact investing increasingly aspires to drive systemic change, the question of how to evaluate such efforts remains underexplored. Traditional evaluation approaches are often grounded in linear causality and program-level outputs, and they struggle to capture the complexity, interdependence, and emergent nature of systemic transformation. This thesis investigates how systemic investing can be evaluated by integrating systems thinking, evaluation theory, and investing practice. It develops a conceptual framework of thirteen hallmarks that characterize systemic investing evaluation across dimensions such as time horizons, stakeholder engagement, cross-sector collaboration, and capital dynamics. Drawing on 46 real-world cases, the research identifies 112 indicators to make these hallmarks observable and assessable in practice. To support practical application, the thesis also introduces an AI-assisted scoring tool that automates the evaluation of narrative content using the framework. Together, these contributions aim to support more reflective, adaptive, and system-aware evaluation practices in the emerging field of systemic investing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow</title>
<link href="https://hdl.handle.net/1721.1/163328" rel="alternate"/>
<author>
<name>Sen, Shweta</name>
</author>
<id>https://hdl.handle.net/1721.1/163328</id>
<updated>2025-10-22T03:34:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow
Sen, Shweta
Conventional strategies for container load planning (CLP) predominantly emphasize maximizing container utilization, which can result in suboptimally-timed inventory arrival, increased inventory holding costs, and downstream operational inefficiencies. Using a real-world case study from a global footwear and apparel retailer, this research formulates a novel multi-objective mixed-integer linear programming (MOMILP) model that jointly considers container utilization, transportation and storage costs, and timing accuracy of inventory delivery. The proposed model utilizes a branch-and-bound algorithm to evaluate numerous load configurations, assessing the impact of different load rules and weighting parameters on transportation performance metrics and inventory flow. Results highlight the critical importance of prioritizing delivery precision in transportation management decisions, demonstrating that solely maximizing volume utilization can adversely affect overall cost efficiency when downstream inventory storage and operational requirements are considered. This work also provides a process map of load planning activities and identifies targeted operational improvements, such as consolidation bypass and purchase order (PO) partitioning, that can enhance inventory flow smoothness, reduce transportation costs, and support more responsive logistics networks. Collectively, this work extends existing CLP methodologies by incorporating delivery timing and inventory storage considerations into load planning decisions, offering practical enhancements for logistics optimization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Principles and Practices of Gap-Closing Investing</title>
<link href="https://hdl.handle.net/1721.1/163327" rel="alternate"/>
<author>
<name>Kapor, Mitchell</name>
</author>
<id>https://hdl.handle.net/1721.1/163327</id>
<updated>2025-10-22T03:34:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Principles and Practices of Gap-Closing Investing
Kapor, Mitchell
This thesis examines the principles and practices of gap-closing investing, a distinctive model of early-stage venture capital investing that seeks to close gaps in access, opportunity, and outcomes for low-income communities and communities of color. Developed by Dr. Freada Kapor Klein and Mitchell Kapor through Kapor Capital, gap-closing investing integrates social impact objectives with a performance-driven investment strategy. The thesis combines historical analysis of socially responsible investing and impact investing with case studies of venture-backed startups to situate gap-closing investing within a broader tradition of values-based finance. It traces the ethical roots of impact investing to religious traditions, the emergence of socially responsible investing funds in the 1970s, and the formalization of impact investing terminology in the late 2000s. Gap-closing investing is distinguished by a developmental approach to startup growth, a redefinition of founder selection criteria emphasizing “distance traveled” over pedigree, and a focus on mitigating structural barriers through capital allocation. The thesis critically compares gap-closing investing to Corporate Social Responsibility (CSR) and Environmental, Social, and Governance (ESG) frameworks, arguing that gap-closing uniquely centers systemic impact as a core investment goal rather than a secondary consideration. The findings challenge the perception that impact investing is inherently concessionary, using performance data from Kapor Capital’s portfolio to demonstrate that intentional, equity-focused investing can produce both superior financial returns and measurable social outcomes. Gap-closing investing is presented as both a pragmatic investment strategy and a model for using venture capital to drive systemic change toward a more inclusive economy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Model for Battery State of Health</title>
<link href="https://hdl.handle.net/1721.1/163326" rel="alternate"/>
<author>
<name>Garza Lozano, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/163326</id>
<updated>2025-10-22T03:34:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predictive Model for Battery State of Health
Garza Lozano, Catalina
As battery energy storage systems (BESS) become critical components of grid infrastructure, accurately assessing their State of Health (SoH) is essential for optimizing performance, reducing costs, and ensuring contractual compliance. This thesis investigates the development of accurate, real-time SoH estimation models for utility-scale battery storage sites operated by NextEra Energy. Current SoH measurements—derived from annual capacity tests and Battery Management System (BMS) data—are often inaccurate or infrequent, leading to either over- or under-augmentation and resulting in financial inefficiencies. &#13;
&#13;
To address this gap, four state estimation models were developed and evaluated: an Unscented Kalman Filter (UKF), a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), a multitask RNN, and a Delayed Reinforcement Learning (DRL) model. Each model uses operational data—such as voltage, current, temperature, and State of&#13;
Charge (SoC)—to estimate degradation patterns and predict SoH at the rack, lineup, and site levels. Their outputs were compared against ground-truth capacity test results from a large-scale battery storage site.&#13;
&#13;
The DRL model demonstrated the highest accuracy, achieving a deviation of only 1.6 months compared to capacity test data, significantly outperforming existing BMS readings and the other three models. These findings underscore the value of advanced machine learning techniques in enabling proactive maintenance, optimized augmentation scheduling, and cost-efficient storage site management. This research offers a scalable framework for real-time SoH estimation across large fleets of battery storage assets and contributes to the broader goal of improving grid reliability through smarter energy storage management.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems</title>
<link href="https://hdl.handle.net/1721.1/163325" rel="alternate"/>
<author>
<name>Sowards, Steffan</name>
</author>
<id>https://hdl.handle.net/1721.1/163325</id>
<updated>2025-10-22T03:34:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems
Sowards, Steffan
This work presents a study on the development and application of data-driven operational efficiency and throughput Key Performance Indicator (KPI) modeling for Robotic Mobile Fulfillment Systems (RMFS). Through rigorous analysis of extensive operational data from an operating RMFS, we demonstrate the efficacy of machine learning approaches in predicting and optimizing the performance of complex warehouse automation systems. The research employs advanced techniques, including gradient boosted bagged tree ensembles and AutoML, to capture complex input interactions and provide parallel predictions across multiple KPIs. Our models achieve a mean R² value of 0.7838 across all templates and KPIs, with particularly strong performance in our top-performing metric across templates (mean R² of 0.9660).&#13;
&#13;
The study introduces a novel framework for feature engineering and selection, emphasizing actionable inputs while excluding intermediate variables to enhance model interpretability and practical utility. We validate our approach against novel operating conditions, demonstrating the models’ ability to generalize to unseen scenarios. Interpretability techniques, including SHAP analysis and permutation feature importance, provide valuable insights into system behavior and key performance drivers.&#13;
&#13;
This research establishes a generalizable framework for leveraging data-driven modeling in predicting and optimizing brownfield warehouse automation system behavior. The developed approach offers significant potential for enhancing operational decision-making, system design, and strategic planning in the rapidly evolving field of e-commerce fulfillment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Essays on Bayesian Entrepreneurship: Evaluating and Commercializing Unconventional Ideas</title>
<link href="https://hdl.handle.net/1721.1/163324" rel="alternate"/>
<author>
<name>Gius, Luca</name>
</author>
<id>https://hdl.handle.net/1721.1/163324</id>
<updated>2025-10-22T03:31:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Essays on Bayesian Entrepreneurship: Evaluating and Commercializing Unconventional Ideas
Gius, Luca
This dissertation investigates a fundamental challenge complicating the evaluation and commercialization of entrepreneurial opportunities: some ideas are valuable precisely because not everyone recognizes their worth. The first essay analyzes barriers against the commercialization of contrarian ideas. Researchers working with unpopular AI algorithms tend to commercialize their work only after a successful public evaluation. Those who clear this hurdle subsequently achieve better entrepreneurial outcomes. A regression-discontinuity analysis shows that this partly reflects status quo bias: for unpopular methods only, winning a contest serves as a certification, channeling disproportionate resources to the winner while equally strong near-misses remain sidelined. The second essay finds that greater judge disagreement in venture competitions predicts higher future success, especially for more distinctive startups. The third essay shows that skewness in idea value exacerbates asymmetric information in markets for ideas. Using data from auctions for digital businesses, I illustrate how this can explain why online marketplaces for ideas have struggled to emerge despite lowering transaction costs: informational frictions severely depress bids and prevent high-value digital startups from trading. The final essay, coauthored with Alfonso Gambardella and Scott Stern, introduces the archetype of Homo Entrepreneuricus: an entrepreneur who deliberately tests subjective beliefs through structured experimentation to navigate uncertainty.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploiting Additive Structure in Algorithm Design and Fine-Grained Complexity</title>
<link href="https://hdl.handle.net/1721.1/163323" rel="alternate"/>
<author>
<name>Jin, Ce</name>
</author>
<id>https://hdl.handle.net/1721.1/163323</id>
<updated>2025-10-22T03:31:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploiting Additive Structure in Algorithm Design and Fine-Grained Complexity
Jin, Ce
In this thesis, we investigate the fine-grained complexity of various algorithmic problems with an additive flavor, including 3SUM, Subset Sum, and their close relatives. We explore their connections to various areas, such as graph algorithms, discrete optimization, combinatorial pattern matching, and computational geometry. Our new results include improved algorithms and conditional lower bounds for a wide range of problems, answering multiple open questions from the literature:&#13;
&#13;
• Conditional lower bounds for graph problems: We prove new lower bounds for 4-Cycle Listing and Approximate Distance Oracles conditioned on the 3SUM Hypothesis. As a key intermediate step, we show a fine-grained reduction from 3SUM to the special case of 3SUM where all pairwise sums of input numbers are distinct.&#13;
&#13;
• Combinatorial pattern matching: We design improved algorithms for Text-to-Pattern Hamming Distances, Pattern Matching with Wildcards, and Geometric Pattern Matching, by drawing connections from 3SUM and sparse convolution.&#13;
&#13;
• Knapsack-type problems: We obtain a pseudo-polynomial time algorithm for 0-1 Knapsack with (conditionally) near-optimal dependence on the maximum item weight, an improved approximation scheme for the counting problem #Knapsack, and improved exponential time algorithms for the total search problem Pigeonhole Equal Subset Sum.&#13;
&#13;
In order to obtain these results, we employ and develop techniques based on convolution algorithms and their extensions, as well as classic tools from additive combinatorics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Approach to Component Code Optimization for Wound Closure Portfolio</title>
<link href="https://hdl.handle.net/1721.1/163322" rel="alternate"/>
<author>
<name>Dubelier, Madeline</name>
</author>
<id>https://hdl.handle.net/1721.1/163322</id>
<updated>2025-10-22T03:34:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Systems Approach to Component Code Optimization for Wound Closure Portfolio
Dubelier, Madeline
Product portfolio management involves strategically analyzing, optimizing, and expanding a company’s offerings to maximize value and align with business goals. While companies often focus on portfolio expansion to meet evolving customer needs and gain market share, product deletion is frequently overlooked, leading to code proliferation and undermining operational efficiency. Effective variety management often requires input from stakeholders across the supply chain, yet few published methods take this approach. This work presents a systematic supply chain management approach to portfolio optimization using a case study from Johnson &amp; Johnson MedTech. The case study is on pledgets, key components in non-absorbable suture systems. Recent pledget product quality issues exposed the need for a systematic approach to reducing component variety and operational efficiency. A current-state analysis addressed multiple dimensions of complexity. The evaluation combined qualitative and quantitative data and led to a five-stage optimization strategy. The proposed future state portfolio reduces component variety by 60%, guided by three constraints: continue to meet customer needs, protect competitiveness, and reduce manufacturing complexity. This method provides a replicable model for rationalizing legacy portfolios in the medical device industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Content Creator Conduct</title>
<link href="https://hdl.handle.net/1721.1/163321" rel="alternate"/>
<author>
<name>Du, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/163321</id>
<updated>2025-10-22T03:31:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Content Creator Conduct
Du, Jason
This thesis investigates the behaviors of content creators. The first study examines whether musicians learn from the success of earlier songs when they create new ones, finding that tracks on a musician’s next album tend to be more similar to the songs that performed better on their current album. The second study explores the cultural, social, and psychological aspects of content creation by tracing first-person singular pronoun usage in contemporary music, revealing geographic, temporal, and genre-based patterns. The third study analyzes the association between content creators' learning tendencies and the explainability of previous outcomes, showing that news editors are more likely to resemble previous popular headlines when those outcomes are more explainable. Collectively, these studies facilitate understanding of the factors that underlie content creation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Optimization-Based Approach to Efficient Clearance Inventory Allocation</title>
<link href="https://hdl.handle.net/1721.1/163320" rel="alternate"/>
<author>
<name>Perez Munoz, Karla Mayra</name>
</author>
<id>https://hdl.handle.net/1721.1/163320</id>
<updated>2025-10-22T03:34:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Optimization-Based Approach to Efficient Clearance Inventory Allocation
Perez Munoz, Karla Mayra
Allocating clearance inventory effectively remains a critical challenge in retail environments characterized by short decision cycles, fluctuating demand, and operational constraints. Decisions made during the clearance period are particularly impactful, as they determine the final opportunity to recover value from unsold products before they lose relevance or perish. This thesis presents a mathematical optimization model designed to support the redistribution of discounted articles across a network of stores, with the objective of maximizing revenue while satisfying constraints related to stock availability, store capacity, and observed demand at the article-size level. Developed in collaboration with a leading global fashion retail company, the model was built to align with existing business processes and balances analytical rigor with simplicity in implementation. The model incorporates business-defined parameters and is tested using real operational data from selected distribution centers. It demonstrates significant improvements over the current practice of single-item allocation and addresses the computational challenges posed by the high dimensionality of real-world retail problems. By implementing efficient iterative procedures and demand-scaling mechanisms, the model ensures tractability while capturing the complexity of the business environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gas Network Preparations for Networked Geothermal</title>
<link href="https://hdl.handle.net/1721.1/163319" rel="alternate"/>
<author>
<name>Serbent, M. Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163319</id>
<updated>2025-10-22T03:34:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Gas Network Preparations for Networked Geothermal
Serbent, M. Patrick
As Massachusetts pursues its goal of achieving net-zero carbon emissions by 2050, the transition from natural gas to sustainable thermal energy solutions presents both opportunities and challenges for its 1.6 million natural gas customers. This thesis investigates the potential of networked geothermal systems as a viable alternative to traditional natural gas infrastructure, with a focus on leveraging existing gas network replacement programs, such as the Gas System Enhancement Plan (GSEP), to facilitate this shift. A four-phase methodology —encompassing site selection, model development, cost analysis, and business case formulation— evaluates the feasibility of integrating high-density polyethylene (HDPE) piping into leak-prone pipe replacement efforts as a preparatory step for future geothermal or hydrogen applications. Findings suggest that HDPE offers potential material and inventory cost advantages over medium-density polyethylene (MDPE), with added flexibility for low-carbon conversions, though significant upfront costs and regulatory uncertainties remain barriers. An example site already scheduled for main replacement work showed a 6% total increase in cost for the project based on the change in pipe from MDPE to HDPE. This work underscores the potential of aligning infrastructure modernization with climate goals, offering a framework for utilities like National Grid to navigate the energy transition in cold, densely populated regions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology</title>
<link href="https://hdl.handle.net/1721.1/163318" rel="alternate"/>
<author>
<name>Siddiqui, Sameed Muneeb</name>
</author>
<id>https://hdl.handle.net/1721.1/163318</id>
<updated>2025-10-22T03:34:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology
Siddiqui, Sameed Muneeb
This thesis explores the dual imperatives of enhancing biosecurity and accelerating outbreak response. The research addresses two key areas. First, the thesis analyzes the implications of a national nucleic acid synthesis screening framework on outbreak response agility. A first-hand perspective is provided, identifying potential bottlenecks stemming from lagging customer verification and sequence screening approaches. Concrete solutions, such as pre-verification of first responders, priority processing channels, pre-approval of standard countermeasure sequences, and optimized computational screening, are proposed to mitigate these challenges and ensure rapid response capabilities without compromising biosecurity. Second, a machine learning architecture for biological sequence modeling, “Lyra,” is presented. Lyra is grounded in the biological principle of epistasis and leverages state space models (SSMs) combined with projected gated convolutions to efficiently capture both local and long-range sequence interactions. We demonstrate new mathematical theory connecting SSMs with the approximation of polynomial functions, which is key to predicting epistatic effects. This subquadratic architecture achieves state-of-the-art performance on diverse biological tasks, including protein fitness landscape prediction, RNA function prediction, and CRISPR guide design, while utilizing substantially fewer parameters and computational resources than existing foundation models like transformers. The thesis concludes by highlighting the synergistic potential of advanced machine learning and thoughtful policy to significantly improve pandemic preparedness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In-or-Out: Creators’ Odyssey for Success</title>
<link href="https://hdl.handle.net/1721.1/163317" rel="alternate"/>
<author>
<name>Li, Zelin</name>
</author>
<id>https://hdl.handle.net/1721.1/163317</id>
<updated>2025-10-22T03:34:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">In-or-Out: Creators’ Odyssey for Success
Li, Zelin
The creator economy is flourishing, driven by shifts in advertising budgets and a surge in the supply of content creators. This has introduced a new challenge for firms: identifying which early-stage creators will grow to become stars. By identifying future stars, firms can choose who to invest their scarce resources in. They may also be able to purchase effective influence at a (proportionately) lower price than what they will pay once a creator becomes a star. Past research has shown that predicting which content will become viral is challenging. Instead, we focus on using content to predict which early-stage creators will grow their follower bases. We measure both the positioning of a creator’s early content and how the creator adjusts this positioning. We find that the initial position is not predictive of future success. However, subsequent adjustments in position are predictive, particularly if the creator’s initial follower base has grown consistently, rather than over a short period of rapid (viral) growth. Our insights inform the construction of predictive models that outperform baseline models in out-of-sample predictions of which creators will grow their followers the fastest.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Cooperation in Water Management: A Game-Theoretic&#13;
Approach to Sustainable Infrastructure in Chilean Mining</title>
<link href="https://hdl.handle.net/1721.1/163316" rel="alternate"/>
<author>
<name>Moscoso Restovic, Rodrigo Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/163316</id>
<updated>2025-10-22T03:34:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Cooperation in Water Management: A Game-Theoretic&#13;
Approach to Sustainable Infrastructure in Chilean Mining
Moscoso Restovic, Rodrigo Y.
Through a game-theoretic methodology, this thesis examines collaborative approaches to managing water infrastructure within Chilean mining operations. The research examines cooperative interactions among mining firms, local residents, and regulatory bodies to tackle water scarcity and growing demand in Chile's mining industry. It utilizes game theory with a focus on cooperative games and bargaining models to develop a structured analytical framework for analyzing stakeholder dynamics, including their incentives and cooperative opportunities.&#13;
The thesis centers on creating a mathematical model that shows stakeholders as rational entities who seek to maximize their benefits while facing resource constraints and regulatory limitations. The implementation of cooperative game theory allows for detailed examination of coalition building processes along with resource sharing agreements and benefit allocation practices which helps to define stable cooperative possibilities.&#13;
The primary findings show that mining companies achieve greater efficiency gains through water infrastructure collaboration than through separate individual investments. This thesis presents quantitative evidence that partnerships among mining projects generate significant financial savings and lead to better resource usage and positive environmental and social results.&#13;
Sensitivity analyses identify that cooperative stability depends on several critical factors, including the asymmetries among the different mining projects, the sequence in which investment decisions are made, and the transfer price of water sold to projects that prefer to free-ride. The final part of the thesis presents concrete suggestions for policymakers and industry leaders to develop cooperative frameworks through specific policy mechanisms and incentive systems that support long-term collaboration.&#13;
The study advances existing academic knowledge by utilizing detailed game-theoretic approaches to address practical problems in sustainable mining practices. The findings reveal that strategic partnerships serve as fundamental resource-management tools that can effectively address the urgent water scarcity challenges Chile faces.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving Manufacturing Best Practices Using Multimodal AI</title>
<link href="https://hdl.handle.net/1721.1/163315" rel="alternate"/>
<author>
<name>Zachary, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/163315</id>
<updated>2025-10-22T03:34:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Driving Manufacturing Best Practices Using Multimodal AI
Zachary, Mark
Multimodal artificial intelligence offers promising solutions for enhancing operational excellence in contract manufacturing, where small job shops typically operate with limited standardization and high process variability. This research develops a part similarity tool that integrates geometric, material, and scale information to improve quoting accuracy and engineering efficiency in high-mix, low-volume production environments. After examining the fragmented manufacturing landscape and reviewing current AI applications in manufacturing, the study introduces an approach based on Variational Autoencoders for encoding 3D geometry alongside material properties and dimensional scale information. The technical implementation addresses challenges of multimodal fusion, missing data handling, and computational efficiency, while a qualitative ablation study demonstrates how this comprehensive approach outperforms single-modal methods in manufacturing relevance. Engineers benefit from improved insights for manufacturing planning, while estimators achieve more consistent cost predictions using the multimodal system. Reinforcement learning with human feedback provides a mechanism for continuous refinement, creating a framework that bridges geometric similarity with manufacturing context and reduces subjectivity in critical business processes. The research contributes both theoretical insights into multimodal learning and practical implementation strategies for standardizing operations in contract manufacturing environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions</title>
<link href="https://hdl.handle.net/1721.1/163314" rel="alternate"/>
<author>
<name>Zeng, Bob</name>
</author>
<id>https://hdl.handle.net/1721.1/163314</id>
<updated>2025-10-22T03:34:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions
Zeng, Bob
This research explores the surge of Chinese manufacturing investments in Mexico as a strategic adaptation to recent global trade disruptions, specifically the U.S.–China trade tensions and the COVID-19 pandemic. By analyzing Chinese firms' motivations and strategies, the study highlights how they leverage Mexico’s strategic geographic proximity, favorable trade conditions under the USMCA, competitive labor market, and established industrial infrastructure to secure continued access to the North American market while minimizing tariff impacts and supply chain risks. Sector-specific analyses of the automotive, electronics, and renewable energy industries reveal distinct operational, regulatory, and cultural challenges encountered by these companies during their transition to Mexican production facilities. In addressing these challenges, Chinese firms have adopted strategies such as supply chain localization, rigorous adherence to North American regulatory frameworks, and effective cross-cultural management practices. Furthermore, the analysis situates this trend within the broader geopolitical context, emphasizing the role of evolving U.S. trade policies and proactive Mexican industrial initiatives in shaping the nearshoring landscape. The findings suggest that while Chinese investment in Mexico presents significant opportunities for industrial upgrading and enhanced bilateral cooperation, the longevity and effectiveness of these ventures depend on firms' strategic flexibility, deeper integration into local economies, and adept management of complex geopolitical and regulatory environments. By evaluating these elements, the research provides valuable insights into the drivers behind the increased Chinese presence in Mexico and the broader implications for global trade patterns, supply chain resilience, and regional economic integration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Sahara African Markets</title>
<link href="https://hdl.handle.net/1721.1/163313" rel="alternate"/>
<author>
<name>Zhu, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163313</id>
<updated>2025-10-22T03:34:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Sahara African Markets
Zhu, Yuan
This thesis examines the strategies and operational practices of Chinese fintech entrepreneurs in sub-Saharan African markets, with a focus on how they navigate regulatory fragmentation, localize business models, and build trust in low-infrastructure environments. Drawing on fieldwork and semi-structured interviews with founders, executives, and product leads from fifteen China-linked fintech firms across Nigeria, Kenya, and Francophone Africa, the study investigates how these actors engage with underdeveloped financial systems while adapting knowledge and models from China’s digital finance ecosystem. The research identifies several distinct approaches to market entry and adaptation, including platform integration, compliance-focused positioning, and informal ecosystem engagement. Findings suggest that these ventures do not simply export Chinese models but instead reconfigure them in response to local constraints in regulation, consumer trust, and institutional capacity. By analyzing firm-level strategies in diverse regulatory and market settings, this study contributes to broader discussions on transnational entrepreneurship, financial infrastructure development, and the evolving role of private actors in advancing digital inclusion across emerging economies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop</title>
<link href="https://hdl.handle.net/1721.1/163312" rel="alternate"/>
<author>
<name>Carson, Alix</name>
</author>
<id>https://hdl.handle.net/1721.1/163312</id>
<updated>2025-10-22T03:34:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop
Carson, Alix
Job shops with semi-autonomous work centers must understand their capacity utilization and financial state to maximize efficiency and profitability. Machine monitoring software allows managers to see the state of machines at any time and capture real-time capacity utilization. Job shops are positioned to maximize the use of these work centers and must connect their manufacturing and operations strategy to real-time shop data to maximize efficiency. This research is a case study in how a job shop can create a right-to-win strategy targeting jobs that are compatible and profitable for semi-autonomous machines.&#13;
&#13;
ADDMAN Precision Baltimore (APBAL), a precision machine shop in the aerospace and defense industry, is facing labor constraints and underutilized work centers. This research aims to develop a structured quoting strategy and strategic pricing model to optimize job allocation between APBAL’s two semi-autonomous machining centers: the Makino Machining Complex 2 (MMC) and the Fanuc Robodrill. By integrating qualitative observations, historical job data, and machine utilization metrics, this study identifies inefficiencies in current job assignment practices. Key findings indicate that aligning work center assignments with projected profitability and capacity utilization can improve overall efficiency. A decision-making framework and pricing matrix are proposed to enhance job quoting accuracy, optimize machine usage, and increase APBAL’s competitiveness in securing high-volume contracts. The results offer a scalable framework for APBAL and its parent company, ADDMAN Engineering, to deploy across other machining facilities, ultimately improving operational performance and financial outcomes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Technoeconomic Model for Maritime Applications of Green Power Technologies</title>
<link href="https://hdl.handle.net/1721.1/163311" rel="alternate"/>
<author>
<name>Tuana, Daniel I. S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163311</id>
<updated>2025-10-22T03:34:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Technoeconomic Model for Maritime Applications of Green Power Technologies
Tuana, Daniel I. S.
Growing societal and regulatory pressures are causing industries around the world to consider greener alternatives to conventional fossil fuel power technologies. As a result, power solution suppliers like CAT are facing strategic uncertainties: if, where, and when their core product markets will be disrupted by the adoption of novel alternative technologies. With the intention of helping to inform CAT’s future product and service strategy, in conjunction with previous research related to powering mines and data centers, this thesis outlines the development of a code to estimate and compare the total cost of ownership of battery, hydrogen fuel cell, and nuclear power technologies against incumbent fossil-fuel-driven systems in a variety of maritime scenarios, including serving shoreside port electricity demand and on-water power demand across a diverse set of vessel segments.&#13;
The code leverages first principles, empirical models, and researched assumptions to model the performance and costs of power systems in response to stochastically generated and deterministic power demand profiles over the useful lifetimes of the assets. For vessel applications, the code also estimates the volumes and masses of the alternative systems as a basis to judge their practicality. Hypothetical power systems for four archetypal ports and six vessel segments (across a range of power nodes) were studied to identify potential opportunities in and adjacent to the marine markets CAT currently serves.&#13;
The outcomes of the study align with conventional intuition regarding the application of the technologies considered. Under certain conditions, the results support the technoeconomic case for the implementation of battery technology on short-haul vessels whose operations are predictable and would not be disrupted by shortened refueling/recharging intervals. Similarly, the results show that adoption of small modular nuclear reactors at ports and on large vessels with consistently large baseload power demand can provide economic advantages over incumbent fossil fuel technologies. The results of the simulations are sensitive to several technology-agnostic parameters, including discount rates, fuel and electricity prices, demand growth rates, and other macroeconomic conditions. In the future, with ample case-specific data, the code developed for this thesis may provide convincing justification for the adoption of an alternative technology to serve the power demand of an individual port or vessel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrete Event Simulation as a Predictor for Factory Traffic Management</title>
<link href="https://hdl.handle.net/1721.1/163310" rel="alternate"/>
<author>
<name>Ramirez Echavarria, Esteban</name>
</author>
<id>https://hdl.handle.net/1721.1/163310</id>
<updated>2025-10-22T03:34:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Discrete Event Simulation as a Predictor for Factory Traffic Management
Ramirez Echavarria, Esteban
Manufacturing environments increasingly rely on automation and data-driven decision-making to optimize efficiency and production rates. This study explores the application of Discrete Event Simulation (DES) to model material flow and predict AGV (Automated Guided Vehicle), crane, and cart movements within a factory. The goal is to develop a digital twin that enables real-time decision-making, optimizes scheduling, and minimizes bottlenecks.&#13;
&#13;
To achieve this, we utilize SimPy, an open-source Python-based DES library, in conjunction with a custom-built API and React.js front-end interface. The study evaluates available DES software options and justifies the selection of SimPy based on flexibility, integration capabilities, and its suitability for modeling custom business rules. The solution is structured into modular components handling path planning, transporters, flows, stations, hot-cold starts, and utilities, ensuring adaptability to future improvements.&#13;
&#13;
A validation framework was established, utilizing historical data comparison and real-time validation to assess the simulation’s predictive accuracy. Over a 40-day testing period, the simulation achieved 89.6% accuracy and a sensitivity, or true positive rate (TPR), of 80.2%. The simulation provides a reliable first-pass scheduling tool that can be further refined with improved data collection.&#13;
&#13;
The findings indicate that while full automation of AGV deployment is not yet feasible, this study lays the foundation for future integration with the factory’s Vehicle Management System (VMS). Business implications include the potential for automated scheduling, enhanced material flow visibility, and optimization of capacity planning. Future work should focus on improving data accuracy, integrating live factory data streams, and refining algorithms for predictive scheduling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry</title>
<link href="https://hdl.handle.net/1721.1/163309" rel="alternate"/>
<author>
<name>Netteberg, Sofie F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163309</id>
<updated>2025-10-22T03:34:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry
Netteberg, Sofie F.
This thesis presents the development and implementation of a new product placement optimization model for a large global apparel and footwear company’s supply chain, aimed at maximizing network-wide profits while aligning with long-term strategic goals amidst demand volatility. The model leverages a mixed-integer linear programming approach, integrating probabilistic demand simulations to optimize the placement of new products within the company’s existing network of third-party partner company factories. Key elements of the model, including decision variables, price and cost coefficients, an objective function, and constraints that reflect operational realities and strategic priorities, are discussed in detail. Through analysis and results validation, this research demonstrates how data-driven optimization can improve network profitability and adherence to the company’s long-term strategic supply chain objectives. The thesis then includes an exploration of historic demand variability at the host company, followed by a recommendation to integrate probabilistic forecasting in network planning to generate production networks more robust to volatility in consumer product demand. The findings contribute to advancing data-driven decision-making in supply chain management and offer actionable insights for future product placement strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States</title>
<link href="https://hdl.handle.net/1721.1/163308" rel="alternate"/>
<author>
<name>Ni, Mengmeng</name>
</author>
<id>https://hdl.handle.net/1721.1/163308</id>
<updated>2025-10-22T03:34:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States
Ni, Mengmeng
This thesis investigates how government policy approaches shape regional entrepreneurial ecosystems and influence entrepreneurial strategy in strategic industries across China and the United States. Through comparative analysis of four region-industry pairs—Shanghai's semiconductor sector, Shenzhen's drone technology sector, Boston's biotechnology cluster, and New York's fintech ecosystem—the study examines the dynamic interplay between institutional design and entrepreneurial behavior. Drawing on Porter's Cluster Theory, Mazzucato's Entrepreneurial State concept, and the MIT REAP framework, the research develops a novel policy categorization encompassing four innovation governance tools: Cluster and Crisis Response Tools, Innovation Ecosystem Tools, Market-Shaping Tools, and Institutional Restructuring Tools. A qualitative case study methodology is employed, with in-depth firm-level analyses of Biren Technology in Shanghai and Moderna in Boston illustrating how entrepreneurs strategically respond to distinct institutional environments. The findings reveal four distinct models of innovation governance: Shanghai’s state-directed coordination, Shenzhen’s regulatory experimentation, Boston’s market-based orchestration, and New York’s regulation-centered oversight. Across contexts, entrepreneurs emerge as interpretive agents who actively leverage, adapt to, and at times reshape institutional conditions. This thesis contributes to the literature by offering comparative insights into the co-evolution of public policy and entrepreneurial strategy. It also provides practical implications for policymakers designing innovation ecosystems and for entrepreneurs navigating increasingly complex regulatory and technological landscapes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163307" rel="alternate"/>
<author>
<name>Gosen Cappellin, Carlos Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/163307</id>
<updated>2025-10-22T03:34:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain
Gosen Cappellin, Carlos Daniel
The medical technology company MedTechCo, specifically its Spine division, has deployed millions of implants in hospitals to meet demand. When inventory deployment and allocation are not managed appropriately to ensure that products are in the right place at the right time, excess inventory arises. Currently, MedTechCo Spine holds large amounts of excess inventory that are not utilized effectively. &#13;
&#13;
The objective of this research is to leverage a data-driven approach to define and reduce implant excess inventory at scale for MedTechCo’s Spine business unit in the United States. The research strategy used in this thesis begins with a root cause analysis to understand the causes of excess inventory. A robust data model was then developed to determine appropriate inventory levels by SKU, map all excess field inventory, and prioritize the most valuable excess SKUs. This data model was used to automate the company’s ERP system to repurpose excess inventory, limit unnecessary inventory deployments to the field, and eliminate redundant backorders. Finally, an impact analysis was performed to measure the potential excess inventory reduction in both dollar value and units.&#13;
&#13;
Time constraints limited the implementation of the recommendations during the research period. However, MedTechCo Spine agreed to incorporate the proposed recommendations into its ERP system and operational processes in mid-2025. These recommendations will help reduce implant excess field inventory, unlocking tied-up capital, creating flexibility in the supply chain to meet demand changes, and enabling additional investment in innovation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency</title>
<link href="https://hdl.handle.net/1721.1/163306" rel="alternate"/>
<author>
<name>Jaklis, Cyril</name>
</author>
<id>https://hdl.handle.net/1721.1/163306</id>
<updated>2025-10-22T03:34:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency
Jaklis, Cyril
Real estate is the world's largest untapped market, at $650 trillion (Statista, 2023), yet technological innovation, particularly in financial underwriting, is underrepresented. Excel spreadsheets, broker-driven data collection, and expensive public database subscriptions are still used by most institutional players and family offices. These outdated approaches result in inefficiencies and higher operational expenses. Firms are now waiting for more innovative tools to improve their workflows and predict their Net Operating Income (NOI). Development and maintenance costs are often underestimated due to optimistic estimates and unplanned material or labor cost escalations. This paper examines how to increase the accuracy of underwriting by examining the full underwriting process, identifying operational inefficiencies, and analyzing how new technologies like Artificial Intelligence (AI) and Machine Learning (ML) are currently being utilized to better value properties and reduce error margins. The analysis covers the entire underwriting process: data sourcing, collection, structuring, and analysis. It also reviews the platforms and software tools utilized to connect these phases, from initial appraisal to investment memo and investment committee (IC) decision-making. The objective is to understand practical constraints, recognize opportunities for optimization, and explore where investors can strategically position themselves to leverage these technologies while also providing a forward-looking outlook on the changing function of AI/ML in the sector over the next decade.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery</title>
<link href="https://hdl.handle.net/1721.1/163305" rel="alternate"/>
<author>
<name>Fenstermacher, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163305</id>
<updated>2025-10-22T03:34:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery
Fenstermacher, Andrew D.
Target Corporation has expanded its Last Mile Delivery (TLMD) capabilities through an omni-channel, "stores-as-hubs" strategy, using stores as fulfillment centers for online orders. Target Sortation Centers were developed to receive packages from stores in the region and to sort, route, and dispatch these packages each day to accomplish faster delivery for online orders. Because the centers are designed to never hold inventory, the goal is to have every package received delivered that same day. This presents new operational challenges common for brick-and-mortar retailers that develop an omni-channel strategy. This thesis investigates core processes in Sortation Centers to identify sources of volatility and propose improvements that enhance productivity and on-time delivery while minimizing labor costs and incomplete volume. Many of the current processes in Target’s Sortation Centers are manual and unstandardized. Moreover, improving operations and piloting changes is challenging, especially during peak seasons. To address these challenges, this study employs discrete event simulation (DES) using SimPy, informed by current operational data and in-person observations, to model and analyze current processes. Key findings reveal that pre-sorting TLMD volume from other national carrier volume at the stores prior to linehaul pickup for same-day packages decreases the overall completion time for the day’s volume by 5.8% and lowers incomplete volume probability by up to 85% under excess volume scenarios. These process changes enhance site resilience to demand volatility without significant capital investment. The research underscores the value of DES for testing process improvements virtually and highlights the need for network-level optimization across Target’s omni-channel supply chain. Recommendations include piloting floor loading and pre-sorting in select markets, alongside future exploration of performance standards, automation, and standardized processes to further mitigate volatility impacts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer Vision for Cell Line Development</title>
<link href="https://hdl.handle.net/1721.1/163304" rel="alternate"/>
<author>
<name>Albright, Jackson A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163304</id>
<updated>2025-10-22T03:34:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computer Vision for Cell Line Development
Albright, Jackson A.
Anomalies in Cell Line Development have a significant impact on material and opportunity costs when screening for the Master Cell Bank that is used for all clinical drug development. Cell Line Development scientists spend hundreds of hours collectively identifying anomalies in fluorescent and brightfield imagery to ensure only high-performing cell clones are downselected for testing. The use of computer vision models alleviates this burden on scientists and better standardizes the selection process. Three techniques were tested for classifying anomalous and nominal fluorescent images: an autoencoder, an edge CNN, and an RGB SVM. Examining performance through composite metrics such as F1 Score and MCC, the autoencoder (0.8744 and 0.8619, respectively) outperformed the edge CNN (0.8488 and 0.8257) and RGB SVM (0.8343 and 0.8252) for fluorescent anomaly classification. The high performance of the autoencoder came from training solely on anomalous images and using a percentile-based threshold to classify images on their reconstruction error. Data robustness proved to be an issue, with certain test datasets having worse performance due to the inherent variability of images within both nominal and anomalous classes. Gathering and labeling more datasets for training and testing will allow models to learn from this variability and provide higher confidence in model performance for real-time screening applications. Adjusting the structure of the traditional autoencoder to that of a variational autoencoder will also help with learning the variability of images within classes and improve performance on previously unseen data. Overall, the current iteration of the models proves to be beneficial for anomaly detection in Cell Line Development and demonstrates that some modifications to data sourcing and model architecture could yield even better performance.
These same techniques could be applied to similar biopharmaceutical applications provided care is taken to properly source clean and labeled image data and construct appropriate model architectures for the images' inherent features.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes</title>
<link href="https://hdl.handle.net/1721.1/163303" rel="alternate"/>
<author>
<name>Bieske, Linn</name>
</author>
<id>https://hdl.handle.net/1721.1/163303</id>
<updated>2025-10-22T03:34:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes
Bieske, Linn
Background: Autonomous vehicle (AV) testing requires extensive real-world data collection, which is costly and time-consuming. Existing simulation techniques struggle to generate high-fidelity sensor data, particularly for multimodal signals like RGB camera images, LiDAR depth maps or LiDAR point clouds. Recent advances in generative AI, specifically diffusion models, offer a solution for improving synthetic driving scene simulations.&#13;
&#13;
Objective: This thesis enhances diffusion-based generative models to: 1) Encode LiDAR depth data into a stable diffusion model’s latent space, 2) Simultaneously generate, with high fidelity and mutual consistency, eight RGB camera images, 2D LiDAR depth maps, and 3D LiDAR point clouds covering a full 360-degree range, and 3) Evaluate the realism and consistency of the generated sensor data.&#13;
&#13;
Methods: A multimodal, multi-view latent stable diffusion model was trained to generate complete 360-degree synthetic driving scenes and simulate camera and LiDAR sensor signals for autonomous vehicles. The generated scenes were evaluated for sensor alignment, realism, and depth accuracy.&#13;
&#13;
Results: The diffusion model produced realistic, spatially consistent camera and LiDAR sensor data, reducing reliance on real-world validation miles and lowering AV testing costs. To further improve the quality of the multimodal driving scene generation, it is recommended to retrain the VAE on LiDAR data.&#13;
&#13;
Conclusion: This work advances AV simulation by extending stable diffusion models to multimodal sensor data. Future improvements should focus on real-time generation and expanding to additional sensor types or hardware setups for enhanced simulation fidelity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs</title>
<link href="https://hdl.handle.net/1721.1/163302" rel="alternate"/>
<author>
<name>Liu, Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/163302</id>
<updated>2025-10-22T03:34:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs
Liu, Ying
This thesis develops and evaluates a series of predictive models to improve the efficiency of marketing resource allocation in the context of an outbound campaign for a premium membership product. The central objective is to identify customers most likely to respond positively to a membership offer, thereby minimizing outreach costs and maximizing return on investment. The study leverages a dataset from a large retail superstore that includes customer demographics, transactional behavior, and campaign response history. Data preprocessing involved the creation of engineered features such as age and tenure groupings and the transformation of categorical variables into factor types suitable for classification algorithms. Three modeling approaches were applied: classification with logistic regression, classification and regression trees (CART), and random forest. Logistic regression yielded strong predictive performance with an AUC of 0.851 and identified several statistically significant predictors, including spending on wine and meat products, recent purchase behavior, and tenure length. However, its primary limitation lies in its inability to accommodate cost asymmetries, as it lacks the capacity to incorporate a loss matrix, which assigns different penalties to false positives and false negatives. The CART model addressed this limitation by introducing a customized loss matrix that reflects the asymmetric cost structure of marketing misclassifications, assigning a higher penalty to false negatives than to false positives. While this cost-sensitive structure aligned better with business objectives, the CART model achieved a moderate AUC of 0.767, reflecting limited classification accuracy and robustness. To overcome these limitations, a Random Forest model was implemented, combining the strengths of ensemble learning with cost-sensitive training. It achieved the highest AUC of 0.864 and allowed for the integration of a loss matrix during training.
Feature importance analysis revealed that variables such as the number of days since the last purchase, the amount spent on meat products, and a customer's enrollment length with the company were among the most influential predictors of customer response. The model not only improved classification performance but also supported strategic targeting through interpretable outputs. An economic evaluation demonstrated the practical value of the predictive model. Under a loss matrix where the cost of a false positive was set to $2 and a false negative to $10, the Random Forest model reduced total campaign costs by approximately 30% compared to a non-targeted approach. These cost savings translate into a meaningful economic impact, particularly when applied to large-scale campaigns. Overall, the findings support the use of Random Forest with a cost-sensitive design as a superior modeling framework in marketing applications. By aligning machine learning with real-world cost structures, this approach offers both statistical rigor and economic relevance for data-driven decision-making in customer acquisition strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data, Analytics, and Optimization for Production Planning</title>
<link href="https://hdl.handle.net/1721.1/163301" rel="alternate"/>
<author>
<name>Malinowski, Maxwell X.</name>
</author>
<id>https://hdl.handle.net/1721.1/163301</id>
<updated>2025-10-22T03:34:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data, Analytics, and Optimization for Production Planning
Malinowski, Maxwell X.
This thesis serves as a case study for the implementation of data analytics and optimization within a high-mix, low-volume electronics production environment in the Aerospace and Defense industry. This case study demonstrates the benefits of data analysis for defining and quantifying operational bottlenecks and explores the implementation of an optimization model to better allocate resources for production planning. Results demonstrate the insights derived from using data and analytics in this environment, and further discussion explores what contributes to an effective implementation of an optimization model in a production setting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Industrial Pollution and Firm Ownership Structure: Evidence from M&amp;A</title>
<link href="https://hdl.handle.net/1721.1/163300" rel="alternate"/>
<author>
<name>Zhang, Cindy</name>
</author>
<id>https://hdl.handle.net/1721.1/163300</id>
<updated>2025-10-22T03:30:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Industrial Pollution and Firm Ownership Structure: Evidence from M&amp;A
Zhang, Cindy
This paper studies whether firm ownership structure influences pollutive activity. Using facility-level data from the Toxics Release Inventory, I employ a difference-in-differences (DiD) approach to compare toxic chemical release and pollution prevention activity between public and private firms' facilities by exploiting ownership changes. I compare facilities initially owned by private firms that were acquired by public firms and those that were acquired by private firms in the same year. My findings suggest that public acquirers significantly reduce toxic release activity relative to private acquirers. In the reverse case, I find that private acquirers decrease abatement, but pollution volume does not differ significantly. However, for later ownership changes in my sample, private acquirers increase toxic release volume and intensity significantly relative to public acquirers. Lastly, I explore how financial constraints and the local political environment moderate pollution activity. Debt-constrained public acquirers show no significant difference in pollution activity from private acquirers. In Democrat-leaning counties, public acquirers reduce toxic releases more than private acquirers, but in Republican-leaning counties, the differences are less pronounced. Overall, my findings suggest that public firms have decreased toxic release activity over time, but the declines have been offset by increases from private firms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Debt Complexity and Equity Behavior</title>
<link href="https://hdl.handle.net/1721.1/163299" rel="alternate"/>
<author>
<name>Li, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/163299</id>
<updated>2025-10-22T03:33:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Debt Complexity and Equity Behavior
Li, Jack
I examine how the complexity of firm debt affects the incorporation of news into equity prices. As residual claimants to firm cash flows, equity investors must be able to value all outstanding debt contracts, suggesting that complex debt can interfere with their ability to process news effectively. Using a model in which debt complexity causes a subset of investors to initially underweight news precision, I derive three predictions for the equity behavior of debt-complex firms around news events: (1) they exhibit greater post-announcement drift, (2) they show elevated trading volume both on announcement day and in the post-announcement period, and (3) their return volatility decreases on announcement day but increases during the post-announcement period. These predictions are supported by empirical evidence in the context of earnings announcements, suggesting that debt complexity introduces meaningful frictions in how news is incorporated into equity markets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Wildness: Simulating Post-Extraction Wildland Regeneration</title>
<link href="https://hdl.handle.net/1721.1/163298" rel="alternate"/>
<author>
<name>Griggs, Crystal Ling</name>
</author>
<id>https://hdl.handle.net/1721.1/163298</id>
<updated>2025-10-22T03:34:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mapping Wildness: Simulating Post-Extraction Wildland Regeneration
Griggs, Crystal Ling
This thesis introduces a novel approach to wildlife habitat classification for ecological regeneration. It is motivated by the extreme environmental degradation of mountaintop removal (MTR) in the Appalachian Mountains, a violent coal extraction process that has significantly altered the landscape of this ecologically sensitive region. By integrating remote sensing and Geographic Information Systems (GIS) with machine learning, this research aims to develop a method that transcends traditional anthropocentric landscape assessments, advocating for a model that foregrounds the habitats and needs of critically endangered species by simulating landscape regeneration and assessing topographical alterations in terms of how design decisions impact wildlife. Central to this study is the concept of Umwelt, the subjective experiences of nonhuman species, including how their spatial perception and sensory spectrum are used to discern details within their environment. Umwelt broadens traditional spatial understanding by emphasizing that each species experiences the world through its own sensory filters, which shape its interactions within its habitat. This understanding guides the research’s approach to approximating the Umwelt of the Cerulean Warbler (Setophaga cerulea), a surrogate species in this work, which has faced steep declines due to habitat loss in Appalachia. Through the development of a habitat suitability model that utilizes advanced computational tools and multispectral imagery, the thesis endeavors to offer a new perspective on environmental planning and conservation efforts: a computational approach to near-approximations of Umwelt. The methodological framework seeks not only to classify post-extraction landscapes for their potential in supporting wildlife but also to inform design and land use decisions that are sensitive to the temporal and complex processes of natural habitat regeneration.
By challenging the prevailing paradigms of landscape restoration, which often lack consideration for the intricacies of wildland dynamics such as the multitudes of species interactions and interdependencies, this research proposes a new methodology that empowers wildlife to guide the ecological recovery process. The findings underscore the potential of applied GIS and machine learning in environmental advocacy, setting a precedent for future research and practice aimed at the regeneration of ecosystems that considers the ecological realities of all species involved.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications</title>
<link href="https://hdl.handle.net/1721.1/163297" rel="alternate"/>
<author>
<name>Ray, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163297</id>
<updated>2025-10-22T03:33:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications
Ray, Jennifer
As climate change concerns drive the need for decarbonization, hydrogen stands as a potential tool to help reduce emissions across the U.S. industrial and energy sectors. This thesis develops a flexible modeling framework for hydrogen adoption across multiple industrial applications, designed specifically to support strategic investment decision-making in an evolving market. The tool analyzes six major industries – steel, chemicals, energy storage, biofuels, vehicles, and natural gas – through two metrics: potential hydrogen consumption and threshold prices for economic viability. The framework applies scenario analysis to examine how government policy and technological advancement influence potential market trajectories.&#13;
&#13;
Analysis reveals significant sensitivity to input assumptions. Even small variations in the assumed initial hydrogen production cost can result in significantly different adoption timelines. In scenarios where initial hydrogen production costs are $5/kg, widespread adoption requires maximum policy support and technological progress. However, reducing the initial cost by just $1, to $4/kg, makes broader adoption feasible with less reliance on government intervention. Light-duty fuel cell electric vehicle penetration rate and steel industry growth rate emerge as the most sensitive parameters affecting overall hydrogen demand, followed by biofuel blending rate and hydrogen injection percentage into natural gas infrastructure.&#13;
The vehicles industry is identified as a first mover in widespread hydrogen adoption, followed by steelmaking and methanol production. Hydrogen adoption for natural gas blending, methanol for export, and methanol-to-gasoline applications occur later due to their lower threshold price for economic viability. Under optimal conditions with strong government support and significant technological advancements, total hydrogen demand could reach 48.8 million metric tons by 2050, approximately a sevenfold increase from scenarios with minimal support.&#13;
The tool’s value lies not in projecting a definitive, single-point forecast, but in providing a flexible framework that helps stakeholders navigate market uncertainties as the decarbonization landscape evolves. Future research should integrate supply-side dynamics, infrastructure requirements, and geographic variability to enhance projection accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Green Aluminum</title>
<link href="https://hdl.handle.net/1721.1/163296" rel="alternate"/>
<author>
<name>Schurr, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/163296</id>
<updated>2025-10-22T03:34:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Green Aluminum
Schurr, Kevin
Aluminum is an important metal for facilitating the energy transition. Its high strength-to-weight ratio and easy recyclability make it a useful material in many industries, from automobiles to food packaging. However, the aluminum smelting process accounts for 2% of all global greenhouse gas emissions, due both to the high amount of power needed to drive the electrolysis reaction and to the consumption of carbon anodes in the process. As regulatory changes in Europe raise the monetary cost of emitting carbon, smelters are investigating new technologies to integrate into their operations to cut Scope 1 and 2 emissions. Two such technologies are carbon capture systems to abate process emissions and small modular nuclear reactors to reduce emissions incurred during electric power generation. This work explores the technical and economic feasibility of leveraging these systems at Aluminum of Europe, a primary aluminum smelter subject to these changing European regulations. Results suggest that while these technologies have not yet been specifically adapted for aluminum production, they can play an important role in reducing overall emissions from the smelting process under specific economic conditions. However, the analysis indicates that, at present, significant subsidies are required for such projects to be financially viable.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Corporate Transparency and Cybersecurity Risks</title>
<link href="https://hdl.handle.net/1721.1/163295" rel="alternate"/>
<author>
<name>Kim, David Sunghyo</name>
</author>
<id>https://hdl.handle.net/1721.1/163295</id>
<updated>2025-10-22T03:30:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Corporate Transparency and Cybersecurity Risks
Kim, David Sunghyo
I study whether disclosure mandates alter the equilibrium of cyberattacks by unintentionally informing cybercriminals. The California Consumer Privacy Act (CCPA) requires companies to disclose their personal information collection practices to consumers, inadvertently informing cybercriminals about the potential benefits of breaching each firm. Using a difference-in-differences design, I find that firms disclosing the collection of valuable personal data face an increased probability of data breaches. These firms also strengthen their cyberdefenses both in terms of cybersecurity software and cybersecurity specialists. Firms trade off cybersecurity costs against the risk of data breaches, with the increase in breach probabilities more pronounced among firms that invest less in cybersecurity. Finally, I find that firms adjust their data collection policies as additional defense strategies. Overall, this study highlights the trade-off between transparency and cybersecurity risks in today’s digital economy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care</title>
<link href="https://hdl.handle.net/1721.1/163294" rel="alternate"/>
<author>
<name>Dugan, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163294</id>
<updated>2025-10-22T03:33:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care
Dugan, Andrew D.
Cardiogenic shock (CS) in the context of acute myocardial infarction (AMI) remains a significant challenge in critical care, with high mortality rates despite the availability of advanced mechanical circulatory support (MCS) devices like the Impella pump. However, adoption of these devices in clinical practice remains limited. This thesis explores two complementary strategies to address these challenges: developing machine learning (ML) models to predict shock severity and assessing the feasibility of integrating hospital Electronic Medical Record (EMR) data into Abiomed’s digital ecosystem to support standardized shock care.&#13;
In the first phase, ML models were trained on multiple clinical datasets to predict Society for Cardiovascular Angiography and Interventions (SCAI) shock stages based on patient data. While these models demonstrated strong predictive performance, feature analysis revealed that SCAI stages often reflect physician treatment decisions rather than purely patient physiology. This raises concerns about their utility as real-time clinical decision tools and suggests that ML applications may be better suited to prompting early data collection and intervention before severe shock develops.&#13;
The second phase evaluated the feasibility of EMR integration to support the broader adoption of standardized shock protocols. After considering regulatory, operational, and technical factors, third-party data aggregation emerged as the most practical path forward. Integrating EMR data could improve outcome tracking, support protocol adoption, and strengthen partnerships between Abiomed and hospitals, creating a foundation for more consistent and proactive shock management.&#13;
Together, these findings highlight the need for predictive tools that guide early clinical action and infrastructure that supports seamless data integration. By advancing both, Abiomed can expand its role in cardiogenic shock care, improve patient outcomes, and lead the evolution of data-driven, standardized treatment strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape</title>
<link href="https://hdl.handle.net/1721.1/163293" rel="alternate"/>
<author>
<name>Tike, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/163293</id>
<updated>2025-10-22T03:34:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape
Tike, Gauri
The automotive industry is undergoing a transformative shift driven by technological advances in areas such as electric vehicles, autonomous vehicles, software-defined vehicles, and the decarbonization of mobility. Alternative means of transportation are also becoming available, sometimes at a lower cost than owning a car. In some cities, the best way to get from point A to point B might not be a car at all; it might combine heterogeneous modes of public transportation, a bike, a ride-hailing service, or a car for different portions of the route. Despite concerns about the environment, we are still seeing an increase in global car ownership. These changing times pose challenges to legacy automakers. While they are experts in traditional car manufacturing, modern cars require not only traditional mechanical and electrical skills but also deep expertise in developing software for these cars. With growing EV adoption, we are seeing Chinese EV automakers capture market share quickly. What is the future of mobility with all these developments? What do traditional automakers need to do in this era to remain successful? In this report we examine key trends in mobility: global electric vehicle (EV) adoption, software-defined vehicles (SDVs), autonomous vehicles (AVs), and environmental implications. Based on this research we propose strategic recommendations for traditional automakers to continue their success over the next decade and beyond.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Fintech Innovations: Strategic Insights from the United States and India</title>
<link href="https://hdl.handle.net/1721.1/163292" rel="alternate"/>
<author>
<name>Shanbhag, Rishabh Ganesh</name>
</author>
<id>https://hdl.handle.net/1721.1/163292</id>
<updated>2025-10-22T03:34:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Navigating Fintech Innovations: Strategic Insights from the United States and India
Shanbhag, Rishabh Ganesh
This thesis examines how fintech ventures are reshaping financial services through new technologies and strategic choices tailored to different markets. It first looks at key innovations: digital payments, digital wealth management, and open banking, and how they have transformed everyday financial activities. The research then compares how fintech companies operate in the US and India by analyzing how market conditions, government initiatives, regulations, and consumer behaviors shape adoption. Finally, through case studies of Robinhood (US), Revolut (Global), and Paytm (India), the thesis examines how fintech firms navigate the choice between competing with traditional players and collaborating with them to scale under different market scenarios. Together, these insights aim to help entrepreneurs, investors and policymakers understand how strategy and technology come together in the fintech industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163291" rel="alternate"/>
<author>
<name>Harkavy, Rachael</name>
</author>
<id>https://hdl.handle.net/1721.1/163291</id>
<updated>2025-10-22T03:34:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing
Harkavy, Rachael
This thesis develops a digital framework for simulating and validating thermoplastic composite manufacturing processes, focusing on reducing the time associated with new product development. Using Finite Element Analysis (FEA) software (SimSof) and high-precision 3D scanning tools (ScanSof), the research introduces a geometric similarity metric to quantify deviations between simulated and real-world parts. By aligning simulations with production data, the study aims to replace costly physical trials with reliable digital models, accelerating customer onboarding and improving manufacturing efficiency.&#13;
&#13;
Key contributions include establishing a systematic pipeline for integrating simulation tools into Oribi Composites’ workflow, defining critical parameters such as laminate width, material card accuracy, and mesh size, and validating their impact on simulation accuracy. Results demonstrate that accurate material modeling and parameter selection significantly enhance digital twin accuracy, while mesh size has minimal influence, allowing for computational cost savings. The research also highlights challenges in replicating real-world conditions digitally, including inconsistent material cards and limited control over pressure profiles. Despite these limitations, the study demonstrates that simulations can reliably predict manufacturable designs within customer tolerances, reducing reliance on physical iterations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement</title>
<link href="https://hdl.handle.net/1721.1/163290" rel="alternate"/>
<author>
<name>Imaeda, Hiroko</name>
</author>
<id>https://hdl.handle.net/1721.1/163290</id>
<updated>2025-10-22T03:34:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement
Imaeda, Hiroko
Despite Japan’s reputation as an economically advanced nation, it faces one of the highest relative poverty rates among OECD countries, with nearly half of all single-mother households living below the poverty line. This thesis examines why poverty among single mothers persists despite a formal support ecosystem and proposes a systemic redesign grounded in life-stage-aligned, user-centered principles. Drawing on historical-institutional analysis, organizational theory, fieldwork interviews, and auto-ethnographic insights, the study identifies deeply embedded barriers that reinforce fragmented, crisis-oriented support systems misaligned with real-life trajectories. In response, it introduces the "Single Mother Journey" framework, reframing single mothers not as a static category but as a dynamic population with distinct, evolving needs. Through this lens, the thesis exposes critical gaps in preventive support, labor market misalignment, and information accessibility. Building on these findings, it proposes a future-ready support ecosystem, positioning corporations, local municipalities, NPOs, and education institutions as collaborative actors. It presents mumtec, a conceptual digital platform designed to consolidate fragmented services, personalize interventions by life stage, predict crisis points, and generate adaptive policy feedback. The thesis moves beyond surface-level critique by connecting institutional analysis with practical system design to offer a scalable framework for inclusive innovation. Listening to the silent voices of single mothers navigating precarity is an ethical imperative and a strategic necessity for sustainable, resilient societies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing automotive production scheduling to reduce finished vehicle inventory</title>
<link href="https://hdl.handle.net/1721.1/163289" rel="alternate"/>
<author>
<name>Johnson, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163289</id>
<updated>2025-10-22T03:33:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing automotive production scheduling to reduce finished vehicle inventory
Johnson, Christopher
This thesis addresses inefficiencies in automotive finished vehicle inventory management arising from misalignment between production scheduling and outbound logistics. Traditional production planning prioritizes manufacturing efficiency, causing significant inventory accumulation as vehicles await completion of full shipment loads. This research proposes an Integrated Production and Outbound Distribution Scheduling approach, introducing an optimization step within existing production scheduling workflows to align production sequences for expedited load formation. Back-testing on two automotive assembly lines over 82 weeks reveals a mean inventory reduction potential of 63–65%, with variability influenced by production volumes and vehicle configurations. A proof-of-concept implementation confirms the practical feasibility of optimized schedules, reducing inventory holding times by 33% without disrupting manufacturing operations. Computational performance analysis demonstrates good scalability for instances with fewer than 600 vehicles, though larger scenarios still yield meaningful inventory reductions. This work highlights substantial opportunities for automotive original equipment manufacturers to enhance efficiency by integrating outbound logistics into production scheduling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management</title>
<link href="https://hdl.handle.net/1721.1/163288" rel="alternate"/>
<author>
<name>Gallardo Moncayo, Gabriel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163288</id>
<updated>2025-10-22T03:34:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management
Gallardo Moncayo, Gabriel A.
The increasing availability and reduced cost of Generative AI applications for the general public have motivated organizations across all industries to implement AI-based solutions in their daily operations. Still, they struggle to determine the capabilities and limitations of this technology when implementing it in their specific context. This thesis addresses these challenges through a practical case study: deploying a text-based Generative AI system (using Large Language Models, or LLMs) for automated downtime event characterization within a global industrial operational technology (OT) setting by transforming unstructured problem management reports into structured, actionable business insights. The developed software system contains a data pre-processing stage, followed by four LLM-based tasks (LLM-extraction, LLM-autoclassification, multi-aspect multi-level LLM-classification, and LLM-accuracy). We wrap everything in a well-structured and easy-to-understand evaluation framework that ensures the system’s output is format-reliable, accurate, and consistent. Through simple prompt engineering techniques and continuous failure-mode analysis, we achieve high accuracy (&gt;89%) and consistency (&gt;79%) for downtime event characterization at 1% of the current cost. In the end, we show that it is possible to implement an AI-based solution within current operational processes while properly communicating its capabilities and limitations and adapting its usage to where it adds the most value.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support</title>
<link href="https://hdl.handle.net/1721.1/163287" rel="alternate"/>
<author>
<name>Gebner, Adam R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163287</id>
<updated>2025-10-22T03:33:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support
Gebner, Adam R.
This thesis investigates methods to improve demand forecasting and inventory management for raw wire. Challenges such as supply chain disruptions from the COVID-19 pandemic, operational variability, and loss of expertise exposed vulnerabilities in the existing manufacturing system, leading to shortages and inefficiencies. By leveraging extensive production data, this research develops and evaluates tools to mitigate these issues while aiming for a 100% service rate.&#13;
Key contributions include:&#13;
1. A data-driven demand simulation model, reducing forecast error and surpassing&#13;
baseline methods&#13;
2. Quantification of waste distributions and variability in wire consumption&#13;
3. An inventory simulation framework for policy evaluation and shortage mitigation&#13;
4. Clustering analysis to classify demand patterns and identify key wire categories&#13;
5. A decision support tool supporting real-time visibility into inventory levels and risks&#13;
The models and tools developed through this project provide enhanced capabilities to predict future wire requirements and manage inventory more effectively through continued development. Though the initial results indicate potential business value, areas for future work include incorporating additional data sources, exploring advanced machine learning techniques, and conducting longer-term pilot studies to quantify business impact. This project demonstrates the value of leveraging data analytics and simulation modeling to enhance supply chain decision-making in complex manufacturing environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction</title>
<link href="https://hdl.handle.net/1721.1/163286" rel="alternate"/>
<author>
<name>Gerbino, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/163286</id>
<updated>2025-10-22T03:34:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction
Gerbino, Jacob
This thesis develops a lean manufacturing framework for optimizing the use of floorspace in Boeing's Interiors Responsibility Center South Carolina (IRCSC). The primary goal is to eliminate wasted floorspace while increasing production capacity and efficiency. The motivation behind this project stems from the need to address the fully allocated production floorspace at IRCSC and the pressing requirement to add new product lines without expanding the facility's physical footprint. Additionally, the project seeks to prepare IRCSC for possible increases in production rates for the 787 Dreamliner Program, necessitating a redesign of work centers to support higher output levels while enhancing efficiency and reducing costs.&#13;
&#13;
The project employs the DMAIC (Define, Measure, Analyze, Improve, Control) methodology and lean tools such as spaghetti diagramming and value stream mapping to treat "Misused Space" as an additional form of waste, alongside the traditional forms of lean waste. The framework was applied to a sample interior product work center to test its effectiveness. The study involved mapping the current layout, observing technician travel, conducting time studies, and analyzing value stream maps. The methodology facilitated the creation of a new floorplan and scheduling system that consolidates cure times and balances workloads between work cells. Discrete event simulation was used to validate the proposed changes, ensuring they would achieve the desired improvements.&#13;
&#13;
The results of the study revealed inefficiencies in the current layout and scheduling practices of the work center. The proposed changes demonstrated a potential 25% reduction in floorspace and a 55% decrease in product throughput time. The new scheduling and work allocation strategy reduced product throughput time from nine days to four, and the new layout reduced worker travel distances by as much as 50% in some work cells. The lean manufacturing principles and scheduling optimizations discussed in this thesis should be applied to other work centers within IRCSC. Future research should explore advanced methodologies and tools to handle the complexities of more interconnected work centers.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of AI Integration in Healthcare: Exploring Regulatory, Cultural, and Strategic Barriers</title>
<link href="https://hdl.handle.net/1721.1/163285" rel="alternate"/>
<author>
<name>Venkatanarayanan, Sriya</name>
</author>
<id>https://hdl.handle.net/1721.1/163285</id>
<updated>2025-10-22T03:33:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Impact of AI Integration in Healthcare: Exploring Regulatory, Cultural, and Strategic Barriers
Venkatanarayanan, Sriya
This thesis investigates the barriers and enablers to predictive AI adoption in healthcare through a thematic synthesis of 13 academic articles and real-world case studies published over the last five years. Barriers were categorized into three domains: regulatory, cultural, and strategic. These included challenges such as fragmented regulation, clinician skepticism, data quality limitations, and poor alignment with clinical workflows. Cross-cutting patterns, stakeholder tensions, and recurring meta-themes revealed that these barriers are deeply interconnected. Drawing from over 200 individual findings, an actionable visual framework was developed to guide responsible and sustainable predictive AI integration. The proposed model, consisting of an internal “Pyramid” of enablers and an external “Circular Loop” of ecosystem conditions, provides a practical structure for aligning governance, engagement, and workflow with ongoing commitments to equity, collaboration, safety, and transparency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative AI in Private Equity for Accumulative Advantage</title>
<link href="https://hdl.handle.net/1721.1/163284" rel="alternate"/>
<author>
<name>Mahajan, Bonny</name>
</author>
<id>https://hdl.handle.net/1721.1/163284</id>
<updated>2025-10-22T03:33:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative AI in Private Equity for Accumulative Advantage
Mahajan, Bonny
This research explores the use of Generative AI (Gen AI) for achieving accumulative gains across various business and technical functions within commercial enterprises under private equity firms. While based on applied experiments in a private equity-owned, resource-constrained portfolio company, many of the findings presented here may apply in other types of organizations. Through this study, we conduct case studies across key departments such as customer service, purchasing, engineering, employee management, and marketing. For each use case, we delve into the utilization of custom-built or publicly available Gen AI-based tools, aiming to understand the unique considerations and challenges that may arise when implementing Gen AI solutions in industries like manufacturing, which have traditionally been underserved by the tech sector. Through this research, we identify the critical role of humans in the loop, emphasizing the importance of UI/UX design, domain expertise, and local culture in the successful adoption and acceptance of Gen AI tools designed to enhance workforce efficiency in portfolio companies. This study also aims to illustrate how investing in Gen AI technologies is ultimately an investment in a company’s most valuable resource—its employees. By equipping employees with innovative tools, the organization not only improves productivity and job satisfaction but also fosters a culture of continuous improvement and adaptability. This research highlights the transformative potential of Gen AI in reshaping traditional business processes and driving sustainable growth in different functions of organizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Standard Work for High Mix Low Volume Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163283" rel="alternate"/>
<author>
<name>McNulty, Will</name>
</author>
<id>https://hdl.handle.net/1721.1/163283</id>
<updated>2025-10-22T03:33:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Standard Work for High Mix Low Volume Manufacturing
McNulty, Will
This thesis examines the challenges of developing standard work at scale in a high-mix low-volume (HMLV) manufacturing environment. The research is conducted at Re:Build Composite Resources, a thermoset composites (TSC) manufacturer. In the context of the company, impending growth demands more skilled laminators, and the manual, complex nature of TSC lamination exposes the need for improved and documented standard procedures. By documenting existing processes through operator shadowing, time studies, and quality data analysis, a “best-known” standard was created for the production steps of a subset of parts. Two pilot parts—one focused on cutting scrap rates, the other on boosting throughput—demonstrated how standard work instructions and a standard work schedule designed for one-piece flow significantly reduced errors and production variability. The thesis also explores the effectiveness and limitations of using computer vision as a tool to automate work instruction and time study data set generation. Beyond the immediate improvements in quality, efficiency, and new operator onboarding, the project’s scalable framework lays out a roadmap for broader adoption of standard work in fast-growing HMLV operations. By focusing first on parts that yield the most significant gains — either due to high volume or high unit cost — organizations can maximize returns on continuous-improvement efforts while not overburdening their engineering staff with excess analysis and documentation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating The Feasibility of Electrified Process Heating for Drug Substance Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163282" rel="alternate"/>
<author>
<name>Bhirgoo, Priya Darshini</name>
</author>
<id>https://hdl.handle.net/1721.1/163282</id>
<updated>2025-10-22T03:34:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating The Feasibility of Electrified Process Heating for Drug Substance Manufacturing
Bhirgoo, Priya Darshini
The pharmaceutical industry relies on high-temperature fluids such as pure steam to support critical operations including equipment cleaning and sterilization and on hot Water-For-Injection (WFI) as a key ingredient for drug substance manufacturing. These high-temperature process-driven heat demands are fulfilled through fossil fuel-based heating which contributes significantly to Scope 1 carbon emissions. Recognizing the link between environmental stressors and human health, Amgen has committed to achieving carbon neutrality by 2027. This thesis explores the feasibility and implications of transitioning from fossil fuel-based process heating to a fully electric system at one of Amgen’s drug substance manufacturing sites. Amgen’s existing fossil fuel-based steam system was analyzed through site visits, engineering reviews, and stakeholder engagements to quantify capital and operating costs, energy usage, and carbon emissions. A fully electric alternative was designed by researching commercial technologies and collaborating with suppliers as well as internal stakeholders. The analysis found that while the capital investment required for electrification is comparable to that of traditional steam systems, the operating costs for an electric system are significantly higher, driven by the higher price of electricity relative to natural gas. From a sustainability perspective, electrification eliminates on-site Scope 1 carbon emissions but shifts emissions to Scope 2, making the environmental benefit dependent on the carbon intensity of the local electricity grid. As grids transition to renewable energy sources, the potential for long-term emissions reductions strengthens. Future work should focus on evaluating the costs of necessary electrical infrastructure upgrades and identifying regions with lower-carbon, lower-cost electricity grids where electrified systems could be more readily implemented.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations</title>
<link href="https://hdl.handle.net/1721.1/163281" rel="alternate"/>
<author>
<name>Tchelikidi, Cloe</name>
</author>
<id>https://hdl.handle.net/1721.1/163281</id>
<updated>2025-10-22T03:33:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations
Tchelikidi, Cloe
In mature, competitive sectors such as financial services and media and entertainment, customer loyalty is increasingly difficult to sustain. This thesis explores the emergence of cross-industry partnerships, specifically between credit card issuers and digital entertainment platforms, as a strategic response to rising churn and declining differentiation. Through a case study of the American Express Digital Entertainment Credit, the research examines how lifestyle-aligned benefits can foster deeper behavioral engagement, reduce switching, and enhance customer lifetime value. The thesis situates these partnerships within the broader evolution of loyalty strategies, marked by hyper-personalization, subscription fatigue, and platform convergence. Findings suggest that flexible, recurring rewards embedded in consumers’ daily routines offer a path to durable retention, especially among younger, digital-native cohorts. The study concludes that such partnerships are not peripheral marketing tools but increasingly core to competitive strategy in commoditized markets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry</title>
<link href="https://hdl.handle.net/1721.1/163280" rel="alternate"/>
<author>
<name>Wu, Lanchen</name>
</author>
<id>https://hdl.handle.net/1721.1/163280</id>
<updated>2025-10-22T03:33:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry
Wu, Lanchen
This paper explores how financial pressures, regulatory enforcement, and market dynamics interact to shape pharmaceutical manufacturing quality and drug supply stability. Using a causal loop diagram (CLD), it examines how cost-cutting behavior affects control and validation capabilities, interacts with regulatory agency oversight, and contributes to recurring drug shortages. The analysis highlights how competition drives companies to operate at or near the minimum regulatory requirements, gradually eroding quality systems. Because of the nature of medical products, the quality of a drug cannot be directly assessed by individual users, distributors, or payers, making it necessary for government agencies like the FDA to rely on internal manufacturing data to ensure all drugs meet a minimum standard of quality. Regulatory oversight serves as a safeguard rather than a tool for guiding business decisions. However, its effectiveness is constrained by the frequency of inspections, the capacity of auditors, and limited resources—especially when government budgets are stretched and other priorities take precedence. The paper also discusses how manufacturers may avoid detection by strategically presenting information during inspections, making it harder for auditors to spot issues and allowing weakened controls to persist. Over time, these dynamics reinforce one another, creating a self-sustaining cycle in which cost pressures lead to minimal compliance, quality issues, and regulatory responses that increase costs further. &#13;
As the number of manufacturers shrinks due to market consolidation, supply disruptions become more severe when failures occur. Regulatory discretion—intended to avoid immediate shortages—can unintentionally reduce incentives for long-term quality investment, further weakening the system’s resilience. &#13;
To address these issues, the paper proposes structural changes, including financial accountability for payers during shortages, tighter regulatory focus on process reliability, and linking regulatory flexibility to quality improvement obligations. These approaches aim to create balancing mechanisms that reduce cost-driven deterioration of quality and promote a more stable pharmaceutical supply chain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ant Group’s Transformative Impact on China’s Financial Industry</title>
<link href="https://hdl.handle.net/1721.1/163279" rel="alternate"/>
<author>
<name>Pan, Kathryn</name>
</author>
<id>https://hdl.handle.net/1721.1/163279</id>
<updated>2025-10-22T03:33:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ant Group’s Transformative Impact on China’s Financial Industry
Pan, Kathryn
Ant Group, China’s leading digital finance company, has fundamentally transformed the nation’s financial industry through groundbreaking innovations in digital payments, micro-lending, wealth management, and investment advisory. This paper explores the company’s role in reshaping China’s financial ecosystem, analyzing its impact on traditional banking institutions, regulatory policies, and consumer behavior. Utilizing analytical frameworks such as Porter’s Five Forces, PEST analysis, and SWOT analysis, this study provides a comprehensive assessment of the external and internal factors influencing Ant Group’s development and competitive positioning.&#13;
This research highlights Ant Group’s key financial innovations, including its online transaction platform, offline payment services, online credit solutions, digital fund distribution channels, and AI-driven investment advisory. By leveraging advanced technologies such as artificial intelligence, blockchain, and big data analytics, Ant Group has enhanced service efficiency, expanded accessibility, and strengthened risk management capabilities. These innovations have significantly advanced financial inclusion, extending financial services to previously underserved populations. However, Ant Group’s rapid growth has also intensified regulatory scrutiny, prompting major restructuring efforts and adjustments to its business model.&#13;
This paper employs three major analytical frameworks: PEST analysis, Porter’s Five Forces, and SWOT analysis. The PEST analysis examines the political, economic, social, and technological factors shaping Ant Group’s trajectory, highlighting the impact of evolving government policies and macroeconomic conditions on its operations. Meanwhile, Porter’s Five Forces framework assesses the competitive dynamics within China’s financial sector, identifying key market pressures such as rising competition and regulatory constraints. Finally, the SWOT analysis evaluates Ant Group’s internal strengths and weaknesses, as well as external opportunities and threats, offering a comprehensive perspective on the company’s strategic positioning.&#13;
Drawing from these analyses, the paper offers strategic recommendations to ensure Ant Group’s sustained growth and resilience in an increasingly complex financial environment. These recommendations include strengthening regulatory compliance, fostering strategic alliances with both domestic and international partners, and further leveraging technological advancements to expand its service offerings. Additionally, the study explores potential global expansion strategies, considering how Ant Group can adapt its innovative financial solutions to international markets while navigating diverse regulatory landscapes.&#13;
By examining Ant Group’s evolution and the broader implications of its digital finance model, this study contributes to a deeper understanding of fintech’s disruptive power in China’s financial sector. The findings provide valuable insights for industry leaders, policymakers, and scholars interested in the intersection of financial technology, regulation, and strategic business management. As digital finance continues to evolve, Ant Group’s trajectory serves as a critical case study in balancing innovation, regulation, and market competition within a rapidly shifting financial landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput</title>
<link href="https://hdl.handle.net/1721.1/163278" rel="alternate"/>
<author>
<name>Sircar, Julia Sarita</name>
</author>
<id>https://hdl.handle.net/1721.1/163278</id>
<updated>2025-10-22T03:33:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput
Sircar, Julia Sarita
Blue Origin is an aerospace company with ambitious throughput goals in response to increased commercial space exploration. Pressure to increase throughput is especially apparent within its BE-4 engine business, as the engines support Blue Origin and its customers. Blue Castings is one of the primary in-house manufacturing plants that supports BE-4 production; the plant manufactures rocket engine components through a process called investment casting. Investment casting, by nature, is a complex process involving long rework times, high incidence of defects, and significant process variability. These characteristics contribute to the discrepancies between Blue Origin’s target BE-4 production rate, the production rate feasible at Blue Castings, and its actual delivery rate. This thesis explores how defect management and prevention techniques can improve throughput at Blue Castings and reduce the number of Blue Origin’s schedule delays attributable to Blue Castings. The research began with a baseline investigation and analysis of Blue Castings’ actual and best-case throughput rates compared to its goal. Two gaps were identified: 1) a gap between actual and feasible throughput, and 2) a gap between feasible and target throughput. The analyses highlight the need for better process and quality management to close both gaps. Through a mixed-method approach, the researcher explored and piloted process and data improvements to understand their impact on throughput. This included qualitative and quantitative data collection through on-site interviews, random sampling of defect data, and queries from the manufacturing execution system. With this data, the researcher investigated how machine learning can predict rework severity and support defect prevention. A case study on a selected part number demonstrated the potential to improve throughput by reducing unnecessary rework. 
By aligning stock-on surface criteria to downstream machining requirements, average rework loops were reduced from thrice the industry benchmark to below the benchmark. This increased capacity at the rework work center and improved the overall delivery of this part. The research also demonstrated how a cross-functional collaboration to formalize producibility lessons reduces the creation of defects, promotes systematic knowledge-sharing, and accelerates improvements similar to the stock-on surface case study. In parallel, this research evaluated how Blue Castings could improve defect documentation and tracking without causing significant additional effort for operators. The researcher’s findings highlight the limitations of handwritten weld maps and inconsistent data capture practices on effectively preventing defects. Digitization of defect tracking is recommended to enable consistent defect data collection and improved root cause and trend analyses. As data quality improves, applying classification ML models for predictive analytics can scale throughput. This work provides recommendations for Blue Castings to implement mechanisms that reduce rework, improve producibility, and increase throughput to align with Blue Origin’s goals.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systematic Political Philosophy of Education</title>
<link href="https://hdl.handle.net/1721.1/163277" rel="alternate"/>
<author>
<name>Pavel, Sonia Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/163277</id>
<updated>2025-10-22T03:30:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Systematic Political Philosophy of Education
Pavel, Sonia Maria
My dissertation proposes a fundamental repositioning of philosophy of education relative to political philosophy. I argue that we cannot afford to do political philosophy without a theory of education, just as we cannot afford to make philosophy of education modular, insulated from the rest of political philosophy. To this end, I propose a systematic political philosophy of education, meaning both a systematization of existing approaches to education and a comprehensive assessment of their merits and limitations. I reconstruct the main theories of education – liberal, conservative, democratic, and critical – from their most basic social ontological assumptions to their political programs for education. I then argue that they all struggle to realize their goals for education either as a result of flawed social ontological assumptions or because of a failure to institutionalize their commitments in practice. The lessons I draw from these critiques form the basis of my own novel systematic theory of education. My theory combines traditional political philosophy with insights spanning critical theory, social ontology, and education studies. The central goal is to reconfigure the school as a democratic institution of social learning that not only enables the flourishing of all students but helps society as a whole progress. The project advances on two levels: a methodological and a substantive-normative one. Methodologically, I resist a growing tendency towards the unmooring of political philosophy and philosophy of education. This tendency is peculiar both from a historical and a conceptual perspective. Historically, education was a core issue of political philosophy. Many, even most, of the canonical political philosophers started from the assumption that education is a central purpose of political life. 
In my substantive introduction, I take a historical excursus through the canonical political thinkers who best exemplify this emphasis on education: Plato, Rousseau, and Dewey. For all the differences in their views, all three understood education as essential to realizing their visions. They would have regarded any political philosophy that failed to address education as incomplete. Today, however, few political philosophers address the subject at all, let alone give it pride of place in their theories. This unmooring has had bad consequences for both subfields. Much contemporary work in philosophy of education takes for granted a liberal social ontology and liberal normative commitments without sufficient critical scrutiny. Similarly, most contemporary political theory neglects the topic of education and operates under the assumption of fully formed liberal agents. The lack of conceptual clarity is mirrored in political practice. Education is marred by persistent and seemingly intractable disagreements – from controversies about indoctrination to failures to realize the ideal of equality of opportunity. Our substantive disagreements about education, I argue in my first chapter, are not merely value disagreements about the goals of education. They stem from deep-rooted social ontological assumptions about the nature of human beings and society. But these social ontological assumptions are rarely acknowledged, let alone articulated, by political philosophers or philosophers of education. To correct this, I propose a novel metatheory that shows the systematic connections between the social ontology, normative commitments, and political programs of our dominant approaches to education (liberal, conservative, democratic). My reconstruction illuminates several surprising agreements and differences between them. 
For example, it reveals that many of our most heated political debates about education, between left and right liberals, are merely intramural disagreements among thinkers committed to the same individualist ontology. The systematic reconstruction also illuminates these theories’ failure to generate a coherent vision for education. My critiques show that each approach is characterized by a flawed or incomplete social theory which prevents it from promoting its own values and fulfilling its aims for education. In the case of liberal theories, I show that the liberal goal of cultivating autonomy is self-undermining in light of liberal theory’s individualist social ontology. In the second chapter, I turn to critical theories, which focus on the function of education in reproducing our broader social system. Whereas the dominant approaches start by asking about the nature and goals of education in general, critical theories analyze our contemporary educational systems under specific political and economic conditions. They reveal how schools contribute to perpetuating an oppressive and unjust social system. In other words, the focus of these theories is not on the school as a standalone institution, but as a particularly important subsystem in a larger process of social reproduction. While they are promising in many ways, I nevertheless argue that critical theories of education also have distinct limitations. In particular, even though their social theory and normative commitment are more compelling than the dominant views’, they do not satisfactorily translate these into practical proposals for remaking our systems of education. Having found none of the existing approaches fully satisfactory, I start developing the positive and evaluative dimensions of my own view in the third chapter. I go beyond critical social theory while relying on the broad strokes of its ontology of the human. 
My aim is to supplement this ontology by drawing on both empirical social studies and complexity theory to more precisely characterize the social relations and practices that constitute the domain of education. More specifically, I argue that we can best understand the educational subsystem by attending to its overlap and co-integration with the family, the state, and economic production. Schools are the mediating institutional domain between the family on one hand and the polity and economic production on the other. At the evaluative level, I articulate three critiques of social pathologies that I believe have been ignored or underutilized in critical education studies: alienation, commodification, and fragmentation. Alienation refers to a pathological relation of disconnection from one’s own learning, other students, and teachers. Commodification and fragmentation, on the other hand, are problems with the organization and distribution of resources in the education system. In my fourth and final chapter, I propose a new program for education that seeks to overcome some of the barriers faced by other systematic theories of education. Attempting to counter the problems I diagnosed and explained in the third chapter, I argue for a few different kinds of interventions. First, I propose restructuring the educational system in order to resist fragmentation by pursuing a unified distributive pool, consolidating school districts, and abolishing charters. Second, I argue for a reconfiguration of the co-integrated subsystems of the family, the school, and production that seeks to empower both children and those involved in their care to be involved in free, meaningful work. Finally, I articulate a set of classroom-level practices that seek to equalize access to development for individual students while cultivating their collective social and political imagination. 
One of the long-term goals is to make schools into democratic institutions of social learning, through which we strive to remove social blockages such as ideology and reflexivity deficits, in order to collectively solve political problems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives</title>
<link href="https://hdl.handle.net/1721.1/163276" rel="alternate"/>
<author>
<name>Kaashoek, Justin H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163276</id>
<updated>2025-10-22T03:33:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives
Kaashoek, Justin H.
Large language models (LLMs) can perform a wide range of search and optimization tasks over discrete spaces. This work seeks to explore the limits of LLM-guided search. We construct a set of text optimization tasks with different levels of “intuitiveness” and evaluate whether LLMs can effectively optimize objectives. We show that the LLM’s performance depends not only on its intuition for the objective, but also on the alignment between the objective and its priors. We also find that the LLM can successfully optimize an objective even without an explicit description of the objective. Our results largely focus on greedy search strategies; we develop a theoretical characterization of conditions under which greedy search is optimal, meaning the LLM’s failures result from a fundamental inability to take gradient-like steps, not suboptimal search.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Semantic Account of Distributional Constraints on Temporal in-Adverbials</title>
<link href="https://hdl.handle.net/1721.1/163275" rel="alternate"/>
<author>
<name>Rouillard, Vincent S</name>
</author>
<id>https://hdl.handle.net/1721.1/163275</id>
<updated>2025-10-22T03:30:25Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Semantic Account of Distributional Constraints on Temporal in-Adverbials
Rouillard, Vincent S
Temporal in-adverbials (TIAs) are a class of English expressions that can be exemplified with in three days. They are remarkable in that, depending on the syntactic position they occupy, TIAs are subject to very different distributional constraints. In some configurations, their licensing is conditioned by the lexical aspect of verbal predicates. In others, these expressions are negative polarity items. Though both varieties of TIAs have been discussed extensively in the semantics literature (Gajewski, 2005, 2007; Hoeksema, 2006; Iatridou and Zeijlstra, 2017, 2021; Krifka, 1989, 1998), no attempt has been made to understand the relationship between the two. I offer a unified semantic analysis of TIAs, which derives from semantic principles their eclectic distributional constraints.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metrical Grids and Active Edges</title>
<link href="https://hdl.handle.net/1721.1/163274" rel="alternate"/>
<author>
<name>Asherov, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/163274</id>
<updated>2025-10-22T03:30:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Metrical Grids and Active Edges
Asherov, Daniel
Theories of word stress assignment differ in the kind of representations they adopt. One family of theories takes stress to be assigned by grouping stress-bearing elements into small units below the level of the word (typically, metrical feet), such that one element in each unit is marked as stronger, hence stressed (e.g., Liberman and Prince 1977; Hayes 1980). Another family of theories, often referred to as grid-only, models stress assignment without appealing to feet or similar bracketed representations above the syllable (Prince 1983; Selkirk 1984; Gordon 2002).&#13;
While the grid-only approach generates the attested languages with relatively simple representations, it also generates a host of patterns which are very different from those attested in human languages (Hayes 1995; Kager 2012; also see Stanton 2016).&#13;
This thesis aims to solve a set of overgeneration problems that arise in the grid-only approach. The solution involves three components. The first is a novel class of constraints that are sensitive to word edges but unspecified for the edge they apply to (left or right). The value of this edge, considered the “active” edge, is determined by the ranking between two competing constraints (cf. Richards 2016). The second component involves a specific characterization of alignment constraints and the crucial exclusion of computationally weaker or stronger alternatives. The third component is a set of principled fixed rankings between two classes of constraints. In particular, I propose that constraints sensitive to the active edge systematically outrank constraints that regulate rhythmic alternations (cf. van der Hulst 1997; 2012). The result is a grid-only theory of stress that has a significantly tighter fit to the typology compared to previous theories.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems</title>
<link href="https://hdl.handle.net/1721.1/163273" rel="alternate"/>
<author>
<name>Harjono, Hanna-Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/163273</id>
<updated>2025-10-22T03:33:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems
Harjono, Hanna-Lee
Electrospray thrusters have emerged as highly promising propulsion options for small satellites due to their compact size, low weight, and power requirements. These thrusters offer precise, efficient, and scalable attitude control, making them ideal for missions requiring fine adjustments and advanced capabilities such as formation flying and docking maneuvers. However, to fully exploit the potential of electrospray thrusters, control strategies specific to them must be developed. In this work, a parameterized, PID gain-scheduled attitude controller that leverages the unique throttleability of electrospray thrusters is developed and validated. The developed controller is adaptable across operating conditions, as well as electrospray thrust coefficient values. Extensive modeling efforts are undertaken to incorporate the throttleability and operational constraints of electrospray thrusters, ensuring accurate performance predictions. The control system is simulated under various operating conditions to assess and verify its functionality and robustness against disturbance torques. Validation experiments in a magnetic levitation CubeSat testbed are proposed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visibility in synthetic aperture radar satellite data: metric formulation, observation scheduling, and orbit design</title>
<link href="https://hdl.handle.net/1721.1/163272" rel="alternate"/>
<author>
<name>Kramer, Evan L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163272</id>
<updated>2025-10-22T03:30:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Visibility in synthetic aperture radar satellite data: metric formulation, observation scheduling, and orbit design
Kramer, Evan L.
Earth observation satellites serve as vital information gatherers for effectively addressing some of humanity’s most pressing challenges including management of limited resources and minimization of losses from disasters. Synthetic aperture radar (SAR) is a type of active remote sensing instrument that operates in the microwave portion of the electromagnetic spectrum and is a preferred Earth observation system thanks to the reliable imagery it can collect in all illumination and weather conditions. SAR data is acquired using a side-looking viewing geometry in which the radar is pointed perpendicular to the satellite platform’s direction of motion. This viewing geometry, in conjunction with the illuminated terrain’s topography, results in geometric distortions termed layover and shadow. These distortions degrade the utility of the collected imagery since they effectively obscure portions of the image and preclude extraction of actionable insights. While geometric distortions will be ever-present in SAR imagery, their location and coverage can be manipulated by controlling the relative orientation between the satellite and the illuminated topography. Such manipulation has historically been infeasible for legacy SAR satellites that collect globally consistent data sets under rigid operating requirements. However, the recent advent of commercial SAR satellite constellations has re-framed the practicality of carefully tuned observation geometries that maximize region of interest visibility. Commercial SAR constellations operate on a task-wise basis that grants data end-users flexibility in specifying desired observation parameters including acquisition times and observation geometries. However, a mismatch between on-orbit capabilities and delivered data quality exists due to a lack of formalized tools for planning observations with maximum region of interest visibility. Specifically, no systematic method for identifying visibility-favorable observation geometries exists. 
This dissertation addresses this gap in a stepwise approach. First, an extension of open-source radar processing software is developed that enables prediction of layover and shadow in a 2D distortion mask for any satellite-target relative geometry. Visibility metrics are then defined to represent the favorability of a particular observation geometry with respect to a distortion mask. The computation of visibility metric scores at geometries spanning the entire sample space enables creation of visibility maps that completely characterize the visibility characteristics of a given region of interest. To broaden the suitability of visibility maps for observation planning, a set of generalizable visibility maps is created to enable estimation of region of interest visibility characteristics in mission scenarios that are computationally constrained and information-limited. Visibility maps are then directly integrated into satellite operations by developing the first SAR observation scheduling algorithm that explicitly accounts for visibility. Finally, visibility is considered in the orbit design process to establish general guidance on optimal repeat ground track orbit parameters for pre-defined region of interest visibility characteristics. Region of interest visibility improvements of up to 90% are obtained for individual tasks when using the observation planning tools developed in this dissertation. Constellation-wide visibility improvements of 18% are achieved with modest reductions in traditional performance measures when integrating visibility into observation scheduling. Two-fold improvements in the visibility characteristics of observation opportunities are attained for orbits designed to maximize overpass geometry quality.
The contributions of this dissertation are timely, given the concurrent proliferation of flexible, high-resolution SAR observation capabilities, and lay the groundwork for enabling the acquisition of SAR data that is maximally useful for limited resource management, disaster response, and other applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen</title>
<link href="https://hdl.handle.net/1721.1/163271" rel="alternate"/>
<author>
<name>Goel, Viraat Yogi</name>
</author>
<id>https://hdl.handle.net/1721.1/163271</id>
<updated>2025-10-22T03:33:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen
Goel, Viraat Yogi
Technology transfer (TT), or the process by which a product's manufacturing is moved and scaled, is a complex business process with countless deliverables and stakeholders. This is especially true in biomanufacturing, where drug commercialization timelines are measured in years, manufacturing facilities are specially designed, and regulations must be stringently met. This systems-level complexity can create inefficiencies in the TT process, lengthening timelines and wasting resources. In this project, we use simulation modeling techniques to digitally model Amgen's Commercial Tech Transfer (CTT) process for biologic drugs. We use virtual experimentation to identify key bottlenecks in the TT workflow, quantify how workstream alterations impact project timelines, and identify process changes likely to shorten timelines. We also extend this analysis to Amgen's New Product Introduction (NPI) process, identifying how coordination between upstream and downstream processes may accelerate NPI timelines. Finally, we link this project to the ongoing development of TT data visualization dashboards.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety Stock Modeling for a Medical Devices Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163270" rel="alternate"/>
<author>
<name>Chong, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/163270</id>
<updated>2025-10-22T03:33:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Safety Stock Modeling for a Medical Devices Supply Chain
Chong, Julie
This thesis examines the current inventory management practices at a leading manufacturer of medical devices, and identifies areas for significant improvement. The analysis reveals inefficiencies in safety stock management, with finished goods inventories being excessively high and raw material stocks being underestimated. The study applies single-echelon and multi-echelon inventory modeling to demonstrate potential cost savings through optimized safety stock levels. Additionally, it highlights the importance of reevaluating high service level targets and improving forecasting accuracy to reduce reliance on costly countermeasures. The thesis also emphasizes the need for effective management of component lead times and enhanced data visibility. Recommendations include transitioning to data-driven safety stock calculations, adopting multi-echelon inventory optimization, reassessing service level targets, enhancing forecasting accuracy, and improving component lead time management. By implementing these strategies, the company can enhance operational efficiency, reduce costs, and build greater resilience in its supply chain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding home broadband coverage through existing Low Earth Orbit megaconstellations</title>
<link href="https://hdl.handle.net/1721.1/163269" rel="alternate"/>
<author>
<name>Gonzalez Martinez, Gretel</name>
</author>
<id>https://hdl.handle.net/1721.1/163269</id>
<updated>2025-10-22T03:33:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expanding home broadband coverage through existing Low Earth Orbit megaconstellations
Gonzalez Martinez, Gretel
Expanding broadband access to underserved areas continues to be a significant challenge for Internet Service Providers (ISPs). While their services perform well in high-density regions, they face scalability limitations in sparsely populated areas where infrastructure costs must be spread across a smaller customer base. This study explores the potential of Low Earth Orbit (LEO) satellite megaconstellations as a scalable solution for extending broadband coverage in the United States. By analyzing the technical capabilities, deployment timelines, and economic feasibility of partnering with LEO satellite providers, this research offers a strategic framework for integrating satellite broadband into ISPs' service portfolios.&#13;
&#13;
A customer demand model identifies approximately 17 million unserved households within the addressable market of one of the largest U.S. telecommunications companies. The business case assessment evaluates broadband profitability by optimizing customer base size relative to proximity to existing infrastructure. While fiber optics remains the most profitable solution in high-density areas and fixed wireless access effectively utilizes excess 5G capacity, both require substantial infrastructure investment, limiting their feasibility for rural broadband expansion. In contrast, a satellite broadband partnership emerges as the most cost-effective solution for at least 1 million households, surpassing the profitability of currently existing offerings. With minimal capital investment, satellite technology enables rapid customer acquisition and scalable nationwide expansion. The analysis highlights the critical role of wholesale agreements in profitability and the need to secure a minimum revenue share of 16.5% to reach the break-even point.&#13;
&#13;
Performance modeling and curve approximation techniques estimate that if Kuiper meets Federal Communications Commission (FCC) deployment milestones, it could serve 8.5 million customers by 2026, with full nationwide coverage projected by 2029. Under a 200x oversubscription model, Kuiper’s total subscriber capacity could scale to 32.8 million, demonstrating its ability to complement current broadband offerings. While LEO broadband networks can achieve capacities in the tens of Tbps, they remain far below fiber networks, which operate in the thousands of Tbps. Rather than competing directly, satellite broadband is positioned as a complementary solution, addressing connectivity gaps in rural and underserved regions.&#13;
&#13;
To capitalize on these findings, this study recommends leveraging existing LEO megaconstellations to expand broadband coverage nationwide. A phased rollout should begin with a beta program in California, the state with the highest number of unserved households, to validate network performance and optimize deployment for broader expansion. Partnering with an existing LEO megaconstellation could effectively bridge the digital divide in rural areas, expand service offerings, and enable a stronger position in the growing satellite broadband market.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives</title>
<link href="https://hdl.handle.net/1721.1/163268" rel="alternate"/>
<author>
<name>Kim, Jason Gwanhee</name>
</author>
<id>https://hdl.handle.net/1721.1/163268</id>
<updated>2025-10-22T03:33:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives
Kim, Jason Gwanhee
This study examines the determinants of firms adopting performance-vesting long-term incentive (PLI) awards, a rapidly growing form of executive compensation. Using data provided by Equilar on Russell 3000 firms, I investigate how a firm's contracting environment and inter-firm networks influence the adoption and design of PLI awards. I find that stock liquidity and analyst coverage significantly increase the likelihood of adoption by enhancing the informativeness of performance measures. The findings suggest that firms adopt PLI awards to better align managerial incentives with shareholder interests, focusing on the measures that are both reliable and strategically aligned. I also show that board interlocks, particularly those involving compensation committee members, and shared compensation consultants play a significant role in facilitating the diffusion of PLI awards across firms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Essays on Content Moderation Interventions for Addressing Online Misinformation</title>
<link href="https://hdl.handle.net/1721.1/163267" rel="alternate"/>
<author>
<name>Martel, Cameron</name>
</author>
<id>https://hdl.handle.net/1721.1/163267</id>
<updated>2025-10-22T03:30:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Essays on Content Moderation Interventions for Addressing Online Misinformation
Martel, Cameron
In Chapter 1, I examine the efficacy of fact-checker warning labels as a content moderation intervention for addressing online misinformation. Warning labels from professional fact-checkers are one of the most historically used interventions against online misinformation. But are fact-checker warning labels effective for those who distrust fact-checkers? In a first correlational study, we validate a measure of trust in fact-checkers. Next, we conduct meta-analyses across 21 experiments in which participants evaluated true and false news posts and were randomized to either see no warning labels or to see warning labels on a high proportion of the false posts. Warning labels were on average effective at reducing belief in, and sharing of, false headlines. While warning effects were smaller for participants with less trust in fact-checkers, warning labels nonetheless significantly reduced belief in, and sharing of, false news even for those most distrusting of fact-checkers. Our results suggest fact-checker warning labels are a broadly effective tool for combatting misinformation.&#13;
&#13;
In Chapter 2, joint with Jennifer Allen, Gordon Pennycook, and David G. Rand, I investigate the potential of crowdsourced fact-checking systems to identify misleading online content. Social media platforms are increasingly adopting crowd-based content moderation interventions for identifying false or misleading content. However, existing theories posit that lay individuals can be highly politically biased, and that these strong political motivations inherently undermine accuracy. Alternatively, we propose that political and accuracy motivations may, in some cases, operate in tandem – in which case politically motivated individuals need not hamper truth discernment. We empirically assess this by analyzing a survey study of misinformation flagging and field data from X’s Community Notes. Consistent with a simple model of flagging behavior, posts that are both false and politically discordant are flagged the most. Importantly, we find that more politically motivated users flag a greater number of posts, engage in more politically biased flagging, and yet exhibit the same or better flagging discernment. Together, these results show that politically motivated individuals are integral to provisioning a high overall quantity and quality of crowdsourced fact-checks.&#13;
&#13;
In Chapter 3, I assess the perceived legitimacy of different content moderation interventions for addressing online misinformation. Current content moderation practices have been criticized as unjust. This raises an important question – who do Americans want deciding whether online content is harmfully misleading? We conducted a nationally representative survey experiment in which U.S. participants evaluated the legitimacy of hypothetical content moderation juries tasked with evaluating whether online content was harmfully misleading. These moderation juries varied on whether they were described as consisting of experts, laypeople, or non-juries. We also randomized features of jury composition (size, necessary qualifications) and whether juries engaged in discussion during content evaluation. Overall, participants evaluated expert juries as more legitimate than layperson juries or a computer algorithm. However, modifying layperson jury features helped increase legitimacy perceptions – nationally representative or politically balanced composition enhanced legitimacy, as did increased size, individual juror knowledge qualifications, and enabling juror discussion. Our findings shed light on the foundations of institutional legitimacy in content moderation and have implications for the design of online moderation systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Materials and Devices for Optoelectronic Packaging</title>
<link href="https://hdl.handle.net/1721.1/163266" rel="alternate"/>
<author>
<name>Weninger, Drew Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163266</id>
<updated>2025-10-22T03:31:06Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Materials and Devices for Optoelectronic Packaging
Weninger, Drew Michael
Over the last two decades, improvements in semiconductor manufacturing have allowed for the commercialization of silicon photonic integrated circuits with over 10,000 devices. These chips are critical to data and telecommunications networks where they convert and encode optical signals to electrical signals, and vice versa, in the form of transceivers. Scaling up the number of transceivers and optical fiber connections, or optical input/output (I/O), will be critical to meet the exponential rise in demand for cloud data capacity since 2010.  However, the costly process of active alignment and bonding of optical fiber arrays directly to photonic chips presents a barrier to their high volume packaging and assembly. This approach limits optical I/O density to a maximum of 8 connections per millimeter since optical fibers for communications applications have cladding diameters of 125 micron.&#13;
&#13;
To address this challenge, this thesis explored a new field of silicon integrated photonics involving chip-to-chip (i.e. flip-chip) optical coupling. Evanescent chip-to-chip couplers were designed, fabricated, packaged, and tested for directly connecting silicon photonic chips to other silicon photonic chips, interposers, or printed circuit boards using automated assembly. The design's compact footprint allows for coupler pitches below 10 micron, or an optical I/O density of greater than 100 connections per millimeter, to be realized - an order of magnitude improvement over fiber-to-chip connections. By designing the coupler to use silicon materials and back-end-of-line compatible packaging processes, ease of integration with existing microelectronic foundry tool sets was ensured. Results from an experimental flip-chip coupler prototype showed greater than 90% coupling efficiency with micron scale alignment tolerances when coupling from silicon nitride to silicon-on-insulator waveguides, the first demonstration of such a device. &#13;
&#13;
To further improve optical flip-chip coupler performance, designs were proposed for combining the evanescent coupler with an integrated graded index lens using silicon oxynitride films. Such a device would provide a universal coupling interface in silicon photonics for both chip-to-chip or fiber-to-chip connections. Simulations showed sub-dB coupling loss across all interfaces including flip-chip coupling across a 10 micron gap. Initial fabrication processes were established to deposit, pattern, and etch greater than 10 micron thick silicon oxynitride graded index lenses on silicon and glass substrates. In showing that automated pick-and-place tools can be used for photonic chip assembly, this work represents a critical step in eliminating active alignment and sustainably scaling optical I/O in future transceiver packages.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Selecting for and Selecting Despite: A Javanese case study</title>
<link href="https://hdl.handle.net/1721.1/163265" rel="alternate"/>
<author>
<name>Lesure, Cora</name>
</author>
<id>https://hdl.handle.net/1721.1/163265</id>
<updated>2025-10-22T03:30:41Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Selecting for and Selecting Despite: A Javanese case study
Lesure, Cora
This is an investigation of the argument structure of Javanese (Austronesian, Indonesia) which focuses on the distribution of four core derivational morphemes: the Actor Voice prefix, and the suffixes -ake, -i, and -an. The project is based on original consultant work conducted with a speaker of the Central dialect of Javanese. The work establishes language internal diagnostics for various aspects of a stem's lexical semantics and lexical category and then utilizes these criteria to analyze a wide variety of morphological derivatives, both verbal and nominal. The resulting analysis is able to predict the distribution of derivational morphemes and the nature of their resulting derivatives to a higher degree than what was previously understood to be possible.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncovering Mandarin Speaker Knowledge with Language Game Experiments</title>
<link href="https://hdl.handle.net/1721.1/163264" rel="alternate"/>
<author>
<name>Fu, Boer</name>
</author>
<id>https://hdl.handle.net/1721.1/163264</id>
<updated>2025-10-22T03:30:33Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Uncovering Mandarin Speaker Knowledge with Language Game Experiments
Fu, Boer
Mandarin Chinese offers many intriguing puzzles for linguists because it has a shortage of morphophonological alternations. This has resulted in indeterminacy in various aspects of its phonological grammar, triggering much debate on syllable structure and allophonic mapping. The ambiguity of the data is also a problem for children acquiring Mandarin since alternative grammars can account for the surface forms equally well.&#13;
&#13;
In order to find out what Mandarin speakers have learned about the phonology of their language, I conducted two language game experiments based on fanqie secret languages. It was found that markedness and faithfulness constraints are psychologically real for Mandarin speakers. Furthermore, the interactions between markedness and faithfulness constraints are shown to have an effect on glide movement in the language game. In addition, much speaker variation was observed in the experiment. I demonstrate that it is the result of constraint ranking variation. Nevertheless, general population-level trends on constraint ranking could still be identified. These trends lead to insights on phonological learning beyond Mandarin, showing evidence for naturalness bias and lexicon optimization.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Business Value of Enterprise Digital Architecture</title>
<link href="https://hdl.handle.net/1721.1/163263" rel="alternate"/>
<author>
<name>Venkata Aditya, Saraswatula (Adi SV)</name>
</author>
<id>https://hdl.handle.net/1721.1/163263</id>
<updated>2025-10-22T03:33:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Business Value of Enterprise Digital Architecture
Venkata Aditya, Saraswatula (Adi SV)
Digital technologies are fundamentally reshaping markets and organizations globally. This thesis is exploratory research that seeks to explain how large multi-regional and global enterprises determine, prioritize, measure, and manage business value outcomes of digital investments over time. I examine the value construct of digital initiatives in firms from different industries by interviewing various stakeholders. Insights surfaced from this primary research are analyzed in conjunction with the concepts from current literature. Qualitative findings are proposed, and a list of value metrics is presented that can serve as a future reference for firms. A causal loop diagram is proposed to visualize firm capabilities and value dynamics.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Risk-Aware Reinforcement Learning with Safety Constraints</title>
<link href="https://hdl.handle.net/1721.1/163262" rel="alternate"/>
<author>
<name>Feng, Meng</name>
</author>
<id>https://hdl.handle.net/1721.1/163262</id>
<updated>2025-10-22T03:30:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Risk-Aware Reinforcement Learning with Safety Constraints
Feng, Meng
Safety is a critical concern in reinforcement learning (RL) and learning-based systems more broadly, as ensuring reliable and safe decision-making is essential for their deployment in real-world applications. Traditional approaches to address safety often rely on techniques such as reward shaping, carefully curated training data, or explicit handcrafted rules to avoid unsafe actions. More recent advancements have adopted the Constrained Markov Decision Process (CMDP) framework, which trains agents while explicitly enforcing constraints on auxiliary measures such as safety or risk. However, these methods often suffer from significant constraint violations. This thesis identifies the root cause of such violations as stemming from the pursuit of maximal task performance in each policy update. Given the inherent limitations of sample-based constraint assessments in RL, where data is limited and approximation errors are inevitable, these methods often fail near constraint boundaries, leading to excessive violations. To address this, we propose a novel constrained reinforcement learning algorithm that dynamically adjusts its conservativeness during policy updates. By incorporating the risk of constraint violation into the update process, our method can shift focus toward constraint satisfaction when violations are likely, while still striving to improve task performance whenever feasible. Our algorithm reduces constraint violations by up to 99% compared to state-of-the-art baselines while achieving comparable task performance. In the second part of this thesis, we extend CMDPs to address multi-goal, long-horizon problems. We augment the CMDP formulation to incorporate goals, enabling it to handle multiple goals while preserving goal-independent constraint specifications in CMDP. To tackle the complexity of long-horizon tasks with high-dimensional inputs (e.g., visual observations), we propose a method that integrates planning with safe reinforcement learning. 
By leveraging deep reinforcement learning, we acquire the essential components for planning, including a low-dimensional state-space representation and planning heuristics. The planning algorithm then decomposes long-horizon problems into a sequence of shorter, easier subgoal-reaching tasks. The learned agents safely navigate toward these subgoals step by step, ultimately reaching the final goal. We evaluate our method on both single-agent and multi-agent tasks. In 2D navigation, our approach demonstrated up to 74.2% risk reduction, while in visual navigation, it achieved up to 49.3% risk reduction, all while reaching comparable or better success rates.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163261" rel="alternate"/>
<author>
<name>Oludipe, Lanre</name>
</author>
<id>https://hdl.handle.net/1721.1/163261</id>
<updated>2025-10-22T03:33:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain
Oludipe, Lanre
The increasing demand for faster consumer delivery has led retailers to establish smaller regional distribution centers alongside traditional main distribution centers (MDCs). However, the limited capacity of some of these regional centers heightens the need for precise inventory forecasting and deployment to minimize excess inventory, particularly when few viable outlets exist for excess inventory. This research examines strategies to mitigate excess inventory at regional centers through inventory rebalancing, the integration of additional outlets, and modifications to existing inventory policies. A Monte Carlo simulation was conducted to compare the performance of the current system with a modified system incorporating these enhancements. The results showed that the modified system improved capacity utilization and reduced inventory deployment from the MDC without affecting margin. These improvements can enable more agile operations at smaller regional centers, reduce inventory buildup, and reduce the pressure of precise inventory deployment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications</title>
<link href="https://hdl.handle.net/1721.1/163260" rel="alternate"/>
<author>
<name>Knapp, Rachael</name>
</author>
<id>https://hdl.handle.net/1721.1/163260</id>
<updated>2025-10-22T03:33:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications
Knapp, Rachael
The global shift to electric vehicles (EVs) is progressing rapidly, driven by the need to reduce greenhouse gas (GHG) emissions and global reliance on fossil fuels. However, fleet electrification presents unique challenges, particularly in regard to rolling out the necessary charging infrastructure and operational efficiency. This study examines how various depot-based fleet charging strategies impact up-front capital and long-term operational expenditures. The operational feasibility of each method is evaluated through the use of a discrete event simulation. The study incorporates fleet data to assess the time required to charge the fleet, the number of chargers needed, and the number of associates needed to operate manual strategies. The analyzed charging methods include dedicated level 2 charging, vehicle swapping, level 2 cable swapping, level 3 cable swapping, and sequential and simultaneous charging. Key findings indicate that while a 1:1 vehicle-to-charger ratio ensures charging reliability within the designated time, it incurs the highest capital costs. Alternative strategies, such as cable swapping and simultaneous charging, significantly reduce costs while successfully charging the fleet within the charging window.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs</title>
<link href="https://hdl.handle.net/1721.1/163259" rel="alternate"/>
<author>
<name>Kasliwal, Mohit</name>
</author>
<id>https://hdl.handle.net/1721.1/163259</id>
<updated>2025-10-22T03:33:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs
Kasliwal, Mohit
This thesis presents an integrated optimization framework designed for the large-scale deployment of electric vehicles (EVs) within commercial fleets, specifically focusing on balancing emissions reduction and operational cost efficiencies. Utilizing Verizon’s extensive fleet of over 10,000 light-duty vehicles across 1,000 sites as a case study, the research addresses the challenges and complexities of effective site selection for such a large and dispersed fleet. &#13;
The research involved developing and testing several optimization models under varying scenarios, including scenarios prioritizing maximum operational savings, maximum emissions reduction, and a hybrid model employing an internal cost of carbon (ICC) to balance both operational and environmental objectives. The model essentially develops a ranking system for sites – suggesting which sites to electrify in which year and order, with how many EV conversions (from existing ICE vehicles) at each site.&#13;
The results highlight the importance of tailoring EV deployment strategies to site-specific conditions, such as unique vehicle usage patterns, grid emissions profiles, regional operational costs, and available incentives. Particularly, smaller sites were found to offer greater relative benefits in terms of both cost savings and emissions reductions per unit of capital invested due to their high average mileage, making them strategic priorities for early electrification.&#13;
Operational feasibility was also thoroughly examined, recommending practical constraints such as limiting the number of sites electrified annually to ensure project manageability and effectiveness. &#13;
Sensitivity analyses addressed critical uncertainties such as battery degradation over the vehicle lifespan and the impact of extreme weather on EV performance. These analyses underscore the necessity of conservative battery range buffers ("safe ranges"). Robust load management strategies can be deployed to significantly reduce demand charges and optimize charging schedules based on time-of-use rates where available.&#13;
Recommendations from the study advocate for implementing a hybrid optimization strategy incorporating an ICC based on corporate goals, continuous adaptive management informed by ongoing data collection, and strategic infrastructure investments to future-proof EV deployments. Policy alignment is also critical to enhance economic viability via incentives and ensure regulatory compliance.&#13;
Finally, the thesis proposes future research directions, including investigation of advanced load management and integration with renewable energy sources, exploring bi-directional charging to add revenue streams, incorporating marginal operating emissions rate (MOER) data to further reduce grid emissions and exploring the resilience of EV fleets to power outages. These initiatives aim to further enhance strategic decision-making and ensure the long-term sustainability and efficiency of fleet electrification programs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Chain: Building Resilience in the Insurance Value Chain</title>
<link href="https://hdl.handle.net/1721.1/163258" rel="alternate"/>
<author>
<name>Chuah, Chung Jin</name>
</author>
<id>https://hdl.handle.net/1721.1/163258</id>
<updated>2025-10-22T03:33:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Breaking the Chain: Building Resilience in the Insurance Value Chain
Chuah, Chung Jin
This thesis examines how strategic transformation approaches reshape the resilience of the Property &amp; Casualty (P&amp;C) insurance industry in light of ongoing technological disruption, climate change, and regulatory pressures. Through empirical analysis of nine insurers, the study reveals that while all transformation types improve performance, phased 'test-refine-execute' strategies achieve superior outcomes by combining operational focus with strategic agility. The research identifies four implementation levers: (i) digital modernization, (ii) phased transformation execution, (iii) resource-allocation agility, and (iv) aligned leadership. Together, these levers explain why some transformations succeed where others fail.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain Adaptation of VLM for Soccer Video Understanding</title>
<link href="https://hdl.handle.net/1721.1/163257" rel="alternate"/>
<author>
<name>Jiang, Tiancheng(Tony)</name>
</author>
<id>https://hdl.handle.net/1721.1/163257</id>
<updated>2025-10-22T03:33:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Domain Adaptation of VLM for Soccer Video Understanding
Jiang, Tiancheng(Tony)
Vision Language Models (VLMs) have demonstrated strong performance in multi-modal tasks by effectively aligning visual and textual representations. However, most video-understanding VLM research has been domain-agnostic, leaving their transfer-learning capability in specialized domains underexplored. In this work, we address this gap by exploring the adaptability of open-source VLMs to specific domains, focusing on soccer as an initial case study. Our approach uses large-scale soccer datasets and an LLM to create instruction-following data, and uses these data to iteratively fine-tune a general-domain VLM in a curriculum-learning fashion (first teaching the model key soccer concepts, then question-answering tasks). The final adapted model, trained on a curated dataset of 20k video clips, exhibits significant improvement on soccer-specific tasks compared to the base model, with a 37.5% relative improvement on the visual question-answering task and an accuracy improvement from 11.8% to 63.5% on the downstream soccer action classification task.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization</title>
<link href="https://hdl.handle.net/1721.1/163256" rel="alternate"/>
<author>
<name>Garber, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/163256</id>
<updated>2025-10-22T03:33:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization
Garber, Jeremy
This thesis analyzes and validates autonomous Finished Vehicle Logistics (FVLa) operations at the plant of an automotive Original Equipment Manufacturer (OEM) through the development of a Vehicle-Plug-In (VPI) system with Level 4 autonomous driving capabilities. The research combines process flow analysis with FlexSim simulation modeling to optimize operational parameters and assess safety performance. Results demonstrate FVLa operational feasibility with a recommended VPI inventory of 750 units and a 6-hour replenishment cycle. The study's key contributions include a validated operational model using Economic Order Quantity calculations and a safety framework utilizing Bayesian Networks, establishing foundations for the planned 2028 implementation while maintaining required throughput rates and safety standards.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonized Cement Manufacturing via Advanced Production Technologies</title>
<link href="https://hdl.handle.net/1721.1/163255" rel="alternate"/>
<author>
<name>Norwalk, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163255</id>
<updated>2025-10-22T03:33:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonized Cement Manufacturing via Advanced Production Technologies
Norwalk, Michael
Cement production is the second-largest source of industrial carbon dioxide emissions worldwide. Due to the chemical reactions inherent in its production and the temperatures required to drive those reactions, cement is considered a “hard-to-decarbonize” industry. In this study, three emerging technologies for reducing the carbon intensity of industrial processes (direct high-temperature electric process heat, electric process heat utilizing thermal storage, and liquid amine-based carbon capture) are assessed in the context of a greenfield cement production facility relative to a new-build conventional cement plant fueled with natural gas. Cement plants utilizing this set of technologies were modeled in five U.S. geographies to determine the relative economic returns. The economics were assessed, inclusive of available economic incentives, both for the scenario in which the cement produced is sold in the U.S. market and for the scenario in which it is exported to the European Union (E.U.) market, to assess potential benefits from the E.U. carbon pricing system. The analysis indicates that at current technology prices, the economic returns of the assessed technologies, while in some cases profitable, continue to lag those of conventional production technology for the domestic U.S. market. As costs decline with increasing deployment, carbon capture solutions have the potential to become competitive with conventional technology. The E.U. carbon emissions penalties alter the economics such that implementing carbon capture systems becomes the most attractive economic option, demonstrating the power of carbon emissions markets. With increased technology deployment and the adoption of targeted incentives in the U.S. market, the adoption of low-carbon cement production technologies can be accelerated.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Obscured universality in Mandarin</title>
<link href="https://hdl.handle.net/1721.1/163254" rel="alternate"/>
<author>
<name>Chen, Fulang</name>
</author>
<id>https://hdl.handle.net/1721.1/163254</id>
<updated>2025-10-22T03:30:31Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Obscured universality in Mandarin
Chen, Fulang
In this dissertation, I investigate the apparently distinctive syntactic properties associated with the BEI-construction, the BA-construction, and resultative constructions in Mandarin Chinese, which I argue obscure properties that are universal across natural languages. In the case of the Mandarin BEI-construction, it exhibits both passive-like and tough-movement-like properties. I argue for a novel analysis of the BEI-construction as a passive construction, where the passive head/BEI hosts a composite probe [&#120601; + Ā], which triggers composite A/Ā-movement, in the sense of Van Urk (2015). The subject in the BEI-construction is derived via (successive-cyclic) composite A/Ā-movement, followed by a terminating step of A-movement, similar to Longenbaugh’s (2017) analysis of English tough-movement. Under the proposed analysis, the mixed A/Ā-properties associated with the BEI-construction are direct consequences of composite A/Ā-movement (following Van Urk 2015; Longenbaugh 2017). In the case of the Mandarin BA-construction, it involves an apparently pre-posed noun phrase (the post-BA NP) with an affectedness interpretation, which can be identified with either the subject of a resultative phrase in a complex predicate or the direct object of a simple transitive verb. I argue for a novel analysis of the Mandarin BA-construction as a causative construction, where the causative head, which selects a predicate of the caused/resulting event and projects a predicate of the causing event (following Pylkkänen 2002, 2008), has two additional arguments: a causer and a causee. The post-BA NP, as the causee argument of the causative head, also controls a PRO subject in a resultative phrase (following Huang 1992), which is overt in a complex-predicate BA-construction and is phonologically null in a simple-transitive BA-construction (following Sybesma 1992, 1999).
The post-BA NP is interpreted as being affected in the causing event, in the sense that it is caused to perform an action or undergo a change of state (following Alsina 1992). Lastly, in Mandarin, there is no apparent unaccusative-unergative distinction in resultative constructions, unlike languages like English, where distinctions between resultative constructions with unaccusative and unergative matrix verbs follow from the Unaccusativity Hypothesis (Perlmutter 1978; Burzio 1986) and general principles such as the Direct Object Restriction (Simpson 1983; Levin &amp; Rappaport Hovav 1995) and Burzio’s generalization (Burzio 1986). I argue that resultative constructions in Mandarin are causative constructions, where the causative head has four possible argument structures, depending on whether the matrix verb is unaccusative, unergative, or transitive, as well as the semantic relation between the matrix subject and the matrix verb (and between the post-verbal NP and the matrix verb). Despite the fact that the argument structure of the causative head obscures the argument structure of the matrix verb, I argue that in Mandarin resultative constructions, the sole argument of an unaccusative matrix verb is always a causee argument, whether or not an additional causer external argument is present, while the sole argument of an unergative matrix verb, which is a causer external argument otherwise, is a causer argument when the causer is an internal argument. The dissertation showcases how Mandarin provides insight into defending and expanding our knowledge of cross-linguistic properties such as passivization (which embodies Burzio’s generalization and feature-driven movement), composite probing, the bi-clausal syntax and bi-eventive semantics of causative constructions, as well as the nature of affectedness (in causative constructions) and implications for the Unaccusativity Hypothesis and the Uniformity of Theta-Assignment Hypothesis (Baker 1988).
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Polarity Ion Electrospray Propulsion</title>
<link href="https://hdl.handle.net/1721.1/163253" rel="alternate"/>
<author>
<name>Shaik, Saba Zareen</name>
</author>
<id>https://hdl.handle.net/1721.1/163253</id>
<updated>2025-10-22T03:33:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Single-Polarity Ion Electrospray Propulsion
Shaik, Saba Zareen
Electrospray thrusters are highly efficient spacecraft propulsion devices that accelerate ions sourced from ionic liquid propellants to produce thrust. Typically, electrosprays are fired in a dual-polarity configuration in which the polarity of the ion beam is periodically reversed. This strategy is difficult to implement and imposes limitations on system size and performance. We instead propose a single-polarity design where negative ions are emitted continuously from the thruster, enabling extreme miniaturization, faster startup, better emission stability, and simpler power processing. This thesis investigates two challenges associated with the single-polarity design. First, system lifetime is of principal importance for electrospray propulsion systems in general and must be verified for a single-polarity implementation. Long-duration electrospray tests are performed, demonstrating that single-polarity thrusters achieve lifetimes and performance comparable to state-of-the-art systems, with high mass utilization and minimal hardware degradation. An additional challenge is propellant electrochemistry, triggered when positive counterions accumulate in the ionic liquid. A suite of experiments is conducted to identify and characterize electrochemical processes, including electrical double-layer potential evolution and gas-phase product formation, in electrospray thrusters over long firing durations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Analysis of Semiconductor Investment Environments&#13;
in the U.S. and China</title>
<link href="https://hdl.handle.net/1721.1/163252" rel="alternate"/>
<author>
<name>Zhang, Hanxue</name>
</author>
<id>https://hdl.handle.net/1721.1/163252</id>
<updated>2025-10-22T03:33:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Comparative Analysis of Semiconductor Investment Environments&#13;
in the U.S. and China
Zhang, Hanxue
Semiconductors are fundamental to Artificial Intelligence (AI) and central to global technological competition. Against this backdrop, this thesis compares semiconductor primary investment environments in the United States and China, examining their implications for industry development and innovation. The study employs a mixed-methods approach, combining expert interviews, data analysis, and natural language processing (NLP). It draws on primary market investment, M&amp;A deal, and government grant data to examine capital structures, investment stages, sectoral focus, and exit efficiency. Furthermore, it analyzes nearly 3,000 semiconductor industry reports (2020-2025) to identify evolving strategic priorities and thematic trends shaping these environments. Findings reveal that China’s state-led, vertically integrated model prioritizes upstream capacity building and supply chain autonomy, supported by government guidance funds, private capital, and policy-driven mechanisms. However, a significant gap remains in leading-edge chips, necessitating precise investments and patient capital to bridge this divide. The U.S. ecosystem, by contrast, shaped by major technology firms and federal support, focuses on design innovation and cutting-edge technologies. However, structural constraints such as limited exit pathways, fragmented fabrication capacity, and insufficient industrial policies may hinder the U.S. in nurturing innovation-driven small and medium-sized enterprises (SMEs) in the semiconductor industry. This thesis highlights the structural divergence between the U.S. and Chinese semiconductor ecosystems by examining policy, primary market capital, and investment dynamics. It offers policymakers and investors a strategic overview of how these forces shape innovation and resilience, while identifying emerging investment priorities and future development paths.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Automotive Production Volume Using Regression and Time Series Modelling</title>
<link href="https://hdl.handle.net/1721.1/163251" rel="alternate"/>
<author>
<name>Gong, Yutao</name>
</author>
<id>https://hdl.handle.net/1721.1/163251</id>
<updated>2025-10-22T03:33:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Forecasting Automotive Production Volume Using Regression and Time Series Modelling
Gong, Yutao
Accurate forecasting of automotive production volumes is a critical capability for suppliers navigating an increasingly volatile industry. Overly optimistic forecasts, particularly from Original Equipment Manufacturers (OEMs), lead to misallocated capacity and lost opportunities across the supply chain. This thesis investigates whether advanced statistical models can improve upon benchmark industry forecasts and provide automotive suppliers with more reliable, practical tools for demand planning. Several forecasting methodologies are evaluated, including ARIMA, standard linear regression, Lasso regression, the Theta model, and a hybrid Boosted Theta model. Models are tested across North America, Europe, and Greater China using 2000-2024 vehicle production and macroeconomic data. Results show that the Theta model outperforms industry forecasts across both 1-year and 5-year horizons in North America and Europe. Its simplicity, low data requirements, and robustness to market volatility make it suitable for industrial use. The model was successfully implemented at Commonwealth Rolled Products, an aluminum rolling mill in Kentucky and a portfolio company of American Industrial Partners (AIP), where it was adopted for 2025 planning and drove a shift toward data-centric forecasting practices. This research presents a real-world example of applying academic techniques to actual business problems, serving as a valuable reference for suppliers seeking to improve forecast accuracy and operational planning in the evolving automotive landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The role of university venture funds in supporting early-stage Japanese startups</title>
<link href="https://hdl.handle.net/1721.1/163250" rel="alternate"/>
<author>
<name>Brillaud, Nami</name>
</author>
<id>https://hdl.handle.net/1721.1/163250</id>
<updated>2025-10-22T03:33:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The role of university venture funds in supporting early-stage Japanese startups
Brillaud, Nami
This thesis explores how university venture funds in Japan are uniquely positioned to turn the country’s innovation capacity into entrepreneurial capacity by supporting early-stage startups. While Japan consistently ranks high in research output, much of this potential is not being translated into successful entrepreneurship. Risk capital is scarce compared to other ecosystems, particularly for deep tech, and support systems for early-stage startups are still limited. University venture funds – which inherently connect universities, entrepreneurs, and risk capital – are well positioned to bridge this gap. Yet despite their growing relevance, their evolving role in supporting Japanese early-stage startups is understudied.&#13;
&#13;
This study compares university venture funds with different profiles – ranging from leading and longstanding funds like UTEC, to public-private venture funds established through government initiatives, to recent funds with diversified structures – analyzing how they are structured, how they invest, and what results they have seen so far. It then builds on startup examples and interviews with university venture funds to identify how these funds can better support early-stage startups through improved fund operations, stronger pre-seed support, and a strategic approach to growth and exits. Ultimately, this thesis advocates for actionable solutions informed by global practices but adapted to Japan’s unique startup ecosystem.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Procurement Data for Cost Saving Application</title>
<link href="https://hdl.handle.net/1721.1/163249" rel="alternate"/>
<author>
<name>Pan, Haoting</name>
</author>
<id>https://hdl.handle.net/1721.1/163249</id>
<updated>2025-10-22T03:33:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Procurement Data for Cost Saving Application
Pan, Haoting
In an increasingly data-driven business environment, procurement analytics plays a critical role in optimizing costs and improving supply chain efficiency. This thesis examines the development and implementation of the Lifecycle Cost Management (LCM) tool at Caterpillar Inc., a global leader in heavy equipment manufacturing. Given Caterpillar's decentralized procurement structure, managing cost-saving initiatives across its 150 facilities (Caterpillar | Caterpillar Frequently Asked Questions (FAQs), n.d.) and 28,000 suppliers (Caterpillar | Caterpillar at a Glance, n.d.) poses a significant challenge. The LCM tool leverages machine learning models to identify overpriced purchase orders (POs) and generate actionable cost-saving opportunities.&#13;
This study explores the methodology used to enhance LCM's predictive capabilities, including data sourcing and cleaning, feature engineering, model selection, and validation. Various regression models, clustering techniques, and machine learning algorithms, such as Random Forest and XGBoost, are tested to identify cost outliers. A validation process is implemented to ensure that flagged outliers are cost-saving opportunities appropriate for execution.&#13;
Beyond technical development, the thesis addresses the processes of digital tool adoption within Caterpillar’s procurement teams. A change management approach is employed, incorporating buyer interviews, stakeholder engagement, and iterative user experience (UX) improvements. Through case studies, the study highlights the machine learning model performance and tangible financial impact of LCM. &#13;
The LCM tool has identified more than $100M in data-driven potential savings, of which the team aims to realize 20%. Because Caterpillar’s procurement contracts are often long-term, these savings can be considered perpetual. &#13;
Findings indicate that while machine learning models effectively identify cost outliers, their success is contingent on robust data governance, stakeholder buy-in, and integration into procurement workflows. The study underscores the importance of data management, organizational alignment, and continuous refinement of digital procurement tools. Recommended future work includes enhancing data infrastructure, integrating AI-driven contract management and analysis, and refining cost estimation methodologies. The insights gained contribute to the broader application of procurement analytics and digital transformation in manufacturing enterprises.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment</title>
<link href="https://hdl.handle.net/1721.1/163248" rel="alternate"/>
<author>
<name>DiDio, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/163248</id>
<updated>2025-10-22T03:33:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment
DiDio, Isabella
Advancements in visual inspection technologies and machine learning algorithms present Johnson &amp; Johnson Vision with an opportunity to enhance quality control for Acuvue contact lenses, addressing inefficiencies such as unnecessary scrap, customer complaints, and lead time variability. With over 5 billion lenses produced annually across 100 manufacturing lines, the proposed implementation of advanced camera optics and machine learning for inspection aims to improve defect detection accuracy, minimize manual inspection, and reduce customer complaints.&#13;
An impact evaluation and prioritization framework was developed to strategically implement these upgrades across 100 manufacturing lines, integrating historical data analysis, financial modeling, and engineering risk assessments. Key findings highlight that complaint reduction, scrap savings, and labor cost reductions are the primary drivers of cost savings, with inventory savings offering incremental benefits over time.&#13;
In conclusion, this research demonstrates how advanced technologies can be integrated into manufacturing processes. By aligning engineering solutions with strategic business objectives, the findings provide actionable insights for managing large-scale technological upgrades across global networks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Through the Viewfinder: Reimagining the Camera as a Tool for AI Image Generation</title>
<link href="https://hdl.handle.net/1721.1/163247" rel="alternate"/>
<author>
<name>Shodipo, Bukunmi</name>
</author>
<id>https://hdl.handle.net/1721.1/163247</id>
<updated>2025-10-22T03:35:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">AI Through the Viewfinder: Reimagining the Camera as a Tool for AI Image Generation
Shodipo, Bukunmi
The rapid emergence of artificial intelligence (AI) is causing profound shifts within the art world, reigniting age-old debates on the boundaries of what can be considered art. For example, many AI systems are employed to mimic the styles of existing artists and their works. Although this approach is deemed derivative and uninspiring by many people in the art world, it is also forcing us to reconsider longstanding beliefs attached to creativity, such as the importance of originality and authorship. Given that AI is here to stay, this thesis explores a critical question around AI and perception, asking “How and what does AI see?” Specifically, this thesis investigates the types of biases that are ingrained or embedded into AI systems, and how these biases are reflected in the output, specifically in the context of images. As part of this investigation, this thesis culminates in a prototype: an AI camera that embodies the process of AI ‘seeing the world’. This camera integrates photography with artificial intelligence, serving not only as a tool for technical exploration but also as a metaphor for examining how AI technologies offer diverse and potentially transformative perspectives on reality, much like a traditional camera. By abstracting AI technology into a camera, this project aims to start a conversation about how AI, like a camera, offers us different, sometimes biased views of the world. In doing so, the camera is redefined from a mere tool for capturing images to one that generates them, and in some cases (mis)represents human forms and identities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007</title>
<link href="https://hdl.handle.net/1721.1/163246" rel="alternate"/>
<author>
<name>Tan, Yi-Ern Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163246</id>
<updated>2025-10-22T03:33:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007
Tan, Yi-Ern Samuel
In the late 1980s, Miyake Design Studio began to register patents concerning the Studio’s development of novel techniques to process pleated clothing. Their first patent, filed in 1989, was registered in designer Issey Miyake’s name, detailing the use of an industrial machine to pleat an entire garment after sewing, reversing the order of the conventional approach to creating pleated garments. In the years that followed, this entry into what I term “technical discourse” would proliferate with the Studio’s establishment of the PLEATS PLEASE brand specializing in pleated garments, and the A-POC (“a piece of cloth”) project with designer and textile engineer Fujiwara Dai. Each of these projects produced numerous patents, including a period between 1997 and 2008 I call the “Miyake Patent Explosion,” when the Studio filed twenty patents with the Japan Patent Office and its international counterparts.&#13;
&#13;
In contrast to aesthetic discourses proposing the value of a work on its artistic merits and intellectual content, technical discourse points to the profusion of texts produced and circulated by the Studio—in this thesis, patents and legal claims—to uphold the utility of their products and their protection as intellectual property. By engaging with technical discourse, Miyake Design Studio was not only creating legal safeguards around the ideas it considered proprietary. Rather, its extensive production of technical discourse positioned Miyake as a figure who exceeded the boundaries of fashion, approaching its adjacent categories of unhyphenated design, architecture, and art, within whose circles his objects circulate as currency.&#13;
&#13;
Exploring these texts as they are deployed in the defense of intellectual property, I argue that technical discourse can be treated as a form of historical archive that allows us to historicize claims to technological inheritance that bear upon the discussion of Miyake’s work. Specifically, I look to patents as a citational practice, or as Alain Pottage and Brad Sherman write, a “chain of reference” through which patent lawyers and engineers make deliberate connections between one technology and another to acknowledge, distinguish, and legitimize. Examining three episodes where technical discourse opens the way for historical narrative—a lawsuit over imitation goods, a case of mistaken identity in design criticism, and a moment of technological dissolution—I argue that we cannot divorce Miyake and his work from the technical complex that surrounds the Studio’s production of objects. Turning to these technical discourses that exist in the public record, I suspend the promise of monographic history that peers into the mind of the individual and probe instead the possibilities of seeing agencies beyond those attributed to the authorial figure of Miyake: his corporate apparatus, his allies, his admirers, his critics, his opponents, the receptive public.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Joint Inference of the Lexicon and Phonology Affects the Learnability of Process Interactions</title>
<link href="https://hdl.handle.net/1721.1/163245" rel="alternate"/>
<author>
<name>Yang, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163245</id>
<updated>2025-10-22T03:29:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">How Joint Inference of the Lexicon and Phonology Affects the Learnability of Process Interactions
Yang, Christopher
Contemporary phonological research has increasingly become interested in exploring the topic of learnability through the use of computational models. However, many of the proposed models lack one or more of the following properties. (1) Many models do not consider the effect of the lexicon on performance at all, and those that do fail to consider the effect contextual allomorphy has on performance. (2) Many models characterize learnability in terms of the algorithmic implementation of search, rather than a more principled relationship between the data and the hypothesis space. These properties are critically relevant when it comes to the learnability of process interactions. The experimental literature has demonstrated that artificial languages exhibiting patterns generated from certain process interactions are more likely to be successfully reproduced and generalized by participants than others (Ettlinger 2008; Kim 2012; Brooks, Pajak, &amp; Baković 2013; Prickett 2019). The historical literature has likewise noted that surface patterns generated from particular process interactions are more likely to change in systematic ways than others, including lexicalization, in which an alternation is encoded into the lexicon instead of the phonology, and reanalysis, in which a surface generalization is lost or changed entirely (Kiparsky 1968, 1971). Each of these hypotheses makes different predictions when generating forms not seen during training. In this dissertation, I make the following contributions. (1) I propose a novel noisy-channel model of morphophonological learning. This model jointly infers a weighted space of consistent and nearly consistent lexicons and grammars from labelled, unparsed surface data. Predictions are generated given the entirety of the inferred weighted space. 
(2) I compare the predictions of the model to the results of two artificial language learning experiments, which, despite involving the same underlying processes, produced contradictory results. I show that the model is able to achieve the results of both experiments under a unified account: the generalizability of a pattern is determined by the number of hypotheses compatible or nearly compatible with that pattern.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for resonant pair production of Higgs bosons in the bb̄bb̄ final state using large-area jets in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163244" rel="alternate"/>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Escalante Del Valle, A.</name>
</author>
<author>
<name>Frühwirth, R.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Lechner, L.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<author>
<name>Mikulec, I.</name>
</author>
<author>
<name>Paulitsch, P.</name>
</author>
<author>
<name>Pitters, F. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163244</id>
<updated>2026-03-08T03:28:06Z</updated>
<published>2025-02-07T00:00:00Z</published>
<summary type="text">Search for resonant pair production of Higgs bosons in the bb̄bb̄ final state using large-area jets in proton-proton collisions at √s = 13 TeV
Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Escalante Del Valle, A.; Frühwirth, R.; Jeitler, M.; Krammer, N.; Lechner, L.; Liko, D.; Mikulec, I.; Paulitsch, P.; Pitters, F. M.
A search is presented for the resonant production of a pair of standard model-like Higgs bosons using data from proton-proton collisions at a centre-of-mass energy of 13 TeV, collected by the CMS experiment at the CERN LHC in 2016–2018, corresponding to an integrated luminosity of 138 fb⁻¹. The final state consists of two b quark-antiquark pairs. The search is conducted in the region of phase space where at least one of the pairs is highly Lorentz-boosted and is reconstructed as a single large-area jet. The other pair may be either similarly merged or resolved, the latter reconstructed using two b-tagged jets. The data are found to be consistent with standard model processes and are interpreted as 95% confidence level upper limits on the product of the cross sections and the branching fractions of the spin-0 radion and the spin-2 bulk graviton that arise in warped extradimensional models. The limits set are in the range 9.74–0.29 fb and 4.94–0.19 fb for a narrow radion and a graviton, respectively, with masses between 1 and 3 TeV. For a radion and for a bulk graviton with widths 10% of their masses, the limits are in the range 12.5–0.35 fb and 8.23–0.23 fb, respectively, for the same masses. These limits result in the exclusion of a narrow-width graviton with a mass below 1.2 TeV, and of narrow and 10%-width radions with masses below 2.6 and 2.9 TeV, respectively.
</summary>
<dc:date>2025-02-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future Circular Collider Feasibility Study Report</title>
<link href="https://hdl.handle.net/1721.1/163243" rel="alternate"/>
<author>
<name>Benedikt, M.</name>
</author>
<author>
<name>Zimmermann, F.</name>
</author>
<author>
<name>Auchmann, B.</name>
</author>
<author>
<name>Bartmann, W.</name>
</author>
<author>
<name>Burnet, J. P.</name>
</author>
<author>
<name>Carli, C.</name>
</author>
<author>
<name>Chancé, A.</name>
</author>
<author>
<name>Craievich, P.</name>
</author>
<author>
<name>Giovannozzi, M.</name>
</author>
<author>
<name>Grojean, C.</name>
</author>
<author>
<name>Gutleber, J.</name>
</author>
<author>
<name>Hanke, K.</name>
</author>
<author>
<name>Henriques, A.</name>
</author>
<author>
<name>Janot, P.</name>
</author>
<author>
<name>Lourenço, C.</name>
</author>
<author>
<name>Mangano, M.</name>
</author>
<author>
<name>Otto, T.</name>
</author>
<author>
<name>Poole, J.</name>
</author>
<author>
<name>Rajagopalan, S.</name>
</author>
<author>
<name>Raubenheimer, T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163243</id>
<updated>2026-03-08T03:27:57Z</updated>
<published>2025-10-13T00:00:00Z</published>
<summary type="text">Future Circular Collider Feasibility Study Report
Benedikt, M.; Zimmermann, F.; Auchmann, B.; Bartmann, W.; Burnet, J. P.; Carli, C.; Chancé, A.; Craievich, P.; Giovannozzi, M.; Grojean, C.; Gutleber, J.; Hanke, K.; Henriques, A.; Janot, P.; Lourenço, C.; Mangano, M.; Otto, T.; Poole, J.; Rajagopalan, S.; Raubenheimer, T.
Volume 3 of the FCC Feasibility Report presents studies related to civil engineering, the development of a project implementation scenario, and environmental and sustainability aspects. The report details the iterative improvements made to the civil engineering concepts since 2018, taking into account subsurface conditions, accelerator and experiment requirements, and territorial considerations. It outlines a technically feasible and economically viable civil engineering configuration that serves as the baseline for detailed subsurface investigations, construction design, cost estimation, and project implementation planning. Additionally, the report highlights ongoing subsurface investigations in key areas to support the development of an improved 3D subsurface model of the region. The report describes the development of the project scenario based on the ‘avoid-reduce-compensate’ iterative optimisation approach. The reference scenario balances optimal physics performance with territorial compatibility, implementation risks, and costs. Environmental field investigations covering almost 600 hectares of terrain—including numerous urban, economic, social, and technical aspects—confirmed the project’s technical feasibility and contributed to the preparation of essential input documents for the formal project authorisation phase. The summary also highlights the initiation of public dialogue as part of the authorisation process. The results of a comprehensive socio-economic impact assessment, which included significant environmental effects, are presented. Even under the most conservative and stringent conditions, a positive benefit-cost ratio for the FCC-ee is obtained. Finally, the report provides a summary of the studies conducted to document the current state of the environment.
</summary>
<dc:date>2025-10-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Locational and Spatial Development Patterns in U.S. Urban Micro Housing</title>
<link href="https://hdl.handle.net/1721.1/163242" rel="alternate"/>
<author>
<name>Wang, Bing</name>
</author>
<author>
<name>Seiler, Michael J.</name>
</author>
<author>
<name>Liu, Kui</name>
</author>
<author>
<name>Du, Jinfeng</name>
</author>
<id>https://hdl.handle.net/1721.1/163242</id>
<updated>2026-03-08T03:28:25Z</updated>
<published>2025-10-16T00:00:00Z</published>
<summary type="text">Locational and Spatial Development Patterns in U.S. Urban Micro Housing
Wang, Bing; Seiler, Michael J.; Liu, Kui; Du, Jinfeng
While previous studies of micro-housing have primarily relied on qualitative methods or case-based analyses, this study deploys a more rigorous, data-driven approach. We construct a hand-collected dataset covering 11 major U.S. cities to enable a quantitative examination of this emerging housing form. Drawing on 40 variables from 32 projects, including locational data, physical characteristics, market performance, and amenity features, we identified five distinct micro-housing typologies: TechEd, Dependent, Stand-Alone, Luxury, and Affordable Sharing Economy. In the context of increasing remote work and the growing influence of the sharing economy, these distinct micro-housing types are becoming increasingly relevant as an urban development model. This paper represents a first step toward systematically understanding these building typologies and uncovers their locational patterns through empirical analysis.
</summary>
<dc:date>2025-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Psyche Mission Description and Design Rationale</title>
<link href="https://hdl.handle.net/1721.1/163241" rel="alternate"/>
<author>
<name>Polanskey, Carol A.</name>
</author>
<author>
<name>Elkins-Tanton, Linda T.</name>
</author>
<author>
<name>Bell, James F.</name>
</author>
<author>
<name>Alonge, Eleanor K.</name>
</author>
<author>
<name>Bairstow, Sarah H.</name>
</author>
<author>
<name>Binzel, Richard P.</name>
</author>
<author>
<name>Biswas, Abhijit</name>
</author>
<author>
<name>Bury, Luke</name>
</author>
<author>
<name>Cisneros, Ernest</name>
</author>
<author>
<name>Han, Dongsuk</name>
</author>
<author>
<name>Jun, Insoo</name>
</author>
<author>
<name>Klipstein, William M.</name>
</author>
<author>
<name>Lawrence, David J.</name>
</author>
<author>
<name>McCoy, Timothy J.</name>
</author>
<author>
<name>Mastrodemos, Nickolaos</name>
</author>
<id>https://hdl.handle.net/1721.1/163241</id>
<updated>2026-03-08T03:27:58Z</updated>
<published>2025-10-14T00:00:00Z</published>
<summary type="text">Psyche Mission Description and Design Rationale
Polanskey, Carol A.; Elkins-Tanton, Linda T.; Bell, James F.; Alonge, Eleanor K.; Bairstow, Sarah H.; Binzel, Richard P.; Biswas, Abhijit; Bury, Luke; Cisneros, Ernest; Han, Dongsuk; Jun, Insoo; Klipstein, William M.; Lawrence, David J.; McCoy, Timothy J.; Mastrodemos, Nickolaos
The Psyche spacecraft launched on October 13, 2023, to journey to the asteroid of the same name. Psyche is the largest M-class asteroid and possibly the remnant core of an early differentiated planetesimal that was disrupted by collisions. The Psyche mission will test that hypothesis as the 14th mission in NASA’s Discovery Program. An alternative hypothesis is that the asteroid is unmelted primordial material. We describe the proposal competition process leading to selection of the mission and its context with other small body missions. This paper will briefly introduce the three science instruments, gravity science investigation, and Deep Space Optical Communications technology demonstration, leading into a detailed explanation of the science mission architecture. The orbital science phase is divided into a series of circular mapping orbits at four distinct altitudes, each selected to address specific science objectives. The requirements and objectives for each orbit are accompanied by an assessment of the effectiveness of each phase. We discuss the structure of the Psyche team during the operations phase along with the roles and responsibilities of the science and flight operations teams. Key elements of mission operations that are unique to the Psyche mission are provided. The Science Data Center manages and archives the Psyche mission data. The contents of the archive data sets for each instrument are outlined as well as the interfaces between the Science Data Center, the instrument teams, and the Planetary Data System.
</summary>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Perfect Matchings</title>
<link href="https://hdl.handle.net/1721.1/163240" rel="alternate"/>
<author>
<name>Cui, David</name>
</author>
<author>
<name>Mančinska, Laura</name>
</author>
<author>
<name>Nezhadi, Seyed S.</name>
</author>
<author>
<name>Roberson, David E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163240</id>
<updated>2026-03-08T03:27:44Z</updated>
<published>2025-10-14T00:00:00Z</published>
<summary type="text">Quantum Perfect Matchings
Cui, David; Mančinska, Laura; Nezhadi, Seyed S.; Roberson, David E.
We investigate quantum and nonsignaling generalizations of perfect matchings in graphs using nonlocal games. Specifically, we introduce nonlocal games that test for L-perfect matchings in bipartite graphs, perfect matchings in general graphs and hypergraphs, and fractional perfect matchings. Our definitions come from the fact that these games are classical property tests for the corresponding matching conditions. We use the existence of perfect quantum and nonsignaling strategies for these games to define quantum and nonsignaling versions of perfect matchings. Finally, we provide characterizations of when graphs exhibit these extended properties: For nonsignaling matchings, we give a complete combinatorial characterization. In particular, a graph has a nonsignaling perfect matching if and only if it admits a fractional perfect matching that has bounded value on triangles. In bipartite graphs, the nonsignaling L-perfect matching property is achieved exactly when the left component of the graph can be split into two disjoint subgraphs: one with a classical L-perfect matching and another with left-degree 2. In the quantum setting, we show that complete graphs K_n with odd n ≥ 7 have quantum perfect matchings. We prove that a graph has a quantum perfect matching if and only if the quantum independence number of its line graph is maximal, extending a classical relationship between perfect matchings and line graph independence numbers. For bipartite graphs, we establish that the L-perfect matching game does not exhibit quantum pseudotelepathy, but we characterize the quantum advantage for complete bipartite graphs K_{n,2}. Additionally, we prove that deciding quantum perfect matchings in hypergraphs is undecidable and leave open the question of its complexity in graphs.
</summary>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trophic transfer of lipid-derived energy through Adélie and gentoo penguins near Palmer Station along the west Antarctic Peninsula</title>
<link href="https://hdl.handle.net/1721.1/163239" rel="alternate"/>
<author>
<name>Bent, Shavonna M.</name>
</author>
<author>
<name>Cimino, Megan A.</name>
</author>
<author>
<name>Connors, Elizabeth J.</name>
</author>
<author>
<name>Thomas, Maya I.</name>
</author>
<author>
<name>Miller, Carolyn A.</name>
</author>
<author>
<name>Fredricks, Helen F.</name>
</author>
<author>
<name>Van Mooy, Benjamin A. S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163239</id>
<updated>2026-03-08T03:28:24Z</updated>
<published>2025-10-14T00:00:00Z</published>
<summary type="text">Trophic transfer of lipid-derived energy through Adélie and gentoo penguins near Palmer Station along the west Antarctic Peninsula
Bent, Shavonna M.; Cimino, Megan A.; Connors, Elizabeth J.; Thomas, Maya I.; Miller, Carolyn A.; Fredricks, Helen F.; Van Mooy, Benjamin A. S.
Although Adélie and gentoo penguins are experiencing similar climatic conditions along the west Antarctic Peninsula (WAP), Adélie populations have decreased in the northern WAP, while gentoo populations have increased. We examined the lipid component of regurgitated prey (chick diets) from each penguin species to elucidate broader population trends. Nearly 90% of chick diet samples were composed of only krill, which we confirmed contained abundant phosphatidyl choline. Chick diets rich in fish had similar total caloric content to krill-only diets; however, these “fishy” chick diets had significantly more energy derived from triacylglycerides, an important energy-rich storage molecule, and were only found in gentoo penguins. We found that whole-krill eaten by adult penguins had 1.25–3.75 times more energy than chick diets, highlighting the role of digestion in the transfer of energy to chicks. Our results highlight dynamics between climate, predator–prey relationships, and trophic transfer of energy in the Antarctic food web.
</summary>
<dc:date>2025-10-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>IsoDAR@Yemilab: Preliminary design report—volume I (cyclotron driver)</title>
<link href="https://hdl.handle.net/1721.1/163238" rel="alternate"/>
<author>
<name>Winklehner, Daniel</name>
</author>
<author>
<name>Abs, Michel</name>
</author>
<author>
<name>Alonso, Jose R.</name>
</author>
<author>
<name>Conrad, Janet M.</name>
</author>
<author>
<name>Engebretson, Samuel J.</name>
</author>
<author>
<name>Forton, Eric</name>
</author>
<author>
<name>Herrod, Alexander T.</name>
</author>
<author>
<name>Joassin, Denis</name>
</author>
<author>
<name>Moon, Jarrett</name>
</author>
<author>
<name>de Neuter, Sébastien</name>
</author>
<author>
<name>Van der Kraaij, Erik</name>
</author>
<author>
<name>Wéry, Gil</name>
</author>
<author>
<name>Winkler, Eleanor</name>
</author>
<author>
<name>Adelmann, Andreas</name>
</author>
<author>
<name>Axani, Spencer N.</name>
</author>
<author>
<name>Barletta, William A.</name>
</author>
<author>
<name>Barlow, Roger</name>
</author>
<author>
<name>Bartoszek, Larry</name>
</author>
<author>
<name>Bungau, Adriana</name>
</author>
<author>
<name>Calabretta, Luciano</name>
</author>
<id>https://hdl.handle.net/1721.1/163238</id>
<updated>2026-03-08T03:28:05Z</updated>
<published>2025-10-15T00:00:00Z</published>
<summary type="text">IsoDAR@Yemilab: Preliminary design report—volume I (cyclotron driver)
Winklehner, Daniel; Abs, Michel; Alonso, Jose R.; Conrad, Janet M.; Engebretson, Samuel J.; Forton, Eric; Herrod, Alexander T.; Joassin, Denis; Moon, Jarrett; de Neuter, Sébastien; Van der Kraaij, Erik; Wéry, Gil; Winkler, Eleanor; Adelmann, Andreas; Axani, Spencer N.; Barletta, William A.; Barlow, Roger; Bartoszek, Larry; Bungau, Adriana; Calabretta, Luciano
This Preliminary Design Report (PDR) describes the IsoDAR electron-antineutrino source in two volumes which are mostly site-independent and describe the cyclotron driver providing a 10 mA/60 MeV proton beam (this Volume); and the medium energy beam transport line (MEBT) and target (Volume II). The IsoDAR driver and target will produce about 1.15 × 10²³ electron-antineutrinos over 5 years while operating with the anticipated 10 mA/60 MeV beam at an estimated 80% duty factor. Paired with a kton-scale liquid scintillator detector, it will enable a broad particle physics program including searches for new symmetries, new interactions and new particles. Here in Volume I, we describe the driver, which includes the ion source, low energy beam transport, and cyclotron. The latter features Radio-Frequency Quadrupole (RFQ) direct axial injection and represents the first accelerator purpose-built to make use of so-called vortex motion.
</summary>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-Based Inverse Problem Approach for Estimating Operating Conditions in Forced Convection Systems with Uncertainty Quantification</title>
<link href="https://hdl.handle.net/1721.1/163237" rel="alternate"/>
<author>
<name>Kim, Haeseong</name>
</author>
<author>
<name>Cetiner, Sacit M</name>
</author>
<author>
<name>Bucci, Matteo</name>
</author>
<id>https://hdl.handle.net/1721.1/163237</id>
<updated>2026-03-08T03:28:23Z</updated>
<published>2025-09-03T00:00:00Z</published>
<summary type="text">Physics-Based Inverse Problem Approach for Estimating Operating Conditions in Forced Convection Systems with Uncertainty Quantification
Kim, Haeseong; Cetiner, Sacit M; Bucci, Matteo
Accurately determining the operating conditions of thermal systems with limited measurements is a critical challenge in convection-dominated problems of interest for nuclear engineering applications. Because of the complexity of these phenomena, existing research has often relied on data-driven reconstruction of physical quantities. In this work, instead of using a data-driven approach, which usually lacks interpretability, we focus on a physics-based inverse problem to estimate unknown causes from available observations. We address the problem of estimating operating conditions (such as heat source intensity and flow rate) in a steady-state turbulent forced convection system from a limited number of temperature measurements. Based on a forward model with quantified uncertainty, we employed Newton’s method to estimate unknown parameters and incorporated uncertainty quantification. The uncertainty analysis addresses the impact of measurement uncertainty and errors in closure relationships. The identified uncertainties provide insights into their mitigation and inform experimental design. The structured approach to inverse analysis enables accurate estimation with minimal sensor data, as shown in this specific example. The analysis will contribute to the development of advanced sparse sensing techniques, with potential implications for broader industrial and environmental applications.
</summary>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design-Based Uncertainty for Quasi-Experiments</title>
<link href="https://hdl.handle.net/1721.1/163236" rel="alternate"/>
<author>
<name>Rambachan, Ashesh</name>
</author>
<author>
<name>Roth, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/163236</id>
<updated>2026-03-08T03:28:28Z</updated>
<published>2025-08-27T00:00:00Z</published>
<summary type="text">Design-Based Uncertainty for Quasi-Experiments
Rambachan, Ashesh; Roth, Jonathan
Design-based frameworks of uncertainty are frequently used in settings where the treatment is (conditionally) randomly assigned. This article develops a design-based framework suitable for analyzing quasi-experimental settings in the social sciences, in which the treatment assignment can be viewed as the realization of some stochastic process but there is concern about unobserved selection into treatment. In our framework, treatments are stochastic, but units may differ in their probabilities of receiving treatment, thereby allowing for rich forms of selection. We provide conditions under which the estimands of popular quasi-experimental estimators correspond to interpretable finite-population causal parameters. We characterize the biases and distortions to inference that arise when these conditions are violated. These results can be used to conduct sensitivity analyses when there are concerns about selection into treatment. Taken together, our results establish a rigorous foundation for quasi-experimental analyses that more closely aligns with the way empirical researchers discuss the variation in the data. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
</summary>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Golden Dome and Arms Control: Impediment or Opportunity?</title>
<link href="https://hdl.handle.net/1721.1/163235" rel="alternate"/>
<author>
<name>Vaddi, Pranay R.</name>
</author>
<author>
<name>Warden, John K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163235</id>
<updated>2026-03-08T03:28:19Z</updated>
<published>2025-07-15T00:00:00Z</published>
<summary type="text">Golden Dome and Arms Control: Impediment or Opportunity?
Vaddi, Pranay R.; Warden, John K.
The Trump administration identified arms control talks with Russia and China as an early priority. At the same time, the US President directed the Defense Department to develop a comprehensive air and missile defense system for the United States, and potentially for forward-deployed forces and allies as well. The interrelationship between strategic offensive and defensive arms will complicate, but not necessarily derail, the administration’s strategic arms control agenda.
</summary>
<dc:date>2025-07-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Syndicated Lending Relationships, Information Asymmetry, and Market Making in the Secondary Loan Market</title>
<link href="https://hdl.handle.net/1721.1/163234" rel="alternate"/>
<author>
<name>PHILLIPS, MATTHEW A</name>
</author>
<id>https://hdl.handle.net/1721.1/163234</id>
<updated>2026-03-08T03:28:26Z</updated>
<published>2025-07-16T00:00:00Z</published>
<summary type="text">Syndicated Lending Relationships, Information Asymmetry, and Market Making in the Secondary Loan Market
PHILLIPS, MATTHEW A
This paper investigates why commercial lenders make markets for the loans that they sell on the secondary market. Using loan-level data, I find that origination lenders with extensive borrower relationships and more reputational capital at stake are more likely to serve as market makers. Greater participation of origination lenders as market makers is associated with lower trading costs for their borrowers’ loans. This association remains even in conditions where origination lenders could exploit their information advantage for market making profits. Lenders benefit from being market makers by maintaining strong subsequent lending relationships with their borrowers. Collectively, this evidence is consistent with origination lenders’ participation in the secondary market being motivated by reducing trading frictions rather than market making profits.
</summary>
<dc:date>2025-07-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolution of the South Pacific's Iron Cycle Over the Cenozoic</title>
<link href="https://hdl.handle.net/1721.1/163233" rel="alternate"/>
<author>
<name>Tegler, Logan A.</name>
</author>
<author>
<name>Horner, Tristan J.</name>
</author>
<author>
<name>Nielsen, Sune G.</name>
</author>
<author>
<name>Heard, Andy W.</name>
</author>
<author>
<name>Squires, Katherine R.</name>
</author>
<author>
<name>Severmann, Silke</name>
</author>
<author>
<name>Peucker‐Ehrenbrink, Bernhard</name>
</author>
<author>
<name>Blusztajn, Jerzy</name>
</author>
<author>
<name>Dunlea, Ann G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163233</id>
<updated>2026-03-08T03:28:21Z</updated>
<published>2025-07-03T00:00:00Z</published>
<summary type="text">Evolution of the South Pacific's Iron Cycle Over the Cenozoic
Tegler, Logan A.; Horner, Tristan J.; Nielsen, Sune G.; Heard, Andy W.; Squires, Katherine R.; Severmann, Silke; Peucker‐Ehrenbrink, Bernhard; Blusztajn, Jerzy; Dunlea, Ann G.
Iron (Fe) availability impacts marine primary productivity, potentially influencing the efficiency of the biological carbon pump. Stable Fe isotope analysis has emerged as a tool to understand how Fe is sourced and cycled in the water column; however, its application to sediment records is complicated by overlapping isotope signatures of different sources and uncertainties in establishing chronologies. To overcome these challenges, we integrate Fe and osmium isotope measurements with multi-element geochemical analysis and statistical modeling. We apply this approach to reconstruct the history of Fe delivery to the South Pacific from three pelagic clay sequences spanning 93 million years. Our analysis reveals five principal Fe sources—dust, distal background, two distinct hydrothermal inputs, and a magnesium-rich volcanic ash. Initially, hydrothermal inputs dominated Fe deposition, but as the sites migrated away from their respective mid-ocean ridges, other sources became prominent. Notably, from 66 to 40 million years ago (Ma), distal background Fe was the primary source before a shift to increasing dust dominance around 30 Ma. This transition implies that Fe in South Pacific seawater has been dust-dominated since ≈30 Ma, despite extremely low dust deposition rates today. We speculate that the shift to episodic and low Fe fluxes in the South Pacific and Southern Ocean over the Cenozoic helped shape an ecological niche that favored phytoplankton that adapted to these conditions, such as diatoms. Our analysis highlights how Fe delivery to the ocean is driven by large-scale tectonic and climatic shifts, while also influencing climate through its integral role in marine phytoplankton and Earth's biogeochemical cycles.
</summary>
<dc:date>2025-07-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Starship as an Enabling Option for a Uranus Flagship Mission</title>
<link href="https://hdl.handle.net/1721.1/163232" rel="alternate"/>
<author>
<name>Gochenaur, Daniel</name>
</author>
<author>
<name>Gentgen, Chloe</name>
</author>
<author>
<name>de Weck, Olivier</name>
</author>
<id>https://hdl.handle.net/1721.1/163232</id>
<updated>2026-03-08T03:28:20Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Starship as an Enabling Option for a Uranus Flagship Mission
Gochenaur, Daniel; Gentgen, Chloe; de Weck, Olivier
In 2022, the National Academy of Sciences Planetary Science Decadal Survey recommended exploration of Uranus as its highest priority Flagship mission for the 2030s. The Decadal recommendation relied on the Uranus Orbiter and Probe (UOP) concept as its baseline for the mission. UOP assumed a launch in 2031 on a Falcon Heavy Expendable rocket and an intermediate Jupiter flyby, allowing it to arrive at Uranus before 2050. At present, it is likely that the original UOP launch will be postponed, which will cause a Jupiter gravity assist to become unavailable and could delay the arrival at Uranus. However, a later launch date allows us to consider launch vehicles currently under development such as SpaceX's Starship, a two-stage heavy-lift launch vehicle that is intended to be refuelable on-orbit. Although Starship's performance capabilities have yet to be demonstrated, current development timelines suggest they will be known before selecting a launch vehicle for a Uranus mission. This study investigates the possibility of leveraging the anticipated capabilities of Starship to support a Flagship mission to Uranus. The results show that with on-orbit refueling, Starship will be capable of performing direct transfer to Uranus without the need for intermediate planetary flybys. Direct transfer with Starship orbit insertion allows nearly five metric tonnes of mass to be deployed to Uranus orbit using nine refueling launches in ten years, compared to more than thirteen years for UOP. If the spacecraft is used to perform the orbit insertion maneuver, five tonnes of mass can be deployed in less than nine years with seven refueling trips. Larger payload masses and shorter times of flight can be achieved by using Starship to perform aerocapture. As a mid- to high-lift to drag ratio vehicle, Starship can successfully perform aerocapture while maintaining deceleration and heating values that are not more severe than those observed by aerocapture studies for other vehicles.
With seven refueling launches and a seven-year transfer time of flight, Starship can deliver nearly six tonnes of payload mass to Uranus using aerocapture. With a longer time of flight and additional refueling launches, mission masses greater than fifty tonnes can be delivered to Uranus orbit. By using Starship to deploy a spacecraft and probe of a similar design as UOP, the reduced transfer times can facilitate an arrival at Uranus well before equinox, and can enable science phases of up to ten years. Performing the insertion burn with Starship also increases the Δv available for the science tour. Using the UOP architecture would make the mission compatible with both Falcon Heavy and Starship, thereby reducing risk. Alternatively, the additional payload mass that can be deployed to Uranus with Starship can enhance the orbiter and probe architecture beyond the current design, potentially allowing for a larger instrument suite, additional probes, and even a secondary spacecraft. To this end, a Uranus Flagship mission using Starship presents a higher-risk, yet potentially greater-science-return option that could become viable if financial conditions permit.
2025 IEEE Aerospace Conference, 1-8 March, Big Sky, MT, USA
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Research Trends Using Knowledge Graphs and Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/163231" rel="alternate"/>
<author>
<name>Tomczak, Maciej</name>
</author>
<author>
<name>Park, Yang Jeong</name>
</author>
<author>
<name>Hsu, Chia‐Wei</name>
</author>
<author>
<name>Brown, Payden</name>
</author>
<author>
<name>Massa, Dario</name>
</author>
<author>
<name>Sankowski, Piotr</name>
</author>
<author>
<name>Li, Ju</name>
</author>
<author>
<name>Papanikolaou, Stefanos</name>
</author>
<id>https://hdl.handle.net/1721.1/163231</id>
<updated>2026-03-08T03:28:25Z</updated>
<published>2025-09-12T00:00:00Z</published>
<summary type="text">Forecasting Research Trends Using Knowledge Graphs and Large Language Models
Tomczak, Maciej; Park, Yang Jeong; Hsu, Chia‐Wei; Brown, Payden; Massa, Dario; Sankowski, Piotr; Li, Ju; Papanikolaou, Stefanos
Since ancient times, oracles (e.g., at Delphi) have offered useful visions of where society is headed, based on key event correlations and educated guesses. Today, foundation models are able to distill and analyze enormous text-based data that can be used to understand where societal components are headed in the future. This work investigates the use of three large language models (LLMs) and their ability to aid research on nuclear materials. Using a large dataset of Journal of Nuclear Materials papers spanning 2001 to 2021, models are evaluated and compared using perplexity, similarity of output, and knowledge-graph metrics such as shortest path length. Models are compared to the highest performer, OpenAI's GPT-3.5. LLM-generated knowledge graphs with more than 2 × 10^5 nodes and 3.3 × 10^5 links are analyzed per publication year, and temporal tracking leads to the identification of criteria for publication innovation, controversy, influence, and future research trends.
</summary>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Cloud Feedbacks Over the Atlantic With Bias‐Corrected Downscaling</title>
<link href="https://hdl.handle.net/1721.1/163230" rel="alternate"/>
<author>
<name>Liu, Shuchang</name>
</author>
<author>
<name>Zeman, Christian</name>
</author>
<author>
<name>Schär, Christoph</name>
</author>
<id>https://hdl.handle.net/1721.1/163230</id>
<updated>2026-03-08T03:28:27Z</updated>
<published>2025-06-16T00:00:00Z</published>
<summary type="text">Assessing Cloud Feedbacks Over the Atlantic With Bias‐Corrected Downscaling
Liu, Shuchang; Zeman, Christian; Schär, Christoph
Clouds exert a significant impact on global temperatures and climate change. Cloud-radiative feedback (CRF) is one of the major sources of climate change uncertainty. Understanding CRF is therefore crucial for accurate climate projections. Biases like the double-ITCZ problem in Global Climate Models (GCMs) hamper precise climate projections. Here, we explore a bias-corrected downscaling method to constrain the cloud feedback uncertainties in the tropical and sub-tropical Atlantic region. We use regional climate model (RCM) simulations with convection-permitting resolution, driven by debiased driving fields from three different global climate models (GCMs). Bias-corrected downscaling significantly reduces biases in ITCZ intensity and position, eliminating the double-ITCZ bias across all six experiments (three GCMs for historical and future periods). We explore the new methodology's potential to investigate the CRF in comparison to that of the driving GCMs. Results indicate that additional GCMs and RCMs are necessary for a more comprehensive uncertainty estimation and more conclusive results, while our simulations suggest a potentially narrower range of CRF over the tropical and subtropical Atlantic, primarily due to an improved representation of stratocumulus clouds. Our study highlights the potential of bias-corrected downscaling in constraining the uncertainty of simulations and estimates of cloud feedback and equilibrium climate sensitivity. The results advocate for further simulations with additional RCMs and domains for a more comprehensive analysis.
</summary>
<dc:date>2025-06-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise</title>
<link href="https://hdl.handle.net/1721.1/163229" rel="alternate"/>
<author>
<name>Cezairli, Mina</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/163229</id>
<updated>2025-10-18T03:01:22Z</updated>
<published>2025-10-18T00:00:00Z</published>
<summary type="text">Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise
Cezairli, Mina; Hansman, R. John
</summary>
<dc:date>2025-10-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Practical Engineering Design Optimization with Computational Graph Transformations</title>
<link href="https://hdl.handle.net/1721.1/163228" rel="alternate"/>
<author>
<name>Sharpe, Peter D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163228</id>
<updated>2025-10-18T03:01:00Z</updated>
<published>2025-10-18T00:00:00Z</published>
<summary type="text">Accelerating Practical Engineering Design Optimization with Computational Graph Transformations
Sharpe, Peter D.
</summary>
<dc:date>2025-10-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance, Stability and Control of Electric Short Takeoff and Landing Aircraft</title>
<link href="https://hdl.handle.net/1721.1/163227" rel="alternate"/>
<author>
<name>Courtin, Christopher B.</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/163227</id>
<updated>2025-10-18T03:01:06Z</updated>
<published>2025-10-18T00:00:00Z</published>
<summary type="text">Performance, Stability and Control of Electric Short Takeoff and Landing Aircraft
Courtin, Christopher B.; Hansman, R. John
</summary>
<dc:date>2025-10-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>INCREASING FLEXIBILITY IN THE DESIGN AND OPERATION OF INSTRUMENT FLIGHT PROCEDURES</title>
<link href="https://hdl.handle.net/1721.1/163226" rel="alternate"/>
<author>
<name>Salgueiro, Sandro</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/163226</id>
<updated>2025-10-18T03:01:29Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">INCREASING FLEXIBILITY IN THE DESIGN AND OPERATION OF INSTRUMENT FLIGHT PROCEDURES
Salgueiro, Sandro; Hansman, R. John
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptation of Aglycosylated Monoclonal Antibodies for Improved Production in Komagataella phaffii</title>
<link href="https://hdl.handle.net/1721.1/163225" rel="alternate"/>
<author>
<name>Yang, Yuchen</name>
</author>
<author>
<name>Dalvie, Neil C</name>
</author>
<author>
<name>Brady, Joseph R</name>
</author>
<author>
<name>Naranjo, Christopher A</name>
</author>
<author>
<name>Lorgeree, Timothy</name>
</author>
<author>
<name>Rodriguez‐Aponte, Sergio A</name>
</author>
<author>
<name>Johnston, Ryan S</name>
</author>
<author>
<name>Tracey, Mary K</name>
</author>
<author>
<name>Elenberger, Carmen M</name>
</author>
<author>
<name>Lee, Eric</name>
</author>
<author>
<name>Tié, Mark</name>
</author>
<author>
<name>Love, Kerry R</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163225</id>
<updated>2025-10-18T04:57:24Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Adaptation of Aglycosylated Monoclonal Antibodies for Improved Production in Komagataella phaffii
Yang, Yuchen; Dalvie, Neil C; Brady, Joseph R; Naranjo, Christopher A; Lorgeree, Timothy; Rodriguez‐Aponte, Sergio A; Johnston, Ryan S; Tracey, Mary K; Elenberger, Carmen M; Lee, Eric; Tié, Mark; Love, Kerry R; Love, J Christopher
Monoclonal antibodies (mAbs) are a major class of biopharmaceuticals manufactured by well-established processes using Chinese Hamster Ovary (CHO) cells. Next-generation biomanufacturing using alternative hosts like Komagataella phaffii could improve the accessibility of these medicines, address broad societal goals for sustainability, and offer financial advantages for accelerated development of new products. Antibodies produced by K. phaffii, however, may manifest unique molecular quality attributes, like host-dependent, product-related variants, that could raise potential concerns for clinical use. We demonstrate here conservative modifications to the amino acid sequence of aglycosylated antibodies based on the human IgG1 isotype that minimize product-related variations when secreted by K. phaffii. A combination of 2–3 changes of amino acids reduced variations across six different aglycosylated versions of commercial mAbs. Expression of a modified sequence of NIST mAb in both K. phaffii and CHO cells showed comparable biophysical properties and molecular variations. These results suggest a path toward the production of high-quality mAbs that could be expressed interchangeably by either yeast or mammalian cells. Improving molecular designs of proteins to enable a range of manufacturing strategies for well-characterized biopharmaceuticals could accelerate global accessibility and innovations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hierarchical Behavior Models for Characterizing Trajectories within Terminal Airspace</title>
<link href="https://hdl.handle.net/1721.1/163224" rel="alternate"/>
<author>
<name>Li, Clement</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/163224</id>
<updated>2025-10-18T03:01:25Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Hierarchical Behavior Models for Characterizing Trajectories within Terminal Airspace
Li, Clement; Hansman, R. John
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modulation of antigen delivery and lymph node activation in nonhuman primates by saponin adjuvant saponin/monophosphoryl lipid A nanoparticle</title>
<link href="https://hdl.handle.net/1721.1/163223" rel="alternate"/>
<author>
<name>Yousefpour, Parisa</name>
</author>
<author>
<name>Zhang, Yiming J</name>
</author>
<author>
<name>Maiorino, Laura</name>
</author>
<author>
<name>Melo, Mariane B</name>
</author>
<author>
<name>Arainga Ramirez, Mariluz A</name>
</author>
<author>
<name>Kumarapperuma, Sidath C</name>
</author>
<author>
<name>Xiao, Peng</name>
</author>
<author>
<name>Silva, Murillo</name>
</author>
<author>
<name>Li, Na</name>
</author>
<author>
<name>Michaels, Katarzyna K</name>
</author>
<author>
<name>Georgeson, Erik</name>
</author>
<author>
<name>Eskandarzadeh, Saman</name>
</author>
<author>
<name>Kubitz, Michael</name>
</author>
<author>
<name>Groschel, Bettina</name>
</author>
<author>
<name>Qureshi, Kashif</name>
</author>
<author>
<name>Fontenot, Jane</name>
</author>
<author>
<name>Hangartner, Lars</name>
</author>
<author>
<name>Nedellec, Rebecca</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Burton, Dennis R</name>
</author>
<author>
<name>Schief, William R</name>
</author>
<author>
<name>Villinger, Francois J</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<id>https://hdl.handle.net/1721.1/163223</id>
<updated>2025-10-18T04:57:21Z</updated>
<published>2024-11-25T00:00:00Z</published>
<summary type="text">Modulation of antigen delivery and lymph node activation in nonhuman primates by saponin adjuvant saponin/monophosphoryl lipid A nanoparticle
Yousefpour, Parisa; Zhang, Yiming J; Maiorino, Laura; Melo, Mariane B; Arainga Ramirez, Mariluz A; Kumarapperuma, Sidath C; Xiao, Peng; Silva, Murillo; Li, Na; Michaels, Katarzyna K; Georgeson, Erik; Eskandarzadeh, Saman; Kubitz, Michael; Groschel, Bettina; Qureshi, Kashif; Fontenot, Jane; Hangartner, Lars; Nedellec, Rebecca; Love, J Christopher; Burton, Dennis R; Schief, William R; Villinger, Francois J; Irvine, Darrell J
Saponin-based vaccine adjuvants are potent in preclinical animal models and humans, but their mechanisms of action remain poorly understood. Here, using a stabilized HIV envelope trimer immunogen, we carried out studies in nonhuman primates (NHPs) comparing the most common clinical adjuvant aluminum hydroxide (alum) with saponin/monophosphoryl lipid A nanoparticles (SMNP), an immune-stimulating complex–like adjuvant. SMNP elicited substantially stronger humoral immune responses than alum, including 7-fold higher peak antigen-specific germinal center B-cell responses, 18-fold higher autologous neutralizing antibody titers, and higher levels of antigen-specific plasma and memory B cells. Positron emission tomography and computed tomography imaging in live NHPs showed that, unlike alum, SMNP promoted rapid antigen accumulation in both proximal and distal lymph nodes (LNs). SMNP also induced strong type I interferon transcriptional signatures, expansion of innate immune cells, and increased antigen-presenting cell activation in LNs. These findings indicate that SMNP promotes multiple facets of the early immune response relevant for enhanced immunity to vaccination.
</summary>
<dc:date>2024-11-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vaccines combining slow release and follicle targeting of antigens increase germinal center B cell diversity and clonal expansion</title>
<link href="https://hdl.handle.net/1721.1/163222" rel="alternate"/>
<author>
<name>Rodrigues, Kristen A</name>
</author>
<author>
<name>Zhang, Yiming J</name>
</author>
<author>
<name>Lam, Jonathan</name>
</author>
<author>
<name>Aung, Aereas</name>
</author>
<author>
<name>Morgan, Duncan M</name>
</author>
<author>
<name>Romanov, Anna</name>
</author>
<author>
<name>Maiorino, Laura</name>
</author>
<author>
<name>Yousefpour, Parisa</name>
</author>
<author>
<name>Gibson, Grace</name>
</author>
<author>
<name>Ozorowski, Gabriel</name>
</author>
<author>
<name>Gregory, Justin R</name>
</author>
<author>
<name>Amlashi, Parastoo</name>
</author>
<author>
<name>Van, Richard</name>
</author>
<author>
<name>Buckley, Maureen</name>
</author>
<author>
<name>Ward, Andrew B</name>
</author>
<author>
<name>Schief, William R</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<id>https://hdl.handle.net/1721.1/163222</id>
<updated>2025-10-18T04:57:23Z</updated>
<published>2025-06-18T00:00:00Z</published>
<summary type="text">Vaccines combining slow release and follicle targeting of antigens increase germinal center B cell diversity and clonal expansion
Rodrigues, Kristen A; Zhang, Yiming J; Lam, Jonathan; Aung, Aereas; Morgan, Duncan M; Romanov, Anna; Maiorino, Laura; Yousefpour, Parisa; Gibson, Grace; Ozorowski, Gabriel; Gregory, Justin R; Amlashi, Parastoo; Van, Richard; Buckley, Maureen; Ward, Andrew B; Schief, William R; Love, J Christopher; Irvine, Darrell J
Vaccine adjuvants play important roles in shaping the humoral response to immunization. Here, we analyzed mechanisms of action of a clinically relevant combination adjuvant strategy, where phosphoserine (pSer)–tagged immunogens bound to aluminum hydroxide (alum) adjuvant, promoting prolonged antigen release to draining lymph nodes, are combined with a saponin nanoparticle adjuvant termed SMNP, which alters lymph flow and antigen entry into lymph nodes. When used with a stabilized HIV envelope trimer antigen in mice, this combined adjuvant approach promoted substantial enhancements in germinal center and antibody responses relative to either adjuvant alone. Using single-cell RNA and B cell receptor sequencing, we found that the alum-pSer/SMNP combination augmented the clonal expansion and diversity of the germinal center B cell repertoire, coincident with an increased proportion of S-phase germinal center B cells and expression of positive selection markers. Moreover, we found that the combination adjuvant approach, but not alum-pSer delivery or SMNP alone, promoted accumulation of intact antigen on follicular dendritic cells, reflecting integrated effects of slow antigen delivery and altered lymph node uptake. Genetic ablation of Cr1/2 expression by follicular dendritic cells eliminated antigen accumulation and hampered the antigen-specific germinal center response, supporting antigen delivery to these cells as a key mechanism of the improved response elicited by this combination adjuvant. These results demonstrate how adjuvants with complementary mechanisms of action affecting vaccine biodistribution and kinetics can enhance humoral immunity.
</summary>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task 2 Technical Report</title>
<link href="https://hdl.handle.net/1721.1/163221" rel="alternate"/>
<author>
<name>Perez Gago, Cecilia</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/163221</id>
<updated>2025-10-18T03:01:07Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">Task 2 Technical Report
Perez Gago, Cecilia; Hansman, R. John
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating cell culture media development using Bayesian optimization-based iterative experimental design</title>
<link href="https://hdl.handle.net/1721.1/163220" rel="alternate"/>
<author>
<name>Narayanan, Harini</name>
</author>
<author>
<name>Hinckley, Joshua A</name>
</author>
<author>
<name>Barry, Rachel</name>
</author>
<author>
<name>Dang, Brendan</name>
</author>
<author>
<name>Wolffe, Lenna A</name>
</author>
<author>
<name>Atari, Adel</name>
</author>
<author>
<name>Tseng, Yuen-Yi</name>
</author>
<author>
<name>Love, J Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163220</id>
<updated>2025-10-18T04:57:10Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Accelerating cell culture media development using Bayesian optimization-based iterative experimental design
Narayanan, Harini; Hinckley, Joshua A; Barry, Rachel; Dang, Brendan; Wolffe, Lenna A; Atari, Adel; Tseng, Yuen-Yi; Love, J Christopher
Optimizing operational conditions for complex biological systems used in life sciences research and biotechnology is an arduous task. Here, we apply a Bayesian Optimization-based iterative framework for experimental design to accelerate cell culture media development for two applications. First, we show that this approach yields new compositions of media with cytokine supplementation to maintain the viability and distribution of human peripheral blood mononuclear cells in the culture. Second, we apply this framework to optimize the production of three recombinant proteins in cultivations of K. phaffii. We identified conditions with improved outcomes for both applications compared to the initial standard media using 3–30 times fewer experiments than estimated for other methods such as the standard Design of Experiments. Subsequently, we also demonstrated the extensibility of our approach to efficiently account for additional design factors through transfer learning. These examples demonstrate how coupling data collection, modeling, and optimization in this iterative paradigm, while using an exploration-exploitation trade-off in each iteration, can reduce the time and resources for complex optimization tasks such as the one demonstrated here.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emerging immunomodulatory strategies for cell therapeutics</title>
<link href="https://hdl.handle.net/1721.1/163219" rel="alternate"/>
<author>
<name>Chua, Corrine Ying Xuan</name>
</author>
<author>
<name>Jiang, Allen Yujie</name>
</author>
<author>
<name>Eufrásio-da-Silva, Tatiane</name>
</author>
<author>
<name>Dolatshahi-Pirouz, Alireza</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Orive, Gorka</name>
</author>
<author>
<name>Grattoni, Alessandro</name>
</author>
<id>https://hdl.handle.net/1721.1/163219</id>
<updated>2025-10-18T04:57:19Z</updated>
<published>2023-03-01T00:00:00Z</published>
<summary type="text">Emerging immunomodulatory strategies for cell therapeutics
Chua, Corrine Ying Xuan; Jiang, Allen Yujie; Eufrásio-da-Silva, Tatiane; Dolatshahi-Pirouz, Alireza; Langer, Robert; Orive, Gorka; Grattoni, Alessandro
Cellular therapies are poised to transform the field of medicine by restoring dysfunctional tissues and treating various diseases in a dynamic manner not achievable by conventional pharmaceutics. Spanning various therapeutic areas inclusive of cancer, regenerative medicine, and immune disorders, cellular therapies comprise stem or non-stem cells derived from various sources. Despite numerous clinical approvals or trials underway, the host immune response presents a critical impediment to the widespread adoption and success of cellular therapies. Here, we review current research and clinical advances in immunomodulatory strategies to mitigate immune rejection or promote immune tolerance to cellular therapies. We discuss the potential of these immunomodulatory interventions to accelerate translation or maximize the prospects of improving therapeutic outcomes of cellular therapies for clinical success.
</summary>
<dc:date>2023-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Annotated Bibliography</title>
<link href="https://hdl.handle.net/1721.1/163218" rel="alternate"/>
<author>
<name>Perez Gago, Cecilia</name>
</author>
<author>
<name>Hansman, R. John</name>
</author>
<id>https://hdl.handle.net/1721.1/163218</id>
<updated>2025-10-18T03:01:08Z</updated>
<published>2025-10-17T00:00:00Z</published>
<summary type="text">An Annotated Bibliography
Perez Gago, Cecilia; Hansman, R. John
</summary>
<dc:date>2025-10-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long-Time Quantum–Classical Correspondence for Open Systems in Trace Norm</title>
<link href="https://hdl.handle.net/1721.1/163217" rel="alternate"/>
<author>
<name>Li, Zhenhao</name>
</author>
<id>https://hdl.handle.net/1721.1/163217</id>
<updated>2025-10-18T04:56:57Z</updated>
<published>2025-08-21T00:00:00Z</published>
<summary type="text">Long-Time Quantum–Classical Correspondence for Open Systems in Trace Norm
Li, Zhenhao
We consider a frictionless system coupled to an external Markovian environment. The quantum and classical evolution of such systems are described by the Lindblad and the Fokker–Planck equation, respectively. We show that when such a system is given by an at most quadratically growing Hamiltonian and at most linearly growing real jump functions, the quantum and classical evolutions remain close on time scales much longer than the Ehrenfest time. In particular, we show that the evolution of a density matrix by the Lindblad equation is close in trace norm to the quantization of the corresponding evolution by the Fokker–Planck equation. Such agreement improves upon recent results (Galkowski and Zworski, Classical quantum correspondence in Lindblad evolution, 2024, arXiv:2403.09345; Hernández et al., Decoherence ensures classicality beyond the Ehrenfest time as ħ → 0, 2023, arXiv:2306.13717; Hernández et al., The limit of open quantum systems with general Lindbladians: vanishing noise ensures classicality beyond the Ehrenfest time, 2023, arXiv:2307.05326), which proved long-time agreement in weaker norms.
</summary>
<dc:date>2025-08-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Search for vector-like leptons with long-lived particle decays in the CMS muon system in proton-proton collisions at √s = 13 TeV</title>
<link href="https://hdl.handle.net/1721.1/163216" rel="alternate"/>
<author>
<name>Chekhovsky, V.</name>
</author>
<author>
<name>Hayrapetyan, A.</name>
</author>
<author>
<name>Makarenko, V.</name>
</author>
<author>
<name>Tumasyan, A.</name>
</author>
<author>
<name>Adam, W.</name>
</author>
<author>
<name>Andrejkovic, J. W.</name>
</author>
<author>
<name>Benato, L.</name>
</author>
<author>
<name>Bergauer, T.</name>
</author>
<author>
<name>Chatterjee, S.</name>
</author>
<author>
<name>Damanakis, K.</name>
</author>
<author>
<name>Dragicevic, M.</name>
</author>
<author>
<name>Hussain, P. S.</name>
</author>
<author>
<name>Jeitler, M.</name>
</author>
<author>
<name>Krammer, N.</name>
</author>
<author>
<name>Li, A.</name>
</author>
<author>
<name>Liko, D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163216</id>
<updated>2025-10-18T04:57:03Z</updated>
<published>2025-08-20T00:00:00Z</published>
<summary type="text">Search for vector-like leptons with long-lived particle decays in the CMS muon system in proton-proton collisions at √s = 13 TeV
Chekhovsky, V.; Hayrapetyan, A.; Makarenko, V.; Tumasyan, A.; Adam, W.; Andrejkovic, J. W.; Benato, L.; Bergauer, T.; Chatterjee, S.; Damanakis, K.; Dragicevic, M.; Hussain, P. S.; Jeitler, M.; Krammer, N.; Li, A.; Liko, D.
A first search is presented for vector-like leptons (VLLs) exclusively decaying into a light long-lived pseudoscalar boson and a standard model τ lepton. The pseudoscalar boson is assumed to have a mass below the τ⁺τ⁻ threshold, so that it decays exclusively into two photons. It is identified using the CMS muon system. The analysis is carried out using a data set of proton-proton collisions at a center-of-mass energy of 13 TeV collected by the CMS experiment in 2016–2018, corresponding to an integrated luminosity of 138 fb⁻¹. Selected events contain at least one pseudoscalar boson decaying electromagnetically in the muon system and at least one hadronically decaying τ lepton. No significant excess of data events is observed compared to the background expectation. Upper limits are set at 95% confidence level on the vector-like lepton production cross section as a function of the VLL mass and the pseudoscalar boson mean proper decay length. The observed and expected exclusion ranges of the VLL mass extend up to 700 and 670 GeV, respectively, depending on the pseudoscalar boson lifetime.
</summary>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Percolation effects on fracture in ductile-phase toughened oxide coatings</title>
<link href="https://hdl.handle.net/1721.1/163215" rel="alternate"/>
<author>
<name>Gupta, Isha</name>
</author>
<author>
<name>Kpamegan, Aliya K.</name>
</author>
<author>
<name>Vaidyanathan, Annika M. L.</name>
</author>
<author>
<name>Cordero, Zachary C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163215</id>
<updated>2025-10-18T04:57:00Z</updated>
<published>2025-03-25T00:00:00Z</published>
<summary type="text">Percolation effects on fracture in ductile-phase toughened oxide coatings
Gupta, Isha; Kpamegan, Aliya K.; Vaidyanathan, Annika M. L.; Cordero, Zachary C.
The toughness and damage behaviors of ductile-phase toughened oxide coatings were characterized as the reinforcement volume fraction varied across the percolation threshold. The coatings, consisting of Ni particles in a borate glass-ceramic matrix, showed a rising resistance curve, with the extent of stable crack growth increasing with Ni content. While initiation toughness was relatively insensitive to reinforcement topology, peak toughness increased sharply once the Ni reinforcement percolated, reaching a maximum value of ~160 J/m² in an interpenetrating composite coating with 35 vol% Ni. This toughness is sufficiently high to resist failure in the target application of rocket engine turbomachinery, where coatings must withstand rapid thermal transients upon engine startup and shutdown. Characterization of the crack path confirmed that this toughening increment corresponded to a transition from crack deflection to crack bridging as the dominant toughening mechanism. The implications of these results on design of ductile-phase toughened coatings are discussed. Graphical abstract: Double-cantilever beam specimens with the ductile-phase toughened oxide coating as an interlayer between the two beams.
</summary>
<dc:date>2025-03-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Challenge for Satellite Pattern-of-Life Identification: Dataset, Design and Results</title>
<link href="https://hdl.handle.net/1721.1/163214" rel="alternate"/>
<author>
<name>Siew, Peng M.</name>
</author>
<author>
<name>Solera, Haley E.</name>
</author>
<author>
<name>Lavezzi, Giovanni</name>
</author>
<author>
<name>Roberts, Thomas G.</name>
</author>
<author>
<name>Jang, Daniel</name>
</author>
<author>
<name>Baldsiefen, David</name>
</author>
<author>
<name>Tran, Binh</name>
</author>
<author>
<name>Yeung, Christopher</name>
</author>
<author>
<name>Johnson, Kurtis</name>
</author>
<author>
<name>Metzger, Nathan</name>
</author>
<author>
<name>Porcher, Francois</name>
</author>
<author>
<name>Haik, Isaac</name>
</author>
<author>
<name>Rodriguez-Fernandez, Victor</name>
</author>
<author>
<name>Folcik, Zachary</name>
</author>
<author>
<name>Price, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/163214</id>
<updated>2025-10-18T04:57:05Z</updated>
<published>2025-08-04T00:00:00Z</published>
<summary type="text">AI Challenge for Satellite Pattern-of-Life Identification: Dataset, Design and Results
Siew, Peng M.; Solera, Haley E.; Lavezzi, Giovanni; Roberts, Thomas G.; Jang, Daniel; Baldsiefen, David; Tran, Binh; Yeung, Christopher; Johnson, Kurtis; Metzger, Nathan; Porcher, Francois; Haik, Isaac; Rodriguez-Fernandez, Victor; Folcik, Zachary; Price, Jeffrey
Despite the availability of extensive historical data on Earth-orbiting objects, artificial intelligence (AI) adoption in space domain awareness remains limited. To address this gap, the 2024 MIT ARCLab Prize for AI Innovation in Space challenged participants to develop AI models for characterizing satellite pattern-of-life (PoL) in Geostationary Earth Orbit. The challenge focused on developing machine learning models capable of classifying behavioral patterns and detecting key transition events in multivariate time-series data. The challenge dataset comprised 2402 satellite trajectories spanning six months with a two-hour temporal resolution. The data were generated using high-fidelity satellite propagators based on simulated trajectories, Vector Covariance Message data, and two-line elements. This dataset features diverse operational behaviors and propulsion systems, providing a robust foundation for AI analysis. The challenge attracted over 100 teams worldwide, with more than 350 submissions showcasing a diverse range of AI approaches, including deep learning architectures (CNNs, LSTMs, transformers), gradient-boosting techniques (XGBoost, CatBoost), and hybrid models. The top-performing teams demonstrated AI's effectiveness in PoL characterization, with Hawaii2024 achieving an F2 score of 0.952 on the partial test set using a CNN-LSTM hybrid approach, followed closely by Millennial-IUP and QR_Is, which utilized XGBoost with tailored transition labeling and a gradient-boosted decision tree with a model-stacking strategy, respectively. This paper presents an analysis of the competition's dataset, evaluation methodology, and top-performing solutions.
</summary>
<dc:date>2025-08-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of smart imaging runtime</title>
<link href="https://hdl.handle.net/1721.1/163213" rel="alternate"/>
<author>
<name>Athey, Thomas</name>
</author>
<author>
<name>Sawmya, Shashata</name>
</author>
<author>
<name>Meirovitch, Yaron</name>
</author>
<author>
<name>Schalek, Richard</name>
</author>
<author>
<name>Potocek, Pavel</name>
</author>
<author>
<name>Chandok, Ishaan</name>
</author>
<author>
<name>Peemen, Maurice</name>
</author>
<author>
<name>Lichtman, Jeff</name>
</author>
<author>
<name>Samuel, Aravinthan</name>
</author>
<author>
<name>Shavit, Nir</name>
</author>
<id>https://hdl.handle.net/1721.1/163213</id>
<updated>2025-10-18T04:56:59Z</updated>
<published>2025-08-14T00:00:00Z</published>
<summary type="text">Analysis of smart imaging runtime
Athey, Thomas; Sawmya, Shashata; Meirovitch, Yaron; Schalek, Richard; Potocek, Pavel; Chandok, Ishaan; Peemen, Maurice; Lichtman, Jeff; Samuel, Aravinthan; Shavit, Nir
Smart microscopy is a new imaging approach that involves rapid imaging, prediction of important subregions, then selective re-imaging. This approach has been validated in reducing imaging beam time in electron microscopy connectomics, but the speedup depends on various imaging workflow parameters. Here we present the first runtime analysis of traditional vs. smart microscopy and show how these parameters can magnify or diminish potential time savings. We provide a GUI application that calculates the theoretical time savings of smart microscopy from user input parameters describing their imaging workflow. Finally, we measure end-to-end runtime of SmartEM acquisition on an electron microscope to demonstrate two strategies for faster acquisition: mixed-precision neural networks and parallelization of microscope and support computer operations.
</summary>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>The ππ scattering amplitude at large Nc</title>
<link href="https://hdl.handle.net/1721.1/163212" rel="alternate"/>
<author>
<name>Baeza-Ballesteros, Jorge</name>
</author>
<author>
<name>Hernández, Pilar</name>
</author>
<author>
<name>Romero-López, Fernando</name>
</author>
<id>https://hdl.handle.net/1721.1/163212</id>
<updated>2025-10-18T04:56:56Z</updated>
<published>2025-08-14T00:00:00Z</published>
<summary type="text">The ππ scattering amplitude at large Nc
Baeza-Ballesteros, Jorge; Hernández, Pilar; Romero-López, Fernando
We study the scaling of meson-meson scattering amplitudes with the number of colors, Nc. We use lattice calculations in a theory with Nf = 4 degenerate flavors, with Nc = 3–6 and pion mass Mπ ≈ 560 MeV. We focus on three different scattering channels, two of which have the same quantum numbers as some tetraquark candidates recently found at LHCb: the Tcs0(2900)0, Tcs̄0(2900)++, Tcs̄0(2900)0, and Tcs1(2900)0 states. Finite-volume energies are extracted using a large set of operators, containing two-particle operators with the form of two pions or two vector mesons, and local tetraquark operators. The resulting energy spectra are used to constrain the infinite-volume scattering amplitude by means of Lüscher’s quantization condition. We consider polynomial parametrizations of the phase shift, as well as one-loop chiral perturbation theory (ChPT) predictions. We find that our lattice results follow the expected Nc scaling and are sensitive to subleading Nc corrections. In addition, we constrain the scaling of different combinations of low-energy constants from matching to large-Nc ChPT. The results for the channel corresponding to a π+Ds+ − K+D+ state show evidence of a virtual bound state with energy Evirtual = 1.63(10)Mπ for Nc = 3, while this pole disappears at Nc &gt; 3. This may be connected to the exotic states found in experiment.
</summary>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2024</title>
<link href="https://hdl.handle.net/1721.1/163211" rel="alternate"/>
<author>
<name>Chakrabarty, Deepto</name>
</author>
<id>https://hdl.handle.net/1721.1/163211</id>
<updated>2025-10-18T04:58:34Z</updated>
<published>2024-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2024
Chakrabarty, Deepto
This report contains the following sections: Faculty Count, Promotions and Departures, Administration, Faculty Awards, Education, Research Highlights, Pappalardo Fellows and Community/Upcoming Events.
</summary>
<dc:date>2024-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semi-automated last touch detection for out-of-bounds possession decisions in football</title>
<link href="https://hdl.handle.net/1721.1/163210" rel="alternate"/>
<author>
<name>Wang, Henry</name>
</author>
<author>
<name>Mills, Katie</name>
</author>
<author>
<name>Billingham, Johsan</name>
</author>
<author>
<name>Robertson, Sam</name>
</author>
<author>
<name>Hosoi, A. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163210</id>
<updated>2025-10-18T04:57:01Z</updated>
<published>2025-08-27T00:00:00Z</published>
<summary type="text">Semi-automated last touch detection for out-of-bounds possession decisions in football
Wang, Henry; Mills, Katie; Billingham, Johsan; Robertson, Sam; Hosoi, A. E.
Football referees must make quick and accurate decisions in unforgiving environments. In parallel, advances in optical tracking have created new avenues for technology-assisted officiating. Using skeletal and ball tracking data, we present a novel diphase framework for Semi-automated Last Touch detection, designed to help referees adjudicate out-of-bounds possession decisions where player and ball occlusion may pose challenges. The proposed methodology uses a touch probability model to find the decision frame of the last touch before the ball goes out-of-bounds, and rules-based or supervised learning algorithms predict the player responsible for the touch. Leveraging principles of kinematics, human anthropometry, and machine learning, the models predict the correct possession decision with up to 82.5% accuracy on a test dataset of duels from the 2022 FIFA World Cup, including over 90% for aerial duels. Our results represent potential improvements over the human performance reported in previous literature and provide a baseline benchmark for future studies.
</summary>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Small radius inclusive jet production at the LHC through NNLO+NNLL</title>
<link href="https://hdl.handle.net/1721.1/163209" rel="alternate"/>
<author>
<name>Generet, Terry</name>
</author>
<author>
<name>Lee, Kyle</name>
</author>
<author>
<name>Moult, Ian</name>
</author>
<author>
<name>Poncelet, Rene</name>
</author>
<author>
<name>Zhang, Xiaoyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163209</id>
<updated>2025-10-18T04:56:54Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">Small radius inclusive jet production at the LHC through NNLO+NNLL
Generet, Terry; Lee, Kyle; Moult, Ian; Poncelet, Rene; Zhang, Xiaoyuan
The study of hadronic jets and their substructure at hadronic colliders is crucial for improving our understanding of QCD and for searching for new physics. As such, there has been a significant effort to improve their theoretical description. In the small radius limit, inclusive jet production exhibits a universal factorization, enabling the resummation of logarithms, which greatly stabilizes theoretical predictions. In this paper, we show how to combine a recently introduced framework for small-R resummation with the Stripper subtraction formalism for fragmentation, enabling next-to-next-to-leading order calculations of small-R inclusive jet production for a wide variety of processes at the LHC. We extract the two-loop constants for the jet functions, enabling for the first time next-to-next-to-leading logarithmic resummation matched to next-to-next-to-leading order perturbative calculations. We compare with CMS data for small-R jet production, and find that our results greatly improve the accuracy of the predictions at small R, and stabilize the perturbative convergence and error estimates at larger R. Our approach is applicable to a wide class of jet substructure observables exhibiting similar factorization theorems, opening the door to an NNLO jet substructure program at the LHC.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Endosomolytic Peptides Enable the Cellular Delivery of Peptide Nucleic Acids</title>
<link href="https://hdl.handle.net/1721.1/163208" rel="alternate"/>
<author>
<name>Giancola, JoLynn B.</name>
</author>
<author>
<name>Raines, Ronald T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163208</id>
<updated>2026-03-08T03:21:27Z</updated>
<published>2024-11-11T00:00:00Z</published>
<summary type="text">Endosomolytic Peptides Enable the Cellular Delivery of Peptide Nucleic Acids
Giancola, JoLynn B.; Raines, Ronald T.
Precision genetic medicine enlists antisense oligonucleotides (ASOs) to bind to nucleic acid targets important for human disease. Peptide nucleic acids (PNAs) have many desirable attributes as ASOs but lack cellular permeability. Here, we use an assay based on the corrective splicing of an mRNA to assess the ability of synthetic peptides to deliver a functional PNA into a human cell. We find that the endosomolytic peptides L17E and L17ER4 are highly efficacious delivery vehicles. Co-treatment of a PNA with low micromolar L17E or L17ER4 enables robust corrective splicing in nearly all treated cells. Peptide–PNA conjugates are even more effective. These results enhance the utility of PNAs as research tools and potential therapeutic agents.
</summary>
<dc:date>2024-11-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Archive Labeling Sequences</title>
<link href="https://hdl.handle.net/1721.1/163207" rel="alternate"/>
<author>
<name>Khovanova, Tanya</name>
</author>
<author>
<name>Marton, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/163207</id>
<updated>2025-10-18T04:57:12Z</updated>
<published>2025-08-22T00:00:00Z</published>
<summary type="text">Archive Labeling Sequences
Khovanova, Tanya; Marton, Gregory
What follows is the story of a family of integer sequences, which started life as a Google interview puzzle back in the previous century when VHS video tapes were in use.
</summary>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025</title>
<link href="https://hdl.handle.net/1721.1/163206" rel="alternate"/>
<author>
<name>Chakrabarty, Deepto</name>
</author>
<id>https://hdl.handle.net/1721.1/163206</id>
<updated>2025-10-18T04:58:37Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025
Chakrabarty, Deepto
This report contains the following sections: Faculty Count, Promotions and Departures, Administration, Faculty Awards, Education, Research Highlights, Pappalardo Fellows and Community/Upcoming Events.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rivers Influence Reef Pass Formation in the Society Islands</title>
<link href="https://hdl.handle.net/1721.1/163205" rel="alternate"/>
<author>
<name>Gillen, Megan N</name>
</author>
<author>
<name>Ashton, Andrew D</name>
</author>
<author>
<name>Perron, J Taylor</name>
</author>
<id>https://hdl.handle.net/1721.1/163205</id>
<updated>2025-10-18T04:57:18Z</updated>
<published>2025-06-14T00:00:00Z</published>
<summary type="text">Rivers Influence Reef Pass Formation in the Society Islands
Gillen, Megan N; Ashton, Andrew D; Perron, J Taylor
Reef passes are deep, navigable channels dissecting coral reefs around volcanic islands. Many reef passes are located offshore of large island river basins, suggesting a potential causal relationship. To clarify the mechanisms that form and maintain reef passes, we quantify the relationships between reef pass location and drainage basin size in the Society Islands. River basins draining toward reef passes are larger than those draining toward unbroken reef flats, suggesting that rivers help create and sustain reef passes. The correlation between reef passes and large rivers weakens for older islands, suggesting that oceanographic processes increasingly maintain passes as islands age and subside. We propose two river-driven reef pass formation mechanisms: reef incision, in which rivers erode into reefs during sea-level lowstands, and reef encroachment, in which corals growing in lower-elevation submerged river valleys preferentially drown during periods of rapid sea-level rise, leaving gaps in the accreting reef.
</summary>
<dc:date>2025-06-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Lab‐Quakes”: Quantifying the Complete Energy Budget of High‐Pressure Laboratory Failure</title>
<link href="https://hdl.handle.net/1721.1/163204" rel="alternate"/>
<author>
<name>Ortega‐Arroyo, Daniel</name>
</author>
<author>
<name>O'Ghaffari, Hoagy</name>
</author>
<author>
<name>Peč, Matěj</name>
</author>
<author>
<name>Gong, Zheng</name>
</author>
<author>
<name>Fu, Roger R</name>
</author>
<author>
<name>Ohl, Markus</name>
</author>
<author>
<name>Cattania, Camilla</name>
</author>
<author>
<name>Plümper, Oliver</name>
</author>
<id>https://hdl.handle.net/1721.1/163204</id>
<updated>2025-10-18T04:57:09Z</updated>
<published>2025-08-28T00:00:00Z</published>
<summary type="text">“Lab‐Quakes”: Quantifying the Complete Energy Budget of High‐Pressure Laboratory Failure
Ortega‐Arroyo, Daniel; O'Ghaffari, Hoagy; Peč, Matěj; Gong, Zheng; Fu, Roger R; Ohl, Markus; Cattania, Camilla; Plümper, Oliver
Understanding the interplay of various energy sinks during seismic fault slip is essential for advancing earthquake physics and improving hazard assessment. However, quantifying the energy consumed by major dissipative processes remains a challenge. In this study, we investigate energy partitioning during laboratory earthquakes (“lab-quakes”) by performing general shear stick-slip experiments on synthetic granitic cataclasites at elevated confining pressure. Using ultrasound, microstructural, and novel magnetism-based thermal analyses, we independently quantified the energy allocated to seismic radiation, new surfaces, and heat dissipation. These estimates showed good agreement with far-field measurements of mechanical work during the lab-quake. Our findings revealed that, under the experimental conditions, the majority of the released energy (68%–98%) is dissipated as heat, while seismic radiation accounts for 1%–8%, and the creation of new surfaces consumes &lt;1%–32%. Microstructural observations indicate that pre-failure deformation, which includes comminution and development of the principal slip zone, significantly influences energy partitioning. This effect is further evident in the measured shear stress drops, where events with higher stress drops proportionally emitted more energy as seismic waves. This study is the first to constrain the full energy budget of lab-quakes from an observational standpoint, providing critical insights into the dynamics of fault rupture and energy dissipation processes.
</summary>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>A High-Precision Analytical Technique for Dissolved N2 Isotopes in Aquatic Systems: Biogeochemical Applications and Determination of Solubility Equilibrium Isotope Effects</title>
<link href="https://hdl.handle.net/1721.1/163203" rel="alternate"/>
<author>
<name>McPaul, Katelyn</name>
</author>
<author>
<name>Wankel, Scott D.</name>
</author>
<author>
<name>Seltzer, Alan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163203</id>
<updated>2025-10-18T04:57:14Z</updated>
<published>2025-06-17T00:00:00Z</published>
<summary type="text">A High-Precision Analytical Technique for Dissolved N2 Isotopes in Aquatic Systems: Biogeochemical Applications and Determination of Solubility Equilibrium Isotope Effects
McPaul, Katelyn; Wankel, Scott D.; Seltzer, Alan M.
Rationale&#13;
The isotopic composition of dissolved dinitrogen gas (δ15N-N2) in water can offer a powerful constraint on the sources and pathways of nitrogen cycling in aquatic systems. However, because of the large presence of atmosphere-derived dissolved N2 in these systems, high-precision (on the order of 0.001‰) measurements of N2 isotopes paired with inert gas measurements are required to disentangle atmospheric and biogeochemical signals. Additionally, the solubility equilibrium isotope fractionation of N2 and its temperature and salinity dependence are underconstrained at this level of precision.&#13;
&#13;
Methods&#13;
We introduce a new technique for sample collection, processing, and dynamic dual-inlet mass spectrometry allowing for high-precision measurement of δ15N-N2 and δ(N2/Ar) with simultaneous measurement of δ(40Ar/36Ar) and δ(Kr/N2) in water. We evaluate the reproducibility of this technique and employ it to redetermine the solubility equilibrium isotope effects for dissolved N2 across a range of temperatures and salinities.&#13;
&#13;
Results&#13;
Our technique achieves measurement reproducibility (1σ) for δ15N-N2 (0.006‰) and δ(N2/Ar) (0.41‰) suitable for tracing biogeochemical nitrogen cycling in aquatic environments. Through a series of air–water equilibration experiments, we find a N2 solubility equilibrium isotope effect (ε = (α − 1) × 1000, where α = (29N2/28N2)dissolved/(29N2/28N2)gas) in water of ε(‰) = 0.753 − 0.004·T, where T is the temperature (°C), with uncertainties on the order of 0.001‰ over the temperature range of ~2°C–23°C and salinity range of ~0–30 psu. We find no apparent dependence of ε on salinity.&#13;
&#13;
Conclusions&#13;
Our new method allows for high-precision measurements of the isotopic composition of dissolved N2 and Ar, and dissolved N2/Ar and Kr/N2 ratios, within the same sample. Pairing measurements of N2 with inert gases facilitates the quantification of excess N2 from biogeochemical sources and its isotopic composition. This method allows for a wide range of applications in marine, coastal, and freshwater environments to characterize and quantitatively constrain potential nitrogen-cycling sources and pathways and to differentiate between physical and biological isotope signals in these systems.
</summary>
<dc:date>2025-06-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Linking Lattice Strain and Fractal Dimensions to Non‐monotonic Volume Changes in Irradiated Nuclear Graphite</title>
<link href="https://hdl.handle.net/1721.1/163202" rel="alternate"/>
<author>
<name>Sprouster, David J</name>
</author>
<author>
<name>Fayfar, Sean</name>
</author>
<author>
<name>Rai, Durgesh K</name>
</author>
<author>
<name>Campbell, Anne</name>
</author>
<author>
<name>Ilavsky, Jan</name>
</author>
<author>
<name>Snead, Lance L</name>
</author>
<author>
<name>Khaykovich, Boris</name>
</author>
<id>https://hdl.handle.net/1721.1/163202</id>
<updated>2025-10-18T04:57:20Z</updated>
<published>2025-08-12T00:00:00Z</published>
<summary type="text">Linking Lattice Strain and Fractal Dimensions to Non‐monotonic Volume Changes in Irradiated Nuclear Graphite
Sprouster, David J; Fayfar, Sean; Rai, Durgesh K; Campbell, Anne; Ilavsky, Jan; Snead, Lance L; Khaykovich, Boris
Graphite's resilience to high temperatures and neutron damage makes it vital for nuclear reactors, yet irradiation alters its microstructure, degrading key properties. We used small- and wide-angle X-ray scattering to study neutron-irradiated fine-grain nuclear graphite (Grade G347A) across varied temperatures and fluences. Results show significant shifts in internal strain and porosity, correlating with radiation-induced volume changes. Notably, porosity volume distribution (fractal dimensions) follows non-monotonic volume changes, suggesting a link to the Weibull distribution of fracture stress.
</summary>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computing Skinning Weights via Convex Duality</title>
<link href="https://hdl.handle.net/1721.1/163201" rel="alternate"/>
<author>
<name>Solomon, J</name>
</author>
<author>
<name>Stein, O</name>
</author>
<id>https://hdl.handle.net/1721.1/163201</id>
<updated>2025-10-18T04:57:12Z</updated>
<published>2025-09-25T00:00:00Z</published>
<summary type="text">Computing Skinning Weights via Convex Duality
Solomon, J; Stein, O
We study the problem of optimising for skinning weights through the lens of convex duality. In particular, we show that the popular bounded biharmonic weight (BBW) model for skinning is dual to a non-negative least-squares problem, which is amenable to efficient solution via iterative algorithms; the final weights are then recoverable via a closed-form expression. Our formulation maintains convexity and is provably equivalent to the original problem. We also provide theoretical discussion giving intuition for the dual problem in the smooth case. Our final algorithm, which can be implemented in a few lines of code, achieves efficient convergence times relative to generic quadratic programming tools applied to the primal problem, without nonconvex formulations, relaxations or specialised optimisation techniques.
</summary>
<dc:date>2025-09-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Saving and Letting Live</title>
<link href="https://hdl.handle.net/1721.1/163200" rel="alternate"/>
<author>
<name>Byrne, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/163200</id>
<updated>2025-10-18T04:57:15Z</updated>
<published>2025-07-31T00:00:00Z</published>
<summary type="text">Saving and Letting Live
Byrne, Thomas
There is a metaphysical difference between person A killing person B and A merely letting B die. There is also a metaphysical difference between A saving B and A merely letting B live. This paper argues that the metaphysical difference between saving and letting live gives rise to a moral difference. It then puts that moral difference to work: for example, it accounts for the long-felt moral difference between failing to rescue a drowning child and failing to donate $4000 to Oxfam (sufficient for them, in the aggregate, to prevent a child’s death).
</summary>
<dc:date>2025-07-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancement of Superconductivity in WP via Oxide-Assisted Chemical Vapor Transport</title>
<link href="https://hdl.handle.net/1721.1/163198" rel="alternate"/>
<author>
<name>Campbell, Daniel J.</name>
</author>
<author>
<name>Lin, Wen-Chen</name>
</author>
<author>
<name>Collini, John</name>
</author>
<author>
<name>Eo, Yun Suk</name>
</author>
<author>
<name>Anand, Yash</name>
</author>
<author>
<name>Saha, Shanta</name>
</author>
<author>
<name>Graf, David</name>
</author>
<author>
<name>Zavalij, Peter Y.</name>
</author>
<author>
<name>Paglione, Johnpierre</name>
</author>
<id>https://hdl.handle.net/1721.1/163198</id>
<updated>2026-03-08T03:27:23Z</updated>
<published>2025-09-28T00:00:00Z</published>
<summary type="text">Enhancement of Superconductivity in WP via Oxide-Assisted Chemical Vapor Transport
Campbell, Daniel J.; Lin, Wen-Chen; Collini, John; Eo, Yun Suk; Anand, Yash; Saha, Shanta; Graf, David; Zavalij, Peter Y.; Paglione, Johnpierre
Tungsten monophosphide (WP) has been reported to superconduct below 0.8 K, and theoretical work has predicted an unconventional Cooper pairing mechanism. Here we present data for WP single crystals grown by means of chemical vapor transport (CVT) of WO3, P, and I2. In comparison to synthesis using WP powder as a starting material, this technique results in samples with substantially decreased low-temperature scattering and favors a more three-dimensional morphology. We also find that the resistive superconducting transitions in these samples begin above 1 K. Variation in Tc is often found in strongly correlated superconductors, and its presence in WP could be the result of influence from a competing order and/or a non-s-wave gap.
</summary>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular Imbalances Between Striosome and Matrix Compartments Characterize the Pathogenesis and Pathophysiology of Huntington’s Disease Model Mouse</title>
<link href="https://hdl.handle.net/1721.1/163197" rel="alternate"/>
<author>
<name>Morigaki, Ryoma</name>
</author>
<author>
<name>Yoshida, Tomoko</name>
</author>
<author>
<name>Fujikawa, Joji</name>
</author>
<author>
<name>Crittenden, Jill R.</name>
</author>
<author>
<name>Graybiel, Ann M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163197</id>
<updated>2026-03-08T03:24:39Z</updated>
<published>2025-09-02T00:00:00Z</published>
<summary type="text">Molecular Imbalances Between Striosome and Matrix Compartments Characterize the Pathogenesis and Pathophysiology of Huntington&amp;rsquo;s Disease Model Mouse
Morigaki, Ryoma; Yoshida, Tomoko; Fujikawa, Joji; Crittenden, Jill R.; Graybiel, Ann M.
The pathogenesis and pathophysiology of Huntington’s disease (HD) are still incompletely understood, despite the remarkable advances in identifying the molecular effects of the Htt mutation in this disease. Clinical positron emission tomography studies suggest that phosphodiesterase 10A (PDE10A) declines earlier than dopamine D1 and D2 receptors in HD, indicating that it might serve as a key molecular marker in understanding disease mechanisms. In movement disorders, mutations in the genes encoding PDE10A and G-protein α subunit (Gαolf), both critical cAMP regulators in striatal spiny projection neurons, have been linked to chorea and dystonia. These observations highlight the potential importance of striatal cyclic AMP (cAMP) signaling in these disorders, but how such dysfunction could arise is unknown. Here, we suggest that a key to understanding signaling dysfunction might be to evaluate these messenger systems in light of the circuit-level compartmental organization of the caudoputamen, in which there is particular vulnerability of the striosome compartment in HD. We developed machine learning algorithms to define with high precision and reproducibility the borders of striosomes in the brains of Q175 knock-in (Q175KI) HD mice from 3–12 months of age. We demonstrate that the expression of multiple molecules, including Gαolf, PDE10A, dopamine D1 and D2 receptors, and adenosine A2A receptors, is significantly reduced in the striosomes of Q175KI mice as compared to wildtype controls, across 3, 6, and 12 months of age. By contrast, mu-opioid receptor (MOR1) expression is uniquely upregulated, suggesting a compartment-specific and age-dependent shift in molecular profiles in the Q175KI HD mouse model caudoputamen. These differential changes may serve as a useful platform to determine factors underlying the greater vulnerability of striatal projection neurons in the striosomes than in the matrix in HD.
</summary>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Field-Scale Rice Area and Yield Mapping in Sri Lanka with Optical Remote Sensing and Limited Training Data</title>
<link href="https://hdl.handle.net/1721.1/163196" rel="alternate"/>
<author>
<name>Özdoğan, Mutlu</name>
</author>
<author>
<name>Wang, Sherrie</name>
</author>
<author>
<name>Ghose, Devaki</name>
</author>
<author>
<name>Fraga, Eduardo</name>
</author>
<author>
<name>Fernandes, Ana</name>
</author>
<author>
<name>Varela, Gonzalo</name>
</author>
<id>https://hdl.handle.net/1721.1/163196</id>
<updated>2026-03-08T03:24:36Z</updated>
<published>2025-09-02T00:00:00Z</published>
<summary type="text">Field-Scale Rice Area and Yield Mapping in Sri Lanka with Optical Remote Sensing and Limited Training Data
Özdoğan, Mutlu; Wang, Sherrie; Ghose, Devaki; Fraga, Eduardo; Fernandes, Ana; Varela, Gonzalo
Rice is a staple crop for over half the world’s population, and accurate, timely information on its planted area and production is crucial for food security and agricultural policy, particularly in developing nations like Sri Lanka. However, reliable rice monitoring in regions like Sri Lanka faces significant challenges due to frequent cloud cover and the fragmented nature of smallholder farms. This research introduces a novel, cost-effective method for mapping rice-planted area and yield at field scales in Sri Lanka using optical satellite data. The rice-planted fields were identified and mapped using a phenologically tuned image classification algorithm that highlights rice presence by observing water occurrence during transplanting and vegetation activity during subsequent crop growth. To estimate yields, a random forest regression model was trained at the district level by incorporating a satellite-derived chlorophyll index and environmental variables and subsequently applied at the field level. The approach has enabled the creation of two decades (2000–2022) of reliable, field-scale rice area and yield estimates, achieving map accuracies between 70% and over 90% and yield estimates with less than 20% error. These highly granular results, which are not available through traditional surveys, show a strong correlation with government statistics. They also demonstrate the advantages of a rule-based, phenology-driven classification over purely statistical machine learning models for long-term consistency in dynamic agricultural environments. This work highlights the significant potential of remote sensing to provide accurate and detailed insights into rice cultivation, supporting policy decisions and enhancing food security in Sri Lanka and other cloud-prone regions.
</summary>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalized Pitman–Stanley Polytope: Vertices and Faces</title>
<link href="https://hdl.handle.net/1721.1/163195" rel="alternate"/>
<author>
<name>Dugan, William T.</name>
</author>
<author>
<name>Hegarty, Maura</name>
</author>
<author>
<name>Morales, Alejandro H.</name>
</author>
<author>
<name>Raymond, Annie</name>
</author>
<id>https://hdl.handle.net/1721.1/163195</id>
<updated>2026-03-08T03:26:42Z</updated>
<published>2024-12-09T00:00:00Z</published>
<summary type="text">Generalized Pitman–Stanley Polytope: Vertices and Faces
Dugan, William T.; Hegarty, Maura; Morales, Alejandro H.; Raymond, Annie
In 1999, Pitman and Stanley introduced the polytope bearing their name along with a study of its faces, lattice points, and volume. The Pitman–Stanley polytope is well-studied due to its connections to probability, parking functions, the generalized permutahedra, and flow polytopes. Its lattice points correspond to plane partitions of skew shape with entries 0 and 1. Pitman and Stanley remarked that their polytope can be generalized so that lattice points correspond to plane partitions of skew shape with entries 0, 1, …, m. Since then, this generalization has remained untouched. We study this generalization and show that it can also be realized as a flow polytope of a grid graph. We give multiple characterizations of its vertices in terms of plane partitions of skew shape and integer flows. For a fixed skew shape, we show that the number of vertices of this polytope is a polynomial in m whose leading term, in certain cases, counts standard Young tableaux of a skew shifted shape. Moreover, we give formulas for the number of faces, as well as generating functions for the number of vertices.
</summary>
<dc:date>2024-12-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>From stellar light to astrophysical insight: automating variable star research with machine learning</title>
<link href="https://hdl.handle.net/1721.1/163194" rel="alternate"/>
<author>
<name>Audenaert, Jeroen</name>
</author>
<id>https://hdl.handle.net/1721.1/163194</id>
<updated>2026-03-08T03:27:43Z</updated>
<published>2025-07-24T00:00:00Z</published>
<summary type="text">From stellar light to astrophysical insight: automating variable star research with machine learning
Audenaert, Jeroen
Large-scale photometric surveys are revolutionizing astronomy by delivering unprecedented amounts of data. The rich data sets from missions such as the NASA Kepler and TESS satellites, and the upcoming ESA PLATO mission, are a treasure trove for stellar variability, asteroseismology and exoplanet studies. In order to unlock the full scientific potential of these massive data sets, automated data-driven methods are needed. In this review, I illustrate how machine learning is bringing asteroseismology toward an era of automated scientific discovery, covering the full cycle from data cleaning to variability classification and parameter inference, while highlighting the recent advances in representation learning, multimodal datasets and foundation models. This invited review offers a guide to the challenges and opportunities machine learning brings for stellar variability research and how it could help unlock new frontiers in time-domain astronomy.
</summary>
<dc:date>2025-07-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hidden causality in Modern Greek</title>
<link href="https://hdl.handle.net/1721.1/163193" rel="alternate"/>
<author>
<name>Tsilia, Anastasia</name>
</author>
<id>https://hdl.handle.net/1721.1/163193</id>
<updated>2026-03-08T03:26:07Z</updated>
<published>2025-06-09T00:00:00Z</published>
<summary type="text">Hidden causality in Modern Greek
Tsilia, Anastasia
This paper explores the syntax and semantics of an attitudinal construction in Modern Greek (mg), where an attitude verb takes an accusative object followed by a complement clause. Building on existing syntactic literature (e.g., Hadjivassiliou et al. in 13th international symposium on theoretical and applied linguistics, Aristotle University of Thessaloniki, Thessaloniki, pp. 70–80, 2000; Kotzoglou in Reading Working Papers in Linguistics 6:39–56, 2002; Kotzoglou in Selected papers on theoretical and applied linguistics from 22nd ISTAL, Aristotle University of Thessaloniki, Thessaloniki, pp. 299–315, 2017; Kotzoglou and Papangeli in New horizons in the analysis of raising and control, Springer, Dordrecht, pp. 111–131, 2007), I show that the accusative object is base-generated higher than the lower clause. Yet, I show that it semantically behaves as if it is part of the intensionalized argument of the attitude verb, giving rise to de dicto readings (Tsilia in Proceedings of Sinn und Bedeutung 27, pp. 655–673, 2023). Building on this and on a causal semantic requirement associated with the accusative object, I suggest a clausal analysis of the phenomenon. More specifically, under this analysis the accusative object is the subject of a small intermediate vp clause headed by a silent proleptic cause, which then takes the complement clause as its object. This contributes to the literature suggesting that hidden clauses are cross-linguistically attested and can solve intensionality paradoxes (den Dikken et al. in Non-propositional intentionality, Oxford Academic, Oxford, pp. 46–94, 2018), as well as to the literature on prolepsis (Davies in Language 81:645–665, 2005; Salzmann in The Wiley-Blackwell companion to syntax, Blackwell, Malden, vol. 5, pp. 3203–3245, 2017a; Deal in Semantics and Linguistic Theory 28:622–648, 2018; Dawson and Deal in Proceedings of Sinn und Bedeutung 23, pp. 329–346, 2019) showing that proleptic constructions may have varying interpretations and syntactic analyses cross-linguistically.
</summary>
<dc:date>2025-06-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Type II RR string fields and exotic diffeomorphisms</title>
<link href="https://hdl.handle.net/1721.1/163192" rel="alternate"/>
<author>
<name>Mamade, Raji A.</name>
</author>
<author>
<name>Zwiebach, Barton</name>
</author>
<id>https://hdl.handle.net/1721.1/163192</id>
<updated>2026-03-08T03:26:09Z</updated>
<published>2025-09-05T00:00:00Z</published>
<summary type="text">Type II RR string fields and exotic diffeomorphisms
Mamade, Raji A.; Zwiebach, Barton
We study the theory of massless fields of type II strings arising from the string field theory that uses two string fields, a physical one and an extra one that allows the writing of an action, but whose degrees of freedom ultimately decouple. The mechanism allowing the description of the self-dual five-form of type IIB, anticipated by Sen, is used by the SFT to describe all Ramond-Ramond forms in type IIB and IIA in a manifestly duality-invariant way. We find explicit expressions for the leading terms in the gauge transformation of the RR fields and focus on diffeomorphisms, which are exotic for both the physical and the extra fields, perhaps as needed to describe propagating degrees of freedom that do not gravitate. The algebra of diffeomorphisms includes field-dependent structure constants and only closes on-shell, as predicted by the type II SFT gauge algebra.
</summary>
<dc:date>2025-09-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maintenance of core temperature in SCUBA divers in cold water: contributions of anthropometrics, suit type, and sex</title>
<link href="https://hdl.handle.net/1721.1/163191" rel="alternate"/>
<author>
<name>Orman, Tucker</name>
</author>
<author>
<name>Bradbury, Karleigh E.</name>
</author>
<author>
<name>Grosshennig, Tim</name>
</author>
<author>
<name>Perez, Makayla</name>
</author>
<author>
<name>Möller, Fabian N.</name>
</author>
<author>
<name>Dujić, Željko</name>
</author>
<author>
<name>Lovering, Andrew T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163191</id>
<updated>2026-03-08T03:26:20Z</updated>
<published>2025-09-04T00:00:00Z</published>
<summary type="text">Maintenance of core temperature in SCUBA divers in cold water: contributions of anthropometrics, suit type, and sex
Orman, Tucker; Bradbury, Karleigh E.; Grosshennig, Tim; Perez, Makayla; Möller, Fabian N.; Dujić, Željko; Lovering, Andrew T.
Maintenance of core temperature (Tc) is vital for health and physiological function while SCUBA diving in cold water, but there is little research investigating the influence of anthropometrics, suit type, and sex on the rate of change in Tc during real-world diving conditions. We measured the rate of change in Tc (telemetric pill) and thermal sensation (Ts; Young questionnaire) in 62 participants (32 female) before and after non-decompression SCUBA dives using open circuit apparatus breathing air at varied depths and durations in cold water (~ 10 °C). Twenty-three participants wore drysuits (11F), and 39 participants wore wetsuits (21F). There was a significant effect of suit type on the rate of change in Tc, with those in wetsuits having a greater decrease in Tc than those in drysuits. However, there was no effect of suit type on the rate of change in Ts. In wetsuit and drysuit groups, there were significant associations between Tc/min and BSA/BM, BMI, and BM. Estimated body fat % (BF%) was significantly associated with the rate of change in Tc in the wetsuit group only. When separated by sex, there were significant associations with all the anthropometric variables and the rate of change in Tc in the female participants, but only with BM in the wetsuit males. These results suggest that drysuits offer greater thermal protection compared to wetsuits in 10 °C water, and anthropometrics should be considered when selecting the degree of thermal protection, especially for female divers.
</summary>
<dc:date>2025-09-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Major-element, trace-element and sulfur-isotope evidence for arc-like magmatism in the 4.0–2.9 Ga Acasta Gneiss Complex</title>
<link href="https://hdl.handle.net/1721.1/163190" rel="alternate"/>
<author>
<name>Beaudry, Patrick</name>
</author>
<author>
<name>Jagoutz, Oliver</name>
</author>
<author>
<name>Bauer, Ann M.</name>
</author>
<author>
<name>Rezeau, Hervé</name>
</author>
<author>
<name>Reimink, Jesse R.</name>
</author>
<author>
<name>Grove, Timothy L.</name>
</author>
<author>
<name>Izon, Gareth</name>
</author>
<author>
<name>Ono, Shuhei</name>
</author>
<id>https://hdl.handle.net/1721.1/163190</id>
<updated>2026-03-08T03:26:18Z</updated>
<published>2025-08-22T00:00:00Z</published>
<summary type="text">Major-element, trace-element and sulfur-isotope evidence for arc-like magmatism in the 4.0–2.9 Ga Acasta Gneiss Complex
Beaudry, Patrick; Jagoutz, Oliver; Bauer, Ann M.; Rezeau, Hervé; Reimink, Jesse R.; Grove, Timothy L.; Izon, Gareth; Ono, Shuhei
The Acasta Gneiss Complex (AGC) in northwestern Canada comprises Earth’s oldest known evolved crust, with zircon U–Pb ages up to 4.03 Ga. Several pulses of crustal generation and metamorphism are preserved in tonalitic and granitic gneisses spanning over one billion years, along with mafic and ultramafic rocks of unknown age. Major elements, trace elements and radiogenic isotope signatures have been invoked to suggest that these rocks preserve the local onset of horizontal tectonic processes. However, the behavior and influence of volatiles, which have a defining role in modern arc magmatism, remain unconstrained. Here we combine new whole-rock major- and trace-element data with multiple sulfur isotope analyses in 4.0–2.9 Ga Acasta gneisses and spatially associated mafic and ultramafic rocks to investigate the petrogenesis of the AGC. We use a recently published major element-based melt hygrometer to estimate dissolved water contents for all published plagioclase-saturated Acasta meta-igneous rocks, and find modes at &lt; 0.5 wt.% and 5 wt.% H2O, similar to modern arc magmas. Tholeiitic and calc-alkaline trends are both present, with the former being more prominent in the oldest (ca. 4.0 Ga) samples and in mafic rocks. Zircon trace element oxybarometry reveals a shift towards more oxidized magmatic conditions by 3.75 Ga. Sulfur isotopes record a limited range in δ34S values, suggesting a common igneous end-member at ~ +1‰, and positively correlate with calculated H2O contents, with more positive values (up to +5‰) appearing in the Paleoarchean (&lt; 3.6 Ga). The Eoarchean (4.0–3.6 Ga) δ34S values are consistent with a precursor Hadean crust having an enriched sulfur isotope signature, possibly resulting from hydrous alteration or from isotopic fractionation during its formation. The temporal progression to more positive δ34S values is consistent with a shift towards more hydrous and oxidized magmatic differentiation. Most samples have near-zero Δ33S values that fall along a mass-dependent fractionation (MDF) array, but one 3.5 Ga metasedimentary sample has a negative mass-independent fractionation (MIF) Δ33S signature of −0.60 ± 0.01‰. Additionally, two granitic gneisses dated at 3.3 and 2.9 Ga preserve small positive MIF Δ33S values of +0.08 ± 0.02‰, which could reflect recycling of sedimentary material via subduction by 3.3 Ga. Overall, our data indicate that the Acasta Gneiss Complex preserves several modes of crustal generation evolving over time, with an increasing importance of deep hydrous magmatism by 3.75 Ga and of sedimentary inputs by 3.3 Ga.
</summary>
<dc:date>2025-08-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Future of Drug Delivery</title>
<link href="https://hdl.handle.net/1721.1/163189" rel="alternate"/>
<author>
<name>Gao, Jingjing</name>
</author>
<author>
<name>Karp, Jeffrey M</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Joshi, Nitin</name>
</author>
<id>https://hdl.handle.net/1721.1/163189</id>
<updated>2026-03-08T03:27:14Z</updated>
<published>2023-01-24T00:00:00Z</published>
<summary type="text">The Future of Drug Delivery
Gao, Jingjing; Karp, Jeffrey M; Langer, Robert; Joshi, Nitin
Drug delivery technologies have been proven to improve treatment outcomes in many ways, including enhancing therapeutic efficacy, reducing toxicity, increasing patient compliance, and enabling entirely new medical treatments. As the therapeutic landscape has evolved from small-molecule drugs to a new generation of therapeutics including proteins, peptides, monoclonal antibodies, nucleic acids, and even live cells, drug delivery technologies have also evolved to meet their unique delivery needs.
</summary>
<dc:date>2023-01-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Altered DNA repair pathway engagement by engineered CRISPR-Cas9 nucleases</title>
<link href="https://hdl.handle.net/1721.1/163188" rel="alternate"/>
<author>
<name>Chauhan, Vikash P</name>
</author>
<author>
<name>Sharp, Phillip A</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/163188</id>
<updated>2026-03-08T03:27:21Z</updated>
<published>2023-03-07T00:00:00Z</published>
<summary type="text">Altered DNA repair pathway engagement by engineered CRISPR-Cas9 nucleases
Chauhan, Vikash P; Sharp, Phillip A; Langer, Robert
CRISPR-Cas9 introduces targeted DNA breaks that engage competing DNA repair pathways, producing a spectrum of imprecise insertion/deletion mutations (indels) and precise templated mutations (precise edits). The relative frequencies of these pathways are thought to primarily depend on genomic sequence and cell state contexts, limiting control over mutational outcomes. Here, we report that engineered Cas9 nucleases that create different DNA break structures engage competing repair pathways at dramatically altered frequencies. We accordingly designed a Cas9 variant (vCas9) that produces breaks which suppress otherwise dominant nonhomologous end-joining (NHEJ) repair. Instead, breaks created by vCas9 are predominantly repaired by pathways utilizing homologous sequences, specifically microhomology-mediated end-joining (MMEJ) and homology-directed repair (HDR). Consequently, vCas9 enables efficient precise editing through HDR or MMEJ while suppressing indels caused by NHEJ in dividing and nondividing cells. These findings establish a paradigm of targeted nucleases custom-designed for specific mutational applications.
</summary>
<dc:date>2023-03-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogels for RNA delivery</title>
<link href="https://hdl.handle.net/1721.1/163187" rel="alternate"/>
<author>
<name>Zhong, Ruibo</name>
</author>
<author>
<name>Talebian, Sepehr</name>
</author>
<author>
<name>Mendes, Bárbara B</name>
</author>
<author>
<name>Wallace, Gordon</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Conde, João</name>
</author>
<author>
<name>Shi, Jinjun</name>
</author>
<id>https://hdl.handle.net/1721.1/163187</id>
<updated>2026-03-08T03:27:24Z</updated>
<published>2023-03-20T00:00:00Z</published>
<summary type="text">Hydrogels for RNA delivery
Zhong, Ruibo; Talebian, Sepehr; Mendes, Bárbara B; Wallace, Gordon; Langer, Robert; Conde, João; Shi, Jinjun
RNA-based therapeutics have shown tremendous promise in disease intervention at the genetic level, and some have been approved for clinical use, including the recent COVID-19 messenger RNA vaccines. The clinical success of RNA therapy is largely dependent on the use of chemical modification, ligand conjugation or non-viral nanoparticles to improve RNA stability and facilitate intracellular delivery. Unlike molecular-level or nanoscale approaches, macroscopic hydrogels are soft, water-swollen three-dimensional structures that possess remarkable features such as biodegradability, tunable physicochemical properties and injectability, and recently they have attracted enormous attention for use in RNA therapy. Specifically, hydrogels can be engineered to exert precise spatiotemporal control over the release of RNA therapeutics, potentially minimizing systemic toxicity and enhancing in vivo efficacy. This Review provides a comprehensive overview of hydrogel loading of RNAs and hydrogel design for controlled release, highlights their biomedical applications and offers our perspectives on the opportunities and challenges in this exciting field of RNA delivery.
</summary>
<dc:date>2023-03-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Thermal Reactivity and Molecular Diversity of Particulate Organic Carbon in the Amazon River Mainstem</title>
<link href="https://hdl.handle.net/1721.1/163186" rel="alternate"/>
<author>
<name>Rosengard, Sarah Z.</name>
</author>
<author>
<name>Mauro S. Moura, Jose</name>
</author>
<author>
<name>Spencer, Robert G. M.</name>
</author>
<author>
<name>Johnson, Carl</name>
</author>
<author>
<name>McNichol, Ann</name>
</author>
<author>
<name>Boehman, Brenna</name>
</author>
<author>
<name>Galy, Valier</name>
</author>
<id>https://hdl.handle.net/1721.1/163186</id>
<updated>2026-03-08T03:27:27Z</updated>
<published>2025-06-18T00:00:00Z</published>
<summary type="text">The Thermal Reactivity and Molecular Diversity of Particulate Organic Carbon in the Amazon River Mainstem
Rosengard, Sarah Z.; Mauro S. Moura, Jose; Spencer, Robert G. M.; Johnson, Carl; McNichol, Ann; Boehman, Brenna; Galy, Valier
The Amazon River mobilizes one of the largest fluxes of particulate organic carbon (POC) from land to coastal ocean sediments, playing an important role in the long‐term sequestration of biospheric organic carbon in the ocean. Ramped oxidation (RPO) analyses of suspended sediments collected from the Amazon River mainstem, Solimões River, Madeira River, and Tapajós River presented an opportunity to parse riverine POC by thermal reactivity, extract the activation energy distributions of specific biomolecular pools in these samples, and characterize the molecular diversity of POC across the floodplain. The thermal reactivity data imply that POC from the Amazon River basin spans a wide but relatively homogeneous activation energy range across samples, suggesting that the degradation history of the organic carbon comprising riverine suspended particles is relatively constant across depths within the mainstem and different tributary locations. Coupling activation energy distributions to stable and radiocarbon isotopic analyses shows that ca. 85% of mainstem POC derives from a range of partially degraded terrestrial sources, likely organic matter from mineral soil horizons, and that a similar range of soil sources influences the biomolecular diversity in tributary samples. In agreement with earlier assessments, ca. 10% of the riverine POC flux is fresh vegetation and up to 5% of it is petrogenic organic matter. Expanded RPO analyses of samples across the Amazon river‐to‐ocean continuum would provide an opportunity to track the fate of these different organic matter pools downstream that is uniquely different from, but complementary to, past compound‐specific and bulk analyses of riverine POC.
</summary>
<dc:date>2025-06-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming barriers to patient adherence: the case for developing innovative drug delivery systems</title>
<link href="https://hdl.handle.net/1721.1/163185" rel="alternate"/>
<author>
<name>Baryakova, Tsvetelina H</name>
</author>
<author>
<name>Pogostin, Brett H</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>McHugh, Kevin J</name>
</author>
<id>https://hdl.handle.net/1721.1/163185</id>
<updated>2026-03-08T03:27:19Z</updated>
<published>2023-03-27T00:00:00Z</published>
<summary type="text">Overcoming barriers to patient adherence: the case for developing innovative drug delivery systems
Baryakova, Tsvetelina H; Pogostin, Brett H; Langer, Robert; McHugh, Kevin J
Poor medication adherence is a pervasive issue with considerable health and socioeconomic consequences. Although the underlying reasons are generally understood, traditional intervention strategies rooted in patient-centric education and empowerment have proved to be prohibitively complex and/or ineffective. Formulating a pharmaceutical in a drug delivery system (DDS) is a promising alternative that can directly mitigate many common impediments to adherence, including frequent dosing, adverse effects and a delayed onset of action. Existing DDSs have already positively influenced patient acceptability and improved rates of adherence across various disease and intervention types. The next generation of systems have the potential to instate an even more radical paradigm shift by, for example, permitting oral delivery of biomacromolecules, allowing for autonomous dose regulation and enabling several doses to be mimicked with a single administration. Their success, however, is contingent on their ability to address the problems that have made DDSs unsuccessful in the past.
</summary>
<dc:date>2023-03-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rousseau's Freedom as Recognition</title>
<link href="https://hdl.handle.net/1721.1/163184" rel="alternate"/>
<author>
<name>Perilla, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/163184</id>
<updated>2026-03-08T03:27:10Z</updated>
<published>2025-06-19T00:00:00Z</published>
<summary type="text">Rousseau's Freedom as Recognition
Perilla, Julian
To yearn for freedom is to want to be seen by others as someone. Rousseau, I believe, held such a conception of freedom, alongside his intricate theory of human passions. This essay examines how freedom relates to such passions, and in particular, to the Rousseauian notion of amour-propre. Importantly, the aim here is both interpretive and positive. The essay seeks to locate Rousseau within the old republican tradition in a manner that parts ways with most contemporary readings of Rousseau. But, in doing so, it argues that republican freedom essentially involves a particular status and the recognition of such status by others. On this Rousseauian view, one is free to the extent that others see one as a limit to their arbitrary interference and as entitled to interfere with them non-arbitrarily. Finally, republican freedom, so understood, is shown to be essential to meeting the demands of healthy amour-propre, thereby bringing Rousseau's political and psychological theories closer together.
</summary>
<dc:date>2025-06-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electric Field Inhomogeneity in Colloidal QD‐LEDs</title>
<link href="https://hdl.handle.net/1721.1/163183" rel="alternate"/>
<author>
<name>Srinivasan, Shreyas</name>
</author>
<author>
<name>Zhang, Ruiqi</name>
</author>
<author>
<name>Dillender, Mike</name>
</author>
<author>
<name>Nguyen, Thienan</name>
</author>
<author>
<name>Laitz, Madeleine</name>
</author>
<author>
<name>Kim, Taehyung</name>
</author>
<author>
<name>Kim, Kwang‐Hee</name>
</author>
<author>
<name>Kim, Tae‐Gon</name>
</author>
<author>
<name>Bawendi, Moungi</name>
</author>
<author>
<name>Bulović, Vladimir</name>
</author>
<id>https://hdl.handle.net/1721.1/163183</id>
<updated>2026-03-08T03:27:12Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Electric Field Inhomogeneity in Colloidal QD‐LEDs
Srinivasan, Shreyas; Zhang, Ruiqi; Dillender, Mike; Nguyen, Thienan; Laitz, Madeleine; Kim, Taehyung; Kim, Kwang‐Hee; Kim, Tae‐Gon; Bawendi, Moungi; Bulović, Vladimir
It is demonstrated that the electroluminescent layer in a colloidal quantum dot light-emitting diode (QD-LED), formed by stochastic methods such as spin-coating, incorporates morphological thickness inhomogeneities, resulting in local electric field variations. These inhomogeneities can be directly visualized and quantified using confocal micro-photoluminescence (PL) and micro-electroluminescence (EL), as shown in QD-LEDs with stochastically processed InP/ZnSe/ZnS colloidal quantum dots (QDs). Around 5% of the device shows EL darkspots under forward bias and PL hotspots under photoexcitation, with a strong spatial correlation between these features. The PL hotspots (EL darkspots) correspond to thicker regions in the stochastically processed QD film. This thickness variation leads to two distinct QD sub-populations responding differently to optical excitation. Time- and energy-resolved spectral diffusion measurements reveal that most excitons belong to a “more-mobile” sub-population with fast energy transfer and short, electric field-dependent lifetimes, while a smaller fraction belongs to a “less-mobile” sub-population with slower energy transfer and longer, electric field-independent lifetimes. The “less-mobile” excitons correlate with thicker QD regions. These findings shed light on the local electric field inhomogeneity in QD-LEDs, offering insights into device operation, possible degradation mechanisms, and strategies for developing stochastically processed micro-QD-LEDs.
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Favorable and Challenging Flexible Plastic Packaging Waste Flows: A Material Flow Analysis</title>
<link href="https://hdl.handle.net/1721.1/163182" rel="alternate"/>
<author>
<name>Makarova, Oksana A</name>
</author>
<author>
<name>Ravi, Basuhi</name>
</author>
<author>
<name>Sobkowicz, Margaret J</name>
</author>
<author>
<name>Masato, Davide</name>
</author>
<author>
<name>Olivetti, Elsa A</name>
</author>
<id>https://hdl.handle.net/1721.1/163182</id>
<updated>2026-03-08T03:27:16Z</updated>
<published>2025-06-05T00:00:00Z</published>
<summary type="text">Addressing Favorable and Challenging Flexible Plastic Packaging Waste Flows: A Material Flow Analysis
Makarova, Oksana A; Ravi, Basuhi; Sobkowicz, Margaret J; Masato, Davide; Olivetti, Elsa A
The majority of post-consumer flexible plastic packaging (FPP) in the United States ends up in landfills and incinerators. This represents a significant material loss because FPP, also referred to as plastic films or foils, comprises up to half of all plastic packaging. Since FPP encompasses a diverse range of products with varying recycling potentials, improving material recovery rates requires a detailed understanding of the composition and quantities of used films. This study quantifies post-consumer FPP flows in the US for 2021 and estimates the fraction most suitable for mechanical recycling. We conducted a material flow analysis (MFA) by reconciling publicly available data on packaging film generation and recycling from the US and comparable economies. We then categorized post-consumer FPP into three broad categories based on factors affecting the quality of the resulting mechanically recycled material. Our analysis reveals that only 3%–8% of the estimated 5–15 million metric tonnes of post-consumer film were recycled in 2021. Furthermore, at most 40% of the FPP could be readily mechanically recyclable, while up to half would be deemed non-recoverable due to techno-economic constraints. The actual proportions of challenging-to-recycle and non-recoverable FPP might be even higher, underscoring the need for updated studies on film generation and waste composition to assess the feasibility of scaling up nationwide film recycling.
</summary>
<dc:date>2025-06-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Initiation of Sediment Resuspension by Combined Wave‐Current Conditions in an Artificial Seagrass Meadow</title>
<link href="https://hdl.handle.net/1721.1/163181" rel="alternate"/>
<author>
<name>Zhao, Chuyan</name>
</author>
<author>
<name>Nepf, Heidi</name>
</author>
<id>https://hdl.handle.net/1721.1/163181</id>
<updated>2026-03-08T03:27:25Z</updated>
<published>2025-06-08T00:00:00Z</published>
<summary type="text">Initiation of Sediment Resuspension by Combined Wave‐Current Conditions in an Artificial Seagrass Meadow
Zhao, Chuyan; Nepf, Heidi
Laboratory experiments examined the impact of current on ripple formation and the onset of wave‐driven resuspension within an artificial seagrass meadow modeled after Zostera marina. Within the meadow, the current was less than or equal to the wave velocity. Meadows were constructed with three shoot densities: 247, 455 and 962 stems/m2, and each shoot had six flexible blades. The sediment bed, consisting of 65 μm spherical grains, was initially 1.4 cm thick, allowing ripple and scour hole formation. The formation of wave‐orbital ripples was dependent on meadow density and current magnitude. Over bare beds and sparse meadows, ripples were present and not impacted by the addition of current, such that the wave velocity resuspension threshold with current was the same as that in pure wave conditions. In medium‐density meadows, the addition of current reduced ripple height due to plant‐generated turbulence. As current increased, ripple size and ripple‐generated turbulence decreased, requiring a higher wave velocity to resuspend sediment. That is, for medium‐density meadows, the critical wave velocity increased as the current velocity increased. Finally, in dense meadows, no ripples formed and resuspension was driven by a critical value of plant‐induced turbulence, which was proportional to the total velocity (current plus wave velocity), such that as the current velocity increased, the critical wave velocity decreased. A model predicting the critical wave velocity for the dense meadow was derived based on the assumption that resuspension was driven by a critical level of stem‐generated turbulence.
</summary>
<dc:date>2025-06-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal and Dimensional Stability of Photocatalytic Material ZnPS3 Under Extreme Environmental Conditions</title>
<link href="https://hdl.handle.net/1721.1/163180" rel="alternate"/>
<author>
<name>Mukherjee, Abhishek</name>
</author>
<author>
<name>Santamaría‐García, Vivian J</name>
</author>
<author>
<name>Wlodarczyk, Damian</name>
</author>
<author>
<name>Somakumar, Ajeesh K</name>
</author>
<author>
<name>Sybilski, Piotr</name>
</author>
<author>
<name>Siebenaller, Ryan</name>
</author>
<author>
<name>Rowe, Emmanuel</name>
</author>
<author>
<name>Narayanan, Saranya</name>
</author>
<author>
<name>Susner, Michael A</name>
</author>
<author>
<name>Lozano‐Sanchez, L Marcelo</name>
</author>
<author>
<name>Suchocki, Andrzej</name>
</author>
<author>
<name>Palma, Julio L</name>
</author>
<author>
<name>Boriskina, Svetlana V</name>
</author>
<id>https://hdl.handle.net/1721.1/163180</id>
<updated>2026-03-08T03:27:13Z</updated>
<published>2025-06-27T00:00:00Z</published>
<summary type="text">Thermal and Dimensional Stability of Photocatalytic Material ZnPS3 Under Extreme Environmental Conditions
Mukherjee, Abhishek; Santamaría‐García, Vivian J; Wlodarczyk, Damian; Somakumar, Ajeesh K; Sybilski, Piotr; Siebenaller, Ryan; Rowe, Emmanuel; Narayanan, Saranya; Susner, Michael A; Lozano‐Sanchez, L Marcelo; Suchocki, Andrzej; Palma, Julio L; Boriskina, Svetlana V
Zinc phosphorus trisulfide (ZnPS3), a promising material for photocatalysis and energy storage, is shown in this study to exhibit remarkable stability under extreme conditions. Its optical and structural properties are explored under high pressure and cryogenic temperatures using photoluminescence (PL) spectroscopy, Raman scattering, and density functional theory (DFT). The experimental results identify a pressure-induced phase transition starting at 6.75 GPa and stabilizing by 12.5 GPa, after which ZnPS3 demonstrates robust stability across a broad pressure range up to 24.5 GPa. DFT calculations support these observations and further predict a semiconductor-to-semimetal transition at 100 GPa, while PL measurements reveal defect-assisted emission that quenches under pressure due to enhanced non-radiative recombination. At cryogenic temperatures, PL quenching intensifies as non-radiative processes dominate, driven by a rising Grüneisen parameter and a reduced phonon population. Cryogenic X-ray diffraction (XRD) also reveals a high mean thermal expansion coefficient (TEC) of (4.369 ± 0.393) × 10⁻⁵ K⁻¹, among the highest reported for 2D materials. This unique combination of tunable electronic properties under low pressure and high thermal sensitivity makes ZnPS3 a strong candidate for sensing applications in extreme environments.
</summary>
<dc:date>2025-06-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formulation and Calibration of CATKE, a One‐Equation Parameterization for Microscale Ocean Mixing</title>
<link href="https://hdl.handle.net/1721.1/163179" rel="alternate"/>
<author>
<name>Wagner, Gregory LeClaire</name>
</author>
<author>
<name>Hillier, Adeline</name>
</author>
<author>
<name>Constantinou, Navid C</name>
</author>
<author>
<name>Silvestri, Simone</name>
</author>
<author>
<name>Souza, Andre</name>
</author>
<author>
<name>Burns, Keaton J</name>
</author>
<author>
<name>Hill, Chris</name>
</author>
<author>
<name>Campin, Jean‐Michel</name>
</author>
<author>
<name>Marshall, John</name>
</author>
<author>
<name>Ferrari, Raffaele</name>
</author>
<id>https://hdl.handle.net/1721.1/163179</id>
<updated>2026-03-08T03:27:18Z</updated>
<published>2025-04-21T00:00:00Z</published>
<summary type="text">Formulation and Calibration of CATKE, a One‐Equation Parameterization for Microscale Ocean Mixing
Wagner, Gregory LeClaire; Hillier, Adeline; Constantinou, Navid C; Silvestri, Simone; Souza, Andre; Burns, Keaton J; Hill, Chris; Campin, Jean‐Michel; Marshall, John; Ferrari, Raffaele
We describe CATKE, a parameterization for fluxes associated with small‐scale or “microscale” ocean turbulent mixing on scales between 1 and 100 m. CATKE uses a downgradient formulation that depends on a prognostic turbulent kinetic energy (TKE) variable and a diagnostic mixing length scale that includes a dynamic convective adjustment (CA) component. With its dynamic convective mixing length, CATKE predicts not just the depth spanned by convective plumes but also the characteristic convective mixing timescale, an important aspect of turbulent convection not captured by simpler static CA schemes. As a result, CATKE can describe the competition between convection and other processes such as shear‐driven mixing and baroclinic restratification. To calibrate CATKE, we use Ensemble Kalman Inversion to minimize the error between 21 large eddy simulations (LESs) and predictions of the LES data by CATKE‐parameterized single-column simulations at three different vertical resolutions. We find that CATKE makes accurate predictions of both idealized and realistic LES compared to microscale turbulence parameterizations commonly used in climate models.
</summary>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Materials Research Laboratory</title>
<link href="https://hdl.handle.net/1721.1/163178" rel="alternate"/>
<author>
<name>Tasan, Cemal Cem</name>
</author>
<id>https://hdl.handle.net/1721.1/163178</id>
<updated>2025-10-17T03:39:28Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Materials Research Laboratory
Tasan, Cemal Cem
This report contains the following sections: NSF Convergence Accelerator; CyberSteels: Accelerating Genomic Design; Initiative for Knowledge &amp; Innovation in Manufacturing (IKIM); Microphotonics Center (MPhC); Manufacturing USA Institutes; Major Programs in FY2025; Outreach; Promotions, Honors, and Awards; and Future Outlook by MRL Director.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Business and Digital Transformation Office (BDTO)</title>
<link href="https://hdl.handle.net/1721.1/163177" rel="alternate"/>
<author>
<name>Fournier, Renaud</name>
</author>
<id>https://hdl.handle.net/1721.1/163177</id>
<updated>2025-10-17T03:39:29Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Business and Digital Transformation Office (BDTO)
Fournier, Renaud
This report contains the following sections: Goals and Objectives of the BDTO, Accomplishments and Activities, Key Projects to Accelerate Digital Transformation, and Other Projects and Activities.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Department of Linguistics &amp; Philosophy</title>
<link href="https://hdl.handle.net/1721.1/163176" rel="alternate"/>
<author>
<name>Setiya, Kieran</name>
</author>
<id>https://hdl.handle.net/1721.1/163176</id>
<updated>2025-10-17T03:39:23Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Department of Linguistics &amp; Philosophy
Setiya, Kieran
This report contains the following sections: Department Research; Workshops and Conferences; Publications; Grants, Honors, Awards; Leaves of Absence; and Personnel Information.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Boundawall: A Proposal on the Nature of Black Holes</title>
<link href="https://hdl.handle.net/1721.1/163175" rel="alternate"/>
<author>
<name>Viaña, Javier</name>
</author>
<id>https://hdl.handle.net/1721.1/163175</id>
<updated>2025-10-16T03:01:59Z</updated>
<published>2025-10-15T00:00:00Z</published>
<summary type="text">The Boundawall: A Proposal on the Nature of Black Holes
Viaña, Javier
This research suggests a new interpretation of black holes in which the event horizon represents the termination of physical reality. In this view, when curvature approaches a critical threshold, the three-dimensional spatial geometry may undergo a dimensional compression into a two-dimensional manifold—the boundawall—that preserves gravitational continuity while preventing further causal evolution. Inside this surface, spacetime would cease to exist. All mass-energy and information would then be confined to the boundawall, forming a structure consistent with the external Schwarzschild geometry and the Bekenstein-Hawking entropy law. We outline a possible Dimensional Conversion Law that could govern this phenomenon, and discuss the conservation, causal, and thermodynamic implications of the boundawall. Finally, we comment on potential observational consistency and on limited predictions such as surface-mode signatures. In this theory, the event horizon is viewed not merely as a limit of observation, but as a potential boundary/wall of existence itself.
</summary>
<dc:date>2025-10-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>A GPU‐Based Ocean Dynamical Core for Routine Mesoscale‐Resolving Climate Simulations</title>
<link href="https://hdl.handle.net/1721.1/163174" rel="alternate"/>
<author>
<name>Silvestri, Simone</name>
</author>
<author>
<name>Wagner, Gregory L</name>
</author>
<author>
<name>Constantinou, Navid C</name>
</author>
<author>
<name>Hill, Christopher N</name>
</author>
<author>
<name>Campin, Jean‐Michel</name>
</author>
<author>
<name>Souza, Andre N</name>
</author>
<author>
<name>Bishnu, Siddhartha</name>
</author>
<author>
<name>Churavy, Valentin</name>
</author>
<author>
<name>Marshall, John</name>
</author>
<author>
<name>Ferrari, Raffaele</name>
</author>
<id>https://hdl.handle.net/1721.1/163174</id>
<updated>2026-03-08T03:27:30Z</updated>
<published>2025-04-21T00:00:00Z</published>
<summary type="text">A GPU‐Based Ocean Dynamical Core for Routine Mesoscale‐Resolving Climate Simulations
Silvestri, Simone; Wagner, Gregory L; Constantinou, Navid C; Hill, Christopher N; Campin, Jean‐Michel; Souza, Andre N; Bishnu, Siddhartha; Churavy, Valentin; Marshall, John; Ferrari, Raffaele
We describe an ocean hydrostatic dynamical core implemented in Oceananigans and optimized for Graphical Processing Unit (GPU) architectures. On 64 A100 GPUs, equivalent to 16 computational nodes in current state‐of‐the‐art supercomputers, our dynamical core can simulate a decade of near‐global ocean dynamics per wall‐clock day at an 8‐km horizontal resolution; a resolution adequate to resolve the ocean's mesoscale eddy field. Such efficiency, achieved with relatively modest hardware resources, suggests that climate simulations on GPUs can incorporate fully eddy‐resolving ocean models. This removes a major source of systematic bias in current IPCC coupled model projections, the parameterization of ocean eddies, and represents a major advance in climate modeling. We discuss the computational strategies, focusing on GPU‐specific optimization and numerical implementation details that enable such high performance.
</summary>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new upper bound for the growth factor in Gaussian elimination with complete pivoting</title>
<link href="https://hdl.handle.net/1721.1/163173" rel="alternate"/>
<author>
<name>Bisain, Ankit</name>
</author>
<author>
<name>Edelman, Alan</name>
</author>
<author>
<name>Urschel, John</name>
</author>
<id>https://hdl.handle.net/1721.1/163173</id>
<updated>2026-03-08T03:27:26Z</updated>
<published>2025-02-26T00:00:00Z</published>
<summary type="text">A new upper bound for the growth factor in Gaussian elimination with complete pivoting
Bisain, Ankit; Edelman, Alan; Urschel, John
The growth factor in Gaussian elimination measures how large the entries of an LU factorization can be relative to the entries of the original matrix. It is a key parameter in error estimates, and one of the most fundamental topics in numerical analysis. We produce an upper bound of n^(0.2079 ln n + 0.91) for the growth factor in Gaussian elimination with complete pivoting — the first improvement upon Wilkinson’s original 1961 bound of 2n^(0.25 ln n + 0.5).
</summary>
<dc:date>2025-02-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Does U.S. Immigration Policy Facilitate Financial Misconduct?</title>
<link href="https://hdl.handle.net/1721.1/163172" rel="alternate"/>
<author>
<name>Dai, Ruiting</name>
</author>
<author>
<name>Dong, Xuanjun</name>
</author>
<author>
<name>Shroff, Nemit</name>
</author>
<author>
<name>Tan, Qin</name>
</author>
<id>https://hdl.handle.net/1721.1/163172</id>
<updated>2026-03-08T03:27:29Z</updated>
<published>2025-06-29T00:00:00Z</published>
<summary type="text">Does U.S. Immigration Policy Facilitate Financial Misconduct?
Dai, Ruiting; Dong, Xuanjun; Shroff, Nemit; Tan, Qin
We examine whether U.S. immigration policy, specifically the H-1B visa program, affects the likelihood of financial misconduct. We argue that employers have leverage over employees on H-1B visas because such employees must maintain H-1B–eligible employment to legally reside in the United States. We posit that companies relying on H-1B visas to hire workers in accounting roles have an increased ability to misreport their financial statements due to the greater costs H-1B employees face if they are unexpectedly fired for not following the demands of their bosses or for blowing the whistle on misconduct. Using the sharp reduction in the H-1B visa cap in 2004 as a shock to such employment, we find that companies that relied on this visa program for accounting roles pre-shock experience a 2.3 percentage point decline in accounting irregularities post-shock. Cross-sectional tests show that the reduction in irregularities is greater in companies where H-1B employees have (1) a greater influence on financial reporting or (2) fewer job opportunities. In addition, the relation between H-1B visa use and irregularities is stronger in companies whose investors are more focused on near-term earnings targets. We corroborate our findings using the outcome of H-1B visa lotteries as shocks to such employment.
</summary>
<dc:date>2025-06-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data‐Driven Modeling of 4D Ocean and Coastal Acidification in the Massachusetts and Cape Cod Bays From Surface Measurements</title>
<link href="https://hdl.handle.net/1721.1/163171" rel="alternate"/>
<author>
<name>Champenois, B</name>
</author>
<author>
<name>Bastidas, C</name>
</author>
<author>
<name>LaBash, B</name>
</author>
<author>
<name>Sapsis, TP</name>
</author>
<id>https://hdl.handle.net/1721.1/163171</id>
<updated>2026-03-08T03:27:27Z</updated>
<published>2025-06-03T00:00:00Z</published>
<summary type="text">Data‐Driven Modeling of 4D Ocean and Coastal Acidification in the Massachusetts and Cape Cod Bays From Surface Measurements
Champenois, B; Bastidas, C; LaBash, B; Sapsis, TP
A significant portion of atmospheric CO2 emissions is absorbed by the ocean, resulting in acidified seawater and altered carbonate composition that is harmful to marine life. Despite detrimental effects, assessing ocean and coastal acidification (OCA) is difficult due to the scarcity of in situ measurements and the high costs of computational modeling. We develop a parsimonious data‐driven framework to model indicators of OCA and test it in the Massachusetts Bay and Stellwagen Bank, a region with fishing and tourism industries affected by OCA. First, we trained a neural network to predict in‐depth fields for temperature and salinity (x, y, z) using surface quantities from satellites and in situ measurements (x, y). The relationship between 2D surface and 3D properties is captured through the in‐depth modes and coefficients obtained from principal component analysis applied to a high‐resolution historical reanalysis data set. Next, we used Bayesian regression methods to estimate region‐specific relationships for in‐depth total alkalinity (TA), dissolved inorganic carbon (DIC), and aragonite saturation state (ΩAr) as functions of temperature, salinity, and chlorophyll. Lastly, 4D daily field predictions are generated from surface measurements with a spatial resolution of 4 km horizontally and 45 sigma levels vertically. The model's performance is evaluated using withheld measurements across depths, locations, and seasons with RMSEs of 1.59°C, 0.31 PSU, 37.54 μmol⋅kg⁻¹, and 0.42 for temperature, salinity, TA, DIC, and ΩAr, respectively, at one withheld location. The framework is useful for understanding OCA and includes uncertainty quantification for future planning and optimal sensor placement.
</summary>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Arsenic Accumulation in Microbial Biomass and the Interpretation of Signals of Early Arsenic‐Based Metabolisms</title>
<link href="https://hdl.handle.net/1721.1/163170" rel="alternate"/>
<author>
<name>Madrigal‐Trejo, David</name>
</author>
<author>
<name>Baldes, Matthew J</name>
</author>
<author>
<name>Tamura, Nobumichi</name>
</author>
<author>
<name>Klepac‐Ceraj, Vanja</name>
</author>
<author>
<name>Bosak, Tanja</name>
</author>
<id>https://hdl.handle.net/1721.1/163170</id>
<updated>2026-03-08T03:27:28Z</updated>
<published>2025-06-13T00:00:00Z</published>
<summary type="text">Arsenic Accumulation in Microbial Biomass and the Interpretation of Signals of Early Arsenic‐Based Metabolisms
Madrigal‐Trejo, David; Baldes, Matthew J; Tamura, Nobumichi; Klepac‐Ceraj, Vanja; Bosak, Tanja
Carbonaceous particles that concentrate arsenic in microbialites as old as ~3.5 Ga are similar to As-rich organic globules in modern microbialites. The former particles have been interpreted as tracers of As cycling by early microbial metabolisms. However, it is unclear if arsenic accumulation is a consequence of biological activity or passive postmortem binding of arsenic by organic matter during diagenesis in volcanically influenced, As-rich environments. Here, we address this uncertainty by evaluating the concentrations, speciation, and detectability of As in active or heat-killed biofilms formed by cyanobacteria or anoxygenic photosynthetic microbes exposed to environmentally relevant concentrations of As(III) or As(V) (50 μM to 3 mM). The genomes or metagenomes of these biofilms contain genes involved in detoxifying or energy-yielding As metabolisms. Biomass accumulates As from the solution in a concentration-dependent manner and with a preference for oxidized As(V) over As(III). Autoclaved biomass accumulates As even more strongly than active biomass, likely because living biofilms actively detoxify As. Active biofilms oxidize and reduce As and accumulate both As(III) and As(V), whereas a small fraction of As(V) can be reduced in inactive biofilms that bind As during diagenesis. Arsenic enrichments in the biomass are detectable by X-ray based spectroscopy techniques (XRF, EPMA-WDS) that are commonly used to analyze geological materials. These findings enable the reconstruction of past active and passive interactions of microbial biomass with arsenic in fossilized microbial biofilms and microbialites from the early Earth.
</summary>
<dc:date>2025-06-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensor-Agnostic, LSTM-Based Human Motion Prediction Using sEMG Data</title>
<link href="https://hdl.handle.net/1721.1/163169" rel="alternate"/>
<author>
<name>Koo, Bon Ho</name>
</author>
<author>
<name>Siu, Ho Chit</name>
</author>
<author>
<name>Petersen, Lonnie G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163169</id>
<updated>2026-03-08T03:24:38Z</updated>
<published>2025-09-02T00:00:00Z</published>
<summary type="text">Sensor-Agnostic, LSTM-Based Human Motion Prediction Using sEMG Data
Koo, Bon Ho; Siu, Ho Chit; Petersen, Lonnie G.
The use of surface electromyography (sEMG) for conventional motion classification and prediction has had limitations due to sensor hardware differences. With the popularization of deep learning-based approaches to the application of motion prediction, this study explores the effects that different hardware sensor platforms have on the performance of a deep learning neural network trained to predict the one-degree-of-freedom (DoF) angular trajectory of a human. Two different sEMG sensor platforms were used to collect raw data from subjects conducting exercises, which was used to train a neural network designed to predict the future angular trajectory of the arm. The results show that the raw data originating from different sensor hardware with different configurations (including the communication method, data acquisition unit (DAQ) usage, electrode configuration, buffering method, preprocessing method, and experimental variables like the sampling frequency) produced bi-LSTM networks that performed similarly. This points to the hardware-agnostic nature of such deep learning networks.
</summary>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering kinetics of TLR7/8 agonist release from bottlebrush prodrugs enables tumor-focused immune stimulation</title>
<link href="https://hdl.handle.net/1721.1/163168" rel="alternate"/>
<author>
<name>Bhagchandani, Sachin H</name>
</author>
<author>
<name>Vohidov, Farrukh</name>
</author>
<author>
<name>Milling, Lauren E</name>
</author>
<author>
<name>Tong, Evelyn Yuzhou</name>
</author>
<author>
<name>Brown, Christopher M</name>
</author>
<author>
<name>Ramseier, Michelle L</name>
</author>
<author>
<name>Liu, Bin</name>
</author>
<author>
<name>Fessenden, Timothy B</name>
</author>
<author>
<name>Nguyen, Hung V-T</name>
</author>
<author>
<name>Kiel, Gavin R</name>
</author>
<author>
<name>Won, Lori</name>
</author>
<author>
<name>Langer, Robert S</name>
</author>
<author>
<name>Spranger, Stefani</name>
</author>
<author>
<name>Shalek, Alex K</name>
</author>
<author>
<name>Irvine, Darrell J</name>
</author>
<author>
<name>Johnson, Jeremiah A</name>
</author>
<id>https://hdl.handle.net/1721.1/163168</id>
<updated>2026-03-08T03:27:08Z</updated>
<published>2023-04-19T00:00:00Z</published>
<summary type="text">Engineering kinetics of TLR7/8 agonist release from bottlebrush prodrugs enables tumor-focused immune stimulation
Bhagchandani, Sachin H; Vohidov, Farrukh; Milling, Lauren E; Tong, Evelyn Yuzhou; Brown, Christopher M; Ramseier, Michelle L; Liu, Bin; Fessenden, Timothy B; Nguyen, Hung V-T; Kiel, Gavin R; Won, Lori; Langer, Robert S; Spranger, Stefani; Shalek, Alex K; Irvine, Darrell J; Johnson, Jeremiah A
Imidazoquinolines (IMDs), such as resiquimod (R848), are of great interest as potential cancer immunotherapies because of their ability to activate Toll-like receptor 7 (TLR7) and/or TLR8 on innate immune cells. Nevertheless, intravenous administration of IMDs causes severe immune-related toxicities, and attempts to improve their tissue-selective exposure while minimizing acute systemic inflammation have proven difficult. Here, using a library of R848 “bottlebrush prodrugs” (BPDs) that differ only by their R848 release kinetics, we explore how the timing of R848 exposure affects immune stimulation in vitro and in vivo. These studies led to the discovery of R848-BPDs that exhibit optimal activation kinetics to achieve potent stimulation of myeloid cells in tumors and substantial reductions in tumor growth following systemic administration in mouse syngeneic tumor models without any observable systemic toxicity. These results suggest that release kinetics can be tuned at the molecular level to provide safe yet effective systemically administered immunostimulant prodrugs for next-generation cancer immunotherapies.
</summary>
<dc:date>2023-04-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>A microneedle vaccine printer for thermostable COVID-19 mRNA vaccines</title>
<link href="https://hdl.handle.net/1721.1/163167" rel="alternate"/>
<author>
<name>vander Straeten, Aurélien</name>
</author>
<author>
<name>Sarmadi, Morteza</name>
</author>
<author>
<name>Daristotle, John L</name>
</author>
<author>
<name>Kanelli, Maria</name>
</author>
<author>
<name>Tostanoski, Lisa H</name>
</author>
<author>
<name>Collins, Joe</name>
</author>
<author>
<name>Pardeshi, Apurva</name>
</author>
<author>
<name>Han, Jooli</name>
</author>
<author>
<name>Varshney, Dhruv</name>
</author>
<author>
<name>Eshaghi, Behnaz</name>
</author>
<author>
<name>Garcia, Johnny</name>
</author>
<author>
<name>Forster, Timothy A</name>
</author>
<author>
<name>Li, Gary</name>
</author>
<author>
<name>Menon, Nandita</name>
</author>
<author>
<name>Pyon, Sydney L</name>
</author>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Jacob-Dolan, Catherine</name>
</author>
<author>
<name>Powers, Olivia C</name>
</author>
<author>
<name>Hall, Kevin</name>
</author>
<author>
<name>Alsaiari, Shahad K</name>
</author>
<author>
<name>Wolf, Morris</name>
</author>
<author>
<name>Tibbitt, Mark W</name>
</author>
<author>
<name>Farra, Robert</name>
</author>
<author>
<name>Barouch, Dan H</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163167</id>
<updated>2026-03-08T03:27:09Z</updated>
<published>2024-04-24T00:00:00Z</published>
<summary type="text">A microneedle vaccine printer for thermostable COVID-19 mRNA vaccines
vander Straeten, Aurélien; Sarmadi, Morteza; Daristotle, John L; Kanelli, Maria; Tostanoski, Lisa H; Collins, Joe; Pardeshi, Apurva; Han, Jooli; Varshney, Dhruv; Eshaghi, Behnaz; Garcia, Johnny; Forster, Timothy A; Li, Gary; Menon, Nandita; Pyon, Sydney L; Zhang, Linzixuan; Jacob-Dolan, Catherine; Powers, Olivia C; Hall, Kevin; Alsaiari, Shahad K; Wolf, Morris; Tibbitt, Mark W; Farra, Robert; Barouch, Dan H; Langer, Robert; Jaklenec, Ana
Decentralized manufacture of thermostable mRNA vaccines in a microneedle patch (MNP) format could enhance vaccine access in low-resource communities by eliminating the need for a cold chain and trained healthcare personnel. Here we describe an automated process for printing MNP Coronavirus Disease 2019 (COVID-19) mRNA vaccines in a standalone device. The vaccine ink is composed of lipid nanoparticles loaded with mRNA and a dissolvable polymer blend that was optimized for high bioactivity by screening formulations in vitro. We demonstrate that the resulting MNPs are shelf stable for at least 6 months at room temperature when assessed using a model mRNA construct. Vaccine loading efficiency and microneedle dissolution suggest that efficacious, microgram-scale doses of mRNA encapsulated in lipid nanoparticles could be delivered with a single patch. Immunizations in mice using manually produced MNPs with mRNA encoding severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike protein receptor-binding domain stimulate long-term immune responses similar to those of intramuscular administration.
</summary>
<dc:date>2024-04-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topical application of Lactobacilli successfully eradicates Pseudomonas aeruginosa biofilms and promotes wound healing in chronic wounds</title>
<link href="https://hdl.handle.net/1721.1/163166" rel="alternate"/>
<author>
<name>Li, Zhihao</name>
</author>
<author>
<name>Zhang, Sixuan</name>
</author>
<author>
<name>Zuber, Flavia</name>
</author>
<author>
<name>Altenried, Stefanie</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Ren, Qun</name>
</author>
<id>https://hdl.handle.net/1721.1/163166</id>
<updated>2026-03-08T03:27:08Z</updated>
<published>2023-11-01T00:00:00Z</published>
<summary type="text">Topical application of Lactobacilli successfully eradicates Pseudomonas aeruginosa biofilms and promotes wound healing in chronic wounds
Li, Zhihao; Zhang, Sixuan; Zuber, Flavia; Altenried, Stefanie; Jaklenec, Ana; Langer, Robert; Ren, Qun
Chronic wounds are difficult to treat due to the presence of biofilm which prevents wound healing. Pseudomonas aeruginosa is one of the most common pathogens found in chronic wounds and conventional treatment strategies have been ineffective in the eradication of its biofilm, without harming the surrounding healthy tissue at the same time. Here, we introduced an innovative approach applying the probiotic product Bio-K+ (containing three lactobacilli) topically as an antimicrobial and antibiofilm agent. We identified lactic acid as the main active component. While antibiotics and antiseptics such as silver ions only demonstrated limited efficacy, Bio-K+ was able to completely eradicate mature P. aeruginosa biofilms established in an in vitro and an ex vivo human skin model. Furthermore, it demonstrated biocompatibility in co-culture with human dermal fibroblasts and accelerated the migration of fibroblasts in a cell migration assay promoting wound healing. To enhance clinical practicability, we introduced Bio-K+ into the hydrocolloid dressing Aquacel, achieving sustained release of lactic acid and biofilm eradication. This new treatment approach applying probiotics could represent a major improvement in the management of chronic wounds and can be extended in treating other biofilm-associated infections.
</summary>
<dc:date>2023-11-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combinatorial development of nebulized mRNA delivery formulations for the lungs</title>
<link href="https://hdl.handle.net/1721.1/163165" rel="alternate"/>
<author>
<name>Jiang, Allen Y</name>
</author>
<author>
<name>Witten, Jacob</name>
</author>
<author>
<name>Raji, Idris O</name>
</author>
<author>
<name>Eweje, Feyisayo</name>
</author>
<author>
<name>MacIsaac, Corina</name>
</author>
<author>
<name>Meng, Sabrina</name>
</author>
<author>
<name>Oladimeji, Favour A</name>
</author>
<author>
<name>Hu, Yizong</name>
</author>
<author>
<name>Manan, Rajith S</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Anderson, Daniel G</name>
</author>
<id>https://hdl.handle.net/1721.1/163165</id>
<updated>2026-03-08T03:27:25Z</updated>
<published>2023-11-20T00:00:00Z</published>
<summary type="text">Combinatorial development of nebulized mRNA delivery formulations for the lungs
Jiang, Allen Y; Witten, Jacob; Raji, Idris O; Eweje, Feyisayo; MacIsaac, Corina; Meng, Sabrina; Oladimeji, Favour A; Hu, Yizong; Manan, Rajith S; Langer, Robert; Anderson, Daniel G
Inhaled delivery of mRNA has the potential to treat a wide variety of diseases. However, nebulized mRNA lipid nanoparticles (LNPs) face several unique challenges including stability during nebulization and penetration through both cellular and extracellular barriers. Here we develop a combinatorial approach addressing these barriers. First, we observe that LNP formulations can be stabilized to resist nebulization-induced aggregation by altering the nebulization buffer to increase the LNP charge during nebulization, and by the addition of a branched polymeric excipient. Next, we synthesize a combinatorial library of ionizable, degradable lipids using reductive amination, and evaluate their delivery potential using fully differentiated air–liquid interface cultured primary lung epithelial cells. The final combination of ionizable lipid, charge-stabilized formulation and stability-enhancing excipient yields a significant improvement in lung mRNA delivery over current state-of-the-art LNPs and polymeric nanoparticles.
</summary>
<dc:date>2023-11-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanoparticle‐Mediated Delivery of Anti‐PU.1 siRNA via Localized Intracisternal Administration Reduces Neuroinflammation</title>
<link href="https://hdl.handle.net/1721.1/163164" rel="alternate"/>
<author>
<name>Ralvenius, William T</name>
</author>
<author>
<name>Andresen, Jason L</name>
</author>
<author>
<name>Huston, Margaret M</name>
</author>
<author>
<name>Penney, Jay</name>
</author>
<author>
<name>Bonner, Julia Maeve</name>
</author>
<author>
<name>Fenton, Owen S</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Tsai, Li‐Huei</name>
</author>
<id>https://hdl.handle.net/1721.1/163164</id>
<updated>2025-10-11T06:54:31Z</updated>
<published>2024-02-22T00:00:00Z</published>
<summary type="text">Nanoparticle‐Mediated Delivery of Anti‐PU.1 siRNA via Localized Intracisternal Administration Reduces Neuroinflammation
Ralvenius, William T; Andresen, Jason L; Huston, Margaret M; Penney, Jay; Bonner, Julia Maeve; Fenton, Owen S; Langer, Robert; Tsai, Li‐Huei
Neuroinflammation is a hallmark of neurodegenerative disorders including Alzheimer's disease (AD). Microglia, the brain's immune cells, express many of the AD‐risk loci identified in genome wide association studies and present a promising target for anti‐inflammatory RNA therapeutics but are difficult to transfect with current methods. Here, several lipid nanoparticle (LNP) formulations are examined, and a lead candidate that supports efficient RNA delivery in cultures of human stem cell‐derived microglia‐like cells (iMGLs) and animal models of neuroinflammation is identified. The lead microglia LNP (MG‐LNP) formulation shows minimal toxicity and improves delivery efficiency to inflammatory iMGLs, suggesting a preference for delivery into activated microglia. Intraperitoneal injection of the MG‐LNP formulation generates widespread expression of the delivered reporter construct in all organs, whereas local intracisternal injection directly into the cerebrospinal fluid leads to preferential expression in the brain. It is shown that LNP‐mediated delivery of siRNA targeting the PU.1 transcription factor, a known AD‐risk locus, successfully reduces PU.1 levels in iMGLs and reduces neuroinflammation in mice injected with LPS and in CK‐p25 mice that mimic the chronic neuroinflammation seen in AD patients. The LNP formulation represents an effective RNA delivery vehicle when applied intrathecally and can be broadly utilized to test potential neuroinflammation‐directed gene therapies.
</summary>
<dc:date>2024-02-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>CRISPR–Cas9 delivery strategies for the modulation of immune and non-immune cells</title>
<link href="https://hdl.handle.net/1721.1/163163" rel="alternate"/>
<author>
<name>Alsaiari, Shahad K</name>
</author>
<author>
<name>Eshaghi, Behnaz</name>
</author>
<author>
<name>Du, Bujie</name>
</author>
<author>
<name>Kanelli, Maria</name>
</author>
<author>
<name>Li, Gary</name>
</author>
<author>
<name>Wu, Xunhui</name>
</author>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Chaddah, Mehr</name>
</author>
<author>
<name>Lau, Alicia</name>
</author>
<author>
<name>Yang, Xin</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163163</id>
<updated>2025-10-11T06:54:33Z</updated>
<published>2024-10-16T00:00:00Z</published>
<summary type="text">CRISPR–Cas9 delivery strategies for the modulation of immune and non-immune cells
Alsaiari, Shahad K; Eshaghi, Behnaz; Du, Bujie; Kanelli, Maria; Li, Gary; Wu, Xunhui; Zhang, Linzixuan; Chaddah, Mehr; Lau, Alicia; Yang, Xin; Langer, Robert; Jaklenec, Ana
CRISPR–Cas9 genome editing technology is a promising tool for genetically engineering immune cells and modulating immune systems. Although ex vivo genome editing of immune cells has reached clinical trials, in vivo application is still restricted by the instability and inefficient delivery of CRISPR–Cas9 components to immune cells through circulation. In this Review, we summarize ex vivo and in vivo strategies to deliver CRISPR–Cas9 components to both non-immune and immune cells. We review the progress made in non-immune cells because it offers insights that can be applied to advancing research in immune cells. We also discuss principles and challenges of immune system modulation using CRISPR–Cas9 genome editing technology.
</summary>
<dc:date>2024-10-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Degradable poly(β-amino ester) microparticles for cleansing products and food fortification</title>
<link href="https://hdl.handle.net/1721.1/163162" rel="alternate"/>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Xiao, Ruiqing</name>
</author>
<author>
<name>Jin, Tianyi</name>
</author>
<author>
<name>Pan, Xinyan</name>
</author>
<author>
<name>Fransen, Katharina A</name>
</author>
<author>
<name>Alsaiari, Shahad K</name>
</author>
<author>
<name>Lau, Alicia</name>
</author>
<author>
<name>He, Ruizhe</name>
</author>
<author>
<name>Han, Jooli</name>
</author>
<author>
<name>Pedretti, Benjamin J</name>
</author>
<author>
<name>Yeo, Jing Ying</name>
</author>
<author>
<name>Yang, Xin</name>
</author>
<author>
<name>Olsen, Bradley D</name>
</author>
<author>
<name>Alexander-Katz, Alfredo</name>
</author>
<author>
<name>Smith, Zachary P</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163162</id>
<updated>2025-10-11T06:54:27Z</updated>
<published>2024-01-01T00:00:00Z</published>
<summary type="text">Degradable poly(β-amino ester) microparticles for cleansing products and food fortification
Zhang, Linzixuan; Xiao, Ruiqing; Jin, Tianyi; Pan, Xinyan; Fransen, Katharina A; Alsaiari, Shahad K; Lau, Alicia; He, Ruizhe; Han, Jooli; Pedretti, Benjamin J; Yeo, Jing Ying; Yang, Xin; Olsen, Bradley D; Alexander-Katz, Alfredo; Smith, Zachary P; Langer, Robert; Jaklenec, Ana
Microplastic pollution is a pressing global crisis caused by the extensive use of nondegradable microplastic materials in daily activities. One effective approach to mitigate this issue is to replace nondegradable plastics with degradable materials that have properties amendable for targeted applications. Here we present the development of a degradable microparticle (MP) platform based on a poly(β-amino ester) (PAE) that degrades into sugar and amino acid derivatives. This PAE MP platform showed functional replacement of nondegradable microplastics used in cleansing products and food fortification. In cleansing products, PAE MPs effectively enhanced the cleansing efficiency of a representative rinse-off product and showed effective removal of potentially toxic elements, as an alternative of traditional nondegradable microbeads. In food fortification, PAE MPs provided robust protection for multiple essential vitamins and minerals against extensive cooking and storage conditions with rapid nutrient release in a simulated human digestion system. Collectively, these PAE MPs present a potential platform to replace microplastic usage on a global scale in many applications.
</summary>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging next-generation materials for cancer neuroscience therapies in the central nervous system</title>
<link href="https://hdl.handle.net/1721.1/163161" rel="alternate"/>
<author>
<name>Bernstock, Joshua D</name>
</author>
<author>
<name>Johnston, Benjamin R</name>
</author>
<author>
<name>Friedman, Gregory K</name>
</author>
<author>
<name>Chiocca, EA</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Srinivasan, Shriya S</name>
</author>
<id>https://hdl.handle.net/1721.1/163161</id>
<updated>2025-10-11T06:54:32Z</updated>
<published>2024-04-22T00:00:00Z</published>
<summary type="text">Leveraging next-generation materials for cancer neuroscience therapies in the central nervous system
Bernstock, Joshua D; Johnston, Benjamin R; Friedman, Gregory K; Chiocca, EA; Langer, Robert; Srinivasan, Shriya S
Interdisciplinary strategies bridging oncology, neuroscience, bioelectronics and materials science will facilitate the development of next-generation therapies and devices for cancers of the central nervous system.
</summary>
<dc:date>2024-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>On‐Patient Temporary Medical Record for Accurate, Time‐Sensitive Information at the Point of Care</title>
<link href="https://hdl.handle.net/1721.1/163160" rel="alternate"/>
<author>
<name>Collins, Joe</name>
</author>
<author>
<name>Han, Jooli</name>
</author>
<author>
<name>Sarmadi, Morteza</name>
</author>
<author>
<name>Allison‐Logan, Stephanie</name>
</author>
<author>
<name>Straeten, Aurelien vander</name>
</author>
<author>
<name>Perkinson, Collin F</name>
</author>
<author>
<name>Acolaste, Sarah</name>
</author>
<author>
<name>Kanelli, Maria</name>
</author>
<author>
<name>Daristotle, John</name>
</author>
<author>
<name>Karchin, Ari</name>
</author>
<author>
<name>Henderson, Mitchell</name>
</author>
<author>
<name>Cruz, Mache</name>
</author>
<author>
<name>Artzi, Dolev</name>
</author>
<author>
<name>Alsaiari, Shahad K</name>
</author>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Levy, Lauren</name>
</author>
<author>
<name>Wood, Lowell</name>
</author>
<author>
<name>Jing, Lihong</name>
</author>
<author>
<name>McHugh, Kevin J</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163160</id>
<updated>2025-10-11T06:54:29Z</updated>
<published>2024-04-18T00:00:00Z</published>
<summary type="text">On‐Patient Temporary Medical Record for Accurate, Time‐Sensitive Information at the Point of Care
Collins, Joe; Han, Jooli; Sarmadi, Morteza; Allison‐Logan, Stephanie; Straeten, Aurelien vander; Perkinson, Collin F; Acolaste, Sarah; Kanelli, Maria; Daristotle, John; Karchin, Ari; Henderson, Mitchell; Cruz, Mache; Artzi, Dolev; Alsaiari, Shahad K; Zhang, Linzixuan; Levy, Lauren; Wood, Lowell; Jing, Lihong; McHugh, Kevin J; Bawendi, Moungi G; Langer, Robert; Jaklenec, Ana
Accurate medical recordkeeping is important for personal and public health. Conventional forms of on‐patient medical information, such as medical alert bracelets or finger‐markings, may compromise patient privacy because they are readily visible to other people. Here, the development of an invisible, temporary, and easily deployable on‐patient medical recordkeeping system is reported. Information is stored in unique patterns of spatially distributed near‐infrared (NIR) fluorescent quantum dots (QDs), which are delivered to the skin using dissolvable microneedle arrays. The patterns are invisible to the naked eye but detectable with an infrared camera, which can extract information with &gt;98% accuracy using automated pattern recognition software. By encapsulating NIR QDs in an FDA‐approved biodegradable polymer, biodegradation rates can be tuned so that the encoded medical information can be conveyed in both a spatial and temporal manner, with some components fading within 100 days and others persisting for 6 months. This may be particularly useful for administering a series of vaccinations or treatments by indicating if enough time has passed for the patient to receive the next dose. Importantly, this system contains no personal information, does not require connection to a centralized database, and is not visible to the naked eye, ensuring patient privacy.
</summary>
<dc:date>2024-04-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of optimally windowed chirp signals in industrial rheological measurements: method development and data processing</title>
<link href="https://hdl.handle.net/1721.1/163157" rel="alternate"/>
<author>
<name>Perego, Alessandro</name>
</author>
<author>
<name>Vadillo, Damien C.</name>
</author>
<author>
<name>Mills, Matthew J. L.</name>
</author>
<author>
<name>Das, Mohua</name>
</author>
<author>
<name>McKinley FRS, Gareth H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163157</id>
<updated>2025-10-11T06:54:17Z</updated>
<published>2025-08-15T00:00:00Z</published>
<summary type="text">Evaluation of optimally windowed chirp signals in industrial rheological measurements: method development and data processing
Perego, Alessandro; Vadillo, Damien C.; Mills, Matthew J. L.; Das, Mohua; McKinley FRS, Gareth H.
The optimally windowed chirp (OWCh) methodology offers an alternative to traditional discrete frequency sweeps, acquiring complete rheological spectra in seconds while preserving data density and accuracy. For thermorheologically simple materials, OWCh accelerates data collection, enabling rapid creation of time–temperature superposition (tTS) master curves, potentially saving hours of instrument time. For mutating materials, such as those undergoing curing, OWCh facilitates detailed rheological characterization of viscoelastic properties throughout these transition events. We implemented OWCh within an industrial analytical research framework using commercially available rheometers. This integration is enhanced by two custom Python packages, piblin and hermes-rheo, which streamline and automate analysis of rheological datasets. For thermorheologically simple materials, this framework reduces tTS master curve data collection time by 40% while increasing data density by an order of magnitude. For mutating materials, we leverage the mutation number to design OWCh waveforms, effectively probing the characteristic timescale of fast thermomechanical transitions during curing experiments.
</summary>
<dc:date>2025-08-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exponential Speedups for Quantum Walks in Random Hierarchical Graphs</title>
<link href="https://hdl.handle.net/1721.1/163156" rel="alternate"/>
<author>
<name>Balasubramanian, Shankar</name>
</author>
<author>
<name>Li, Tongyang</name>
</author>
<author>
<name>Harrow, Aram W.</name>
</author>
<id>https://hdl.handle.net/1721.1/163156</id>
<updated>2025-10-11T06:54:18Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">Exponential Speedups for Quantum Walks in Random Hierarchical Graphs
Balasubramanian, Shankar; Li, Tongyang; Harrow, Aram W.
There are few known exponential speedups for quantum algorithms and these tend to fall into even fewer families. One speedup that has mostly resisted generalization is the use of quantum walks to traverse the welded-tree graph, due to Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman. We show how to generalize this to a large class of hierarchical graphs in which the vertices are grouped into “supervertices” which are arranged according to a d-dimensional lattice. Supervertices can have different sizes, and edges between supervertices correspond to random connections between their constituent vertices. The hitting times of quantum walks on these graphs are related to the localization properties of zero modes in certain disordered tight binding Hamiltonians. The speedups range from superpolynomial to exponential, depending on the underlying dimension and the random graph model. We also provide concrete realizations of these hierarchical graphs, and introduce a general method for constructing graphs with efficient quantum traversal times using graph sparsification.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mean robust optimization</title>
<link href="https://hdl.handle.net/1721.1/163155" rel="alternate"/>
<author>
<name>Wang, Irina</name>
</author>
<author>
<name>Becker, Cole</name>
</author>
<author>
<name>Van Parys, Bart</name>
</author>
<author>
<name>Stellato, Bartolomeo</name>
</author>
<id>https://hdl.handle.net/1721.1/163155</id>
<updated>2025-10-11T06:54:20Z</updated>
<published>2024-11-28T00:00:00Z</published>
<summary type="text">Mean robust optimization
Wang, Irina; Becker, Cole; Van Parys, Bart; Stellato, Bartolomeo
Robust optimization is a tractable and expressive technique for decision-making under uncertainty, but it can lead to overly conservative decisions when pessimistic assumptions are made on the uncertain parameters. Wasserstein distributionally robust optimization can reduce conservatism by being data-driven, but it often leads to very large problems with prohibitive solution times. We introduce mean robust optimization, a general framework that combines the best of both worlds by providing a trade-off between computational effort and conservatism. We propose uncertainty sets constructed based on clustered data rather than on observed data points directly, thereby significantly reducing problem size. By varying the number of clusters, our method bridges between robust and Wasserstein distributionally robust optimization. We show finite-sample performance guarantees and explicitly control the potential additional pessimism introduced by any clustering procedure. In addition, we prove conditions for which, when the uncertainty enters linearly in the constraints, clustering does not affect the optimal solution. We illustrate the efficiency and performance preservation of our method on several numerical examples, obtaining multiple orders of magnitude speedups in solution time with little-to-no effect on the solution quality.
</summary>
<dc:date>2024-11-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forced Gas Convection for Uniform Freezing of Lyophilization Vials</title>
<link href="https://hdl.handle.net/1721.1/163154" rel="alternate"/>
<author>
<name>Burcat, Steven J.</name>
</author>
<author>
<name>Kadambi, Rohan P.</name>
</author>
<author>
<name>Stratta, Lorenzo</name>
</author>
<author>
<name>Braatz, Richard D.</name>
</author>
<author>
<name>Pisano, Roberto</name>
</author>
<author>
<name>Slocum, Alexander H.</name>
</author>
<author>
<name>Trout, Bernhardt L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163154</id>
<updated>2025-10-11T06:54:03Z</updated>
<published>2025-07-29T00:00:00Z</published>
<summary type="text">Forced Gas Convection for Uniform Freezing of Lyophilization Vials
Burcat, Steven J.; Kadambi, Rohan P.; Stratta, Lorenzo; Braatz, Richard D.; Pisano, Roberto; Slocum, Alexander H.; Trout, Bernhardt L.
Purpose: Conventional shelf-freezing in pharmaceutical lyophilization suffers from batch variation and is potentially incompatible with emerging continuous lyophilization systems. This work presents a forced gas convective freezing chamber for suspended vials in cross-flow to improve the quality of the freezing process and meet the needs of continuous lyophilization. Methods: First, computational fluid dynamics simulations were performed to determine key process parameters. Then, physical chambers were built to meet these requirements. Sets of twenty 10R vials containing 3 mL of aqueous solution were frozen to characterize the per-vial heat transfer. Additionally, a novel nucleation technique was investigated where conditioned vials were exposed to an impulse of &lt;−30 °C gas. Finally, frozen vials were completely dried in 12 h in an attached vacuum chamber. Results: The chambers conditioned vials from 25 °C to −1 °C in under 20 min, with final vial temperatures varying by less than 0.5 °C. The impulse technique induced nucleation in all vials within 30 s without significantly cooling them. After nucleation, the system accessed slow (0.05 g/min) and rapid (1.0 g/min) solidification rates, as well as post-solidification procedures including typical ramp and hold protocols. Dried vials had residual moisture below 2.5 wt% and showed no signs of collapse. Conclusions: This freezing chamber was demonstrated to track gas temperature setpoints as low as −50 °C within ±1 °C and induce nucleation in all vials virtually simultaneously, enabling excellent control of the freezing process. The chamber’s cooling via forced convection and its available front and back faces make it compatible with integration into a continuous lyophilization system.
</summary>
<dc:date>2025-07-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>EMT-ciliary signaling in quasi-mesenchymal-stem-like cells drives therapeutic resistance and is a druggable vulnerability in triple-negative breast cancer</title>
<link href="https://hdl.handle.net/1721.1/163153" rel="alternate"/>
<author>
<name>Tessier, Camille E.</name>
</author>
<author>
<name>Derrien, Jennifer</name>
</author>
<author>
<name>Dupuy, Aurore M. M.</name>
</author>
<author>
<name>Pelé, Thomas</name>
</author>
<author>
<name>Moquet, Martin</name>
</author>
<author>
<name>Roul, Julie</name>
</author>
<author>
<name>Douillard, Elise</name>
</author>
<author>
<name>El Harrif, Camille</name>
</author>
<author>
<name>Pinson, Xavier</name>
</author>
<author>
<name>Le Gallo, Matthieu</name>
</author>
<author>
<name>Godey, Florence</name>
</author>
<author>
<name>Tas, Patrick</name>
</author>
<author>
<name>Viel, Roselyne</name>
</author>
<author>
<name>Grasset, Eloïse</name>
</author>
<id>https://hdl.handle.net/1721.1/163153</id>
<updated>2025-10-11T06:54:12Z</updated>
<published>2025-08-26T00:00:00Z</published>
<summary type="text">EMT-ciliary signaling in quasi-mesenchymal-stem-like cells drives therapeutic resistance and is a druggable vulnerability in triple-negative breast cancer
Tessier, Camille E.; Derrien, Jennifer; Dupuy, Aurore M. M.; Pelé, Thomas; Moquet, Martin; Roul, Julie; Douillard, Elise; El Harrif, Camille; Pinson, Xavier; Le Gallo, Matthieu; Godey, Florence; Tas, Patrick; Viel, Roselyne; Grasset, Eloïse
Cancer therapeutic resistance is mediated, in part, by phenotypic heterogeneity and the plasticity of tumor cells, the latter being enabled by epithelial–mesenchymal transition (EMT). However, EMT in human cancer therapeutic response remains poorly understood. We developed patient-derived organoids (PDOs) from human triple-negative breast cancer (TNBC) and investigated their response to chemotherapy. We found that chemotherapy treatment kills the bulk of tumor cells in PDOs, but there is selective survival of malignant cells that had activated an EMT program, entered a quasi-mesenchymal, stem cell-like state and display primary cilia. We developed a family of small-molecule inhibitors of ciliogenesis and show that treatment with these inhibitors, or genetic ablation of primary cilia, is sufficient to suppress this chemoresistance via NFκB-induced cell death. We conclude that an EMT–ciliary signaling axis induces chemoresistance in quasi-mesenchymal ciliated stem-like cells to help tumors evade chemotherapy and represents a druggable vulnerability in human TNBC.
</summary>
<dc:date>2025-08-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pre-Clinical Models of Heart Failure with Preserved Ejection Fraction: Advancing Knowledge for Device Based Therapies</title>
<link href="https://hdl.handle.net/1721.1/163152" rel="alternate"/>
<author>
<name>Langer, Nina</name>
</author>
<author>
<name>Escher, Andreas</name>
</author>
<author>
<name>Ozturk, Caglar</name>
</author>
<author>
<name>Stephens, Andrew F.</name>
</author>
<author>
<name>Roche, Ellen T.</name>
</author>
<author>
<name>Granegger, Marcus</name>
</author>
<author>
<name>Kaye, David M.</name>
</author>
<author>
<name>Gregory, Shaun D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163152</id>
<updated>2025-10-11T06:54:24Z</updated>
<published>2025-08-25T00:00:00Z</published>
<summary type="text">Pre-Clinical Models of Heart Failure with Preserved Ejection Fraction: Advancing Knowledge for Device Based Therapies
Langer, Nina; Escher, Andreas; Ozturk, Caglar; Stephens, Andrew F.; Roche, Ellen T.; Granegger, Marcus; Kaye, David M.; Gregory, Shaun D.
Heart failure with preserved ejection fraction (HFpEF) is a growing health problem worldwide, accounting for half of all heart failure cases. HFpEF patients present with diverse underlying causes and symptoms, making diagnosis and treatment challenging. Current pharmacological therapies are inadequate, while approved device-based therapies have shown limited success due to patient heterogeneity. This underscores the need for improved pre-clinical models, critical for guiding the design and development of effective therapeutic devices. This paper presents an overview of current pre-clinical HFpEF models, including in-silico, in-vitro, ex-vivo, and in-vivo approaches, aimed at advancing the understanding of HFpEF physiology and the development of device-based therapies. We examine each model's ability to replicate key HFpEF characteristics, discuss their respective strengths and limitations, and highlight their role in supporting the creation of clinically relevant solutions. Additionally, the potential of emerging advancements is explored.
</summary>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new approach to plurals-of-politeness and their number agreement</title>
<link href="https://hdl.handle.net/1721.1/163151" rel="alternate"/>
<author>
<name>Kaur, Gurmeet</name>
</author>
<author>
<name>Sinha, Yash</name>
</author>
<id>https://hdl.handle.net/1721.1/163151</id>
<updated>2025-10-11T06:54:21Z</updated>
<published>2025-08-25T00:00:00Z</published>
<summary type="text">A new approach to plurals-of-politeness and their number agreement
Kaur, Gurmeet; Sinha, Yash
Plural DPs, which indicate politeness or honorification towards a singular referent, have received significant attention in the literature. Unlike regular plurals that always trigger plural agreement, these DPs, which we call plurals-of-politeness/PoPs, can trigger singular agreement on some probes in some languages. Moreover, the distribution of singular agreement is subject to certain constraints. Expanding the class of PoPs to include not only pronominals but also nominals, which are crosslinguistically rarer and have received relatively less attention, this paper offers a new analysis of agreement with PoPs. We propose a structure of PoPs, in which the pl feature in a PoP is embedded further inside the DP than the pl feature in a regular plural. The core idea is that a probe that can access the pl feature in a regular plural can sometimes fail to do so in a PoP, resulting in singular agreement. This analysis can derive all the constraints on singular agreement with PoPs, which existing accounts of agreement with PoPs are unable to do. Additionally, by examining nominal and pronominal PoPs together, we provide the first unified account of DP-internal and external agreement with PoPs.
</summary>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gallium and Indium Selective Sulfidation and Vapor Phase Transport from e-Waste Feedstocks</title>
<link href="https://hdl.handle.net/1721.1/163150" rel="alternate"/>
<author>
<name>Benderly-Kremen, Ethan</name>
</author>
<author>
<name>Daehn, Katrin</name>
</author>
<author>
<name>Allanore, Antoine</name>
</author>
<id>https://hdl.handle.net/1721.1/163150</id>
<updated>2025-10-11T06:54:23Z</updated>
<published>2025-08-25T00:00:00Z</published>
<summary type="text">Gallium and Indium Selective Sulfidation and Vapor Phase Transport from e-Waste Feedstocks
Benderly-Kremen, Ethan; Daehn, Katrin; Allanore, Antoine
Gallium (Ga) and indium (In) share similarities in their chemical behavior, their dilute presence in waste electronics (e-waste), and recycling rates close to 0% from such streams. Designing processes to extract gallium from LED chips and indium from LCD screens simultaneously reveals the potential and necessary distinctions for a flexible process based on elemental sulfur reactivity, which can be applied to both feedstocks. Whereas Ga- and In-compounds found in e-waste (gallium nitride, GaN; indium tin oxide, ‘ITO’) are recalcitrant to dissolution in aqueous feedstocks, the reaction with sulfur gas to form volatile sulfides may support their selective extraction from prepared e-waste. Process conditions for selective sulfidation are herein informed from thermodynamics and demonstrated experimentally. Vapor phase transport of the volatile sulfides is a powerful means to collect and enrich gallium and indium. Practical implementation likely calls for physical separation approaches to disassemble e-waste, remove excess material (epoxy, glass, metallic leads, and housing) from LED chips, and expose the ITO layer within LCD screens.
</summary>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI for Community: A Student-Led Initiative Promoting Sustainability Awareness Through App Development and Community Engagement</title>
<link href="https://hdl.handle.net/1721.1/163149" rel="alternate"/>
<author>
<name>Wang, Justin</name>
</author>
<author>
<name>Tang, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/163149</id>
<updated>2025-10-11T06:55:18Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">AI for Community: A Student-Led Initiative Promoting Sustainability Awareness Through App Development and Community Engagement
Wang, Justin; Tang, Justin
This paper presents AI for Community, a student-led initiative where high school students develop AI-powered solutions for sustainability. Starting with a biodiversity-focused Native Plant Awareness app, the initiative demonstrates the impactful intersection of technological innovation and environmental conservation. Building on this foundation, the initiative has expanded to address other sustainability challenges—such as ocean conservation and senior care, demonstrating how AI can drive both environmental and social impact.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Generation – an AI Literacy curriculum for disadvantaged youth in Romania</title>
<link href="https://hdl.handle.net/1721.1/163148" rel="alternate"/>
<author>
<name>UiPath Foundation</name>
</author>
<id>https://hdl.handle.net/1721.1/163148</id>
<updated>2025-10-11T06:55:38Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">AI Generation – an AI Literacy curriculum for disadvantaged youth in Romania
UiPath Foundation
We believe technology is the key to unlocking educational opportunities for children in vulnerable communities when it is thoughtfully integrated with the realities they face. By providing essential digital infrastructure, such as tablets, laptops, and reliable internet access, together with an online learning platform, we open doors to boundless learning possibilities. We equip children with essential programming and AI skills as a core part of their digital education. By mastering these technologies, they gain the tools needed to thrive in the future workforce.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Spatial Reasoning Capabilities of Large Multimodal Models on Chest X-Ray Anomaly Detection</title>
<link href="https://hdl.handle.net/1721.1/163147" rel="alternate"/>
<author>
<name>Li, Linday Skylar</name>
</author>
<id>https://hdl.handle.net/1721.1/163147</id>
<updated>2025-10-11T06:55:28Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Evaluating the Spatial Reasoning Capabilities of Large Multimodal Models on Chest X-Ray Anomaly Detection
Li, Linday Skylar
While current results show potential in LMM-based diagnosis, it is unclear whether their outputs are backed by strong spatial reasoning capabilities. To evaluate this, I provided GPT-4o with chest X-rays from the NIH chest X-ray dataset and asked it to return diagnoses and the coordinates of bounding boxes surrounding any identified abnormalities. I find variable performance across different images in the dataset, suggesting the need for further development of the spatial reasoning capabilities of LMMs.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Democratizing Biotech: How AI-Powered Virtual Labs Could Transform Global Biotechnology Learning</title>
<link href="https://hdl.handle.net/1721.1/163146" rel="alternate"/>
<author>
<name>Kumar, Aarav</name>
</author>
<id>https://hdl.handle.net/1721.1/163146</id>
<updated>2025-10-11T06:55:40Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Democratizing Biotech: How AI-Powered Virtual Labs Could Transform Global Biotechnology Learning
Kumar, Aarav
This paper examines how AI-powered virtual labs can democratize biotechnology education, enabling students in even the most remote areas to conduct sophisticated experiments.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mindset Math × Data Science: A Formula for a Multidisciplinary Framework in Math Instruction</title>
<link href="https://hdl.handle.net/1721.1/163145" rel="alternate"/>
<author>
<name>Senajon, Samantha Clarisse</name>
</author>
<author>
<name>Nethikunta, Sanvi</name>
</author>
<id>https://hdl.handle.net/1721.1/163145</id>
<updated>2025-10-11T06:55:23Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Mindset Math × Data Science: A Formula for a Multidisciplinary Framework in Math Instruction
Senajon, Samantha Clarisse; Nethikunta, Sanvi
Mindset Math introduces an initiative based on a hybrid model of data science and traditional educational algebraic curricula built upon the success of previous projects. Highlighting the diverse use of Artificial Intelligence (AI) in career and technical fields, Mindset Math aims to use data science’s multidisciplinary properties to supplement the growth of data literacy and quantitative analysis abilities.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multicultural Education with AI: The Case of “A World at the Table”</title>
<link href="https://hdl.handle.net/1721.1/163144" rel="alternate"/>
<author>
<name>Peluso, Anna Lucia</name>
</author>
<id>https://hdl.handle.net/1721.1/163144</id>
<updated>2025-10-11T06:55:31Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Multicultural Education with AI: The Case of “A World at the Table”
Peluso, Anna Lucia
The ‘A World at the Table’ project describes an interdisciplinary teaching experience conducted in a multicultural lower secondary school class at the IC “Ferrajolo – Siani” in Acerra (NA). Its aim was to promote integration, inclusion, and global citizenship through the collaborative writing of a song. Using digital tools and generative Artificial Intelligence (GenAI), students from diverse cultural backgrounds (Albania, Belarus, Brazil, Italy, Serbia, Morocco, Ukraine) co-created song lyrics that reflect the value of diversity, inclusion, and intercultural dialogue, inspired by traditional dishes from their home countries.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI on AI: Can GenAI Tools Design and Evaluate Course Outlines Better Than We Think?</title>
<link href="https://hdl.handle.net/1721.1/163143" rel="alternate"/>
<author>
<name>Kumar, Jeya Amantha</name>
</author>
<id>https://hdl.handle.net/1721.1/163143</id>
<updated>2025-10-11T06:55:41Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">AI on AI: Can GenAI Tools Design and Evaluate Course Outlines Better Than We Think?
Kumar, Jeya Amantha
Despite the increasing use of generative AI (GenAI) tools in education, little is known about their effectiveness in producing pedagogically sound instructional materials. Therefore, this study evaluated the performance of six GenAI tools as instructional designers in generating a unit or module outline for an undergraduate course, focusing on developing learning objectives based on Universal Design for Learning (UDL) principles and later evaluating each outcome.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Swype AI: A Multimodal Voice and Gesture Control System for Accessible Education</title>
<link href="https://hdl.handle.net/1721.1/163142" rel="alternate"/>
<author>
<name>Ganeshkumar, Dhanvinkumar</name>
</author>
<id>https://hdl.handle.net/1721.1/163142</id>
<updated>2025-10-11T06:55:34Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Swype AI: A Multimodal Voice and Gesture Control System for Accessible Education
Ganeshkumar, Dhanvinkumar
Swype AI uses a real-time software system that combines natural voice and gesture control to replace traditional peripherals. It runs on consumer laptops without requiring specialized hardware.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transplant waiting list: Technology transforming lives</title>
<link href="https://hdl.handle.net/1721.1/163141" rel="alternate"/>
<author>
<name>Ferraz, Carolina Lima Duarte</name>
</author>
<author>
<name>Ramirez, Julia Beltrão Lemos</name>
</author>
<id>https://hdl.handle.net/1721.1/163141</id>
<updated>2025-10-11T06:55:29Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Transplant waiting list: Technology transforming lives
Ferraz, Carolina Lima Duarte; Ramirez, Julia Beltrão Lemos
Although Brazil is a global reference in organ transplantation, with the largest public transplant system in the world, the shortage of organs remains a worrying scenario in the country, since the number of effective donors does not meet the demand for transplants (Brasil, 2024). This work thus aims to investigate and understand the waiting-list model in the organ transplant process, so that this scenario may be mitigated through the development of an app.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Policy Can Help Ensure the Proper Use of AI in K-12 Education</title>
<link href="https://hdl.handle.net/1721.1/163140" rel="alternate"/>
<author>
<name>DiPaola, Daniella</name>
</author>
<author>
<name>Salazar-Gómez, Andrés F.</name>
</author>
<author>
<name>Abelson, Hal</name>
</author>
<author>
<name>Klopfer, Eric</name>
</author>
<author>
<name>Goldston, David</name>
</author>
<author>
<name>Breazeal, Cynthia</name>
</author>
<id>https://hdl.handle.net/1721.1/163140</id>
<updated>2025-10-11T06:55:38Z</updated>
<published>2024-07-19T00:00:00Z</published>
<summary type="text">How Policy Can Help Ensure the Proper Use of AI in K-12 Education
DiPaola, Daniella; Salazar-Gómez, Andrés F.; Abelson, Hal; Klopfer, Eric; Goldston, David; Breazeal, Cynthia
</summary>
<dc:date>2024-07-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing AI Characters as Facilitators of Children’s Learning Experiences</title>
<link href="https://hdl.handle.net/1721.1/163139" rel="alternate"/>
<author>
<name>Tiwari, Sonia</name>
</author>
<id>https://hdl.handle.net/1721.1/163139</id>
<updated>2025-10-11T06:55:33Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Assessing AI Characters as Facilitators of Children’s Learning Experiences
Tiwari, Sonia
This study examines how children’s interaction with AI characters can shape their learning experiences. Drawing on a literature review of child–AI interactions in educational contexts, this study presents AI Character Assessment (AIC-A) as an analytical framework for researchers assessing how well an AI character’s design and interaction align with children’s needs.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Localized Intelligence: Designing an AI-Enhanced OER Course for Faculty Development in a Low-Resource Language Context</title>
<link href="https://hdl.handle.net/1721.1/163138" rel="alternate"/>
<author>
<name>Shilibekova, Aigerim</name>
</author>
<id>https://hdl.handle.net/1721.1/163138</id>
<updated>2025-10-11T06:55:30Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Localized Intelligence: Designing an AI-Enhanced OER Course for Faculty Development in a Low-Resource Language Context
Shilibekova, Aigerim
The paper introduces the concept of localized intelligence, a pedagogical principle that frames instructional design with AI as a situated and context-responsive practice, guided by human expertise. This approach challenges assumptions of AI scalability and offers a replicable model for designing inclusive, culturally aligned professional learning experiences, with implications for multilingual faculty development across global contexts.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Return of the Jibo: Generative AI &amp; Social Robots for Virtual Production Education</title>
<link href="https://hdl.handle.net/1721.1/163137" rel="alternate"/>
<author>
<name>Pillis, D.</name>
</author>
<author>
<name>Ferguson, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/163137</id>
<updated>2025-10-11T06:55:40Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Return of the Jibo: Generative AI &amp; Social Robots for Virtual Production Education
Pillis, D.; Ferguson, Jon
This paper discusses the use of Jibo, a socially intelligent robot developed at MIT, paired with the introduction of generative AI, as a method to introduce AI in educational settings, specifically, their combined integration into film-making pedagogy at a film school.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Personalized Reading and Writing Tutor: Improving Literacy Skills and Assessment Accuracy</title>
<link href="https://hdl.handle.net/1721.1/163136" rel="alternate"/>
<author>
<name>Kopikar, Moksh</name>
</author>
<author>
<name>Mandloi, Naman</name>
</author>
<id>https://hdl.handle.net/1721.1/163136</id>
<updated>2025-10-11T06:55:30Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Personalized Reading and Writing Tutor: Improving Literacy Skills and Assessment Accuracy
Kopikar, Moksh; Mandloi, Naman
This paper presents the development and evaluation of the Reading Writing Tutor (RWT), a personalized learning assistant powered by Large Language Models (LLMs), designed to enhance students’ reading and writing skills according to state education standards.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reflections from UK and US Classrooms on Building Responsible AI Literacy</title>
<link href="https://hdl.handle.net/1721.1/163135" rel="alternate"/>
<author>
<name>Wang, Justinia J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163135</id>
<updated>2025-10-11T06:55:25Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Reflections from UK and US Classrooms on Building Responsible AI Literacy
Wang, Justinia J.
This position paper highlights the importance of fostering responsible AI literacy in secondary education, drawing on personal observations from contrasting technology-use environments in UK and US schools. The author critiques simplistic regulatory approaches, emphasizing that unchecked AI use can amplify misinformation, limit student creativity, and impair critical thinking. The paper advocates for comprehensive AI education through curriculum enhancements and targeted training. Such measures aim to help students understand AI’s capabilities, limitations, and ethical implications, and to encourage informed, balanced engagement with the technology.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lexington High School AI in Education Policy: A Proposal</title>
<link href="https://hdl.handle.net/1721.1/163134" rel="alternate"/>
<author>
<name>Tang, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/163134</id>
<updated>2025-10-11T06:55:43Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Lexington High School AI in Education Policy: A Proposal
Tang, Ryan
This AI policy includes guidance on appropriate usage of genAI, inappropriate usage of genAI, consequences of inappropriately using genAI, how to cite genAI, and data privacy. Currently, Lexington High School in Massachusetts lacks a clear policy on genAI for students. This policy fills the gap to encourage consistent application of usage and discipline concerning genAI in education. Overall, the goal of this policy is to promote student-teacher understanding of how to use genAI appropriately as an educational tool in the classroom.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Teaching AI with Humanity: Breaking Barriers. A Journey into Ethical and Inclusive Artificial Intelligence through Hands-On, Student-Centered Learning</title>
<link href="https://hdl.handle.net/1721.1/163133" rel="alternate"/>
<author>
<name>Pieraccini, Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/163133</id>
<updated>2025-10-11T06:55:26Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Teaching AI with Humanity: Breaking Barriers. A Journey into Ethical and Inclusive Artificial Intelligence through Hands-On, Student-Centered Learning
Pieraccini, Daniela
By integrating supervised machine learning, block-based programming, and inclusive design, the initiative empowered students to understand, critique, and apply AI with ethical sensitivity and practical relevance.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future architecture: Use of Artificial Intelligence in sustainable construction projects</title>
<link href="https://hdl.handle.net/1721.1/163132" rel="alternate"/>
<author>
<name>Leitão, André Cunha</name>
</author>
<author>
<name>Amaral, Carl Erhard Dolder</name>
</author>
<author>
<name>Henriques, Pedro Laudisio</name>
</author>
<id>https://hdl.handle.net/1721.1/163132</id>
<updated>2025-10-11T06:55:22Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Future architecture: Use of Artificial Intelligence in sustainable construction projects
Leitão, André Cunha; Amaral, Carl Erhard Dolder; Henriques, Pedro Laudisio
The development of an app provided hands-on experience with an innovative digital solution for sustainability and confirmed that the study's main contribution is to help society take actions that minimize damage to the environment.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Socratic AI Tutoring in Primary School Mathematics: A Case Study on the Development of Problem-Solving and Digital Competence According to DigComp 2.2</title>
<link href="https://hdl.handle.net/1721.1/163131" rel="alternate"/>
<author>
<name>Avella, Barbara</name>
</author>
<id>https://hdl.handle.net/1721.1/163131</id>
<updated>2025-10-11T06:55:24Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Socratic AI Tutoring in Primary School Mathematics: A Case Study on the Development of Problem-Solving and Digital Competence According to DigComp 2.2
Avella, Barbara
This study investigates the effectiveness of a digital Socratic tutoring approach in enhancing mathematical problem-solving skills in primary school students, using the European DigComp 2.2 framework as a reference. Through a qualitative case study conducted in a fifth-grade classroom, the interaction between students and an AI tutor during math activities was analyzed. The preliminary results are promising. They show that Socratic questioning fosters both mathematical and digital competencies, aligning with recent research on AI-mediated reflective learning.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BINGO!: A Novel Neural Network Pruning Mechanism to Allow For Physical Computing in AI Education</title>
<link href="https://hdl.handle.net/1721.1/163130" rel="alternate"/>
<author>
<name>Panangat, Aditya</name>
</author>
<id>https://hdl.handle.net/1721.1/163130</id>
<updated>2025-10-11T06:55:32Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">BINGO!: A Novel Neural Network Pruning Mechanism to Allow For Physical Computing in AI Education
Panangat, Aditya
BINGO, during the training pass, studies specific subsets of a neural network one at a time to gauge how significant of a role each weight plays in contributing to a network’s accuracy. By the time training is done, BINGO generates a significance score for each weight, allowing for insignificant weights to be pruned in one shot. BINGO provides an accuracy-preserving pruning technique that is less computationally intensive than current methods, allowing for a world where students can learn about AI through engaging physical computing activities.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Markov Chain Tool for Grade 6-12 Learners to Explore Generative AI</title>
<link href="https://hdl.handle.net/1721.1/163129" rel="alternate"/>
<author>
<name>Ellis, Rebecca</name>
</author>
<author>
<name>Chao, Jie</name>
</author>
<author>
<name>Rosé, Carolyn</name>
</author>
<author>
<name>Jiang, Shiyan</name>
</author>
<id>https://hdl.handle.net/1721.1/163129</id>
<updated>2025-10-11T06:55:27Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">A Markov Chain Tool for Grade 6-12 Learners to Explore Generative AI
Ellis, Rebecca; Chao, Jie; Rosé, Carolyn; Jiang, Shiyan
The AI Education Across the Curriculum Project (also known as StoryQII) has developed a digital tool to support students to represent, inspect, and generate text using a Markov chain. The tool is designed for use in an English Language Arts (ELA) class and does not require coding or statistics. Using this tool and its accompanying curriculum module, secondary students learn the basics of text generation and how it relates to the core ELA concepts of voice, authorship, and creativity. This tool has been tested in ninth-, eleventh-, and twelfth-grade ELA classes with promising results for teaching students about generative AI.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Generative AI on Middle and High School Students’ Willingness to Engage with Teachers in Class</title>
<link href="https://hdl.handle.net/1721.1/163128" rel="alternate"/>
<author>
<name>Cai, Riley</name>
</author>
<id>https://hdl.handle.net/1721.1/163128</id>
<updated>2025-10-11T06:55:42Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">The Impact of Generative AI on Middle and High School Students’ Willingness to Engage with Teachers in Class
Cai, Riley
This paper focuses on whether preparing questions with generative AI increases students’ willingness to ask questions in class. […] Results indicate AI functions as a pre-questioning helper that reduces anxiety and strengthens student-teacher interaction rather than replacing it.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GreenMiles: Utilizing Deep Learning to Analyze Vehicular Carbon Emission Trends</title>
<link href="https://hdl.handle.net/1721.1/163127" rel="alternate"/>
<author>
<name>Arunkumar, Rohan</name>
</author>
<id>https://hdl.handle.net/1721.1/163127</id>
<updated>2025-10-11T06:55:33Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">GreenMiles: Utilizing Deep Learning to Analyze Vehicular Carbon Emission Trends
Arunkumar, Rohan
I developed a set of deep learning models that analyze patterns in vehicle-related carbon emissions using an official dataset from the Canadian government. The models identified which vehicle settings (such as fuel type and transmission) are most strongly associated with high emissions. After testing, the best-performing model was deployed on a user-friendly web application, where consumers can input different vehicle parameters and receive predicted CO₂ emission levels.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunities, Issues, and Challenges for Generative AI in Fostering Equitable Pathways in Computing Education</title>
<link href="https://hdl.handle.net/1721.1/163126" rel="alternate"/>
<author>
<name>Breazeal, Cynthia</name>
</author>
<author>
<name>Rai, Arun</name>
</author>
<author>
<name>Ramesh, Balasubramaniam</name>
</author>
<author>
<name>Chen, Liwei</name>
</author>
<author>
<name>Long, Yuan</name>
</author>
<author>
<name>Aria, Andrea</name>
</author>
<author>
<name>Loi, Hao</name>
</author>
<author>
<name>Torralba, Antonio</name>
</author>
<author>
<name>Bernstein, Jeremy</name>
</author>
<author>
<name>Reich, Justin</name>
</author>
<author>
<name>Klopfer, Eric</name>
</author>
<author>
<name>Abelson, Hal</name>
</author>
<author>
<name>Westerman, George</name>
</author>
<author>
<name>Bosch, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/163126</id>
<updated>2025-10-10T12:36:12Z</updated>
<published>2024-08-28T00:00:00Z</published>
<summary type="text">Opportunities, Issues, and Challenges for Generative AI in Fostering Equitable Pathways in Computing Education
Breazeal, Cynthia; Rai, Arun; Ramesh, Balasubramaniam; Chen, Liwei; Long, Yuan; Aria, Andrea; Loi, Hao; Torralba, Antonio; Bernstein, Jeremy; Reich, Justin; Klopfer, Eric; Abelson, Hal; Westerman, George; Bosch, Christina
The objective of this whitepaper is to identify opportunities, issues, and challenges facing equitable education pathways for careers in computing and the particular role that generative artificial intelligence (AI) could play to support postsecondary education at minority-serving institutions (MSIs) and community colleges (CCs).
</summary>
<dc:date>2024-08-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Listening with Language Models: Using LLMs to Collect and Interpret Classroom Feedback</title>
<link href="https://hdl.handle.net/1721.1/163125" rel="alternate"/>
<author>
<name>Maram, Sai Siddartha</name>
</author>
<author>
<name>Zaman, Ulia</name>
</author>
<author>
<name>El-Nasr, Magy Seif</name>
</author>
<id>https://hdl.handle.net/1721.1/163125</id>
<updated>2025-10-11T06:55:23Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Listening with Language Models: Using LLMs to Collect and Interpret Classroom Feedback
Maram, Sai Siddartha; Zaman, Ulia; El-Nasr, Magy Seif
Our findings suggest that LLM-based feedback systems offer richer insights, greater contextual relevance, and higher engagement compared to standard survey tools. Instructors valued the system’s adaptability, specificity, and ability to support mid-course adjustments, while students appreciated the conversational format and opportunity for elaboration.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supernote: Crowdsource the Best Ideas and Democratize Class Notes</title>
<link href="https://hdl.handle.net/1721.1/163124" rel="alternate"/>
<author>
<name>Mandloi, Naman</name>
</author>
<author>
<name>Kopikar, Moksh</name>
</author>
<id>https://hdl.handle.net/1721.1/163124</id>
<updated>2025-10-11T06:55:25Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Supernote: Crowdsource the Best Ideas and Democratize Class Notes
Mandloi, Naman; Kopikar, Moksh
Recognizing that students often struggle to capture all the information explained by the teacher for various reasons, such as reading and writing disabilities, language barriers, absences, and difficulty listening and taking notes simultaneously, Supernote […] utilizes a Large Language Model (LLM) architecture to compile incomplete notes from multiple students, compare captured points, and synthesize comprehensive notes that include missing information from the teacher’s lesson.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CliniKiosk: An Innovative Technology to Expand Healthcare Access</title>
<link href="https://hdl.handle.net/1721.1/163123" rel="alternate"/>
<author>
<name>Adhikari, Ela</name>
</author>
<id>https://hdl.handle.net/1721.1/163123</id>
<updated>2025-10-11T06:55:28Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">CliniKiosk: An Innovative Technology to Expand Healthcare Access
Adhikari, Ela
This investigation proposes CliniKiosk (Figure 1), an Artificial intelligence (AI)-powered digital health kiosk designed to deliver real-time, evidence-based, multilingual, empathetic, and personalized health assessments that are adaptable to culturally diverse communities. In contrast to traditional health chatbots, it dynamically adapts to users by analyzing demographics, symptoms, and medical history to provide both empathetic and personalized health recommendations.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonlinear conjugate gradient methods: worst-case convergence rates via computer-assisted analyses</title>
<link href="https://hdl.handle.net/1721.1/163122" rel="alternate"/>
<author>
<name>Das Gupta, Shuvomoy</name>
</author>
<author>
<name>Freund, Robert M.</name>
</author>
<author>
<name>Sun, Xu A.</name>
</author>
<author>
<name>Taylor, Adrien</name>
</author>
<id>https://hdl.handle.net/1721.1/163122</id>
<updated>2026-03-08T03:26:21Z</updated>
<published>2024-08-22T00:00:00Z</published>
<summary type="text">Nonlinear conjugate gradient methods: worst-case convergence rates via computer-assisted analyses
Das Gupta, Shuvomoy; Freund, Robert M.; Sun, Xu A.; Taylor, Adrien
We propose a computer-assisted approach to the analysis of the worst-case convergence of nonlinear conjugate gradient methods (NCGMs). Those methods are known for their generally good empirical performances for large-scale optimization, while having relatively incomplete analyses. Using our computer-assisted approach, we establish novel complexity bounds for the Polak-Ribière-Polyak (PRP) and the Fletcher-Reeves (FR) NCGMs for smooth strongly convex minimization. In particular, we construct mathematical proofs that establish the first non-asymptotic convergence bound for FR (which is historically the first developed NCGM), and a much improved non-asymptotic convergence bound for PRP. Additionally, we provide simple adversarial examples on which these methods do not perform better than gradient descent with exact line search, leaving very little room for improvements on the same class of problems.
</summary>
<dc:date>2024-08-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>A multi-modal network equilibrium model considering captive travelers and mode correlation</title>
<link href="https://hdl.handle.net/1721.1/163121" rel="alternate"/>
<author>
<name>Wang, Guangchao</name>
</author>
<author>
<name>Song, Defeng</name>
</author>
<author>
<name>Qi, Hang</name>
</author>
<author>
<name>Zhou, Juanhua</name>
</author>
<author>
<name>He, Zhengbing</name>
</author>
<id>https://hdl.handle.net/1721.1/163121</id>
<updated>2026-03-08T03:26:08Z</updated>
<published>2024-04-08T00:00:00Z</published>
<summary type="text">A multi-modal network equilibrium model considering captive travelers and mode correlation
Wang, Guangchao; Song, Defeng; Qi, Hang; Zhou, Juanhua; He, Zhengbing
In making daily commuting trips, some travelers, called captive travelers, rely on a single transport mode due to a lack of access or affordability of other transport modes. To account for the effect of such captive travelers on network equilibrium performance, this paper proposes a multi-modal network equilibrium (MMNE) model that accounts for captive travelers and the correlations between modes and between routes. First, a hybrid mode choice model is developed by integrating the dogit and nested logit (NL) models. The hybrid dogit–NL (DNL) model has smaller direct and cross elasticities than the NL model; it alleviates the independence-from-irrelevant-alternatives property and takes the dogit and NL modal splits as bounds. Second, the path-size logit (PSL) model is adopted for predicting travelers’ route choices with overlapping routes. The DNL–PSL MMNE model is formulated as a mathematical programming problem that admits an equivalent and unique solution. Then, a partial linearization algorithm with the Barzilai–Borwein (BB) step sizes is developed. The numerical results reveal that captive travelers lead to lower sensitivity toward transport policies and may cause higher network total travel time, while the perception of mode similarity may impair the overall attractiveness of modes with a high degree of similarity. The observations indicate that to promote green transportation, policy efforts should be made to make use of or adjust the captivity structure and produce diversified perceptions of and preferences for different green transport modes. The BB step sizes are suggested for low travel demand cases when solving the combined travel choice problems. Further, extensions of the DNL model with bundle captivities are discussed. The results of the paper help improve network equilibrium prediction and support transport policymaking.
</summary>
<dc:date>2024-04-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Grid Convergence and Log-Layer Mismatch in Wall Modeled Large Eddy Simulations of Geophysical Flows Over Rough Surfaces and Canopies</title>
<link href="https://hdl.handle.net/1721.1/163120" rel="alternate"/>
<author>
<name>Shin, E. Y.</name>
</author>
<author>
<name>Yang, X. I. A.</name>
</author>
<author>
<name>Howland, M. F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163120</id>
<updated>2026-03-08T03:26:14Z</updated>
<published>2025-08-28T00:00:00Z</published>
<summary type="text">Addressing Grid Convergence and Log-Layer Mismatch in Wall Modeled Large Eddy Simulations of Geophysical Flows Over Rough Surfaces and Canopies
Shin, E. Y.; Yang, X. I. A.; Howland, M. F.
Wall modeled large eddy simulations are the primary scale-resolving method used to investigate boundary layer meteorology. Wall models are used to parameterize momentum, heat, and other exchanges at the surface to achieve computationally efficient predictions given the very high Reynolds numbers of planetary boundary layers and the importance of small-scales near the surface. However, wall modeled large eddy simulations can be contaminated by log-layer mismatch, where the prediction of wall shear stress (friction velocity) deviates from the intended value. It is not clear how this log-layer mismatch in boundary layers depends on parameters that represent unresolved roughness elements and on the computational setup. This study elucidates how log-layer mismatch depends on the roughness length, displacement distance, matching velocity filtering strength, and vertical grid resolution using 135 channel flow, 24 conventionally neutral boundary layer, and 12 truly neutral boundary layer wall modeled large eddy simulations. The results demonstrate two sources of log-layer mismatch. First, a spurious correlation between the friction velocity and the fluctuation of the matching velocity causes log-layer mismatch that increases with roughness length, displacement distance, and increasing grid resolution. This log-layer mismatch can be eliminated by filtering the matching velocity, but the filter timescale necessary to eliminate the error depends on the roughness parameters and grid resolution. Second, an additional source of log-layer mismatch is identified, depending on the displacement distance. This mechanism of log-layer mismatch is not alleviated by filtering the matching velocity. An analytical model of this log-layer mismatch mechanism is derived and validated against the large eddy simulations. The results demonstrate that the analytical model is able to predict the magnitude of this log-layer mismatch based on a priori information about the simulation to within the uncertainty of the von Kármán constant.
</summary>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective last-mile delivery using reinforcement learning and social media-based traffic prediction in underdeveloped megacities</title>
<link href="https://hdl.handle.net/1721.1/163119" rel="alternate"/>
<author>
<name>Rabelo, Luis</name>
</author>
<author>
<name>Rincón-Guio, Cristian</name>
</author>
<author>
<name>Laynes, Valeria</name>
</author>
<author>
<name>Gutierrez-Franco, Edgar</name>
</author>
<author>
<name>Bhat, Vasanth</name>
</author>
<author>
<name>Zamora-Aguas, Juan</name>
</author>
<author>
<name>Elkamel, Marwen</name>
</author>
<id>https://hdl.handle.net/1721.1/163119</id>
<updated>2026-03-08T03:26:12Z</updated>
<published>2025-08-17T00:00:00Z</published>
<summary type="text">Effective last-mile delivery using reinforcement learning and social media-based traffic prediction in underdeveloped megacities
Rabelo, Luis; Rincón-Guio, Cristian; Laynes, Valeria; Gutierrez-Franco, Edgar; Bhat, Vasanth; Zamora-Aguas, Juan; Elkamel, Marwen
This paper presents a framework for effective last-mile delivery in underdeveloped megacities by combining social media, machine learning, and reinforcement learning. Leveraging a Graph Convolutional Networks and a Long Short-Term Memory model for traffic prediction, the framework incorporates multimodal data sources, such as social media sentiment analysis, to provide real-time insights into traffic dynamics. By framing the delivery problem as a Markov Decision Process, reinforcement learning dynamically adapts routing decisions to obtain delivery efficiency, reduce delays, and minimize fuel consumption. A case study in Bogotá demonstrates the framework’s effectiveness in mitigating urban traffic challenges. This work highlights the transformative potential of integrating adaptive learning technologies to address urban logistics’ environmental, economic, and operational complexities. Future research explores advanced methodologies, including multi-agent systems and transformer-based architectures, to further enhance scalability and adaptability in dynamic urban environments.
</summary>
<dc:date>2025-08-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatiotemporally constrained 3D reconstruction from biplanar digital subtraction angiography</title>
<link href="https://hdl.handle.net/1721.1/163118" rel="alternate"/>
<author>
<name>Frisken, Sarah</name>
</author>
<author>
<name>Gopalakrishnan, Vivek</name>
</author>
<author>
<name>Chlorogiannis, David D.</name>
</author>
<author>
<name>Haouchine, Nazim</name>
</author>
<author>
<name>Cafaro, Alexandre</name>
</author>
<author>
<name>Golby, Alexandra J.</name>
</author>
<author>
<name>Wells III, William M.</name>
</author>
<author>
<name>Du, Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/163118</id>
<updated>2025-10-10T03:08:25Z</updated>
<published>2025-06-01T00:00:00Z</published>
<summary type="text">Spatiotemporally constrained 3D reconstruction from biplanar digital subtraction angiography
Frisken, Sarah; Gopalakrishnan, Vivek; Chlorogiannis, David D.; Haouchine, Nazim; Cafaro, Alexandre; Golby, Alexandra J.; Wells III, William M.; Du, Rose
Purpose Our goal is to reconstruct 3D cerebral vessels from two 2D digital subtraction angiography (DSA) images acquired using a biplane scanner. This could provide intraoperative 3D imaging with 2–5× the spatial and 20× the temporal resolution of 3D magnetic resonance angiography, computed tomography angiography (CTA), or rotational DSA. Because many interventional radiology suites have biplane scanners, our method could be easily integrated into clinical workflows. Methods We present a constrained 3D reconstruction method that utilizes vessel centerlines, radii, and the flow of contrast agent through vessels from DSA. The reconstructed volume samples ‘vesselness’ at each voxel, i.e., its probability of containing a vessel. We present evaluation metrics which we used to optimize reconstruction parameters and evaluate our method on synthetic data. We provide preliminary results on clinical data. To handle clinical data, we developed a software tool for extracting vessel centerlines, radii, and contrast arrival times from clinical DSA. We provide an automated method for registering DSA to CTA which allows us to compare reconstructed vessels with vessels extracted from CTA. Results Our method reduced reconstruction artifacts in vesselness volumes for both synthetic and clinical data. In synthetic DSA, where 3D ground-truth vessel centerlines are available, our constrained reconstruction method improved accuracy, selectivity, and Dice scores with two views compared to existing sparse reconstruction methods with up to 16 views. Conclusion Incorporating additional constraints into 3D reconstruction can successfully reduce artifacts introduced when a complex 3D structure like the brain vasculature is reconstructed from a small number of 2D views.
</summary>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear Ship Safety Handbook</title>
<link href="https://hdl.handle.net/1721.1/163117" rel="alternate"/>
<author>
<name>Valiaveedu, Anthony</name>
</author>
<author>
<name>Edmonds, Nat</name>
</author>
<author>
<name>Izurieta, Jose</name>
</author>
<id>https://hdl.handle.net/1721.1/163117</id>
<updated>2026-01-15T13:47:50Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Nuclear Ship Safety Handbook
Valiaveedu, Anthony; Edmonds, Nat; Izurieta, Jose
At present, no clear, unified public document exists on the incorporation of design safety for nuclear civilian ships. Historically, research was developed in this area amid the political momentum of the “Atoms for Peace” era. More recently, however, the only development has been through standards institutions related to Floating Nuclear Power Plants (commonly known as FLOPPS) and by the Russian Federation through its nuclear icebreaker program. This paper combines that research and those standards with operational experience from civilian maritime nuclear operations to provide unique insights into potential issues and resolutions in the design efficacy of maritime nuclear operations. The goal is therefore to provide a strong basis for initial safety in key areas that will require nuclear and maritime regulatory research and development in the coming years to prepare the maritime industry for nuclear propulsion. The paper is divided into chapters covering the overlapping nuclear/maritime safety design decisions that engineers will encounter. Chapter 1 establishes the principles and philosophy behind the safety discussion for nuclear maritime operations and discusses key topics that relate to the overall ship design. Chapter 2 provides design details on the reactor compartment and other considerations when designing the reactor compartment. Chapter 3 describes the various hazards the reactor plant should be resilient against and avenues for establishing resiliency. Chapter 4 discusses the propulsion system and key considerations when evaluating different propulsion designs. Chapter 5 provides emergency power considerations for design determinations. Chapter 6 provides an event tree analysis of the major initiating events when operating a nuclear ship. Chapter 7 outlines port operating procedures, including avenues for establishing porting requirements for nuclear ships.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the Functionality of Immunoisolated Human SC‐βeta Cell Clusters through Prior Resizing</title>
<link href="https://hdl.handle.net/1721.1/163116" rel="alternate"/>
<author>
<name>Bochenek, Matthew A</name>
</author>
<author>
<name>Walters, Ben</name>
</author>
<author>
<name>Zhang, Jingping</name>
</author>
<author>
<name>Fenton, Owen S</name>
</author>
<author>
<name>Facklam, Amanda</name>
</author>
<author>
<name>Kroneková, Zuzana</name>
</author>
<author>
<name>Pelach, Michal</name>
</author>
<author>
<name>Engquist, Elise N</name>
</author>
<author>
<name>Leite, Nayara C</name>
</author>
<author>
<name>Morgart, Alex</name>
</author>
<author>
<name>Lacík, Igor</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Anderson, Daniel G</name>
</author>
<id>https://hdl.handle.net/1721.1/163116</id>
<updated>2026-03-08T03:27:07Z</updated>
<published>2024-01-11T00:00:00Z</published>
<summary type="text">Enhancing the Functionality of Immunoisolated Human SC‐β Cell Clusters through Prior Resizing
Bochenek, Matthew A; Walters, Ben; Zhang, Jingping; Fenton, Owen S; Facklam, Amanda; Kroneková, Zuzana; Pelach, Michal; Engquist, Elise N; Leite, Nayara C; Morgart, Alex; Lacík, Igor; Langer, Robert; Anderson, Daniel G
The transplantation of immunoisolated stem cell derived beta cell clusters (SC‐β) has the potential to restore physiological glycemic control in patients with type I diabetes. This strategy is attractive as it uses a renewable β‐cell source without the need for systemic immune suppression. SC‐β cells have been shown to reverse diabetes in immune compromised mice when transplanted as ≈300 µm diameter clusters into sites where they can become revascularized. However, immunoisolated SC‐β clusters are not directly revascularized and rely on slower diffusion of nutrients through a membrane. It is hypothesized that smaller SC‐β cell clusters (≈150 µm diameter), more similar to islets, will perform better within immunoisolation devices due to enhanced mass transport. To test this, SC‐β cells are resized into small clusters, encapsulated in alginate spheres, and coated with a biocompatible A10 polycation coating that resists fibrosis. After transplantation into diabetic immune competent C57BL/6 mice, the “resized” SC‐β cells plus the A10 biocompatible polycation coating induced long‐term euglycemia in the mice (6 months). After retrieval, the resized A10 SC‐β cells exhibited the least amount of fibrosis and enhanced markers of β‐cell maturation. The utilization of small SC‐β cell clusters within immunoprotection devices may improve clinical translation in the future.
</summary>
<dc:date>2024-01-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drinkable in situ-forming tough hydrogels for gastrointestinal therapeutics</title>
<link href="https://hdl.handle.net/1721.1/163115" rel="alternate"/>
<author>
<name>Liu, Gary W</name>
</author>
<author>
<name>Pickett, Matthew J</name>
</author>
<author>
<name>Kuosmanen, Johannes LP</name>
</author>
<author>
<name>Ishida, Keiko</name>
</author>
<author>
<name>Madani, Wiam AM</name>
</author>
<author>
<name>White, Georgia N</name>
</author>
<author>
<name>Jenkins, Joshua</name>
</author>
<author>
<name>Park, Sanghyun</name>
</author>
<author>
<name>Feig, Vivian R</name>
</author>
<author>
<name>Jimenez, Miguel</name>
</author>
<author>
<name>Karavasili, Christina</name>
</author>
<author>
<name>Lal, Nikhil B</name>
</author>
<author>
<name>Murphy, Matt</name>
</author>
<author>
<name>Lopes, Aaron</name>
</author>
<author>
<name>Morimoto, Joshua</name>
</author>
<author>
<name>Fitzgerald, Nina</name>
</author>
<author>
<name>Cheah, Jaime H</name>
</author>
<author>
<name>Soule, Christian K</name>
</author>
<author>
<name>Fabian, Niora</name>
</author>
<author>
<name>Hayward, Alison</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Traverso, Giovanni</name>
</author>
<id>https://hdl.handle.net/1721.1/163115</id>
<updated>2026-03-08T03:26:57Z</updated>
<published>2024-02-27T00:00:00Z</published>
<summary type="text">Drinkable in situ-forming tough hydrogels for gastrointestinal therapeutics
Liu, Gary W; Pickett, Matthew J; Kuosmanen, Johannes LP; Ishida, Keiko; Madani, Wiam AM; White, Georgia N; Jenkins, Joshua; Park, Sanghyun; Feig, Vivian R; Jimenez, Miguel; Karavasili, Christina; Lal, Nikhil B; Murphy, Matt; Lopes, Aaron; Morimoto, Joshua; Fitzgerald, Nina; Cheah, Jaime H; Soule, Christian K; Fabian, Niora; Hayward, Alison; Langer, Robert; Traverso, Giovanni
Pills are a cornerstone of medicine but can be challenging to swallow. While liquid formulations are easier to ingest, they lack the capacity to localize therapeutics with excipients nor act as controlled release devices. Here we describe drug formulations based on liquid in situ-forming tough (LIFT) hydrogels that bridge the advantages of solid and liquid dosage forms. LIFT hydrogels form directly in the stomach through sequential ingestion of a crosslinker solution of calcium and dithiol crosslinkers, followed by a drug-containing polymer solution of alginate and four-arm poly(ethylene glycol)-maleimide. We show that LIFT hydrogels robustly form in the stomachs of live rats and pigs, and are mechanically tough, biocompatible and safely cleared after 24 h. LIFT hydrogels deliver a total drug dose comparable to unencapsulated drug in a controlled manner, and protect encapsulated therapeutic enzymes and bacteria from gastric acid-mediated deactivation. Overall, LIFT hydrogels may expand access to advanced therapeutics for patients with difficulty swallowing.
</summary>
<dc:date>2024-02-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI‐Driven Defect Engineering for Advanced Thermoelectric Materials</title>
<link href="https://hdl.handle.net/1721.1/163114" rel="alternate"/>
<author>
<name>Fu, Chu‐Liang</name>
</author>
<author>
<name>Cheng, Mouyang</name>
</author>
<author>
<name>Hung, Nguyen Tuan</name>
</author>
<author>
<name>Rha, Eunbi</name>
</author>
<author>
<name>Chen, Zhantao</name>
</author>
<author>
<name>Okabe, Ryotaro</name>
</author>
<author>
<name>Carrizales, Denisse Córdova</name>
</author>
<author>
<name>Mandal, Manasi</name>
</author>
<author>
<name>Cheng, Yongqiang</name>
</author>
<author>
<name>Li, Mingda</name>
</author>
<id>https://hdl.handle.net/1721.1/163114</id>
<updated>2026-03-08T03:26:58Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">AI‐Driven Defect Engineering for Advanced Thermoelectric Materials
Fu, Chu‐Liang; Cheng, Mouyang; Hung, Nguyen Tuan; Rha, Eunbi; Chen, Zhantao; Okabe, Ryotaro; Carrizales, Denisse Córdova; Mandal, Manasi; Cheng, Yongqiang; Li, Mingda
Thermoelectric materials offer a promising pathway to directly convert waste heat to electricity. However, achieving high performance remains challenging due to intrinsic trade-offs between electrical conductivity, the Seebeck coefficient, and thermal conductivity, which are further complicated by the presence of defects. This review explores how artificial intelligence (AI) and machine learning (ML) are transforming thermoelectric materials design. Advanced ML approaches including deep neural networks, graph-based models, and transformer architectures, integrated with high-throughput simulations and growing databases, effectively capture structure-property relationships in a complex multiscale defect space and overcome the “curse of dimensionality”. This review discusses AI-enhanced defect engineering strategies such as composition optimization, entropy and dislocation engineering, and grain boundary design, along with emerging inverse design techniques for generating materials with targeted properties. Finally, it outlines future opportunities in novel physics mechanisms and sustainability, highlighting the critical role of AI in accelerating the discovery of thermoelectric materials.
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intracellular proteomics and extracellular vesiculomics as a metric of disease recapitulation in 3D-bioprinted aortic valve arrays</title>
<link href="https://hdl.handle.net/1721.1/163113" rel="alternate"/>
<author>
<name>Clift, Cassandra L</name>
</author>
<author>
<name>Blaser, Mark C</name>
</author>
<author>
<name>Gerrits, Willem</name>
</author>
<author>
<name>Turner, Mandy E</name>
</author>
<author>
<name>Sonawane, Abhijeet</name>
</author>
<author>
<name>Pham, Tan</name>
</author>
<author>
<name>Andresen, Jason L</name>
</author>
<author>
<name>Fenton, Owen S</name>
</author>
<author>
<name>Grolman, Joshua M</name>
</author>
<author>
<name>Campedelli, Alesandra</name>
</author>
<author>
<name>Buffolo, Fabrizio</name>
</author>
<author>
<name>Schoen, Frederick J</name>
</author>
<author>
<name>Hjortnaes, Jesper</name>
</author>
<author>
<name>Muehlschlegel, Jochen D</name>
</author>
<author>
<name>Mooney, David J</name>
</author>
<author>
<name>Aikawa, Masanori</name>
</author>
<author>
<name>Singh, Sasha A</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Aikawa, Elena</name>
</author>
<id>https://hdl.handle.net/1721.1/163113</id>
<updated>2026-03-08T03:26:55Z</updated>
<published>2024-02-28T00:00:00Z</published>
<summary type="text">Intracellular proteomics and extracellular vesiculomics as a metric of disease recapitulation in 3D-bioprinted aortic valve arrays
Clift, Cassandra L; Blaser, Mark C; Gerrits, Willem; Turner, Mandy E; Sonawane, Abhijeet; Pham, Tan; Andresen, Jason L; Fenton, Owen S; Grolman, Joshua M; Campedelli, Alesandra; Buffolo, Fabrizio; Schoen, Frederick J; Hjortnaes, Jesper; Muehlschlegel, Jochen D; Mooney, David J; Aikawa, Masanori; Singh, Sasha A; Langer, Robert; Aikawa, Elena
In calcific aortic valve disease (CAVD), mechanosensitive valvular cells respond to fibrosis- and calcification-induced tissue stiffening, further driving pathophysiology. No pharmacotherapeutics are available to treat CAVD because of the paucity of (i) appropriate experimental models that recapitulate this complex environment and (ii) benchmarking novel engineered aortic valve (AV)–model performance. We established a biomaterial-based CAVD model mimicking the biomechanics of the human AV disease-prone fibrosa layer, three-dimensional (3D)–bioprinted into 96-well arrays. Liquid chromatography–tandem mass spectrometry analyses probed the cellular proteome and vesiculome to compare the 3D-bioprinted model versus traditional 2D monoculture, against human CAVD tissue. The 3D-bioprinted model highly recapitulated the CAVD cellular proteome (94% versus 70% of 2D proteins). Integration of cellular and vesicular datasets identified known and unknown proteins ubiquitous to AV calcification. This study explores how 2D versus 3D-bioengineered systems recapitulate unique aspects of human disease, positions multiomics as a technique for the evaluation of high throughput–based bioengineered model systems, and potentiates future drug discovery.
</summary>
<dc:date>2024-02-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dual‐Wavelength Vat Photopolymerization With Dissolvable, Recyclable Support Structures</title>
<link href="https://hdl.handle.net/1721.1/163112" rel="alternate"/>
<author>
<name>Diaco, Nicholas S</name>
</author>
<author>
<name>Thrasher, Carl J</name>
</author>
<author>
<name>Hughes, Max M</name>
</author>
<author>
<name>Zhou, Kevin A</name>
</author>
<author>
<name>Durso, Michael N</name>
</author>
<author>
<name>Yap, Saechow</name>
</author>
<author>
<name>Macfarlane, Robert J</name>
</author>
<author>
<name>Hart, A John</name>
</author>
<id>https://hdl.handle.net/1721.1/163112</id>
<updated>2026-03-08T03:26:56Z</updated>
<published>2025-06-02T00:00:00Z</published>
<summary type="text">Dual‐Wavelength Vat Photopolymerization With Dissolvable, Recyclable Support Structures
Diaco, Nicholas S; Thrasher, Carl J; Hughes, Max M; Zhou, Kevin A; Durso, Michael N; Yap, Saechow; Macfarlane, Robert J; Hart, A John
Vat photopolymerization (VP) additive manufacturing (AM) is valued for its speed, precision, and material versatility. However, its requirement for support structures limits printable geometries, complicates post-processing, and generates non-recyclable waste when typical thermoset resins are used. Here, a wavelength-selective resin system for VP that enables single-vat, multi-material printing with dissolvable supports is introduced. Exposure to visible light produces a rigid, dissolvable thermoplastic, while UV light forms a crosslinked thermoset resistant to dissolution. This process, termed selective solubility vat photopolymerization (SSVP), eliminates the geometric constraints imposed by conventional VP methods, facilitating the creation of complex objects with supports that are removable using green and food-safe solvents such as D-limonene and ethyl acetate, as well as mineral oil. Post-print heat treatment tunes crosslink density and solubility. Dissolved supports can be recycled into fresh resin and reprinted without mechanical property loss, offering a practical, scalable route to reducing waste. Additionally, SSVP provides spatial control of dissolution kinetics, enabling programmable 3D dissolution profiles. By enabling the integration of dissolvable and insoluble regions in a single print, SSVP sets the stage for fully automated and more sustainable AM workflows.
</summary>
<dc:date>2025-06-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recent advances in nanoparticulate RNA delivery systems</title>
<link href="https://hdl.handle.net/1721.1/163111" rel="alternate"/>
<author>
<name>Witten, Jacob</name>
</author>
<author>
<name>Hu, Yizong</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Anderson, Daniel G</name>
</author>
<id>https://hdl.handle.net/1721.1/163111</id>
<updated>2026-03-08T03:26:52Z</updated>
<published>2024-03-04T00:00:00Z</published>
<summary type="text">Recent advances in nanoparticulate RNA delivery systems
Witten, Jacob; Hu, Yizong; Langer, Robert; Anderson, Daniel G
Nanoparticle-based RNA delivery has shown great progress in recent years with the approval of two mRNA vaccines for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and a liver-targeted siRNA therapy. Here, we discuss the preclinical and clinical advancement of new generations of RNA delivery therapies along multiple axes. Improvements in cargo design such as RNA circularization and data-driven untranslated region optimization can drive better mRNA expression. New materials discovery research has driven improved delivery to extrahepatic targets such as the lung and splenic immune cells, which could lead to pulmonary gene therapy and better cancer vaccines, respectively. Other organs and even specific cell types can be targeted for delivery via conjugation of small molecule ligands, antibodies, or peptides to RNA delivery nanoparticles. Moreover, the immune response to any RNA delivery nanoparticle plays a crucial role in determining efficacy. Targeting increased immunogenicity without induction of reactogenic side effects is crucial for vaccines, while minimization of immune response is important for gene therapies. New developments have addressed each of these priorities. Last, we discuss the range of RNA delivery clinical trials targeting diverse organs, cell types, and diseases and suggest some key advances that may play a role in the next wave of therapies.
</summary>
<dc:date>2024-03-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-patient medical record and mRNA therapeutics using intradermal microneedles</title>
<link href="https://hdl.handle.net/1721.1/163110" rel="alternate"/>
<author>
<name>Han, Jooli</name>
</author>
<author>
<name>Kanelli, Maria</name>
</author>
<author>
<name>Liu, Yang</name>
</author>
<author>
<name>Daristotle, John L</name>
</author>
<author>
<name>Pardeshi, Apurva</name>
</author>
<author>
<name>Forster, Timothy A</name>
</author>
<author>
<name>Karchin, Ari</name>
</author>
<author>
<name>Folk, Brandon</name>
</author>
<author>
<name>Murmann, Lukas</name>
</author>
<author>
<name>Tostanoski, Lisa H</name>
</author>
<author>
<name>Carrasco, Sebastian E</name>
</author>
<author>
<name>Alsaiari, Shahad K</name>
</author>
<author>
<name>Wang, Erika Yan</name>
</author>
<author>
<name>Tran, Khanh</name>
</author>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Eshaghi, Behnaz</name>
</author>
<author>
<name>Levy, Lauren</name>
</author>
<author>
<name>Pyon, Sydney</name>
</author>
<author>
<name>Sloane, Charles</name>
</author>
<author>
<name>Lin, Stacey Qiaohui</name>
</author>
<author>
<name>Lau, Alicia</name>
</author>
<author>
<name>Perkinson, Collin F</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<author>
<name>Barouch, Dan H</name>
</author>
<author>
<name>Durand, Frédo</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163110</id>
<updated>2026-03-08T03:26:41Z</updated>
<published>2025-02-24T00:00:00Z</published>
<summary type="text">On-patient medical record and mRNA therapeutics using intradermal microneedles
Han, Jooli; Kanelli, Maria; Liu, Yang; Daristotle, John L; Pardeshi, Apurva; Forster, Timothy A; Karchin, Ari; Folk, Brandon; Murmann, Lukas; Tostanoski, Lisa H; Carrasco, Sebastian E; Alsaiari, Shahad K; Wang, Erika Yan; Tran, Khanh; Zhang, Linzixuan; Eshaghi, Behnaz; Levy, Lauren; Pyon, Sydney; Sloane, Charles; Lin, Stacey Qiaohui; Lau, Alicia; Perkinson, Collin F; Bawendi, Moungi G; Barouch, Dan H; Durand, Frédo; Langer, Robert; Jaklenec, Ana
Medical interventions often require timed series of doses, thus necessitating accurate medical record-keeping. In many global settings, these records are unreliable or unavailable at the point of care, leading to less effective treatments or disease prevention. Here we present an invisible-to-the-naked-eye on-patient medical record-keeping technology that accurately stores medical information in the patient skin as part of microneedles that are used for intradermal therapeutics. We optimize the microneedle design for both a reliable delivery of messenger RNA (mRNA) therapeutics and the near-infrared fluorescent microparticles that encode the on-patient medical record-keeping. Deep learning-based image processing enables encoding and decoding of the information with excellent temporal and spatial robustness. Long-term studies in a swine model demonstrate the safety, efficacy and reliability of this approach for the co-delivery of on-patient medical record-keeping and the mRNA vaccine encoding severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This technology could help healthcare workers make informed decisions in circumstances where reliable record-keeping is unavailable, thus contributing to global healthcare equity.
</summary>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Insights Into Summertime Surface Ozone Formation From Diurnal Variations in Formaldehyde and Nitrogen Dioxide Along a Transect Through New York City</title>
<link href="https://hdl.handle.net/1721.1/163109" rel="alternate"/>
<author>
<name>Tao, Madankui</name>
</author>
<author>
<name>Fiore, Arlene M</name>
</author>
<author>
<name>Karambelas, Alexandra</name>
</author>
<author>
<name>Miller, Paul J</name>
</author>
<author>
<name>Valin, Lukas C</name>
</author>
<author>
<name>Judd, Laura M</name>
</author>
<author>
<name>Tzortziou, Maria</name>
</author>
<author>
<name>Whitehill, Andrew</name>
</author>
<author>
<name>Teora, Amanda</name>
</author>
<author>
<name>Tian, Yuhong</name>
</author>
<author>
<name>Civerolo, Kevin L</name>
</author>
<author>
<name>Tong, Daniel</name>
</author>
<author>
<name>Ma, Siqi</name>
</author>
<author>
<name>Adamo, Susana B</name>
</author>
<author>
<name>Holloway, Tracey</name>
</author>
<id>https://hdl.handle.net/1721.1/163109</id>
<updated>2026-03-08T03:27:01Z</updated>
<published>2025-05-12T00:00:00Z</published>
<summary type="text">Insights Into Summertime Surface Ozone Formation From Diurnal Variations in Formaldehyde and Nitrogen Dioxide Along a Transect Through New York City
Tao, Madankui; Fiore, Arlene M; Karambelas, Alexandra; Miller, Paul J; Valin, Lukas C; Judd, Laura M; Tzortziou, Maria; Whitehill, Andrew; Teora, Amanda; Tian, Yuhong; Civerolo, Kevin L; Tong, Daniel; Ma, Siqi; Adamo, Susana B; Holloway, Tracey
Estimating tropospheric ozone (O3) production from observations is challenging but possible given the close coupling of O3 with formaldehyde (HCHO) and nitrogen dioxide (NO2), two remotely sensed air pollutants. The previous reliance on once-daily satellite overpasses highlights the need to study diurnal changes and surface-column relationships. Using surface observations, Pandora spectrometer retrievals, and a high-resolution (1.33 km) air quality model (WRF-CMAQ), we characterize diurnal patterns of HCHO and NO2 at seven locations along an upwind-downwind pathway through New York City during June–August 2018. Diurnal patterns of limited surface HCHO measurements suggest biogenic emission influence, while a bimodal surface NO2 pattern indicates the impact of local anthropogenic nitrogen oxides emissions. Details of these patterns vary by site: an afternoon NO2 spike at New Haven (CT) indicates traffic emissions, while a delayed daily HCHO peak at Westport (CT) relative to other sites likely reflects sea breeze dynamics. Peak column concentrations generally lag surface peaks by about four hours, occurring at 9–10 a.m. for morning NO2 (from Pandora and WRF-CMAQ) and around 4 p.m. for midday HCHO (from WRF-CMAQ). TROPOMI overpass time at 1:30 p.m. misses peak column HCHO and NO2 concentrations. A box model (F0AM) constrained with site-level observations and WRF-CMAQ fields indicates 1–9 ppb hr−1 higher noontime local O3 production rates on three sets of paired high- versus mid-to-low-O3 days. F0AM sensitivity analyses on these six days suggest a predominantly transitional O3 formation regime at urban and downwind sites, differing at some sites from the NOx-saturated regime diagnosed for summertime average conditions via the weekday-weekend effect.
</summary>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeing the Fictional</title>
<link href="https://hdl.handle.net/1721.1/163108" rel="alternate"/>
<author>
<name>Khoo, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/163108</id>
<updated>2026-03-08T03:26:59Z</updated>
<published>2025-06-03T00:00:00Z</published>
<summary type="text">Seeing the Fictional
Khoo, Justin
When we see a movie or a play, do we see the fictional entities and events depicted? On the one hand, it seems incredibly natural to think we do. For instance, it seems obvious that one thing that differentiates Smith, who watches Star Wars, from Bob, who merely reads the novelization of Star Wars, is that Smith, but not Bob, has seen Darth Vader kill Obi-Wan Kenobi. Yet, no philosophers working on fiction think this is literally true. And they have good reasons to be skeptical. For, if you have seen Darth Vader kill Obi-Wan Kenobi, then it seems to follow that Darth Vader must have killed Obi-Wan Kenobi, in which case, it follows that both were at one point living, flesh-and-blood, entities. But if Darth Vader is a flesh and blood being, then he must be spatiotemporally located, in which case, where is he? In this paper, I argue that we do in fact literally see (and hear) fictional entities when we see films. I do so in three stages. First, I argue against various error theories that attempt to account for the intuitions that we do see fictional entities in film. Then, I sketch a metaphysics of fictional entities, which vindicates our genuinely seeing them. Finally, I explore some of the interesting controversies and objections raised to this ontology of the fictional.
</summary>
<dc:date>2025-06-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Why Schonland Failed in His Search for Runaway Electrons From Thunderstorms</title>
<link href="https://hdl.handle.net/1721.1/163107" rel="alternate"/>
<author>
<name>Chilingarian, A</name>
</author>
<author>
<name>Williams, E</name>
</author>
<author>
<name>Hovsepyan, G</name>
</author>
<author>
<name>Mkrtchyan, H</name>
</author>
<id>https://hdl.handle.net/1721.1/163107</id>
<updated>2026-03-08T03:26:51Z</updated>
<published>2025-05-22T00:00:00Z</published>
<summary type="text">Why Schonland Failed in His Search for Runaway Electrons From Thunderstorms
Chilingarian, A; Williams, E; Hovsepyan, G; Mkrtchyan, H
B.F.J. Schonland, advised and encouraged by C.T.R. Wilson, made two unsuccessful searches for runaway electrons from thunderstorms in the 1930s. These findings stand in marked contrast with research results over the last decade and ironically set this field of research back many decades. Schonland's lack of success is traced to gamma ray attenuation in the atmosphere above Johannesburg (1,780 m MSL) and to his restriction to nine thunderstorms.
</summary>
<dc:date>2025-05-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advanced Oral Delivery Systems for Nutraceuticals</title>
<link href="https://hdl.handle.net/1721.1/163106" rel="alternate"/>
<author>
<name>Yang, Xin</name>
</author>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Zheng, Zhiling</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163106</id>
<updated>2026-03-08T03:26:54Z</updated>
<published>2025-06-11T00:00:00Z</published>
<summary type="text">Advanced Oral Delivery Systems for Nutraceuticals
Yang, Xin; Zhang, Linzixuan; Zheng, Zhiling; Langer, Robert; Jaklenec, Ana
Oral delivery is the most preferred route for nutraceuticals due to its convenience and high patient compliance. However, bioavailability is often compromised by poor solubility, instability, and first‐pass metabolism in the gastrointestinal tract. This review examines current and emerging oral delivery platforms designed to overcome these barriers and enhance nutraceutical efficacy. Traditional carriers—proteins, lipids, and carbohydrates—highlighting their delivery mechanisms and limitations, are first explored. Advancements in material science have led to novel platforms such as biodegradable polymers, metal–organic frameworks (MOFs), metal–polyphenol networks (MPNs), and 3D printing technologies. Biodegradable polymers improve stability and enable controlled release of bioactives. MOFs offer high surface area and tunable porosity for encapsulating and protecting sensitive compounds. MPNs provide biocompatible, stimuli‐responsive systems for targeted nutrient delivery. Meanwhile, 3D printing facilitates the fabrication of personalized delivery systems with precise control over composition and release kinetics, especially when integrated with artificial intelligence (AI) for precision nutrition. By comparing traditional and next‐generation strategies, this review outlines key design principles for optimizing oral delivery systems. The transformative potential of these innovations is underscored to improve the bioavailability and therapeutic outcomes of nutraceuticals, ultimately advancing personalized and targeted nutrition solutions.
</summary>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of consolidation and plastic resistance on clays</title>
<link href="https://hdl.handle.net/1721.1/163105" rel="alternate"/>
<author>
<name>Marsal, Raúl J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163105</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1944-01-01T00:00:00Z</published>
<summary type="text">Investigation of consolidation and plastic resistance on clays
Marsal, Raúl J.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1944; Vita. Appendix contains numerous pamphlets.
</summary>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An anthropological study based upon observations of complexion and cephalic measurements of students at the Massachusetts Institute of Technology</title>
<link href="https://hdl.handle.net/1721.1/163104" rel="alternate"/>
<author>
<name>Fisk, Harry George.</name>
</author>
<author>
<name>Melluish, James George.</name>
</author>
<id>https://hdl.handle.net/1721.1/163104</id>
<updated>2025-10-10T03:05:27Z</updated>
<published>1896-01-01T00:00:00Z</published>
<summary type="text">An anthropological study based upon observations of complexion and cephalic measurements of students at the Massachusetts Institute of Technology
Fisk, Harry George.; Melluish, James George.
Thesis: B.S., Massachusetts Institute of Technology, Department of General Studies, 1896
</summary>
<dc:date>1896-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design and construction of a new density photometer</title>
<link href="https://hdl.handle.net/1721.1/163103" rel="alternate"/>
<author>
<name>Brown, Sherwood Fiske.</name>
</author>
<author>
<name>Perkins, Oliver L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163103</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1923-01-01T00:00:00Z</published>
<summary type="text">The design and construction of a new density photometer
Brown, Sherwood Fiske.; Perkins, Oliver L.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrochemical Engineering, 1923; Includes bibliographical references (leaves 17-18).
</summary>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An optical instrument for the synthesis of sound</title>
<link href="https://hdl.handle.net/1721.1/163102" rel="alternate"/>
<author>
<name>Brown, Sherwood Fiske.</name>
</author>
<id>https://hdl.handle.net/1721.1/163102</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1930-01-01T00:00:00Z</published>
<summary type="text">An optical instrument for the synthesis of sound
Brown, Sherwood Fiske.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1930
</summary>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The chemical and physical constitution of reduced copper-red glazes</title>
<link href="https://hdl.handle.net/1721.1/163101" rel="alternate"/>
<author>
<name>Brown, Sherwood Fiske.</name>
</author>
<id>https://hdl.handle.net/1721.1/163101</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">The chemical and physical constitution of reduced copper-red glazes
Brown, Sherwood Fiske.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Metallurgy, 1961; Includes bibliographical references (leaf 60).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Pleasant Valley, Nova Scotia, Limestone</title>
<link href="https://hdl.handle.net/1721.1/163100" rel="alternate"/>
<author>
<name>Jeffries, James T.</name>
</author>
<author>
<name>Manlove, Robert F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163100</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">The Pleasant Valley, Nova Scotia, Limestone
Jeffries, James T.; Manlove, Robert F.
Thesis: B.S., Massachusetts Institute of Technology, Department of Geology, 1959; Includes bibliographical references (leaf 63).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study</title>
<link href="https://hdl.handle.net/1721.1/163099" rel="alternate"/>
<author>
<name>Goody, Marvin E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163099</id>
<updated>2025-10-10T03:04:41Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study
Goody, Marvin E.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1951; "A thesis submitted in partial fulfillment of the requirements for the degree of Master in Architecture, Massachusetts Institute of Technology, August 22, 1951."; Includes bibliographical references (leaves 93-95).
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Essays on structuralism and development</title>
<link href="https://hdl.handle.net/1721.1/163098" rel="alternate"/>
<author>
<name>Boutros-Ghali, Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/163098</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Essays on structuralism and development
Boutros-Ghali, Y.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 1981; Includes bibliographies.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and evaluation of a frequency-shifting hearing aid.</title>
<link href="https://hdl.handle.net/1721.1/163097" rel="alternate"/>
<author>
<name>Falkenburg, Douglas Emil.</name>
</author>
<id>https://hdl.handle.net/1721.1/163097</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Design and evaluation of a frequency-shifting hearing aid.
Falkenburg, Douglas Emil.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Bibliography: leaves 103-104.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neutron scattering study of the magnetism and structural phases of superconducting La₂CuO₄₊y̳</title>
<link href="https://hdl.handle.net/1721.1/163096" rel="alternate"/>
<author>
<name>Lee, Young Sang, 1971-</name>
</author>
<id>https://hdl.handle.net/1721.1/163096</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>2000-01-01T00:00:00Z</published>
<summary type="text">Neutron scattering study of the magnetism and structural phases of superconducting La₂CuO₄₊y̳
Lee, Young Sang, 1971-
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2000; In title on t.p., double-underscored "y" appears as subscript.; Includes bibliographical references (p. 195-215).
</summary>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Voices of Nanomedicine: Blueprint Guidelines for Collaboration in Addressing Global Unmet Medical Needs</title>
<link href="https://hdl.handle.net/1721.1/163095" rel="alternate"/>
<author>
<name>Prasad, Rajendra</name>
</author>
<author>
<name>Ghosh, Arnab</name>
</author>
<author>
<name>Patel, Vinay</name>
</author>
<author>
<name>Peng, Berney</name>
</author>
<author>
<name>Mendes, Bárbara B</name>
</author>
<author>
<name>Win, Eaint Honey Aung</name>
</author>
<author>
<name>Delogu, Lucia Gemma</name>
</author>
<author>
<name>Wong, Joyce Y</name>
</author>
<author>
<name>Pischel, Kristin J</name>
</author>
<author>
<name>Bellare, Jayesh R</name>
</author>
<author>
<name>Bar-Shir, Amnon</name>
</author>
<author>
<name>Thakor, Avnesh S</name>
</author>
<author>
<name>Parak, Wolfgang J</name>
</author>
<author>
<name>Bhujwalla, Zaver M</name>
</author>
<author>
<name>Zhang, Yu Shrike</name>
</author>
<author>
<name>Kommineni, Nagavendra</name>
</author>
<author>
<name>Rotello, Vince M</name>
</author>
<author>
<name>Cai, Weibo</name>
</author>
<author>
<name>Lammers, Twan</name>
</author>
<author>
<name>Odom, Teri W</name>
</author>
<author>
<name>Padmanaban, Govindarajan</name>
</author>
<author>
<name>Peer, Dan</name>
</author>
<author>
<name>Lovell, Jonathan F</name>
</author>
<author>
<name>Srivastava, Rohit</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Conde, João</name>
</author>
<id>https://hdl.handle.net/1721.1/163095</id>
<updated>2026-03-08T03:27:00Z</updated>
<published>2025-01-10T00:00:00Z</published>
<summary type="text">Voices of Nanomedicine: Blueprint Guidelines for Collaboration in Addressing Global Unmet Medical Needs
Prasad, Rajendra; Ghosh, Arnab; Patel, Vinay; Peng, Berney; Mendes, Bárbara B; Win, Eaint Honey Aung; Delogu, Lucia Gemma; Wong, Joyce Y; Pischel, Kristin J; Bellare, Jayesh R; Bar-Shir, Amnon; Thakor, Avnesh S; Parak, Wolfgang J; Bhujwalla, Zaver M; Zhang, Yu Shrike; Kommineni, Nagavendra; Rotello, Vince M; Cai, Weibo; Lammers, Twan; Odom, Teri W; Padmanaban, Govindarajan; Peer, Dan; Lovell, Jonathan F; Srivastava, Rohit; Langer, Robert; Conde, João
The “Voices” under this Perspective underline the importance of interdisciplinary collaboration and partnerships across several disciplines, such as medical science and technology, medicine, bioengineering, and computational approaches, in bridging the gap between research, manufacturing, and clinical applications. Effective communication is key to bridging team gaps, enhancing trust, and resolving conflicts, thereby fostering teamwork and individual growth toward shared goals. Drawing from the success of the COVID-19 vaccine development, we advocate the application of similar collaborative models in other complex health areas such as nanomedicine and biomedical engineering. The role of digital technology and big data in healthcare innovation is highlighted along with the necessity for specialized education in collaborative practices. This approach is decisive in advancing healthcare solutions, leading to improved treatment and patient outcomes.
</summary>
<dc:date>2025-01-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biological Cohesion of Sediment Bed Diminishes Net Deposition of Fine Non‐Cohesive Particles Over Bare Bed and Within Model Emergent Canopies</title>
<link href="https://hdl.handle.net/1721.1/163094" rel="alternate"/>
<author>
<name>Park, Hyoungchul</name>
</author>
<author>
<name>Nepf, Heidi</name>
</author>
<id>https://hdl.handle.net/1721.1/163094</id>
<updated>2026-03-08T03:27:03Z</updated>
<published>2025-05-12T00:00:00Z</published>
<summary type="text">Biological Cohesion of Sediment Bed Diminishes Net Deposition of Fine Non‐Cohesive Particles Over Bare Bed and Within Model Emergent Canopies
Park, Hyoungchul; Nepf, Heidi
This study investigated how Extracellular Polymeric Substances (EPS) produced by microorganisms influenced particle deposition to a sediment bed. The particle deposition decreased with increasing EPS, because the EPS filled the pore spaces between individual sediment grains, reducing the porosity of the sediment bed. With decreased porosity, newly deposited particles could not settle in between the grains of the bed, so that particles were more exposed to the flow, making resuspension easier and leading to decreased deposition. For the same level of bio‐cohesion, increasing the near‐bed turbulence diminished deposition. For the vegetated channel, as bio‐cohesion increased, particles were easily resuspended around individual stems due to the enhanced exposure effect, expanding the regions where deposition was excluded and leading to a more heterogeneous spatial distribution of deposition. The effect of EPS was negligible for the smallest velocity magnitude, for which all particles deposited, and for the largest velocity magnitude, for which most particles were resuspended.
</summary>
<dc:date>2025-05-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Record‐High Ozone in the Austral Mid‐Latitude Tropopause Region Driven by Dynamical and Chemical Effects of the 2019 Sudden Stratospheric Warming</title>
<link href="https://hdl.handle.net/1721.1/163093" rel="alternate"/>
<author>
<name>Zhang, Selena</name>
</author>
<author>
<name>Solomon, Susan</name>
</author>
<author>
<name>Zhang, Jun</name>
</author>
<author>
<name>Kinnison, Douglas</name>
</author>
<id>https://hdl.handle.net/1721.1/163093</id>
<updated>2026-03-08T03:26:57Z</updated>
<published>2025-05-10T00:00:00Z</published>
<summary type="text">Record‐High Ozone in the Austral Mid‐Latitude Tropopause Region Driven by Dynamical and Chemical Effects of the 2019 Sudden Stratospheric Warming
Zhang, Selena; Solomon, Susan; Zhang, Jun; Kinnison, Douglas
In January 2020, tropopause‐level ozone in the austral mid‐latitudes was the highest ever observed in the available Microwave Limb Sounder data record since 2004. Two extreme events preceded this anomaly: the Australian Black Summer fires and the 2019 sudden stratospheric warming (SSW), raising the question of how these disruptions influenced Southern Hemisphere ozone. Here, we investigate the dynamical and chemical contributions to the ozone anomaly using a chemistry‐climate model and satellite observations. We find that downward transport of polar ozone‐enriched air due to the SSW later spread equatorward. Such transport together with photochemical ozone production from emissions of wildfires (fueled by dry and hot conditions previously attributed to the SSW) increased tropopause‐level ozone by up to 30 ppb, with transport as the dominant factor (around 80%). While chemical ozone production from wildfires is well‐recognized, our results highlight that SSWs can greatly influence mid‐latitude ozone through dynamical effects.
</summary>
<dc:date>2025-05-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pure Event Semantics</title>
<link href="https://hdl.handle.net/1721.1/163092" rel="alternate"/>
<author>
<name>Schwarzschild, Roger</name>
</author>
<id>https://hdl.handle.net/1721.1/163092</id>
<updated>2026-03-08T03:26:49Z</updated>
<published>2025-05-28T00:00:00Z</published>
<summary type="text">Pure Event Semantics
Schwarzschild, Roger
In a pure event semantics for natural language, the domain of quantification and predication is limited to events and states. I offer pure event semantic analyses of several phenomena, some of which have not been treated before in formal semantics. In the pure event semantics sketched in the second section, nouns are state predicates, and this provides the starting point for the analyses. The phenomena involve grammatical number, the mass-count distinction, adjectival modification, count adjectives, diminutives, lexical plurals, duals, and mass gender. In the conclusion, there is a brief discussion of potential metaphysical or psychological ramifications of doing semantics this way.
</summary>
<dc:date>2025-05-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Grice Is Right: Grice's Non‐Cooperation Problem and the Structure of Conversation</title>
<link href="https://hdl.handle.net/1721.1/163091" rel="alternate"/>
<author>
<name>Berstler, Sam</name>
</author>
<id>https://hdl.handle.net/1721.1/163091</id>
<updated>2026-03-08T03:26:53Z</updated>
<published>2025-05-26T00:00:00Z</published>
<summary type="text">The Grice Is Right: Grice's Non‐Cooperation Problem and the Structure of Conversation
Berstler, Sam
H. P. Grice seemed to rest his theory of conversational implicature on the assumption that speakers aim to cooperatively exchange information with each other. In the real world, speakers often don’t. Does one of the most influential theories in 20th-century philosophy of language rest on a mistake? Yes—but not in the way that philosophers have thought. I argue that Grice should have rested his theory on a different assumption: that speakers aim to appear to aim to cooperatively exchange information with each other. This proposal dissolves Grice’s Non-Cooperation Problem but preserves Grice’s central insights about the nature of conversational implicatures. More generally, it enables the Gricean to illuminate the structure of many non-cooperative or otherwise “non-ideal” conversations.
</summary>
<dc:date>2025-05-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>My Struggles and Dreams as a Chemical Engineer</title>
<link href="https://hdl.handle.net/1721.1/163090" rel="alternate"/>
<author>
<name>Langer, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/163090</id>
<updated>2026-03-08T03:26:51Z</updated>
<published>2025-03-03T00:00:00Z</published>
<summary type="text">My Struggles and Dreams as a Chemical Engineer
Langer, Robert
My career has not been straightforward. Although I am a chemical engineer, and I'm proud of that, I took a path from chemistry and engineering to one that also involved experimental biology and medicine. This was very unusual many decades ago. In so doing, I met with rejection and ridicule early in my career. However, by going down that path, I was able to make discoveries and inventions that I hope have saved and improved lives, and I've been able to train a great number of people who are going down the road I began traveling over many years ago.
</summary>
<dc:date>2025-03-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>An In Situ Curing, Shear‐Responsive Biomaterial Designed for Durable Embolization of Microvasculature</title>
<link href="https://hdl.handle.net/1721.1/163089" rel="alternate"/>
<author>
<name>Pham, Quynh P</name>
</author>
<author>
<name>Groom, Jeffrey V</name>
</author>
<author>
<name>Sadasivan, Chander</name>
</author>
<author>
<name>Fiorella, David J</name>
</author>
<author>
<name>Madoff, David C</name>
</author>
<author>
<name>Guo, Lee‐Jae</name>
</author>
<author>
<name>Fornaciari, Michael</name>
</author>
<author>
<name>Guertin, Courtney</name>
</author>
<author>
<name>Wiltsey, Craig</name>
</author>
<author>
<name>Core, Lee</name>
</author>
<author>
<name>Merlo, Jonathan</name>
</author>
<author>
<name>Wustenberg, William</name>
</author>
<author>
<name>Virmani, Renu</name>
</author>
<author>
<name>Arthur, Adam S</name>
</author>
<author>
<name>Langer, Robert S</name>
</author>
<author>
<name>Whitesides, George M</name>
</author>
<author>
<name>Sharma, Upma</name>
</author>
<id>https://hdl.handle.net/1721.1/163089</id>
<updated>2026-03-08T03:26:47Z</updated>
<published>2025-03-11T00:00:00Z</published>
<summary type="text">An In Situ Curing, Shear‐Responsive Biomaterial Designed for Durable Embolization of Microvasculature
Pham, Quynh P; Groom, Jeffrey V; Sadasivan, Chander; Fiorella, David J; Madoff, David C; Guo, Lee‐Jae; Fornaciari, Michael; Guertin, Courtney; Wiltsey, Craig; Core, Lee; Merlo, Jonathan; Wustenberg, William; Virmani, Renu; Arthur, Adam S; Langer, Robert S; Whitesides, George M; Sharma, Upma
Endovascular embolization is a minimally‐invasive technique whereby blood vessels supplying pathological structures are selectively occluded with various embolic agents. In many scenarios, it is desirable for the embolic to distally penetrate to the level of the microvasculature, which maximizes devascularization. Existing agents exhibit inconsistent distal penetration and have other limitations including tendency for proximal reflux, patient pain during infusion, lack of fluoroscopic radiopacity, potential for catheter adhesion, susceptibility to recanalization, and other usability challenges. NeoCast is an in situ curing, solvent‐free, non‐adhesive biomaterial composed of polydimethylsiloxane, bismuth trioxide, and fumed silica that possesses shear‐responsive properties enabling manual injectability through commercially‐available microcatheters with large and small diameter lumens. Here, embolization performance with and without flow arrest, in both arterial and venous preclinical anatomies is reported. NeoCast reproducibly achieves a rate of distal penetration with microvascular occlusion that is superior to existing agents, exhibits excellent fluoroscopic visibility, and provides durable occlusion. There is mild inflammation when NeoCast is infused into blood vessels and absence of neurotoxicity when implanted directly into brain tissue. The engineered NeoCast material is poised to become a next‐generation, liquid embolic agent for applications in which distal microvascular occlusion is desired.
</summary>
<dc:date>2025-03-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>A constitutive neural network for incompressible hyperelastic materials</title>
<link href="https://hdl.handle.net/1721.1/163088" rel="alternate"/>
<author>
<name>Lee, Sanghee</name>
</author>
<author>
<name>Bathe, Klaus-Jürgen</name>
</author>
<id>https://hdl.handle.net/1721.1/163088</id>
<updated>2025-10-09T03:45:34Z</updated>
<published>2025-08-20T00:00:00Z</published>
<summary type="text">A constitutive neural network for incompressible hyperelastic materials
Lee, Sanghee; Bathe, Klaus-Jürgen
We propose a B-spline-based constitutive neural network to model the mechanical behavior of incompressible isotropic materials. The theoretical foundation of this network is the Sussman-Bathe model which interpolates tension–compression test data points and recovers the strain energy function. Our neural network uses regression to self-optimize the knot configurations of the B-splines and to determine a twice differentiable curve of the material response that is closely aligned with the given data points. We address datasets displaying physically complicated behaviors. Through the patch test validation of the constitutive model and illustrative example solutions, we highlight the flexibility inherent in spline-based models and the automated approximation capabilities enabled by neural networks.
</summary>
<dc:date>2025-08-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bounds on the Ground State Energy of Quantum p-Spin Hamiltonians</title>
<link href="https://hdl.handle.net/1721.1/163087" rel="alternate"/>
<author>
<name>Anschuetz, Eric R.</name>
</author>
<author>
<name>Gamarnik, David</name>
</author>
<author>
<name>Kiani, Bobak T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163087</id>
<updated>2025-10-09T03:45:21Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Bounds on the Ground State Energy of Quantum p-Spin Hamiltonians
Anschuetz, Eric R.; Gamarnik, David; Kiani, Bobak T.
We consider the problem of estimating the ground state energy of quantum p-local spin glass random Hamiltonians, the quantum analogues of widely studied classical spin glass models. Our main result shows that the maximum energy achievable by product states has a well-defined limit (for even p) as n → ∞ and is E*_product = 2 log p in the limit of large p. This value is interpreted as the maximal energy of a much simpler so-called Random Energy Model, widely studied in the setting of classical spin glasses. The proof of the limit existing follows from an extension of Fekete’s Lemma after we demonstrate near super-additivity of the (normalized) quenched free energy. The proof of the value follows from a second moment method on the number of states achieving a given energy when restricting to an ϵ-net of product states. Furthermore, we relate the maximal energy achieved over all states to a p-dependent constant γ_p, which is defined by the degree of violation of a certain asymptotic dependence ansatz over graph matchings. We show that the maximal energy achieved by all states E*_p in the limit of large n is at most γ_p · E*_product. We also prove using Lindeberg’s interpolation method that the limiting E*_p is robust with respect to the choice of the randomness and, for instance, also applies to the case of sparse random Hamiltonians. This robustness in the randomness extends to a wide range of random Hamiltonian models including SYK and random quantum max-cut.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Riemannian Adaptive Regularized Newton Methods with Hölder Continuous Hessians</title>
<link href="https://hdl.handle.net/1721.1/163086" rel="alternate"/>
<author>
<name>Zhang, Chenyu</name>
</author>
<author>
<name>Jiang, Rujun</name>
</author>
<id>https://hdl.handle.net/1721.1/163086</id>
<updated>2025-10-09T03:45:23Z</updated>
<published>2025-05-21T00:00:00Z</published>
<summary type="text">Riemannian Adaptive Regularized Newton Methods with Hölder Continuous Hessians
Zhang, Chenyu; Jiang, Rujun
This paper presents strong worst-case iteration and operation complexity guarantees for Riemannian adaptive regularized Newton methods, a unified framework encompassing both Riemannian adaptive regularization (RAR) methods and Riemannian trust region (RTR) methods. We comprehensively characterize the sources of approximation in second-order manifold optimization methods: the objective function’s smoothness, the retraction’s smoothness, and the subproblem solver’s inexactness. Specifically, for a function with a μ-Hölder continuous Hessian, when equipped with a retraction featuring a ν-Hölder continuous differential and a θ-inexact subproblem solver, both RTR and RAR with (2+α) regularization (where α = min{μ, ν, θ}) locate an (ϵ, ϵ^(α/(1+α)))-approximate second-order stationary point within at most O(ϵ^(−(2+α)/(1+α))) iterations and at most Õ(ϵ^(−(4+3α)/(2(1+α)))) Hessian-vector products with high probability. These complexity results are novel and sharp, and reduce to an iteration complexity of O(ϵ^(−3/2)) and an operation complexity of Õ(ϵ^(−7/4)) when α = 1.
</summary>
<dc:date>2025-05-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effect of Die Bearing Geometry on Extrudability of High-Strength AA6082 Alloy with Cu</title>
<link href="https://hdl.handle.net/1721.1/163085" rel="alternate"/>
<author>
<name>Wang, Xiaoying</name>
</author>
<author>
<name>Khan, Muhammad S.</name>
</author>
<author>
<name>Wells, Mary A.</name>
</author>
<author>
<name>Poole, Warren J.</name>
</author>
<author>
<name>Parson, Nick</name>
</author>
<id>https://hdl.handle.net/1721.1/163085</id>
<updated>2026-03-08T03:26:10Z</updated>
<published>2025-08-25T00:00:00Z</published>
<summary type="text">Effect of Die Bearing Geometry on Extrudability of High-Strength AA6082 Alloy with Cu
Wang, Xiaoying; Khan, Muhammad S.; Wells, Mary A.; Poole, Warren J.; Parson, Nick
This study investigated the impact of die bearing geometry on the surface cracking behavior of a high-strength AA6xxx alloy. Experimental and numerical methods were employed, along with differential scanning calorimetry tests to determine the material’s solidus temperature. Four different die geometries were employed in both the extrusion trial and the simulation. Extrusion trials were conducted for each die geometry over a range of extrusion speeds, with the resulting surface defects being examined using SEM. The findings indicate that die bearing geometry significantly affects surface morphology and crack occurrence. Choked dies enabled crack-free extrusion at higher speeds, particularly a 12 mm choked bearing with a 1° angle, outperforming a 25 mm flat bearing and a zero-bearing die. The 35 mm choked bearing achieved crack-free extrusion even at maximum extrusion speed, yielding smoother surfaces than the other dies. Numerical simulations demonstrated the differences in stress states using different die bearing geometries, showing that the choked bearings alter the stress state at the die corner to cause a transition from high tensile stress to lower tensile or compressive stress. The extrusion limit diagrams for different die bearings were also constructed based on the extrusion trial data to provide guidance for choosing appropriate extrusion parameters for future studies. This study adds a valuable contribution to the existing literature by shedding light on the role of die bearing geometry in controlling surface morphology and surface crack formation, providing important insights that can be used to optimize the extrusion process.
</summary>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single cells are compactly and accurately described as fractional Kelvin-Voigt materials</title>
<link href="https://hdl.handle.net/1721.1/163084" rel="alternate"/>
<author>
<name>Das, Mohua</name>
</author>
<author>
<name>Waeterloos, Jarno L.</name>
</author>
<author>
<name>Clasen, Christian</name>
</author>
<author>
<name>McKinley, Gareth H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163084</id>
<updated>2026-03-08T03:26:10Z</updated>
<published>2025-08-25T00:00:00Z</published>
<summary type="text">Single cells are compactly and accurately described as fractional Kelvin-Voigt materials
Das, Mohua; Waeterloos, Jarno L.; Clasen, Christian; McKinley, Gareth H.
The mechanobiology of single cells plays a crucial role in various biological processes, including embryonic development, cancer treatment, and wound healing. This study highlights the use of the fractional Kelvin-Voigt model (FKVM)—a viscoelastic model consisting of two Scott Blair elements in parallel—to compactly and accurately characterize single-cell rheology. Unlike traditional power law models, which primarily capture the key features of the mechanical response at long timescales, the FKVM effectively captures both short- and long-timescale mechanical responses with a minimal number of constitutive parameters. Experimental small-amplitude oscillatory shear (SAOS) data for dividing canine kidney cells, creep data of human K562 erythroleukemic cells, and creep recovery data of blastomere cytoplasm are all analyzed to showcase the accuracy and versatility of the FKVM. Additionally, for the first time, the continuous relaxation and retardation spectra corresponding to the fractional differential formulation of the FKVM are derived. These results establish a comprehensive framework for predictive analysis of single-cell rheology in both the time and frequency domains.
</summary>
<dc:date>2025-08-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gray matter abnormalities in sight deprivation and sight restoration</title>
<link href="https://hdl.handle.net/1721.1/163083" rel="alternate"/>
<author>
<name>Pedersini, Caterina A.</name>
</author>
<author>
<name>Fracasso, Alessio</name>
</author>
<author>
<name>Dogar, Amna</name>
</author>
<author>
<name>Rokers, Bas</name>
</author>
<author>
<name>Sinha, Pawan</name>
</author>
<id>https://hdl.handle.net/1721.1/163083</id>
<updated>2026-03-08T03:26:20Z</updated>
<published>2025-08-12T00:00:00Z</published>
<summary type="text">Gray matter abnormalities in sight deprivation and sight restoration
Pedersini, Caterina A.; Fracasso, Alessio; Dogar, Amna; Rokers, Bas; Sinha, Pawan
Blindness provides a unique model for investigating brain plasticity in response to sensory deprivation. While structural changes in both gray and white matter have been widely documented, particularly in cases of early or congenital visual deprivation, gray matter studies have traditionally focused on cortical thickness, often finding cortical thickening in posterior regions. However, other aspects of gray matter integrity, such as cortical myelin content, remain underexplored. In this study, we examined the effects of visual deprivation on cortical structure in a cohort of early blind individuals who received eye surgery during adolescence, expanding beyond conventional measures to include cortical thickness, curvature, and T1-weighted signal intensity. This multi-faceted approach offers a more comprehensive view of cortical adaptations to early sensory deprivation. While blindness offers valuable insights into sensory-driven brain plasticity, an intriguing and unresolved question is whether structural plasticity reverses after sight restoration, enabling typical visual processing circuits to develop despite the initial period of deprivation. To address this, we assessed the effect of sight-recovering eye surgery on gray matter changes. Critically, individuals in this cohort received surgery after the closure of the sensitive period for visual development. We did not find evidence of gray matter changes after surgery. However, in a previous study conducted on the same cohort, we reported that notable plasticity in white matter emerged in this same population. These results suggest that white matter may potentially serve as a biomarker of structural plasticity following sight restoration, even beyond the sensitive developmental window.
</summary>
<dc:date>2025-08-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficacy and Safety of Toludesvenlafaxine Hydrochloride Sustained‐Release Tablets in Depression With Anhedonia: A Single‐Arm, Multicenter Clinical Study</title>
<link href="https://hdl.handle.net/1721.1/163082" rel="alternate"/>
<author>
<name>Wang, San-wang</name>
</author>
<author>
<name>Mi, Wei-feng</name>
</author>
<author>
<name>Hao, Xiao-nan</name>
</author>
<author>
<name>Liu, Xiao-xing</name>
</author>
<author>
<name>Wen, Xin</name>
</author>
<author>
<name>Zhao, Min</name>
</author>
<author>
<name>Jiang, Hai-feng</name>
</author>
<author>
<name>Wang, Wen-zheng</name>
</author>
<author>
<name>Li, Tao</name>
</author>
<author>
<name>Tan, Zhong-Lin</name>
</author>
<author>
<name>Chen, Song</name>
</author>
<author>
<name>Lv, Wen</name>
</author>
<author>
<name>Ning, Yu-ping</name>
</author>
<author>
<name>Zhou, Yan-ling</name>
</author>
<author>
<name>Chen, Ying-mei</name>
</author>
<author>
<name>Tang, Xiang-dong</name>
</author>
<author>
<name>Li, Bin</name>
</author>
<author>
<name>Liu, Yang</name>
</author>
<author>
<name>Ma, Xian-cang</name>
</author>
<author>
<name>Dong, Ying-ying</name>
</author>
<author>
<name>Chen, Yun-chun</name>
</author>
<author>
<name>Wang, Hui-ling</name>
</author>
<author>
<name>Huang, Yong-lan</name>
</author>
<author>
<name>Zhang, Hua</name>
</author>
<author>
<name>Lu, Lin</name>
</author>
<id>https://hdl.handle.net/1721.1/163082</id>
<updated>2026-03-08T03:26:53Z</updated>
<published>2025-05-06T00:00:00Z</published>
<summary type="text">Efficacy and Safety of Toludesvenlafaxine Hydrochloride Sustained‐Release Tablets in Depression With Anhedonia: A Single‐Arm, Multicenter Clinical Study
Wang, San-wang; Mi, Wei-feng; Hao, Xiao-nan; Liu, Xiao-xing; Wen, Xin; Zhao, Min; Jiang, Hai-feng; Wang, Wen-zheng; Li, Tao; Tan, Zhong-Lin; Chen, Song; Lv, Wen; Ning, Yu-ping; Zhou, Yan-ling; Chen, Ying-mei; Tang, Xiang-dong; Li, Bin; Liu, Yang; Ma, Xian-cang; Dong, Ying–ying; Chen, Yun-chun; Wang, Hui-ling; Huang, Yong-lan; Zhang, Hua; Lu, Lin
Toludesvenlafaxine hydrochloride sustained-release tablets, as China’s first independently developed chemical Class 1 innovative drug with independent intellectual property rights for the treatment of depression and a new molecular entity, represent a novel triple reuptake inhibitor (TRI) with specific target selectivity for serotonin (5-HT), norepinephrine (NE), and dopamine (DA). This single-arm, multicenter clinical study aimed to evaluate the efficacy and safety of toludesvenlafaxine in alleviating anhedonia symptoms in patients with major depressive disorder (MDD). A total of 123 patients aged 18–65 years were enrolled between April 2023 and April 2024 and received an 8-week treatment with toludesvenlafaxine sustained-release tablets (80–160 mg/day). The primary efficacy endpoint was the change in the total score of the Dimensional Anhedonia Rating Scale (DARS) at weeks 2, 4, and 8. Significant improvements in DARS scores were observed, with mean changes from baseline of 8.4 (95% CI [6.4, 10.4], p &lt; 0.0001), 14.1 (95% CI [12.0, 16.2], p &lt; 0.0001), and 20.4 (95% CI [18.0, 22.9], p &lt; 0.0001), respectively. Additionally, after 8 weeks of treatment, plasma levels of neurotrophic factors, including mature brain-derived neurotrophic factor (mBDNF) (t = 28.78, p &lt; 0.0001), pro-BDNF (t = 27.71, p &lt; 0.0001), and vascular endothelial growth factor (VEGF) (t = 31.07, p &lt; 0.0001), were significantly increased, and the plasma level of IGF-1 was not significantly changed (t = 0.35, p = 0.7269). No association was found between the percentage of changes in neurotrophic factors and the percentage of symptom improvements. Toludesvenlafaxine was generally well-tolerated, with treatment-emergent adverse events (AEs) (TEAEs) reported in 83.7% of participants and treatment-related AEs (TRAEs) in 76.4%. 
These findings indicate that toludesvenlafaxine hydrochloride sustained-release tablets are safe, well-tolerated, and effective in alleviating anhedonia symptoms in patients with depression.
</summary>
<dc:date>2025-05-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Is Deuterium Sequestering by Reactive Carbon Atoms an Important Mechanism to Reduce Deuterium Content in Biological Water?</title>
<link href="https://hdl.handle.net/1721.1/163081" rel="alternate"/>
<author>
<name>Seneff, Stephanie</name>
</author>
<author>
<name>Nigh, Greg</name>
</author>
<author>
<name>Kyriakopoulos, Anthony M.</name>
</author>
<id>https://hdl.handle.net/1721.1/163081</id>
<updated>2026-03-08T03:26:45Z</updated>
<published>2025-05-14T00:00:00Z</published>
<summary type="text">Is Deuterium Sequestering by Reactive Carbon Atoms an Important Mechanism to Reduce Deuterium Content in Biological Water?
Seneff, Stephanie; Nigh, Greg; Kyriakopoulos, Anthony M.
Deuterium is a natural heavy isotope of hydrogen, having a neutron as well as a proton. Deuterium disrupts ATP synthesis in mitochondria, causing increased production of reactive oxygen species and reduced synthesis of ATP. Gut microbes likely play a significant role in providing deuterium depleted short chain fatty acids (SCFAs) to human colonocytes through hydrogen gas recycling. The production of deuterium depleted (deupleted) nutrients necessarily leaves behind deuterium enriched water, unless there is a process that can sequester deuterium in small molecules that are excreted through the feces. Here, we provide evidence that a small number of classes of uniquely structured carbon-nitrogen rings and bis-allylic carbon atoms in certain biologically active small molecules may play a crucial role in sequestering deuterium for export into feces or urine. Specifically, we have identified the imidazole ring present in histidine, histamine, and microbial derivatives of histidine, the tetraterpenoid lutein, bilirubin and the derivatives urobilinogen and stercobilinogen produced by gut microbes, and the bis-allylic carbons in polyunsaturated fatty acids as likely candidates for sequestering deuterium and thereby reducing the deuterium levels in the water-based medium. Normally, carbon atoms never exchange their bound protons with deuterons from the medium, but all the above classes of molecules are important exceptions to this rule, as has been shown experimentally.
</summary>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Surrogate-Assisted Adaptive Experimentation for Fused Filament Fabrication Process Optimization</title>
<link href="https://hdl.handle.net/1721.1/163080" rel="alternate"/>
<author>
<name>Mojumder, Satyajit</name>
</author>
<author>
<name>Liao, Shuheng</name>
</author>
<author>
<name>Liu, Wing K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163080</id>
<updated>2025-10-09T03:45:28Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">Surrogate-Assisted Adaptive Experimentation for Fused Filament Fabrication Process Optimization
Mojumder, Satyajit; Liao, Shuheng; Liu, Wing K.
Fused Filament Fabrication (FFF) is an advanced manufacturing process that requires precise control of multiple parameters, including nozzle temperature, print speed, and layer height. Due to the complexity of this high-dimensional process design space, experimental evaluations are often constrained. A key challenge in FFF is understanding how these parameters influence print quality and identifying optimal process conditions efficiently. This study addresses this challenge by developing a physics-based thermal model for FFF, implemented using a graphics processing unit-accelerated finite element method. The model is calibrated and validated against experimental thermal data for printing polylactic acid (PLA). It is then used to investigate the effects of nozzle temperature, print speed, bed temperature, and layer thickness on print quality by developing a cooling rate metric. A series of simulations is conducted within the process window using the physics-based model, and the resulting data are analyzed with SHapley Additive exPlanations to understand the influence of process parameters on print quality. The results indicate that layer height is the most critical factor affecting the quality of tensile samples. To enhance process optimization, a surrogate model is trained and optimized using data generated from the physics-based model, enabling the identification of an optimal processing window for PLA. By combining physics-based and data-driven modeling, this approach accelerates thermal prediction in the FFF process, facilitating the study of high-dimensional design spaces and the optimization of material-specific printing parameters. The proposed methodology provides a scalable framework for improving the efficiency and quality of extrusion-based additive manufacturing processes, demonstrating its potential for broader applications in process optimization.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Origins and Alteration of Ediacaran Carbonates Recording the Shuram Excursion in Oman</title>
<link href="https://hdl.handle.net/1721.1/163079" rel="alternate"/>
<author>
<name>Bergmann, Kristin D</name>
</author>
<author>
<name>Osburn, Magdalena R</name>
</author>
<author>
<name>Anderson, Noah T</name>
</author>
<author>
<name>Hayhow, Claire</name>
</author>
<author>
<name>Wilcots, Julia</name>
</author>
<author>
<name>Cantine, Marjorie D</name>
</author>
<author>
<name>Fischer, Woodward W</name>
</author>
<author>
<name>Bonifacie, Magali</name>
</author>
<id>https://hdl.handle.net/1721.1/163079</id>
<updated>2026-03-08T03:26:49Z</updated>
<published>2025-05-14T00:00:00Z</published>
<summary type="text">Origins and Alteration of Ediacaran Carbonates Recording the Shuram Excursion in Oman
Bergmann, Kristin D; Osburn, Magdalena R; Anderson, Noah T; Hayhow, Claire; Wilcots, Julia; Cantine, Marjorie D; Fischer, Woodward W; Bonifacie, Magali
The Shuram excursion is the largest known negative carbon isotope excursion in Earth's history. Recognized globally, it follows the Ediacaran Gaskiers glaciation and precedes a marked increase in the diversity and complexity of the earliest macroscopic multicellular organisms in the fossil record. A key question is whether this excursion reflects a primary perturbation to the carbon cycle, which would provide crucial insights into the environmental conditions shaping the earliest animals, or whether it is largely an artifact of later diagenetic alteration. To evaluate the extent of diagenesis in these rocks and constrain how much of the excursion reflects a primary signal, we investigate the sedimentology and geochemistry of carbonate strata in Oman using a variety of techniques spanning multiple spatial and temporal scales. Our multi‐faceted analysis identifies and characterizes four modes of diagenetic alteration, with sediment‐buffered conditions and authigenic carbonate precipitation as the dominant processes. However, the degree of alteration is insufficient to account for the range of marine sedimentologic and geochemical trends across the carbon isotope excursion. This suggests that, even with evidence of diagenesis, the rocks preserve a measurable record of changing conditions in both terrestrial and marine environments, offering unique insights into Earth's systems during a pivotal time in early animal evolution.
</summary>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preeclampsia is Associated with Altered Expression of Ferroptosis Biomarkers in Placental but not Maternal Vasculature</title>
<link href="https://hdl.handle.net/1721.1/163078" rel="alternate"/>
<author>
<name>Ng, Shu-Wing</name>
</author>
<author>
<name>Ng, Allen C.</name>
</author>
<author>
<name>Ng, Michelle C.</name>
</author>
<author>
<name>Ng, Shu-Kay</name>
</author>
<author>
<name>Arcuri, Felice</name>
</author>
<author>
<name>Genega, Elizabeth M.</name>
</author>
<author>
<name>Watkins, Jaclyn C.</name>
</author>
<author>
<name>Roberts, Drucilla J.</name>
</author>
<author>
<name>House, Michael D.</name>
</author>
<author>
<name>O’Tierney-Ginn, Perrie F.</name>
</author>
<author>
<name>Jacobsen, Daniel P.</name>
</author>
<author>
<name>Staff, Anne C.</name>
</author>
<author>
<name>Norwitz, Errol R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163078</id>
<updated>2025-10-09T03:45:33Z</updated>
<published>2025-08-06T00:00:00Z</published>
<summary type="text">Preeclampsia is Associated with Altered Expression of Ferroptosis Biomarkers in Placental but not Maternal Vasculature
Ng, Shu-Wing; Ng, Allen C.; Ng, Michelle C.; Ng, Shu-Kay; Arcuri, Felice; Genega, Elizabeth M.; Watkins, Jaclyn C.; Roberts, Drucilla J.; House, Michael D.; O’Tierney-Ginn, Perrie F.; Jacobsen, Daniel P.; Staff, Anne C.; Norwitz, Errol R.
Ferroptosis, an iron-dependent mechanism of programmed cell death, has been implicated in the pathogenesis of preeclampsia (PE). Here, we investigate the expression of key ferroptosis biomarkers in placental and decidua basalis tissues. Immunohistochemical (IHC) staining showed high expression of the ferroptosis suppressor, ferroptosis-suppressor protein 1 (FSP1), and the end product malondialdehyde (MDA), in healthy CD31-positive placental endothelium. The staining of all three markers was significantly reduced in PE placentas (P = 0.028). In vitro studies showed that an immortalized endometrial endothelial cell line, and its fetal counterpart, human umbilical vein endothelial cells, are intrinsically highly resistant to erastin-induced ferroptotic cell death compared with trophoblast, endometrial epithelial, and stromal fibroblast cell types. FSP1 was specifically expressed in the endometrial endothelial cells. Both FSP1 and another ferroptosis suppressor protein, GPX4, were degraded when the cells underwent ferroptotic cell death. Interestingly, staining of these same markers in maternal decidua basalis tissues did not show endothelium-specific staining, and no significant difference in staining was noted between healthy and PE tissues. Since previous studies have shown that endometrial cells can activate ferroptosis to produce pro-angiogenic cytokines, we posit that healthy placental endothelial cells activate ferroptosis, as evidenced by high MDA, to promote vasculature development without undergoing cell death, whereas PE placentas show reduced ferroptosis and vasculature underdevelopment. In contrast, both healthy and PE decidua basalis tissues were considered to be in a resting stage with regard to ferroptosis. Further studies are warranted to investigate how ferroptosis is regulated in both healthy and PE pregnancies.
</summary>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, Modeling, and Control of a Soft Robotic Diaphragm‐Assist Device in a Respiratory Simulator</title>
<link href="https://hdl.handle.net/1721.1/163077" rel="alternate"/>
<author>
<name>Quevedo‐Moreno, Diego</name>
</author>
<author>
<name>Lee, Sang‐Yoep</name>
</author>
<author>
<name>Tagoe, Jonathan</name>
</author>
<author>
<name>Emani, Vishnu</name>
</author>
<author>
<name>Bonnemain, Jean</name>
</author>
<author>
<name>Roche, Ellen T</name>
</author>
<id>https://hdl.handle.net/1721.1/163077</id>
<updated>2026-03-08T03:26:50Z</updated>
<published>2025-04-28T00:00:00Z</published>
<summary type="text">Design, Modeling, and Control of a Soft Robotic Diaphragm‐Assist Device in a Respiratory Simulator
Quevedo‐Moreno, Diego; Lee, Sang‐Yoep; Tagoe, Jonathan; Emani, Vishnu; Bonnemain, Jean; Roche, Ellen T
The diaphragm is a critical muscle for respiration, responsible for up to 70% of the inspiratory effort. Standard treatment for patients with severe diaphragm dysfunction is permanently tethering the airway to a mechanical ventilator, which greatly impacts patient autonomy and quality of life. Soft robots are ideal to assist in complex biological functions, such as diaphragm contraction. This article introduces a soft robotic diaphragm-assist device designed as a therapeutic treatment for diaphragm dysfunction; moreover, a clinically relevant respiratory simulator is designed and proposed as a validation and testing tool for this treatment. The device uses fabric-based pneumatic actuators to provide targeted mechanical assistance during inhalation. A two-step control system is implemented to optimize synchronization and support: 1) detecting breath intention from the pleural pressure signal to trigger the device and 2) regulating the device’s input pressure to assist in inhalation. Using the respiratory simulator, the device demonstrated the ability to restore pleural and abdominal pressures and significantly increased transdiaphragmatic pressure during simulated conditions of diaphragm dysfunction. This research advances the field of soft robotics in respiratory care, providing a foundational platform for the development of next-generation therapeutic devices aimed at improving the quality of life for patients with diaphragm dysfunction.
</summary>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>MeshModule: A Playful Modular Mesh System for Creative Construction</title>
<link href="https://hdl.handle.net/1721.1/163076" rel="alternate"/>
<author>
<name>Youn, Hye Jun</name>
</author>
<author>
<name>Sara, Serena</name>
</author>
<author>
<name>Ishii, Hiroshi</name>
</author>
<id>https://hdl.handle.net/1721.1/163076</id>
<updated>2026-03-08T03:25:15Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">MeshModule: A Playful Modular Mesh System for Creative Construction
Youn, Hye Jun; Sara, Serena; Ishii, Hiroshi
MeshModule is a modular construction platform composed of soft, 3D-printed mesh units designed for rapid prototyping of interactive, reconfigurable structures. Each module integrates a flexible mesh body with interlocking connectors, enabling assemblies that are both structurally robust and mechanically compliant. By varying infill patterns, material properties (PLA, TPU, and conductive filament), and geometries, MeshModule supports a range of mechanical behaviors, including bending and folding. The system also accommodates embedded electronics for responsive functionality, making it suitable for applications in wearable computing, education, and interactive art installations. Inspired by tactile learning toolkits, MeshModule fosters hands-on creativity, inclusivity, and scalable interaction design. This work demonstrates how soft digital fabrication can expand the boundaries of modular systems, enabling expressive, accessible, and programmable physical interfaces.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>POET: Supporting Prompting Creativity and Personalization with Automated Expansion of Text-to-Image Generation</title>
<link href="https://hdl.handle.net/1721.1/163075" rel="alternate"/>
<author>
<name>Han, Evans Xu</name>
</author>
<author>
<name>Zhang, Alice</name>
</author>
<author>
<name>Zhu, Haiyi</name>
</author>
<author>
<name>Shen, Hong</name>
</author>
<author>
<name>Liang, Paul Pu</name>
</author>
<author>
<name>Hsieh, Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/163075</id>
<updated>2026-03-08T03:25:45Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">POET: Supporting Prompting Creativity and Personalization with Automated Expansion of Text-to-Image Generation
Han, Evans Xu; Zhang, Alice; Zhu, Haiyi; Shen, Hong; Liang, Paul Pu; Hsieh, Jane
State-of-the-art visual generative AI tools hold immense potential to assist users in the early ideation stages of creative tasks — offering the ability to generate (rather than search for) novel and unprecedented (instead of existing) images of considerable quality that also adhere to boundless combinations of user specifications. However, many large-scale text-to-image systems are designed for broad applicability, yielding conventional output that may limit creative exploration. They also employ interaction methods that may be difficult for beginners. Given that creative end-users operate in diverse, context-specific ways that are often unpredictable, more variation and personalization are necessary. We introduce POET, a real-time interactive tool that (1) automatically discovers dimensions of homogeneity in text-to-image generative models, (2) expands these dimensions to diversify the output space of generated images, and (3) learns from user feedback to personalize expansions. An evaluation with 28 users spanning four creative task domains demonstrated POET’s ability to generate results with higher perceived diversity and help users reach satisfaction in fewer prompts during creative tasks, thereby prompting them to deliberate and reflect more on a wider range of possible produced results during the co-creative process. Focusing on visual creativity, POET offers a first glimpse of how interaction techniques of future text-to-image generation tools may support and align with more pluralistic values and the needs of end-users during the ideation stages of their work.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Text2Texture: Generating 3D-Printed Models with Textures based on Text and Image Prompts</title>
<link href="https://hdl.handle.net/1721.1/163074" rel="alternate"/>
<author>
<name>Yin, Joshua</name>
</author>
<author>
<name>Faruqi, Faraz</name>
</author>
<author>
<name>Nisser, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/163074</id>
<updated>2026-03-08T03:25:13Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Text2Texture: Generating 3D-Printed Models with Textures based on Text and Image Prompts
Yin, Joshua; Faruqi, Faraz; Nisser, Martin
To support users’ understanding of physical properties in 2D images, we propose Text2Texture, a web tool that converts 2D color images into textured 3D objects ready for 3D printing. This is achieved by extracting depth information using a monocular estimator, extracting local texture information using a fine-tuned stable diffusion model, and superimposing these macro- and micro-scale geometries to produce a composite 3D model with color, depth, and texture. Images can be uploaded directly or generated via text prompt, and we print a variety of objects generated using each approach to suggest applications in physicalizing virtual worlds, adding haptic cues to photographs, and conveying information about scale in images.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Motion Sensing into 3D-Printed Bending Structures</title>
<link href="https://hdl.handle.net/1721.1/163073" rel="alternate"/>
<author>
<name>Li, Mingming</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Chen, Haotian</name>
</author>
<author>
<name>Cao, Dingning</name>
</author>
<author>
<name>Sahin, Karla</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/163073</id>
<updated>2026-03-08T03:25:12Z</updated>
<published>2025-09-28T00:00:00Z</published>
<summary type="text">Integrating Motion Sensing into 3D-Printed Bending Structures
Li, Mingming; Li, Jiaji; Chen, Haotian; Cao, Dingning; Sahin, Karla; Mueller, Stefanie
We present a design and fabrication method for converting static 3D models into motion-capable, self-sensing structures using multi-material FDM 3D printing. Our method allows users to configure deformation behaviors, automatically generate printable circuits, and fabricate interactive objects using 3D printing in a single step without post-assembly or manual sensor integration. The 3D-printed circuits enable real-time detection of bending motions through a time-division multiplexing (TDM) circuit scheme. We demonstrate the effectiveness of our approach through sensing performance evaluation and several application examples.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>EI-Lite: Electrical Impedance Sensing for Micro-gesture Recognition and Pinch Force Estimation</title>
<link href="https://hdl.handle.net/1721.1/163072" rel="alternate"/>
<author>
<name>Zhu, Junyi</name>
</author>
<author>
<name>Xu, Tianyu</name>
</author>
<author>
<name>Wang, Jiayu</name>
</author>
<author>
<name>Guan, Emily</name>
</author>
<author>
<name>Moon, JaeYoung</name>
</author>
<author>
<name>Morvan, Stiven</name>
</author>
<author>
<name>Shin, D</name>
</author>
<author>
<name>Colaço, Andrea</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<author>
<name>Ahuja, Karan</name>
</author>
<author>
<name>Luo, Yiyue</name>
</author>
<author>
<name>Chatterjee, Ishan</name>
</author>
<id>https://hdl.handle.net/1721.1/163072</id>
<updated>2026-03-08T03:25:43Z</updated>
<published>2025-09-28T00:00:00Z</published>
<summary type="text">EI-Lite: Electrical Impedance Sensing for Micro-gesture Recognition and Pinch Force Estimation
Zhu, Junyi; Xu, Tianyu; Wang, Jiayu; Guan, Emily; Moon, JaeYoung; Morvan, Stiven; Shin, D; Colaço, Andrea; Mueller, Stefanie; Ahuja, Karan; Luo, Yiyue; Chatterjee, Ishan
Micro-gesture recognition and fine-grained pinch force estimation enable intuitive and discreet control of devices, offering significant potential for enhancing human-computer interaction (HCI). In this paper, we present EI-Lite, a lightweight wrist-worn electrical impedance sensing device for micro-gesture recognition and continuous pinch force estimation. We elicit an optimal and simplified device architecture through an ablation study on electrode placement with 13 users, and implement the elicited designs through 3D printing. We capture data from 15 participants on (1) six common micro-gestures (plus an idle state) and (2) index finger pinch forces, then develop machine learning models that interpret the impedance signals generated by these micro-gestures and pinch forces. Our system is capable of accurate recognition of micro-gesture events (96.33% accuracy), as well as continuously estimating the pinch force of the index finger in physical units (Newtons), with a mean-squared error (MSE) of 0.3071 (or mean force variance of 0.55 Newtons) over 15 participants. Finally, we demonstrate EI-Lite’s applicability via three applications in AR/VR, gaming, and assistive technologies.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Learners with a Low-Barrier Mobile Data Science Toolkit</title>
<link href="https://hdl.handle.net/1721.1/163071" rel="alternate"/>
<author>
<name>Elhashemy, Hanya</name>
</author>
<author>
<name>Parks, Robert</name>
</author>
<author>
<name>Kim, David YJ</name>
</author>
<author>
<name>Patton, Evan</name>
</author>
<author>
<name>Abelson, Harold</name>
</author>
<id>https://hdl.handle.net/1721.1/163071</id>
<updated>2026-03-08T03:26:15Z</updated>
<published>2024-10-01T00:00:00Z</published>
<summary type="text">Empowering Learners with a Low-Barrier Mobile Data Science Toolkit
Elhashemy, Hanya; Parks, Robert; Kim, David YJ; Patton, Evan; Abelson, Harold
This paper introduces a novel data science toolkit designed specifically for children, enabling them to create mobile apps integrated with data science capabilities. The toolkit showcases new features that simplify the data science process for young users. Additionally, the paper presents a collection of example apps created using the toolkit, highlighting the versatility and potential of this innovative platform. By empowering children to explore data science through app development, this toolkit opens exciting opportunities for hands-on learning and creative expression in the field of citizen science.
</summary>
<dc:date>2024-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Top-Down SBP: Turning Graph Clustering Upside Down</title>
<link href="https://hdl.handle.net/1721.1/163070" rel="alternate"/>
<author>
<name>Wanye, Frank</name>
</author>
<author>
<name>Gleyzer, Vitaliy</name>
</author>
<author>
<name>Kao, Edward</name>
</author>
<author>
<name>Feng, Wu-chun</name>
</author>
<id>https://hdl.handle.net/1721.1/163070</id>
<updated>2026-03-08T03:25:00Z</updated>
<published>2025-07-20T00:00:00Z</published>
<summary type="text">Top-Down SBP: Turning Graph Clustering Upside Down
Wanye, Frank; Gleyzer, Vitaliy; Kao, Edward; Feng, Wu-chun
Stochastic block partitioning (SBP) is a statistical inference-based algorithm for clustering vertices within a graph. It has been shown to be statistically robust and highly accurate even on graphs with a complex structure, but its poor scalability limits its usability to smaller-sized graphs. In this manuscript we argue that one reason for its poor scalability is the agglomerative, or bottom-up, nature of SBP’s algorithmic design; the agglomerative computations cause high memory usage and create a large search space that slows down statistical inference, particularly in the algorithm’s initial iterations. To address this bottleneck, we propose Top-Down SBP, a novel algorithm that replaces the agglomerative (bottom-up) block merges in SBP with a block-splitting operation. This enables the algorithm to start with all vertices in one cluster and subdivide them over time into smaller clusters. We show that Top-Down SBP is up to 7.7× faster than Bottom-Up SBP without sacrificing accuracy and can process larger graphs than Bottom-Up SBP on the same hardware due to an up to 4.1× decrease in memory usage. Additionally, we adapt existing methods for accelerating Bottom-Up SBP to the Top-Down approach, leading to up to 13.2× speedup over accelerated Bottom-Up SBP and up to 403× speedup over sequential Bottom-Up SBP on 64 compute nodes. Thus, Top-Down SBP represents substantial improvements to the scalability of SBP, enabling the analysis of larger datasets on the same hardware.
HPDC ’25, Notre Dame, IN, USA
</summary>
<dc:date>2025-07-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Prompt Engineering for Generative AI-Based App Generation</title>
<link href="https://hdl.handle.net/1721.1/163069" rel="alternate"/>
<author>
<name>Shone, Jasmin L.</name>
</author>
<author>
<name>Liu, Robin</name>
</author>
<author>
<name>Patton, Evan</name>
</author>
<author>
<name>Kim, David YJ</name>
</author>
<id>https://hdl.handle.net/1721.1/163069</id>
<updated>2026-03-08T03:26:13Z</updated>
<published>2023-04-01T00:00:00Z</published>
<summary type="text">Exploring Prompt Engineering for Generative AI-Based App Generation
Shone, Jasmin L.; Liu, Robin; Patton, Evan; Kim, David YJ
We introduce a cutting-edge learning platform powered by large language models that enables students to effortlessly generate mobile applications for smartphones and tablets from natural language descriptions. We further demonstrate that these user-generated apps can be further optimized with minor adjustments to the generative model's input, or "prompt." To maximize the efficacy of the prompt in producing a desired application, we explore three different methods of modification: 1) altering the selection mechanism of example pairs, 2) varying the number of example pairs, and 3) changing the order of pairs within the prompt. The prompts are constructed from a collection of example pairs, which comprise a textual description of an example app and its corresponding code, in addition to a description of the desired app. We test the model's performance by evaluating it with 18 different mobile application task descriptions, ranging from basic to complex, and then leveraging BLEU score to compare the model's outputs to manually created apps. Our findings indicate that the method of determining example pair selection and varying the number of examples included can significantly influence the quality of the generated apps. However, reordering the placement of the example pairs within the prompt does not affect the outcome. Finally, we conclude with a discussion on the potential implications for computer science education. The platform we present in this paper aims to further the democratization of app creation by enabling users to create apps with ease, regardless of their technical background.
</summary>
<dc:date>2023-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boosting hydrogel conductivity via water-dispersible conducting polymers for injectable bioelectronics</title>
<link href="https://hdl.handle.net/1721.1/163068" rel="alternate"/>
<author>
<name>Montazerian, Hossein</name>
</author>
<author>
<name>Davoodi, Elham</name>
</author>
<author>
<name>Wang, Canran</name>
</author>
<author>
<name>Lorestani, Farnaz</name>
</author>
<author>
<name>Li, Jiahong</name>
</author>
<author>
<name>Haghniaz, Reihaneh</name>
</author>
<author>
<name>Sampath, Rohan R</name>
</author>
<author>
<name>Mohaghegh, Neda</name>
</author>
<author>
<name>Khosravi, Safoora</name>
</author>
<author>
<name>Zehtabi, Fatemeh</name>
</author>
<author>
<name>Zhao, Yichao</name>
</author>
<author>
<name>Hosseinzadeh, Negar</name>
</author>
<author>
<name>Liu, Tianhan</name>
</author>
<author>
<name>Hsiai, Tzung K</name>
</author>
<author>
<name>Najafabadi, Alireza Hassani</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Anderson, Daniel G</name>
</author>
<author>
<name>Weiss, Paul S</name>
</author>
<author>
<name>Khademhosseini, Ali</name>
</author>
<author>
<name>Gao, Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/163068</id>
<updated>2026-03-08T03:26:17Z</updated>
<published>2025-04-22T00:00:00Z</published>
<summary type="text">Boosting hydrogel conductivity via water-dispersible conducting polymers for injectable bioelectronics
Montazerian, Hossein; Davoodi, Elham; Wang, Canran; Lorestani, Farnaz; Li, Jiahong; Haghniaz, Reihaneh; Sampath, Rohan R; Mohaghegh, Neda; Khosravi, Safoora; Zehtabi, Fatemeh; Zhao, Yichao; Hosseinzadeh, Negar; Liu, Tianhan; Hsiai, Tzung K; Najafabadi, Alireza Hassani; Langer, Robert; Anderson, Daniel G; Weiss, Paul S; Khademhosseini, Ali; Gao, Wei
Bioelectronic devices hold transformative potential for healthcare diagnostics and therapeutics. Yet, traditional electronic implants often require invasive surgeries and are mechanically incompatible with biological tissues. Injectable hydrogel bioelectronics offer a minimally invasive alternative that interfaces with soft tissue seamlessly. A major challenge is the low conductivity of bioelectronic systems, stemming from poor dispersibility of conductive additives in hydrogel mixtures. We address this issue by engineering doping conditions with hydrophilic biomacromolecules, enhancing the dispersibility of conductive polymers in aqueous systems. This approach achieves a 5-fold increase in dispersibility and a 20-fold boost in conductivity compared to conventional methods. The resulting conductive polymers are molecularly and in vivo degradable, making them suitable for transient bioelectronics applications. These additives are compatible with various hydrogel systems, such as alginate, forming ionically cross-linkable conductive inks for 3D-printed wearable electronics toward high-performance physiological monitoring. Furthermore, integrating conductive fillers with gelatin-based bioadhesive hydrogels substantially enhances conductivity for injectable sealants, achieving 250% greater sensitivity in pH sensing for chronic wound monitoring. Our findings indicate that hydrophilic dopants effectively tailor conducting polymers for hydrogel fillers, enhancing their biodegradability and expanding applications in transient implantable biomonitoring.
</summary>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanomedicine for targeting brain Neurodegeneration: Critical barriers and circadian rhythm Considerations</title>
<link href="https://hdl.handle.net/1721.1/163067" rel="alternate"/>
<author>
<name>Pineiro-Alonso, Laura</name>
</author>
<author>
<name>Rubio-Prego, Inés</name>
</author>
<author>
<name>Lobyntseva, Alexandra</name>
</author>
<author>
<name>González-Freire, Eva</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Alonso, María José</name>
</author>
<id>https://hdl.handle.net/1721.1/163067</id>
<updated>2026-03-08T03:26:11Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Nanomedicine for targeting brain Neurodegeneration: Critical barriers and circadian rhythm Considerations
Pineiro-Alonso, Laura; Rubio-Prego, Inés; Lobyntseva, Alexandra; González-Freire, Eva; Langer, Robert; Alonso, María José
The development of novel therapies for central nervous system (CNS) diseases, particularly neurodegenerative disorders like Alzheimer's disease (AD), is a critical global health priority. Biotherapeutics, such as monoclonal antibodies (mAbs) and RNA-based therapies, have shown potential for treating brain disorders. However, their clinical progress is limited by difficult access to their brain targets. At the preclinical level, nanotechnology has been shown to help these molecules overcome the biological barriers that impede their adequate brain delivery. This review highlights advances in this area and the challenges for translation to the clinic. Key nanotechnology-based strategies, such as surface modifications utilizing the endogenous protein corona, functionalization with targeting ligands, and therapeutic ultrasound-mediated microbubble oscillation, were analyzed in particular. Additionally, in line with the focus of the Special Issue, this review integrates the concept of chronotherapy, with a focus on AD treatment, highlighting the idea that, by aligning nanoparticle (NP)-based drug delivery with circadian rhythms, it may be possible to improve therapeutic outcomes. Finally, the article analyzes current strategies in CNS drug delivery in clinical trials and provides future directions within this frame, notably in the area of AD.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study on molecular orientation and stratification in RNA-lipid nanoparticles by cryogenic orbitrap secondary ion mass spectrometry</title>
<link href="https://hdl.handle.net/1721.1/163066" rel="alternate"/>
<author>
<name>Kotowska, Anna M</name>
</author>
<author>
<name>Fay, Michael</name>
</author>
<author>
<name>Watts, Julie A</name>
</author>
<author>
<name>Gilmore, Ian S</name>
</author>
<author>
<name>Scurr, David J</name>
</author>
<author>
<name>Howe, Alaina</name>
</author>
<author>
<name>Capka, Vladimir</name>
</author>
<author>
<name>Perez, Corey E</name>
</author>
<author>
<name>Doud, Devin</name>
</author>
<author>
<name>Patel, Siddharth</name>
</author>
<author>
<name>Umbarger, Mark</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Alexander, Morgan R</name>
</author>
<id>https://hdl.handle.net/1721.1/163066</id>
<updated>2026-03-08T03:26:25Z</updated>
<published>2025-05-22T00:00:00Z</published>
<summary type="text">Study on molecular orientation and stratification in RNA-lipid nanoparticles by cryogenic orbitrap secondary ion mass spectrometry
Kotowska, Anna M; Fay, Michael; Watts, Julie A; Gilmore, Ian S; Scurr, David J; Howe, Alaina; Capka, Vladimir; Perez, Corey E; Doud, Devin; Patel, Siddharth; Umbarger, Mark; Langer, Robert; Alexander, Morgan R
Lipid nanoparticle RNA (LNP-RNA) formulations are used for the delivery of vaccines and other therapies. RNA molecules are encapsulated within their interior through electrostatic interactions with positively charged lipids. The identity of the lipids presented at their surface plays a role in how they interact with and are perceived by the body, and in their resultant potency. Here, we use a model formulation to develop cryogenic sample preparation for molecular depth profiling Orbitrap secondary ion mass spectrometry (Cryo-OrbiSIMS), preceded by morphological characterisation using cryogenic transmission electron microscopy (Cryo-TEM). The depth distribution of individual lipid components is revealed relative to the surface and to the RNA cargo defining the core. A preferential lipid orientation can be determined for the 1,2-dimyristoyl-glycero-3-methoxypolyethylene glycol 2000 (DMG-PEG2k) molecule by comparing the profiles of PEG to DMG fragments. PEG fragments are found immediately during analysis of the LNP surface, while the DMG fragments are deeper, coincident with RNA ions located in the core, in agreement with established models of LNPs. This laboratory-based de novo analysis technique requires no labelling, providing advantages over large-facility neutron scattering characterisation.
</summary>
<dc:date>2025-05-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Line-of-Sight 3D Object Reconstruction via mmWave Surface Normal Estimation</title>
<link href="https://hdl.handle.net/1721.1/163065" rel="alternate"/>
<author>
<name>Dodds, Laura</name>
</author>
<author>
<name>Boroushaki, Tara</name>
</author>
<author>
<name>Zhou, Kaichen</name>
</author>
<author>
<name>Adib, Fadel</name>
</author>
<id>https://hdl.handle.net/1721.1/163065</id>
<updated>2026-03-08T03:24:59Z</updated>
<published>2025-06-01T00:00:00Z</published>
<summary type="text">Non-Line-of-Sight 3D Object Reconstruction via mmWave Surface Normal Estimation
Dodds, Laura; Boroushaki, Tara; Zhou, Kaichen; Adib, Fadel
This paper presents the design, implementation, and evaluation of mmNorm, a new and highly accurate method for non-line-of-sight 3D object reconstruction using millimeter wave (mmWave) signals. In contrast to past approaches for millimeter-wave-based imaging that perform backprojection for 3D object reconstruction, mmNorm reconstructs the surface by estimating the object's surface normals. To do this, it introduces a novel algorithm that directly estimates the surface normal vector field from mmWave reflections. By then inverting the normal field, it can reconstruct structural isosurfaces, then solve for the exact surface through a novel mmWave optimization framework. We built an end-to-end prototype of mmNorm using a TI IWR1443 Boost mmWave radar and a UR5e robotic arm, and evaluated it in over 110 real-world experiments across more than 60 different everyday objects. In a head-to-head comparison with state-of-the-art baselines, mmNorm achieves 96% reconstruction accuracy (3D F-score) compared to 78% for the best-performing baseline. These results show that mmNorm is capable of high-accuracy mmWave object reconstruction. The codebase and a video demonstration are available here: https://github.com/signalkinetics/mmNorm
MobiSys ’25, Anaheim, CA, USA
</summary>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BioLIG: Functionalizing Biocomposites with Laser-induced Graphene for Bio-Rapid Prototyping of Electronics</title>
<link href="https://hdl.handle.net/1721.1/163064" rel="alternate"/>
<author>
<name>Li, Yuqing Lucy</name>
</author>
<author>
<name>Kubu?ov?, Vlasta</name>
</author>
<author>
<name>Babatain, Wedyan</name>
</author>
<author>
<name>Labrune, Jean-Baptiste</name>
</author>
<author>
<name>Widder, Sage</name>
</author>
<author>
<name>Sun, Bernice</name>
</author>
<author>
<name>Forman, Jack</name>
</author>
<author>
<name>Ishii, Hiroshi</name>
</author>
<id>https://hdl.handle.net/1721.1/163064</id>
<updated>2026-03-08T03:25:39Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">BioLIG: Functionalizing Biocomposites with Laser-induced Graphene for Bio-Rapid Prototyping of Electronics
Li, Yuqing Lucy; Kubu?ov?, Vlasta; Babatain, Wedyan; Labrune, Jean-Baptiste; Widder, Sage; Sun, Bernice; Forman, Jack; Ishii, Hiroshi
In HCI, there is a rapidly growing interest in prototyping with conductive bio-based materials. However, methods for making bio-based materials conductive to suit the diverse needs of makers remain underexplored. We introduce BioLIG, a fabrication framework that functionalizes affordable and optimized bio-based substrates with a conventional CO2 laser to create highly conductive traces for sensors and circuits. To illustrate the framework, we first contribute five bio-based materials: three sheets (paper-like, fabric-like, plastic-like) and two paints (lignin-ink, chitosan-stain). A formal electrical characterization of our conductors highlights that they surpass activated charcoal, are on par with carbon black, and that one ink is even comparable to the most common synthetic material used for laser-induced graphene. Then, we present three biodegradable coatings that ensure functionality and durability and balance protection with controlled degradation. Next, we build upon our sheets, paints, and coatings to form multifunctional biodegradable biocomposites and implement five end-to-end applications. Lastly, we define three strategies by which the framework supports a circular making culture. BioLIG enables accessible, fast, and bio-rapid prototyping, adding new directions for designing sustainable electronics with environmental integration.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>SustainaPrint: Making the Most of Eco-Friendly Filaments</title>
<link href="https://hdl.handle.net/1721.1/163063" rel="alternate"/>
<author>
<name>Perroni-Scharf, Maxine</name>
</author>
<author>
<name>Xiao, Jennifer</name>
</author>
<author>
<name>Paulin, Cole</name>
</author>
<author>
<name>Wang, Zhi Ray</name>
</author>
<author>
<name>Sethapakdi, Ticha</name>
</author>
<author>
<name>Abdullah, Muhammad</name>
</author>
<author>
<name>Baudisch, Patrick</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/163063</id>
<updated>2026-03-08T03:25:37Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">SustainaPrint: Making the Most of Eco-Friendly Filaments
Perroni-Scharf, Maxine; Xiao, Jennifer; Paulin, Cole; Wang, Zhi Ray; Sethapakdi, Ticha; Abdullah, Muhammad; Baudisch, Patrick; Mueller, Stefanie
We present SustainaPrint, a system for integrating eco-friendly filaments into 3D printing without compromising structural integrity. While biodegradable and recycled 3D printing filaments offer environmental benefits, they may suffer from degraded or unpredictable mechanical properties, which can limit their use in load-bearing applications. SustainaPrint addresses this by strategically assigning eco-friendly and standard filaments to different regions of a multi-material print, reinforcing the areas that are most likely to break with stronger material while maximizing the use of sustainable filament elsewhere. As eco-friendly filaments often do not come with technical datasheets, we also introduce a low-cost, at-home mechanical testing toolkit that enables users to evaluate filament strength before deciding whether to use that filament in our pipeline. We validate SustainaPrint through real-world fabrication and mechanical testing, demonstrating its effectiveness across a range of functional 3D printing tasks.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Strategies for Developing Next-Generation Vaccines to Combat Infectious Viral Diseases</title>
<link href="https://hdl.handle.net/1721.1/163062" rel="alternate"/>
<author>
<name>Yuan, Fangfeng</name>
</author>
<author>
<name>Bluth, Martin H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163062</id>
<updated>2026-03-08T03:25:14Z</updated>
<published>2025-09-16T00:00:00Z</published>
<summary type="text">Novel Strategies for Developing Next-Generation Vaccines to Combat Infectious Viral Diseases
Yuan, Fangfeng; Bluth, Martin H.
The development of viral vaccines faces persistent scientific and logistical challenges, particularly in the wake of the COVID-19 pandemic. This review critically examines emerging strategies to overcome key barriers in viral vaccine design and deployment. We focus on four major areas: (1) structure-guided antigen engineering to stabilize conformations; (2) the mRNA platform and its delivery system; (3) advanced adjuvant systems that enhance cellular and humoral immunity; and (4) approaches to mitigate immune imprinting and antigenic variability, such as chimeric antigens and glycan shielding. We also explore anti-idiotypic vaccination strategies and the limitations of current animal models in predicting human immune responses. In addition, to address vaccine hesitancy and inequitable access, we advocate for global collaboration in manufacturing, distribution, and public education to ensure inclusive immunization strategies. By integrating molecular insights with platform technologies, we aim to inform the rational design of future vaccines with improved efficacy and public acceptance.
</summary>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biphasic Adaptations of Gastric Epithelial Cells in Chronic H. pylori Infection from Stress to Tolerance</title>
<link href="https://hdl.handle.net/1721.1/163061" rel="alternate"/>
<author>
<name>Zhang, Xiulin</name>
</author>
<author>
<name>He, Yang</name>
</author>
<author>
<name>Zhang, Xiaolu</name>
</author>
<author>
<name>Liang, Ziyi</name>
</author>
<author>
<name>Wang, Wendong</name>
</author>
<author>
<name>Da, Zhenyu</name>
</author>
<author>
<name>Lv, Jianyi</name>
</author>
<author>
<name>Guo, Meng</name>
</author>
<author>
<name>Huo, Xueyun</name>
</author>
<author>
<name>Liu, Xin</name>
</author>
<author>
<name>Lu, Jing</name>
</author>
<author>
<name>Cao, Lixue</name>
</author>
<author>
<name>Du, Xiaoyan</name>
</author>
<author>
<name>Ge, Zhongming</name>
</author>
<author>
<name>Chen, Zhenwen</name>
</author>
<author>
<name>Lu, Xuancheng</name>
</author>
<author>
<name>Zhang, Jianzhong</name>
</author>
<author>
<name>Li, Changlong</name>
</author>
<id>https://hdl.handle.net/1721.1/163061</id>
<updated>2026-03-08T03:25:11Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">Biphasic Adaptations of Gastric Epithelial Cells in Chronic H. pylori Infection from Stress to Tolerance
Zhang, Xiulin; He, Yang; Zhang, Xiaolu; Liang, Ziyi; Wang, Wendong; Da, Zhenyu; Lv, Jianyi; Guo, Meng; Huo, Xueyun; Liu, Xin; Lu, Jing; Cao, Lixue; Du, Xiaoyan; Ge, Zhongming; Chen, Zhenwen; Lu, Xuancheng; Zhang, Jianzhong; Li, Changlong
Helicobacter pylori (H. pylori) is a well-known pathogen associated with chronic gastric infection, progressing from gastritis to gastric adenocarcinoma, but the dynamic phenotypic and molecular characteristics of gastric epithelial cells during sustained infection remain unclear. We established a chronic infection model using the human gastric epithelial cell line GES-1, exposed to H. pylori or its lysate across 30 generations, dynamically assessing cell proliferation, migration, invasion, apoptosis, autophagy, and epithelial–mesenchymal transition (EMT) markers, with RNA sequencing for transcriptomic changes and a Mongolian gerbil model to validate chronic pathological progression. Acute H. pylori exposure induced pronounced morphological changes; suppressed proliferation, migration, and invasion; triggered apoptosis; and blocked autophagic flux, while long-term stimulation reversed these effects. EMT markers showed progressive loss of epithelial characteristics with chronic infection. RNA sequencing revealed a dynamic shift from inflammation-driven apoptosis to adaptive survival mechanisms. In vivo, prolonged infection induced dynamic TLR expression alongside progressive gastric pathology, including atrophy and dysplasia. Our study provides new molecular evidence for dynamic cellular and immunological adaptations of gastric epithelial cells under chronic H. pylori infection, highlighting critical intervention windows for preventing gastric carcinogenesis.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Refashion: Reconfigurable Garments via Modular Design</title>
<link href="https://hdl.handle.net/1721.1/163060" rel="alternate"/>
<author>
<name>Lin, Rebecca</name>
</author>
<author>
<name>Leake, Mackenzie</name>
</author>
<author>
<name>Lukáč, Michal</name>
</author>
<id>https://hdl.handle.net/1721.1/163060</id>
<updated>2026-03-08T03:25:41Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Refashion: Reconfigurable Garments via Modular Design
Lin, Rebecca; Leake, Mackenzie; Lukáč, Michal
While bodies change over time and trends vary, most store-bought clothing comes in fixed sizes and styles and fails to adapt to these changes. Alterations can enable small changes to otherwise static garments, but these changes often require sewing and are non-reversible. We propose a modular approach to garment design that considers resizing, restyling, and reuse earlier in the design process. Our contributions include a compact set of modules and connectors that form the building blocks of modular garments, a method to decompose a garment into modules via integer linear programming, and a digital design tool that supports modular garment design and simulation. Our user evaluation suggests that our approach to modular design can support the creation of a wide range of garments and can help users transform them across sizes and styles while reusing the same building blocks.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Deliverability Assessment of Distributed Energy Resources via Scenario-Based AC Optimal Power Flow</title>
<link href="https://hdl.handle.net/1721.1/163059" rel="alternate"/>
<author>
<name>Anton, Laurenţiu L.</name>
</author>
<author>
<name>Ilić, Marija D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163059</id>
<updated>2026-03-08T03:25:16Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Probabilistic Deliverability Assessment of Distributed Energy Resources via Scenario-Based AC Optimal Power Flow
Anton, Laurenţiu L.; Ilić, Marija D.
As electric grids decarbonize and distributed energy resources (DERs) become increasingly prevalent, interconnection assessments must evolve to reflect operational variability and control flexibility. This paper highlights key modeling limitations observed in practice and reviews approaches for modeling uncertainty. It then introduces a Probabilistic Deliverability Assessment (PDA) framework designed to complement and extend existing procedures. The framework integrates scenario-based AC optimal power flow (AC OPF), corrective dispatch, and optional multi-temporal constraints. Together, these form a structured methodology for quantifying DER utilization, deliverability, and reliability under uncertainty in load, generation, and topology. Outputs include interpretable metrics with confidence intervals that inform siting decisions and evaluate compliance with reliability thresholds across sampled operating conditions. A case study on Puerto Rico's publicly available bulk power system model demonstrates the framework's application using minimal input data, consistent with current interconnection practice. Across staged fossil generation retirements, the PDA identifies high-value DER sites and regions requiring additional reactive power support. Results are presented through mean dispatch signals, reliability metrics, and geospatial visualizations, demonstrating how the framework provides transparent, data-driven siting recommendations. The framework's modular design supports incremental adoption within existing workflows, encouraging broader use of AC OPF in interconnection and planning contexts.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stable Natural Iron Complex Micronutrient Powder for Enhanced Cellular Uptake</title>
<link href="https://hdl.handle.net/1721.1/163058" rel="alternate"/>
<author>
<name>Alsaiari, Shahad K</name>
</author>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Yang, Xin</name>
</author>
<author>
<name>Duan, Aranda R</name>
</author>
<author>
<name>Daristotle, John L</name>
</author>
<author>
<name>Straeten, Aurelien vander</name>
</author>
<author>
<name>Weinstock, Shelley B</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163058</id>
<updated>2026-03-08T03:26:06Z</updated>
<published>2025-07-21T00:00:00Z</published>
<summary type="text">Stable Natural Iron Complex Micronutrient Powder for Enhanced Cellular Uptake
Alsaiari, Shahad K; Zhang, Linzixuan; Yang, Xin; Duan, Aranda R; Daristotle, John L; Straeten, Aurelien vander; Weinstock, Shelley B; Langer, Robert; Jaklenec, Ana
Iron deficiency anemia (IDA) is a persistent global health challenge, particularly in low- and middle-income countries, necessitating effective iron fortification strategies. In this study, we developed FeC-4-1, a novel iron complex composed of ferrous sulfate, vitamin C (VC), and histidine, to enhance iron stability, cellular iron uptake, and compatibility with food matrices. FeC-4-1 exhibited high stability across a broad pH range (3–12). Under simulated gastric conditions, FeC-4-1 released nearly 100% of its iron and VC within 10 min, ensuring efficient cellular iron uptake. FeC-4-1 also demonstrated superior oxidation resistance compared to FeSO4, exhibiting 2.5-fold lower color change in polyphenol-rich banana milk after 2-h treatment. Long-term storage studies revealed that FeC-4-1 maintained 60% of its initial total iron content with the ferrous iron fraction remaining at ∼80% after 12 months, indicating minimal oxidation over time. Bioaccessibility studies following an established INFOGEST protocol showed that FeC-4-1 provided about 2-fold higher bioaccessible iron compared to FeSO4 under room temperature conditions. In addition, FeC-4-1 resulted in approximately a 3.2-fold increase in total intracellular iron compared to FeSO4 in Caco-2 cells. Sensory evaluation results demonstrated that FeC-4-1 fortification at 16 mg per serving (50% RDA of iron) in bouillon soup did not alter flavor or mouthfeel. These findings suggest that FeC-4-1 is a technically feasible and effective iron fortificant, offering enhanced stability, bioaccessibility, and consumer acceptability for in-home iron fortification.
</summary>
<dc:date>2025-07-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polyanhydride-Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single-Injection Self-Boosting Vaccines</title>
<link href="https://hdl.handle.net/1721.1/163057" rel="alternate"/>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Xiao, Ruiqing</name>
</author>
<author>
<name>Gao, Wenhao</name>
</author>
<author>
<name>Garcia, Johnny</name>
</author>
<author>
<name>Pan, Xinyan</name>
</author>
<author>
<name>Daristotle, John L</name>
</author>
<author>
<name>Forster, Timothy</name>
</author>
<author>
<name>Han, Jooli</name>
</author>
<author>
<name>Chaddah, Mehr</name>
</author>
<author>
<name>Varshney, Dhruv</name>
</author>
<author>
<name>Menon, Nandita</name>
</author>
<author>
<name>McHugh, Kevin J</name>
</author>
<author>
<name>Pedretti, Benjamin J</name>
</author>
<author>
<name>Yeo, Jing Ying</name>
</author>
<author>
<name>Yang, Xin</name>
</author>
<author>
<name>MacDonald, Sydney</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/163057</id>
<updated>2026-03-08T03:26:05Z</updated>
<published>2025-08-14T00:00:00Z</published>
<summary type="text">Polyanhydride-Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single-Injection Self-Boosting Vaccines
Zhang, Linzixuan; Xiao, Ruiqing; Gao, Wenhao; Garcia, Johnny; Pan, Xinyan; Daristotle, John L; Forster, Timothy; Han, Jooli; Chaddah, Mehr; Varshney, Dhruv; Menon, Nandita; McHugh, Kevin J; Pedretti, Benjamin J; Yeo, Jing Ying; Yang, Xin; MacDonald, Sydney; Langer, Robert; Jaklenec, Ana
Single‐Injection Self‐Boosting Vaccines A single‐injection platform for self‐boosting vaccines is developed using a polyanhydride‐based delivery system. The platform enables pulsatile antigen release, protects pH‐sensitive cargo, and elicits immune responses comparable to traditional multi‐dose regimens. Machine learning enhances design by accurately predicting release profiles, offering a promising solution to improve global vaccine coverage and reduce under‐immunization. More details can be found in article number 2501168 by Robert Langer, Ana Jaklenec, and co‐workers.
</summary>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gastrointestinal neuroprosthesis for motility and metabolic neuromodulation</title>
<link href="https://hdl.handle.net/1721.1/163056" rel="alternate"/>
<author>
<name>Srinivasan, Shriya</name>
</author>
<author>
<name>Antonini, Marc-Joseph</name>
</author>
<author>
<name>Alshareef, Amro</name>
</author>
<author>
<name>Sahasrabudhe, Atharva</name>
</author>
<author>
<name>Jenkins, Josh</name>
</author>
<author>
<name>Ishida, Keiko</name>
</author>
<author>
<name>Kuosmanen, Johannes</name>
</author>
<author>
<name>Hayward, Alison</name>
</author>
<author>
<name>Min, Seokkee</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Anikeeva, Polina</name>
</author>
<author>
<name>Traverso, Giovanni</name>
</author>
<id>https://hdl.handle.net/1721.1/163056</id>
<updated>2026-03-08T03:25:55Z</updated>
<published>2025-08-10T00:00:00Z</published>
<summary type="text">Gastrointestinal neuroprosthesis for motility and metabolic neuromodulation
Srinivasan, Shriya; Antonini, Marc-Joseph; Alshareef, Amro; Sahasrabudhe, Atharva; Jenkins, Josh; Ishida, Keiko; Kuosmanen, Johannes; Hayward, Alison; Min, Seokkee; Langer, Robert; Anikeeva, Polina; Traverso, Giovanni
Gastrointestinal (GI) dysmotility and associated conditions affect over 20% of the population, yet pharmacological, behavioural, and surgical interventions offer limited therapeutic efficacy. Targeted electrical stimulation addressing the underlying neuromuscular pathology stands to transform our ability to treat dysmotility. Here, we developed a closed-loop GI neuroprosthesis which activates or relaxes GI tract musculature through electrochemical stimulation in response to sensed food stimuli. We additionally describe a tool supporting minimally invasive, endoscopically guided implantation that can penetrate the mucosa, accurately localize the submucosa, and safely deploy this device to directly interface with the enteric nervous system. The neuroprosthesis enables generation of coordinated peristaltic waves, significantly increasing the motility rate in a swine model of oesophageal and stomach dysmotility (p &lt; 0.05, Student's t-test). Further, by directly modulating the myenteric plexus and thus mimicking meal ingestion, we induce peristalsis in a fasted state and achieve a metabolic response commensurate with a fed or satiated state. This neuroprosthesis and implantation platform expand opportunities in fundamental studies and treatments of metabolic and neuromuscular pathologies affecting the GI tract.
</summary>
<dc:date>2025-08-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Operational Value Stream Analysis for Developmental Excellence</title>
<link href="https://hdl.handle.net/1721.1/163055" rel="alternate"/>
<author>
<name>Shaw, Eric T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163055</id>
<updated>2025-10-07T04:14:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Operational Value Stream Analysis for Developmental Excellence
Shaw, Eric T.
The aerospace and defense industry faces increasing challenges in new product development, where financial constraints and risk aversion hinder innovation. Using a multidisciplinary approach that integrates contract theory, computational fluid dynamics (CFD), and machine learning, this research explores the impacts of engineering requirements, financial alignment among stakeholders, and improved efficiencies in predictive modeling techniques for two separate air vehicle programs: A and B. A Monte Carlo analysis using SEER-H estimation software quantifies the financial and schedule impacts of engineering requirements, revealing a 10–30% cost increase due to volatility in air vehicle development design parameters. Moreover, a game-theoretic contract negotiation simulation illustrates the importance and opportunity of financial incentive alignment among key stakeholders. Additionally, predictive analytics leveraging machine learning models better capture the relevant flow mechanics, improving the circumferential distortion estimations in nacelle aerodynamics by over 10% compared to traditional heuristics. Finally, a CFD-based actuator disk source modeling approach demonstrates a 60% reduction in steady-state distortion at some portions of the flight envelope, due to the impact of the fan upstream influence on inlet flow distortion suggesting increased operational capability for the air vehicle program B. This research provides actionable recommendations to enhance the operational value stream of new air vehicle program development, emphasizing the need for pre-RFP requirements validation, advanced machine learning applications for predictive engineering, and refined CFD modeling to identify technical risks earlier in the design process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow</title>
<link href="https://hdl.handle.net/1721.1/163054" rel="alternate"/>
<author>
<name>Sonandres, Jake T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163054</id>
<updated>2025-10-07T04:15:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow
Sonandres, Jake T.
In this work, we present a computational framework for modeling the coupled dynamic interactions of highly flexible slender filaments immersed in a viscous flow and their entanglement with themselves and moving structures. This work is motivated by a novel drone countermeasure that entangles propellers with flexible filament clouds, inducing a loss of thrust and control authority in the drone. However, the framework is relevant to a wider range of applications, including actin filaments in cell biology, carbon nanotubes in composite materials, and rope-like structures in industrial settings. Each filament is modeled with the three-dimensional geometrically exact Kirchhoff-Love torsion-free finite element beam formulation. The fluid flow resulting from filament aerodynamic interaction is described through a Boundary Integral (BI) formulation of the incompressible Stokes equations based on the Stokeslet discretization. The heavy computational load of the resulting dense system is addressed through the use of fast GPU-based dense linear solvers. The BI formulation is coupled to the filament solid mechanics by enforcing momentum balance at the dynamically evolving filament-fluid interface. Additionally, the solid contact interactions between filaments are modeled with a point-to-point frictional contact algorithm that applies discrete contact and frictional forces at the closest point between the beam elements. We address the difficulties associated with contact between elements represented with third-order Hermitian polynomial shape functions and the strategies adopted to overcome these challenges. To capture propeller fouling for drone countermeasures, we incorporate a propeller and motor model whose thrust and torque responses are affected by contact interactions during entanglement. We verify our framework against simple analytical solutions and demonstrate its capabilities with numerical examples that attempt to capture large-scale filament entanglement behavior. 
In particular, we apply our methodology to demonstrate the process by which filament entanglement can restrict motion and reduce the efficacy of propellers. The results show that the framework can be used to understand the connection between filament entanglement, key system properties, and the resulting thrust generated by the propeller.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global sustainable aviation fuel production potential from current agricultural production: a holistic data analytics and systems analysis approach</title>
<link href="https://hdl.handle.net/1721.1/163053" rel="alternate"/>
<author>
<name>Martin, Estelle Claude Aline</name>
</author>
<id>https://hdl.handle.net/1721.1/163053</id>
<updated>2025-10-07T04:14:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Global sustainable aviation fuel production potential from current agricultural production: a holistic data analytics and systems analysis approach
Martin, Estelle Claude Aline
Aviation contributes significantly to global greenhouse gas emissions, driven primarily by its dependency on fossil-based jet fuel. Sustainable Aviation Fuel (SAF) offers a short-term option to mitigate these emissions. However, its current scalability remains limited, constrained by access to sustainable biomass. Realizing SAF’s potential in the near term, using the agricultural and industrial systems already in place, requires a detailed understanding of biomass availability, resource competition, and the scalability of SAF production. This thesis presents a comprehensive system analysis framework and a data-driven methodology for evaluating SAF production potential based on current agricultural output, without assuming land expansion or major yield improvements and while preserving food utilization. It evaluates the SAF production potential from increasing biomass availability by redirecting biomass currently used for some non-food purposes, and by utilizing processing and agricultural residues. In-depth analysis of four high-potential case studies, one for each main biomass family (starchy, sugary, oily, and fats and greases), was used to construct a detailed model of the supply chain. This structure was then applied globally across all countries and relevant feedstocks to estimate SAF production potential and associated system requirements.&#13;
&#13;
Findings from the case studies show that these four high-potential opportunities could collectively meet only up to 13.1% of global jet fuel demand in 2023, assuming 100% neat SAF. The global analysis estimates that the SAF production potential from the considered streams of increased biomass availability could meet up to about two-thirds of global jet fuel demand, with 28.7% derived from agricultural residues, 25.9% from redirected main products, and 12.5% from processing residues. These contributions hence remain insufficient to fully displace fossil jet fuel. This work provides an estimate of what could be achieved using the existing agricultural and industrial systems, what resource would be required, and how it compares to global resource availability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal inference for complex systems and applications to turbulent flows</title>
<link href="https://hdl.handle.net/1721.1/163052" rel="alternate"/>
<author>
<name>Sánchez, Álvaro Martínez</name>
</author>
<id>https://hdl.handle.net/1721.1/163052</id>
<updated>2025-10-07T04:14:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Causal inference for complex systems and applications to turbulent flows
Sánchez, Álvaro Martínez
Causality lies at the heart of scientific inquiry, serving as the fundamental basis for understanding interactions among variables in physical systems. Despite its central role, current methods for causal inference face significant challenges due to nonlinear dependencies, stochastic interactions, self-causation, collider effects, and influences from exogenous factors, among others. While existing methods can effectively address some of these challenges, no single approach has successfully integrated all these aspects. Here, we address these challenges with SURD: Synergistic-Unique-Redundant Decomposition of causality (Nat. Commun., vol. 15, 2024, p. 9296). SURD quantifies causality as the increments of redundant, unique, and synergistic information gained about future events from past observations. The formulation is non-intrusive and applicable to both computational and experimental investigations, even when samples are scarce. We benchmark SURD in scenarios that pose significant challenges for causal inference and demonstrate that it offers a more reliable quantification of causality compared to previous methods. We further illustrate the applicability of our approach in two turbulent-flow scenarios: the energy transfer across scales in the turbulent energy cascade and the interaction between motions across scales in a turbulent boundary layer. Our results show that, without accounting for redundant and synergistic effects, traditional approaches to causal inference may lead to incomplete or misleading conclusions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Theoretic Process Analysis of Sociotechnical Systems</title>
<link href="https://hdl.handle.net/1721.1/163051" rel="alternate"/>
<author>
<name>Harrington, Polly</name>
</author>
<id>https://hdl.handle.net/1721.1/163051</id>
<updated>2025-10-07T04:14:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Systems Theoretic Process Analysis of Sociotechnical Systems
Harrington, Polly
The safety and success of complex modern systems, such as hospitals, aircraft, or software, depend on their ability to integrate people and technical components. For example, doctors must be able to use their computerized surgical tools to treat their patients successfully, airplane pilots must be able to operate the required controls for takeoff and landing, and regulators must be able to interpret the data they receive to make critical decisions. However, designing systems that facilitate safe interactions between humans and technology is not a simple task. System designers must consider not only the constraints of the technical components but also human requirements throughout the entire system. However, accidents in modern systems continue to prove that more work is needed to identify and prevent unsafe interactions between humans and technology. Systems Theoretic Process Analysis (STPA) is a hazard analysis methodology based on systems theory that has been used to improve system safety in various industries, including healthcare, aviation, nuclear power, and automotive design. However, if hazard analysts using STPA lack significant expertise in human factors engineering (HFE), they may be unable to thoroughly and rigorously identify critical unsafe interactions. This thesis presents a process for utilizing HFE to improve the results of STPA analyses on sociotechnical systems. In particular, the process focuses on the thorough identification of causal scenarios in sociotechnical systems by incorporating relevant human factors concepts. The process allows analysts without significant training in HFE to improve their ability to identify useful scenarios for humans in their system. The effectiveness of the improved process is demonstrated using a healthcare case study on over-the-counter clinical laboratory tests in the United States.
By establishing a process for non-HFE experts to use when conducting STPA analyses, more systems can be developed that enhance human performance rather than increase conflict between humans and the engineered system.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Algorithms for Quantitative Analysis of Long Electrical Arcs in Crossflows</title>
<link href="https://hdl.handle.net/1721.1/163050" rel="alternate"/>
<author>
<name>Lin, Fayleon</name>
</author>
<id>https://hdl.handle.net/1721.1/163050</id>
<updated>2025-10-07T04:14:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Algorithms for Quantitative Analysis of Long Electrical Arcs in Crossflows
Lin, Fayleon
A single lightning strike can deliver a steady current of hundreds of amps during its attachment to an aircraft. Therefore, it is imperative to have an adequate lightning protection system in the aircraft to minimize the probability of catastrophic accidents. Current guidelines for lightning protection systems are based on prior service experience and historical data, which might become insufficient for future-generation aircraft. These often adopt novel, unconventional designs that deviate significantly from current ones. Therefore, efforts are underway to update these guidelines with novel methods, such as designs aided by numerical simulation that can accurately model the behavior of lightning attachment and the subsequent swept-stroke phase. To aid in the development of these numerical methods, ample data on not only the electrical arcs but also their interactions with the surrounding flow are necessary for validation. However, most studies on long electrical arcs lack a detailed investigation of the coupling between the electrical arcs and the surrounding flow field. For that purpose, teams from the Massachusetts Institute of Technology (MIT), ONERA, and Universitat Politècnica de Catalunya (UPC) conducted an extensive experimental campaign in April 2024 that investigated this coupling in detail for the first time. Data gathered from this experiment include electrical properties of the arc, high-speed video of the arc column, and the velocity field of the surrounding flow. Approximately 200 cases were run with various geometrical and electrical configurations. To meaningfully analyze all the data, a set of algorithms was developed to automatically process, analyze, and visualize these data.
Detailed analysis of the root and column behavior was performed; electrical properties were verified to be consistent with literature values; and coupling between the velocities of the arc column and the flow field was determined by simultaneous visualization of both data forms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions</title>
<link href="https://hdl.handle.net/1721.1/163049" rel="alternate"/>
<author>
<name>Bahlous-Boldi, Adam A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163049</id>
<updated>2026-01-13T19:42:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions
Bahlous-Boldi, Adam A.
As space missions push toward smaller, lighter, and more deployable instrumentation, diffractive optical elements (DOEs) offer a compelling alternative to traditional optics. Their ability to focus light through engineered phase profiles rather than curved surfaces allows for large-aperture, flat optics that are far lighter and easier to package for launch. However, this benefit comes with trade-offs: DOEs are sensitive to wavelength mismatch, manufacturing errors, and environmental deformations—especially thermal gradients and membrane tensioning in space. This thesis develops a comprehensive framework for understanding and simulating the performance of DOEs under realistic operating conditions. Beginning from first principles, the work contrasts geometric and wave-optical models for Fresnel zone plates and multilevel diffractive lenses, leading to quantitative predictions of diffraction efficiency and PSF quality under non-idealities. A key contribution is the analytical and numerical analysis of how uniform thickness errors, wavelength mismatches, and thermal expansions degrade optical performance, both in efficiency and wavefront fidelity. To evaluate these effects in detail, a flexible simulation tool was developed in MATLAB, enabling both Fourier and integral-based propagation through arbitrarily deformed DOEs. These models are applied to a conceptual space-based LIDAR system—SPECIES—that uses a deployable DOE optic to demonstrate the feasibility and limitations of this approach. The results show that DOEs can tolerate some global deformations - for example, a 1 mm deformation results in a 38% performance loss in an F3 LiDAR system with a 1 mm detector diameter. However, they remain highly sensitive to fine-scale shape errors, posing significant challenges for high-precision applications like fiber coupling or imaging. 
The findings provide new insight into the tolerances, benefits, and trade-offs of DOE-based systems in space, and lay the groundwork for future missions seeking to leverage lightweight diffractive optics for remote sensing and optical communication.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure</title>
<link href="https://hdl.handle.net/1721.1/163048" rel="alternate"/>
<author>
<name>Davalos, Daniela L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163048</id>
<updated>2025-10-07T04:14:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure
Davalos, Daniela L.
Prolonged exposure to reduced gravity environments can lead to significant deconditioning of the cardiovascular, musculoskeletal, and ocular systems. These effects increase the risk of orthostatic intolerance, bone loss, and conditions such as Spaceflight Associated Neuro-ocular Syndrome (SANS). As spaceflight missions grow longer and more frequent, especially with increased extravehicular activity (EVA) on the Moon or Mars, it is critical to develop effective countermeasures and Earth-based analogs to simulate these gravitational environments and evaluate physiological impacts. This thesis addresses these challenges through two complementary approaches. First, it presents the design and development of the MIT Moonwalker IV, a passive mechanical offloading system that simulates partial gravity by applying vertical support via a spring-cable mechanism. In a treadmill-based pilot study, one participant showed at least a 50% reduction in metabolic demand while running under simulated Martian gravity. These findings validate the Moonwalker IV as a metabolic analog for EVA task simulation. Second, this thesis evaluates a collapsible lower body negative pressure (LBNP) suit as a wearable countermeasure for micro and partial gravity environments. By applying negative pressure to the lower body, the suit helps restore the mechanical loading and hydrostatic fluid gradients typically provided by Earth’s gravity. The suit was tested in both simulated reduced gravity via a head-down/head-up tilt paradigm and true reduced gravity via parabolic flight. Each condition was evaluated both with and without –20 mmHg of LBNP. Results demonstrated that the collapsible LBNP suit produced cardiovascular responses comparable to those observed in traditional rigid LBNP chambers. It also induced lower body fluid shifts as measured by segmental leg bioimpedance, reduced intraocular pressure, and generated ground reaction forces similar to standing in 1G.
These findings support the complementary use of Earth-based analog systems to simulate partial gravity and wearable devices to simulate Earth gravity in reduced gravity environments. They offer valuable tools for preparing astronauts and preserving physiological health during long-duration space missions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unforgettable Generalization in Language Models</title>
<link href="https://hdl.handle.net/1721.1/163047" rel="alternate"/>
<author>
<name>Zhang, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/163047</id>
<updated>2025-10-07T04:14:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Unforgettable Generalization in Language Models
Zhang, Eric
When language models (LMs) are trained to forget (or “unlearn”) a skill, how precisely does their behavior change? We study the behavior of transformer LMs in which tasks have been forgotten via fine-tuning on randomized labels. Such LMs learn to generate near-random predictions for individual examples in the “training” set used for forgetting. Across tasks, however, LMs exhibit extreme variability in whether LM predictions change on examples outside the training set. In some tasks (like entailment classification), forgetting generalizes robustly, and causes models to produce uninformative predictions on new task instances; in other tasks (like physical commonsense reasoning and scientific question answering) forgetting affects only the training examples, and models continue to perform the “forgotten” task accurately even for examples very similar to those that appeared in the training set. Dataset difficulty is not predictive of whether a behavior can be forgotten; instead, generalization in forgetting is (weakly) predicted by the confidence of LMs’ initial task predictions and the variability of LM representations of training data, with low confidence and low variability both associated with greater generalization. Perhaps most surprisingly, random-label forgetting appears to be somewhat insensitive to the contents of the training set: for example, models trained on science questions with random labels continue to answer other science questions accurately, but begin to produce random labels on entailment classification tasks. Finally, we show that even generalizable forgetting is shallow: linear probes trained on LMs’ representations can still perform tasks reliably after forgetting. Our results highlight the difficulty and unpredictability of performing targeted skill removal from models via fine-tuning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guessing Random Additive Noise Decoding in Coded Multiple-Input Multiple-Output Systems</title>
<link href="https://hdl.handle.net/1721.1/163045" rel="alternate"/>
<author>
<name>Wu, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/163045</id>
<updated>2025-10-07T04:14:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Guessing Random Additive Noise Decoding in Coded Multiple-Input Multiple-Output Systems
Wu, Benjamin
Multiple-Input Multiple-Output (MIMO) wireless communication systems incorporate forward error correction (FEC) to achieve high reliability under fading and interference. In this thesis, we explore the emerging FEC paradigm of Guessing Random Additive Noise Decoding (GRAND) in a point-to-point MIMO system. &#13;
Treating GRAND as an FEC decoder disjoint from the MIMO detector, we compare the soft-decision Ordered Reliability Bits GRAND (ORBGRAND) to CRC-Assisted Successive Cancellation List (CA-SCL) decoding of the CRC-Assisted Polar (CA-Polar) [105, 128] code found in the 5G New Radio standard. For this code, we find that ORBGRAND outperforms CA-SCL (list size 16) by 1 dB E_b/N₀ at block error rate of 10⁻³, under 16-QAM and Linear Minimum Mean Square Error detection, with two transmit antennas and four receive antennas. We also show that ORBGRAND, when paired with other moderate redundancy linear codes, can yield substantial savings in the range of 0.5 − 2 dB in E_b/N₀ over CA-SCL decoding (list size 16) of CA-Polar codes with the same code parameters, for a block error rate of 10⁻³. We provide extensive benchmarks comparing ORBGRAND to CA-SCL and other soft-decision GRAND variants. We also integrate a GRAND decoder producing soft output into a MIMO iterative detection and decoding (IDD) receiver. Specifically, we apply an established technique which utilizes soft-output GRAND as the component decoder for the block turbo decoding of product codes. This block turbo decoder is evaluated as a soft output decoder within a MIMO IDD receiver. We demonstrate competitive or superior performance relative to Belief Propagation (BP) decoding of 5G Low-Density Parity Check (LDPC) codes. This approach also marks a use of GRAND for low-rate, high-redundancy FEC in a MIMO system. With GRAND in MIMO still being an emerging area of research, this work is an exploratory evaluation of GRAND for FEC in MIMO, and highlights GRAND’s potential as a versatile and performant decoder in different MIMO receiver architectures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Accuracy Predictions of Companion Classifiers for LLM Routing</title>
<link href="https://hdl.handle.net/1721.1/163044" rel="alternate"/>
<author>
<name>Wu, Jessica L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163044</id>
<updated>2025-10-07T04:14:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving Accuracy Predictions of Companion Classifiers for LLM Routing
Wu, Jessica L.
The increasing versatility of Large Language Models (LLMs) calls for developing effective routing systems to match tasks with the most suitable models, balancing accuracy and computational cost. This research introduces a novel meta-cascade routing framework that combines meta-routing, where a predictive model selects the appropriate LLM for a task, and cascading, where models are queried in sequence to optimize cost and performance. A critical component of this framework is the companion classifier, defined as a fine-tuned model trained to predict whether a particular LLM will generate an accurate response. We investigate whether incorporating features such as model responses into these classifiers can improve routing accuracy. Our preliminary experiments, using the Routerbench dataset, focus on training companion models that provide more stable and accurate routing decisions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formal Verification of Relational Algebra Transformations in Fiat2 Using Coq</title>
<link href="https://hdl.handle.net/1721.1/163042" rel="alternate"/>
<author>
<name>Teshome, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/163042</id>
<updated>2025-10-07T04:14:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Formal Verification of Relational Algebra Transformations in Fiat2 Using Coq
Teshome, Christian
Data-intensive applications often involve operations over structured datasets, such as filtering, joining, and projecting records. Relational database systems generally use query planners to optimize high-level SQL queries into efficient execution plans. While these systems apply well-established query transformations, they typically assume the correctness of these transformations rather than formally proving them. The absence of formal guarantees can be a significant limitation for systems with strict correctness requirements. This thesis contributes to Fiat2, a Python-like high-level programming language for data-intensive workloads that integrates formal verification via the Coq proof assistant. We focus on proving the correctness of several rewrite-based query optimizations commonly used in database engines. Specifically, we formalize and prove the correctness of algebraic rewrites involving combinations of filters, joins, and projections, as well as join-reordering rewrites. All rewrites are proven in Coq to preserve the semantics of the original program under list semantics, meaning that the output lists are fully equivalent (or permutations, in the case of join reordering). These verified rewrites serve as a foundation for future optimization in Fiat2, enabling significant optimizations while preserving the semantics of the original queries with correctness guarantees. The results demonstrate the feasibility of integrating formally verified query optimizations into a practical high-level programming language.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Converting PyTorch Models to StreamIt Pipelines</title>
<link href="https://hdl.handle.net/1721.1/163041" rel="alternate"/>
<author>
<name>Rajvee, Muhender Raj</name>
</author>
<id>https://hdl.handle.net/1721.1/163041</id>
<updated>2025-10-07T04:14:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Converting PyTorch Models to StreamIt Pipelines
Rajvee, Muhender Raj
With the rise of large language models, there have been efforts to optimize machine learning inference to support a large volume of queries. Currently, the two main ways to do this are running optimized kernels for computing the forward inference pass and distributing computation across multiple GPUs or different cores in a GPU. Machine learning libraries such as PyTorch produce dynamic computation graphs in order to represent the forward pass of the model. PyTorch allows conversion of these dynamic graphs into static ones through just-in-time (JIT) compilation. These graphs can then be optimized further by the compiler. We propose an alternate way of optimizing these dynamic graphs. We convert the dynamic computation graph of PyTorch to pipelines in StreamIt, a domain specific language (DSL) for streaming applications, and use the multi-stage compilation property of BuildIt to compile this pipeline in stages to inference code. We found that, while the inference latencies of models compiled in this way are slightly higher, they are still comparable to those of PyTorch models and are open to future optimizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Interactive Visual Paradigm for Knowledge Graph Question-Answering</title>
<link href="https://hdl.handle.net/1721.1/163040" rel="alternate"/>
<author>
<name>Ramkumar, Vayd Sai</name>
</author>
<id>https://hdl.handle.net/1721.1/163040</id>
<updated>2025-10-07T04:14:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Interactive Visual Paradigm for Knowledge Graph Question-Answering
Ramkumar, Vayd Sai
In an era of information overload, verifying data reliability and provenance is critical, yet knowledge graphs (KGs) often remain complex for non-expert users. This thesis introduces TRACE (Reasoning and Answer-path Comprehension Engine), a visualization tool enhancing transparency in KG question answering (KGQA). By abstracting intricate KGs into intuitive meta-nodes, TRACE simplifies exploration of large, multi-topic datasets. Its interactive interface allows users to navigate semantic communities and trace reasoning paths, fostering trust through clear answer derivation. Unlike cluttered traditional graph visualizations, TRACE’s meta-node approach provides a scalable, user-friendly solution, concealing technical complexities while enabling robust query validation. Large language models support natural language query parsing and community summarization, making KGs accessible to diverse audiences. TRACE positions itself as a vital widget for information platforms, empowering users to counter misinformation confidently. A user study and pipeline evaluation confirmed that TRACE’s intuitive interface excels for complex queries, though multi-hop paths pose challenges, while processing tests demonstrated its scalable paradigm for large datasets. By prioritizing transparency and usability, TRACE redefines KGs as reliable tools for knowledge discovery, laying a foundation for future systems to deliver trustworthy, accessible information in a digital landscape fraught with uncertainty.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spectral Analysis of Local Atomic Environments</title>
<link href="https://hdl.handle.net/1721.1/163039" rel="alternate"/>
<author>
<name>Phung, Tuong</name>
</author>
<id>https://hdl.handle.net/1721.1/163039</id>
<updated>2025-10-07T04:14:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Spectral Analysis of Local Atomic Environments
Phung, Tuong
The representation of local environments is a cornerstone challenge in computational materials science, with profound implications for property prediction and materials discovery. This thesis presents a comprehensive investigation of spectral descriptors constructed from spherical harmonic expansions to represent the geometries of local atomic environments. Systematic computational experiments evaluate the robustness of these descriptors to geometric perturbations and their capacity to differentiate structurally similar configurations. The findings reveal a clear performance hierarchy, with higher-order descriptors offering increased geometric expressivity and reconstruction accuracy in resolving challenging structural cases. This research further examines methods for inverting spectral representations back to atomic coordinates, demonstrating that directly optimizing three-dimensional positions through gradient-based techniques yields markedly better reconstruction accuracy than approaches operating in Fourier space. Dimensionality reduction via latent space embeddings is also explored, showing that essential geometric features can be preserved in significantly compressed representations. Through methodical analysis of descriptor limitations, performance boundaries, and sensitivity to hyperparameters, this work establishes practical benchmarks and implementation guidelines for spectral descriptors. These contributions strengthen the foundation for reliable machine learning models in computational materials science, advancing both the accuracy and efficiency of atomic-scale modeling for materials design and discovery.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Shipping Container for Package-Less Units</title>
<link href="https://hdl.handle.net/1721.1/163038" rel="alternate"/>
<author>
<name>Minja, Baraka</name>
</author>
<id>https://hdl.handle.net/1721.1/163038</id>
<updated>2025-10-07T04:14:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Shipping Container for Package-Less Units
Minja, Baraka
Package-less shipping aims to deliver units without company X’s added packaging. This requires fulfillment systems and processes that handle units more gently. Part of this change involves the design and implementation of a container that will carry units from a distribution center to a delivery facility. This thesis presents the container analysis that was completed to determine the optimal container features and container type for package-less shipping.&#13;
Collapsible bags provide the best solution for package-less shipping in comparison to nestable and collapsible totes. Since ergonomic weight is the limiting constraint, the lower weight of the collapsible bag will allow for 1 or 2 more units per container. In addition, it benefits from (1) lower process cost for returning to dock (3.7% cost reduction as compared to a nestable tote), (2) better ergonomics (the collapsible tote has undesirable pinch points), and (3) improved cycle time (estimated 2 s to open/collapse compared to 4 s for the collapsible tote).&#13;
Additional considerations that require more analysis relate to units per container and relocation.  Based on company X’s past orders and unit types for the package-less shipping process, it is estimated that ~210 units per container (17.08 cu. Ft.) is the max achievable for NA before it reaches the ergonomic weight cap. However, company X is expecting the package-less shipping distribution center process to be constrained to ~105-133 units. Analysis of container relocation from delivery facilities to distribution centers indicates it is worthwhile investigating alternative relocation strategies in lieu of dedicated 53-foot container trailers to achieve lower relocation costs. &#13;
The collapsible bag is the best option assuming it has an expected lifetime of at least two years, the point at which its NPV exceeds that of the two alternatives. These results are sensitive to the assumptions made, and the analysis should be fine-tuned once the end-to-end package-less shipping process has been fully mapped out.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformation Tolerance of Facial Recognition Technology and Informative Evaluation Metrics</title>
<link href="https://hdl.handle.net/1721.1/163037" rel="alternate"/>
<author>
<name>Nakamura, Haley Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/163037</id>
<updated>2025-10-07T04:14:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transformation Tolerance of Facial Recognition Technology and Informative Evaluation Metrics
Nakamura, Haley Marie
Over the last decade, machine-learning-based facial recognition (FR) systems have continued to increase in popularity while spreading to unique deployment settings. Despite the large variance among FR input distributions, popular facial recognition benchmarks continue to characterize system performance using one aggregate score over a single dataset. In many cases, the limitations of this score are unclear to downstream users: assuming benchmark accuracy is high, how is it expected to change for an image sampled from a distinct distribution? Which transformations can the model handle robustly, and which cause failure? Meanwhile, there is a large body of human facial perception research that aims to understand the underlying mechanisms of human recognition. This field offers methodological inspiration for more informative evaluation techniques, including the characterization of recognition performance as a function of a quantifiable input transformation. This work performs such an analysis. The performance scores of five state-of-the-art FR models are characterized as a function of Gaussian blur strength, crossed with color variation. The performance-blur relationship is modeled as an s-curve, creating a highly interpretable format for discussion. Blur strength consistently had a statistically significant effect on performance, but color variation did not significantly impact any model. Results are then compared to prior human recognition experiments. The best models outperform humans in low-blur regimes while humans outperform all models in high-blur regimes. These results motivate the need for modern benchmarks that capture a range of input distributions. The analysis presented can lead to a deeper understanding of FR systems and provide a clearer interpretation of how model performance changes under quantified distribution shifts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach</title>
<link href="https://hdl.handle.net/1721.1/163036" rel="alternate"/>
<author>
<name>Magzoub, Amna Ahmed Eltayeb</name>
</author>
<id>https://hdl.handle.net/1721.1/163036</id>
<updated>2025-10-07T04:13:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach
Magzoub, Amna Ahmed Eltayeb
In highly regulated industries such as medical devices, accelerating New Product Development (NPD) without compromising quality or compliance is a persistent challenge. This thesis investigates the design transfer process, a critical yet under-examined phase of NPD, as a strategic lever to reduce time-to-market. The project uses swimlane flowcharts and Design Structure Matrices (DSM) to map real-world processes, identify breakpoints, and classify rework (both planned and unplanned) in four case studies from Stryker Corporation. Key patterns emerged across case types: insufficient early-stage validation, misaligned cross-functional communication, and inadequate integration with suppliers were recurrent drivers of inefficiency. Comparative analysis revealed that concurrent engineering practices and knowledge sharing significantly reduce unplanned rework cycles and improve development speed. The study proposes actionable recommendations for optimizing design transfer, including: leveraging corporate know-how through intentional knowledge transfer meetings during process benchmarking, increased risk-taking during the development process by embracing concurrent engineering approaches, and investing in early-stage co-development by adopting regular collaboration activities with suppliers. These findings can inform broader process improvements in the development of medical devices and serve as a blueprint for other complex, cross-functional environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of the Solar Cycle on Satellite Orbital Lifetime</title>
<link href="https://hdl.handle.net/1721.1/163035" rel="alternate"/>
<author>
<name>Lisy, Celvi A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163035</id>
<updated>2025-10-07T04:15:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Effect of the Solar Cycle on Satellite Orbital Lifetime
Lisy, Celvi A.
The lifetime of a satellite in Low Earth Orbit (LEO) is affected by the 11-year solar cycle. At a fixed altitude, increasing solar activity increases atmospheric density, which leads to an increase in drag and a decrease in mission lifetime without using propulsion to recover altitude. Satellites may have longer orbital lifetimes if more of their mission is operational during a solar minimum due to lower solar activity and lower atmospheric drag. Satellites with larger area-to-mass ratios generally have shorter orbital lifetimes than satellites with small area-to-mass ratios. Missions that get delayed and have more of their operations during solar maximum than originally planned may have too short a mission lifetime or, conversely, may be at risk of increasing their orbital lifetime past regulatory limits (five years for satellites in LEO according to the FCC) if they launch closer to solar minimum. For example, a satellite with an area-to-mass ratio of 0.014 m²/kg – such as a 1U CubeSat – and a one-year mission launched in 2021 without onboard propulsion would have an orbital lifetime of 1.051 years. However, if that mission were delayed a year, a common occurrence in the industry, it would no longer be able to achieve its mission, as its orbital lifetime with a deployment in 2022 is 0.44 years. Conversely, if the same 1U CubeSat is launched during solar maximum in January 2025, it would have an orbital lifetime of 2.2 years and would re-enter in February 2027. However, if that mission were delayed a year, the satellite would launch in January 2026 and instead be in orbit for 6.4 years before re-entering. The operator could be fined for violating the FCC deorbit limit of five years. This thesis quantifies the effect of launch or processing delays on satellite orbital lifetime based on orbit altitude and vehicle parameters such as mass, cross-sectional area, and bus size.
In general, it is found that four-year and six-year delays have the greatest effect on a satellite’s orbital lifetime because the satellite will be deorbiting almost half a solar cycle (5.5 years) from its intended deployment year. However, two-year delays can still affect satellite operators, as they can increase the orbital lifetime, even by up to 1.5 years for low area-to-mass ratio satellites in 400 km orbits and almost five years for satellites in orbits higher than 500 km. Two-year delays can also decrease the orbital lifetime of a satellite by up to 1.7 years for low area-to-mass ratio satellites in 400 km orbits and almost two years at altitudes higher than 500 km.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors</title>
<link href="https://hdl.handle.net/1721.1/163034" rel="alternate"/>
<author>
<name>Rao, Sankarsh R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163034</id>
<updated>2025-11-24T15:39:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors
Rao, Sankarsh R.
This thesis provides an introduction to transmission line theory (telegrapher’s equations) as the mathematical background needed to correctly perform and interpret electrical measurements in nanosecond pulsed discharge reactors. The mathematical framework is implemented in a numerical tool called VI-View, which is made available to the community to aid with the interpretation of electrical measurements and help explain discrepancies between different experimental arrangements and probe configurations. A brief manual on how to use the tool is provided, followed by a series of six case studies relevant to experimental setups/situations encountered in practice. The analysis of these case studies summarizes best practices when performing electrical and energy measurements in nanosecond pulsed discharge reactors. Case Studies 1 and 2 cover in-situ and remote measurements for reactors using one voltage and one current probe. Case Study 3 covers how two current probes, one on the high-voltage end and one on the low-voltage end, can achieve the same energy measurements as Case Studies 1 and 2. Case Studies 4 and 5 show how cables with varying lengths and dissimilar properties — as can sometimes be encountered in practice — affect the electrical signals. Case Study 6 shows how a variable resistance — a step drop from 50 MΩ to 10 Ω — within a load can be a first approximation to a plasma reactor with a discharge. Finally, an outlook on how these case studies connect to real, experimental waveforms is presented along with the limitations of the tool.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Methods for Setting Effective Aviation NOₓ Policies</title>
<link href="https://hdl.handle.net/1721.1/163033" rel="alternate"/>
<author>
<name>Reider, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/163033</id>
<updated>2025-10-07T04:15:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stochastic Methods for Setting Effective Aviation NOₓ Policies
Reider, Sarah
Nitrogen Oxides (NOₓ) from aviation emissions are well known to have detrimental effects on air quality and the climate. Presently, they are regulated to preserve local air quality around airports. As part of the regulation process, aircraft engines are placed on a test stand with NOₓ levels measured at different thrust settings meant to mimic the aircraft’s emissions during landing and take-off. These are then constrained as a function of the engine’s overall pressure ratio (OPR) and rated thrust, with the allowed NOₓ emissions increasing with OPR. Despite increases in the stringency of this regulation, recent research suggests it is insufficient to prevent surface air quality degradation from NOₓ emissions at cruise. Moreover, at high OPRs, NOₓ emissions increase substantially for relatively small reductions in fuel burn. In light of this, a new metric representative of cruise emissions is being investigated. This work considers effective methods to define this new regulation given a wide range of uncertainties in the tradeoff between NOₓ and CO₂ emissions at high OPRs. First, the combined climate and air quality cost of NOₓ from aviation cruise emissions is estimated at ∼$95,000/tonne using a 2019 flight inventory. Then, cruise limits are proposed, informed by the combined impact of NOₓ and CO₂ at cruise and with a similar slope to the current LTO standard. Finally, a Monte Carlo simulation is run, sampling NOₓ and CO₂ social costs for a series of hypothetical aircraft designed using the open-source Transportation Aircraft System OPTimization (TASOPT) model. This work takes a worst-case scenario approach, where the only response engine manufacturers can make to stricter standards is to reduce OPR and sacrifice fuel efficiency. Each aircraft’s emissions are evaluated during cruise to determine the probability of increasing environmental harm under different policy scenarios given these uncertainties.
The combined cost of NOₓ and CO₂ is compared to that of the baseline engines that meet current regulations for each scenario. Results show defining a cruise metric informed by the weighted combined cost of CO₂ and NOₓ could reduce total environmental cost at cruise by 15 – 43% while carrying a 6 – 7.4% risk of increasing total environmental cost for wide-body aircraft engines in the most stringent scenario. Less stringent scenarios showed similar risks of increasing harm for smaller potential environmental savings. In all cases, the risks associated with the proposed limits are driven by low-likelihood extremes in the uncertainty distributions of NOₓ and CO₂, further suggesting the benefit of an environmentally conscious standard.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain-Independent Mode Estimation for Human-Robot Collaboration</title>
<link href="https://hdl.handle.net/1721.1/163032" rel="alternate"/>
<author>
<name>Gomez, Annabel Reyna</name>
</author>
<id>https://hdl.handle.net/1721.1/163032</id>
<updated>2025-10-07T04:15:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Domain-Independent Mode Estimation for Human-Robot Collaboration
Gomez, Annabel Reyna
To collaborate safely and intelligently with humans, robots must infer high-level semantic states, such as intentions or interaction modes, from uncertain sensor input. While dynamic, probabilistic mode estimation is commonly used in fault diagnosis, this thesis extends the problem to activity recognition, where the goal is to estimate qualitative, symbolic human-object interaction states in real time. Robust human activity recognition is essential for collaborative and assistive robotics, particularly in dynamic or safety-critical environments. The core solution presented in this thesis is a mode estimator and its efficient implementation using the A* with bounding conflicts (A*BC) algorithm. The estimator performs best-first enumeration over symbolic activity states while integrating recursive Bayesian filtering to maintain belief under noisy observations. Unlike low-level trajectory tracking or deep-learned classifiers, qualitative spatial filtering operates at the right level of abstraction to recognize symbolic actions. It can also generalize across domains with minimal retraining and support efficient, probabilistically grounded reasoning about uncertainty in both perception and symbolic mode transitions. The proposed system fuses RGB-D perception, object segmentation, qualitative spatial reasoning (QSR), and probabilistic inference into a real-time pipeline capable of tracking and inferring symbolic human-object interaction states. Evaluated in a human-robot rehabilitation setting, this domain-independent system successfully infers latent human and object activity states from noisy RGB-D data. It resolves ambiguity using Vision-Language Model (VLM)-guided semantic arbitration and demonstrates robustness and adaptability in unstructured environments. This work establishes qualitative spatial filtering with A*BC as a generalizable and efficient solution for semantic activity recognition, laying the foundation for future perception-driven collaborative systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Energy Dynamics Control for Stable Power Electronic-Enabled Electric Power Systems</title>
<link href="https://hdl.handle.net/1721.1/163031" rel="alternate"/>
<author>
<name>Gada, Hiya Akhil</name>
</author>
<id>https://hdl.handle.net/1721.1/163031</id>
<updated>2025-10-07T04:15:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Distributed Energy Dynamics Control for Stable Power Electronic-Enabled Electric Power Systems
Gada, Hiya Akhil
The increasing penetration of renewable and inverter-based resources is transforming modern power systems into fast, nonlinear, and heterogeneous networks. These converter-dominated systems operate on timescales much faster than traditional synchronous machines, making conventional modeling and control approaches, rooted in quasi-static phasor analysis and centralized architectures, inadequate for ensuring stability and scalability. This thesis adopts an energy space modeling approach grounded in first principles of energy conservation and system interconnection. It extends the previously introduced second-order energy dynamics model by relaxing the assumption that energy in tangent space can be treated as an independent disturbance. The resulting contribution is a third-order model that treats stored energy in tangent space as a dynamic state, enabling more expressive and accurate modeling of fast-timescale system behavior. Leveraging this extended energy space model, the thesis develops a multilayered distributed control architecture in which the nonlinear physical dynamics of each component are lifted to the higher-level linear energy space, capturing internal energy dynamics and real/reactive power flows, and integrated with the lower-level physical dynamics with well-defined mappings. Distributed controllers are designed in this energy space using only local states and minimal neighbor interaction, assuming a system-level coordination mechanism provides consistent references. Two control strategies, energy-based feedback linearizing control and sliding mode control, are developed and shown to achieve asymptotic convergence to reference outputs. The framework is validated on two systems: an inverter-controlled RLC circuit and a synchronous generator under load. Finally, the energy space framework is extended to structurally model inter-area oscillations (IAOs).
An inter-area variable is defined as the difference between the power incident on a tie-line from Area I and the power reflected into the tie-line from Area II. Simulations on a 3-bus, 2-area system confirm consistency with eigenmode analysis and show how tie-line strength and generator inertia affect IAO dynamics. A novel resonance phenomenon is also identified: instability arising from interaction between a system’s natural IAO frequency and time-varying disturbances from intermittent DERs. This previously unmodeled behavior is captured explicitly within the energy dynamics framework and may help explain recent blackout events in the Iberian Peninsula.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pushing the Limits of Active Data Selection with Gradient Matching</title>
<link href="https://hdl.handle.net/1721.1/163030" rel="alternate"/>
<author>
<name>Zhang, Chris</name>
</author>
<id>https://hdl.handle.net/1721.1/163030</id>
<updated>2026-01-23T15:40:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pushing the Limits of Active Data Selection with Gradient Matching
Zhang, Chris
As modern machine learning systems grow in scale, the inefficiencies of training on large, noisy, and imbalanced datasets have become increasingly pronounced—particularly in computer vision, where real-world data often contain labeling errors, occlusions, and redundancy. While large models can partially compensate by training exhaustively on massive datasets, this indiscriminate approach is computationally expensive and often inefficient. Active data selection offers a more efficient alternative by prioritizing examples that contribute most to model improvement. However, existing selection strategies (such as Rho Loss) still fall short of the optimal achievable performance. In this work, we propose the Gradient Informed Selection Technique (GIST), an active data selection method that prioritizes examples based on their gradient alignment with a small, fixed holdout set. At each training step, GIST computes per-example gradients and selects those that are most aligned with the holdout gradient, thereby guiding model updates toward better generalization. We evaluate GIST on noisy (Clothing1M) and clean (ImageNet) datasets and show that it consistently outperforms baselines across a range of selection ratios—that is, the proportion of a batch of data that the model selects to update weights on. To address the computational overhead of gradient-based selection, we introduce efficient variants using restricted-layer gradients, low-rank approximations, and gradient quantization. We also analyze GIST’s selection behavior, showing that it implicitly balances classes and repeatedly selects high-utility examples—two factors that enhance both robustness and learning efficiency. Our findings suggest that a more effective data curriculum is both discoverable and practical, and that GIST is a step toward achieving it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Phase Transition for Recovering a Random Hypergraph from its Edge Data</title>
<link href="https://hdl.handle.net/1721.1/163029" rel="alternate"/>
<author>
<name>Yao, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163029</id>
<updated>2025-10-07T04:14:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Phase Transition for Recovering a Random Hypergraph from its Edge Data
Yao, Andrew
The weighted projection of a hypergraph is the weighted undirected graph with the same vertex set and edge weight equal to the number of hyperedges that contain the edge; the projection is the unweighted graph with the same vertex set and edge set consisting of edges with weight at least one. For d ≥ 3, after observing the unweighted and weighted projection of a random d-uniform hypergraph that is sampled using a generalization of the Erdős–Rényi random model, we study the recovery of a fraction of the hyperedges and the entire hypergraph. For both cases, we show that there is a sharp phase transition in the feasibility of recovery based on the density of the hypergraph, with recovery possible only when the hypergraph is sufficiently sparse. In particular, we resolve numerous conjectures from [5]. Furthermore, we present an efficient algorithm that is optimal for both exact and partial recovery. We also analyze the phase transition for exact recovery by exhibiting a regime of probabilities, below the exact-recovery threshold by a polylogarithmic factor, for which exact recovery is possible.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Mobile-Only App Generation — Offline AI Code Generation with App Inventor</title>
<link href="https://hdl.handle.net/1721.1/163028" rel="alternate"/>
<author>
<name>Yuan, Joyce</name>
</author>
<id>https://hdl.handle.net/1721.1/163028</id>
<updated>2025-10-07T04:14:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Empowering Mobile-Only App Generation — Offline AI Code Generation with App Inventor
Yuan, Joyce
As digital tools become more accessible, creating software is becoming a powerful way for anyone to make real-world impact. Computational action—the idea that learners can build computing artifacts with authentic relevance to their lives and communities—reframes computing as a tool for empowerment. Low-code platforms like MIT App Inventor support this vision by fostering digital agency through purposeful creation. Recent advances in large language models (LLMs) expand these possibilities further by enabling code generation from natural language, offering a timely opportunity to lower the barrier to app creation. MIT App Inventor has long championed accessibility, allowing even young learners in underserved regions to build meaningful mobile apps. Its natural language tool, Aptly, enables users to describe app ideas and generate functional code. However, Aptly’s reliance on cloud-based LLMs limits access for users without stable internet—often those who could benefit most. This thesis addresses that challenge by enabling AI-powered app creation to run entirely offline on mobile devices. We fine-tune and quantize LLaMA 3B using QLoRA and deploy it on iOS with MLC LLM, enabling on-device inference without internet. We also introduce a custom evaluation framework tailored to Aptly’s grammar, combining a Tree-sitter parser and a modified CodeBLEU metric to assess both semantic and syntactic quality. Using curated evaluation datasets, we benchmark out-of-the-box and fine-tuned models across prompting strategies. In our evaluations, fine-tuned GPT-4.1 achieved the highest normalized CodeBLEU score (0.36 ± 0.12) and parsed over 81% of completions, outperforming its baseline by more than 5%. QLoRA-fine-tuned LLaMA improved parseability by 11.7% over its base model, showing progress in adapting smaller models to the Aptly domain, though semantic fidelity remains a challenge.
Our results show that offline natural language–to–app generation is feasible, and that smaller models can be adapted to the Aptly domain. By lowering the technical and infrastructural barriers to app creation, this work lays the foundation to empower AI-assisted programming that is accessible, offline, and on the phone.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AutoDiff: A Scalable Framework for Automated Model Comparison</title>
<link href="https://hdl.handle.net/1721.1/163027" rel="alternate"/>
<author>
<name>Woo, Andrew Kyoungwan</name>
</author>
<id>https://hdl.handle.net/1721.1/163027</id>
<updated>2025-10-07T04:15:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AutoDiff: A Scalable Framework for Automated Model Comparison
Woo, Andrew Kyoungwan
Post-training adaptations such as supervised fine-tuning, quantization, and reinforcement learning can cause large language models (LLMs) with identical architectures to exhibit divergent behaviors. However, the mechanisms driving these behavioral shifts remain largely opaque, limiting the reliability and interpretability of adapted models. AutoDiff is a scalable, automated framework for tracing model divergence on a per-neuron basis. It exhaustively profiles every feed-forward (MLP) unit across a pair of models, identifies the neurons with the largest activation gaps, and links these differences to downstream behavioral changes. The pipeline identifies exemplars that maximize between-model activation divergence and clusters the highest-gap neurons into an interpretable, queryable difference report. Proof-of-concept experiments on GPT-2 small validate AutoDiff’s ability to rediscover synthetic perturbations without manual supervision. A larger case study on Llama 3.1–8B contrasts the base model with several adapted variants, surfacing neurons whose behavioral shifts align with observed topic-level gains and losses. By uncovering these mechanistic divergences, AutoDiff transforms black-box model updates into actionable insights, enabling safer deployment, principled debugging, and interpretable model evaluation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain</title>
<link href="https://hdl.handle.net/1721.1/163026" rel="alternate"/>
<author>
<name>Xia, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/163026</id>
<updated>2025-10-07T04:15:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain
Xia, Julia
Rapidly improving generative artificial intelligence has led to significant investments in datacenter infrastructure, driving power demand and raising environmental concerns. This has led to a growing body of research on modeling the embodied and operational carbon of datacenter servers across a variety of paradigms. However, most existing models take in deterministic inputs and output a singular average value that does not capture the inherent variability in estimating embodied and operational carbon emissions. Further, these average outputs obscure the impact of interacting factors, such as those related to deployment or software characteristics, each of which has its own underlying uncertainty distribution. In most cases, these averages therefore do not accurately represent a particular server’s context. This thesis explicitly parameterizes and quantifies the full probabilistic distribution of operational carbon in AI inference tasks. It explores several factors of variability — deployment, spatiotemporal, and computational profile — and quantifies their impact on the overall carbon footprint through statistical and sensitivity analysis. While this work focuses on operational carbon, uncertainty propagation and understanding of variability should be used across a datacenter server’s entire life cycle. When this methodology is used alongside the existing uncertainty-aware embodied carbon measurements, it enables a holistic assessment from cradle to grave. This facilitates informed decision-making in server replacement, workload scheduling, hardware procurement, capacity planning, and other scenarios.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ideator Explorer: Enhancing AI-Assisted Ideation through Interactive Visualization</title>
<link href="https://hdl.handle.net/1721.1/163025" rel="alternate"/>
<author>
<name>Wen, Haoran</name>
</author>
<id>https://hdl.handle.net/1721.1/163025</id>
<updated>2025-10-07T04:15:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ideator Explorer: Enhancing AI-Assisted Ideation through Interactive Visualization
Wen, Haoran
Current AI-assisted ideation systems, often based on linear chat interfaces, struggle to help users effectively manage the complexity of creative exploration, hindering both divergent thinking across multiple paths and the convergent synthesis of ideas. This thesis introduces and evaluates Ideator Explorer, a human-AI ideation system built upon an interactive graph visualization interface designed to overcome these limitations. The core of the system is its spatial, tree-like representation of branching idea sequences. Formative user studies indicate that this visualization approach is preferred over chat interfaces for its organizational benefits and its effectiveness in helping users track parallel lines of thought during exploration. The spatial layout inherently supports both the exploration of diverse idea branches (divergence) and the identification of potential connections (convergence). This research focuses on the design and evaluation of this interactive graph interface, examining how its specific visualization and interaction techniques impact the user’s ability to navigate, organize, and develop ideas within complex ideation processes. The primary contribution is a novel, visually driven interface paradigm for human-AI collaboration that enhances the management and exploration of the creative solution space.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Type Checker for Annotated Assembly Programs</title>
<link href="https://hdl.handle.net/1721.1/163024" rel="alternate"/>
<author>
<name>Zanders, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/163024</id>
<updated>2025-10-07T04:14:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Type Checker for Annotated Assembly Programs
Zanders, Julian
The rise of speculative-execution attacks, such as Spectre, has presented a security challenge to developers. Speculation on secret data can expose it, but running without speculation degrades runtime performance. To address this, researchers have been evaluating “smart” speculation schemes, which determine when to speculate and when not to in order to balance runtime with security. Our lab proposes Octal, a solution that utilizes software and hardware in tandem. Data values are marked as secret or public using type inference, and the veracity of the inference is checked using a type checker. Then, hardware can separate the secret and public values. My contributions were to the type checker, along with scripting to evaluate the results.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Non-Line-of-Sight Imaging Using Single-Photon LiDAR</title>
<link href="https://hdl.handle.net/1721.1/163023" rel="alternate"/>
<author>
<name>Tsao, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/163023</id>
<updated>2025-10-07T04:15:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Real-Time Non-Line-of-Sight Imaging Using Single-Photon LiDAR
Tsao, Nicholas
Robust real-time imaging systems have allowed for many advances in robotics and autonomous navigation, though limited visibility in many real-world settings remains a significant challenge. Non-Line-of-Sight (NLOS) sensing allows imaging systems to “see around corners”, expanding their range of perception and providing access to information for real-time decision-making. A promising approach to NLOS sensing is single-photon LiDAR, which is commonly used for range-finding in many imaging systems. Beyond range-finding, single-photon LiDAR systems provide a rich data source in the form of photon-count histograms of light reflected off scene geometry, capturing detailed information from multiple bounces. NLOS imaging can be achieved by parsing third-bounce light from such single-photon LiDAR sensors, which can be used for a variety of detection and localization tasks, and recent work has demonstrated capabilities in a wide range of applications. This work further develops NLOS imaging by demonstrating a fully functional system that uses low-cost, consumer-grade SPAD hardware for real-time NLOS imaging, detection, and localization. We lay the groundwork for NLOS imaging systems by developing infrastructure for real-time NLOS processing, and we examine the potential for NLOS systems to operate on cheap hardware using data-driven approaches. Our work implements and demonstrates full end-to-end capacity for these NLOS imaging systems in a number of applications, including person detection and localization, facilitating future research in this field and paving the way for NLOS integration into consumer devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Finetuning via Sparse Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/163022" rel="alternate"/>
<author>
<name>Sivakumar, Ragulan</name>
</author>
<id>https://hdl.handle.net/1721.1/163022</id>
<updated>2025-10-07T04:14:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automated Finetuning via Sparse Autoencoders
Sivakumar, Ragulan
The field of interpretability has traditionally been confined to diagnostics. This thesis presents a novel method that uses interpretability in sparse autoencoders to achieve better performance in small models via instruction finetuning. Specifically, we present UnderstandTune, an autonomous method for assembling high-quality instruction finetuning datasets with minimal human intervention, requiring only concise task descriptions rather than evaluation dataset distributions. Our empirical evaluations show that UnderstandTune consistently outperforms uninformed finetuning baselines across multiple benchmarks. Complementing this, Lalon introduces a mixture-of-informed-experts (MoIE) architecture that routes queries to specialized models independently finetuned via UnderstandTune. This modular approach achieves competitive performance against larger monolithic models in specialized domains while utilizing fewer parameters, training examples, and computational resources. The framework’s modularity enables independent optimization of components, from sparse autoencoders to MoIE routing mechanisms. This research demonstrates how interpretability can be used to enhance performance through intelligent data curation and suggests a new paradigm in which interpretability and efficiency reinforce each other toward more capable, resource-efficient AI systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Machine Learning Models for RNA Structure Prediction and Design</title>
<link href="https://hdl.handle.net/1721.1/163021" rel="alternate"/>
<author>
<name>Rubin, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/163021</id>
<updated>2025-10-07T04:14:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative Machine Learning Models for RNA Structure Prediction and Design
Rubin, Dana
Ribonucleic acid (RNA) is a fundamental molecule in biology, central to the regulation and execution of life’s most essential processes. Its diverse roles range from encoding genetic information to catalyzing biochemical reactions. Beyond its modern biological functions, RNA is also believed to have played a pivotal role in the origins of life, which underscores its evolutionary significance. Unlocking the full potential of RNA research and design requires a deep understanding of the intricate relationship between RNA’s three-dimensional structure and sequence. Predicting RNA 3D structures remains a challenging problem due to the complexity of its folding landscape and the limited availability of high-resolution structural data. Inspired by recent advances in deep learning for protein folding and design, this thesis explores novel geometric and generative architectures for modeling RNA. We first present a systematic study on RNA structure prediction using equivariant neural networks within denoising diffusion probabilistic models (DDPMs). Our folding model, named Klotho, captures local atomic interactions and structural features using SO(3)-equivariant message passing layers with a point cloud data representation. Ablation studies confirm that Klotho’s performance scales with higher dimensionality and improves when the input is enriched with secondary structure information and sequence embeddings from RNA foundation models. Building on this foundation, we introduce RiboGen, a multimodal deep learning model that jointly generates both RNA sequence and all-atom 3D structure. RiboGen integrates Flow Matching and Discrete Flow Matching within a unified multimodal representation and employs Euclidean Equivariant Neural Networks to learn geometric features.
Our results demonstrate that RiboGen can generate chemically plausible, self-consistent RNA molecules, highlighting the potential of co-generative models to explore the sequence–structure landscape of RNA in a unified, data-driven framework. Together, these contributions advance the field of RNA modeling by offering scalable, symmetry-aware architectures for prediction and design. They lay the groundwork for future generative systems in RNA biology, therapeutic development, and biotechnological innovations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resilient Object Perception for Robotics</title>
<link href="https://hdl.handle.net/1721.1/163020" rel="alternate"/>
<author>
<name>Shi, Jingnan</name>
</author>
<id>https://hdl.handle.net/1721.1/163020</id>
<updated>2025-10-07T04:10:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Resilient Object Perception for Robotics
Shi, Jingnan
A broad array of applications, ranging from search and rescue to self-driving vehicles, requires robots to perceive and understand the geometry of objects in the environment. Object perception needs to work reliably in a variety of scenarios and preserve a desired level of performance in the face of outliers and shifts from the training domain. Obtaining such a level of performance requires robust estimation algorithms that are able to identify and reject outliers, as well as techniques to continually improve the performance of learning-based perception modules at test time. In this thesis, we address these challenges by proposing (1) certifiably optimal solvers and a graph-theoretic framework that together help achieve state-of-the-art pose estimation performance even under high outlier rates, (2) self-supervised object pose estimators that can improve performance at test time with accuracy comparable to state-of-the-art supervised methods, and (3) a test-time adaptation method for both object shape reconstruction and pose estimation without the need for CAD models. Throughout the thesis, we demonstrate that by using a variety of tools from optimization and learning, we can develop resilient object perception systems that perform reliably in a wide range of conditions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines</title>
<link href="https://hdl.handle.net/1721.1/163019" rel="alternate"/>
<author>
<name>Pan, Raymond</name>
</author>
<id>https://hdl.handle.net/1721.1/163019</id>
<updated>2026-01-23T15:35:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines
Pan, Raymond
Predictive maintenance of wind turbines is a machine learning task aimed at minimizing repair costs and improving efficiency in the wind turbine and renewable energy industry. Existing machine learning solutions often fail to meet real-world deployment requirements due to fragmented pipelines, lack of domain integration, and reliance on black-box models. Zephyr, a data-centric machine learning framework, addresses these challenges by enabling Subject Matter Experts (SMEs) to incorporate their domain knowledge into the prediction process, and to leverage automated tools for labeling, feature engineering, and prediction tasks without requiring extensive technical knowledge. However, the current version of Zephyr still has limitations, including usability gaps and a reliance on external tools for certain steps. Case studies with real-world data from the renewable energy company Iberdrola demonstrate Zephyr’s potential to integrate domain expertise into wind turbine predictive maintenance (thus streamlining the process) but also expose a sub-optimal user experience. This thesis explores gaps in the current state of the Zephyr framework and proposes refinements to enhance its usability. Key improvements include the consolidation of current tooling and relevant external libraries into a single API, state management with careful logging and exception handling, and improved support for model evaluation. These enhancements aim to support seamless end-to-end predictive modeling workflows, and to provide a more refined and flexible user experience for the Zephyr user base.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimalist Approach to End-to-End Vision Language Navigation with Multi-Modal Foundation Model Features</title>
<link href="https://hdl.handle.net/1721.1/163018" rel="alternate"/>
<author>
<name>Mishra, Kartikesh</name>
</author>
<id>https://hdl.handle.net/1721.1/163018</id>
<updated>2025-10-07T04:14:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimalist Approach to End-to-End Vision Language Navigation with Multi-Modal Foundation Model Features
Mishra, Kartikesh
Recent vision-language navigation (VLN) approaches leverage large models, prompt engineering, and/or explicit reasoning for instruction interpretation and agent guidance. We introduce MiniNav, a minimalist framework employing frozen vision-language foundation models as patch-wise feature extractors, avoiding data- and compute-heavy fine-tuning and cumbersome language-model reasoning. Our lightweight control policies (∼10⁵ trainable parameters) are trained on a compact dataset of language-specified navigational behaviors (∼10² runs, ∼10⁴ frames per behavior). We demonstrate generalization to novel objects and scenes, including direct real-world transfer, despite training on only two objects in a single simulated environment. Through its simple and scalable design, MiniNav provides an alternative to computationally intensive pipelines for robust real-world instruction-following. Our solution can provide a reference for evaluating the effective edge of more complex and larger VLN policies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Sampling: A Framework for Enhancing Speed and Performance of Financial Fraud Detection Models</title>
<link href="https://hdl.handle.net/1721.1/163017" rel="alternate"/>
<author>
<name>Mitchell, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163017</id>
<updated>2025-10-07T04:14:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Sampling: A Framework for Enhancing Speed and Performance of Financial Fraud Detection Models
Mitchell, Samuel
Financial fraud detection is a high-stakes field where rapid inference is essential. While state-of-the-art fraud detection models vary in terms of architectural decisions and appear to exhibit unique computational bottlenecks, we highlight that their runtimes are all dominated by extensive information-gathering steps. These steps involve aggregating information from a large set of nodes or edges within a graph, and these intensive steps are performed O(|V|) or O(|E|) times during an inference forward pass, on a graph with |V| nodes and |E| edges. We introduce Strategic Sampling, a general method to accelerate these information-gathering steps. Our approach tailors sampling strategies based on the specific objective function used in each model’s information-gathering process, selecting the most relevant pieces of information to use in each step. This ensures that critical information is retained while significantly reducing the amount of data processed, thus speeding up the computation. We conceptually demonstrate how Strategic Sampling can be applied to message-passing Graph Neural Networks, Graph Transformers, and TGEditor (a state-of-the-art graph editing algorithm). To showcase the effectiveness of our proposed Strategic Sampling method, we implement it in the TGEditor codebase. Our results show that Strategic Sampling not only significantly reduces computation time by more than an order of magnitude, but also improves the F1 score, enhancing both efficiency and performance. This study underscores the potential of Strategic Sampling to universally boost the performance of various financial fraud detection models, paving the way for faster and more accurate fraud detection.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grain Boundary Solute Segregation in Vanadium</title>
<link href="https://hdl.handle.net/1721.1/163016" rel="alternate"/>
<author>
<name>Ng, Daniel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163016</id>
<updated>2025-10-07T04:10:42Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Grain Boundary Solute Segregation in Vanadium
Ng, Daniel S.
Vanadium alloys are a candidate structural material in nuclear fusion applications, where the presence of grain boundaries can improve mechanical properties and act as a sink for radiation-induced defects. Solutes with a thermodynamic preference to segregate to grain boundaries can stabilize them, making this a prime consideration for alloy design, but there are limited quantitative solute segregation data for vanadium. Based on results from an ab initio computational framework for predicting the spectrum of grain boundary segregation energies across the periodic table, select nanocrystalline vanadium-based binary alloy systems were synthesized via mechanical alloying for targeted experiments characterizing differences in segregation strength. Scanning transmission electron microscopy and energy-dispersive x-ray spectroscopy measurements of solute concentrations in the grain boundary and bulk validate computational predictions of the average segregation strengths for different solutes, while showing inhomogeneous solute distributions along the grain boundary network that confirm the necessity of a spectral model that captures the behavior of site-specific segregation energies.&#13;
&#13;
After establishing the segregation behavior of different solutes in vanadium, the effects of solute segregation on other properties are examined. Heating experiments demonstrate that vanadium alloys containing strongly segregating species retain smaller grain sizes upon thermal annealing, indicating better grain boundary stability. The powder metallurgical route used to produce these vanadium alloys requires a subsequent sintering step to densify powders into bulk parts for engineering applications, and dilatometry experiments reveal that the addition of strongly segregating solutes also dramatically suppresses the sintering behavior. A kinetic analysis of the dilatometry data suggests that the rapid grain boundary diffusion pathways necessary for effective sintering are obstructed by solute segregants, which has important repercussions for the processability of these alloys. Finally, microstructural characterization and nanohardness testing after ion-irradiation experiments demonstrate that the alloys with solute-stabilized grain boundaries are more resistant to nanovoid formation and radiation hardening. The work in this thesis advances our understanding of solute segregation and its effects in vanadium alloys, and highlights an approach for controlling grain boundaries that may facilitate future alloy design efforts for improved microstructural stability and radiation damage tolerance.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular Construction of Complex-Architected Bottlebrush Block Copolymers and Their Self-Assembly Behaviors</title>
<link href="https://hdl.handle.net/1721.1/163015" rel="alternate"/>
<author>
<name>Sun, Zehao</name>
</author>
<id>https://hdl.handle.net/1721.1/163015</id>
<updated>2025-10-07T04:10:25Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Modular Construction of Complex-Architected Bottlebrush Block Copolymers and Their Self-Assembly Behaviors
Sun, Zehao
Microphase-separated block copolymers are attractive materials for self-assembled nanolithography, yet there is a disconnect between the simple patterns commonly formed by block copolymers and the complex patterns required for many nanoscale applications, particularly in microelectronics. To meet this challenge, researchers have sought to design and build copolymer systems at ever-increasing levels of complexity at the (macro)molecular level, which promises emergent properties that are otherwise absent. However, the synthetic challenge as well as the vastly increased parameter space have hindered the systematic study of such complex systems. An efficient, modular synthetic route is thus highly desirable for Lego-like molecular construction of property-decoupled, individually tunable target materials.&#13;
&#13;
In this thesis, we will highlight the research endeavor of developing a multiblock Janus bottlebrush copolymer architecture as a novel platform for generating diverse nanostructures that have been challenging to fabricate. The architecture, which features two orthogonal Janus domains, can be readily constructed from the corresponding building blocks by graft-through synthesis and can yield hierarchically engineerable phase-in-phase patterns.&#13;
&#13;
Surprisingly, the two constituent domains, though relatively independent of each other, behave significantly differently when combined under certain circumstances. Their collective behavior gives rise to two low-symmetry mesh-like network phases (monoclinic and tetragonal, respectively) that have not previously been observed in other soft materials and that are of both fundamental and technological interest. Through a suite of experimental and computational studies, we show that this peculiar phenomenon is an outcome of intrinsic molecular confinement, an emergent effect unique to multi-body, multi-hierarchy complex architectures. This work demonstrates that intrinsic molecular confinement is a viable path to bottom-up assembly of new geometrical phases of soft matter, extending the capabilities of block copolymer nanofabrication.&#13;
&#13;
As another example of modular synthesis, we will show an iterative polymerization methodology for the controlled synthesis of bottlebrush copolymers with expanded compositional and architectural scope. When combined with other components, this strategy allows rapid access to functional materials that display phase behavior different from the self-assembly of conventional copolymers.&#13;
&#13;
The work introduced here is expected to facilitate the synthesis of complex functional copolymers, spark interest in the exploration of their property-function relationships, and enable more opportunities for their application in nanopatterning and other advanced materials.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization Techniques for Trustworthy 3D Object Understanding</title>
<link href="https://hdl.handle.net/1721.1/163014" rel="alternate"/>
<author>
<name>Shaikewitz, Lorenzo Franceschini</name>
</author>
<id>https://hdl.handle.net/1721.1/163014</id>
<updated>2025-10-07T04:14:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization Techniques for Trustworthy 3D Object Understanding
Shaikewitz, Lorenzo Franceschini
Autonomous machines require reliable 3D object understanding to interpret and interact with their environment. In this thesis, we consider two tightly coupled 3D object understanding problems. Shape estimation seeks a consistent 3D model of an object given sensor data and some set of priors. Pose estimation seeks an estimate of the object’s position and orientation relative to an invariant shape frame. In general, these problems are non-convex and thus difficult to solve. We present algorithms which nonetheless solve shape and pose estimation efficiently and with assurances in terms of optimality, uncertainty, or latency. We begin in the multi-frame tracking setting, where we propose the certifiably optimal estimator CAST⋆ for simultaneous shape estimation and object tracking. CAST⋆ uses 3D keypoint measurements extracted from an RGB-D image sequence and casts the estimation as fixed-lag smoothing. Temporal constraints enforce rigidity and continuous motion. Despite the non-convexity of this problem, we solve it to certifiable optimality using a small-size semidefinite relaxation. We also present a compatibility-based outlier rejection scheme to handle outliers, and evaluate the proposed approach on synthetic and real data. Next, we focus on estimating the pose of an object given its shape and a single RGB image (no depth). Assuming only bounded noise on 2D keypoint measurements (e.g., from conformal prediction), we derive an estimator for the most likely object pose which uses a semidefinite relaxation to initialize a local solver. We pair this with an efficient uncertainty estimation routine which relies on a generalization of the S-Lemma to propagate keypoint uncertainty to high-probability translation and rotation bounds. The high-probability bounds hold regardless of the accuracy of the pose estimate, and are reasonably tight when tested on the LineMOD-Occluded dataset.
Lastly, we propose a sub-millisecond solution to simultaneous estimation of object shape and pose from a single RGB-D image. Our approach converts the first-order optimality conditions of the non-convex optimization problem to a nonlinear eigenproblem in the quaternion representation of orientation. We use self-consistent field iteration to efficiently arrive at a local stationary point, finding solutions more than an order of magnitude faster than Gauss-Newton or on-manifold local solvers on synthetically generated data.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Joint Localization and Synchronization via User Cooperation in Non-Terrestrial Networks</title>
<link href="https://hdl.handle.net/1721.1/163013" rel="alternate"/>
<author>
<name>Morrison, James C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163013</id>
<updated>2025-10-07T04:12:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Joint Localization and Synchronization via User Cooperation in Non-Terrestrial Networks
Morrison, James C.
Next-generation (xG) wireless networks require accurate localization and synchronization for efficient resource management and emerging applications. Non-terrestrial networks (NTN) with low Earth orbit (LEO) satellites offer a promising alternative for positioning, navigation, and timing (PNT) by providing diversity and increasing the signal-to-noise ratio (SNR) over global navigation satellite systems (GNSS). However, the primary challenge in NTN-based localization with LEO satellites is the lack of precise clock synchronization, which introduces biases in time-of-arrival (TOA) measurements and limits localization accuracy. This paper introduces a joint cooperative localization and synchronization (JCLS) framework that addresses this challenge through spatiotemporal cooperation, soft information, and simultaneous synchronization. Furthermore, we propose a three-step algorithm for performing JCLS. The first step calculates a coarse position estimate using TOA measurements and the Gauss-Newton method. Then, this coarse estimate is updated using the Levenberg-Marquardt method, which performs joint localization and synchronization. Finally, we derive a soft information-based filter that is used to continuously refine the position and clock error estimates as new measurements become available. We characterize the fundamental performance limits of JCLS using Fisher information, which offers insight into its localization and synchronization accuracy bounds. Furthermore, simulation results based on TOA measurements of the 3rd Generation Partnership Project (3GPP) 5G New Radio positioning reference signal (PRS) demonstrate that the proposed algorithm for JCLS significantly improves localization and synchronization accuracy compared to non-cooperative methods.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multipartite Quantum Clock Synchronization Via Collective Symmetric States</title>
<link href="https://hdl.handle.net/1721.1/163012" rel="alternate"/>
<author>
<name>Keskin, Ufuk</name>
</author>
<id>https://hdl.handle.net/1721.1/163012</id>
<updated>2026-01-16T19:10:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multipartite Quantum Clock Synchronization Via Collective Symmetric States
Keskin, Ufuk
This thesis investigates multipartite quantum clock synchronization (QCS) tasks using a class of quantum states, called collective symmetric (CS) states, which generalize Dicke and N00N states. Employing CS states in previous QCS procedures is shown to improve synchronization performance in various network scenarios. The focus is on QCS procedures that, after the distribution of quantum states, rely exclusively on local operations and classical communication (LOCC), ensuring compatibility with highly noisy quantum channels. Two synchronization scenarios are considered: (i) synchronization between the two nodes of an arbitrarily chosen pair of nodes, and (ii) global synchronization, where all nodes wish to synchronize their clocks to a common average time. First, a framework in which the previous procedures operate using CS states is introduced. Using this framework, possible limitations of the QCS procedures in terms of estimation ambiguity and lack of robustness are pointed out. Second, a procedure referred to as the tactical delay procedure (TDP) is proposed for each of the two synchronization scenarios. The TDP resolves the mentioned limitations and outperforms state-of-the-art multipartite QCS procedures in terms of synchronization precision without requiring additional quantum resources.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Embedded HOWFSC Algorithms</title>
<link href="https://hdl.handle.net/1721.1/163011" rel="alternate"/>
<author>
<name>Eickert, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/163011</id>
<updated>2025-10-07T04:14:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Accelerating Embedded HOWFSC Algorithms
Eickert, Brandon
The quest to directly image planets of other solar systems demands not only state-of-the-art coronagraphs, but also extreme performance from space-based processors. Direct imaging requires precise wavefront control to achieve the 10¹⁰ contrast necessary to reveal a dim, Earth-like exoplanet. This precise level of control is only possible if high-order wavefront sensing and control (HOWFSC) algorithms are executed with enough speed to offset wavefront error accumulation. Of the many aspects that make high-contrast imaging difficult, a central bottleneck is the speed at which we can run these algorithms. In this work, we aim to accelerate the execution of two foundational HOWFSC algorithms: optical modeling and Electric Field Conjugation (EFC). Optical modeling underpins both Jacobian-based EFC and a relatively new variant of EFC, called adjoint-based EFC.&#13;
The two main contributions of this thesis are to port bottleneck HOWFSC algorithms to the relevant computing environments, and quantify speedups attained by both algorithm choice and implementation optimization. This work explores the acceleration of optical modeling for a vector vortex coronagraph through the use of the FFTW library, and the acceleration of EFC by implementing adjoint-based EFC in an embedded context. We utilize functional analogs to radiation-hardened processors, using the NXP T1040 in place of the BAE RAD5545, and the NXP LS1046 in place of the LS1046-Space. We find that the FFTW library enabled a factor of six speedup for 4096 × 4096 fast Fourier transforms (FFTs), and a factor of five for 2048 × 2048 FFTs. With these significant speedups, the bottleneck within the vortex operations of the optical model shifts from the FFT to matrix multiplication. We additionally time the execution of the underlying routines of Jacobian-based EFC and AD-EFC to estimate that AD-EFC is 46 times faster than Jacobian-based EFC. Despite these speedups, AD-EFC is still a factor of 124 away from 100-second latency for our specific optical model. These results demonstrate that one to two orders of magnitude of speedup must be attained by either further optimizing algorithm implementations, or exploring other parallelization strategies, computing architectures, and mission paradigms to achieve a latency on the order of 100 seconds.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formalizing Causal Models Through the Semantics of Conditional Independence</title>
<link href="https://hdl.handle.net/1721.1/163010" rel="alternate"/>
<author>
<name>Zhang, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/163010</id>
<updated>2026-01-21T18:53:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Formalizing Causal Models Through the Semantics of Conditional Independence
Zhang, Anna
Many foundational tools in causal inference are based on graphical structure and can involve complex conditions that obscure the underlying causal logic. Given the inherent complexity and subtlety of cause-and-effect phenomena, establishing formal guarantees about these tools is both challenging and important. This thesis presents a semantics-driven formalization of causal models within the Coq proof assistant, enabling precise, mechanized reasoning about causal relationships. Central to this work is a new function-based definition of conditional independence, which captures how changes propagate through a causal graph. We prove that this semantic notion is equivalent to the standard graphical criterion of d-separation, thereby establishing a rigorous bridge between structural and semantic interpretations of independence. The formalization includes a library of graph-theoretic and causal-reasoning tools, encompassing key concepts such as mediators, confounders, and colliders. By linking the syntactic and semantic perspectives on causality, this work lays a robust foundation for formally verifying causal assumptions and guiding experimental design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination</title>
<link href="https://hdl.handle.net/1721.1/163009" rel="alternate"/>
<author>
<name>Zhang, Jackson</name>
</author>
<id>https://hdl.handle.net/1721.1/163009</id>
<updated>2025-10-07T04:14:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination
Zhang, Jackson
Embodied multi-agent systems, comprising autonomous agents interacting within shared environments, enable intelligent, collaborative solutions for tasks requiring real-time coordination and adaptability. While applications span diverse fields, from disaster response to healthcare, planning in these systems remains challenging due to partial egocentric observations and limited environmental awareness. This work addresses these challenges by introducing a software module that synthesizes a shared world state from individual agent views, maintaining spatial information about objects and agents to support more effective joint action planning. Integrated into the LLAMAR framework, this module aims to improve planning accuracy and efficiency. The proposed approach is evaluated using metrics such as success rate, transport efficiency, and coverage performance. Our evaluation demonstrates that utilizing a perfect (oracle-generated) world state significantly enhances planning effectiveness. Notably, under these ideal conditions, the success rate of the LLAMAR planner improved by over 16%. These findings underscore the critical impact of accurate world state representation on multi-agent performance and highlight the potential for significant advancements in collaborative task execution in dynamic, unstructured settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization, Processing, and Synthesis of Extreme-Performance Continuous Carbon Nanotube Network Composites</title>
<link href="https://hdl.handle.net/1721.1/163008" rel="alternate"/>
<author>
<name>Durso, Michael Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/163008</id>
<updated>2025-10-07T04:10:34Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Characterization, Processing, and Synthesis of Extreme-Performance Continuous Carbon Nanotube Network Composites
Durso, Michael Nathan
Continuous carbon nanotube (CNT) networks are an emerging, hierarchically-structured, and commercially available nanomaterial built from countless CNT nanocrystals. These macroscopic yarn materials promise to bridge the gap between microscopic CNT fibers – which are well-known for their superlative material properties – and human-scale fiber reinforcements for extreme-performance composites. Yet because the constituent CNTs interact only via intermolecular forces, network properties fall short of their building blocks. Although these materials show promise as reinforcement in composites, the networks’ low-permeability and tortuous nanoporous structure renders imbibition with liquids like a polymer matrix or surface functionalizing agents challenging. Thus, traditional composite fabrication strategies can be ineffective when applied to CNT yarns, especially commercial products subject to proprietary microstructural manipulation.&#13;
&#13;
Using commercially-available CNT yarns fabricated through floating-catalyst chemical vapor deposition (FCCVD) as model systems, we first explore yarn characteristics which are unique to their hierarchical, bundled-fiber structure, placing focus on the oxygen-rich amorphous carbon phase found in pre-densified, chemically-stretched yarns. A green hydrothermal technique is explored to remove this phase from the surface level inward, allowing for purification and improved infiltrability. However, we find this phase is distinct from previously-reported amorphous carbons found in CNTs, showing it behaves as a matrix which may improve polymer bonding. An analysis of imbibition and fluid transport in these CNT yarns finds that while infiltration of low-viscosity liquids like water is thermodynamically-favored, it is limited when surpassing the threshold of capillary pore percolation. Nevertheless, infiltration in lower-density networks is not only observed, but exploited through the demonstration of dielectric heating in a microwave reactor, where we show fluid imbibed within the network can be boiled to induce swelling and exfoliation of CNT bundles (or conversely, this may be avoided) through optimization of the heating parameters and solvent.&#13;
&#13;
Next, with a firm understanding of the yarn networks’ properties and the impact of various processing effects, we demonstrate two techniques of producing polymer in-situ using dissolved monomers to side-step slow infiltration. The first technique is in-situ interfacial polymerization (ISIP), which is adapted to the yarns studied in this work to yield polyetherimide–CNT yarn composites. When applied to chemically-stretched yarn, specific strengths as high as 2.2 GPa/(g-cm3) are achieved in the flexible and durable yarn composite. We show parameters and conditions which maximize tensile properties and challenges associated with the rapid nature of the process, concluding with the successful demonstration of a roll-to-roll fabrication scheme for producing arbitrary amounts of polymer.&#13;
&#13;
In our second technique, we produce extreme-performance polyimide and polybenzimidazole composites through green in-situ polymerizations (ISSP) in CNTs and macroscopic fiber networks. This approach utilizes superheated water and alcohol as a powerful medium to disperse monomers and initiate polymerization of high-performance coatings within a porous network. We demonstrate ISSP-CNT composites with variable coating morphologies (conformal, shish-kebab, etc.), in-air stability to over 500°C, and doubled specific stiffness and specific strength. Finally, we validate the multifunctional behavior of polyimide-CNT composites by showing a strong, flexible composite can store energy and behave as a free-standing battery electrode.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric Study of Novel Passive Thermal Control&#13;
Technology for Spacecraft</title>
<link href="https://hdl.handle.net/1721.1/163007" rel="alternate"/>
<author>
<name>Shafer, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/163007</id>
<updated>2025-10-07T04:14:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parametric Study of Novel Passive Thermal Control&#13;
Technology for Spacecraft
Shafer, Emma
Thermochromic variable emissivity materials (VEMs) are a relatively new passive thermal control technology used for spacecraft radiators. VEMs passively change their emissivity based on their temperature, with VEMs having low emissivity at low temperatures and high emissivity at high temperatures. This property of VEMs allows for spacecraft to have reduced heater power and less extreme temperature swings without adding active thermal control systems. There is a potential for VEM technology to become more widely used in spacecraft radiators. Because thermochromic VEMs are still a relatively new technology, there has not yet been a study with a parametric sweep of some possible VEM profiles and common spacecraft parameters to determine the best-case uses of particular VEM profiles. This thesis models a single-node spacecraft in an equatorial low Earth orbit, varying the spacecraft’s shape, surface area, and thermal mass using Thermal Desktop. The temperature history of the spacecraft in orbit, particularly its orbit minimum temperature, orbit maximum temperature, orbit average temperature, and orbit temperature range, is recorded, and twelve VEM profiles are compared against default black and white paint materials to see how the twelve VEM profiles change orbit minimum temperature, maximum temperature, average temperature, and temperature range. The desired outcome is for the VEMs to reduce the temperature range the most compared to black or white paint while keeping temperatures within typical temperature requirements for spacecraft components. It is found that, compared to white paint, VEMs always increase the orbit minimum temperature, maximum temperature, average temperature, and temperature range across all nodal thermal masses and surface areas studied. 
For spacecraft with lower surface areas, having only white paint decreases the temperature too much for typical spacecraft components, so even though white paint always decreases temperature range compared to VEMs, it is recommended to have VEMs instead of white paint for lower surface area spacecraft due to VEMs being better than white paint at keeping components within typical temperature requirements. When VEMs are compared to black paint, it is found that black paint has lower minimum temperatures and greater maximum temperatures than all VEMs at greater surface areas. For lesser surface areas, the node covered in black typically has minimum and maximum temperatures in the middle of the VEMs’ minimum and maximum temperatures. For all surface areas and thermal masses, the average temperature of the black node is typically in the middle of the average temperatures of the nodes with VEMs; in relation to the VEMs’ average temperatures, the black average temperature decreases as node height increases. For all node heights and thermal masses, VEMs always decrease the temperature range compared to black. VEMs are shown to be better than black paint in having spacecraft components stay within typical temperature requirements, and which VEM to choose depends on what the specific spacecraft component is and its specific temperature requirements. The biggest difference in individual VEM profiles compared to each other is the orbit average temperature; the lower the VEM’s transition temperature, the lower the average temperature. Only at the greatest nodal surface areas and smallest nodal heights is there a significant difference in temperature range between individual VEM profiles; typically, the lower the transition temperature of the VEM, the less its temperature range. Future work includes expanding on the parameters studied and studying spacecraft in different orbits, different spacecraft shapes, and different VEM profiles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies</title>
<link href="https://hdl.handle.net/1721.1/163006" rel="alternate"/>
<author>
<name>Ahlers, Matthew C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163006</id>
<updated>2026-01-05T16:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies
Ahlers, Matthew C.
Autonomous sailing vessels offer a promising solution for maritime research, providing low-maintenance and sustainable platforms for environmental monitoring and data collection. These vessels utilize wind power, eliminating the need for conventional fuel and enabling long-duration operations with minimal environmental impact. Their applications range from oceanographic studies to maritime surveillance, where persistent and autonomous data collection is essential. This thesis explores the challenges and methodologies associated with path planning for autonomous sailing, particularly in the context of survey operations. Unlike traditional motorized vessels, sailing autonomy must account for wind variability, sail dynamics, and limited maneuverability, requiring specialized path-planning techniques to ensure efficient and reliable navigation. The research investigates various sail and hull configurations, the dynamics of wind-powered propulsion, and the application of autonomy frameworks such as MOOS-IvP. A key focus is on optimizing continuous coverage path planning (CPP) to maximize efficiency while adapting to environmental constraints. By integrating real-time wind data and vessel performance characteristics, the study refines survey strategies that enhance mission effectiveness. Different survey strategies are implemented and evaluated using both simulation and real-world testing on the Charles River. These trials demonstrate the feasibility of fixed-path decomposition approaches and adaptive moving horizon control methods, and evaluate how wind conditions impact autonomous sailing performance under each method. The results contribute to the development of robust and efficient survey strategies that improve the autonomy and reliability of wind-powered marine vessels.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Census-Based Population Autonomy for Marine Robots: Theory and Experiments</title>
<link href="https://hdl.handle.net/1721.1/163005" rel="alternate"/>
<author>
<name>Paine, Tyler</name>
</author>
<id>https://hdl.handle.net/1721.1/163005</id>
<updated>2025-10-07T04:10:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Census-Based Population Autonomy for Marine Robots: Theory and Experiments
Paine, Tyler
Collaborating groups of robots show promise due to their ability to complete missions more efficiently and with improved robustness, attributes that are particularly useful for systems operating in marine environments. A key issue is how to model, analyze, and design these multi-robot systems to realize the full benefits of collaboration even with limited communication, a challenging task since the domain of multi-robot autonomy encompasses both collective and individual behaviors. This thesis presents a layered model of multi-robot autonomy that uses the principle of census, or a weighted count of the inputs from neighbors, for collective decision-making, coupled with multi-objective behavior optimization for individual decision-making. The census component is expressed as a nonlinear opinion dynamics model, and the multi-objective behavior optimization is accomplished using interval programming. This model can be reduced to recover foundational algorithms in distributed optimization and control, while the full model enables new types of collective behaviors that are useful in real-world scenarios. To illustrate these points, a new method for distributed optimization of subgroup allocation is introduced in which robots use a gradient descent algorithm to minimize the portions of the cost functions that are locally known, while being influenced by the opinion states of neighbors to account for the unobservable costs. With this method the group can collectively use the information contained in the Hessian matrix of the total global cost. In addition, the critical issue of controlling subgroup size to minimize a collective cost signal is addressed, an initial step toward establishing a general definition of controllability for the nonlinear opinion dynamics model. 
The utility of this model is experimentally validated in three categorically different experiments with fleets of autonomous surface vehicles: an adaptive sampling scenario, a high value unit protection scenario, and a competitive game of capture the flag.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming</title>
<link href="https://hdl.handle.net/1721.1/163004" rel="alternate"/>
<author>
<name>Hao, Yilun</name>
</author>
<id>https://hdl.handle.net/1721.1/163004</id>
<updated>2025-10-07T04:13:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming
Hao, Yilun
While large language models (LLMs) have recently demonstrated strong potential in solving planning problems, LLMs, as zero-shot planners themselves, are still not capable of directly generating valid plans for complex planning problems such as multi-constraint or long-horizon tasks. This motivates the need to develop a robust and reliable planning system for complex real-world planning problems. Furthermore, many frameworks aiming to solve complex planning problems rely on task-specific preparatory efforts, such as task-specific in-context examples and pre-defined critics or verifiers, which limits their cross-task generalization capability. This motivates the need to extend robust and reliable planning systems to have strong generalization capability. In this thesis, we first develop an LLM-based planning framework that formalizes and solves complex multi-constraint planning problems as constrained satisfiability problems and can reliably identify the unsatisfiable cores of unsatisfiable requirements, provide failure reasons, and offer personalized modification suggestions. We then generalize this paradigm by proposing a general-purpose framework that leverages LLMs to capture key information from planning problems and to formally formulate and solve them as optimization problems from scratch, with no task-specific examples needed. Comprehensive experimental results show that our frameworks significantly outperform the baselines and perform strongly across tasks and LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Quantification of the Phonon Drag Deformation Mechanism in Metals at Extreme Strain Rates</title>
<link href="https://hdl.handle.net/1721.1/163003" rel="alternate"/>
<author>
<name>Dowding, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/163003</id>
<updated>2025-10-07T04:10:37Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Experimental Quantification of the Phonon Drag Deformation Mechanism in Metals at Extreme Strain Rates
Dowding, Ian
Extreme strain rate deformations, above 10⁶ s⁻¹, are seen across many fields of science and engineering, from meteorite impacts and impact-induced crystallographic phase changes to high-speed machining and additive manufacturing. Despite this range of applications, many common high-rate impact experiments are intrinsically limited to strain rates of only 10⁴ s⁻¹ before the material deformation is complicated by a superimposed state of shock due to high impact pressures. However, recent advances in optically driven microballistics using laser-induced projectile impact tests have provided a new quantitative look into the extreme mechanics of materials, at rates above 10⁶ s⁻¹ and well below the onset of shock effects.&#13;
As deformation strain rates increase, additional strengthening mechanisms in metals become available, leading to a change in the underlying physics of dislocation motion and an increase in strength. This thesis first explores the mechanical properties of pure metals deformed at extreme strain rates, both in ambient conditions and at elevated temperatures. Using an array of complementary characterization methods, two independent measures of strength, the dynamic strength and the dynamic hardness, are assessed. As the temperature is increased from ambient, the strength and hardness of pure metals both increase by an appreciable amount. At these deformation rates, conventional thermal softening effects are in competition with anti-thermal hardening that arises from the ballistic transport of dislocations through phonon interactions in the crystal lattice. These effects are quantified systematically, and it is shown that the anomalous thermal strengthening observed is, thermodynamically and kinetically, the expected form of plasticity under these impact conditions.&#13;
Next, the limits of where this anomalous thermal strengthening occurs in metals are investigated. First, solute elements are added to pure Ni to evaluate how additional dislocation pinning mechanisms affect the strength at ambient and elevated temperatures during extreme strain rate deformations. The strength increase due to solute pinning of dislocations is additive to the other strengthening mechanisms, yet thermally controlled, which produces a transition from ballistic transport of dislocations to thermally activated strengthening at a critical concentration of solutes. Finally, the upper bound of temperature for dislocation phonon drag strengthening is assessed. While pure metals were shown to increase in strength with increasing temperature, this “hotter-is-stronger” trend breaks down as the temperature approaches the melting point of the metal. Using Sn, chosen for its low melting temperature, the breakdown from “hotter-is-stronger” to “hotter-is-softer” as the initial substrate temperature approaches the melting temperature is systematically explored.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency</title>
<link href="https://hdl.handle.net/1721.1/163002" rel="alternate"/>
<author>
<name>Plaza Rivera, Christian O.</name>
</author>
<id>https://hdl.handle.net/1721.1/163002</id>
<updated>2025-10-07T04:13:04Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency
Plaza Rivera, Christian O.
Lithium (Li)-metal batteries (LMBs) present a promising avenue for high-energy applications. However, their practical adoption is constrained by challenges such as dendrite formation and unstable interphases. This study investigates the intricate interplay between electrolyte-dependent thermodynamics, kinetics, and transport properties in LMBs, focusing on concentration effects in fluoroethylene carbonate (FEC) and 1,2-dimethoxyethane-based electrolytes containing lithium bis(fluorosulfonyl)imide. Due to FEC’s unique properties, these electrolytes facilitate significant upshifts in the Li redox potential and contribute to stable interphases and voltage profiles. Our findings reveal that the redox potential is primarily governed by the solvent’s electron-donating ability, reflecting underlying solvation dynamics, while the electrolyte permittivity influences reaction entropy trends. The results show entropy changes from increased molecular disorder at moderate concentrations to reduced entropy in highly concentrated regimes, driven by the formation of ion–solvent complexes. Kinetic analyses demonstrate a volcano-shaped dependence of exchange current density on concentration, centered at 2 M. Two prevailing perspectives propose that either kinetic–transport interplay or thermodynamic properties govern Coulombic efficiency (CE). However, separating these contributions is complex, since both a higher exchange current density and upshifts in the Li redox potential enhance CE. Furthermore, CE strongly aligns with the combined effects of kinetics, thermodynamics, and transport, emphasizing the need for a holistic electrolyte design approach. Optimizing these three factors makes it possible to stabilize the interphase, promote uniform Li deposition, and elevate the overall safety and performance of next-generation LMBs.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications</title>
<link href="https://hdl.handle.net/1721.1/163001" rel="alternate"/>
<author>
<name>Shevgaonkar, Mihir</name>
</author>
<id>https://hdl.handle.net/1721.1/163001</id>
<updated>2025-10-07T04:13:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications
Shevgaonkar, Mihir
Electroaerodynamic (EAD) propulsion is a novel form of propulsion that is nearly silent and has no moving parts. The first functional untethered heavier-than-air EAD aircraft had an endurance of 90 seconds and could only fly in a straight line. To enable a practical fixed-wing EAD aircraft that can fly outdoors with a payload for an extended period of time, improved power conversion technology is necessary. Prior work specifies a practical EAD aircraft as one with an endurance of 10 minutes, a payload capacity of 200 g, and full controllability. This work explores methods of increasing the specific power of power converters for EAD aircraft from 1.15 kilowatts per kilogram to over 2.0 kilowatts per kilogram. Such an increase can be achieved by utilizing magnetics integration and thermal management techniques, as well as adjustments to the operating point of the power converter. The power converter for the first-generation EAD aircraft had an input voltage of 200 V, an output voltage of 40 kV, an output power of 600 W, a specific power of 1.15 kilowatts per kilogram, and an efficiency of 85 percent. In this work, a power converter with an input voltage of 200 V, an output voltage of 20 kV, an output power of 1476 W, a specific power of 2.7 kilowatts per kilogram, and an efficiency of 96 percent was demonstrated to work for a 40-second duration. At the end of the test, device temperatures were still increasing, so it has not been proven that the converter can operate in thermal steady state as required for a 10-minute flight. Future work would involve modifying the test setup to allow for adequate ventilation of the ambient air around the converter, as well as adding adequate thermal management to the converter so as to enable operation in thermal steady state.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise</title>
<link href="https://hdl.handle.net/1721.1/163000" rel="alternate"/>
<author>
<name>Cezairli, Mina</name>
</author>
<id>https://hdl.handle.net/1721.1/163000</id>
<updated>2025-10-07T04:12:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise
Cezairli, Mina
Operational interventions, such as enabling more fuel-efficient trajectories, are desirable in mitigating the environmental impact of air travel due to their relatively fast implementation potential. In particular, the vertical inefficiency arising from the altitude stratification in the airspace can be mitigated by relaxing vertical constraints. The feasibility of vertical flexibility is evaluated by quantifying the rate of close encounters and the frequency of alerts that would be needed to prevent them. Substantial diurnal variability in the number of close encounters was found in the airspace, with lower rates of events during the nighttime period. Furthermore, regional differences among Air Route Traffic Control Centers were observed in the number of close encounters. The frequency of controller intervention events that would have to occur was evaluated at 25 NM and 50 NM alerting distance levels, and it was found that, given sufficient technological capabilities for alerting at the 25 NM reaction distance, most centers would have fewer than 10 alerts per hour during the nighttime period. Boston, Miami, and Seattle appeared especially promising, with approximately one alert per hour for each region. Finally, the potential fuel benefit from enabling vertically optimal trajectories was estimated to be up to 100,000 gallons of fuel savings per month in the case of a CONUS-wide nighttime implementation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions</title>
<link href="https://hdl.handle.net/1721.1/162999" rel="alternate"/>
<author>
<name>Zhang, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/162999</id>
<updated>2025-10-07T04:13:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions
Zhang, Joseph
Understanding the interaction between weather and disruptions in complex air transportation networks is important to the design and evaluation of preemptive measures and responses taken by air traffic managers. However, data on disruptive weather events is often rather limited compared to the amount of data available for nominal operations. Additionally, in large-scale systems with many known and unknown confounding factors, it can be difficult to identify the relevance of existing data to different underlying distributions of interest. Furthermore, existing work generally follows a frequentist paradigm in predicting disruptions based on weather, and does not easily lend itself to inferring the causes of disruptions, which can be important both in building models and using them to make predictions, and in generating test cases to stress-test proposed design decisions. In this thesis, we develop a hierarchical Bayesian model for air traffic network operations and investigate methods for learning these models in data-constrained settings by extending existing work on retrospectively analyzing failures. We also include a guiding case study of LaGuardia Airport, in which a generative model is developed for the interaction between weather conditions and airport-level parameters within a single airport, trained on unlabeled historical data, and evaluated by simulating disruptions on historical schedules.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS</title>
<link href="https://hdl.handle.net/1721.1/162998" rel="alternate"/>
<author>
<name>Wu, Ivy</name>
</author>
<id>https://hdl.handle.net/1721.1/162998</id>
<updated>2025-10-07T04:13:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS
Wu, Ivy
σOS aims to provide both serverless and stateful support to cloud applications while maintaining strong isolation, security, and efficient startup times and scheduling among multiple users. While σOS and its container startup times have been successfully benchmarked for tasks written, compiled, and statically linked in Golang and Rust, it currently lacks support for other languages, including interpreted ones like Python. To bridge this gap, this paper presents the first integration of an interpreted language into σOS, enabling native Python support without compromising the system’s core principles. Our design, σPy, achieves this through three key ideas: (1) system call interposition via LD_PRELOAD to enable just-in-time dependency management, where Python libraries are fetched on-demand from tenant-specified AWS S3 buckets, avoiding overhead during container initialization; (2) a multi-layered mount namespace that spans the local machine, a per-realm Docker container, and a per-proc σcontainer, enabling efficient dependency caching at the per-tenant granularity; and (3) a hybrid C++, C, and Python API layer that bridges σOS’s Protobuf-based RPC system with Python’s dynamic types. Preliminary benchmarks demonstrate that σPy achieves performance comparable to that of compiled languages like Golang when interacting with the σOS API, with only 0.2 - 0.3 additional milliseconds of overhead on all tested API calls, validating the success of Python programs on the σOS architecture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating LLM Runtime Latency</title>
<link href="https://hdl.handle.net/1721.1/162997" rel="alternate"/>
<author>
<name>Wang, Sarah Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/162997</id>
<updated>2025-10-07T04:14:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulating LLM Runtime Latency
Wang, Sarah Y.
Large Language Models (LLMs) are expensive to run and can incur high latencies. Each LLM application has its own cost and latency targets. For example, AI voice assistants operate under low latency objectives, while large document batch processing jobs are typically cost-sensitive. However, navigating these trade-offs is not trivial, as LLM latency is highly task-specific and depends on factors such as the offered query load, the hardware configurations, request properties, and various model characteristics. To support the user in configuring their deployment according to their application needs, we introduce vLLMSim, an accurate simulator that estimates the latency of a given workload on different hardware configurations. vLLMSim advances two key avenues toward latency-aligned LLM deployments. First, the simulated latency metrics inform the user’s model and hardware choice, so they can use a configuration that is ideal for their workload. Second, our simulator enables researchers to quickly test latency-improving ideas, bypassing the need for time-consuming implementations before validating their effectiveness. In fact, vLLMSim is already used in two research projects with the goal of reducing latency and cost of LLM inference. In this thesis, we show how vLLMSim’s design allows it to support the use cases above while providing highly accurate runtime predictions. To support hardware exploration without GPU access, vLLMSim provides precomputed performance profiles that are sufficient to accurately simulate the user’s workload. The simulator code can be found here, and the instrumented vLLM code for creating profiles can be found here.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Latent Space Interpretation via In-the-loop Fine-Tuning</title>
<link href="https://hdl.handle.net/1721.1/162996" rel="alternate"/>
<author>
<name>Wen, Collin</name>
</author>
<id>https://hdl.handle.net/1721.1/162996</id>
<updated>2025-10-07T04:13:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Methods for Latent Space Interpretation via In-the-loop Fine-Tuning
Wen, Collin
With language models increasing exponentially in scale, being able to interpret and justify model outputs is an area of increasing interest. Although enhancing the performance of these models in chat mediums has been the focus of interaction with AI, the visualization of model latent space offers a novel modality of interpreting information. Embedding models have traditionally served as a means of retrieving information relevant to a topic by converting text into a high-dimensional vector. The high-dimensional vector spaces created via embedding offer a way to encode information that captures similarities and differences in ideas, and visualizing these nuances in terms of meaningful dimensions can offer novel insights into the specific qualities that make two items similar. Leveraging fine-tuning mechanisms, dimension reduction algorithms, and Sparse Autoencoders (SAEs), this work surveys state-of-the-art techniques to visualize the latent space in highly interpretable dimensions. ConceptAxes, derived from these techniques, is a framework for producing axes that capture high-level ideas ingrained in embedding models. ConceptAxes with highly interpretable dimensions allow for better justification of the latent space and its clusters. This method of increasing embedding transparency proves valuable in various domains: (1) AI-enhanced creative exploration can be more guided and customized for a particular experience and (2) high-level insights can be made more intuitive with vast text datasets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commanding, Telemetry, and Software Strategy for&#13;
CubeSat Laser Infrared CrosslinK (CLICK) Mission</title>
<link href="https://hdl.handle.net/1721.1/162995" rel="alternate"/>
<author>
<name>Whitmore, Garrett</name>
</author>
<id>https://hdl.handle.net/1721.1/162995</id>
<updated>2025-10-07T04:13:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Commanding, Telemetry, and Software Strategy for&#13;
CubeSat Laser Infrared CrosslinK (CLICK) Mission
Whitmore, Garrett
This work outlines the software-related requirements necessary for successful operations of the NASA-sponsored Cubesat Laser Infrared CrosslinK (CLICK) B/C mission [1] [2]. This twin-cubesat mission will demonstrate peer-to-peer laser-communication capabilities novel at this small terminal scale. Optical laser communication terminals can have lower Size, Weight, and Power (SWaP) compared with traditional radio communication, as well as fewer licensing regulations and improved link security. CLICK-B/C follows from CLICK-A, a risk-reduction mission that successfully performed laser downlink with a ground station at MIT [3]. In addition to downlink, B/C will perform crosslink experiments at a data transmission rate over 20 Mbps at ranges between 20 and 580 km in Low-Earth Orbit (LEO). This thesis focuses on the software related to the function of the satellite payload, in particular, the improvements and additions made to the operating system, software systems that were ported over from CLICK-A, the integration and testing of these subsystems, and analyses done to prepare for in-flight operations before launch. An overview of the MIT &amp; UF payload hardware and electronics is given before detailing interactions with components as necessary. A deep dive into the payload software libraries, internal and external communication channels, and operating system build details is given. A description of functional testing and its results is laid out, along with a template crosslink experiment script and further specifications for mission-related analyses and pre-launch preparations. This work on software upgrades, verification, and examination is necessary for CLICK-B/C to reach its stated mission goals, here on Earth and in its orbit.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundational Verification of Running-Time Bounds for&#13;
Interactive Programs</title>
<link href="https://hdl.handle.net/1721.1/162994" rel="alternate"/>
<author>
<name>Tockman, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/162994</id>
<updated>2025-10-07T04:14:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundational Verification of Running-Time Bounds for&#13;
Interactive Programs
Tockman, Andrew
The field of formal methods has a rich history of practical application in verification of the correctness of software. Existing verification tooling operates at a wide range of rigor, from proving relatively weak properties via traditional static analysis to powerful theorem provers that can express very precise specifications. It is sometimes desirable to prove properties about programs that make reference to not just semantic behavior but also to other metaproperties of the program’s execution, such as runtime or I/O histories. There is also a wide variety of existing tooling for proving bounds on program runtime. However, there is no prior work on a maximally rigorous verification system that can prove predicates involving all of semantic behavior, runtime, and I/O. Our contribution is exactly that – we extend the existing Bedrock2 framework, which implements a C-like systems language within a powerful proof engine together with a verified compiler capable of expressing arbitrary proof conditions involving behavior and I/O, and augment it to add the capacity to reason about runtime as well. As a capstone proof of concept, we apply the new metrics machinery to an IoT lightbulb controller (already verified with respect to the previous framework) and produce a new specification with time bounds based on arrival of network packets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph Neural Networks for City Policy Recommendations&#13;
as a Link Prediction Task</title>
<link href="https://hdl.handle.net/1721.1/162992" rel="alternate"/>
<author>
<name>Rozario, Consecrata Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/162992</id>
<updated>2025-10-07T04:13:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph Neural Networks for City Policy Recommendations&#13;
as a Link Prediction Task
Rozario, Consecrata Maria
Graph Neural Networks (GNNs) have become a widely utilized tool in recommender systems in various contexts. While recommendation tasks can be approached using a multitude of data structures and types, graph-structured data is particularly well-suited for this domain, as graphs naturally capture a variety of relationships and interactions between entities. By leveraging graph representation learning, we can effectively encode these complex dependencies, enabling robust and context-aware recommendations. We use this methodology in the domain of policy recommendations for urban centers. To recommend policies, we learn the complex local and global relationships between cities, their environmental features, and currently implemented policies. We construct a graph structure relating cities, implemented policies, and city features, and formulate the policy recommendation task as a GNN link prediction problem, demonstrating its potential to scale data-driven urban governance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Detection of Landmark Acoustic Cues in&#13;
Human Speech</title>
<link href="https://hdl.handle.net/1721.1/162991" rel="alternate"/>
<author>
<name>Park, Janette H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162991</id>
<updated>2025-10-07T04:13:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automatic Detection of Landmark Acoustic Cues in&#13;
Human Speech
Park, Janette H.
This study presents a framework for the automatic detection of the eight landmark acoustic cues in human speech. Landmarks are key articulatory events, produced as a result of minimal vocal tract constriction (e.g., vowels and glides) or closures and releases in the oral region (e.g., nasal, fricative, and stop consonants). A complete landmark detection system is a key step towards an overarching speech analysis system that relies on lexical acoustic cues, as landmarks guide the identification of other acoustic cues in speech. In the proposed framework, the acoustic properties of each of the eight landmark cues are modeled by extracting speech-related measurements and training Gaussian Mixture Models (GMMs). To remove the effects of speaker variability and different recording environments, methods for normalizing speech-related measurements are proposed and evaluated. For a new speech signal, the normalized speech-related measurements are extracted at each time frame and evaluated against the eight trained GMMs to compute the likelihood of each landmark. Using Bayes’ Theorem, the posterior probabilities are calculated to determine the most probable landmark (or absence thereof) at each time frame. The system’s performance is evaluated by comparing the detected landmarks to the manually labeled ground truth landmark annotations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation</title>
<link href="https://hdl.handle.net/1721.1/162990" rel="alternate"/>
<author>
<name>Lin, Vincent</name>
</author>
<id>https://hdl.handle.net/1721.1/162990</id>
<updated>2025-10-07T04:13:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation
Lin, Vincent
As single-cell transcriptomics datasets continue to grow in size and biological complexity, current models for cell type annotation remain limited in their generalizability and are often evaluated on only a small fraction of the standardized cell types defined in modern ontologies. Current state-of-the-art models for transcriptomic representation demonstrate that deep learning models can extract rich features on single-cell data but are evaluated on very few cell types and perform poorly on broader datasets. This work introduces a multimodal model architecture that integrates large language models (LLMs) with gene expression encoders to address this scalability gap in cell type annotation. Inspired by vision-language frameworks, our architecture combines a pretrained scRNA encoder with a Perceiver Resampler that maps gene expression profiles into the latent space of a large language model. We construct structured, ontology-grounded datasets of up to 197 cell types and evaluate our model's performance using instruction fine-tuning. Our experiments analyze the impact of integrating language modeling components with scRNA encoders and their benefit on cell type annotation performance for large, diverse datasets. Our results show that while a scRNA encoder may be sufficient for small datasets, our single-cell model leveraging LLMs consistently outperforms the scRNA encoder baseline on larger datasets, with a widening gap in classification performance as data complexity increases, demonstrating the scalability and improved generalizability of our multimodal architecture. We also provide further analysis of the tradeoffs associated with using the natural language domain for biological analysis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference Time Search for Protein Structure Prediction</title>
<link href="https://hdl.handle.net/1721.1/162989" rel="alternate"/>
<author>
<name>Qi, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/162989</id>
<updated>2025-10-07T04:13:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inference Time Search for Protein Structure Prediction
Qi, Richard
Scaling inference-time compute for deep learning models has led to superhuman performance in games and enhanced reasoning capabilities for language models. However, similar gains have not yet been made in the field of biomolecular structure prediction. We introduce a new paradigm for inference-time search by adding architectural components and a finetuning procedure to state-of-the-art structure prediction models that give rise to a discrete latent space. We implement algorithms for searching and sampling in this discrete latent space and conduct experiments on a small model, demonstrating an increase in oracle and top-1-selected accuracy for predicted protein-protein complex structures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight</title>
<link href="https://hdl.handle.net/1721.1/162988" rel="alternate"/>
<author>
<name>Chu, Kaitlyn A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162988</id>
<updated>2025-10-07T04:13:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight
Chu, Kaitlyn A.
Lower Body Negative Pressure (LBNP) has long been explored as a countermeasure to the physiological deconditioning and orthostatic intolerance associated with prolonged microgravity exposure. Traditional LBNP systems, however, are large, stationary devices that require astronauts to remain immobile during use, limiting their integration into daily spaceflight routines. Although more mobile LBNP solutions have emerged, they remain cumbersome and uncomfortable, ultimately still restricting multitasking and reducing operational feasibility. This study introduces the Soft Kinetics INterface (S.K.I.N.), a flexible, wearable structure designed to support the application of localized LBNP. The goal was to evaluate whether targeted negative pressure applied through the S.K.I.N. could replicate the fluid shift effects of a traditional LBNP chamber while improving comfort, mobility, and time-efficiency. The human thigh was chosen as the focus of this technology demonstration due to its known responsiveness to LBNP and its suitability for small-scale implementation. The development of the S.K.I.N. began with finite element modeling (FEM) to identify optimal material properties and structural geometry. Iterative physical prototyping resulted in a sinusoidal silicone waveform design, selected for its mechanical stability and user comfort. The final prototype was then evaluated in three experimental phases: (1) mechanical testing using pressure-sensitive film to assess structural integrity under vacuum, (2) an ex-vivo pig leg study to validate experimental protocols and assess the S.K.I.N.’s ability to induce fluid shifts, and (3) a human study (n=10) comparing fluid shifts between the S.K.I.N. and a scaled-down version of the traditional LBNP chamber. On average, results from the human study showed that the S.K.I.N. successfully induced localized fluid shifts similar to those of the chamber. However, response magnitude varied considerably across participants. 
Most of the observed effect was driven by female participants, who exhibited more pronounced fluid shifts, while most male participants showed minimal or no measurable response. FEM simulations supported this finding, suggesting that higher fat-to-muscle ratios — more common in women — may enhance tissue deformability and volume displacement, thereby facilitating greater fluid shifts under negative pressure. Although these differences limit generalizability, they also highlight the potential for the S.K.I.N. to serve as a more targeted countermeasure for specific physiologies or user groups. Although the current S.K.I.N. design’s limited surface area constrains its overall effect, the concept shows promise. The ability to deliver targeted fluid shifts in a more mobile, comfortable format could enable integration into dynamic operational settings. Future work should focus on expanding the system to cover larger areas, such as a whole-pants version, and incorporating a portable vacuum source for mobility in both spaceflight and terrestrial applications. Larger, more diverse participant cohorts will also be necessary to assess long-term usability, efficacy, and individual variability in response.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mantis: A Screen Magnification Tool for Diagram&#13;
Traversal</title>
<link href="https://hdl.handle.net/1721.1/162987" rel="alternate"/>
<author>
<name>Patterson, Lydia J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162987</id>
<updated>2025-10-07T04:13:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mantis: A Screen Magnification Tool for Diagram&#13;
Traversal
Patterson, Lydia J.
Complex diagrams and charts can be difficult for people who use screen magnification to navigate. A sense of spatial context and of the diagram’s overall structure is oftentimes lost, as magnifiers can only magnify a fraction of the screen at any given time. So, while sighted users have both clarity and full context simultaneously, screen magnifier users often have to choose or split their attention between the two. Existing screen magnifiers are content-agnostic, so the current way of navigating visualizations is freeform and unguided. The burden of figuring out where to explore while retaining a mental model of the diagram is placed entirely on the user. In this paper, we present Mantis—six prototypes of an automatic, content-aware screen magnification tool designed to aid people who have low vision in the traversal of diagrams. Each design experiments with what sorts of information might be provided to help the user retain a sense of context. Further, they each explore how such a tool might use its knowledge of the diagram’s semantic structure to streamline traversal to and from areas of interest to the user. To this end, we evaluate how these proofs of concept improve the user’s navigational experience and reduce the user’s cognitive load.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard</title>
<link href="https://hdl.handle.net/1721.1/162986" rel="alternate"/>
<author>
<name>Luong, Jacky K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162986</id>
<updated>2025-10-07T04:12:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard
Luong, Jacky K.
Teaching tools such as the Tragedy of the Commons (ToC) participatory simulation, developed by MIT STEP Lab, have the potential to develop different skills or knowledge compared to single-player educational games. ToC illustrates the challenges of managing shared resources, but its existing teacher dashboard may not be well-suited to support its growing use across various classrooms. Through surveying and interviewing educators along with observing classroom usage, the software's shortcomings and opportunities for improvement were identified. This resulted in the design and implementation of a redesigned teacher dashboard, including a new “central bank” feature that provides structure to support more complex simulations. Additional enhancements improved usability and performance. Evaluations with teachers and controlled playtests demonstrated that these changes show promise in enabling richer classroom dynamics and making facilitation easier. The findings underscore the importance of teacher experience in educational game design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists</title>
<link href="https://hdl.handle.net/1721.1/162985" rel="alternate"/>
<author>
<name>Liu, Andi</name>
</author>
<id>https://hdl.handle.net/1721.1/162985</id>
<updated>2025-10-07T04:13:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists
Liu, Andi
This thesis tests two design questions for Large Language Model (LLM) Chatbot Therapists: Which therapeutic school suits an LLM best, and does an explicit Theory-of-Mind (ToM) reflection improve outcomes? We prompted GPT-4.1-mini to act as eight therapists — CBT, Narrative, Psychodynamic, and SFBT, each with and without a ToM step — and held 240 simulated sessions with scripted AI patients. SFBT achieved the greatest projected PHQ-9 improvement (around 4 points), significantly higher than CBT, Narrative, or Psychodynamic approaches. Immediate distress (SUDS) fell modestly and uniformly across schools. ToM reasoning did not alter either measure. The findings show that extra “thinking time” might not automatically translate into therapeutic gain, but also highlight a current strength of LLMs: executing brief, rule-based therapies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Fiber Coupling with Actuated Mirrors</title>
<link href="https://hdl.handle.net/1721.1/162984" rel="alternate"/>
<author>
<name>Vel, Vetri Senthil</name>
</author>
<id>https://hdl.handle.net/1721.1/162984</id>
<updated>2025-10-07T04:13:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automated Fiber Coupling with Actuated Mirrors
Vel, Vetri Senthil
Almost all atomic physics experiments rely on precise alignment of lasers. For example, optical fields are used to cool, control, and image atoms in neutral atom arrays. In this thesis, we present a design for mirrors actuated by servos that allow the precise, repeatable alignment of lasers in free space optical setups. We then apply these actuated mirrors to automate fiber coupling, where laser beams are coupled from free space into a fiber waveguide. We present the theory of fiber coupling and use experimental data on the fiber coupling landscape to develop an accurate digital twin. Insights from the combination of the digital twin and experimental data are used to develop a fast and effective algorithm for automated fiber coupling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ACED: Automatic Concourse Event Detection</title>
<link href="https://hdl.handle.net/1721.1/162983" rel="alternate"/>
<author>
<name>Wagner, Luke A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162983</id>
<updated>2025-10-07T04:12:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">ACED: Automatic Concourse Event Detection
Wagner, Luke A.
Fans of the San Antonio Spurs often face long delays when traversing the arena or waiting for food. Automatic Concourse Event Detection (ACED) is a novel system designed for tracking these statistics in the Spurs’ arena in real time. We use existing machine learning models and introduce novel processing algorithms to identify the total number of people in each section throughout the arena in addition to tracking the wait times for different restaurants and restrooms. ACED collects and stores this information in a database, which could be used to present fans with up-to-date arena information in a live dashboard to assist them in their in-game decision making. This would improve the overall fan experience, which could encourage fans to buy tickets more frequently. We provide the San Antonio Spurs with a completed implementation of ACED, which is ready to be deployed within the Spurs’ arena.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultraviolet-C Powered Air Purifying Respirator (UVC&#13;
PAPR)</title>
<link href="https://hdl.handle.net/1721.1/162982" rel="alternate"/>
<author>
<name>Seeyave, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/162982</id>
<updated>2025-10-07T04:13:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ultraviolet-C Powered Air Purifying Respirator (UVC&#13;
PAPR)
Seeyave, Evan
The global challenge posed by pandemics, notably COVID-19, has underscored the critical need for advanced personal protective equipment (PPE). This thesis details the development and evaluation of a multi-stage powered air-purifying respirator (PAPR) incorporating direct ultraviolet-C (UVC) germicidal irradiation. The proposed PAPR aims to provide enhanced protection by actively sterilizing air through this UVC chamber immediately prior to inhalation. This approach offers an advantage over traditional filter-based PAPRs by removing both the need to replace filters and the need to pull air through them with high-power motors, while still neutralizing a broad spectrum of airborne pathogens, including viruses and bacteria. The primary objective of this research is to design, construct, and test a PAPR prototype capable of achieving a high inactivation rate (target 99.9%), thereby offering a robust solution for individuals in high-exposure environments. In addition to the UVC chamber, we also built an alternate ultraviolet-A (UVA) activated titanium dioxide (TiO2) photocatalytic oxidation (PCO) chamber. This work encompasses the overall design of the system, safety considerations, and testing to quantify its pathogen inactivation efficacy and to characterize system performance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music</title>
<link href="https://hdl.handle.net/1721.1/162981" rel="alternate"/>
<author>
<name>Shi, Iris</name>
</author>
<id>https://hdl.handle.net/1721.1/162981</id>
<updated>2025-12-15T15:52:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music
Shi, Iris
Beatgridding is a technique meant to aid DJs in aligning the beats of two different songs. By overlaying a grid of beat markers (a “beatgrid”) on top of a waveform representation of the track being beatgridded, a song’s beats can be visualized and thus easily matched to another’s. State-of-the-art DJ software—like rekordbox by the company AlphaTheta—will algorithmically generate beatgrids for songs. However, these beatgrids are not always accurate and can often be difficult to correct with only the software-provided tools. GridFix is a desktop application designed to be an auxiliary tool for rekordbox, allowing users to correct rekordbox-generated beatgrids by providing additional functionality that rekordbox does not. GridFix’s main advantage is its ability to let users make local changes to small, isolated sections of a beatgrid, a task that is quite hard to achieve in rekordbox. GridFix is fully compatible with rekordbox and fairly easy to learn how to use, as shown by user testing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph Metrics for Improving Cybersecurity on Software Dependency Networks</title>
<link href="https://hdl.handle.net/1721.1/162980" rel="alternate"/>
<author>
<name>Yao, Darren Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/162980</id>
<updated>2026-01-16T20:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph Metrics for Improving Cybersecurity on Software Dependency Networks
Yao, Darren Z.
Modern software ecosystems are deeply interconnected, allowing a vulnerability in a single component to propagate and affect many others. In this thesis, we model software ecosystems as directed graphs, and apply various graph-theoretic metrics to quantify security risk. We compare two deep learning frameworks (PyTorch and TensorFlow) with two traditional software frameworks (npm and PyPI), identifying critical properties of their dependency structures, which motivates several recommendations for improving software supply chain security.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning Robotic Cutting Operations</title>
<link href="https://hdl.handle.net/1721.1/162979" rel="alternate"/>
<author>
<name>Lunawat, Tarang</name>
</author>
<id>https://hdl.handle.net/1721.1/162979</id>
<updated>2025-10-07T04:12:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Planning Robotic Cutting Operations
Lunawat, Tarang
Classical planning and most PDDL variants operate on the assumption that the number and types of objects present in the environment are known at the time of initialization and neither can nor do change during plan execution. However, there are many domains in which it is helpful and necessary to be able to capture action (or environment) effects that are able to change the existence of objects rather than just facts about these objects. PDDLStream already provides a framework for "certifying" new facts about the environment as necessary throughout plan execution; I propose using PDDLStream to construct a principled way to reason over not just added facts, but also added or removed objects in the environment. In order to do this, I will work within the domain of cutting operations in the kitchen, as this is a domain that both necessitates a lot of object change as objects are cut and often requires chains of these generated objects to be fully reasoned over. Additionally, I will lay the groundwork to use this principled way to reason over new objects to implement different types of cutting operations in the kitchen, with the eventual goal of a robot planner being able to sequence different provided actions to more efficiently work with knives in the kitchen in a human-like manner.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Wavefront Estimation Algorithms for High-Contrast Imaging of Exoplanets</title>
<link href="https://hdl.handle.net/1721.1/162978" rel="alternate"/>
<author>
<name>Manojkumar, Saikrishna</name>
</author>
<id>https://hdl.handle.net/1721.1/162978</id>
<updated>2025-10-07T04:15:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Adaptive Wavefront Estimation Algorithms for High-Contrast Imaging of Exoplanets
Manojkumar, Saikrishna
The direct imaging of exoplanets orbiting stars outside our solar system remains one of the crucial tools available for answering whether life exists beyond Earth. The light from an Earth-like exoplanet is approximately ten orders of magnitude dimmer than that of its host star, and hence the imaging system of the telescope observing the exoplanet must suppress the starlight to achieve a “contrast” of 10⁻¹⁰ in the image. This is typically achieved using a coronagraph, which blocks the light from the star while allowing the light from the planet to pass through. However, some starlight that leaks through the coronagraph must be further removed in the search region for the exoplanet; this region is referred to as the dark hole or dark zone (DZ). Creating a DZ requires focal plane wavefront sensing and control techniques, which estimate the electric field of the starlight in the focal plane of the telescope using a camera and then inform the deformable mirrors (DMs) located upstream of the coronagraph to null these electric fields. Once the DZ is created with a desired contrast, there remain slow, high-order drifts in the optical system that cause the contrast to degrade over the long observation times of the science target. High-order wavefront sensing and control (HOWFSC) techniques are required to maintain the contrast in the DZ while observing a science target. Dark Zone Maintenance (DZM) is a technique that has demonstrated the ability to maintain the contrast in the DZ over long observation times. This algorithm utilizes an Extended Kalman Filter (EKF) to estimate the open-loop electric field at every pixel in the DZ and uses this information to inform the control algorithm.
The achievable contrast and contrast stability of DZM are determined by several key parameters: the optical system’s drift rate, the photon flux and associated shot noise in the measurement images, and the probe magnitude applied to the DMs for the estimation algorithm. This work quantifies the impact of the drift rate, photon rate, and probe magnitude on the performance of DZM by performing a parameter scan on high-contrast imaging testbeds. The parameter scan was performed on both the in-air High-contrast imager for Complex Aperture Telescopes (HiCAT) testbed at the Space Telescope Science Institute (STScI) and the in-vacuum Decadal Survey Testbed (DST) at the Jet Propulsion Laboratory (JPL). The parameter scan was run both in simulation and on the physical testbed using the contrast in the DZ as a performance metric, evaluated relative to the photon-noise theoretical bounds to assess the efficacy of the DZM algorithm. The substantial gap between the theoretical bounds and experimental results, on average 70 times worse on HiCAT, motivated the development and implementation of a new DZM algorithm that utilizes a separate EKF to estimate the modes of wavefront error derived from the DMs and uses that information to correct for the aberrations. This new modal EKF algorithm was tested with a similar parameter scan on the HiCAT simulator, demonstrating a nearly 5-fold improvement over the original DZM algorithm's simulated performance. The results of this work will inform the design of future algorithms to maintain high contrast during observations for upcoming space telescope missions such as the Habitable Worlds Observatory (HWO).
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incentivizing Data Contributions in Decentralized Collaborative Learning</title>
<link href="https://hdl.handle.net/1721.1/162977" rel="alternate"/>
<author>
<name>Wang, Yuxiao</name>
</author>
<id>https://hdl.handle.net/1721.1/162977</id>
<updated>2025-12-15T15:42:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incentivizing Data Contributions in Decentralized Collaborative Learning
Wang, Yuxiao
In a collaborative learning scheme such as the federated learning model, each user benefits from the data contribution of others. Previous work shows that the federated learning protocol can incentivize users to contribute more than in the competitive equilibrium by penalizing deviations. However, a central controller with access to all the data may raise privacy concerns. In this work, we construct a decentralized collaborative protocol in which users share data without relying on a centralized controller. We then extend this protocol to a repeated game and analyze the competitive equilibrium behavior, along with strategies users can implement to foster collaboration in the repeated setting of the protocol. We provide a quantitative analysis of free-rider behavior under decentralized protocols and compare the amount of information collected with decentralized protocols against that in the centralized protocol.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DisViz: Visualizing real-world distributed system logs with space time diagrams</title>
<link href="https://hdl.handle.net/1721.1/162976" rel="alternate"/>
<author>
<name>McMenamy, Josiah</name>
</author>
<id>https://hdl.handle.net/1721.1/162976</id>
<updated>2025-10-07T04:14:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DisViz: Visualizing real-world distributed system logs with space time diagrams
McMenamy, Josiah
This thesis aims to provide an intuitive debugging and learning tool for distributed systems that communicate by message passing. Understanding and debugging distributed systems can be challenging and slow to iterate on, so there is a need for tools that can speed up the time it takes to diagnose the root cause of a bug. There exists significant prior work in creating tools that can aid in the visualization and debugging of distributed system executions, such as the ShiViz log visualizer [13]. This work builds on top of these tools to provide more debugging information, handle large log files, and be easily instrumented in existing systems. We demonstrate using the tool to debug issues in an implementation of the Raft consensus algorithm [34].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring</title>
<link href="https://hdl.handle.net/1721.1/162975" rel="alternate"/>
<author>
<name>Nori, Divya</name>
</author>
<id>https://hdl.handle.net/1721.1/162975</id>
<updated>2025-12-15T17:19:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring
Nori, Divya
Protein binder design has been transformed by hallucination-based methods that optimize structure prediction confidence metrics, such as the interface predicted TM-score (ipTM), via backpropagation. However, these metrics are imperfect proxies for binding affinity and do not reflect the statistical likelihood of a binder–target complex under the learned distribution. In this work, we propose a principled alternative: an energy-based framework that directly extracts the statistical likelihood of a predicted binder–target complex from a structure predictor’s internal confidence distributions. Building on the Joint Energy-based Modeling (JEM) framework, we introduce pTMEnergy, a statistical energy function over structures that is derived from predicted inter-residue error distributions. We incorporate pTMEnergy into BindEnergyCraft (BECraft), a hallucination-based binder design pipeline that maintains the same optimization framework as BindCraft but replaces ipTM with our energy-based objective. Across a diverse panel of challenging protein targets, BECraft achieves higher in silico success rates compared to BindCraft, RFDiffusion, and ESM3. Beyond design, we evaluate pTMEnergy as an unsupervised scoring function for retrospective virtual screening tasks. Without any task-specific supervision or retraining, pTMEnergy consistently outperforms baseline methods across both protein–protein and protein–RNA interaction benchmarks. Our results demonstrate that confidence-derived energy functions offer a powerful and generalizable signal for binder design and scoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays</title>
<link href="https://hdl.handle.net/1721.1/162974" rel="alternate"/>
<author>
<name>Ouko, Edwin O.</name>
</author>
<id>https://hdl.handle.net/1721.1/162974</id>
<updated>2025-10-07T04:14:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays
Ouko, Edwin O.
Geothermal well arrays, which organize multiple geothermal wells into carefully planned geometric configurations, provide an opportunity to enhance energy production capacity and increase fault tolerance of geothermal systems. Closed-loop geothermal systems (CLGS), a type of geothermal well design, promise to allow harnessing of geothermal energy in any location with minimal adverse environmental impact. I demonstrate how the development of these emerging geothermal technologies could be accelerated by recent advances in large language models (LLMs) in conjunction with high-level high-performance programming languages like Julia. In particular, I focus on how LLMs could be used in design brainstorming and to increase efficiency in numerical modeling. I assess the potential of state-of-the-art LLMs such as ChatGPT, Gemini, Claude, Grok, and a domain-specific model, AskGDR, as expert assistants in geothermal research. Owing to the unpredictable reliability of LLMs, there is a constant need for objective evaluation benchmarks in various domains. I propose a novel approach, leveraging Google’s recently introduced AI tool, NotebookLM, to accelerate the generation of quantitative geothermal benchmarks using only new, unpublished questions. In addition, I propose the use of blackbox optimization as a computationally less costly alternative for approximating the optimal configuration of CLGS wells in a geothermal array, minimizing thermal interference and improving heat energy production. I evaluate several optimization strategies, including Bayesian optimization, particle swarm optimization, natural evolution strategies, differential evolution optimization, Nelder-Mead, and simulated annealing, on performance characteristics such as convergence speed and highest production capacity attained.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Transformer-Based Foundation Model for Human Microbiome Analysis</title>
<link href="https://hdl.handle.net/1721.1/162973" rel="alternate"/>
<author>
<name>Medearis, Nicholas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162973</id>
<updated>2025-10-07T04:14:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Transformer-Based Foundation Model for Human Microbiome Analysis
Medearis, Nicholas A.
The human microbiome plays a crucial role in maintaining our health. Alterations in the microbiome have been linked to various chronic conditions like autoimmune disorders, metabolic diseases, and cancer. While various tools have been developed to study the microbiome, each tool tends to be specialized for a specific task. To overcome this limitation, we report on the development of a foundation model pretrained on 13,524 human microbiome metagenomic samples. The model was then fine-tuned to predict the clinical status of the host. Our model was able to differentiate between healthy and diseased samples in 10-fold cross-validation on the training dataset with an accuracy of 83.7%. On an external validation dataset of 927 samples, our model had an accuracy of 74.9%. Notably, our model performed even better at differentiating diseases from one another. On the diseased samples in the training dataset, it classified samples with an accuracy of 93.3% in 10-fold cross-validation. Together, our results show that generative AI has the potential to transform microbiome research and advance personalized medicine.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards an Augmented Reality-based Cyber-Physical Production System Planner</title>
<link href="https://hdl.handle.net/1721.1/162972" rel="alternate"/>
<author>
<name>Mueller, David</name>
</author>
<id>https://hdl.handle.net/1721.1/162972</id>
<updated>2025-10-07T04:13:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards an Augmented Reality-based Cyber-Physical Production System Planner
Mueller, David
Investment in automation by small and medium-sized enterprise (SME) manufacturers in the United States has lagged behind their larger counterparts for decades, despite comprising a majority of the nation’s manufacturing industry. The cyber-physical production systems (CPPSs) introduced by Industry 4.0 promise to bolster productivity and efficiency, but only for those enterprises which invest in constituent technologies. These technologies are not easily integrated in existing factories, typically requiring installation of invasive infrastructure and continuous technical support. Robotic integration is typically performed by specialized third-party firms or by in-house staff with extensive technical training, such as engineers. SME manufacturers are particularly sensitive to the complexities of robot integration due to limited access to technologists and their need for frequent reconfiguration under economies of scope. This thesis introduces Marve: the Mobile Augmented Reality Visual Editor. Marve is a proof-of-concept Android application that enables line workers to directly configure and control an autonomous mobile robot (AMR)-backed hybrid intralogistics system using low-cost consumer hardware. Workers can use Marve’s augmented reality (AR)-based interface to define and visualize the essential geometry and components of such a system. Once configured, workers are able to simulate how the system would respond to their requests to move material throughout the factory. The use of AR enables extensive work to be done at the planning stage of CPPS integration by line workers themselves, bypassing the need for modeling by engineers. Marve relies exclusively on fiducials and visual-inertial odometry (VIO) for localization, and fiducial tags for object tracking, thus eliminating the need for supporting infrastructure. Taken together, these features make Marve an easy on-ramp for SMEs seeking to transition legacy production lines into the CPPSs of Industry 4.0.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling</title>
<link href="https://hdl.handle.net/1721.1/162971" rel="alternate"/>
<author>
<name>Liu, Katie</name>
</author>
<id>https://hdl.handle.net/1721.1/162971</id>
<updated>2026-01-16T19:18:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling
Liu, Katie
Machine learning inference in multi-tenant cloud environments leads to significant challenges when it comes to minimizing latency and resource contention, especially as models grow in size and complexity. This thesis addresses the cold start overhead and scheduling inefficiencies of multi-tenant ML serving by integrating the RayServe distributed model-serving framework into σOS, a cloud operating system that unifies container and serverless paradigms. The thesis also proposes two model-aware schedulers within σOS that intelligently route inference requests to reduce the number of cold starts: Model Colocation, which prioritizes placing requests on machines where the required model is already loaded, and Centralized Model Registry, which tracks globally available models to inform scheduling decisions. These policies proactively reduce model load times by reusing cached models. Experimental results on language translation workloads in an 8-node cluster show that these schedulers achieve a ≈ 50% reduction in average inference latency and eliminate roughly 4–5 cold starts per workload, compared to σOS’s default scheduler. Through this model-aware approach to scheduling, our work enables more efficient, scalable, and low-latency ML inference serving in multi-tenant cloud settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of efficiency-driven aircraft technology improvements on climate and air quality</title>
<link href="https://hdl.handle.net/1721.1/162970" rel="alternate"/>
<author>
<name>Shukla, Aditeya</name>
</author>
<id>https://hdl.handle.net/1721.1/162970</id>
<updated>2025-10-07T04:14:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact of efficiency-driven aircraft technology improvements on climate and air quality
Shukla, Aditeya
The impacts of commercial aviation on global climate and air quality have led to an industry-wide movement to reduce its environmental footprint. While technological developments in aircraft propulsion, materials, and aerodynamics aim to reduce fuel consumption and CO₂ emissions, these efforts often overlook the full climate and air quality impacts of aviation, especially the impacts of NOₓ, CO, HC, soot, and contrail emissions. This study assesses the environmental constraints associated with fuel-efficiency-driven advancements by modeling aircraft technologies across narrow-body, wide-body, and regional jet categories. By focusing on near-future technology insertions in materials, aerodynamics, and propulsion, we compute quantifiable environmental metrics such as temperature changes, global warming potentials, and monetized environmental damages. Our modeling shows that certain propulsion technologies, such as increased component polytropic efficiencies or higher allowable turbine-metal temperatures, can reduce fuel consumption by more than 10% under favorable re-optimizations of engine design. However, they often raise engine core pressures or temperatures in ways that increase NOₓ emissions indices by more than 30%. This can lead to worse air quality damages, offsetting some of the CO₂ savings and in some cases resulting in a 2% increase in environmental damages on a total net present value (NPV) basis. Primary structure material upgrades consistently reduce both fuel burn and NOₓ emissions. These improvements in air quality from reduced NOₓ result in a 10% reduction of the total NPV of environmental impacts. This analysis shows that fuel efficiency alone is an incomplete metric for understanding the environmental impact of an aircraft.
By offering a quantitative assessment of how near-future upgrades can affect both climate and air quality, this study also provides guidance on which technology paths are most effective in reducing the overall environmental impact of aviation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization</title>
<link href="https://hdl.handle.net/1721.1/162968" rel="alternate"/>
<author>
<name>Xu, Jessica J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162968</id>
<updated>2025-10-07T04:14:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization
Xu, Jessica J.
Neurodegenerative diseases, such as Alzheimer’s, impact many people worldwide and currently have no cure, making early detection essential for effective symptom management and intervention. Traditional diagnostic practices often rely on subjective clinical evaluations that can vary between practitioners, highlighting the need for more objective methods. The digital Symbol Digit Test (dSDT), administered via the Cognitive Health App on an iPad and using the ETVision Eye Tracking System, aims to provide an automated, reliable method to analyze patient cognitive function to detect early signs of impairment through capturing handwriting and gaze data. This thesis builds upon previous work by automating the synchronization of these two data modalities, refining definitions of learning behaviors, and developing pipelines for data processing and visualization. By creating a synchronized multimodal dataset, we can visualize participant behavior for more intuitive interpretation and draw meaningful conclusions. These contributions provide an end-to-end framework for analyzing behavior during the cognitive assessment and lay the groundwork for future development of diagnostic models to detect early signs of neurodegenerative diseases.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Location Verification for Spoofing Detection in Non-Terrestrial Networks</title>
<link href="https://hdl.handle.net/1721.1/162967" rel="alternate"/>
<author>
<name>Schatz, Ensign Nathan Caleb</name>
</author>
<id>https://hdl.handle.net/1721.1/162967</id>
<updated>2025-10-07T04:14:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Location Verification for Spoofing Detection in Non-Terrestrial Networks
Schatz, Ensign Nathan Caleb
Reliable location awareness is essential for the development of new services and applications in non-terrestrial networks (NTN). The ability of malicious users to report false location information poses a significant threat to NTN performance. This threat introduces the need for a flexible and robust location verification system (LVS) that can reliably detect malicious users. This paper proposes a single-satellite LVS based on round-trip time and angle-of-arrival measurements. We characterize several sources of uncertainty unique to the NTN scenario and examine their combined effect on positioning error. To detect spoofing probabilistically, we approximate the likelihood function for the unknown user position using a Gaussian mixture model and employ a likelihood ratio decision rule for location verification. Results display receiver operating characteristic curves to evaluate the LVS performance under various satellite ephemeris error conditions, spoofing distances, number of measurements available to the system, and wireless channel properties. The proposed LVS is shown to reliably detect spoofing among malicious users.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Triangle Splatting</title>
<link href="https://hdl.handle.net/1721.1/162966" rel="alternate"/>
<author>
<name>Xu, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/162966</id>
<updated>2025-10-07T04:14:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Triangle Splatting
Xu, Daniel
We develop a differentiable rendering method for recovering 3D meshes of scenes from 2D images. Unlike existing approaches, our method does not rely on a differentiable renderer and is compatible with any standard mesh rasterizer. To our knowledge, it is the first mesh-based differentiable rendering method that does not rely on visibility masks at all. Beyond these conceptual advancements, we implemented a set of highly optimized kernels that enable efficient scene representation on a sparse voxel grid, effectively overcoming the cubic scaling bottleneck faced by similar methods. These innovations result in promising performance on unbounded real-world scenes with complex backgrounds.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing</title>
<link href="https://hdl.handle.net/1721.1/162965" rel="alternate"/>
<author>
<name>Ortiz, Ciarra Celena</name>
</author>
<id>https://hdl.handle.net/1721.1/162965</id>
<updated>2026-01-16T19:55:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing
Ortiz, Ciarra Celena
Entering a microgravity environment induces cephalad fluid shifts that can lead to cardiovascular and renal-hormonal adaptations that can affect astronaut health and performance in space. Current monitoring strategies for fluid shifts lack the ability to track regional fluid shifts in real time, which limits countermeasure efficacy. This thesis presents the investigation and validation of prototype non-invasive radiofrequency (RF) sensors for regional fluid shift detection. Additionally, integrating feedback from these sensors into Lower Body Negative Pressure (LBNP) chambers could allow for the development of an adaptive LBNP regulation framework. Coaxial RF sensors were designed and characterized using tissue phantoms, and tested in a human subject study involving controlled LBNP exposure. Reflection coefficients (S₁₁ and S₂₂) were analyzed to detect regional fluid changes in arm and leg tissue. The preliminary results indicated a statistically significant decrease in the arm reflection coefficients (S₁₁) during active LBNP, which is consistent with fluid being pulled towards the lower body. The leg reflection coefficients (S₂₂) were more variable and did not exhibit statistically significant results, suggesting a need for further investigation into sensor placement and sensitivity. This work demonstrates the potential of wearable RF sensors for non-invasive fluid shift monitoring and lays the foundation for integrating fluid sensor feedback into adaptive LBNP control protocols to improve astronaut health monitoring and countermeasure personalization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Inference via Optimal Transport Ambiguity Sets</title>
<link href="https://hdl.handle.net/1721.1/162964" rel="alternate"/>
<author>
<name>Wang, Zheyu</name>
</author>
<id>https://hdl.handle.net/1721.1/162964</id>
<updated>2025-10-07T04:14:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Robust Inference via Optimal Transport Ambiguity Sets
Wang, Zheyu
Uncertainty quantification is pivotal for ensuring the safety and reliability of predictive algorithms in high-stakes applications—ranging from cancer diagnosis to autonomous driving. This challenge is exacerbated by distribution shift, in which the true data-generating distribution diverges from the nominal distribution on which our statistical methods were trained. In this thesis, we formalize distribution shifts via ambiguity sets—metric neighborhoods in the space of probability measures defined by distances such as the Wasserstein metric—and demonstrate that leveraging these ambiguity sets endows two widely used statistical algorithms with distributional robustness. The Kalman filter enables accurate, real-time tracking of latent states by assimilating noisy, indirect measurements over time. Its performance relies on precise state-space models for both the evolution dynamics and the observation process. In practice, uncertainties in these models introduce errors that can significantly degrade filter accuracy. Here, we review two robust Kalman-filter variants that explicitly account for such errors via Wasserstein ambiguity sets. Split conformal prediction, hereafter referred to as conformal prediction, offers a powerful framework for quantifying predictive uncertainty by constructing prediction intervals with finite-sample, distribution-free guarantees. Despite its widespread success, ensuring its validity under train-test distribution shifts remains a significant challenge. We model distribution shifts using ambiguity sets defined by two optimal transport-based metrics and propose two robust conformal prediction algorithms that preserve validity under these shifts. First, we consider ambiguity sets defined by a pseudo-divergence derived from the Lévy-Prokhorov (LP) metric, which captures both local and global data perturbations.
We provide a self-contained overview of LP ambiguity sets and their connections to widely used metrics such as the Wasserstein and Total Variation distances. We then establish a natural link between conformal prediction and LP ambiguity sets: by propagating the LP ambiguity set through the scoring function, we reduce complex high-dimensional distribution shifts to manageable one-dimensional shifts, enabling exact computation of the worst-case quantile and coverage. Building on this foundation, we develop valid robust conformal prediction intervals under distribution shifts, explicitly relating LP parameters to interval width and confidence levels. Experimental results on real-world datasets demonstrate the effectiveness of the proposed approach. Next, we extend our analysis to robust conformal prediction over Wasserstein-2 ambiguity sets, deriving a theoretical characterization of the worst-case quantile. However, we identify intractability due to the dependence on the shape of the original score CDF and conclude with potential future directions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theoretical Limits of Quantum Ranging</title>
<link href="https://hdl.handle.net/1721.1/162963" rel="alternate"/>
<author>
<name>Kartal, Bünyamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162963</id>
<updated>2025-10-07T04:14:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Theoretical Limits of Quantum Ranging
Kartal, Bünyamin
The ability to determine distances from dedicated measurements, namely active ranging, is crucial in a variety of systems including localization, radar, and lidar. This thesis establishes the quantum limits and determines the quantum advantage provided by single-beam displaced squeezed states in active ranging. Analytical expressions of the quantum Fisher information (QFI) are provided for monochromatic and continuous-mode waves passing through a thermal loss channel with arbitrary loss and noise conditions. The optimal allocation of system resources for performing displacement and squeezing operations is determined. The optimal allocation consists of apportioning all resources to perform either the displacement operation, providing no quantum advantage, or the squeezing operation. Analytical results are examined in optical and microwave regimes. The optimal gain, i.e., the ratio between the QFI obtained by optimal resource allocation and the QFI obtained by performing only the displacement operation, is derived for the optical and microwave regimes. Quantum advantage afforded by the prototypical heterodyne receiver is also investigated. The results of this thesis pave the way for establishing a foundation of active ranging and provide insights for system design employing currently available quantum technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalized Policy Learning with Planning</title>
<link href="https://hdl.handle.net/1721.1/162962" rel="alternate"/>
<author>
<name>Yang, Ryan P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162962</id>
<updated>2025-10-07T04:14:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generalized Policy Learning with Planning
Yang, Ryan P.
Generalized policy learning seeks to find policies that solve multiple tasks within a planning domain. We introduce methods to search for policies independently in a domain, starting from empty-initialized policies. As an extension, we also propose a problem setting for learning satisficing policies across domains. In an independent domain, we propose a score function to guide the policy search. Our approach, Policy-Guided Planning for Generalized Policy Generation (PG3), evaluates policies based on how well they can be used to plan. Empirically, we show that PG3 allows generalized policy learning to occur more efficiently than other baselines on PDDL-based problems with policies represented as lifted decision lists. Finally, our experiments show that independently learned policies are qualitatively similar, prompting further investigation into accelerating the policy search process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-Learning Exploration Strategies with Decision Transformers</title>
<link href="https://hdl.handle.net/1721.1/162961" rel="alternate"/>
<author>
<name>Welch, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162961</id>
<updated>2025-10-07T04:14:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Meta-Learning Exploration Strategies with Decision Transformers
Welch, Ryan
The problem of pure exploration in sequential decision-making is to identify strategies for efficiently gathering information to uncover hidden properties of an environment. This challenge arises in many practical domains, including clinical diagnostics, recommender systems, and educational testing, where data collection is costly and the effectiveness of exploration is critical. Efficient exploration in these contexts strongly depends on exploiting underlying structural relationships within the environment. For instance, recognizing that multiple medical tests may provide overlapping information can reduce the number of tests required to make a diagnosis. Existing exploration approaches drawn from reinforcement learning and active hypothesis testing typically rely on heuristic strategies that require explicit prior assumptions about such structural information. However, when this information is unknown, heuristic methods often lead to redundant exploration, significantly limiting their practical utility in high-stakes domains. Furthermore, these existing approaches do not leverage past experience to improve their exploration efficiency over time. To overcome these limitations, we introduce In-Context Pure Exploration (ICPE), a novel meta-learning framework capable of autonomously discovering and exploiting latent environmental structures across related tasks to guide efficient exploration. ICPE leverages the in-context learning and sequence-modeling capabilities of transformers, combined with supervised learning and deep reinforcement learning techniques to learn exploration strategies directly from experience. Through extensive experiments on synthetic and semi-synthetic exploration tasks, we demonstrate that ICPE is able to efficiently explore in deterministic, stochastic and highly structured environments without relying on any explicit inductive biases. 
Our results highlight the potential of ICPE to enable more practical exploration strategies suitable for real-world decision-making contexts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments</title>
<link href="https://hdl.handle.net/1721.1/162960" rel="alternate"/>
<author>
<name>Thirumalai, Vittal</name>
</author>
<id>https://hdl.handle.net/1721.1/162960</id>
<updated>2025-10-07T04:13:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments
Thirumalai, Vittal
Autonomous agents operating in real-world environments must make decisions under uncertainty, facing challenges such as partial observability, sparse rewards, and long-horizon planning. While reinforcement learning (RL) enables agents to learn from experience, standard policies often struggle to generalize in the presence of ambiguous tasks or incomplete information. Large language models (LLMs) can provide valuable semantic guidance, but their high computational cost and latency make constant querying impractical. This thesis introduces WhatWhen2Ask, a framework for cost-aware, confidence-driven querying of external multimodal large language models (MLLMs). The agent employs a Deep Q-Network (DQN) as its internal action planner, selectively querying open- and closed-source models (BLIP-2 and GPT-4o) in a hierarchical manner when its confidence is low and external guidance is likely to improve performance. Accepted hints are embedded and fused with structured state representations, supported by tailored reward shaping for improved learning in sparse environments. Evaluated in the HomeGrid environment, WhatWhen2Ask improves the success rate from 38% (DQN-only) to 54%, while querying in fewer than 6% of steps. Ablation studies show that semantic hints, confidence-based querying, selective hint filtering, and hierarchical fallback each contribute meaningfully to performance. These results suggest that principled, confidence-aware LLM querying can enhance decision-making in uncertain environments, offering a step toward more efficient and cost-aware language-augmented agents.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach</title>
<link href="https://hdl.handle.net/1721.1/162958" rel="alternate"/>
<author>
<name>Liu, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/162958</id>
<updated>2025-10-07T04:13:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach
Liu, Katherine
With the high volume of activity flowing through financial institutions, detecting potential errors remains a critical challenge. This paper addresses two key areas where errors may occur: business name registrations and transactions within valid accounts. Traditional string-matching methods struggle to accurately identify incorrectly written business names that closely resemble existing ones, while existing error detection models for transaction data often suffer from class imbalance, leading to reduced performance on minority incorrect transaction cases. To address these issues, this paper proposes two novel approaches. First, a hybrid method integrating multi-agent Large Language Models (LLMs) with existing string-matching techniques enhances the detection of incorrect business names by capturing subtle variations beyond conventional edit-distance metrics, improving recall from 0.815 for the baseline model to 0.987 with the proposed method. Second, an improved tabular data generation method for credit card transactions is introduced, leveraging LLMs and class balancing to generate high-quality synthetic data. Using this data to train error detection systems reduces the false negative rate from 23.47% to 12.84%. Together, these methods strengthen the performance of error detection systems, enabling financial institutions to improve the experiences of their clients.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Switching State Space Modeling via Constrained Inference&#13;
for Clinical Outcome Prediction</title>
<link href="https://hdl.handle.net/1721.1/162957" rel="alternate"/>
<author>
<name>Su, Arnold C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162957</id>
<updated>2025-10-07T04:14:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Switching State Space Modeling via Constrained Inference&#13;
for Clinical Outcome Prediction
Su, Arnold C.
In clinical settings, timely and accurate prediction of adverse patient outcomes can help guide treatment decisions. While deep learning models such as LSTMs have demonstrated strong predictive performance on multivariate clinical time series, they often lack interpretability. To address this gap, this thesis proposes a framework that combines the predictive strength of neural networks with the interpretability of latent variable models. Specifically, we develop a constrained inference approach to train a switching state space model—an autoregressive hidden Markov model (AR-HMM)—for outcome prediction. Our method leverages knowledge distillation: a high-capacity LSTM "teacher" model is first trained to predict a target clinical outcome of interest, and its predictive behavior is then transferred to an interpretable AR-HMM "student" model through a similarity constraint during inference. We implement a constrained variational inference approach to estimate the parameters of the student model while aligning its latent representations with those of the teacher model. We evaluated our approach using two real-world clinical datasets. Our approach demonstrates predictive performance comparable to state-of-the-art deep learning models, while producing interpretable latent trajectories that reflect clinically meaningful patient states.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Duality, Weight Decay, and Metrized Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/162956" rel="alternate"/>
<author>
<name>Newhouse, Laker</name>
</author>
<id>https://hdl.handle.net/1721.1/162956</id>
<updated>2025-10-07T04:14:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Duality, Weight Decay, and Metrized Deep Learning
Newhouse, Laker
Convincing evidence shows that the Muon optimizer is faster and more scalable than AdamW for deep learning training, setting speed records for training NanoGPT and scaling up to models with 16B parameters. The theory that led to Muon is called metrized deep learning, an approach that assigns norms to each part of a neural network. Chapter 1 begins with an accessible explanation of metrized deep learning, including one of its recurring tools: odd polynomial iterations that act directly on singular values. Chapter 2 reviews duality, a way to modify the gradient that seeks to decrease the loss the most while disturbing the model the least. Pedagogically, duality links four popular optimizers (SGD, Adam, Shampoo, and Muon) under a common framework: steepest descent under a norm. Practically, experiments suggest that duality-based optimizers train faster than AdamW and transfer learning rates across width. Chapter 3 develops tools to enforce weight norm constraints during training, conferring provable and upfront Lipschitz guarantees for transformers. We find that optimizer dynamics matter: switching from AdamW to Muon improves standard weight regularization methods (weight decay and spectral normalization), allowing models to reach equal performance with a lower Lipschitz bound. Leveraging the fact that Muon’s update has a fixed spectral norm, we co-design a weight constraint method called spectral cap that improves the Lipschitz vs. performance tradeoff for MLPs and 2M-parameter transformers. Our 4-Lipschitz transformer on Shakespeare text reaches 60% validation accuracy. Scaling to 145M parameters, our 600-Lipschitz transformer reaches 21% accuracy on internet text. However, to match the NanoGPT baseline validation accuracy of 39.4%, our Lipschitz upper bound increases to 10^274. Nonetheless, our Lipschitz transformers train without stability measures such as layer norm, QK norm, and tanh logit softcapping.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Sequence Uncertainty in Comparative Genomics&#13;
with a Probabilistic DNA Representation</title>
<link href="https://hdl.handle.net/1721.1/162955" rel="alternate"/>
<author>
<name>Zhao, Sarah Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/162955</id>
<updated>2025-10-07T04:13:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Sequence Uncertainty in Comparative Genomics&#13;
with a Probabilistic DNA Representation
Zhao, Sarah Ann
Uncertainty in nucleotide sequences is widespread in bioinformatics, arising from somatic mutations, population-level variation, sequencing errors, and ancestral state inference. Yet, standard formats like FASTA encode DNA deterministically using ASCII string characters, omitting this uncertainty and contributing to pervasive reference biases in genomics. Graph pangenomes have recently emerged to address these limitations by representing genetic variation across populations as bidirected graphs. While promising, these approaches are still developing and are not yet fully integrated with widely used linearly-referenced genomic tools and databases. To bridge this gap, I introduce pDNA (probabilistic DNA), a linearly-referenced data structure that encodes nucleotide-level uncertainty in a vector format compatible with traditional genomics workflows. Each position in a pDNA sequence is represented as a 4-dimensional probability vector over the four possible DNA nucleotides, inspired by position weight matrices and one-hot encodings. I also introduce pFASTA, a binary file format for efficient storage of pDNA sequences, along with an open-source software package for generating, manipulating, and analyzing these data. This framework enables uncertainty-aware sequence analysis while maintaining compatibility with existing genomics infrastructure. I apply this framework to ancestral sequence reconstruction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Acquisition of Simulatable Rigid Object Models</title>
<link href="https://hdl.handle.net/1721.1/162954" rel="alternate"/>
<author>
<name>Yang, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/162954</id>
<updated>2025-10-07T04:14:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Online Acquisition of Simulatable Rigid Object Models
Yang, Ethan
How can we build a robot that operates autonomously in a home environment over long periods of time? A key requirement is the ability to perceive and understand its surroundings, including the objects it will interact with. This thesis investigates how a robot can reconstruct previously unknown objects and integrate them into a physics simulation for planning. We explore two methods for reconstructing the 3D geometry of objects and test their performance in simulation and in real-world experiments. Our results demonstrate that a learned depth model enables 3D reconstruction of unknown objects and their successful integration into simulation environments. Additionally, we investigate methods for estimating an object’s inertial parameters, both from its reconstructed mesh and through manipulation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling contrastive learning batch size by two orders of magnitude</title>
<link href="https://hdl.handle.net/1721.1/162953" rel="alternate"/>
<author>
<name>Tian, Betsy</name>
</author>
<id>https://hdl.handle.net/1721.1/162953</id>
<updated>2026-01-16T20:14:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling contrastive learning batch size by two orders of magnitude
Tian, Betsy
Contrastive learning has emerged as a powerful framework for unsupervised representation learning, allowing models to learn by maximizing agreement between related samples and distinguishing dissimilar ones. However, contrastive learning frameworks are fundamentally limited by the number of negative pairs a model can observe, and memory-intensive backbones constrain practical batch sizes. We introduce a three-phase, adapter-augmented training framework that scales contrastive batch sizes by two orders of magnitude – surpassing previous state-of-the-art learners in both accuracy and speed. First, we co-train the backbone and adapter on small batches to establish a strong initialization. Next, we freeze the backbone and train the adapter alone with very large batches, exposing it to an enlarged negative pool. Finally, we transfer large-batch adapter gradients back into the backbone via segmented backpropagation. We evaluate our method on the PlacesAudio dataset and show promising results for boosting retrieval performance at each phase. By exposing the model to substantially more negatives per effective batch, we achieve higher accuracy at a faster speed than optimizer-stepping baselines. Ultimately, this approach that scales batch size by hundreds of times can be integrated into any contrastive learning framework for more robust representation learning and abundant negative sampling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography</title>
<link href="https://hdl.handle.net/1721.1/162952" rel="alternate"/>
<author>
<name>Rubel, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/162952</id>
<updated>2025-10-07T04:14:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography
Rubel, Evan
Early detection of lung cancer significantly improves patient outcomes, and tracking the growth of lung nodules over time is key to understanding their progression and informing future treatment decisions. However, calculating nodule growth in computed tomography (CT) scans remains a highly manual and time-consuming task. In this work, we develop an automated end-to-end pipeline to compute lung nodule growth using state-of-the-art computer vision techniques. While modern advances in deep learning have all but solved many learning tasks in the domain of natural images, biomedical imaging presents unique challenges due to limited data availability, inconsistent annotations, and deployment constraints. We address these challenges by training robust detection and segmentation models using the LUNA16 and LNDb datasets. On the held-out UniToChest dataset, our methods generalize well, attaining a nodule recall of 77.49%, reducing false positives per scan by a factor of 11.3 compared to existing techniques, and achieving a mean nodule-wise Dice score of 0.6453. We then apply our methods to analyze nodule growth in 1,378 patients from the National Lung Screening Trial; we estimate a median nodule volume-doubling time of 791.23 days across all nodules from the patients that do not receive a cancer diagnosis and a median nodule volume-doubling time of 637.38 days across all nodules from the patients that do receive a cancer diagnosis. We also recall 82.20% of radiologist-annotated nodules that are directly associated with a cancer diagnosis and estimate a shorter median nodule volume-doubling time of 370.11 days for these nodules. By automating lung nodule growth quantification, this work lays the foundation for improved screening protocols, personalized treatment planning, and the development of novel imaging biomarkers. To encourage further work in this area, we release our full software pipeline at https://github.com/evanrubel/nodule_volumes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of a 80 GHz Hybrid CMOS Dielectric Resonator Oscillator</title>
<link href="https://hdl.handle.net/1721.1/162951" rel="alternate"/>
<author>
<name>Louie, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/162951</id>
<updated>2025-10-07T04:13:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Analysis of a 80 GHz Hybrid CMOS Dielectric Resonator Oscillator
Louie, Tiffany
This work studies a high-frequency, low-phase-noise, hybrid CMOS oscillator based on a cylindrical dielectric resonator coupled directly to an on-chip structure. Dielectric resonators (DRs) are known for their high quality factor, low cost, and high temperature stability, which makes them a desirable frequency-selecting element for millimeter-wave (mmWave) applications. Current dielectric resonator oscillators (DROs) have proven to be phase stable, but are limited in frequency (&lt; 40 GHz) due to their implementation with discrete components. However, by increasing the operational frequency, it is possible to reduce the size of the DR and place it directly on top of a CMOS chip. Using a 22nm FD-SOI process, we demonstrate the design of an 80 GHz DRO with an area of 4mm² and an oscillator power consumption of 1.95mW. The DRO achieves a simulated phase noise of -128 dBc/Hz at 1 MHz and -148 dBc/Hz at 10 MHz.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LEO: an LLM-Powered EDA Overview</title>
<link href="https://hdl.handle.net/1721.1/162950" rel="alternate"/>
<author>
<name>Zheng, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/162950</id>
<updated>2025-10-07T04:13:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">LEO: an LLM-Powered EDA Overview
Zheng, Sophia
Computational notebooks impose a linear structure that impedes data analysts’ sensemaking process with overwritten cells, dead-end code, and fragmented logic. This challenge is especially pronounced when analysts either encounter a notebook authored by someone else or revisit a self-authored notebook after significant time has passed. In both cases, understanding the analysis code becomes convoluted and laborious. To address these barriers, we introduce LEO, a computational notebook tool that operationalizes notebook summarization by leveraging large language models to (1) cluster analysis patterns and (2) trace variable use. LEO organizes code into a two-level hierarchy (General Level Sections and Code Level Actions) integrated with in-line textual summaries filtered at the variable level, further supporting task-driven exploration. We evaluate the system’s effectiveness in a user study with five computational notebook users across two realistic use cases. Participants reported that LEO streamlined code comprehension and navigation of undocumented notebooks by allowing them to query variables and traverse code cells with greater ease.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Articulated 3D Scene Graphs from Egocentric Vision</title>
<link href="https://hdl.handle.net/1721.1/162949" rel="alternate"/>
<author>
<name>Yu, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/162949</id>
<updated>2025-10-07T04:13:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Articulated 3D Scene Graphs from Egocentric Vision
Yu, Alan
Robotic mapping systems typically build metric-semantic scene representations from the robot’s own sensors and cameras. However, these “first person” maps inherit the robot’s own limitations due to its embodiment or skillset, which may leave many aspects of the environment unexplored. For example, the robot might not be able to open drawers or access wall cabinets. In this sense, the scene graph is incomplete, and requires a more capable robot to fill in the gaps by remapping. We narrow these blind spots in current methods by leveraging egocentric data captured as a human naturally explores a scene wearing Project Aria glasses, giving a way to directly transfer knowledge about articulation from the human to any deployable robot. We demonstrate that, by using simple heuristics, we can leverage egocentric data to recover models of articulated object parts, with quality comparable to those of state-of-the-art methods based on other input modalities. We also show how to integrate these models into 3D scene graph representations, leading to a better understanding of object dynamics and object-container relationships. We finally demonstrate that these articulated 3D scene graphs enhance a robot’s ability to perform mobile manipulation tasks, showcasing an application where a Boston Dynamics Spot is tasked with retrieving concealed target items, given only the 3D scene graph as input.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/162948" rel="alternate"/>
<author>
<name>Strømstad, Filip Traasdahl</name>
</author>
<id>https://hdl.handle.net/1721.1/162948</id>
<updated>2025-10-07T04:13:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles
Strømstad, Filip Traasdahl
Multi-agent systems have seen a significant rise in research interest, enabled by the increasing availability of low-cost autonomous platforms and motivated by a wide range of emerging applications. However, the coordinated deployment of large numbers of autonomous vehicles in marine environments remains a nontrivial and high-risk problem, yet it is often overlooked in the literature. These vehicles are typically deployed from a single location, and their underactuated nature, close proximity, and susceptibility to external disturbances make it difficult to achieve a mission-ready configuration without collisions. In this thesis, we address the problem of transitioning a set of underactuated Autonomous Surface Vehicles (ASVs) from arbitrary and inconvenient initial conditions to a deconflicted set of deployed vehicles. We propose a decentralized and scalable method that calculates and assigns target positions to the vehicles, generates optimal paths that comply with minimum turning radius constraints, and ensures collision avoidance between the vehicles through a shared speed policy. Contributions also include a formal definition and quantification of clustering and declustering in multi-agent systems. The approach is implemented using the MOOS-IvP autonomy framework, and performance is evaluated through simulation with up to 64 vehicles and extensive field trials with eight vehicles. Results demonstrate that our approach reduces the time to decluster for the most challenging initial conditions by 50% compared to the current manual method. By improving efficiency and robustness while eliminating human involvement, this work streamlines ASV fleet deployments, enabling more scalable multi-agent field operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DBOS Advanced Network Analysis Capability for Collaborative Awareness</title>
<link href="https://hdl.handle.net/1721.1/162947" rel="alternate"/>
<author>
<name>Lockton, Sophia E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162947</id>
<updated>2025-10-07T04:13:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DBOS Advanced Network Analysis Capability for Collaborative Awareness
Lockton, Sophia E.
Collaborative cyber defense is an essential strategy for detecting and mitigating cyber threats [1]. As traditional intrusion detection systems struggle against increasingly sophisticated attacks, we propose embedding collaborative cyber defense directly into system infrastructure. This work presents a novel implementation of collaborative awareness within DBOS (a Database-Oriented Operating System), resulting in a platform that significantly accelerates application development while providing built-in security for transactional web services. By treating security as a first-class operating system service, our approach facilitates real-time comprehensive network observation and analysis without the need for external tools. The implementation supports the construction, aggregation, and analysis of traffic matrices using both Python and PostgreSQL-based workflows. These workflows extract and process IP-level metadata from DBOS applications, enabling multi-instance aggregation and analysis of network data. This integration represents the first instance of collaborative network analysis within an operating system runtime, demonstrating that secure-by-default infrastructure is both feasible and performant.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minding the Politeness Gap in Cross-cultural Communication</title>
<link href="https://hdl.handle.net/1721.1/162946" rel="alternate"/>
<author>
<name>Machino, Yuka</name>
</author>
<id>https://hdl.handle.net/1721.1/162946</id>
<updated>2025-10-07T04:13:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minding the Politeness Gap in Cross-cultural Communication
Machino, Yuka
Misunderstandings in cross-cultural communication often arise from subtle differences in interpretation, but it is unclear whether these differences arise from the literal meanings assigned to words or from more general pragmatic factors such as norms around politeness and brevity. In this paper, we report three experiments examining how speakers of British and American English interpret intensifiers like “quite” and “very,” finding support for a combination of semantic and pragmatic factors. To better understand these differences, we developed a computational cognitive model where listeners recursively reason about speakers who balance informativity, politeness, and utterance cost. A series of model comparisons suggest that cross-cultural differences in intensifier interpretation stem from (1) different literal meanings and (2) different weights on utterance cost. These findings challenge accounts based purely on semantic variation or politeness norms, demonstrating that cross-cultural differences in interpretation emerge from an intricate interplay between the two.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Analysis of Neural Architectures and Side&#13;
Information in Financial Time Series Forecasting</title>
<link href="https://hdl.handle.net/1721.1/162945" rel="alternate"/>
<author>
<name>Senthil, Swathi</name>
</author>
<id>https://hdl.handle.net/1721.1/162945</id>
<updated>2025-10-07T04:12:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Empirical Analysis of Neural Architectures and Side&#13;
Information in Financial Time Series Forecasting
Senthil, Swathi
This thesis investigates the predictive capabilities of neural networks in financial time series forecasting, focusing on predicting the weekly close price of the SPY index. We explore the integration of options-derived features alongside traditional price data, compare recurrent architectures and transformer-based models, and evaluate multiple training strategies. Our key contributions include: (1) evidence that options-derived input features improve both error metrics and directional accuracy; (2) a comparison study of four training methods (one-step-ahead, direct multi-step, simulation error, and teacher-forcing); (3) the development of a bidirectional GRU-LSTM hybrid model that outperforms standard recurrent networks in multi-step forecasting; and (4) a novel coarse tokenization approach for discretizing continuous financial data, which improves first-week prediction performance when used in transformer models that use an asymmetric attention mechanism. Overall, this thesis illustrates the importance of input design, model architecture, and training methodology in neural financial forecasting. We conclude by outlining directions for future work, including cross-asset generalization and further exploration of tokenization schemes for transformer-based models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating LLM Hallucination in the Banking Domain</title>
<link href="https://hdl.handle.net/1721.1/162944" rel="alternate"/>
<author>
<name>Sert, Deniz Bilge</name>
</author>
<id>https://hdl.handle.net/1721.1/162944</id>
<updated>2025-10-07T04:13:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mitigating LLM Hallucination in the Banking Domain
Sert, Deniz Bilge
Large Language Models (LLMs) offer significant potential in the banking sector, particularly for applications such as fraud detection, credit approval, and enhancing customer experience. However, their tendency to "hallucinate"—generating plausible but inaccurate information—poses a critical challenge. This thesis examines existing strategies for mitigating LLM hallucinations and proposes a novel approach to reduce hallucinations in the context of predicting customer churn using LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Layered Unlearning for Adversarial Relearning</title>
<link href="https://hdl.handle.net/1721.1/162943" rel="alternate"/>
<author>
<name>Qian, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/162943</id>
<updated>2025-10-07T04:13:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Layered Unlearning for Adversarial Relearning
Qian, Timothy
Our goal is to understand how post-training methods, such as fine-tuning, alignment, and unlearning, modify language model behavior and representations. We are particularly interested in the brittle nature of these modifications that makes them easy to bypass through prompt engineering or relearning. Recent results suggest that post-training induces shallow context-dependent “circuits” that suppress specific response patterns. This could be one explanation for the brittleness of post-training. To test this hypothesis, we design an unlearning algorithm, Layered Unlearning (LU), that creates distinct inhibitory mechanisms for a growing subset of the data. By unlearning the first &#119894; folds while retaining the remaining &#119896; − &#119894; at the &#119894;th of &#119896; stages, LU limits the ability of relearning on a subset of data to recover the full dataset. We evaluate LU through a combination of synthetic and large language model (LLM) experiments. We find that LU improves robustness to adversarial relearning for several different unlearning methods. Our results contribute to the state of the art in machine unlearning and provide insight into the effect of post-training updates.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks</title>
<link href="https://hdl.handle.net/1721.1/162942" rel="alternate"/>
<author>
<name>Qian, Janet</name>
</author>
<id>https://hdl.handle.net/1721.1/162942</id>
<updated>2025-10-07T04:13:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks
Qian, Janet
Bayesian optimization (BO) is a powerful framework for optimizing expensive black-box functions, widely used in domains such as materials science, engineering design, and hyperparameter tuning. Traditional BO relies on Gaussian processes (GPs) as surrogate models, but GPs face limitations in flexibility and scalability. Prior-Data Fitted Networks (PFNs) have recently emerged as a promising alternative, leveraging transformer architectures and in-context learning to approximate posterior predictive distributions (PPDs) in a single forward pass. By training on large amounts of synthetically generated data from sample-able function priors, PFNs can learn to rapidly predict PPDs across a wide range of function classes. In this thesis, we investigate the application of PFNs to mixed-variable BO, a particularly challenging setting due to the interplay between continuous and discrete inputs and the combinatorial complexity of the search space. We evaluate how PFNs perform when integrated with a range of mixed-variable BO strategies, including various encoding schemes and discrete-aware acquisition optimization. Additionally, we explore how fine-tuning PFNs on targeted function priors can enhance performance when prior knowledge about the objective is available. Our contributions include empirical evaluations of mixed-variable BO techniques, insights into PFN training, and a suite of mixed-variable benchmark problems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering</title>
<link href="https://hdl.handle.net/1721.1/162941" rel="alternate"/>
<author>
<name>Ravuri, Chaitanya</name>
</author>
<id>https://hdl.handle.net/1721.1/162941</id>
<updated>2025-10-07T04:13:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering
Ravuri, Chaitanya
Modern code-generation LLMs can already solve a large fraction of programming problems, yet they still hallucinate subtle bugs that make their outputs unsafe for autonomous deployment. We present functional clustering, a black-box wrapper that eliminates nearly all hallucination-induced errors while providing a tunable confidence score. The wrapper samples many candidate programs, executes each on a self-generated test suite, and clusters candidates whose I/O behavior is identical; the empirical mass of the largest cluster serves as an exact confidence estimate. A single scalar threshold on this estimate lets users trade coverage for reliability with exponential guarantees. On LiveCodeBench our verifier preserves baseline pass@1 on solvable tasks yet slashes the error rate of returned answers from ∼65% to 2%, and drives it to 0% at a conservative threshold while still answering 15.6% of prompts. Manual audits show that the few residual mistakes stem from prompt misinterpretation, not random generation noise, narrowing future work to specification clarity. Because the method requires only sampling and sandbox execution, it applies unchanged to closed-source APIs and future models, offering a practical path toward dependable, autonomous code generation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Choosing Networks for Ride-Hailing Platforms</title>
<link href="https://hdl.handle.net/1721.1/162940" rel="alternate"/>
<author>
<name>Somsirivattana, Thana</name>
</author>
<id>https://hdl.handle.net/1721.1/162940</id>
<updated>2025-10-07T04:13:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Choosing Networks for Ride-Hailing Platforms
Somsirivattana, Thana
The development of autonomous vehicles is poised to reshape the landscape of transportation. As companies prepare to deploy these vehicles on ride-hailing platforms, a key operational challenge is determining the networks on which to train the vehicles. Our work contributes toward addressing this challenge on three fronts. First, we develop a theoretical model of the network selection problem and prove theoretical results that show the importance of two parameters: the detour factor and the fleet size. Second, we develop several approaches for selecting the networks. Third, we evaluate these approaches on empirical data. We find empirical support for the importance of the detour factor and the fleet size.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PyGridSim: A Functional Interface for Distributed System Simulation</title>
<link href="https://hdl.handle.net/1721.1/162939" rel="alternate"/>
<author>
<name>Zhao, Angela M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162939</id>
<updated>2025-12-11T16:38:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">PyGridSim: A Functional Interface for Distributed System Simulation
Zhao, Angela M.
This thesis details the development of PyGridSim, an open-source Python module that leverages OpenDSS capabilities to provide an efficient and scalable functional interface for building distributed system simulations. Distributed power systems encompass all components that power an electrical system—from larger power plants to microgrids—and represent the network of electric consumption and production in a system. Simulations of such power systems allow experts to analyze potential faults and risks in a fast, reproducible, and cost-efficient way. Thus, the accessibility of such simulations is critical to supporting the safety and reliability of power systems. While existing packages built for distributed system simulation provide the necessary computing power and customizability of a distributed system simulator, their interfaces are hard to scale over many nodes and often have difficult-to-learn syntax. PyGridSim aims to build on these existing modules—maintaining customizability while providing a flexible, intuitive, and scalable syntax structure.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving the Programmability of A Distributed Hardware Accelerator</title>
<link href="https://hdl.handle.net/1721.1/162938" rel="alternate"/>
<author>
<name>Shwatal, Nathan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162938</id>
<updated>2025-10-07T04:13:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving the Programmability of A Distributed Hardware Accelerator
Shwatal, Nathan A.
Sparse iterative matrix algorithms are critical to many scientific and engineering workloads, yet they perform poorly on conventional hardware. Ōmeteōtl, a new hardware accelerator with a distributed-memory and task-based execution model, aims to address these performance bottlenecks. However, programming for Ōmeteōtl is low-level, error-prone, and far removed from the simplicity of typical iterative formulations. This thesis presents Lapis, a domain-specific language and compiler that allows users to express sparse matrix algorithms in high-level Python code and automatically generates efficient C++ code for Ōmeteōtl. Lapis abstracts away data partitioning and task orchestration, reducing implementation complexity: for example, it lowers lines of code by 30× for conjugate gradients and 46× for power iteration. Despite this abstraction, generated code achieves 75.7% to 92.6% of the performance of manually written implementations across several benchmarks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow</title>
<link href="https://hdl.handle.net/1721.1/162937" rel="alternate"/>
<author>
<name>Mao, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/162937</id>
<updated>2025-10-07T04:12:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow
Mao, Grace
This work presents a computational investigation of the influence of geometric configurations within a hypersonic flow field on optical distortion, with a particular focus on the effects of window deformation and the role of thermochemical modeling compared to perfect gas assumptions. Turbulent RANS and conjugate heat transfer were used to model three 3D geometries in US3D, an unstructured-grid finite volume computational fluid dynamics (CFD) solver. The three investigated geometries are a flat plate with a flush-mounted sensor, an open cavity with a length-to-depth ratio of 2, and a closed cavity with a length-to-depth ratio of 16. The data demonstrate that the flat plate configuration has the best optical performance and that the closed cavity has the worst. Additionally, the inclusion of thermochemistry in the flow simulation results in a more pessimistic outlook on image quality compared to the perfect gas model. The results document optical distortion for several different geometries with and without thermochemical modeling within hypersonic flow that can inform future design decisions and research.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BlueVeri: Formal Security Verification for Bluespec Processor Designs</title>
<link href="https://hdl.handle.net/1721.1/162936" rel="alternate"/>
<author>
<name>Wang, Shih-Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/162936</id>
<updated>2025-10-07T04:13:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">BlueVeri: Formal Security Verification for Bluespec Processor Designs
Wang, Shih-Yu
There are numerous hardware security defense mechanisms designed to mitigate side-channel attacks. However, ensuring that a defense can comprehensively protect against an entire class of attacks, while avoiding the introduction of new vulnerabilities that could lead to additional attack surfaces, remains a significant challenge. Although researchers have attempted to apply formal verification techniques to hardware security, these efforts have been hindered by scalability issues. In this paper, we introduce BlueVeri, a systematic and automatable approach for formally verifying the security of a Bluespec processor against speculative execution attacks. BlueVeri leverages the high-level information provided by Bluespec’s guarded atomic actions, simplifying and accelerating the verification process. We evaluate BlueVeri on out-of-order processors implemented in Bluespec, demonstrating that our approach substantially enhances verification scalability and is capable of proving the security properties of a minimal out-of-order processor within one hour.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stairway to Autonomy: Hierarchical Decision-Making for LLM-Guided Planning, Bandit-Driven Exploration, and Multi-Agent Navigation</title>
<link href="https://hdl.handle.net/1721.1/162935" rel="alternate"/>
<author>
<name>Nayak, Siddharth Nagar</name>
</author>
<id>https://hdl.handle.net/1721.1/162935</id>
<updated>2025-10-07T04:09:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stairway to Autonomy: Hierarchical Decision-Making for LLM-Guided Planning, Bandit-Driven Exploration, and Multi-Agent Navigation
Nayak, Siddharth Nagar
Autonomous multi-agent systems must efficiently plan, explore, and navigate in dynamic and unknown environments, particularly for tasks like search &amp; rescue and environmental monitoring. These settings are often characterized by partial observability, limited communication, and dynamic objectives that require flexible coordination across agents. Designing autonomy that scales with team size and task complexity requires modular decision-making systems capable of high-level reasoning, information-driven exploration, and robust decentralized execution. This dissertation presents a hierarchical decision-making framework that addresses these challenges across three complementary levels of autonomy: high-level planning, adaptive exploration, and decentralized scalable navigation. At the highest level, LLaMAR (Language Model-based Long-Horizon Planner for Multi-Agent Robotics) leverages large language models (LLMs) to decompose long-horizon tasks into structured subtasks, enabling agents to adapt their strategies dynamically. However, the effective execution of these plans requires knowledge about the environment. Our mid-level exploration strategy, BaTMaN (Bandit-based Tracking and Monitoring and Navigation), systematically prioritizes waypoints that maximize information gain while balancing real-world constraints such as energy efficiency and sensor reliability. Finally, InforMARL provides scalable, decentralized navigation by leveraging graph-based local information aggregation, improving sample efficiency, and demonstrating transferability to unseen team sizes. This dissertation develops each of these modules to address a distinct level of the autonomy stack. LLaMAR functions as the high-level planner, translating natural language goals into structured sequences of subtasks and incorporating real-time corrections through a plan-act-correct-verify cycle.
BaTMaN serves as the mid-level exploration engine, guiding sensor-equipped agents to prioritize informative regions based on uncertainty. InforMARL operates at the execution level, enabling decentralized agents to navigate through dynamic environments using graph-based local information aggregation and reactive control policies. Each module is independently deployable and optimized for different challenges: strategic reasoning, data-efficient monitoring, and scalable navigation, respectively. When combined, the three modules form a coherent autonomy stack for multi-agent systems operating under uncertainty.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Churn Prediction and Infrastructure Resilience</title>
<link href="https://hdl.handle.net/1721.1/162934" rel="alternate"/>
<author>
<name>Agrawal, Shreeansh</name>
</author>
<id>https://hdl.handle.net/1721.1/162934</id>
<updated>2025-10-07T04:12:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Churn Prediction and Infrastructure Resilience
Agrawal, Shreeansh
This thesis investigates how advanced machine learning methods can effectively address two critical business challenges facing the telecommunications industry: short-term customer churn prediction and long-term infrastructure resilience to climate-driven disruptions.

In the first part of this work, I develop an upgrades-informed churn forecasting model tailored specifically for marketing operations. Recognizing limitations in the existing aggregate forecasting methodologies, I create a cohort-based cascade model that explicitly integrates customer upgrade behavior across various contract tenures. To address data sparsity and longitudinal gaps in newer contract types, I employ synthetic data generation and imputation techniques, such as regression-based methods and Multivariate Imputation by Chained Equations (MICE). For forecasting churn and upgrade rates, I prioritize interpretability by applying linear regression enhanced with time-series forecasting techniques and macroeconomic indicators, including the Consumer Price Index. This approach significantly improves forecasting accuracy, aligns internal stakeholder objectives, and supports strategic decision-making around customer retention and promotional offers.

The second part focuses on building predictive models and strategic frameworks for long-term infrastructure resilience in the face of increasing climate risks. Leveraging spatial-temporal clustering methods (DBSCAN) and advanced neural network architectures, I develop a model to attribute historical outages to extreme weather events. Further, I integrate this model with future climate scenarios from CMIP5 projections using Monte Carlo simulations, providing actionable insights into future infrastructure vulnerabilities. Employing SHapley Additive exPlanations (SHAP), I interpret model predictions, highlighting critical factors such as precipitation, windspeed, and atmospheric pressure. Additionally, I propose frameworks for quantifying financial impacts of future outages and recommend optimization strategies for proactive infrastructure hardening and emergency response.

Collectively, these applications demonstrate the value of strategically employing interpretable and robust machine learning methodologies to enhance short-term operational decisions and long-term strategic planning within telecom organizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Segmentation Based Tracking for Aerial Robot Global Localization in Unstructured Environments with Oblique Monocular Camera Orientation</title>
<link href="https://hdl.handle.net/1721.1/162933" rel="alternate"/>
<author>
<name>Shafferman, Hannah R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162933</id>
<updated>2025-10-07T04:13:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Segmentation Based Tracking for Aerial Robot Global Localization in Unstructured Environments with Oblique Monocular Camera Orientation
Shafferman, Hannah R.
In the field of robotics, there has been a growing interest in multi-robot systems and their potential to improve the efficiency, scale, and reliability of tasks beyond what an individual robot can achieve. Global localization is a crucial task for autonomous robot navigation, specifically in the multi-agent scenario where robots need to localize within maps communicated by other agents. The scenario where vehicles are viewing their environments from the same perspective, or camera viewpoint, is well studied. However, when environments are mapped from different camera viewing angles, traditional methods fail to match visual features and thus fail to localize. The technical gap that this thesis addresses is when autonomous vehicles within a team are mapping the same environment from different viewpoints, specifically nadir and oblique camera orientations in an unstructured environment. Many existing visual place recognition (VPR) methods fail to match visual features that look visually different due to appearance, illumination, or viewpoint changes and thus fail to localize. In this thesis, we demonstrate the shortcomings of previous work to generalize to an off-nadir camera angle and explore the benefits and challenges that arise with utilizing oblique imagery for visual feature detection and tracking. We propose a segmentation-based object tracking pipeline to improve tracking and environment mapping performance in this traditionally challenging scenario. Our approach consists of 1) a front-end auto-segmentation tracking pipeline followed by 2) a submap correspondence search, which exploits geometric consistencies between environment maps to align vehicle reference frames. We evaluate our approach on a challenging indoor, cluttered dataset and demonstrate a maximum precision 74% higher than traditional and learning-based baseline methods, with a map size 0.5% the size of the most memory-conservative traditional baseline method.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/162932" rel="alternate"/>
<author>
<name>Sonandres, Kyle A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162932</id>
<updated>2025-10-07T04:13:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty
Sonandres, Kyle A.
Aerocapture is an orbital insertion maneuver that converts a hyperbolic approach trajectory into a desired captured orbit using the aerodynamic forces generated during a single atmospheric pass. While it offers major benefits, such as reduced interplanetary cruise time and lower propellant mass reserves, it also introduces significant risk due to extreme sensitivity to atmospheric and delivery state uncertainties. This drives the need for robust guidance algorithms and accurate environmental estimation techniques. This thesis presents approaches to address both of these needs, developing solutions to improve aerocapture performance and robustness to uncertainty. The first contribution is the development of ABAMGuid+, a novel aerocapture guidance algorithm that leverages simultaneous control over bank angle and angle of attack. Inspired by optimal control theory, the algorithm uses a four-phase structure to mimic the optimal control laws while maintaining tractability for online use. Optimal control theory is utilized to identify the optimal control solutions, and numerical optimization is used to validate the analytic solutions prior to integration into a guidance algorithm. Extensive simulation results of a Uranus aerocapture scenario, including over 140,000 Monte Carlo trajectories, demonstrate significant improvements in capture success rates and propellant efficiency compared to existing methods. The second contribution addresses environmental uncertainty directly by developing a deep learning-based approach to estimate the atmospheric density profile during flight. A long short-term memory (LSTM) neural network-based architecture is trained to predict atmospheric density given sequences of flight data. The trained model is integrated into the guidance loop and a curriculum learning process is used to refine in-flight performance. 
Monte Carlo results show that the LSTM-augmented guidance system reduces propellant usage compared to traditional estimation methods. In summary, this thesis presents two approaches that improve aerocapture performance and robustness to uncertainty. We show that this added robustness can be achieved both by expanding algorithmic ability and by improving environmental estimation approaches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations</title>
<link href="https://hdl.handle.net/1721.1/162931" rel="alternate"/>
<author>
<name>McGee, Carissma</name>
</author>
<id>https://hdl.handle.net/1721.1/162931</id>
<updated>2025-12-10T00:31:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations
McGee, Carissma
Gravitational microlensing is a phenomenon in which a foreground star or planet briefly magnifies light from a more distant background star. This effect enables the discovery of exoplanets that are otherwise undetectable, including those orbiting faint hosts and at large separations. Microlensing is well suited to characterizing exoplanets beyond the snow line, revealing mass ratios and orbital geometries inaccessible to transit or radial velocity methods. The Nancy Grace Roman Space Telescope will carry out the Galactic Exoplanet Survey to detect thousands of microlensing events with the cadence and precision necessary for statistical exoplanet population studies. To verify Roman’s ability to meet its core science requirement (recovering the lens mass and distance in at least 40% of planetary events with better than 20% uncertainty), targeted simulations are essential. Using the pyLIMASS inference framework and Fisher matrix-based uncertainty propagation, I demonstrate that for the well-characterized event OGLE-2013-BLG-0132Lb, the lens mass can be constrained to within 18.7% uncertainty, validating the feasibility of Roman’s requirement on a case-study basis. This thesis also addresses the legal and policy foundations needed to ensure global access to these simulation tools. By advancing open-source software models and proposing a space IP framework for equitable knowledge sharing, it supports collaborative scientific infrastructure for future international space missions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combined Steam Power Cycle and Turbofan Engine for Improvement in Aviation Climate Impacts</title>
<link href="https://hdl.handle.net/1721.1/162930" rel="alternate"/>
<author>
<name>Mueller, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/162930</id>
<updated>2025-10-07T04:13:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Combined Steam Power Cycle and Turbofan Engine for Improvement in Aviation Climate Impacts
Mueller, Anna
Despite significant innovations in aviation technology over the last 70 years resulting in enormous efficiency improvement, the rising demand for air travel means that aviation carbon emissions continue to increase each year. The rate of improvement to aircraft propulsion engines is diminishing, and additional improvements often add significant engine cost or weight. With the goal of reducing aviation’s contribution to global climate change, future aircraft engine designers must consider concepts that stray from the traditional turbofan engine. In this thesis, I develop an engine cycle model combining the turbofan engine with a steam power cycle and use the model to explore the benefits of applying this concept to aircraft engines. In order to study the impact on engine performance and emissions from adding a steam cycle, the engine model needs to be capable of representing the water phase changes and the heat exchangers required to drive those phase changes. My contribution is the development of such a model – with special attention to the modeling of water properties and phase change of water – which ties heat exchanger models into an engine thermodynamic model. The engine cycle as well as heat exchanger parameters including water-to-air ratio, combustor exit temperature, overall pressure ratio, and water pressure are varied to explore the impact on overall engine performance, including the impact of the added heat exchanger weight. This thesis covers the development and initial testing of this model, which enables future studies in engines with phase-changing heat exchangers or water injection with the goal of assisting the search for the future engine technologies that will reduce harmful impacts of aviation while continuing to allow air travel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems</title>
<link href="https://hdl.handle.net/1721.1/162929" rel="alternate"/>
<author>
<name>Hoss, Summer A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162929</id>
<updated>2025-10-07T04:13:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems
Hoss, Summer A.
There are countless challenges associated with the accurate modeling of the hypersonic flight of ablative thermal protection systems (TPS): resolving the relevant coupled physical phenomena through multi-physics simulations, the management of the disparate spatiotemporal scales associated with the fluid and solid responses, and establishing a reliable numerical model able to predict the response of ablative materials exposed to extreme gradients—to name a few. The two-way, loosely coupled framework presented in this thesis consists of ΣMIT, a multi-physics computational solid mechanics (CSM) code, coupled with US3D, a hypersonic computational fluid dynamics (CFD) solver, to form a complete aero-thermo-chemo-mechanical simulation framework. The ΣMIT-US3D coupling framework provides a step towards high-fidelity simulation capabilities for hypersonic vehicles with ablative TPS, establishing a strong foundation for the simulation of fluid-structure interaction (FSI) phenomena and computation of the mechanical response of porous ablators. The requirement of a robust numerical formulation for the solution of hypersonic pyrolysis problems was made apparent when encountering numerical convergence issues with legacy methods, which sparked the development of a robust semi-implicit pyrolysis material model. The so-called Linearized Pyrolysis model employs simplifying assumptions for the energy and mass balance equations and relies upon the time-lagging of chosen terms to achieve linear convergence and robust performance. The performance of the model has been validated against the Ablation Workshop Test Cases and has increased the range of allowable representative hypersonic boundary conditions significantly compared to the legacy approach. Together, the model and the coupling framework are applied to two aero-thermo-chemo-mechanical analyses contained within the thesis: a spherical-tipped nose cone and the Orion heat shield. 
Preliminary results identify the decomposition region as a zone in which high von Mises stress tends to occur—care must be taken to ensure that internal and external flight loads do not exceed allowable limits to prevent catastrophic TPS material failure in this region. However, perhaps the most significant insight resulting from the framework relates to the computation of mass fluxes through the porous ablative material, revealing that for an isotropic monolithic heat shield at a zero angle of attack, pyrolysis gas flow is driven by the pressure gradient applied to the shield such that the flow exits at the edges of the shield rather than from the base.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aeroverse: Aerospace Education in Extended Reality</title>
<link href="https://hdl.handle.net/1721.1/162928" rel="alternate"/>
<author>
<name>Johnson, Mollie</name>
</author>
<id>https://hdl.handle.net/1721.1/162928</id>
<updated>2025-10-07T04:13:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Aeroverse: Aerospace Education in Extended Reality
Johnson, Mollie
Aerospace education is a continuously evolving field that is increasingly dependent on digital tools. However, shifting the teaching paradigm to accommodate new, cutting-edge technologies is an ambitious undertaking. Extended reality (XR), which encompasses augmented (AR) and virtual reality (VR), is an example of such technology. In recent years, VR has seen increasing use in education as a novel way to provide students with immersive learning experiences, and XR has a long history of use within the working aerospace industry. However, application in the overlap between the two, aerospace engineering education, remains largely unexplored to date. The themes addressed in this thesis are two-fold: first, the goal is to create VR learning modules to supplement the existing aerospace engineering curriculum. Second, the aim is to validate whether VR technology as a teaching medium can improve learning outcomes and student engagement within the MIT AeroAstro department. With these themes in mind, two experiments were conducted to explore this topic. The first experiment presents the design and execution of an experimental course aimed at aerospace engineering students to assess the educational impact of VR. Over the course of this study, ANOVA and Kruskal-Wallis tests found that there was no significant difference (p &gt; 0.05) in performance between the VR and non-VR groups, save for a few exceptional cases. The second experiment details the integration of a single VR module into an existing course in which all students interacted with the VR activity. Students responded positively to this experiment, reporting increased feelings of engagement and a sense that it aligned well with the rest of the course. One-sample Wilcoxon tests reveal that these findings are largely significant (p &lt; 0.05). This thesis advances the work on assessing VR use for aerospace education.
The implications of this work may influence the decisions of other educators regarding the adoption of VR technology as supplements to their own teaching methodologies. As a whole, this thesis contributes to the broader conversation on integrating VR into the classroom.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration</title>
<link href="https://hdl.handle.net/1721.1/162927" rel="alternate"/>
<author>
<name>MacRobbie, Madelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/162927</id>
<updated>2025-10-07T04:13:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration
MacRobbie, Madelyn
Human space exploration is evolving rapidly, with commercial successes and NASA’s Artemis missions driving rapid growth and innovation. Plans for longer, larger, and more complex missions necessitate the development of new mission architectures to sustain the crews needed to support them. Larger missions and multi-site architectures have become feasible with advances in commercial launch vehicles, and provide increased safety and redundancy for crewed operations. However, crew dynamics in these mission architectures have yet to be investigated. This thesis investigates the role of mission architecture (specifically single-site versus dual-site configurations) in subgroup formation and the resulting impacts on socioemotional well-being. We first develop a systematic approach for optimizing analog mission design, then apply this to design two analog missions to compare the effects of single-site and dual-site mission architectures on crew dynamics and psychosocial health. Results provide valuable insights for future Mars mission design, where crew structure and psychosocial adaptation are critical to mission success.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Strong, Human-Compatible Codenames AI&#13;
Agent</title>
<link href="https://hdl.handle.net/1721.1/162926" rel="alternate"/>
<author>
<name>Zhu, Sebastian</name>
</author>
<id>https://hdl.handle.net/1721.1/162926</id>
<updated>2025-10-07T04:13:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards a Strong, Human-Compatible Codenames AI&#13;
Agent
Zhu, Sebastian
Current language models are limited in their ability to solve complex planning and reasoning problems without the aid of search procedures. While a large body of work has developed search procedures tailored to single-turn, single-user natural language interactions, language generation in multi-agent contexts involving multiple users, imperfect information, and partially misaligned objectives remains extremely challenging. We aim to build search procedures that will enable language models to assist with interactive, multi-agent decision-making in a diverse range of contexts. Using the word game Codenames as a benchmark, we will combine game-theoretic planning procedures with basic language model-based scoring methods to create agents that both play strong policies and play well with human policies. This work yields a set of practical text generation procedures, new evaluation benchmarks, and foundational algorithmic improvements in language model search.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation into Contrail Observability from Different Satellite Platforms</title>
<link href="https://hdl.handle.net/1721.1/162925" rel="alternate"/>
<author>
<name>Euchenhofer, Marlene V.</name>
</author>
<id>https://hdl.handle.net/1721.1/162925</id>
<updated>2025-10-07T04:13:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Investigation into Contrail Observability from Different Satellite Platforms
Euchenhofer, Marlene V.
Contrails are line-shaped ice clouds that can form behind aircraft engines and, under certain cold and moist conditions, spread into contrail cirrus that persists for several hours. By adding to the existing cloud cover, contrails can act to either cool or warm, with the latter, on average, being dominant, resulting in an overall warming effect. Although the effective radiative forcing from contrails is inferred to be of the same order of magnitude as that caused by aviation’s CO₂ emissions, large uncertainties remain around specific radiative forcing estimates. &#13;
Observational studies of contrails, either to support climate impact assessments or operational contrail avoidance strategies, face trade-offs between spatial and temporal resolution. Many recent publications have relied on data from geostationary satellites accepting lower input data resolution in exchange for higher temporal resolution and greater spatial coverage. Limitations of the observability of contrails in the resulting images have not been sufficiently investigated and need to be assessed and quantified.&#13;
This study aims to leverage the higher spatial resolution of VIIRS satellite imagery to identify potential limitations on contrail observability in lower-resolution GOES ABI imagery. We generate a dataset of human-identified contrails visible in false-color thermal infrared imagery from both GOES ABI and VIIRS for twelve scenes over the contiguous US. Based on this dataset, we investigate the number, cover, and appearance of the observed contrails. We find that GOES ABI does not resolve 80% of all contrails that can be identified in VIIRS imagery and only shows half of the total observed contrail length. Finally, incorporating an existing contrail-flight matching algorithm by Barbosa, we show that VIIRS tends to resolve a greater proportion of younger contrails than GOES ABI. The findings from this study help to bound the validity of current contrail simulations and modeling outputs that estimate contrail cover and occurrence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution</title>
<link href="https://hdl.handle.net/1721.1/162924" rel="alternate"/>
<author>
<name>Zhang, Sophie S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162924</id>
<updated>2025-10-07T04:12:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution
Zhang, Sophie S.
The increasing adoption of specialized database systems has led to the rise of heterogeneous data environments. While having multiple engines in a data infrastructure enables opportunities for workload optimization, SQL dialect incompatibility makes workload migration difficult. To address this challenge, we develop MINCE (Multi-dialect INtegration and Cross-engine Execution), a technique that decomposes SQL queries into parts to enable federated execution across engines with differing SQL dialects. MINCE uses a rule-based method to partition a query into executable components that are assigned to different database systems. To evaluate different execution strategies, MINCE further implements a cost model that incorporates both on-engine query execution time and inter-system data transfer overhead. We evaluate MINCE on a TPC-H-based workload augmented with PostgreSQL-specific functions unsupported in Amazon Redshift. Experimental results show that MINCE produces the fastest execution strategy among our baselines for 72.1% of queries using estimated cardinality, achieving a 2× speedup over single-engine baselines. With perfect cardinality information available to our cost model, this value increases to 88.4%, with an average 2.8× speedup. These results demonstrate that our system not only enables more flexible federated query execution, but also reliably identifies performant execution strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New results in canonical polyadic decomposition over finite fields</title>
<link href="https://hdl.handle.net/1721.1/162923" rel="alternate"/>
<author>
<name>Yang, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/162923</id>
<updated>2025-10-07T04:13:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">New results in canonical polyadic decomposition over finite fields
Yang, Jason
Canonical polyadic decomposition (CPD) consists of expressing a tensor (multidimensional array) as a sum of several rank-1 tensors, each of which is an outer/separable product of vectors. The number of rank-1 tensors used in a CPD is called the rank of the CPD, and the minimum possible rank of a CPD for a given tensor is called the rank of the tensor. CPD is at the core of fast matrix multiplication, a computational problem with widespread implications across several seemingly unrelated problems in computer science. Much recent progress in this field has used randomized heuristic search to find new CPDs, often over a finite field. However, if these techniques fail to find a CPD with low enough rank, they cannot prove that no such CPD exists. Consequently, these methods fail to resolve certain long-standing questions, such as whether the tensor corresponding to 3 × 3 matrix multiplication has rank less than 23. To make progress on these problems, we develop novel algorithms that preserve exactness, i.e., they can provably verify whether or not a given tensor has a specified rank. Compared to brute force, when searching for a rank-R CPD of an n0 × · · · × nD−1-shaped tensor over a finite field F, where n0 ≥ · · · ≥ nD−1, our algorithm saves a multiplicative factor of roughly |F|^(R(n0−1) + n0·Σ_{d≥1} nd). Additionally, our algorithm runs in polynomial time. We also find a novel algorithm to search for border CPDs, a variant of CPDs that is also important in fast matrix multiplication. Finally, we study the maximum rank problem and give new upper and lower bounds, both for families of tensor shapes and specific shapes. Although our CPD search algorithms are still too slow to resolve the rank of 3 × 3 matrix multiplication, we are able to utilize them in this problem by adding extra search pruners that do not affect exactness or increase asymptotic running time.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parameter Estimation for Anonymous Hawkes Processes</title>
<link href="https://hdl.handle.net/1721.1/162922" rel="alternate"/>
<author>
<name>Wang, William</name>
</author>
<id>https://hdl.handle.net/1721.1/162922</id>
<updated>2025-10-07T04:13:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parameter Estimation for Anonymous Hawkes Processes
Wang, William
Hawkes Processes are self-exciting point processes used to model many real-life networks in which an event from one agent causes the rate at which events occur from related agents to increase, such as in earthquake networks or social media. This project investigates the question of finding the underlying structure of the Hawkes Processes given a history of when events occurred. This problem has been studied extensively in the regime where the event labels are known, and the bulk of the literature involves parameterizing the model and passing it through statistical learning tools. Our proposed work focuses on the same question in the “anonymous” case, where labels are not given. In this regime, the lack of information makes many previous approaches intractable, and we develop novel non-parametric approaches for solving cases of the structure learning problem in algorithmic and information-theoretic settings. Our results show the ability to learn the entire model under mild assumptions in the information-theoretic regime, where we have access to an arbitrarily long Anonymous Hawkes Process transcript, whereas when we are confined to a polynomial-length transcript, the situation is considerably more difficult.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organization Infrastructure for Tokenized Asset Records</title>
<link href="https://hdl.handle.net/1721.1/162921" rel="alternate"/>
<author>
<name>Whartenby, Patrick E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162921</id>
<updated>2025-10-07T04:13:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Organization Infrastructure for Tokenized Asset Records
Whartenby, Patrick E.
The Tokenized Asset Record (TAR) represents a way to connect existing technology related to tokenized assets and asset schemas to real-world documents that validate the existence of an object. Exactly who should manage TARs and the properties of the related organization schemes remains an open question. Answering this question is crucial to furthering the existing digital economy. While existing solutions have sought to expand digital commerce through pioneering digital clearing houses, little work has explored support for other classes of real-world digitized assets with proof of ownership and existence. The research proposed here seeks to answer this question by suggesting possible solutions and developing a framework for uniformly analyzing the proposals. The research proposes and evaluates three models for the management of TARs. The first is a scheme that involves each industry setting up its own TAR database and managing the system independently from other industries. The second proposes hosting all TARs on a single blockchain. The third argues for an off-chain decentralized platform to host all TARs, akin to the Data Spaces proposed by the European Union. The research finds, based on the proposed criteria, that a decentralized off-chain approach best meets the goals of a TAR management framework.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unveiling Phenotype–Genotype Interplay with Deep&#13;
Learning Foundation Models for scRNA-seq: A&#13;
Quantitative Perspective</title>
<link href="https://hdl.handle.net/1721.1/162920" rel="alternate"/>
<author>
<name>Thadawasin, Pakaphol</name>
</author>
<id>https://hdl.handle.net/1721.1/162920</id>
<updated>2025-10-07T04:13:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Unveiling Phenotype–Genotype Interplay with Deep&#13;
Learning Foundation Models for scRNA-seq: A&#13;
Quantitative Perspective
Thadawasin, Pakaphol
Foundation models have emerged as powerful tools for analyzing single-cell RNA sequencing (scRNA-seq) data, leveraging large-scale pretraining to capture complex gene expression patterns. However, a comprehensive quantitative framework for understanding the interplay between phenotypes and genotypes remains underdeveloped. Such a framework is critical not only for validating model performance but also for uncovering previously unrecognized biological relationships. In this work, we present both traditional and deep learning-based quantitative analysis pipelines for PolyGene [1], a transformer-based scRNA-seq foundation model, aimed at disentangling the complex phenotype–genotype relationship. First, we implement a top-k classification and entropy evaluation pipeline to serve as a primary validation framework. Our results demonstrate that the pretrained PolyGene [1] is robust in top-k classification metrics and provides meaningful insights into the entropy landscape of human cells across different life stages. Second, we propose a novel deep learning gradient-based gene selection method designed to address limitations in traditional feature selection approaches, such as poor scalability and sensitivity to heterogeneity in high-dimensional data. Through empirical evaluations on benchmark scRNA-seq datasets, we show that our method enhances model interpretability and improves downstream performance, offering a more scalable and biologically relevant alternative to existing techniques. Overall, this work introduces a set of quantitative analysis tools that fill a critical gap in evaluating and interpreting scRNA-seq foundation models, contributing to a deeper understanding of the genotype–phenotype interplay through modern deep learning techniques.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems</title>
<link href="https://hdl.handle.net/1721.1/162919" rel="alternate"/>
<author>
<name>Zen, Hilary</name>
</author>
<id>https://hdl.handle.net/1721.1/162919</id>
<updated>2025-10-07T04:13:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems
Zen, Hilary
Generation methods for deepfake images have advanced rapidly, and deepfake face images pose a critical security threat to biometric verification systems. Applications that rely on face recognition to grant access to sensitive data need to maintain high accuracy across a wide variety of deepfake generation methods, including novel and developing types that the application has not previously trained on. Current deepfake detection models achieve near-perfect accuracy on benchmark datasets, but do not perform as well on unseen types of deepfakes that were not part of their training dataset. We propose building an ensemble model with multiple base detectors, each trained on different generation model families, to maintain high performance across many deepfake generation methods. Using four base models, including two models with the same architecture and training data, we exhaustively test all possible ensemble models. We find that combining similar base models trained on the same deepfake generation family does not improve performance compared to the individual base models. However, combining base models trained on different deepfake generation families leads to significant increases in accuracy and recall. Our ensemble framework provides a flexible and inexpensive solution in the ever-changing landscape of deepfake generation and security.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Should Model Updates Propagate?</title>
<link href="https://hdl.handle.net/1721.1/162918" rel="alternate"/>
<author>
<name>Struckman, Isabella Marguerite</name>
</author>
<id>https://hdl.handle.net/1721.1/162918</id>
<updated>2025-10-07T04:13:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">When Should Model Updates Propagate?
Struckman, Isabella Marguerite
AI supply chains rely increasingly on downstream developers adapting pretrained upstream models. When upstream models are retrained with data deletions (which may be prompted by copyright violations, privacy compliance, or removal of illicit content), it is unclear whether all downstream developers must also undergo costly retraining. In this thesis, we investigate the propagation of data deletions through fine-tuned models within a controlled visual classification setting comprising dog-breed and plane-manufacturer recognition tasks. We show that not all model updates propagate equivalently to downstream tasks, and that the deleted data’s relevance to the downstream task strongly shapes its effect on the downstream model. We demonstrate that neither simple performance metrics (accuracy or F1), nor output-level divergences, nor even embedding-based similarity metrics alone adequately predict when a deletion meaningfully impacts downstream tasks. To overcome these limitations, we introduce an information-theoretic metric grounded in Gaussian mixture modeling (GMM) of embedding distributions, capturing deeper representational shifts. Our proposed approach precisely distinguishes when deletions require downstream retraining, achieving high predictive accuracy and recall without directly accessing retrained downstream models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin Modeling for NV Magnetometry</title>
<link href="https://hdl.handle.net/1721.1/162917" rel="alternate"/>
<author>
<name>Rich, John P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162917</id>
<updated>2025-10-07T04:13:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Twin Modeling for NV Magnetometry
Rich, John P.
This thesis presents the development and application of a digital twin modeling framework for nitrogen-vacancy (NV) center-based magnetometry, advancing the field of quantum sensing. A surrogate model serves as a computational representation of the physical NV magnetometer system, enabling comprehensive exploration of parameter spaces to optimize device design. Leveraging machine learning techniques, this study optimizes control mechanisms, including the design of learned analog filters, to enhance system performance. This research investigates the fundamental limits of NV magnetometer performance, identifying strategies to minimize power requirements while maintaining high sensitivity. A dynamic framework is implemented to update the surrogate model’s parameters in real-time based on experimental measurements, ensuring accurate fidelity to the physical system. Additionally, the optimized control strategies are simulated within the digital twin environment, demonstrating their potential for advanced quantum sensing applications such as magnetocardiography (MCG) for heartbeat detection.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fuzzing for User-Schedulable Languages</title>
<link href="https://hdl.handle.net/1721.1/162916" rel="alternate"/>
<author>
<name>Moon, Kenneth</name>
</author>
<id>https://hdl.handle.net/1721.1/162916</id>
<updated>2025-10-07T04:13:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fuzzing for User-Schedulable Languages
Moon, Kenneth
Performance engineers restructure programs to use hardware as efficiently as possible. Even simple mathematical functions can become sprawling and complex programs when fully optimized, as the resulting code must often be precisely molded around specialized behaviors supported by the hardware. To help performance engineers deal with this complexity, user-schedulable languages provide scheduling operations, which are abstractions of common steps taken to restructure programs. By composing these scheduling operations, performance engineers can concisely represent their intended optimizations to programs. Exo, being a user-schedulable language, provides this abstraction with the additional guarantee that any scheduling operation which passes Exo’s automated checks does not change the behavior of the program. Though this guarantee is useful for avoiding bugs while optimizing a program, the analysis required to provide such a guarantee is infeasible on programs in general. To make analysis feasible, Exo only allows users to write programs with a restricted set of behaviors. As a result, some programs are impossible to schedule using Exo, limiting the use cases of Exo. In this thesis, we explore how fuzzing can be used as an alternative to the existing analysis in Exo, with the goal of allowing Exo to analyze more complex programs. “Fuzzing” refers to a test case-driven approach to determining properties of a program, such as whether its behavior changes after a scheduling operation. If the program’s outputs do not change after the scheduling operation when provided the same inputs, the fuzzer concludes that the program’s behavior did not change. Since fuzzing only requires us to know how to evaluate the program, it can be applied to a much broader set of programs than the existing analysis in Exo.
However, fuzzing can miss mistakes in scheduling if the fuzzer fails to find a test case demonstrating the issue with a scheduling operation, as it is a complete form of analysis rather than a sound form of analysis like the existing analysis in Exo. Additionally, fuzzing can be costly compared to the original analysis, as repeatedly running programs on many test cases for many scheduling operations can be slow. We explore ways to mitigate these issues throughout this work. Finally, we evaluate our implementation of the fuzzer and its performance on some example use cases for Exo.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Verifiable Computation Made Easy</title>
<link href="https://hdl.handle.net/1721.1/162915" rel="alternate"/>
<author>
<name>Ma, Chengyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162915</id>
<updated>2025-10-07T04:12:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Verifiable Computation Made Easy
Ma, Chengyuan
Recent advancements in cloud computing, data privacy, and cryptography have sparked a growing interest in Verifiable Computation (VC) in both industry and academia. In particular, zero-knowledge proof (ZKP) algorithms are gaining rapid traction due to their strong privacy guarantees. However, they are notoriously computationally intensive, making performance a critical concern. Given the inherent data parallelism and heavy use of vector operations in ZKP computations, multicore CPUs and GPUs offer a promising acceleration path. Unfortunately, accelerated programming for ZKP remains challenging: ZKP algorithms evolve rapidly, their structures grow increasingly complex, and writing high-performance ZKP code is tedious, error-prone, non-portable, and unfriendly to algorithm developers. We present an end-to-end compiler framework, Zera, that lowers ZKP algorithms to parallel hardware for efficient acceleration, with minimal programmer effort. By effectively leveraging ZKP algorithm patterns and trends, we are able to automate the key performance optimizations, with a succinct linguistic extension and a set of practical compiler customizations. Consequently, with just 92 lines of trivial high-level annotation added to the original 7,000 lines of C++ code, our single-source code solution delivers 33.9× and 24.0× speedup on GPU over a highly optimized serial C++ implementation on CPU and an existing multithreaded Rust baseline on CPU, respectively. Compared to our hand-optimized GPU/CUDA implementation requiring an extra 2,000 lines of low-level code (roughly 60 programmer hours), our compiler-generated GPU implementation is only 58% slower (1.58× slowdown) on large inputs, demonstrating a compelling trade-off between performance and productivity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Partitioning for Efficient Parallel Reads</title>
<link href="https://hdl.handle.net/1721.1/162914" rel="alternate"/>
<author>
<name>Sragow, John</name>
</author>
<id>https://hdl.handle.net/1721.1/162914</id>
<updated>2025-10-07T04:13:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Partitioning for Efficient Parallel Reads
Sragow, John
Modern database management systems spend a significant portion of query execution time scanning data, so minimizing scanning latency is critical to maintaining high performance. As such, databases are partitioned into blocks so that queries can skip irrelevant tuples and avoid scanning the entire database. When this partitioning is optimized to minimize the number of blocks accessed by each query, smaller queries that access very few blocks fail to fully utilize the bandwidth because they cannot take advantage of parallel reading. However, reducing the size of each block in order to increase the number of blocks accessed by smaller queries slows down larger queries by forcing them to increase the number of I/Os they must perform. We propose a novel partitioning scheme that shuffles the row groups of blocks accessed by smaller queries so that they can read fewer tuples from multiple blocks in parallel without increasing the I/O cost of larger queries. Our experiments show that this technique allows smaller queries to be scanned up to twice as fast on larger block sizes as they would on a standard partitioning without significantly slowing down larger queries.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models</title>
<link href="https://hdl.handle.net/1721.1/162913" rel="alternate"/>
<author>
<name>Tang, Adrina</name>
</author>
<id>https://hdl.handle.net/1721.1/162913</id>
<updated>2025-10-07T04:12:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models
Tang, Adrina
Designing novel proteins with specific biological functions remains a fundamental challenge in computational biology. While recent advances in protein language models have enabled powerful sequence-based representations, most models, including state-of-the-art systems like ESM3, fall short in effectively encoding functional context during protein generation. In this work, we present a multimodal protein co-design framework that conditions sequence generation on fine-grained functional annotations, specifically leveraging residue-level Gene Ontology (GO) term labels on sequences from the UniRef100 database. By explicitly associating functional signals with residue elements of proteins, our model learns to generate function-conditioned protein sequences that are biologically plausible and semantically consistent. Unlike prior approaches, which treat function as a secondary feature or a classification task, our method focuses on joint reasoning over function and sequence during the design process. This closes a critical gap in the current landscape of protein design tools, offering a scalable and generalizable approach to co-designing protein sequences with user-specified functional profiles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pairwise Matching of Intermediate Representations for Fine-grained Explainability</title>
<link href="https://hdl.handle.net/1721.1/162912" rel="alternate"/>
<author>
<name>Shrack, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/162912</id>
<updated>2025-12-10T00:52:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pairwise Matching of Intermediate Representations for Fine-grained Explainability
Shrack, Lauren
The differences between images belonging to fine-grained categories are often subtle and highly localized, and existing explainability techniques for deep learning models are often too diffuse to provide useful and interpretable explanations. We propose a new explainability method (PAIR-X) that leverages both intermediate model activations and backpropagated relevance scores to generate fine-grained, highly-localized pairwise visual explanations. We use animal and building re-identification (re-ID) as a primary case study of our method, and we demonstrate qualitatively improved results over a diverse set of explainability baselines on 35 public re-ID datasets. In interviews, animal re-ID experts were in unanimous agreement that PAIR-X was an improvement over existing baselines for deep model explainability, and suggested that its visualizations would be directly applicable to their work. We also propose a novel quantitative evaluation metric for our method, and demonstrate that PAIR-X visualizations appear more plausible for correct image matches than incorrect ones even when the model similarity score for the pairs is the same. By improving interpretability, PAIR-X enables humans to better distinguish correct and incorrect matches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling End-to-End Sensitivity Analysis of Integrated Models</title>
<link href="https://hdl.handle.net/1721.1/162911" rel="alternate"/>
<author>
<name>Davidson, Rosemary K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162911</id>
<updated>2025-10-07T04:10:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling End-to-End Sensitivity Analysis of Integrated Models
Davidson, Rosemary K.
As space-based precision-pointed telescopes continue to grow in scale and complexity, integrated models are increasingly relied upon to inform early design decisions and support system-level verification. When ground testing of full-system configurations is infeasible, integrated models, including structural-thermal-optical performance models, are essential for predicting performance and validating requirements across multidisciplinary, coupled domains. In early design phases, when uncertainty is high and design decisions have long-term implications for cost and schedule, it is especially important to understand which uncertain parameters most influence system performance. Global sensitivity analysis can help identify dominant uncertainty sources and inform decisions about model reduction, testing priorities, and resource allocation. However, the computational cost of applying global sensitivity analysis to integrated models often exceeds available resources. The presence of cross-disciplinary coupling between subsystem models further complicates analysis efforts. Coupled and dependent variables obscure how specific inputs influence system-level performance, limiting the ability to reduce model dimensionality or focus testing efforts on individual subsystems. There is a need for integrated modeling methodologies that enable tractable global sensitivity analysis of large, feedforward-coupled systems while preserving the accuracy needed to support early-phase design.

This thesis develops both exact and approximate methods for performing global sensitivity analysis on integrated models. A set of exact propagation techniques is introduced to compute end-to-end sensitivity indices when specific structural conditions are met, including functional linearity, non-interacting transforms, and monotonic intermediate mappings. These methods are evaluated using a suite of benchmark test cases that isolate when the exact sensitivity analysis method is valid and when structural assumptions begin to break down. A modular modeling framework is developed to compute exact or approximate end-to-end sensitivity indices and to enable automated mapping between disciplinary models in the integrated chain. The approach is also applied to a representative linearized structural-thermal-optical performance model, demonstrating how end-to-end global sensitivity analysis can be performed efficiently across thermal, structural, and optical subsystems.

To extend tractable sensitivity analysis to black-box models, several approximate strategies are introduced, including multifidelity surrogate modeling and statistical regression. These methods support both forward uncertainty propagation and variance-based global sensitivity analysis for structurally complex integrated models, without requiring full-system evaluation at every iteration. Together, the exact and approximate strategies developed in this work provide a foundation for scalable end-to-end global sensitivity analysis in early-phase design, where identifying influential parameters and constraining model complexity are essential for evaluating candidate architectures and informing mission decisions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning for Space Object Density Distribution Prediction</title>
<link href="https://hdl.handle.net/1721.1/162910" rel="alternate"/>
<author>
<name>Sarangerel, Sumiyajav</name>
</author>
<id>https://hdl.handle.net/1721.1/162910</id>
<updated>2025-10-07T04:12:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deep Learning for Space Object Density Distribution Prediction
Sarangerel, Sumiyajav
The rapid growth of artificial objects in Low Earth Orbit (LEO) has heightened concerns over orbital congestion and collision cascades, known as Kessler Syndrome. Traditional high-fidelity models, while accurate, are computationally intensive and poorly scalable. This thesis introduces a machine learning–based framework for forecasting the long-term evolution of space object density. A large dataset is generated, using the MIT Orbital Capacity Assessment Tool – Monte Carlo (MOCAT-MC), simulating thousands of scenarios across varying launch, disposal, and maneuver parameters. A Convolutional Gated Recurrent Unit (ConvGRU) is trained to predict density distributions over a 100-year horizon, achieving accurate forecasts with significantly reduced runtime. With a simple guidance mechanism, the generalization capability of the model across diverse scenarios is greatly improved. This approach offers a scalable and efficient tool for supporting future space traffic management and sustainability efforts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning</title>
<link href="https://hdl.handle.net/1721.1/162909" rel="alternate"/>
<author>
<name>Shi, Yichuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162909</id>
<updated>2025-10-07T04:12:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning
Shi, Yichuan
The emergence of large-scale machine learning (ML) models has highlighted a fundamental conflict: While computational demands push for the consolidation of data and models in vast, centralized data centers, real-world data continues to be distributed and fragmented across personal devices and private databases. How can we reconcile this contradiction without further monopolizing the ML ecosystem? What unique privacy and security risks arise from alternative ML orchestration system designs? Furthermore, how do these vulnerabilities and system failures inform our understanding of both how and what machines learn? This thesis attempts to explore these questions. It first examines key types of privacy leakages, evaluating their impact under realistic, cross-distribution settings. It then introduces a benchmarking analysis platform, SONAR, to investigate the relationship between privacy leakage (measured by attack performance), network topology, and data distribution. Finally, it presents Co-Dream, a novel algorithm for collaborative learning that offers improved privacy characteristics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prototyping a Scalable Proof Engine</title>
<link href="https://hdl.handle.net/1721.1/162908" rel="alternate"/>
<author>
<name>Rosario, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/162908</id>
<updated>2025-10-07T04:12:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prototyping a Scalable Proof Engine
Rosario, Jon
Formal verification is an exciting development in software engineering, enabling implementations of programs to be rigorously checked against mathematical specifications. Assuming the specification is well-defined, formal verification provides guarantees of a program’s correctness and freedom from bugs that are simply not possible with test-based methods. There’s just one catch: the process of verifying large programs in popular theorem provers such as Coq (now known as Rocq) or Lean is painfully slow. These proof assistants rely on proof engines to construct proofs of correctness for given properties, but to our knowledge, there is no widely available proof engine that offers strong performance guarantees. Even more frustrating is the lack of consensus on what “good” performance should even mean in this context. This thesis lays the groundwork for addressing that gap by presenting a proof engine design that achieves asymptotically linear-time performance with respect to several important variables. We illustrate the design and its performance characteristics with examples from an implementation of the design and outline directions for future work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedded Computing for Wavefront Control on Future Space Telescopes</title>
<link href="https://hdl.handle.net/1721.1/162906" rel="alternate"/>
<author>
<name>Belsten, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/162906</id>
<updated>2025-10-07T04:10:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Embedded Computing for Wavefront Control on Future Space Telescopes
Belsten, Nicholas
Future space telescopes will use adaptive optics to suppress starlight to directly image and characterize exoplanets. A measurement using this technique may be the first to detect extraterrestrial life in the universe. However, the real-time execution of adaptive optics control algorithms places unprecedented demands on spaceborne processors. Previous work has determined that processing limitations can degrade the achievable contrast and scientific yield of future exoplanet imaging missions. In this work, we quantify the relationship between adaptive optics processing needs and high contrast performance for the Habitable Worlds Observatory (HWO), a mission expected to launch in the 2040s and achieve the 10^-10 contrast necessary to image Earth-like planets around Sun-like stars.

We survey the current landscape of high-order wavefront sensing and control (HOWFSC) algorithms for a future mission like HWO. We parameterize the compute requirements of multiple algorithms through analyses of computational patterns, benchmarks, and problem scaling. In parallel, we assess the capabilities of current and emerging spaceborne processors. We integrate these findings to model processor requirements across several dimensions of telescope design, and we predict whether various processor choices can meet the computational demands of specific HWO configurations. To validate our models, we implement HOWFSC algorithms on representative embedded processors and compare measured performance to predictions. These implementations also reduce risk for spaceflight by increasing the technology readiness level (TRL) of the algorithm–processor pairing to TRL 4.

Given the significant uncertainty in HWO’s eventual design, we extend our deterministic models using Monte Carlo methods to evaluate system performance under uncertainty. We identify key sources of uncertainty and estimate the achievable contrast across a range of system configurations. Our results show that offloading computation to the ground is an important architectural option for most HWO designs. Even under optimistic assumptions, current space processors are insufficient to support the full range of HWO configurations. However, newly developed efficient algorithms substantially reduce the computational burden. Overall, we estimate that current technology has only a 40% probability of supporting HWO’s mission goals without additional architectural innovations. We conclude by recommending combinations of onboard computing, ground offloading, and optical design constraints to help close this technology gap as the mission design matures. In particular, we find that telescope stability and ground-in-the-loop performance are primary drivers of contrast performance, while algorithmic advances such as AD-EFC and onboard compute approaching ground-based GPU performance also provide significant benefits.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stress-Guided Material Segmentation for Recycled 3D Printed Structures Using Finite Element Analysis</title>
<link href="https://hdl.handle.net/1721.1/162905" rel="alternate"/>
<author>
<name>Paulin, Cole J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162905</id>
<updated>2025-10-07T04:13:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stress-Guided Material Segmentation for Recycled 3D Printed Structures Using Finite Element Analysis
Paulin, Cole J.
We present a simulation-driven method for optimizing the structural performance of 3D printed objects made with recycled and fresh filament. Although sustainable materials such as recycled PLA reduce environmental impact, they often exhibit degraded or inconsistent mechanical properties, making them less suitable for structurally demanding applications. To address this, we develop a finite element analysis (FEA) pipeline that simulates stress and strain distributions under user-defined loading conditions, enabling intelligent segmentation of the object into regions of high and low mechanical demand. These segmented regions can be assigned recycled or fresh material during fabrication. Our system leverages open-source tools (SfePy) for simulation, and we validate its accuracy against Abaqus, a commercial industry standard. We also introduce methods for automatically identifying and correcting segmentation artifacts, such as small disconnected islands. Through comparative simulation studies and performance evaluation, we demonstrate that our approach enables more sustainable 3D printing without sacrificing structural reliability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System</title>
<link href="https://hdl.handle.net/1721.1/162904" rel="alternate"/>
<author>
<name>Lohier, Sebastien</name>
</author>
<id>https://hdl.handle.net/1721.1/162904</id>
<updated>2025-10-07T04:12:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System
Lohier, Sebastien
This thesis proposes a novel methodology for the automatic placement of Power Electronics Building Blocks (PEBBs) in modular, integrated power corridor designs. These building blocks, which are created and tested offsite for a variety of applications, are currently placed manually during the design process, a method that is time-consuming and suboptimal. To address this challenge, we reduce the placement problem to a 2D bin-packing problem, leveraging a hybrid approach combining Genetic Algorithms and Simulated Annealing. This approach enables the generation of optimized placements that find the extremes of arbitrary heuristics, including minimizing routing distance and power density, effectively improving both design efficiency and system performance. The proposed methodology offers a significant step toward automating and optimizing the layout of power electronic components in complex systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mining the CD4 antigen repertoire for next-generation tuberculosis vaccines</title>
<link href="https://hdl.handle.net/1721.1/162903" rel="alternate"/>
<author>
<name>Vidal, Samuel J</name>
</author>
<author>
<name>Lasrado, Ninaad</name>
</author>
<author>
<name>Tostanoski, Lisa H</name>
</author>
<author>
<name>Chaudhari, Jayeshbhai</name>
</author>
<author>
<name>Mbiwan, Esther R</name>
</author>
<author>
<name>Neka, Ganad D</name>
</author>
<author>
<name>Strutton, Ellis A</name>
</author>
<author>
<name>Espinosa Perez, Alejandro A</name>
</author>
<author>
<name>Sellers, Daniel</name>
</author>
<author>
<name>Barrett, Julia</name>
</author>
<author>
<name>Lifton, Michelle</name>
</author>
<author>
<name>Wakabayashi, Shoko</name>
</author>
<author>
<name>Eshaghi, Behnaz</name>
</author>
<author>
<name>Borducchi, Erica N</name>
</author>
<author>
<name>Aid, Malika</name>
</author>
<author>
<name>Li, Wenjun</name>
</author>
<author>
<name>Scriba, Thomas J</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Barouch, Dan H</name>
</author>
<id>https://hdl.handle.net/1721.1/162903</id>
<updated>2026-03-08T03:25:59Z</updated>
<published>2025-09-15T00:00:00Z</published>
<summary type="text">Mining the CD4 antigen repertoire for next-generation tuberculosis vaccines
Vidal, Samuel J; Lasrado, Ninaad; Tostanoski, Lisa H; Chaudhari, Jayeshbhai; Mbiwan, Esther R; Neka, Ganad D; Strutton, Ellis A; Espinosa Perez, Alejandro A; Sellers, Daniel; Barrett, Julia; Lifton, Michelle; Wakabayashi, Shoko; Eshaghi, Behnaz; Borducchi, Erica N; Aid, Malika; Li, Wenjun; Scriba, Thomas J; Jaklenec, Ana; Langer, Robert; Barouch, Dan H
Tuberculosis (TB) is the leading cause of death from infectious disease worldwide, and Bacillus Calmette-Guérin (BCG) remains the only clinically approved vaccine. An enduring challenge in TB vaccine development is systematic antigen selection from a large repertoire of potential candidates. We performed an efficacy screen in mice of antigens that are targets of CD4 T cells in humans. We found striking heterogeneity in protective efficacy, and most of the top protective antigens are not currently in clinical development. We observed immunologic cross-reactivity among phylogenetically clustered antigens, reflecting common CD4 epitopes. We developed a trivalent mRNA vaccine consisting of PPE20 (Rv1387), EsxG (Rv0287), and PE18 (Rv1788), which augmented and exceeded BCG protection in multiple mouse models. Finally, we observed cellular immune responses to these antigens in 84% of humans exposed to M. tuberculosis. These data advance our understanding of TB vaccine immunology and define a vaccine concept for clinical development.
</summary>
<dc:date>2025-09-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biotechnology in materials science: A storied past and a bold future</title>
<link href="https://hdl.handle.net/1721.1/162902" rel="alternate"/>
<author>
<name>Sharma, Shonit Nair</name>
</author>
<author>
<name>Witten, Jacob</name>
</author>
<author>
<name>Das, Rishi</name>
</author>
<author>
<name>Anderson, R Rox</name>
</author>
<author>
<name>Anderson, Daniel G</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/162902</id>
<updated>2026-03-08T03:26:02Z</updated>
<published>2025-10-01T00:00:00Z</published>
<summary type="text">Biotechnology in materials science: A storied past and a bold future
Sharma, Shonit Nair; Witten, Jacob; Das, Rishi; Anderson, R Rox; Anderson, Daniel G; Langer, Robert
The intersection of biotechnology and materials science has driven medical and scientific innovation for decades and is poised to make similar transformative impacts over the next 50 years. Advanced drug delivery systems, including nanoparticles and larger delivery material platforms, are enhancing therapeutic precision, while tissue engineering and regenerative medicine are laying the groundwork for bioprinting complex organs, offering new possibilities for transplantation and repair. Nanotechnology and biomedical devices are reshaping diagnostics and therapeutics, enabling real-time monitoring essential for personalized health care. Additionally, emerging fields such as space biotechnology and machine learning-driven biomaterials design hold potential for cutting-edge discoveries. This article examines the historical trajectory, current state-of-the-art applications, and bold future directions of biotechnology in materials science, emphasizing its impact on human health and its untapped potential yet to be explored.
</summary>
<dc:date>2025-10-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of Sub‐Grid Scale Temperature Perturbations Induced by Non‐Orographic Gravity Waves in WACCM6</title>
<link href="https://hdl.handle.net/1721.1/162901" rel="alternate"/>
<author>
<name>Yook, Simchan</name>
</author>
<author>
<name>Solomon, Susan</name>
</author>
<author>
<name>Weimer, Michael</name>
</author>
<author>
<name>Kinnison, Douglas E</name>
</author>
<author>
<name>Garcia, Rolando</name>
</author>
<author>
<name>Stone, Kane</name>
</author>
<id>https://hdl.handle.net/1721.1/162901</id>
<updated>2026-03-08T03:25:57Z</updated>
<published>2025-04-21T00:00:00Z</published>
<summary type="text">Implementation of Sub‐Grid Scale Temperature Perturbations Induced by Non‐Orographic Gravity Waves in WACCM6
Yook, Simchan; Solomon, Susan; Weimer, Michael; Kinnison, Douglas E; Garcia, Rolando; Stone, Kane
Atmospheric gravity waves can play a significant role in atmospheric chemistry through temperature fluctuations. A recent modeling study introduced a method to implement subgrid‐scale orographic gravity‐wave‐induced temperature perturbations in the Whole Atmosphere Community Climate Model (WACCM). The model with a wave‐induced temperature parameterization was able to reproduce, for example, the influence of mountain wave events on atmospheric chemistry, as highlighted in previous literature. Here we extend the subgrid‐scale wave‐induced temperature parameterization to also include non‐orographic gravity waves arising from frontal activity and convection. We explore the impact of these waves on middle atmosphere chemistry, particularly focusing on reactions that are strongly sensitive to temperature. The non‐orographic gravity waves increase the variability of chemical reaction rates, especially in the lower mesosphere. As an example, we show that this, in turn, leads to increases in the daytime ozone variability. To demonstrate another impact, we briefly investigate the role of non‐orographic gravity waves in cirrus cloud formation in this model. Consistent with findings from the previous study focusing on orographic gravity waves, non‐orographic waves also enhance homogeneous nucleation and increase cirrus clouds. The updated method enables the global chemistry‐climate model to account for both orographic and non‐orographic gravity‐wave‐induced subgrid‐scale dynamical perturbations in a consistent manner.
</summary>
<dc:date>2025-04-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reply to: Comments on “Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India”</title>
<link href="https://hdl.handle.net/1721.1/162900" rel="alternate"/>
<author>
<name>Chernozhukov, Victor</name>
</author>
<author>
<name>Demirer, Mert</name>
</author>
<author>
<name>Duflo, Esther</name>
</author>
<author>
<name>Fernández-Val, Iván</name>
</author>
<id>https://hdl.handle.net/1721.1/162900</id>
<updated>2026-03-08T03:26:02Z</updated>
<published>2025-07-30T00:00:00Z</published>
<summary type="text">Reply to: Comments on “Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India”
Chernozhukov, Victor; Demirer, Mert; Duflo, Esther; Fernández-Val, Iván
We warmly thank Kosuke Imai, Michael Lingzhi Li, and Stefan Wager for their gracious and insightful comments. We are particularly encouraged that both pieces recognize the importance of the research agenda the lecture laid out, which we see as critical for applied researchers. It is also great to see that both underscore the potential of the basic approach we propose—targeting summary features of the CATE after proxy estimation with sample splitting.

We are also happy that both papers push us (and the reader) to continue thinking about the inference problem associated with sample splitting. We recognize that our current paper is only scratching the surface of this interesting agenda. Our proposal is certainly not the only option, and it is exciting that both papers provide and assess alternatives. Hopefully, this will generate even more work in this area.
</summary>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India</title>
<link href="https://hdl.handle.net/1721.1/162899" rel="alternate"/>
<author>
<name>Chernozhukov, Victor</name>
</author>
<author>
<name>Demirer, Mert</name>
</author>
<author>
<name>Duflo, Esther</name>
</author>
<author>
<name>Fernández-Val, Iván</name>
</author>
<id>https://hdl.handle.net/1721.1/162899</id>
<updated>2026-03-08T03:26:01Z</updated>
<published>2025-07-30T00:00:00Z</published>
<summary type="text">Fisher–Schultz Lecture: Generic Machine Learning Inference on Heterogeneous Treatment Effects in Randomized Experiments, With an Application to Immunization in India
Chernozhukov, Victor; Demirer, Mert; Duflo, Esther; Fernández-Val, Iván
We propose strategies to estimate and make inference on key features of heterogeneous effects in randomized experiments. These key features include best linear predictors of the effects using machine learning proxies, average effects sorted by impact groups, and average characteristics of most and least impacted units. The approach is valid in high-dimensional settings, where the effects are proxied (but not necessarily consistently estimated) by predictive and causal machine learning methods. We post-process these proxies into estimates of the key features. Our approach is generic; it can be used in conjunction with penalized methods, neural networks, random forests, boosted trees, and ensemble methods, both predictive and causal. Estimation and inference are based on repeated data splitting to avoid overfitting and achieve validity. We use quantile aggregation of the results across many potential splits, in particular taking medians of p-values and medians and other quantiles of confidence intervals. We show that quantile aggregation lowers estimation risks over a single split procedure, and establish its principal inferential properties. Finally, our analysis reveals ways to build provably better machine learning proxies through causal learning: we can use the objective functions that we develop to construct the best linear predictors of the effects, to obtain better machine learning proxies in the initial step. We illustrate the use of both inferential tools and causal learners with a randomized field experiment that evaluates a combination of nudges to stimulate demand for immunization in India.
</summary>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limited Validity of Breath‐Counting as a Measure of Mindfulness in Ruminative Adolescents</title>
<link href="https://hdl.handle.net/1721.1/162898" rel="alternate"/>
<author>
<name>Treves, Isaac N.</name>
</author>
<author>
<name>Tierney, Anna O.</name>
</author>
<author>
<name>Goldberg, Simon B.</name>
</author>
<author>
<name>Rouleau, Nancie</name>
</author>
<author>
<name>Carson, Nicholas</name>
</author>
<author>
<name>Schuman‐Olivier, Zev</name>
</author>
<author>
<name>Webb, Christian A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162898</id>
<updated>2026-03-08T03:26:00Z</updated>
<published>2025-05-06T00:00:00Z</published>
<summary type="text">Limited Validity of Breath‐Counting as a Measure of Mindfulness in Ruminative Adolescents
Treves, Isaac N.; Tierney, Anna O.; Goldberg, Simon B.; Rouleau, Nancie; Carson, Nicholas; Schuman‐Olivier, Zev; Webb, Christian A.
Objective measurement of mindfulness could help us understand the mechanisms of meditation interventions and how individuals vary in their disposition to be mindful. One proposed measure is the breath-counting task (BCT), which measures how accurately one can count cycles of their breath. Breath counting, which involves sustained attention, meta-awareness, and an internal locus of attention, has been shown in adults to be related to measures of mindfulness even when controlling for established attentional measures. In this study, we test the psychometrics of the BCT in a convenience sample of 78 adolescents with elevated rumination. In preregistered analyses, we related breath-counting measures, including novel objective respiration measures, to a suite of self-report measures as well as the sustained attention to response task (SART). While breath-counting performance showed fair split-half reliability and similar distributions to studies in adults, it did not show the expected positive associations with self-reported mindfulness measures (neither trait nor EMA). Surprisingly, breath-counting accuracy showed negative correlations with a subscale measuring observing of emotions and body sensations, negative correlations with nonreactivity, and performance decrements were larger for individuals scoring more highly on mindfulness in general. The SART showed a small negative correlation with breath-counting resets (an index of mind-wandering). Finally, breath-counting performance was not related to other theoretically relevant clinical, personality, and executive functioning criteria. Our results suggest that, at least in ruminative adolescents, breath-counting may measure a very narrow, contextual form of sustained attention, may not capture other qualities of mindfulness, and may lack predictive validity.
</summary>
<dc:date>2025-05-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polyanhydride‐Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single‐Injection Self‐Boosting Vaccines</title>
<link href="https://hdl.handle.net/1721.1/162897" rel="alternate"/>
<author>
<name>Zhang, Linzixuan</name>
</author>
<author>
<name>Xiao, Ruiqing</name>
</author>
<author>
<name>Gao, Wenhao</name>
</author>
<author>
<name>Garcia, Johnny</name>
</author>
<author>
<name>Pan, Xinyan</name>
</author>
<author>
<name>Daristotle, John L</name>
</author>
<author>
<name>Forster, Timothy</name>
</author>
<author>
<name>Han, Jooli</name>
</author>
<author>
<name>Chaddah, Mehr</name>
</author>
<author>
<name>Varshney, Dhruv</name>
</author>
<author>
<name>Menon, Nandita</name>
</author>
<author>
<name>McHugh, Kevin J</name>
</author>
<author>
<name>Pedretti, Benjamin J</name>
</author>
<author>
<name>Yeo, Jing Ying</name>
</author>
<author>
<name>Yang, Xin</name>
</author>
<author>
<name>MacDonald, Sydney</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<author>
<name>Jaklenec, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/162897</id>
<updated>2026-03-08T03:26:00Z</updated>
<published>2025-05-15T00:00:00Z</published>
<summary type="text">Polyanhydride‐Based Microparticles for Programmable Pulsatile Release of Diphtheria Toxoid (DT) for Single‐Injection Self‐Boosting Vaccines
Zhang, Linzixuan; Xiao, Ruiqing; Gao, Wenhao; Garcia, Johnny; Pan, Xinyan; Daristotle, John L; Forster, Timothy; Han, Jooli; Chaddah, Mehr; Varshney, Dhruv; Menon, Nandita; McHugh, Kevin J; Pedretti, Benjamin J; Yeo, Jing Ying; Yang, Xin; MacDonald, Sydney; Langer, Robert; Jaklenec, Ana
Vaccination remains a critical tool in preventing infectious diseases, yet its effectiveness is undermined by under-immunization, particularly for vaccines requiring multiple doses that patients fail to complete. To address this challenge, the development of single-injection platforms delivering self-boosting vaccines has gained significant attention. Despite some advances, translating these platforms into clinical applications has been limited. In this study, a novel polyanhydride-based polymeric delivery platform is introduced, designed for single-injection self-boosting vaccines, replacing multiple doses. Over 20 polyanhydride polymers are synthesized and screened, ultimately down-selecting to 6 for in vitro studies and 2 for in vivo studies. Using diphtheria toxoid (DT) as a model antigen, programmed pulsatile release with a narrow window is demonstrated, ideal for self-boosting immunization. The platform effectively protects the pH-sensitive antigen before release, achieving recovery rates of 39.7% to 89.7%. The system's tunability is further enhanced by machine learning algorithms, which accurately predict release profiles, confirmed through experimental validation. In vivo studies in a mouse model reveal that the platform induces DT-specific antibody responses comparable to those generated by traditional multi-dose regimens. Collectively, these findings highlight the potential of this platform to deliver various vaccines, offering a potentially promising solution to the global challenge of under-immunization.
</summary>
<dc:date>2025-05-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduction in Global Lightning Activity During the COVID Pandemic</title>
<link href="https://hdl.handle.net/1721.1/162896" rel="alternate"/>
<author>
<name>Liu, Yakun</name>
</author>
<author>
<name>Williams, Earle</name>
</author>
<author>
<name>Guha, Anirban</name>
</author>
<author>
<name>Satori, Gabriella</name>
</author>
<author>
<name>Neto, Osmar Pinto</name>
</author>
<author>
<name>Said, Ryan</name>
</author>
<author>
<name>Holzworth, Robert</name>
</author>
<author>
<name>Virts, Katrina</name>
</author>
<author>
<name>Lang, Timothy</name>
</author>
<author>
<name>Zhu, Yanan</name>
</author>
<author>
<name>LaPierre, Jeff</name>
</author>
<author>
<name>DiGangi, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/162896</id>
<updated>2026-03-08T03:25:58Z</updated>
<published>2025-04-28T00:00:00Z</published>
<summary type="text">Reduction in Global Lightning Activity During the COVID Pandemic
Liu, Yakun; Williams, Earle; Guha, Anirban; Satori, Gabriella; Neto, Osmar Pinto; Said, Ryan; Holzworth, Robert; Virts, Katrina; Lang, Timothy; Zhu, Yanan; LaPierre, Jeff; DiGangi, Elizabeth
The effect of anthropogenic aerosols on lightning is one of the least understood aspects of human‐induced climate change. Global aerosol clearly diminished during the COVID pandemic by 7.6%. A pronounced decrease in global lightning activity in the range 3.0%–5.8% is identified from various detection systems during this natural experiment. The Maritime Continent lightning chimney shows the largest reduction of 7.0% in aerosol, accompanied by a lightning drop of 15%. The COVID period in 2020 also experiences a transition from pre‐COVID El Niño to a strong and sustained La Niña. Compensation for ENSO forcing of lightning activity is implemented to disclose the distinct responses of three global lightning chimneys to competing thermodynamic and aerosol effects. Our observational findings indicate a marked influence of aerosol on a global scale by virtue of the extraordinary COVID‐induced aerosol alteration.
</summary>
<dc:date>2025-04-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineered prime editors with minimal genomic errors</title>
<link href="https://hdl.handle.net/1721.1/162895" rel="alternate"/>
<author>
<name>Chauhan, Vikash P</name>
</author>
<author>
<name>Sharp, Phillip A</name>
</author>
<author>
<name>Langer, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/162895</id>
<updated>2025-10-04T03:09:26Z</updated>
<published>2025-09-17T00:00:00Z</published>
<summary type="text">Engineered prime editors with minimal genomic errors
Chauhan, Vikash P; Sharp, Phillip A; Langer, Robert
Prime editors make programmed genome modifications by writing new sequences into extensions of nicked DNA 3′ ends1. These edited 3′ new strands must displace competing 5′ strands to install edits, yet a bias towards retaining the competing 5′ strands hinders efficiency and can cause indel errors2. Here we discover that nicked end degradation, consistent with competing 5′ strand destabilization, can be promoted by Cas9-nickase mutations that relax nick positioning. We exploit this mechanism to engineer efficient prime editors with strikingly low indel errors. Combining this error-suppressing strategy with the latest efficiency-boosting architecture, we design a next-generation prime editor (vPE). Compared with previous editors, vPE features comparable efficiency yet up to 60-fold lower indel errors, enabling edit:indel ratios as high as 543:1.
</summary>
<dc:date>2025-09-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Defining Nanostores: Cybernetic Insights on Independent Grocery Micro-Retailers’ Identity and Transformations</title>
<link href="https://hdl.handle.net/1721.1/162894" rel="alternate"/>
<author>
<name>Salinas-Navarro, David Ernesto</name>
</author>
<author>
<name>Vilalta-Perdomo, Eliseo</name>
</author>
<author>
<name>Herron, Rebecca Michell</name>
</author>
<author>
<name>Mejía-Argueta, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/162894</id>
<updated>2025-10-04T03:09:11Z</updated>
<published>2025-09-02T00:00:00Z</published>
<summary type="text">Defining Nanostores: Cybernetic Insights on Independent Grocery Micro-Retailers’ Identity and Transformations
Salinas-Navarro, David Ernesto; Vilalta-Perdomo, Eliseo; Herron, Rebecca Michell; Mejía-Argueta, Christopher
Nanostores—micro, independent grocery retailers—are often defined overlooking their socioeconomic roles and relational significance in favour of their primary functional aspects. To close this gap, this study adopts a systemic perspective to examine how multiple stakeholders (owners, customers, and suppliers) shape nanostore identity. Accordingly, this study proposes a framework of X-Y-Z identity statements, along with the use of the TASCOI tool, to examine nanostore descriptions and map their roles, expectations, and transformation processes. This systemic framework, rooted in management cybernetics, enabled the collection and analysis of 168 survey responses from 34 stores in Mexico City. The results show that nanostore identities are varied and context-dependent, operating as grocery stores, family projects, community anchors, economic lifelines, and competitors. This diversity influences stakeholder engagement, resource utilisation, and operational decisions. Overall, this study provides a transferable framework for analysing micro-business identity and transformation, with implications for problem-solving, decision-making, and policy development. Future research should address the current limitations of this study, including its geographical cross-sectional design, limited sampling method, reliance on self-reported perceptions, and lack of generalisability to other populations. Future work will involve exploring other urban contexts, utilising longitudinal data, expanding the sample, and adopting a participatory research approach to gain a deeper understanding of identity dynamics and their implications for nanostore resilience and survivability.
</summary>
<dc:date>2025-09-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verifying Online Safety Properties for Safe Deep Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/162893" rel="alternate"/>
<author>
<name>Marzari, Luca</name>
</author>
<author>
<name>Cicalese, Ferdinando</name>
</author>
<author>
<name>Farinelli, Alessandro</name>
</author>
<author>
<name>Amato, Christopher</name>
</author>
<author>
<name>Marchesini, Enrico</name>
</author>
<id>https://hdl.handle.net/1721.1/162893</id>
<updated>2025-10-04T03:09:15Z</updated>
<published>2025-09-30T00:00:00Z</published>
<summary type="text">Verifying Online Safety Properties for Safe Deep Reinforcement Learning
Marzari, Luca; Cicalese, Ferdinando; Farinelli, Alessandro; Amato, Christopher; Marchesini, Enrico
Ensuring safety in reinforcement learning (RL) is critical for deploying agents in real-world applications. During training, current safe RL approaches often rely on indicator cost functions that provide sparse feedback, resulting in two key limitations: (i) poor sample efficiency due to the lack of safety information in neighboring states, and (ii) dependence on cost-value functions, leading to brittle convergence and suboptimal performance. After training, safety is guaranteed via formal verification methods for deep neural networks (FV), whose computational complexity hinders their application during training.  We address the limitations of using cost functions via verification by proposing a safe RL method based on a violation value---the risk associated with policy decisions in a portion of the state space. Our approach verifies safety properties (i.e., state-action pairs) that may lead to unsafe behavior, and quantifies the size of the state space where properties are violated. This violation value is then used to penalize the agent during training to encourage safer policy behavior. Given the NP-hard nature of FV, we propose an efficient, sample-based approximation with probabilistic guarantees to compute the violation value.   Extensive experiments on standard benchmarks and real-world robotic navigation tasks show that violation-augmented approaches significantly improve safety by reducing the number of unsafe states encountered while achieving superior performance compared to existing methods.
</summary>
<dc:date>2025-09-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Novel Prediction Model for Multimodal Medical Data Based on Graph Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/162892" rel="alternate"/>
<author>
<name>Zhang, Lifeng</name>
</author>
<author>
<name>Li, Teng</name>
</author>
<author>
<name>Cui, Hongyan</name>
</author>
<author>
<name>Zhang, Quan</name>
</author>
<author>
<name>Jiang, Zijie</name>
</author>
<author>
<name>Li, Jiadong</name>
</author>
<author>
<name>Welsch, Roy E.</name>
</author>
<author>
<name>Jia, Zhongwei</name>
</author>
<id>https://hdl.handle.net/1721.1/162892</id>
<updated>2025-10-04T03:09:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Novel Prediction Model for Multimodal Medical Data Based on Graph Neural Networks
Zhang, Lifeng; Li, Teng; Cui, Hongyan; Zhang, Quan; Jiang, Zijie; Li, Jiadong; Welsch, Roy E.; Jia, Zhongwei
Multimodal medical data provides a wide and real basis for disease diagnosis. Computer-aided diagnosis (CAD) powered by artificial intelligence (AI) is becoming increasingly prominent in disease diagnosis. CAD for multimodal medical data requires addressing the issues of data fusion and prediction. Traditionally, the prediction performance of CAD models has not been good enough due to the complicated dimensionality reduction. Therefore, this paper proposes a fusion and prediction model, EPGC, for multimodal medical data based on graph neural networks. Firstly, we select features from unstructured multimodal medical data and quantify them. Then, we transform the multimodal medical data into a graph data structure by establishing each patient as a node, and establishing edges based on the similarity of features between the patients. Normalization of data is also essential in this process. Finally, we build a node prediction model based on graph neural networks and predict the node classification, which predicts the patients' diseases. The model is validated on two publicly available datasets of heart diseases. Compared to the existing models that typically involve dimensionality reduction, classification, or the establishment of complex deep learning networks, the proposed model achieves outstanding results with the experimental dataset. This demonstrates that the fusion and diagnosis of multimodal data can be effectively achieved without dimension reduction or intricate deep learning networks. We take pride in exploring unstructured multimodal medical data using deep learning and hope to make breakthroughs in various fields.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Competitive Nanostore Networks for Enhanced Food Accessibility: Insights from a Competitive Facility Location Model</title>
<link href="https://hdl.handle.net/1721.1/162891" rel="alternate"/>
<author>
<name>da Silva-Ovando, Agatha Clarice</name>
</author>
<author>
<name>Granados-Rivera, Daniela</name>
</author>
<author>
<name>Mejía, Gonzalo</name>
</author>
<author>
<name>Mejía-Argueta, Christopher</name>
</author>
<author>
<name>Gutiérrez-Franco, Edgar</name>
</author>
<id>https://hdl.handle.net/1721.1/162891</id>
<updated>2025-10-04T03:09:20Z</updated>
<published>2025-08-18T00:00:00Z</published>
<summary type="text">Designing Competitive Nanostore Networks for Enhanced Food Accessibility: Insights from a Competitive Facility Location Model
da Silva-Ovando, Agatha Clarice; Granados-Rivera, Daniela; Mejía, Gonzalo; Mejía-Argueta, Christopher; Gutiérrez-Franco, Edgar
Background: Access to healthy food in emerging-economy cities is challenged by last-mile constraints and poor infrastructure. Aligned with the UN SDGs on Zero Hunger and Sustainable Cities, this study examines how a strategically located nanostores network can help close these gaps while fostering local resilience. Focusing on Colombia’s Sabana Centro region, we designed a nanostore network that maximizes spatial coverage, proximity, and affordability. Methods: A competitive facility-location model combined with a discrete choice model captures consumer heterogeneity in price and location preferences. Results: Results show that locating nanostores in peripheral rather than central areas improves equity: the proposed network meets about 65,400 kg of weekly demand—51% fruit, 36% vegetables, 13% tubers—representing 16% of total regional demand and reaching underserved municipalities. This is notable given that existing nanostores already satisfy roughly 37% of household needs. Conclusions: By linking consumer behavior with sustainable spatial planning, the research offers both theoretical insight and practical tools for equitable distribution. Future work should evaluate supportive policies and supply chain innovations to secure nanostores’ long-term viability and community impact.
</summary>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/162890" rel="alternate"/>
<author>
<name>Rashid, Sharaf</name>
</author>
<author>
<name>Bollis, Edson</name>
</author>
<author>
<name>Pellicer, Lucas</name>
</author>
<author>
<name>Rabbani, Darian</name>
</author>
<author>
<name>Palacios, Rafael</name>
</author>
<author>
<name>Gupta, Aneesh</name>
</author>
<author>
<name>Gupta, Amar</name>
</author>
<id>https://hdl.handle.net/1721.1/162890</id>
<updated>2025-10-04T03:09:22Z</updated>
<published>2025-08-05T00:00:00Z</published>
<summary type="text">Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models
Rashid, Sharaf; Bollis, Edson; Pellicer, Lucas; Rabbani, Darian; Palacios, Rafael; Gupta, Aneesh; Gupta, Amar
Generative Adversarial Networks (GANs) using Long Short-Term Memory (LSTM) provide a computationally cheaper approach for text generation compared to large language models (LLMs). The low hardware barrier of training GANs poses a threat because it means more bad actors may use them to mass-produce prompt attack messages against LLM systems. Thus, to better understand the threat of GANs being used for prompt attack generation, we train two well-known GAN architectures, SeqGAN and RelGAN, on prompt attack messages. For each architecture, we evaluate generated prompt attack messages, comparing results with each other, with generated attacks from another computationally cheap approach, a 1-billion-parameter Llama 3.2 small language model (SLM), and with messages from the original dataset. This evaluation suggests that GAN architectures like SeqGAN and RelGAN have the potential to be used in conjunction with SLMs to readily generate malicious prompts that impose new threats against LLM-based systems such as chatbots. Analyzing the effectiveness of state-of-the-art defenses against prompt attacks, we also find that GAN-generated attacks can deceive most of these defenses with varying levels of success with the exception of Meta's PromptGuard. Further, we suggest an improvement of prompt attack defenses based on the analysis of the language quality of the prompts, which we found to be the weakest point of GAN-generated messages.
</summary>
<dc:date>2025-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal Transmission Switching and Grid Reconfiguration for Transmission Systems via Convex Relaxations</title>
<link href="https://hdl.handle.net/1721.1/162889" rel="alternate"/>
<author>
<name>Jagadeesan Nair, Vineet</name>
</author>
<id>https://hdl.handle.net/1721.1/162889</id>
<updated>2025-10-04T03:09:19Z</updated>
<published>2025-07-02T00:00:00Z</published>
<summary type="text">Optimal Transmission Switching and Grid Reconfiguration for Transmission Systems via Convex Relaxations
Jagadeesan Nair, Vineet
In this paper, we formulate optimization problems and successive convex relaxations to perform optimal transmission switching (OTS) in order to operate power transmission grids more efficiently. OTS may be crucial in future power grids with much higher penetrations of renewable energy sources, which will introduce more variability and intermittency in generation. Similarly, OTS can potentially help mitigate the effects of unpredictable demand fluctuations (e.g., due to extreme weather). We explore and compare several different formulations for the OTS problem in terms of the computational performance and optimality. In particular, we build upon the literature by considering more complex and accurate power flow formulations for OTS and introducing novel convex relaxations. This allows us to model the grid physics more accurately than prior works and generalize to several different types of networks. We also apply our methods to small transmission test cases as a proof of concept to determine the effects of applying OTS.
</summary>
<dc:date>2025-07-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>FabObscura: Computational Design and Fabrication for Interactive Barrier-Grid Animations</title>
<link href="https://hdl.handle.net/1721.1/162888" rel="alternate"/>
<author>
<name>Sethapakdi, Ticha</name>
</author>
<author>
<name>Perroni-Scharf, Maxine</name>
</author>
<author>
<name>Li, Mingming</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Solomon, Justin</name>
</author>
<author>
<name>Satyanarayan, Arvind</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/162888</id>
<updated>2025-10-04T03:09:13Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">FabObscura: Computational Design and Fabrication for Interactive Barrier-Grid Animations
Sethapakdi, Ticha; Perroni-Scharf, Maxine; Li, Mingming; Li, Jiaji; Solomon, Justin; Satyanarayan, Arvind; Mueller, Stefanie
We present FabObscura: a system for creating interactive barrier-grid animations, a classic technique that uses occlusion patterns to create the illusion of motion. Whereas traditional barrier-grid animations are constrained to simple linear occlusion patterns, FabObscura introduces a parameterization that represents patterns as mathematical functions. Our parameterization offers two key advantages over existing barrier-grid animation design methods: first, it has a high expressive ceiling by enabling the systematic design of novel patterns; second, it is versatile enough to represent all established forms of barrier-grid animations.&#13;
Using this parameterization, our computational design tool enables an end-to-end workflow for authoring, visualizing, and fabricating these animations without domain expertise. Our applications demonstrate how FabObscura can be used to create animations that respond to a range of user interactions, such as translations, rotations, and changes in viewpoint. By formalizing barrier-grid animation as a computational design material, FabObscura extends its expressiveness as an interactive medium.
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spaces of Polynomials as Grassmanians for Immersions and Embeddings</title>
<link href="https://hdl.handle.net/1721.1/162887" rel="alternate"/>
<author>
<name>Katz, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/162887</id>
<updated>2025-10-04T03:09:18Z</updated>
<published>2025-06-24T00:00:00Z</published>
<summary type="text">Spaces of Polynomials as Grassmanians for Immersions and Embeddings
Katz, Gabriel
Let Y be a smooth compact n-manifold. We studied smooth embeddings and immersions β : M → R × Y of compact n-manifolds M such that β(M) avoids some a priori chosen closed poset Θ of tangent patterns to the fibers of the obvious projection π : R × Y → Y. Then, for a fixed Y, we introduced an equivalence relation between such β's, creating a crossover between pseudo-isotopies and bordisms. We called this relation quasitopy. In the presented study of quasitopies, the spaces P_d^{cΘ} of real univariate polynomials of degree d with real divisors, whose combinatorial patterns avoid a given closed poset Θ, play the classical role of Grassmanians. We computed the quasitopy classes Q_d^{emb}(Y, cΘ) of Θ-constrained embeddings β in terms of the homotopy/homology theory of the spaces Y and P_d^{cΘ}. We proved also that the quasitopies of embeddings stabilize as d → ∞.
</summary>
<dc:date>2025-06-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, The Research Laboratory of Electronics</title>
<link href="https://hdl.handle.net/1721.1/162886" rel="alternate"/>
<author>
<name>Baldo, Marc A</name>
</author>
<id>https://hdl.handle.net/1721.1/162886</id>
<updated>2025-10-17T15:34:24Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, The Research Laboratory of Electronics
Baldo, Marc A
This report contains the following sections: Introduction, Research Highlights, Personnel, Honors and Awards, and Outreach.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Direct and Indirect Mass Flow Rate Measurements for Ionic Liquid Ion Sources</title>
<link href="https://hdl.handle.net/1721.1/162885" rel="alternate"/>
<author>
<name>Shaik, Saba Z.</name>
</author>
<author>
<name>Lozano, Paulo C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162885</id>
<updated>2025-10-04T03:09:27Z</updated>
<published>2025-09-16T00:00:00Z</published>
<summary type="text">Direct and Indirect Mass Flow Rate Measurements for Ionic Liquid Ion Sources
Shaik, Saba Z.; Lozano, Paulo C.
The dominant performance loss in ionic liquid ion sources is thought to be the mass utilization efficiency, where an electrospray source appears to shed neutral propellant mass that does not appear in its exhaust. The underlying cause of this phenomenon is presently unclear. Investigating and characterizing potential utilization losses requires accurate measurements of electrospray mass flow rates, which is difficult due to the extremely small flow rates that are processed by individual sources, particularly those that operate in the pure-ion regime. In this work, we present an experimental platform that allows for simultaneous, rapid, and in-situ measurements of both supply and exhaust mass flow rates, allowing for measurements of the mass utilization efficiency for single electrospray emitters. Supply flow rates are measured directly using an optical approach that provides ng/s level resolution. Exhaust flow rates are measured indirectly using a time-of-flight mass spectrometer. This platform is employed to measure mass flow rates for a 3 µm internally fed emitter using the ionic liquid EMI-BF4 at emission currents ranging from 100 to 500 nA. At all currents, there is a major discrepancy between the direct and indirect flow rates, with the direct value being greater in almost all cases. Component efficiency estimates confirm that mass utilization is the most significant performance loss at low flow rates when the source is operating in the pure-ion regime.
39th International Electric Propulsion Conference, Imperial College London, London, United Kingdom 14-19 September 2025
</summary>
<dc:date>2025-09-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Department of Urban Studies and Planning</title>
<link href="https://hdl.handle.net/1721.1/162884" rel="alternate"/>
<author>
<name>Zegras, P. Chris</name>
</author>
<id>https://hdl.handle.net/1721.1/162884</id>
<updated>2025-10-04T03:10:45Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Department of Urban Studies and Planning
Zegras, P. Chris
This report contains the following sections: Promotions and Faculty Appointments; Comings, Goings, Changing Roles; Committees and Leadership; Major Awards, Events, and Other Noteworthy News; Education/Degree Programs; Commencement/Awards; and DUSP Student Council Awards.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graffiti: Enabling an Ecosystem of Personalized and Interoperable Social Applications</title>
<link href="https://hdl.handle.net/1721.1/162883" rel="alternate"/>
<author>
<name>Henderson, Theia</name>
</author>
<author>
<name>Karger, David</name>
</author>
<author>
<name>Clark, David D</name>
</author>
<id>https://hdl.handle.net/1721.1/162883</id>
<updated>2026-02-11T15:33:23Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Graffiti: Enabling an Ecosystem of Personalized and Interoperable Social Applications
Henderson, Theia; Karger, David; Clark, David D
Most social applications, from Twitter to Wikipedia, have rigid one-size-fits-all designs, but building new social applications is both technically challenging and results in applications that are siloed away from existing communities. We present Graffiti, a system that can be used to build a wide variety of personalized social applications with relative ease that also interoperate with each other. People can freely move between a plurality of designs—each with its own aesthetic, feature set, and moderation—all without losing their friends or data.&#13;
Our concept of total reification makes it possible for seemingly contradictory designs, including conflicting moderation rules, to interoperate. Conversely, our concept of channels prevents interoperation from occurring by accident, avoiding context collapse.&#13;
Graffiti applications interact through a minimal client-side API, which we show admits at least two decentralized implementations. Above the API, we built a Vue plugin, which we use to develop applications similar to Twitter, Messenger, and Wikipedia using only client-side code. Our case studies explore how these and other novel applications interoperate, as well as the broader ecosystem that Graffiti enables.
UIST ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>PeelFab: Designing 3D Printed Peelable Structures for 3D Masking</title>
<link href="https://hdl.handle.net/1721.1/162882" rel="alternate"/>
<author>
<name>Ni, Yongbo</name>
</author>
<author>
<name>Ji, Junzhe</name>
</author>
<author>
<name>Yang, Yue</name>
</author>
<author>
<name>Chen, Chuang</name>
</author>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Tao, Ye</name>
</author>
<author>
<name>Wang, Guanyun</name>
</author>
<id>https://hdl.handle.net/1721.1/162882</id>
<updated>2025-10-03T06:56:17Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">PeelFab: Designing 3D Printed Peelable Structures for 3D Masking
Ni, Yongbo; Ji, Junzhe; Yang, Yue; Chen, Chuang; Li, Jiaji; Tao, Ye; Wang, Guanyun
Desktop 3D printers are capable of fabricating structures with complex geometries, thus enhancing the functionality and interactivity of printed objects. Peelable structures represent an important application in 3D printing, as the supports and brims demonstrate, offering more possibilities for printing. However, existing tools are limited in their ability to effectively assist users in designing and customizing such structures, and their broader application potential remains underexplored. In traditional artistic practices, masks also exhibit the characteristics of a peelable design and serve as creative tools. However, within the field of human-computer interaction, no prior work has investigated the use of 3D-printed peelable structures for mask creation. To address this gap, we present PeelFab, a fabrication method and accompanying design tool for generating custom peelable structures directly within modeling software. Through the use of a built-in structure library and an interactive interface, users can create peelable structures based on points, lines, and surfaces, allowing the design of various 3D printed masking geometries. We also demonstrate several application cases that showcase the potential of 3D-printed masking using peelable structures.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Asynchronous Training of Mixed-Role Human Actors in a Partially Observable Environment</title>
<link href="https://hdl.handle.net/1721.1/162881" rel="alternate"/>
<author>
<name>Chestnut Chang, Kimberlee</name>
</author>
<author>
<name>Jensen, Reed</name>
</author>
<author>
<name>Paleja, Rohan</name>
</author>
<author>
<name>Polk, Sam</name>
</author>
<author>
<name>Seater, Rob</name>
</author>
<author>
<name>Steilberg, Jackson</name>
</author>
<author>
<name>Schiefelbein, Curran</name>
</author>
<author>
<name>Scheldrup, Melissa</name>
</author>
<author>
<name>Gombolay, Matthew</name>
</author>
<author>
<name>Ramirez, Mabel</name>
</author>
<id>https://hdl.handle.net/1721.1/162881</id>
<updated>2025-10-03T06:56:33Z</updated>
<published>2025-09-17T00:00:00Z</published>
<summary type="text">Asynchronous Training of Mixed-Role Human Actors in a Partially Observable Environment
Chestnut Chang, Kimberlee; Jensen, Reed; Paleja, Rohan; Polk, Sam; Seater, Rob; Steilberg, Jackson; Schiefelbein, Curran; Scheldrup, Melissa; Gombolay, Matthew; Ramirez, Mabel
In cooperative training, humans within a team coordinate on complex tasks, building mental models of their teammates and learning to adapt to teammates' actions in real-time. To reduce the often prohibitive scheduling constraints associated with cooperative training, this article introduces a paradigm for cooperative asynchronous training of human teams in which trainees practice coordination with autonomous teammates rather than humans. We introduce a novel experimental design for evaluating autonomous teammates for use as training partners in cooperative training. We apply this design to a human-subjects experiment where humans are trained with either another human or an autonomous teammate and are evaluated with a new human subject in a new, partially observable, cooperative game developed for this study. Importantly, we employ an unsupervised sequential clustering methodology to partition teammate trajectories from demonstrations performed in the experiment to form a smaller number of training conditions. This results in a simpler experiment design, enabling us to conduct a complex cooperative training human-subjects study in a reasonable amount of time. Through a demonstration of the proposed experimental design, we provide takeaways and design recommendations for future research in the development of cooperative asynchronous training systems utilizing robot surrogates for human teammates.
</summary>
<dc:date>2025-09-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generic Pan Tilt: Open Source Motion Control Platform for Entertainment and Research</title>
<link href="https://hdl.handle.net/1721.1/162880" rel="alternate"/>
<author>
<name>Naseck, Perry</name>
</author>
<author>
<name>Mayton, Brian</name>
</author>
<author>
<name>Blanchard, Lancelot</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/162880</id>
<updated>2025-10-03T06:56:28Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Generic Pan Tilt: Open Source Motion Control Platform for Entertainment and Research
Naseck, Perry; Mayton, Brian; Blanchard, Lancelot; Paradiso, Joseph
We introduce the Generic Pan Tilt, an open-source, two-axis motion control platform designed for use in entertainment, art, and research. Combining affordable off-the-shelf hardware, 3D-printed parts, and custom electronics, the system enables rapid development and flexible integration of kinetic movement into small-scale performances and installations. The Generic Pan Tilt adheres to industry standards for connectivity and control, supporting DMX512-A and modular payloads. Demonstrated in a live AI-augmented musical performance, the platform enables new music and performance interfaces that feature expressive motion.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sound2Haptic: A Toolkit for Portable Multi-Channel Haptic Integration Across Multiple Form Factors and Devices</title>
<link href="https://hdl.handle.net/1721.1/162879" rel="alternate"/>
<author>
<name>Chin, Sam</name>
</author>
<author>
<name>Fitz-Gibbon, Emmie</name>
</author>
<author>
<name>Huang, Bingjian</name>
</author>
<author>
<name>Tims, Carter</name>
</author>
<author>
<name>Orzech, Gabrielle</name>
</author>
<author>
<name>Thoo, Yong-Joon</name>
</author>
<author>
<name>Paradiso, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/162879</id>
<updated>2025-10-03T06:56:25Z</updated>
<published>2025-09-27T00:00:00Z</published>
<summary type="text">Sound2Haptic: A Toolkit for Portable Multi-Channel Haptic Integration Across Multiple Form Factors and Devices
Chin, Sam; Fitz-Gibbon, Emmie; Huang, Bingjian; Tims, Carter; Orzech, Gabrielle; Thoo, Yong-Joon; Paradiso, Joseph
Existing multi-actuator vibrotactile systems often require external hardware such as sound cards and haptic amplifiers, which limits portability and creates complexity for non-technical users. This presents a significant barrier for researchers and designers in fields like human factors and healthcare. We present Sound2Haptic, a vibrotactile toolkit that integrates a sound card and haptic amplifiers into a single device. The toolkit connects to laptops, phones, and XR headsets, enabling portable eight-channel multi-actuator interaction accessible to non-technical users. The toolkit features a novel mechanical design that reduces cross-actuator interference and enables form factor customization. We demonstrate the toolkit’s functional efficacy through psychophysical evaluation across three form factors, and its ease of use through three case studies: (1) a clinical application for tinnitus research, (2) a human factors study on speech prosody conducted with a human factors researcher, and (3) an exploration of spatial neglect rehabilitation using XR and haptics.
UIST Adjunct ’25, Busan, Republic of Korea
</summary>
<dc:date>2025-09-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2023, Dean, Sloan School</title>
<link href="https://hdl.handle.net/1721.1/162878" rel="alternate"/>
<author>
<name>Schmittlein, David C</name>
</author>
<id>https://hdl.handle.net/1721.1/162878</id>
<updated>2025-10-03T06:58:04Z</updated>
<published>2023-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2023, Dean, Sloan School
Schmittlein, David C
This report contains the following sections: Introduction; Faculty and Research; Academic Programs; Office of External Relations; MIT Sloan Management Review; MIT Sloan Global Programs; Diversity, Equity &amp; Inclusion; and Conclusion.
</summary>
<dc:date>2023-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of MOF linker rotation and functionalization on methane uptake and diffusion</title>
<link href="https://hdl.handle.net/1721.1/162877" rel="alternate"/>
<author>
<name>Yue, Shuwen</name>
</author>
<author>
<name>Oh, Changhwan</name>
</author>
<author>
<name>Nandy, Aditya</name>
</author>
<author>
<name>Terrones, Gianmarco G</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162877</id>
<updated>2025-10-03T06:56:48Z</updated>
<published>2023-01-02T00:00:00Z</published>
<summary type="text">Effects of MOF linker rotation and functionalization on methane uptake and diffusion
Yue, Shuwen; Oh, Changhwan; Nandy, Aditya; Terrones, Gianmarco G; Kulik, Heather J
The flexible degrees of freedom in metal–organic frameworks (MOFs) can have significant effects on guest molecule behavior. However, in the majority of studies applying molecular simulations to MOFs, the framework is assumed to be rigid in order to minimize computational cost. Here we assess the significance of this assumption on a representative example of methane uptake and diffusion in UiO-66. We introduce an open-source code to modify MOFs through functionalization and linker rotation and we perform Grand Canonical Monte Carlo and molecular dynamics simulations of methane in each of the functionalized and linker-rotated derivatives of UiO-66. We find that linker rotation moderately influences methane uptake and significantly influences methane diffusion. Our assessment provides ranges of property values that serve as measures of uncertainty of these two properties associated with linker rotation. We further determine that void volume fraction and minimum pore size are the features that govern methane uptake and diffusion, respectively. These findings illustrate the impact of linker rotation on MOFs and provide design principles to guide future investigations.
</summary>
<dc:date>2023-01-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis and Ring-Opening Metathesis Polymerization of a Strained trans-Silacycloheptene and Single-Molecule Mechanics of Its Polymer</title>
<link href="https://hdl.handle.net/1721.1/162876" rel="alternate"/>
<author>
<name>Wakefield, Herbert</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Wentz, Kelsie E</name>
</author>
<author>
<name>Yao, Yunxin</name>
</author>
<author>
<name>Kouznetsova, Tatiana B</name>
</author>
<author>
<name>Melvin, Sophia J</name>
</author>
<author>
<name>Ambrosius, Em G</name>
</author>
<author>
<name>Herzog-Arbeitman, Abraham</name>
</author>
<author>
<name>Siegler, Maxime A</name>
</author>
<author>
<name>Johnson, Jeremiah A</name>
</author>
<author>
<name>Craig, Stephen L</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Klausen, Rebekka S</name>
</author>
<id>https://hdl.handle.net/1721.1/162876</id>
<updated>2025-10-03T06:56:47Z</updated>
<published>2023-04-05T00:00:00Z</published>
<summary type="text">Synthesis and Ring-Opening Metathesis Polymerization of a Strained trans-Silacycloheptene and Single-Molecule Mechanics of Its Polymer
Wakefield, Herbert; Kevlishvili, Ilia; Wentz, Kelsie E; Yao, Yunxin; Kouznetsova, Tatiana B; Melvin, Sophia J; Ambrosius, Em G; Herzog-Arbeitman, Abraham; Siegler, Maxime A; Johnson, Jeremiah A; Craig, Stephen L; Kulik, Heather J; Klausen, Rebekka S
The cis- and trans-isomers of a silacycloheptene were selectively synthesized by the alkylation of a silyl dianion, a novel approach to strained cycloalkenes. The trans-silacycloheptene (trans-SiCH) was significantly more strained than the cis isomer, as predicted by quantum chemical calculations and confirmed by crystallographic signatures of a twisted alkene. Each isomer exhibited distinct reactivity toward ring-opening metathesis polymerization (ROMP), where only trans-SiCH afforded high-molar-mass polymer under enthalpy-driven ROMP. Hypothesizing that the introduction of silicon might result in increased molecular compliance at large extensions, we compared poly(trans-SiCH) to organic polymers by single-molecule force spectroscopy (SMFS). Force-extension curves from SMFS showed that poly(trans-SiCH) is more easily overstretched than two carbon-based analogues, polycyclooctene and polybutadiene, with stretching constants that agree well with the results of computational simulations.
</summary>
<dc:date>2023-04-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>SESAMI APP: An Accessible Interface for Surface Area Calculation of Materials from Adsorption Isotherms</title>
<link href="https://hdl.handle.net/1721.1/162875" rel="alternate"/>
<author>
<name>Terrones, Gianmarco G</name>
</author>
<author>
<name>Chen, Yu</name>
</author>
<author>
<name>Datar, Archit</name>
</author>
<author>
<name>Lin, Li-Chiang</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Chung, Yongchul G</name>
</author>
<id>https://hdl.handle.net/1721.1/162875</id>
<updated>2025-10-03T06:56:45Z</updated>
<published>2023-06-09T00:00:00Z</published>
<summary type="text">SESAMI APP: An Accessible Interface for Surface Area Calculation of Materials from Adsorption Isotherms
Terrones, Gianmarco G; Chen, Yu; Datar, Archit; Lin, Li-Chiang; Kulik, Heather J; Chung, Yongchul G
Accurate characterization of surface area is critical for understanding a material’s properties and performance. The most widely used approach to calculate a material’s gravimetric surface area, i.e. surface area per unit mass, is the Brunauer-Emmett-Teller (BET) method (Brunauer et al., 1938). The BET method computes the surface area of a material given the adsorption isotherm of a probe gas (i.e. N2 or Ar) in that material. Many researchers either obtain the BET area from commercial software that comes with measurement equipment, or perform the analyses manually on a spreadsheet, which is time-consuming and nearly impossible for some types of isotherms. Furthermore, these two approaches lead to large variability in BET-calculated areas (Osterrieth et al., 2022). These challenges have motivated the development of programs for the automated and standardized calculation of BET areas (Datar et al., 2020; Iacomi &amp; Llewellyn, 2019; Osterrieth et al., 2022; Sadeghi et al., 2020; Sinha et al., 2019).
</summary>
<dc:date>2023-06-09T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seasonal Salinification of the US Northeast Continental Shelf Cold Pool Driven by Imbalance Between Cross‐Shelf Fluxes and Vertical Mixing</title>
<link href="https://hdl.handle.net/1721.1/162874" rel="alternate"/>
<author>
<name>Taenzer, Lukas L.</name>
</author>
<author>
<name>Chen, Ke</name>
</author>
<author>
<name>Plueddemann, Albert J.</name>
</author>
<author>
<name>Gawarkiewicz, Glen G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162874</id>
<updated>2025-10-03T06:56:43Z</updated>
<published>2025-05-14T00:00:00Z</published>
<summary type="text">Seasonal Salinification of the US Northeast Continental Shelf Cold Pool Driven by Imbalance Between Cross‐Shelf Fluxes and Vertical Mixing
Taenzer, Lukas L.; Chen, Ke; Plueddemann, Albert J.; Gawarkiewicz, Glen G.
The US Northeast continental shelf “cold pool” comprises winter‐cooled Shelf Water that is trapped below the warm surface layer during the stratified season. The regional ecosystem relies on the preservation of winter temperatures within the cold pool throughout the year. Here, we present first evidence of a significant increase in the cold pool's salt content on the US Northeast continental shelf throughout the stratified season, suggesting that shelfbreak exchange contributes strongly to the seasonal erosion of the cold pool. Cold pool salinification rates of 0.18 PSU/month remain steady throughout the stratified season, leading to salinity differences of over 1 PSU between April and October. A cold‐pool salinity budget reveals that the observed salinification is caused by an imbalance between cross‐shelf salt fluxes, which deposit salt into the cold pool at all times of year, and the strong seasonal cycle of vertical mixing. During the stratified season, vertical mixing is inhibited and no longer counteracts the cross‐shelf flux, leading to net salinification of the cold pool over the summer. Along‐shelf freshwater advection from upstream is only present in the fall and contributes some additional freshening to shut down the salinification trend. Seasonal variability in the position of the US Northeast shelfbreak front is too small and out of phase to contribute to the salinity increase. The strong relationship between the seasonal cycle of cold pool modification and seasonal stratification points toward the importance of the timing of spring re‐ and fall de‐stratification on near‐bottom continental shelf temperature and salinity.
</summary>
<dc:date>2025-05-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailoring dynamic hydrogels by controlling associative exchange rates</title>
<link href="https://hdl.handle.net/1721.1/162873" rel="alternate"/>
<author>
<name>Zhang, Vivian</name>
</author>
<author>
<name>Accardo, Joseph V</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Woods, Eliot F</name>
</author>
<author>
<name>Chapman, Steven J</name>
</author>
<author>
<name>Eckdahl, Christopher T</name>
</author>
<author>
<name>Stern, Charlotte L</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Kalow, Julia A</name>
</author>
<id>https://hdl.handle.net/1721.1/162873</id>
<updated>2025-10-03T06:56:44Z</updated>
<published>2023-08-10T00:00:00Z</published>
<summary type="text">Tailoring dynamic hydrogels by controlling associative exchange rates
Zhang, Vivian; Accardo, Joseph V; Kevlishvili, Ilia; Woods, Eliot F; Chapman, Steven J; Eckdahl, Christopher T; Stern, Charlotte L; Kulik, Heather J; Kalow, Julia A
Dithioalkylidenes are a newly developed class of conjugate acceptors that undergo thiol exchange via an associative mechanism, enabling decoupling of key material properties for sustainability, biomedical, and sensing applications. Here, we show that the exchange rate is highly sensitive to the structure of the acceptor and tunable over four orders of magnitude in aqueous environments. Cyclic acceptors exchange rapidly, from 0.95 to 15.6 M−1s−1, whereas acyclic acceptors exchange between 3.77 × 10−3 and 2.17 × 10−2 M−1s−1. Computational, spectroscopic, and structural data suggest that cyclic acceptors are more reactive than their acyclic counterparts because of resonance stabilization of the tetrahedral exchange intermediate. We parametrize molecular reactivity with respect to computed descriptors of the electrophilic site and leverage this insight to design a compound with intermediate characteristics. Lastly, we incorporate this dynamic bond into hydrogels and demonstrate that the characteristic stress relaxation time (τ) is directly proportional to molecular kex.
</summary>
<dc:date>2023-08-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heterogeneity of continuous glucose monitoring features and their clinical associations in a type 2 diabetes population</title>
<link href="https://hdl.handle.net/1721.1/162872" rel="alternate"/>
<author>
<name>Healey, Elizabeth</name>
</author>
<author>
<name>Morato, Carlos</name>
</author>
<author>
<name>Murillo, Jaime</name>
</author>
<author>
<name>Kohane, Isaac</name>
</author>
<id>https://hdl.handle.net/1721.1/162872</id>
<updated>2025-10-03T06:56:41Z</updated>
<published>2025-05-19T00:00:00Z</published>
<summary type="text">Heterogeneity of continuous glucose monitoring features and their clinical associations in a type 2 diabetes population
Healey, Elizabeth; Morato, Carlos; Murillo, Jaime; Kohane, Isaac
Objective: Data from continuous glucose monitors (CGM) enable the extraction of features descriptive of glycemic dynamics that may provide insight into underlying health status. In this work, we analyse CGM data from a large population of individuals with type 2 diabetes (T2D) and study the association of features with clinical covariates. Methods: We retrospectively analysed CGM and electronic health record data from a large population of individuals with T2D. We extracted 25 daily CGM features for each individual over a 30-day period and performed statistical association tests on the features and clinical findings from medical claims data and laboratory records. Results: Our final analysis was performed on 6533 individuals. When clustering the CGM features across the population of individuals with T2D, four distinct clusters of features emerged. Further, the CGM features had heterogeneous discriminatory power with clinical covariates, including laboratory values and the presence of claims for diabetic complications. Features related to glycemic variability, such as coefficient of variation, showed markedly lower p-values in many association tests for the presence of diabetic complications than mean glucose. Conclusions: In examining the characteristics of different features extracted from CGM data in a large population of individuals with T2D, we found that the features were heterogeneously associated with different clinical comorbidities related to diabetes. This work motivates further research to investigate the relationship between CGM features and health outcomes in T2D to enable precision medicine.
</summary>
<dc:date>2025-05-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Characterization of Electrochemically Machined Tungsten Extractor Electrodes for Electrospray Thrusters</title>
<link href="https://hdl.handle.net/1721.1/162871" rel="alternate"/>
<author>
<name>Gale, Alex E.</name>
</author>
<author>
<name>Shaik, Saba Z.</name>
</author>
<author>
<name>Lozano, Paulo C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162871</id>
<updated>2025-10-03T06:56:40Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Development and Characterization of Electrochemically Machined Tungsten Extractor Electrodes for Electrospray Thrusters
Gale, Alex E.; Shaik, Saba Z.; Lozano, Paulo C.
This work explores electrochemically machined (ECM) tungsten extractors as an alternative to microfabricated silicon, in order to benefit from manufacturability, improved ion optics through chamfered apertures, reduced secondary electron emission, and the potential for thinner geometries. A custom ECM fabrication process employing a linearly oscillating cathodic paddle in sodium hydroxide was designed to manufacture extractors and increase aperture uniformity. Using through-mask ECM, a 76.2 µm thick tungsten extractor was fabricated, achieving a mean aperture diameter of 368 µm with a standard deviation of 29 µm. The extractor was integrated with a modified version of the MIT ion electrospray propulsion system (iEPS) to form a complete thruster. Characterization included current-voltage sweeps, angular beam scans, and retarding potential analysis. Measured efficiencies are comparable to previous iEPS thrusters, with intercepted currents ranging approximately between 1–2% of emitted current. These results demonstrate that ECM tungsten extractors can deliver at least similar performance to existing designs while offering improved manufacturability and scalability for future electrospray propulsion systems.
39th International Electric Propulsion Conference, Imperial College London, London, United Kingdom 14-19 September 2025
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cytosolic Delivery of Functional Ubiquitin</title>
<link href="https://hdl.handle.net/1721.1/162870" rel="alternate"/>
<author>
<name>Giancola, JoLynn B</name>
</author>
<author>
<name>Okon, Aniekan</name>
</author>
<author>
<name>Li, Yanfeng</name>
</author>
<author>
<name>Strieter, Eric R</name>
</author>
<author>
<name>Raines, Ronald T</name>
</author>
<id>https://hdl.handle.net/1721.1/162870</id>
<updated>2025-10-03T06:56:38Z</updated>
<published>2025-05-08T00:00:00Z</published>
<summary type="text">Cytosolic Delivery of Functional Ubiquitin
Giancola, JoLynn B; Okon, Aniekan; Li, Yanfeng; Strieter, Eric R; Raines, Ronald T
The proteostasis network involves complex protein signaling cascades. The tagging of proteins with ubiquitin is central to the degradation of cellular proteins, but understanding its exact role in processing proteins is complicated by the complexity and extent of its utilization within cells. Here, we describe the application of a traceless protein delivery strategy to effect the uptake of exogenous ubiquitin into the cytosol of human cells. We find that coadministration of the endosomolytic peptides L17E and, especially, L17ER4 provides not only cytosolic access to ubiquitin but also its functional incorporation into endogenous proteins. By enabling the study of semisynthetic ubiquitin variants in the human cytosol, this strategy could advance the field of ubiquitin biology.
</summary>
<dc:date>2025-05-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrete Simulations of Fluid‐Driven Transport of Naturally Shaped Sediment Particles</title>
<link href="https://hdl.handle.net/1721.1/162869" rel="alternate"/>
<author>
<name>Zhang, Qiong</name>
</author>
<author>
<name>Deal, Eric</name>
</author>
<author>
<name>Perron, J Taylor</name>
</author>
<author>
<name>Venditti, Jeremy G</name>
</author>
<author>
<name>Benavides, Santiago J</name>
</author>
<author>
<name>Rushlow, Matthew</name>
</author>
<author>
<name>Kamrin, Ken</name>
</author>
<id>https://hdl.handle.net/1721.1/162869</id>
<updated>2025-10-03T06:56:36Z</updated>
<published>2025-04-29T00:00:00Z</published>
<summary type="text">Discrete Simulations of Fluid‐Driven Transport of Naturally Shaped Sediment Particles
Zhang, Qiong; Deal, Eric; Perron, J Taylor; Venditti, Jeremy G; Benavides, Santiago J; Rushlow, Matthew; Kamrin, Ken
The particles in natural bedload transport processes are usually aspherical and span a range of shapes and sizes, which is challenging to represent in numerical simulations. We assemble existing numerical methods to simulate the transport of natural gravel (NG). Starting with computerized tomographic scans of natural grains, our method approximates the shapes of these grains by “gluing” spheres (SP) of different sizes together with overlaps. The conglomerated SP move using a Discrete Element Method which is coupled with a Lattice Boltzmann Method fluid solver, forming the first complete workflow from particle shape measurement to high‐resolution simulations with hundreds of distinct shapes. The simulations are quantitatively benchmarked by flume experiments. Beyond the flume, in a more generalized wide wall‐free geometry, the numerical tool is used to further test a recently proposed modified sediment transport relation, which takes particle shape effects into account, including the competition between hydrodynamic drag and material friction. Unlike a physical experiment, our simulations allow us to vary the hydrodynamic drag coefficient of the NG independently of the material friction. The results support the modified sediment transport relation. The simulations also provide insights into particle‐level kinematics, such as particle orientations. Though particles below the bed surface prefer to orient with their shortest axes perpendicular to the bed surface, with a decaying tendency with an increasing height above the bed surface, the orientational preferences in transport processes are much weaker than those in settling processes. NG rotates relatively freely during bedload transport.
</summary>
<dc:date>2025-04-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Finite Element Modeling of Abdominal Near‐Infrared Spectroscopy for Infant Splanchnic Oximetry</title>
<link href="https://hdl.handle.net/1721.1/162868" rel="alternate"/>
<author>
<name>Emani, Vishnu S</name>
</author>
<author>
<name>Ozturk, Caglar</name>
</author>
<author>
<name>Singh, Manisha</name>
</author>
<author>
<name>Long, Carly</name>
</author>
<author>
<name>Duffy, Summer</name>
</author>
<author>
<name>Sen, Danielle Gottlieb</name>
</author>
<author>
<name>Roche, Ellen T</name>
</author>
<author>
<name>Baker, Wesley B</name>
</author>
<id>https://hdl.handle.net/1721.1/162868</id>
<updated>2025-10-03T06:56:37Z</updated>
<published>2025-04-15T00:00:00Z</published>
<summary type="text">Finite Element Modeling of Abdominal Near‐Infrared Spectroscopy for Infant Splanchnic Oximetry
Emani, Vishnu S; Ozturk, Caglar; Singh, Manisha; Long, Carly; Duffy, Summer; Sen, Danielle Gottlieb; Roche, Ellen T; Baker, Wesley B
Abdominal near-infrared spectroscopy (NIRS) holds promise for early detection of necrotizing enterocolitis and other infant pathologies prior to irreversible injury, but the optimal NIRS sensor design is not well defined. In this study, we develop and demonstrate a computational method to evaluate NIRS sensor designs for infant splanchnic oximetry. We used a finite element (FE) approach to simulate near-infrared light transport through a 3D model of the infant abdomen constructed from computed tomography (CT) images. The simulations enable the measurement of the contrast-to-noise ratio (CNR) for splanchnic oximetry, given a specific NIRS sensor design. A key design criterion is the sensor's source–detector distance (SDD). We calculated the CNR as a function of SDD for two sensor positions near the umbilicus. Contrast-to-noise was maximal at SDDs between 4 and 5 cm, and comparable between sensor positions. Sensitivity to intestinal tissue also exceeded sensitivity to superficial adipose tissue in the 4–5 cm range. FE modeling of abdominal NIRS signals provides a means for rapid and thorough evaluation of sensor designs for infant splanchnic oximetry. By informing optimal NIRS sensor design, the computational methods presented here can improve the reliability and applicability of infant splanchnic oximetry.
</summary>
<dc:date>2025-04-15T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Bayesian Proof of the Spread Lemma</title>
<link href="https://hdl.handle.net/1721.1/162867" rel="alternate"/>
<author>
<name>Mossel, Elchanan</name>
</author>
<author>
<name>Niles‐Weed, Jonathan</name>
</author>
<author>
<name>Sun, Nike</name>
</author>
<author>
<name>Zadik, Ilias</name>
</author>
<id>https://hdl.handle.net/1721.1/162867</id>
<updated>2025-10-03T06:56:34Z</updated>
<published>2025-06-06T00:00:00Z</published>
<summary type="text">A Bayesian Proof of the Spread Lemma
Mossel, Elchanan; Niles‐Weed, Jonathan; Sun, Nike; Zadik, Ilias
A key set-theoretic “spread” lemma has been central to two recent celebrated results in combinatorics: the recent improvements on the sunflower conjecture by Alweiss, Lovett, Wu, and Zhang; and the proof of the fractional Kahn–Kalai conjecture by Frankston, Kahn, Narayanan, and Park. In this work, we present a new proof of the spread lemma, that—perhaps surprisingly—takes advantage of an explicit recasting of the proof in the language of Bayesian inference. We show that from this viewpoint the reasoning proceeds in a straightforward and principled probabilistic manner, leading to a truncated second moment calculation which concludes the proof.
</summary>
<dc:date>2025-06-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large-scale comparison of Fe and Ru polyolefin C–H activation catalysts</title>
<link href="https://hdl.handle.net/1721.1/162866" rel="alternate"/>
<author>
<name>Adamji, Husain</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Nandy, Aditya</name>
</author>
<author>
<name>Román-Leshkov, Yuriy</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162866</id>
<updated>2026-03-08T03:25:26Z</updated>
<published>2024-03-01T00:00:00Z</published>
<summary type="text">Large-scale comparison of Fe and Ru polyolefin C–H activation catalysts
Adamji, Husain; Kevlishvili, Ilia; Nandy, Aditya; Román-Leshkov, Yuriy; Kulik, Heather J
We performed a large-scale density functional theory comparison of polyolefin C–H hydroxylation trends across over 200 Fe and Ru catalysts that are identical except for their metal centers for the radical-rebound conversion of propane to propanol. We observed a strong spin-state dependence: higher-spin states had more favorable metal-oxo formation and isopropanol release in Ru catalysts, while hydrogen atom transfer (HAT) was more favorable in Fe catalysts. While the widely studied metal-oxo formation vs. HAT linear free-energy relationship held for Ru, it was more easily disrupted for Fe. Ru catalysts have a spin-forbidden C–H hydroxylation pathway, while Fe catalysts favor a spin-allowed, intermediate-spin pathway. Calculation of reaction coordinates on representative catalysts corroborated these spin–reactivity trends and showed comparable energetic spans for Fe and Ru analogues, as well as strong Brønsted–Evans–Polanyi relationships for both the metal-oxo formation and HAT steps, motivating expanded study of Fe catalysts.
</summary>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Do Differences in Electronic Structure Affect the Use of Vanadium Intermediates as Mimics in Nonheme Iron Hydroxylases?</title>
<link href="https://hdl.handle.net/1721.1/162865" rel="alternate"/>
<author>
<name>Vennelakanti, Vyshnavi</name>
</author>
<author>
<name>Jeon, Mugyeom</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162865</id>
<updated>2026-03-08T03:24:57Z</updated>
<published>2024-03-01T00:00:00Z</published>
<summary type="text">How Do Differences in Electronic Structure Affect the Use of Vanadium Intermediates as Mimics in Nonheme Iron Hydroxylases?
Vennelakanti, Vyshnavi; Jeon, Mugyeom; Kulik, Heather J
We study active-site models of nonheme iron hydroxylases and their vanadium-based mimics using density functional theory to determine if vanadyl is a faithful structural mimic. We identify crucial structural and energetic differences between ferryl and vanadyl isomers owing to the differences in their ground electronic states, i.e., high spin (HS) for Fe and low spin (LS) for V. For the succinate cofactor bound to the ferryl intermediate, we predict facile interconversion between monodentate and bidentate coordination isomers for ferryl species but difficult rearrangement for vanadyl mimics. We study isomerization of the oxo intermediate between axial and equatorial positions and find the ferryl potential energy surface to be characterized by a large barrier of ca. 10 kcal/mol that is completely absent for the vanadyl mimic. This analysis reveals even starker contrasts between Fe and V in hydroxylases than those observed for this metal substitution in nonheme halogenases. Analysis of the relative bond strengths of coordinating carboxylate ligands for Fe and V reveals that all of the ligands show stronger binding to V than Fe owing to the LS ground state of V in contrast to the HS ground state of Fe, highlighting the limitations of vanadyl mimics of native nonheme iron hydroxylases.
</summary>
<dc:date>2024-03-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Morningside Academy for Design (MAD)</title>
<link href="https://hdl.handle.net/1721.1/162864" rel="alternate"/>
<author>
<name>Ochsendorf, John</name>
</author>
<author>
<name>Yang, Maria C</name>
</author>
<author>
<name>Cunningham, Marion</name>
</author>
<id>https://hdl.handle.net/1721.1/162864</id>
<updated>2025-10-02T03:18:05Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Morningside Academy for Design (MAD)
Ochsendorf, John; Yang, Maria C; Cunningham, Marion
This report contains the following sections: Highlights; Explore, Learn, Create, Activate; Communications; Organizational Development; and Personnel.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Anthropology</title>
<link href="https://hdl.handle.net/1721.1/162863" rel="alternate"/>
<author>
<name>Walley, Christine</name>
</author>
<id>https://hdl.handle.net/1721.1/162863</id>
<updated>2025-10-02T03:17:37Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Anthropology
Walley, Christine
This report contains the following sections: Personnel and Administrative Changes, Highlights of the Year, Teaching and Curriculum, Publications, Presentations, and Contributions to MIT and Outside Communities.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lanmodulin‐Decorated Microbes for Efficient Lanthanide Recovery</title>
<link href="https://hdl.handle.net/1721.1/162862" rel="alternate"/>
<author>
<name>Gut, Melanie</name>
</author>
<author>
<name>Wilhelm, Tatum</name>
</author>
<author>
<name>Beniston, Olivia</name>
</author>
<author>
<name>Ogundipe, Safiyyah</name>
</author>
<author>
<name>Kuo, Chao‐Chi</name>
</author>
<author>
<name>Nguyen, Kristine</name>
</author>
<author>
<name>Furst, Ariel</name>
</author>
<id>https://hdl.handle.net/1721.1/162862</id>
<updated>2026-03-08T03:25:52Z</updated>
<published>2025-01-16T00:00:00Z</published>
<summary type="text">Lanmodulin‐Decorated Microbes for Efficient Lanthanide Recovery
Gut, Melanie; Wilhelm, Tatum; Beniston, Olivia; Ogundipe, Safiyyah; Kuo, Chao‐Chi; Nguyen, Kristine; Furst, Ariel
Rare earth elements (REEs) are essential for many clean energy technologies. Yet, they are a limited resource currently obtained through carbon-intensive mining. Here, bio-scaffolded proteins serve as simple, effective materials for the recovery of REEs. Surface expression of the protein lanmodulin (LanM) on E. coli, followed by freeze-drying of the microbes, yields a displayed protein material for REE recovery. Four REE cations (Y3+, La3+, Gd3+, and Tb3+) are captured efficiently, with over 80% recovery even in the presence of competitive ions at one-hundred-fold excess. Moreover, these materials are readily integrated into a filter with high capture capacity (12 mg g−1 dry cell weight) for the selective isolation and recovery of REEs from complex matrices. Further, the proteins in the filter remain stable over ten bind-and-release cycles and a week of storage. To improve the deployability of this filter material, a simple colorimetric assay with the dye alizarin-3-methyliminodiacetic acid is incorporated. The assay can be performed in under 5 min, enabling rapid monitoring of REE recovery and filter efficiency. Overall, this low-cost, robust material will enable environmentally friendly recycling and recovery of critical elements.
</summary>
<dc:date>2025-01-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exchange Bias in La0.67Sr0.33MnO3/YFeO3 Ferromagnet/Antiferromagnet Multilayer Heterostructures</title>
<link href="https://hdl.handle.net/1721.1/162861" rel="alternate"/>
<author>
<name>Fourmont, Paul</name>
</author>
<author>
<name>Cho, Eunsoo</name>
</author>
<author>
<name>Cloutier, Sylvain G</name>
</author>
<author>
<name>Ross, Caroline A</name>
</author>
<id>https://hdl.handle.net/1721.1/162861</id>
<updated>2026-03-08T03:25:51Z</updated>
<published>2025-04-13T00:00:00Z</published>
<summary type="text">Exchange Bias in La0.67Sr0.33MnO3/YFeO3 Ferromagnet/Antiferromagnet Multilayer Heterostructures
Fourmont, Paul; Cho, Eunsoo; Cloutier, Sylvain G; Ross, Caroline A
Exchange bias (EB), manifested as a hysteresis-loop offset after field-cooling, is demonstrated in perovskite-structured ferromagnet/antiferromagnet (La0.67Sr0.33MnO3/YFeO3)n heterostructures grown on (100) SrTiO3 substrates. Bilayer samples show an EB of 306 Oe at 50 K, whereas multilayers with five layers exhibit an exchange bias of up to 424 Oe at 50 K. A spin valve consisting of La0.67Sr0.33MnO3/SrTiO3/La0.67Sr0.33MnO3/YFeO3 shows stable remanent configurations resulting from pinning of the upper La0.67Sr0.33MnO3 layer by the YFeO3. In contrast, EB is not observed on (111)-oriented SrTiO3 substrates due to interface roughening. These results demonstrate YFeO3 as an alternative orthoferrite antiferromagnet to BiFeO3 and LaFeO3 for incorporation into exchange-biased heterostructures.
</summary>
<dc:date>2025-04-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhanced Electrochemical Response and Device Speed in Diketopyrrolopyrrole/PEO Composite Channels</title>
<link href="https://hdl.handle.net/1721.1/162860" rel="alternate"/>
<author>
<name>Cunin, Camille E</name>
</author>
<author>
<name>Winther, Sara</name>
</author>
<author>
<name>Matthews, James R</name>
</author>
<author>
<name>He, Mingqian</name>
</author>
<author>
<name>Gumyusenge, Aristide</name>
</author>
<id>https://hdl.handle.net/1721.1/162860</id>
<updated>2026-03-08T03:25:50Z</updated>
<published>2025-04-03T00:00:00Z</published>
<summary type="text">Enhanced Electrochemical Response and Device Speed in Diketopyrrolopyrrole/PEO Composite Channels
Cunin, Camille E; Winther, Sara; Matthews, James R; He, Mingqian; Gumyusenge, Aristide
Achieving efficient charge conduction in organic electrochemical transistor (OECT) channel materials requires a delicate balance between electronic conduction and ion uptake. Common approaches to this challenge focus on tethering hydrophilic side chains to conjugated backbones, often resulting in complex synthetic routes. Herein, an alternative strategy is presented using composite mixed-conductive materials. Specifically, polyethylene oxide (PEO), a hydrophilic polymer, and a diketopyrrolopyrrole-based semiconductor, renowned for electronic conduction and processability, are used in varying ratios to form composite films with tunable mixed conduction and enhanced OECT performance. The effect of incorporating PEO on the composite's morphology and OECT performance in both aqueous and non-aqueous electrolytes is investigated. At the nanoscale, PEO is found to not only enhance channel hydrophilicity and ion uptake but also electrochemical gating speed, leading to improved OECT performance. These enhancements in electrochemical performance are correlated with the morphological properties of the composite via structural and in-situ spectro-electrochemical characterizations. Furthermore, the composite's response is found to vary with the electrolyte environment: in organic electrolytes such as 1-ethyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide (EMIM-TFSI), it exhibits high-speed performance suitable for neuromorphic applications, while in aqueous electrolytes, it achieves robust ion uptake ideal for bioelectronics. These findings highlight the potential of composite designs for optimized OECT functionality across applications.
</summary>
<dc:date>2025-04-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis, characterization, and interfacial adhesion of titania iodine‐doped nanotubes architectures on additively manufactured Ti‐6Al‐4V implant</title>
<link href="https://hdl.handle.net/1721.1/162859" rel="alternate"/>
<author>
<name>Taweekitikul, P.</name>
</author>
<author>
<name>Aliyu, A. A.</name>
</author>
<author>
<name>Decha‐Umphai, D.</name>
</author>
<author>
<name>Tantavisut, S.</name>
</author>
<author>
<name>Khamwannah, J.</name>
</author>
<author>
<name>Puncreobutr, C.</name>
</author>
<author>
<name>Lohwongwatana, B.</name>
</author>
<id>https://hdl.handle.net/1721.1/162859</id>
<updated>2026-03-08T03:25:02Z</updated>
<published>2025-03-18T00:00:00Z</published>
<summary type="text">Synthesis, characterization, and interfacial adhesion of titania iodine‐doped nanotubes architectures on additively manufactured Ti‐6Al‐4V implant
Taweekitikul, P.; Aliyu, A. A.; Decha‐Umphai, D.; Tantavisut, S.; Khamwannah, J.; Puncreobutr, C.; Lohwongwatana, B.
This study aimed to synthesize, characterize, and evaluate the adhesion strength of titania nanotube and iodine-doped titania nanotube (I-titania nanotube) architectures on the additively manufactured Ti-6Al-4V (Ti64) implant surface. The titania nanotubes and I-titania nanotubes were synthesized through two stages of electrochemical anodization, whereby titania nanotubes are anodically fabricated through a conventional approach and then modified by replacing the ethylene glycol electrolyte with potassium iodide solution. The characterization results revealed the formation of α-Ti, β-Ti, and titanium iodide (TiI2) phases on the titania nanotube and I-titania nanotube surfaces. The morphology of the titania nanotubes exhibits a consistent diameter, even distribution, well-ordered arrays, and densely packed nanotubular structures. Formation of water-soluble fluoride-rich [TiF6]2− complexes on the inner titania nanotube surface and continuous nanotube sidewall etching resulted in poor interfacial adhesion of the titania nanotubes to the titanium-substrate surface. Iodine doping of the titania nanotubes is believed to reduce [TiF6]2− complex accumulation and nanotube sidewall etching. This facilitates the adhesion and interfacial mechanical anchorage between the titania nanotubes and the surface of the Ti64 implant. The hardness and adhesion strength of the titania nanotubes increased by more than 50%, due to the formation of a hard titanium iodide film at the titania nanotube/I-titania nanotube surfaces and interfaces.
</summary>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perfusion‐Based Production of rAAV via an Intensified Transient Transfection Process</title>
<link href="https://hdl.handle.net/1721.1/162858" rel="alternate"/>
<author>
<name>Nguyen, Tam NT</name>
</author>
<author>
<name>Park, Damdae</name>
</author>
<author>
<name>Canova, Christopher T</name>
</author>
<author>
<name>Sangerman, Jose</name>
</author>
<author>
<name>Srinivasan, Prasanna</name>
</author>
<author>
<name>Ou, Rui Wen</name>
</author>
<author>
<name>Barone, Paul W</name>
</author>
<author>
<name>Neufeld, Caleb</name>
</author>
<author>
<name>Wolfrum, Jacqueline M</name>
</author>
<author>
<name>Springs, Stacy L</name>
</author>
<author>
<name>Sinskey, Anthony J</name>
</author>
<author>
<name>Braatz, Richard D</name>
</author>
<id>https://hdl.handle.net/1721.1/162858</id>
<updated>2026-03-08T03:24:56Z</updated>
<published>2025-03-18T00:00:00Z</published>
<summary type="text">Perfusion‐Based Production of rAAV via an Intensified Transient Transfection Process
Nguyen, Tam NT; Park, Damdae; Canova, Christopher T; Sangerman, Jose; Srinivasan, Prasanna; Ou, Rui Wen; Barone, Paul W; Neufeld, Caleb; Wolfrum, Jacqueline M; Springs, Stacy L; Sinskey, Anthony J; Braatz, Richard D
Increasing demand for recombinant adeno-associated virus (rAAV)-based gene therapies necessitates increased manufacturing production. Transient transfection of mammalian cells remains the most commonly used method to produce clinical-grade rAAVs due to its ease of implementation. However, transient transfection processes are often characterized by suboptimal yields and low fractions of full-to-total capsids, both of which contribute to the high cost of goods of many rAAV-based gene therapies. Our previously developed mechanistic model for rAAV2/5 production indicated that the inadequate capsid filling is due to a temporal misalignment between viral DNA replication and capsid synthesis within the cells and the repression of later-phase capsid formation by Rep proteins. We experimentally validated this prediction and showed that performing multiple, time-separated doses of plasmid increases the production of rAAV. In this study, we use the insights generated by our mechanistic model to develop an intensified process for rAAV production that combines perfusion with high-cell-density re-transfection. We demonstrate that performing multiple, time-separated doses at high cell density boosts both cell-specific and volumetric productivity and improves plasmid utilization when compared to a single bolus at standard operating conditions. Our results establish a new paradigm for continuously manufacturing rAAV via transient transfection that improves productivity and reduces manufacturing costs.
</summary>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>A 2D/3D Heterostructure Perovskite Solar Cell with a Phase‐Pure and Pristine 2D Layer</title>
<link href="https://hdl.handle.net/1721.1/162857" rel="alternate"/>
<author>
<name>Shih, Meng‐Chen</name>
</author>
<author>
<name>Tan, Shaun</name>
</author>
<author>
<name>Lu, Yongli</name>
</author>
<author>
<name>Kodalle, Tim</name>
</author>
<author>
<name>Lee, Do‐Kyoung</name>
</author>
<author>
<name>Dong, Yifan</name>
</author>
<author>
<name>Larson, Bryon W</name>
</author>
<author>
<name>Park, Soyeon</name>
</author>
<author>
<name>Zhang, Ruiqi</name>
</author>
<author>
<name>Grotevent, Matthias J</name>
</author>
<author>
<name>Sverko, Tara</name>
</author>
<author>
<name>Zhu, Hua</name>
</author>
<author>
<name>Lin, Yu‐Kuan</name>
</author>
<author>
<name>Sutter‐Fella, Carolin M</name>
</author>
<author>
<name>Zhu, Kai</name>
</author>
<author>
<name>Beard, Matthew C</name>
</author>
<author>
<name>Bulović, Vladimir</name>
</author>
<author>
<name>Bawendi, Moungi G</name>
</author>
<id>https://hdl.handle.net/1721.1/162857</id>
<updated>2026-03-08T03:25:48Z</updated>
<published>2025-03-18T00:00:00Z</published>
<summary type="text">A 2D/3D Heterostructure Perovskite Solar Cell with a Phase‐Pure and Pristine 2D Layer
Shih, Meng‐Chen; Tan, Shaun; Lu, Yongli; Kodalle, Tim; Lee, Do‐Kyoung; Dong, Yifan; Larson, Bryon W; Park, Soyeon; Zhang, Ruiqi; Grotevent, Matthias J; Sverko, Tara; Zhu, Hua; Lin, Yu‐Kuan; Sutter‐Fella, Carolin M; Zhu, Kai; Beard, Matthew C; Bulović, Vladimir; Bawendi, Moungi G
Interface engineering plays a critical role in advancing the performance of perovskite solar cells. As such, 2D/3D perovskite heterostructures are of particular interest due to their optoelectrical properties and their further potential improvements. However, for conventional solution-processed 2D perovskites grown on an underlying 3D perovskite, the reaction stoichiometry is normally unbalanced with excess precursors. Moreover, the formed 2D perovskite is impure, leading to unfavorable energy band alignment at the interface. Here a simple method is presented that solves both issues simultaneously. The 2D formation reaction is first taken to completion, fully consuming excess PbI2. Then, isopropanol is utilized to remove excess organic ligands, control the 2D perovskite thickness, and obtain a phase-pure, n = 2, 2D perovskite. The outcome is a pristine (without residual 2D precursors) and phase-pure 2D perovskite heterostructure with improved surface passivation and charge carrier extraction compared to the conventional solution process. PSCs incorporating this treatment demonstrate a notable improvement in both stability and power conversion efficiency, with negligible hysteresis, compared to the conventional process.
</summary>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Living in the Paraindustrial</title>
<link href="https://hdl.handle.net/1721.1/162856" rel="alternate"/>
<author>
<name>Walley, Christine J</name>
</author>
<id>https://hdl.handle.net/1721.1/162856</id>
<updated>2026-03-08T03:24:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Living in the Paraindustrial
Walley, Christine J
This article is an autoethnographic exploration of life in the former steel mill region of Southeast Chicago in the 'Rust Belt' of the Midwestern United States. It challenges assumptions about deindustrialization that depict one discrete historical stage following another (i.e., the postindustrial following the industrial) in favor of what is here defined as the 'paraindustrial' (or a setting in which active industry with minimal numbers of workers exists alongside defunct industry and toxic brownfields). This account centers upon the experiences of women who have too often been neglected in research on deindustrialized regions. In particular, it focuses on the author's elderly mother Arlene, who has spent her entire life in Southeast Chicago. From her wheelchair on a backyard porch, Arlene observes this damaged landscape built out of the former Calumet wetlands. The article considers the relationships of care, centered around women, that continue to bind together and support the living despite decades of economic and environmental rupture and degradation. Utilizing the concept of a 'palimpsest,' the piece considers how different historical, ecological, and social realities and temporalities are both layered on top of each other and intermingle to create the complex landscape found in this former wetland region.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulated radiation levels and patterns of MRI without a Faraday shielded room</title>
<link href="https://hdl.handle.net/1721.1/162855" rel="alternate"/>
<author>
<name>Kazemivalipour, Ehsan</name>
</author>
<author>
<name>Guerin, Bastien</name>
</author>
<author>
<name>Wald, Lawrence L.</name>
</author>
<id>https://hdl.handle.net/1721.1/162855</id>
<updated>2026-03-08T03:25:22Z</updated>
<published>2025-03-17T00:00:00Z</published>
<summary type="text">Simulated radiation levels and patterns of MRI without a Faraday shielded room
Kazemivalipour, Ehsan; Guerin, Bastien; Wald, Lawrence L.
Purpose: We characterize electromagnetic (EM) radiation patterns and levels in conventional MRI systems as a function of field strength and load symmetry, providing a framework for mitigation strategies allowing operation without a shielded room.&#13;
Methods: We simulated the far-field radiation pattern and fields at a 10 m radius (|E|10m and |B|10m) for a solenoidal superconducting MRI with a body birdcage coil operated between 0.25T and 6.5T. Five load configurations probed the impact of load symmetry, ranging from a sphere to a body load (least symmetric). We also assessed simple layered EM absorbers at the bore-ends.&#13;
Results: All configurations exceeded regulatory limits for realistic transmit levels. At 1.5T, a 300 Vrms RF pulse is 2700-fold the |E|10m limit. Field strength and load symmetry strongly modulate radiation patterns and levels. The radiated power increased by more than four orders of magnitude from 0.25T to 6.5T. Spherical load radiation transitioned from a peak gain at the bore-ends (0.25–0.5T) to a donut-shaped pattern, suggesting current loops around the bore (1T–1.5T), back to bore-axis-directed gain, suggesting propagating waves along the bore (2T–6.5T). Transition patterns were seen between these regimes: uniform radiation at 0.75T and a combined donut/bore-directed pattern at 1.75T. Load asymmetry increased both strength and pattern asymmetry, with the body load having the highest and least symmetric radiation, the legs facilitating wave propagation at high fields. A simple optimized layered absorber at the scanner's service-end reduced 3T peak radiation by 11 dB.&#13;
Conclusion: Radiation from unshielded scanners far exceeds regulatory limits, particularly at high field. Mitigation strategies must address load symmetry, field strength, and wave effects.
</summary>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of tight-fitting 7T parallel-transmit head array designs using excitation uniformity and local specific absorption rate metrics</title>
<link href="https://hdl.handle.net/1721.1/162854" rel="alternate"/>
<author>
<name>Kazemivalipour, Ehsan</name>
</author>
<author>
<name>Wald, Lawrence L.</name>
</author>
<author>
<name>Guerin, Bastien</name>
</author>
<id>https://hdl.handle.net/1721.1/162854</id>
<updated>2026-03-08T03:25:18Z</updated>
<published>2023-11-06T00:00:00Z</published>
<summary type="text">Comparison of tight-fitting 7T parallel-transmit head array designs using excitation uniformity and local specific absorption rate metrics
Kazemivalipour, Ehsan; Wald, Lawrence L.; Guerin, Bastien
Purpose: We model the performance of parallel transmission (pTx) arrays with 8, 16, 24, and 32 channels and varying loop sizes built on a close-fitting helmet for brain imaging at 7 T and compare their local specific absorption rate (SAR) and flip-angle performances to those of a birdcage coil (used as a baseline) and cylindrical 8-channel and 16-channel pTx coils (single-row and dual-row).&#13;
Methods: We use the co-simulation approach along with MATLAB scripting for batch-mode simulation of the coils. For each coil, we extracted B1+ maps and SAR matrices, which we compressed using the virtual observation points algorithm, and designed slice-selective RF shimming pTx pulses with multiple local SAR and peak power constraints to generate L-curves in the transverse, coronal, and sagittal orientations.&#13;
Results: Helmet designs outperformed cylindrical pTx arrays at a constant number of channels in flip-angle uniformity at a constant local SAR metric: up to 29% for 8-channel arrays and up to 34% for 16-channel arrays, depending on the slice orientation. For all helmet arrays, increasing the loop diameter led to better local SAR versus flip-angle uniformity tradeoffs, although this effect was more pronounced for the 8-channel and 16-channel systems than the 24-channel and 32-channel systems, as the former have more limited degrees of freedom and therefore benefit more from loop-size optimization.&#13;
Conclusion: Helmet pTx arrays significantly outperformed cylindrical arrays with the same number of channels in local SAR and flip-angle uniformity metrics. This improvement was especially pronounced for non-transverse slice excitations. Loop diameter optimization for helmets appears to favor large loops, compatible with nearest-neighbor decoupling by overlap.
</summary>
<dc:date>2023-11-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhanced Electrochemical Properties of Biobased Activated Carbon for Supercapacitors</title>
<link href="https://hdl.handle.net/1721.1/162853" rel="alternate"/>
<author>
<name>Zhou, Shengfei</name>
</author>
<author>
<name>Tai‐Chieh Wan, Charles</name>
</author>
<author>
<name>Chanut, Nicolas</name>
</author>
<author>
<name>Brushett, Fikile R</name>
</author>
<author>
<name>Buehler, Markus J</name>
</author>
<id>https://hdl.handle.net/1721.1/162853</id>
<updated>2026-03-08T03:25:32Z</updated>
<published>2025-04-04T00:00:00Z</published>
<summary type="text">Enhanced Electrochemical Properties of Biobased Activated Carbon for Supercapacitors
Zhou, Shengfei; Tai‐Chieh Wan, Charles; Chanut, Nicolas; Brushett, Fikile R; Buehler, Markus J
Supercapacitors are great candidates for energy boosting, power, and memory backup. However, they suffer from low energy density, relatively high cost, and carbon footprint problems due to their electrode materials, such as commonly used activated carbons (ACs). To prepare better renewable ACs, 11 biomass materials are pretreated with hydrothermal processing and then activated at high temperature with potassium hydroxide (KOH) in the present study. The prepared ACs are characterized by scanning electron microscopy imaging, atomic concentration, specific surface area, electrical conductivity, cyclic voltammetry, and specific capacitance to determine their potential for supercapacitor application. The electrical conductivity reaches 0.47–1.23 S cm−1, and specific capacitance reaches 250–360 F g−1 (at a current density of 20 A g−1), which are much higher than previously reported literature values (conductivity &lt;0.3 S cm−1, capacitance 40–160 F g−1) for biobased ACs, indicating great potential for supercapacitor application of our biobased ACs.
</summary>
<dc:date>2025-04-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonlinear Ion Dynamics Enable Spike Timing Dependent Plasticity of Electrochemical Ionic Synapses</title>
<link href="https://hdl.handle.net/1721.1/162852" rel="alternate"/>
<author>
<name>Huang, Mantao</name>
</author>
<author>
<name>Xu, Longlong</name>
</author>
<author>
<name>del Alamo, Jesús A</name>
</author>
<author>
<name>Li, Ju</name>
</author>
<author>
<name>Yildiz, Bilge</name>
</author>
<id>https://hdl.handle.net/1721.1/162852</id>
<updated>2026-03-08T03:25:20Z</updated>
<published>2025-01-29T00:00:00Z</published>
<summary type="text">Nonlinear Ion Dynamics Enable Spike Timing Dependent Plasticity of Electrochemical Ionic Synapses
Huang, Mantao; Xu, Longlong; del Alamo, Jesús A; Li, Ju; Yildiz, Bilge
Programmable synaptic devices that can achieve timing-dependent weight updates are key components for implementing energy-efficient spiking neural networks (SNNs). Electrochemical ionic synapses (EIS) enable the programming of weight updates with very low energy consumption and low variability. Here, the strongly nonlinear kinetics of EIS, arising from nonlinear dynamics of ions and charge transfer reactions in solids, are leveraged to implement various forms of spike-timing-dependent plasticity (STDP). In particular, protons are used as the working ion. Different forms of the STDP function are deterministically predicted and emulated by a linear superposition of appropriately designed pre- and post-synaptic neuron signals. Heterogeneous STDP is also demonstrated within the array to capture different learning rules in the same system. STDP timescales are controllable, ranging from milliseconds to nanoseconds. The STDP resulting from EIS has lower variability than other hardware STDP implementations, due to the deterministic and uniform insertion of charge into the tunable channel material. The results indicate that the ion and charge transfer dynamics in EIS can enable bio-plausible synapses for SNN hardware with high energy efficiency, reliability, and throughput.
</summary>
<dc:date>2025-01-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unraveling Polymer–Ion Interactions in Electrochromic Polymers for their Implementation in Organic Electrochemical Synaptic Devices</title>
<link href="https://hdl.handle.net/1721.1/162851" rel="alternate"/>
<author>
<name>Roh, Heejung</name>
</author>
<author>
<name>Yue, Shuwen</name>
</author>
<author>
<name>Hu, Hang</name>
</author>
<author>
<name>Chen, Ke</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Gumyusenge, Aristide</name>
</author>
<id>https://hdl.handle.net/1721.1/162851</id>
<updated>2026-03-08T03:25:30Z</updated>
<published>2023-11-02T00:00:00Z</published>
<summary type="text">Unraveling Polymer–Ion Interactions in Electrochromic Polymers for their Implementation in Organic Electrochemical Synaptic Devices
Roh, Heejung; Yue, Shuwen; Hu, Hang; Chen, Ke; Kulik, Heather J; Gumyusenge, Aristide
Owing to low-power, fast and highly adaptive operability, as well as scalability, electrochemical random-access memory (ECRAM) technology is one of the most promising approaches for neuromorphic computing based on artificial neural networks. Despite recent advances, practical implementation of ECRAMs remains challenging due to several limitations including high write noise, asymmetric weight updates, and insufficient dynamic ranges. Here, inspired by similarities in structural and functional requirements between electrochromic devices and ECRAMs, high-performance, single-transistor and neuromorphic devices based on electrochromic polymers (ECPs) are demonstrated. To effectively translate electrochromism into electrochemical ion memory in polymers, this study systematically investigates polymer–ion interactions, redox activity, mixed ionic–electronic conduction, and stability of ECPs both experimentally and computationally using select electrolytes. The best-performing ECP-electrolyte combination is then implemented into an ECRAM device to further explore synaptic plasticity behaviors. The resulting ECRAM exhibits high linearity and symmetric conductance modulation, high dynamic range (≈1 mS or ≈6x), and high training accuracy (&gt;84% within five training cycles on a standard image recognition dataset), comparable to existing state-of-the-art ECRAMs. This study offers a promising approach to discover and design novel polymer materials for organic ECRAMs and demonstrates potential applications, taking advantage of mature knowledge basis on electrochromic materials and devices.
</summary>
<dc:date>2023-11-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reversible O–O Bond Scission and O2 Evolution at MOF-Supported Tetramanganese Clusters</title>
<link href="https://hdl.handle.net/1721.1/162850" rel="alternate"/>
<author>
<name>He, Xin</name>
</author>
<author>
<name>Iliescu, Andrei</name>
</author>
<author>
<name>Yang, Tzuhsiung</name>
</author>
<author>
<name>Arguilla, Maxx Q</name>
</author>
<author>
<name>Chen, Tianyang</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Dincă, Mircea</name>
</author>
<id>https://hdl.handle.net/1721.1/162850</id>
<updated>2026-03-08T03:25:30Z</updated>
<published>2023-07-20T00:00:00Z</published>
<summary type="text">Reversible O–O Bond Scission and O2 Evolution at MOF-Supported Tetramanganese Clusters
He, Xin; Iliescu, Andrei; Yang, Tzuhsiung; Arguilla, Maxx Q; Chen, Tianyang; Kulik, Heather J; Dincă, Mircea
The scission of the O–O bond in O2 during respiration and the formation of the O–O bond during photosynthesis are the engines of aerobic life. Likewise, the reduction of O2 and the oxidation of reduced oxygen species to form O2 are indispensable components for emerging renewable technologies, including energy storage and conversion, yet discrete molecule-like systems that promote these fundamental reactions are rare. Herein, we report a square-planar tetramanganese cluster formed by self-assembly within a metal–organic framework that reversibly reduces O2 by four electrons, facilitating the interconversion between molecular O2 and metal-oxo species. The tetranuclear cluster spontaneously cleaves the O–O bond of O2 at room temperature to generate a tetramanganese-bis(μ2-oxo) species, which, in turn, is competent for O–O bond reformation and O2 evolution at elevated temperatures, enabled by the head-to-head orientation of two oxo species. This study demonstrates the viability of four-electron interconversion between molecular O2 and metal-oxo species and highlights the importance of site isolation for achieving multi-electron chemistry at polynuclear metal clusters.
</summary>
<dc:date>2023-07-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic Investigation of Silicon Substitution on Single Macromolecule Mechanics</title>
<link href="https://hdl.handle.net/1721.1/162849" rel="alternate"/>
<author>
<name>Wentz, Kelsie E</name>
</author>
<author>
<name>Yao, Yunxin</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Kouznetsova, Tatiana B</name>
</author>
<author>
<name>Mediavilla, Braden A</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Craig, Stephen L</name>
</author>
<author>
<name>Klausen, Rebekka S</name>
</author>
<id>https://hdl.handle.net/1721.1/162849</id>
<updated>2026-03-08T03:25:13Z</updated>
<published>2023-08-18T00:00:00Z</published>
<summary type="text">Systematic Investigation of Silicon Substitution on Single Macromolecule Mechanics
Wentz, Kelsie E; Yao, Yunxin; Kevlishvili, Ilia; Kouznetsova, Tatiana B; Mediavilla, Braden A; Kulik, Heather J; Craig, Stephen L; Klausen, Rebekka S
Four unsaturated poly(carbooligosilane)s (P1–P4) were prepared via acyclic diene metathesis polycondensation of new oligosilane diene monomers (1–4). These novel polymers with varying main-chain Si incorporation have high trans internal olefin stereochemistry (ca. 80%) and molecular weights (9500–21,700 g mol−1). Postpolymerization epoxidation converted all alkene moieties to epoxides and rendered the polymers (P5–P8) more electrophilic, which allowed for single-molecule force spectroscopy studies via a modified atomic force microscope setup with a silicon tip and cantilever. The single-chain elasticity of the polycarbooligosilanes decreased with increasing numbers of Si–Si bonds, a finding reproduced by quantum chemical calculations.
</summary>
<dc:date>2023-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Protein3D: Enabling analysis and extraction of metal‐containing sites from the Protein Data Bank with molSimplify</title>
<link href="https://hdl.handle.net/1721.1/162848" rel="alternate"/>
<author>
<name>Edholm, Freya</name>
</author>
<author>
<name>Nandy, Aditya</name>
</author>
<author>
<name>Reinhardt, Clorice R</name>
</author>
<author>
<name>Kastner, David W</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162848</id>
<updated>2026-03-08T03:24:54Z</updated>
<published>2024-03-05T00:00:00Z</published>
<summary type="text">Protein3D: Enabling analysis and extraction of metal‐containing sites from the Protein Data Bank with molSimplify
Edholm, Freya; Nandy, Aditya; Reinhardt, Clorice R; Kastner, David W; Kulik, Heather J
Metalloenzymes catalyze a wide range of chemical transformations, with the active site residues playing a key role in modulating chemical reactivity and selectivity. Unlike smaller synthetic catalysts, a metalloenzyme active site is embedded in a larger protein, which makes interrogation of electronic properties and geometric features with quantum mechanical calculations challenging. Here we implement the ability to fetch crystallographic structures from the Protein Data Bank and analyze the metal binding sites in the program molSimplify. We show the usefulness of the newly created protein3D class to extract the local environment around non‐heme iron enzymes containing a two histidine motif and prepare 372 structures for quantum mechanical calculations. Our implementation of protein3D serves to expand the range of systems molSimplify can be used to analyze and will enable high‐throughput study of metal‐containing active sites in proteins.
</summary>
<dc:date>2024-03-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Angle-strained sila-cycloalkynes</title>
<link href="https://hdl.handle.net/1721.1/162847" rel="alternate"/>
<author>
<name>Wakefield, Herbert</name>
</author>
<author>
<name>Melvin, Sophia J</name>
</author>
<author>
<name>Jiang, Jennifer</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Siegler, Maxime A</name>
</author>
<author>
<name>Craig, Stephen L</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Klausen, Rebekka S</name>
</author>
<id>https://hdl.handle.net/1721.1/162847</id>
<updated>2026-03-08T03:25:24Z</updated>
<published>2024-04-05T00:00:00Z</published>
<summary type="text">Angle-strained sila-cycloalkynes
Wakefield, Herbert; Melvin, Sophia J; Jiang, Jennifer; Kevlishvili, Ilia; Siegler, Maxime A; Craig, Stephen L; Kulik, Heather J; Klausen, Rebekka S
Second row elements in small- and medium-rings modulate strain. Herein we report the synthesis of two novel oligosilyl-containing cycloalkynes that exhibit angle-strain, as observed by X-ray crystallography. However, the angle-strained sila-cyclooctynes are sluggish participants in cycloadditions with benzyl azide. A distortion-interaction model analysis based on density functional theory calculations was performed.
</summary>
<dc:date>2024-04-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Institute for Data, Systems, and Society</title>
<link href="https://hdl.handle.net/1721.1/162846" rel="alternate"/>
<author>
<name>Christia, Fotini</name>
</author>
<author>
<name>Rigollet, Philippe</name>
</author>
<id>https://hdl.handle.net/1721.1/162846</id>
<updated>2025-10-01T03:18:17Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Institute for Data, Systems, and Society
Christia, Fotini; Rigollet, Philippe
This report contains the following sections: Faculty &amp; Leadership, Academic Programs, Research, Events, External Relations, Resource Development and Fundraising, and IDSSx.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Office of Educational Opportunity Programs</title>
<link href="https://hdl.handle.net/1721.1/162845" rel="alternate"/>
<author>
<name>Layne, Evette</name>
</author>
<author>
<name>Johnson, Alicia</name>
</author>
<id>https://hdl.handle.net/1721.1/162845</id>
<updated>2025-10-01T03:18:12Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Office of Educational Opportunity Programs
Layne, Evette; Johnson, Alicia
This report contains the following sections: Office of Educational Opportunity Programs, MIT/Wellesley Upward Bound Program, Enrollment Statistics, Summer Session, Classes and Academic Support, Recreational Workshops, College &amp; Career Advising, School-year Session, Homework Supervision, Academic Advising, Career Advising, Class of 2025, and Postgraduate Involvement.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Department of Biological Engineering.</title>
<link href="https://hdl.handle.net/1721.1/162844" rel="alternate"/>
<author>
<name>Voigt, Christopher A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162844</id>
<updated>2025-10-01T03:18:18Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Department of Biological Engineering.
Voigt, Christopher A.
This report contains the following sections: Graduate Education, Undergraduate Education, Research, Center for Environmental Health and Sciences, Resource Development, Faculty Promotions, New Faculty Hiring, BE Career Expo, BE Departmental Retreat, BE Community Report, and Department Awards.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Open-Source Modular Bioreactor Platform for Cultivation of Synechocystis sp. PCC 6803 and Extraction of Intracellular Glucose</title>
<link href="https://hdl.handle.net/1721.1/162843" rel="alternate"/>
<author>
<name>Baho, Ingie</name>
</author>
<author>
<name>Tseo, Yitong</name>
</author>
<author>
<name>Zu, Yuexuan</name>
</author>
<author>
<name>Padia, Vineet</name>
</author>
<author>
<name>Hunter, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/162843</id>
<updated>2026-03-08T03:25:21Z</updated>
<published>2025-09-18T00:00:00Z</published>
<summary type="text">An Open-Source Modular Bioreactor Platform for Cultivation of Synechocystis sp. PCC 6803 and Extraction of Intracellular Glucose
Baho, Ingie; Tseo, Yitong; Zu, Yuexuan; Padia, Vineet; Hunter, Ian
Synechocystis sp. PCC 6803 is a photosynthetic microbe with high potential for capturing excess atmospheric carbon while generating valuable bioproducts, like glucose. Current cultivation technologies remain expensive, closed-source, and poorly suited for downstream processing. This study presents a low-cost, open-source bioreactor platform with integrated modules for Synechocystis cultivation and glucose extraction. The system incorporates a photobioreactor, a lysis module, and a pressure-driven filtration setup. Optical density was continuously monitored using a custom-built module, and glucose was quantified using high-performance liquid chromatography (HPLC). Under an incident light intensity of approximately 400 μmol m−2 s−1, cultures reached a biomass productivity of 90 mg L−1 day−1, with a specific growth rate of 0.166 day−1 and glucose concentrations up to 5.08 mg L−1. A model was developed to predict growth based on measured environmental parameters, achieving strong predictive accuracy with a mean absolute error and variance of 0.0009±0.0003. The system demonstrates up to a 65% reduction in cost compared to commercial alternatives. This modular platform provides an accessible solution for biomanufacturing research and serves as a template for sustainable cyanobacteria-derived glucose production.
</summary>
<dc:date>2025-09-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fact-based Counter Narrative Generation to Combat Hate Speech</title>
<link href="https://hdl.handle.net/1721.1/162842" rel="alternate"/>
<author>
<name>Wilk, Brian</name>
</author>
<author>
<name>Shomee, Homaira Huda</name>
</author>
<author>
<name>Maity, Suman Kalyan</name>
</author>
<author>
<name>Medya, Sourav</name>
</author>
<id>https://hdl.handle.net/1721.1/162842</id>
<updated>2026-03-08T03:21:50Z</updated>
<published>2025-04-22T00:00:00Z</published>
<summary type="text">Fact-based Counter Narrative Generation to Combat Hate Speech
Wilk, Brian; Shomee, Homaira Huda; Maity, Suman Kalyan; Medya, Sourav
Online hatred has become an increasingly pervasive issue, affecting individuals and communities across various digital platforms. To combat hate speech in such platforms, counter narratives (CNs) are regarded as an effective method. In recent years, there has been growing interest in using generative AI tools to construct CNs. However, most of the generative models produce generic responses to hate speech and can hallucinate, reducing their effectiveness. To address the above limitations, we propose a counter narrative generation method that enhances CNs by providing non-aggressive, fact-based narratives with relevant background knowledge from two distinct sources, including a web search module. Furthermore, we conduct a comprehensive evaluation using multiple metrics, including LLM-based measures for persuasion, factuality, and informativeness, along with human and traditional NLP evaluations. Our method significantly outperforms baselines, achieving an average factuality score of 0.915, compared to 0.741, 0.701, and 0.69 for competitive baselines, and performs well in human evaluations.
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</summary>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis</title>
<link href="https://hdl.handle.net/1721.1/162841" rel="alternate"/>
<author>
<name>Lim, Brian</name>
</author>
<author>
<name>Cahaly, Joseph</name>
</author>
<author>
<name>Sng, Chester</name>
</author>
<author>
<name>Chew, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/162841</id>
<updated>2026-03-08T03:21:57Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis
Lim, Brian; Cahaly, Joseph; Sng, Chester; Chew, Adam
Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. Investigating XAI for high-stakes medical diagnosis, we propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams. The ante-hoc interpretable model leverages domain-relevant ontology, representation, and reasoning process to increase trust in expert users. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better performance than baseline models. We demonstrate the interpretability and trustworthiness of diagrammatic, abductive explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-aligned explanations for user-centric XAI in complex domains.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>TelePulse: Enhancing the Teleoperation Experience through Biomechanical Simulation-Based Electrical Muscle Stimulation in Virtual Reality</title>
<link href="https://hdl.handle.net/1721.1/162840" rel="alternate"/>
<author>
<name>Hwang, Seokhyun</name>
</author>
<author>
<name>Kang, Seongjun</name>
</author>
<author>
<name>Oh, Jeongseok</name>
</author>
<author>
<name>Park, Jeongju</name>
</author>
<author>
<name>Shin, Semoo</name>
</author>
<author>
<name>Luo, Yiyue</name>
</author>
<author>
<name>DelPreto, Joseph</name>
</author>
<author>
<name>Lee, Sangbeom</name>
</author>
<author>
<name>Lee, Kyoobin</name>
</author>
<author>
<name>Matusik, Wojciech</name>
</author>
<author>
<name>Rus, Daniela</name>
</author>
<author>
<name>Kim, SeungJun</name>
</author>
<id>https://hdl.handle.net/1721.1/162840</id>
<updated>2026-03-08T03:22:06Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">TelePulse: Enhancing the Teleoperation Experience through Biomechanical Simulation-Based Electrical Muscle Stimulation in Virtual Reality
Hwang, Seokhyun; Kang, Seongjun; Oh, Jeongseok; Park, Jeongju; Shin, Semoo; Luo, Yiyue; DelPreto, Joseph; Lee, Sangbeom; Lee, Kyoobin; Matusik, Wojciech; Rus, Daniela; Kim, SeungJun
This paper introduces TelePulse, a system integrating biomechanical simulation with electrical muscle stimulation (EMS) to provide precise haptic feedback for robot teleoperation tasks in virtual reality (VR). TelePulse has two components: a physical simulation part that calculates joint torques based on real-time force data from remote manipulators, and an electrical stimulation part that converts these torques into muscle stimulation. Two experiments were conducted to evaluate the system. The first experiment assessed the accuracy of EMS generated through biomechanical simulations by comparing it with electromyography (EMG) data during force-directed tasks, while the second experiment evaluated the impact of TelePulse on teleoperation performance during sanding and drilling tasks. The results suggest that TelePulse provided more accurate stimulation across all arm muscles, thereby enhancing task performance and user experience in the teleoperation environment. In this paper, we discuss the effect of TelePulse on teleoperation, its limitations, and areas for future improvement.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>TactStyle: Generating Tactile Textures with Generative AI for Digital Fabrication</title>
<link href="https://hdl.handle.net/1721.1/162839" rel="alternate"/>
<author>
<name>Faruqi, Faraz</name>
</author>
<author>
<name>Perroni-Scharf, Maxine</name>
</author>
<author>
<name>Walia, Jaskaran</name>
</author>
<author>
<name>Zhu, Yunyi</name>
</author>
<author>
<name>Feng, Shuyue</name>
</author>
<author>
<name>Degraen, Donald</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/162839</id>
<updated>2026-03-08T03:22:38Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">TactStyle: Generating Tactile Textures with Generative AI for Digital Fabrication
Faruqi, Faraz; Perroni-Scharf, Maxine; Walia, Jaskaran; Zhu, Yunyi; Feng, Shuyue; Degraen, Donald; Mueller, Stefanie
Recent work in Generative AI enables the stylization of 3D models based on image prompts. However, these methods do not incorporate tactile information, leading to designs that lack the expected tactile properties. We present TactStyle, a system that allows creators to stylize 3D models with images while incorporating the expected tactile properties. TactStyle accomplishes this using a modified image-generation model fine-tuned to generate heightfields for given surface textures. By optimizing 3D model surfaces to embody a generated texture, TactStyle creates models that match the desired style and replicate the tactile experience. We utilize a large-scale dataset of textures to train our texture generation model. In a psychophysical experiment, we evaluate the tactile qualities of a set of 3D-printed original textures and TactStyle’s generated textures. Our results show that TactStyle successfully generates a wide range of tactile features from a single image input, enabling a novel approach to haptic design.
CHI ’25, April 26–May 01, 2025, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation</title>
<link href="https://hdl.handle.net/1721.1/162838" rel="alternate"/>
<author>
<name>Li, Jiaji</name>
</author>
<author>
<name>Feng, Shuyue</name>
</author>
<author>
<name>Perroni-Scharf, Maxine</name>
</author>
<author>
<name>Liu, Yujia</name>
</author>
<author>
<name>Guan, Emily</name>
</author>
<author>
<name>Wang, Guanyun</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<id>https://hdl.handle.net/1721.1/162838</id>
<updated>2026-03-08T03:22:43Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Xstrings: 3D Printing Cable-Driven Mechanism for Actuation, Deformation, and Manipulation
Li, Jiaji; Feng, Shuyue; Perroni-Scharf, Maxine; Liu, Yujia; Guan, Emily; Wang, Guanyun; Mueller, Stefanie
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review</title>
<link href="https://hdl.handle.net/1721.1/162837" rel="alternate"/>
<author>
<name>Pang, Rock Yuren</name>
</author>
<author>
<name>Schroeder, Hope</name>
</author>
<author>
<name>Smith, Kynnedy</name>
</author>
<author>
<name>Barocas, Solon</name>
</author>
<author>
<name>Xiao, Ziang</name>
</author>
<author>
<name>Tseng, Emily</name>
</author>
<author>
<name>Bragg, Danielle</name>
</author>
<id>https://hdl.handle.net/1721.1/162837</id>
<updated>2026-03-08T03:22:33Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review
Pang, Rock Yuren; Schroeder, Hope; Smith, Kynnedy; Barocas, Solon; Xiao, Ziang; Tseng, Emily; Bragg, Danielle
Large language models (LLMs) have been positioned to revolutionize HCI by reshaping not only the interfaces, design patterns, and sociotechnical systems that we study, but also the research practices we use. To date, however, there has been little understanding of LLMs’ uptake in HCI. We address this gap via a systematic literature review of 153 CHI papers from 2020-24 that engage with LLMs. We taxonomize: (1) domains where LLMs are applied; (2) roles of LLMs in HCI projects; (3) contribution types; and (4) acknowledged limitations and risks. We find LLM work in 10 diverse domains, primarily via empirical and artifact contributions. Authors use LLMs in five distinct roles, including as research tools or simulated users. Still, authors often raise validity and reproducibility concerns, and overwhelmingly study closed models. We outline opportunities to improve HCI research with and on LLMs, and provide guiding questions for researchers to consider the validity and appropriateness of LLM-related work.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Need Help? Designing Proactive AI Assistants for Programming</title>
<link href="https://hdl.handle.net/1721.1/162836" rel="alternate"/>
<author>
<name>Chen, Valerie</name>
</author>
<author>
<name>Zhu, Alan</name>
</author>
<author>
<name>Zhao, Sebastian</name>
</author>
<author>
<name>Mozannar, Hussein</name>
</author>
<author>
<name>Sontag, David</name>
</author>
<author>
<name>Talwalkar, Ameet</name>
</author>
<id>https://hdl.handle.net/1721.1/162836</id>
<updated>2026-03-08T03:22:26Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Need Help? Designing Proactive AI Assistants for Programming
Chen, Valerie; Zhu, Alan; Zhao, Sebastian; Mozannar, Hussein; Sontag, David; Talwalkar, Ameet
While current chat-based AI assistants primarily operate reactively, responding only when prompted by users, there is significant potential for these systems to proactively assist in tasks without explicit invocation, enabling a mixed-initiative interaction. This work explores the design and implementation of proactive AI assistants powered by large language models. We first outline the key design considerations for building effective proactive assistants. As a case study, we propose a proactive chat-based programming assistant that automatically provides suggestions and facilitates their integration into the programmer’s code. The programming context provides a shared workspace enabling the assistant to offer more relevant suggestions. We conducted a randomized experimental study examining the impact of various design elements of the proactive assistant on programmer productivity and user experience. Our findings reveal significant benefits of incorporating proactive chat assistants into coding environments, while also uncovering important nuances that influence their usage and effectiveness.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection</title>
<link href="https://hdl.handle.net/1721.1/162835" rel="alternate"/>
<author>
<name>Pataranutaporn, Pat</name>
</author>
<author>
<name>Archiwaranguprok, Chayapatr</name>
</author>
<author>
<name>Chan, Samantha</name>
</author>
<author>
<name>Loftus, Elizabeth</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/162835</id>
<updated>2026-03-08T03:22:17Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
Pataranutaporn, Pat; Archiwaranguprok, Chayapatr; Chan, Samantha; Loftus, Elizabeth; Maes, Pattie
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating amenity access of new and repurposed housing within the 15-Minute City framework in Amsterdam</title>
<link href="https://hdl.handle.net/1721.1/162834" rel="alternate"/>
<author>
<name>Aksoy, Esma S.</name>
</author>
<author>
<name>Venverloo, Titus</name>
</author>
<author>
<name>Benson, Tom</name>
</author>
<author>
<name>Duarte, Fabio</name>
</author>
<id>https://hdl.handle.net/1721.1/162834</id>
<updated>2026-03-08T03:21:06Z</updated>
<published>2025-04-30T00:00:00Z</published>
<summary type="text">Evaluating amenity access of new and repurposed housing within the 15-Minute City framework in Amsterdam
Aksoy, Esma S.; Venverloo, Titus; Benson, Tom; Duarte, Fabio
Amsterdam faces a housing shortage. To address this, the Municipality aims to provide 73,660 housing units by 2028, either by constructing new housing buildings or by repurposing existing buildings with other functions, such as offices, schools, or industrial spaces. Past research comparing these two strategies primarily focuses on the lower construction costs, reduced raw material usage, and decreased energy consumption associated with demolition and new construction; locational characteristics of new versus repurposed housing projects, on the other hand, have seldom been studied. In this paper, we compare access to amenities, specifically their number and diversity, between new and repurposed housing buildings based on their location in the city. Using the 15-Minute City concept as both a theoretical framework and a practical tool, we evaluate the amenities within a 15-min walking isochrone for 38,061 housing units (554 residential buildings) constructed between 2015 and 2019. By aggregating these results at the district level, we deepen the analysis and provide insights that could support the development of locally tailored policies.
</summary>
<dc:date>2025-04-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>What-if Analysis for Business Professionals: Current Practices and Future Opportunities</title>
<link href="https://hdl.handle.net/1721.1/162833" rel="alternate"/>
<author>
<name>Gathani, Sneha</name>
</author>
<author>
<name>Liu, Zhicheng</name>
</author>
<author>
<name>Haas, Peter J.</name>
</author>
<author>
<name>Demiralp, Çağatay</name>
</author>
<id>https://hdl.handle.net/1721.1/162833</id>
<updated>2026-03-08T03:22:05Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">What-if Analysis for Business Professionals: Current Practices and Future Opportunities
Gathani, Sneha; Liu, Zhicheng; Haas, Peter J.; Demiralp, Çağatay
What-if analysis (WIA) is essential for data-driven decision-making, allowing users to assess how changes in variables impact outcomes and explore alternative scenarios. Existing WIA research primarily supports the workflows of data scientists and analysts, and largely overlooks business professionals who engage in WIA through non-technical means. To bridge this gap, we conduct a two-part user study with 22 business professionals across marketing, sales, product, and operations roles. The first study examines their existing WIA practices, tools, and challenges. Findings reveal that business professionals perform many WIA techniques independently using rudimentary tools due to various constraints. We then implement representative WIA techniques in a visual analytics prototype and use it as a probe to conduct a follow-up study evaluating business professionals’ practical use of the techniques. Results show that these techniques improve decision-making efficiency and confidence while underscoring the need for better support in data preparation, risk assessment, and domain knowledge integration. Finally, we offer design recommendations to enhance future business analytics systems.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combined dendritic cell and anti-TIGIT immunotherapy potentiates adaptive NK cells against HIV-1</title>
<link href="https://hdl.handle.net/1721.1/162832" rel="alternate"/>
<author>
<name>Sánchez-Cerrillo, Ildefonso</name>
</author>
<author>
<name>Agudo-Lera, María</name>
</author>
<author>
<name>Popova, Olga</name>
</author>
<author>
<name>Tsukalov, Ilya</name>
</author>
<author>
<name>Calvet-Mirabent, Marta</name>
</author>
<author>
<name>de los Santos, Ignacio</name>
</author>
<author>
<name>García-Fraile, Lucio</name>
</author>
<author>
<name>Fuentes, Patricia</name>
</author>
<author>
<name>Delgado-Arévalo, Cristina</name>
</author>
<author>
<name>Alcain, Juan</name>
</author>
<author>
<name>Sánchez-Gaona, Nerea</name>
</author>
<author>
<name>Grau-Expósito, Judith</name>
</author>
<author>
<name>Lázaro-Díez, María</name>
</author>
<id>https://hdl.handle.net/1721.1/162832</id>
<updated>2026-03-08T03:21:00Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">Combined dendritic cell and anti-TIGIT immunotherapy potentiates adaptive NK cells against HIV-1
Sánchez-Cerrillo, Ildefonso; Agudo-Lera, María; Popova, Olga; Tsukalov, Ilya; Calvet-Mirabent, Marta; de los Santos, Ignacio; García-Fraile, Lucio; Fuentes, Patricia; Delgado-Arévalo, Cristina; Alcain, Juan; Sánchez-Gaona, Nerea; Grau-Expósito, Judith; Lázaro-Díez, María
Natural Killer (NK) cells are promising candidates for targeting persistently infected CD4+ T cells in people with HIV-1 (PWH). However, chronicity of HIV-1 infection impairs NK cell functionality, requiring additional strategies to potentiate their cytotoxic activity. This study demonstrates that dendritic cells primed with nanoparticles containing Poly I:C (Nano-PIC-MDDC) enhance the natural cytotoxic function of NK cells from effective responder PWH. These NK cells exhibit increased proportions of NKG2C+ cell subsets capable of eliminating HIV-1-infected CD4+ T cells through the TRAIL receptor. In contrast, in non-responder PWH, elevated expression of the inhibitory receptor TIGIT is associated with reduced frequencies of NKG2C+ NK cells and diminished TRAIL expression. TIGIT blockade restores cytotoxicity of NK cells from non-responder PWH against HIV-1-infected cells by upregulating TRAIL. Furthermore, combining Nano-PIC-MDDC-primed NK cells with anti-TIGIT immunotherapy in humanized NSG mice reduces the expansion of HIV-1-infected cells, preserves NKG2C+ NK cell precursors and increases TRAIL expression in tissue. Collectively, these findings support the combined use of Nano-PIC-MDDC and TIGIT blockade as a promising immunotherapeutic strategy toward an HIV-1 cure.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An asymmetric nautilus-like HflK/C assembly controls FtsH proteolysis of membrane proteins</title>
<link href="https://hdl.handle.net/1721.1/162831" rel="alternate"/>
<author>
<name>Ghanbarpour, Alireza</name>
</author>
<author>
<name>Telusma, Bertina</name>
</author>
<author>
<name>Powell, Barrett M.</name>
</author>
<author>
<name>Zhang, Jia J.</name>
</author>
<author>
<name>Bolstad, Isabella</name>
</author>
<author>
<name>Vargas, Carolyn</name>
</author>
<author>
<name>Keller, Sandro</name>
</author>
<author>
<name>Baker, Tania A.</name>
</author>
<author>
<name>Sauer, Robert T.</name>
</author>
<author>
<name>Davis, Joseph H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162831</id>
<updated>2026-03-08T03:20:59Z</updated>
<published>2025-03-13T00:00:00Z</published>
<summary type="text">An asymmetric nautilus-like HflK/C assembly controls FtsH proteolysis of membrane proteins
Ghanbarpour, Alireza; Telusma, Bertina; Powell, Barrett M.; Zhang, Jia J.; Bolstad, Isabella; Vargas, Carolyn; Keller, Sandro; Baker, Tania A.; Sauer, Robert T.; Davis, Joseph H.
The AAA protease FtsH associates with HflK/C subunits to form a megadalton-size complex that spans the inner membrane and extends into the periplasm of E. coli. How this bacterial complex and homologous assemblies in eukaryotic organelles recruit, extract, and degrade membrane-embedded substrates is unclear. Following the overproduction of protein components, recent cryo-EM structures showed symmetric HflK/C cages surrounding FtsH in a manner proposed to inhibit the degradation of membrane-embedded substrates. Here, we present structures of native protein complexes, in which HflK/C instead forms an asymmetric nautilus-shaped assembly with an entryway for membrane-embedded substrates to reach and be engaged by FtsH. Consistent with this nautilus-like structure, proteomic assays suggest that HflK/C enhances FtsH degradation of certain membrane-embedded substrates. Membrane curvature in our FtsH•HflK/C complexes is opposite that of surrounding membrane regions, a property that correlates with lipid scramblase activity and possibly with FtsH’s function in the degradation of membrane-embedded proteins.
</summary>
<dc:date>2025-03-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building health systems capable of leveraging AI: applying Paul Farmer’s 5S framework for equitable global health</title>
<link href="https://hdl.handle.net/1721.1/162830" rel="alternate"/>
<author>
<name>McCoy, Liam G.</name>
</author>
<author>
<name>Bihorac, Azra</name>
</author>
<author>
<name>Celi, Leo A.</name>
</author>
<author>
<name>Elmore, Matthew</name>
</author>
<author>
<name>Kewalramani, Divya</name>
</author>
<author>
<name>Kwaga, Teddy</name>
</author>
<author>
<name>Martinez-Martin, Nicole</name>
</author>
<author>
<name>Prôa, Renata</name>
</author>
<author>
<name>Schamroth, Joel</name>
</author>
<author>
<name>Shaffer, Jonathan D.</name>
</author>
<author>
<name>Youssef, Alaa</name>
</author>
<author>
<name>Fiske, Amelia</name>
</author>
<id>https://hdl.handle.net/1721.1/162830</id>
<updated>2026-03-08T03:21:07Z</updated>
<published>2025-05-02T00:00:00Z</published>
<summary type="text">Building health systems capable of leveraging AI: applying Paul Farmer’s 5S framework for equitable global health
McCoy, Liam G.; Bihorac, Azra; Celi, Leo A.; Elmore, Matthew; Kewalramani, Divya; Kwaga, Teddy; Martinez-Martin, Nicole; Prôa, Renata; Schamroth, Joel; Shaffer, Jonathan D.; Youssef, Alaa; Fiske, Amelia
The development of artificial intelligence (AI) applications in healthcare is often positioned as a solution to the greatest challenges facing global health. Advocates propose that AI can bridge gaps in care delivery and access, improving healthcare quality and reducing inequity, including in resource-constrained settings. A broad base of critical scholarship has highlighted important issues with healthcare AI, including algorithmic bias and inequitable and inaccurate model outputs. While such criticisms are valid, there exists a much more fundamental challenge that is often overlooked in global health policy debates: the dangerous mismatch between AI’s imagined benefits and the material realities of healthcare systems globally. AI cannot be deployed effectively or ethically in contexts lacking sufficient social and material infrastructure and resources to provide effective healthcare services. Continued investments in AI within unprepared, under-resourced contexts risk misallocating resources and potentially causing more harm than good. The article concludes by providing concrete questions to assess AI systemic capacity and socio-technical readiness in global health.
</summary>
<dc:date>2025-05-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mediating The Marginal: A Quantitative Analysis of Curated LGBTQ+ Content on Instagram</title>
<link href="https://hdl.handle.net/1721.1/162829" rel="alternate"/>
<author>
<name>Souza, Garrett</name>
</author>
<author>
<name>Lutz, Nina</name>
</author>
<author>
<name>Turner, Katlyn</name>
</author>
<id>https://hdl.handle.net/1721.1/162829</id>
<updated>2026-03-08T03:22:04Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Mediating The Marginal: A Quantitative Analysis of Curated LGBTQ+ Content on Instagram
Souza, Garrett; Lutz, Nina; Turner, Katlyn
Control and curation of dominant visual culture – rendering who and what is visible – is central to identity formation, particularly for LGBTQ+ communities relying on digital spaces for safe self-expression. In this work, we analyze Instagram as a site of algorithmic visual curation, performing a quantitative analysis of algorithmically mediated image feeds delivered to a gay-coded user. Our persona account exclusively followed #gay and #instagay feeds, and engaged with content within these discursive spaces to seed algorithmic content promotion to a normative gay user. We present an analysis of skin tone presentations, emoji usage, and engagement metrics alongside analysis of generative outputs of dominant visual trends within the #gay search and Explore feeds. We observe content depicting darker-skinned individuals has higher engagement yet less algorithmic promotion relative to lighter skin tones, while hypermasculine and homonormative content is heavily promoted. These results suggest that, while marginalized positionalities have certainly been rendered more visible through social media platforms, this visibility is increasingly contingent on assimilation to normative ideals through algorithmically determined modes that are not necessarily consistent with user choices, preferences, or realities.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanistic basis for the emergence of EPS1 as a catalyst in salicylic acid biosynthesis of Brassicaceae</title>
<link href="https://hdl.handle.net/1721.1/162828" rel="alternate"/>
<author>
<name>Torrens-Spence, Michael P</name>
</author>
<author>
<name>Matos, Jason O</name>
</author>
<author>
<name>Li, Tianjie</name>
</author>
<author>
<name>Kastner, David W</name>
</author>
<author>
<name>Kim, Colin Y</name>
</author>
<author>
<name>Wang, Ziqi</name>
</author>
<author>
<name>Glinkerman, Christopher M</name>
</author>
<author>
<name>Sherk, Jennifer</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Wang, Yi</name>
</author>
<author>
<name>Weng, Jing-Ke</name>
</author>
<id>https://hdl.handle.net/1721.1/162828</id>
<updated>2026-03-08T03:25:22Z</updated>
<published>2024-01-01T00:00:00Z</published>
<summary type="text">Mechanistic basis for the emergence of EPS1 as a catalyst in salicylic acid biosynthesis of Brassicaceae
Torrens-Spence, Michael P; Matos, Jason O; Li, Tianjie; Kastner, David W; Kim, Colin Y; Wang, Ziqi; Glinkerman, Christopher M; Sherk, Jennifer; Kulik, Heather J; Wang, Yi; Weng, Jing-Ke
Salicylic acid (SA) production in Brassicaceae plants is uniquely accelerated from isochorismate by EPS1, a newly identified enzyme in the BAHD acyltransferase family. We present crystal structures of EPS1 from Arabidopsis thaliana in both its apo and substrate-analog-bound forms. Integrating microsecond-scale molecular dynamics simulations with quantum mechanical cluster modeling, we propose a pericyclic rearrangement lyase mechanism for EPS1. We further reconstitute the isochorismate-derived SA biosynthesis pathway in Saccharomyces cerevisiae, establishing an in vivo platform to examine the impact of active-site residues on EPS1 functionality. Moreover, stable transgenic expression of EPS1 in soybean increases basal SA levels, highlighting the enzyme’s potential to enhance defense mechanisms in non-Brassicaceae plants lacking an EPS1 ortholog. Our findings illustrate the evolutionary adaptation of an ancestral enzyme’s active site to enable a novel catalytic mechanism that boosts SA production in Brassicaceae plants.
</summary>
<dc:date>2024-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nested non-covalent interactions expand the functions of supramolecular polymer networks</title>
<link href="https://hdl.handle.net/1721.1/162827" rel="alternate"/>
<author>
<name>Lundberg, David J</name>
</author>
<author>
<name>Brown, Christopher M</name>
</author>
<author>
<name>Bobylev, Eduard O</name>
</author>
<author>
<name>Oldenhuis, Nathan J</name>
</author>
<author>
<name>Alfaraj, Yasmeen S</name>
</author>
<author>
<name>Zhao, Julia</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Johnson, Jeremiah A</name>
</author>
<id>https://hdl.handle.net/1721.1/162827</id>
<updated>2026-03-08T03:25:16Z</updated>
<published>2024-05-10T00:00:00Z</published>
<summary type="text">Nested non-covalent interactions expand the functions of supramolecular polymer networks
Lundberg, David J; Brown, Christopher M; Bobylev, Eduard O; Oldenhuis, Nathan J; Alfaraj, Yasmeen S; Zhao, Julia; Kevlishvili, Ilia; Kulik, Heather J; Johnson, Jeremiah A
Supramolecular polymer networks contain non-covalent cross-links that enable access to broadly tunable mechanical properties and stimuli-responsive behaviors; the incorporation of multiple unique non-covalent cross-links within such materials further expands their mechanical responses and functionality. To date, however, the design of such materials has been accomplished through discrete combinations of distinct interaction types in series, limiting materials design logic. Here we introduce the concept of leveraging “nested” supramolecular crosslinks, wherein two distinct types of non-covalent interactions exist in parallel, to control bulk material functions. To demonstrate this concept, we use polymer-linked Pd₂L₄ metal–organic cage (polyMOC) gels that form hollow metal–organic cage junctions through metal–ligand coordination and can exhibit well-defined host-guest binding within their cavity. In these “nested” supramolecular network junctions, the thermodynamics of host-guest interactions within the junctions affect the metal–ligand interactions that form those junctions, ultimately translating to substantial guest-dependent changes in bulk material properties that could not be achieved in traditional supramolecular networks with multiple interactions in series.
</summary>
<dc:date>2024-05-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Direct air capture-assisted sustainable fuel solution in maritime sector: a carbon footprint perspective</title>
<link href="https://hdl.handle.net/1721.1/162826" rel="alternate"/>
<author>
<name>Li, Shuangjun</name>
</author>
<author>
<name>Du, Zhenyu</name>
</author>
<author>
<name>Wang, Junyao</name>
</author>
<author>
<name>Wang, Hao</name>
</author>
<author>
<name>Cao, Xiangkun E.</name>
</author>
<author>
<name>Chen, Runkai</name>
</author>
<author>
<name>Pang, Yujia</name>
</author>
<author>
<name>Deng, Shuai</name>
</author>
<author>
<name>Mašek, Ondřej</name>
</author>
<author>
<name>Yuan, Xiangzhou</name>
</author>
<author>
<name>Lee, Ki B.</name>
</author>
<id>https://hdl.handle.net/1721.1/162826</id>
<updated>2026-03-08T03:20:56Z</updated>
<published>2025-05-16T00:00:00Z</published>
<summary type="text">Direct air capture-assisted sustainable fuel solution in maritime sector: a carbon footprint perspective
Li, Shuangjun; Du, Zhenyu; Wang, Junyao; Wang, Hao; Cao, Xiangkun E.; Chen, Runkai; Pang, Yujia; Deng, Shuai; Mašek, Ondřej; Yuan, Xiangzhou; Lee, Ki B.
Carbon emissions reduction within the maritime sector is pivotal for realizing zero-carbon goals and mitigating climate impacts. Adopting renewable carbon fuels presents a potent strategy, but it requires a comprehensive understanding, based on carbon footprint assessment, of these fuels' negative-carbon attributes and enduring contributions to future development. By using the CO2 captured through direct air capture (DAC) technology and the H2 obtained via water electrolysis as feedstock, electro-methanol (e-methanol) can be produced under renewable energy-driven conditions. Owing to the environmental benefits and economic feasibility of e-methanol, we highlight its potential as a practical alternative to traditional fossil fuel-based technical scenarios. A quantitative analysis of this integrated system from a carbon footprint perspective allows for an environmental sustainability assessment. According to predictions, scaled-up usage of the system can reduce the maritime sector's contribution to global carbon emissions by half by 2050.
</summary>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topic Brief: Guidance for Reporting on Studies of Open and Equitable Scholarship</title>
<link href="https://hdl.handle.net/1721.1/162825" rel="alternate"/>
<author>
<name>Altman, Micah</name>
</author>
<id>https://hdl.handle.net/1721.1/162825</id>
<updated>2025-09-30T03:05:09Z</updated>
<published>2025-09-29T00:00:00Z</published>
<summary type="text">Topic Brief: Guidance for Reporting on Studies of Open and Equitable Scholarship
Altman, Micah
The absence of standardized measurement and reporting hinders progress toward more reliable and equitable scientific practices. This topic brief summarizes existing practice across stages of the research lifecycle to promote transparency and reliability, and to support the evaluation of participation and equity. The brief also discusses issues surrounding integrating these practices within the context of limited-term fellowship programs.
</summary>
<dc:date>2025-09-29T00:00:00Z</dc:date>
</entry>
<entry>
<title>Infants Recognize the Negative Impact of Phone Distraction on Performance</title>
<link href="https://hdl.handle.net/1721.1/162822" rel="alternate"/>
<author>
<name>Cao, Qiong</name>
</author>
<author>
<name>Mears, Anna</name>
</author>
<author>
<name>Feigenson, Lisa</name>
</author>
<id>https://hdl.handle.net/1721.1/162822</id>
<updated>2026-03-08T03:25:32Z</updated>
<published>2025-03-21T00:00:00Z</published>
<summary type="text">Infants Recognize the Negative Impact of Phone Distraction on Performance
Cao, Qiong; Mears, Anna; Feigenson, Lisa
Seeing adults use cellphones is a common daily experience for infants, yet little is known about how infants think about others’ cellphone use. Do infants recognize that phone usage can affect the user’s behavior? Here we asked whether infants expect a person’s task performance to be impaired by phone use. Twenty‐month‐old infants watched adults building block towers. One adult did this while also using a phone, either looking at the screen and scrolling (Experiment 1; N = 24) or simply talking (Experiment 2; N = 24). Across both experiments, infants looked longer when the person who had been using the phone built a taller tower than the person who had not been using the phone, compared to the reverse. This suggests that infants expected phone usage to negatively impact performance. Thus, early in development, children recognize that cell phone use can affect people's goal‐directed actions; this may be one example of a broader understanding of the impact of multitasking on performance.
</summary>
<dc:date>2025-03-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regio‐Selective Mechanical Enhancement of Polymer‐Grafted Nanoparticle Composites via Light‐Mediated Crosslinking</title>
<link href="https://hdl.handle.net/1721.1/162821" rel="alternate"/>
<author>
<name>Kim, Kyungtae</name>
</author>
<author>
<name>Grummon, Benjamin C.</name>
</author>
<author>
<name>Thrasher, Carl J.</name>
</author>
<author>
<name>Macfarlane, Robert J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162821</id>
<updated>2026-03-08T03:25:31Z</updated>
<published>2025-01-28T00:00:00Z</published>
<summary type="text">Regio‐Selective Mechanical Enhancement of Polymer‐Grafted Nanoparticle Composites via Light‐Mediated Crosslinking
Kim, Kyungtae; Grummon, Benjamin C.; Thrasher, Carl J.; Macfarlane, Robert J.
Polymer-brush-grafted nanoparticles (PGNPs) that can be covalently crosslinked post-processing enable the fabrication of mechanically robust and chemically stable polymer nanocomposites with high inorganic filler content. Modifying PGNP brushes to append UV-activated crosslinkers along the polymer chains would permit a modular crosslinking strategy applicable to a diverse range of nanocomposite compositions. Further, light-activated crosslinking reactions enable spatial control of crosslink density to program intentionally inhomogeneous mechanical responses. Here, a method of synthesizing composites using UV-crosslinkable brush-coated nanoparticles (referred to as UV-XNPs) is introduced that can be applied to various monomer compositions by incorporating photoinitiators into the polymer brushes. UV crosslinking of processed UV-XNP structures can increase their tensile modulus up to 15-fold without any noticeable alteration to their appearance or shape. By using photomasks to alter UV intensity across a sample, intentionally designed inhomogeneities in crosslink density result in predetermined anisotropic shape changes under strain. This unique capability of UV-XNP materials is applied to stiffness-patterned flexible electronic substrates that prevent the delamination of rigid components under deformation. The potential of UV-XNPs as functional, soft device components is further demonstrated by wearable devices that can be modified post-fabrication to customize their performance, permitting the ability to add functionality to existing device architectures.
</summary>
<dc:date>2025-01-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-designing Large Language Model Tools for Project-Based Learning with K12 Educators</title>
<link href="https://hdl.handle.net/1721.1/162820" rel="alternate"/>
<author>
<name>Ravi, Prerna</name>
</author>
<author>
<name>Masla, John</name>
</author>
<author>
<name>Kakoti, Gisella</name>
</author>
<author>
<name>Lin, Grace</name>
</author>
<author>
<name>Anderson, Emma</name>
</author>
<author>
<name>Taylor, Matt</name>
</author>
<author>
<name>Ostrowski, Anastasia</name>
</author>
<author>
<name>Breazeal, Cynthia</name>
</author>
<author>
<name>Klopfer, Eric</name>
</author>
<author>
<name>Abelson, Hal</name>
</author>
<id>https://hdl.handle.net/1721.1/162820</id>
<updated>2026-03-08T03:25:29Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Co-designing Large Language Model Tools for Project-Based Learning with K12 Educators
Ravi, Prerna; Masla, John; Kakoti, Gisella; Lin, Grace; Anderson, Emma; Taylor, Matt; Ostrowski, Anastasia; Breazeal, Cynthia; Klopfer, Eric; Abelson, Hal
The emergence of generative AI, particularly large language models (LLMs), has opened the door for student-centered and active learning methods like project-based learning (PBL). However, PBL poses practical implementation challenges for educators around project design and management, assessment, and balancing student guidance with student autonomy. The following research documents a co-design process with interdisciplinary K-12 teachers to explore and address the current PBL challenges they face. Through teacher-driven interviews, collaborative workshops, and iterative design of wireframes, we gathered evidence for ways LLMs can support teachers in implementing high-quality PBL pedagogy by automating routine tasks and enhancing personalized learning. Teachers in the study advocated for supporting their professional growth and augmenting their current roles without replacing them. They also identified affordances and challenges around classroom integration, including resource requirements and constraints, ethical concerns, and potential immediate and long-term impacts. Drawing on these, we propose design guidelines for future deployment of LLM tools in PBL.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-resolution direct thrust characterization of electrospray thrusters with EMI-BF4 at different temperatures and polarities</title>
<link href="https://hdl.handle.net/1721.1/162819" rel="alternate"/>
<author>
<name>Neunzig, O.</name>
</author>
<author>
<name>Lozano, P.</name>
</author>
<author>
<name>Tajmar, M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162819</id>
<updated>2026-03-08T03:21:11Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">High-resolution direct thrust characterization of electrospray thrusters with EMI-BF4 at different temperatures and polarities
Neunzig, O.; Lozano, P.; Tajmar, M.
Electrospray thrusters have garnered significant attention throughout the years as an exceptional propulsion technology for nano- and picosatellites due to their efficiency and precise thrust control. They operate on the principle of electrostatically accelerating charged particles (liquid droplets, pure ions or their mixtures) from ionic liquids and other low-volatility propellants, which are extracted from a Taylor-cone formation on top of porous emitter arrays. In this work we characterized the thrust performance of electrospray thrusters with the ionic liquid 1-ethyl-3-methylimidazolium tetrafluoroborate (EMI-BF4) as well as an attempt with an acetate-based ionic liquid. The arrays were operated at different polarities and at elevated temperatures of up to 43 °C, which led to a decrease in viscosity and enhanced current emission for EMI-BF4 by a factor of 1.43 at equal voltage levels. Temperature-related effects resulted in a thrust difference of 3% between the maximum and minimum temperature throughout the tested current range. Thrust measurements for emission currents between 10 µA and 200 µA revealed a detectable and temperature-independent difference between the positive and negative mode in favor of the negative polarity, indicating different ion regimes compared to most data found in the literature. The paper presents a novel thrust measurement setup for micro-propulsion systems based on a counterbalanced double pendulum thrust balance that achieves nanonewton resolution with the option to heat several thrusters. A comprehensive overview of the test setup and calculations of obtained electrospray parameters from experimental data is presented.
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mucus-derived glycans are inhibitory signals for Salmonella Typhimurium SPI-1-mediated invasion</title>
<link href="https://hdl.handle.net/1721.1/162818" rel="alternate"/>
<author>
<name>Wheeler, Kelsey M.</name>
</author>
<author>
<name>Gold, Michaela A.</name>
</author>
<author>
<name>Stevens, Corey A.</name>
</author>
<author>
<name>Tedin, Karsten</name>
</author>
<author>
<name>Wood, Amanda M.</name>
</author>
<author>
<name>Uzun, Deniz</name>
</author>
<author>
<name>Cárcamo-Oyarce, Gerardo</name>
</author>
<author>
<name>Turner, Bradley S.</name>
</author>
<author>
<name>Fulde, Marcus</name>
</author>
<author>
<name>Song, Jeongmin</name>
</author>
<author>
<name>Kramer, Jessica R.</name>
</author>
<author>
<name>Ribbeck, Katharina</name>
</author>
<id>https://hdl.handle.net/1721.1/162818</id>
<updated>2026-03-08T03:25:27Z</updated>
<published>2025-09-23T00:00:00Z</published>
<summary type="text">Mucus-derived glycans are inhibitory signals for Salmonella Typhimurium SPI-1-mediated invasion
Wheeler, Kelsey M.; Gold, Michaela A.; Stevens, Corey A.; Tedin, Karsten; Wood, Amanda M.; Uzun, Deniz; Cárcamo-Oyarce, Gerardo; Turner, Bradley S.; Fulde, Marcus; Song, Jeongmin; Kramer, Jessica R.; Ribbeck, Katharina
Mucus forms a critical barrier against enteric pathogens like Salmonella enterica serovar Typhimurium. While in vivo studies indicate that secreted, gel-forming mucins and specifically core 3 glycosylation are protective against S. Typhimurium, the molecular mechanisms involved remain unclear. Here, we demonstrate that native intestinal mucins inhibit Salmonella invasion of colonic epithelial cells by downregulating the type 3 secretion system through suppression of the key virulence regulator, HilD. Our study identifies mucin glycans and specific mucin sugars, namely N-acetyl galactosamine and N-acetyl glucosamine, as the components responsible for mucin’s anti-virulence effect, likely via functional or direct interaction with HilD’s putative carbohydrate-binding domain. Notably, we find that the native presentation of these sugars is important for activity. These insights provide a mechanistic foundation for mucin-based strategies to combat enteric infections and, given the prevalence of homologous AraC-type regulators in other pathogens, suggest mucins’ potential as broad-spectrum anti-virulence agents.
</summary>
<dc:date>2025-09-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revenue Management to Maximize Global Network Revenue for a Satellite Communication Operator</title>
<link href="https://hdl.handle.net/1721.1/162817" rel="alternate"/>
<author>
<name>Eiskowitz, Skylar</name>
</author>
<author>
<name>Cameron, Bruce G</name>
</author>
<author>
<name>Crawley, Edward F</name>
</author>
<author>
<name>Belobaba, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/162817</id>
<updated>2026-03-08T03:25:24Z</updated>
<published>2025-03-21T00:00:00Z</published>
<summary type="text">Revenue Management to Maximize Global Network Revenue for a Satellite Communication Operator
Eiskowitz, Skylar; Cameron, Bruce G; Crawley, Edward F; Belobaba, Peter
The satellite communication (SatCom) industry is rapidly expanding, with supply growing much faster than demand, potentially straining market prices and company stability. Effective revenue management (RM) can help operators optimize the use of limited and expensive satellite resources. Current SatCom RM methods fail to account for both the temporal and spatial nature of satellite services. This paper presents a multizone displacement-adjusted virtual nesting (DAVN) RM method to create booking limits that guide operators in determining which products to accept to maximize revenue. By incorporating spatial interzone effects, the multizone method improves revenue compared to the separate zones method by 2%–10%. The results demonstrate that under varying pricing structures, the multizone approach increases the acceptance of high-revenue mobile products by approximately 10%, with a corresponding reduction in the sale of longer-duration stationary products.
</summary>
<dc:date>2025-03-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solving Large‐Scale Weapon Target Assignment Problems in Seconds Using Branch‐Price‐And‐Cut</title>
<link href="https://hdl.handle.net/1721.1/162816" rel="alternate"/>
<author>
<name>Bertsimas, Dimitris</name>
</author>
<author>
<name>Paskov, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/162816</id>
<updated>2026-03-08T03:25:26Z</updated>
<published>2025-01-27T00:00:00Z</published>
<summary type="text">Solving Large‐Scale Weapon Target Assignment Problems in Seconds Using Branch‐Price‐And‐Cut
Bertsimas, Dimitris; Paskov, Alex
This paper proposes a framework based on branch-price-and-cut to solve the weapon target assignment (WTA) problem, a popular class of non-linear assignment problems that has received significant attention over the past several decades. We first reformulate the WTA into a form amenable to column generation and then derive efficient algorithms for initializing the column generation, solving the pricing problem, generating clique cuts, and managing the branch-and-bound. Through significant experimentation, we display the framework’s efficiency – which scales to solve problems with 10000 targets and weapons on a laptop and exactly solves problems in seconds that previously took hours to solve. We also discuss extensions to common WTA variants and more general non-linear assignment problems in hopes of motivating algorithmic developments.
</summary>
<dc:date>2025-01-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>First‐Order Empirical Interpolation Method for Real‐Time Solution of Parametric Time‐Dependent Nonlinear PDEs</title>
<link href="https://hdl.handle.net/1721.1/162815" rel="alternate"/>
<author>
<name>Nguyen, Ngoc Cuong</name>
</author>
<id>https://hdl.handle.net/1721.1/162815</id>
<updated>2026-03-08T03:25:01Z</updated>
<published>2025-03-31T00:00:00Z</published>
<summary type="text">First‐Order Empirical Interpolation Method for Real‐Time Solution of Parametric Time‐Dependent Nonlinear PDEs
Nguyen, Ngoc Cuong
We present a model reduction approach for the real-time solution of time-dependent nonlinear partial differential equations (PDEs) with parametric dependencies. A major challenge in constructing efficient and accurate reduced-order models for nonlinear PDEs is the efficient treatment of nonlinear terms. We address this by unifying the implementation of hyperreduction methods to deal with nonlinear terms. Furthermore, we introduce a first-order empirical interpolation method (EIM) to provide an efficient approximation of the nonlinear terms in time-dependent PDEs. We demonstrate the effectiveness of our approach on the Allen–Cahn equation, which models phase separation, and the Buckley–Leverett equation, which describes two-phase fluid flow in porous media. Numerical results highlight the accuracy, efficiency, and stability of the proposed method compared with both the Galerkin–Newton approach and hyper-reduced models using the standard EIM.
</summary>
<dc:date>2025-03-31T00:00:00Z</dc:date>
</entry>
<entry>
<title>Propagation of Slow Slip Events on Rough Faults: Clustering, Back Propagation, and Re‐Rupturing</title>
<link href="https://hdl.handle.net/1721.1/162814" rel="alternate"/>
<author>
<name>Sun, Yudong</name>
</author>
<author>
<name>Cattania, Camilla</name>
</author>
<id>https://hdl.handle.net/1721.1/162814</id>
<updated>2026-03-08T03:25:25Z</updated>
<published>2025-02-04T00:00:00Z</published>
<summary type="text">Propagation of Slow Slip Events on Rough Faults: Clustering, Back Propagation, and Re‐Rupturing
Sun, Yudong; Cattania, Camilla
Seismic and geodetic observations show that slow slip events (SSEs) in subduction zones can happen at all temporal and spatial scales and propagate at various velocities. Observation of rapid tremor reversals indicates back‐propagating fronts traveling much faster than the main rupture front. Heterogeneity of fault properties, such as fault roughness, is a ubiquitous feature often invoked to explain this complex behavior, but how roughness affects SSEs is poorly understood. Here we use quasi‐dynamic seismic cycle simulations to model SSEs on a rough fault, using normal stress perturbations as a proxy for roughness and assuming rate‐and‐state friction, with velocity‐weakening friction at low slip rate and velocity‐strengthening at high slip rate. SSEs exhibit temporal clustering, large variations in rupture length and propagation speed, and back‐propagating fronts at different scales. We identify a mechanism for back propagation: as ruptures propagate through low‐normal-stress regions, a rapid increase in slip velocity combined with rate‐strengthening friction induces stress oscillations at the rupture tip, and the subsequent “delayed stress drop” induces secondary back‐propagating fronts. Moreover, on rough faults with fractal elevation profiles, the transition from pulse to crack can also lead to the re‐rupture of SSEs due to local variations in the level of heterogeneity. Our study provides a possible mechanism for the complex evolution of SSEs inferred from geophysical observations and its link to fault roughness.
</summary>
<dc:date>2025-02-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of 2022 Hunga Tonga‐Hunga Ha'apai (Hunga) Eruption on Stratospheric Circulation and Climate</title>
<link href="https://hdl.handle.net/1721.1/162813" rel="alternate"/>
<author>
<name>Yook, Simchan</name>
</author>
<author>
<name>Solomon, Susan</name>
</author>
<author>
<name>Wang, Xinyue</name>
</author>
<id>https://hdl.handle.net/1721.1/162813</id>
<updated>2026-03-08T03:24:52Z</updated>
<published>2025-03-17T00:00:00Z</published>
<summary type="text">The Impact of 2022 Hunga Tonga‐Hunga Ha'apai (Hunga) Eruption on Stratospheric Circulation and Climate
Yook, Simchan; Solomon, Susan; Wang, Xinyue
The Hunga Tonga‐Hunga Ha'apai (Hunga) volcanic eruption in January 2022 injected a substantial amount of water vapor and a moderate amount of SO2 into the stratosphere. Both satellite observations in 2022 and subsequent chemistry‐climate model simulations forced by realistic Hunga perturbations reveal large‐scale cooling in the Southern Hemisphere (SH) tropical to subtropical stratosphere following the Hunga eruption. This study analyzes the drivers of this cooling, including the distinctive role of anomalies in water vapor, ozone, and sulfate aerosol concentration on the simulated climate response to the Hunga volcanic forcing, based on climate simulations with prescribed chemistry/aerosol. Simulated circulation and temperature anomalies based on specified‐chemistry simulations show good agreement with previous coupled‐chemistry simulations and indicate that each forcing of ozone, water vapor, and sulfate aerosol from the Hunga volcanic eruption contributed to the circulation and temperature anomalies in the SH stratosphere. Our results also suggest that (a) the large‐scale stratospheric cooling during the austral winter was mainly induced by changes in dynamical processes, not by radiative processes, and that (b) the radiative feedback from negative ozone anomalies contributed to the prolonged cold temperature anomalies in the lower stratosphere (∼70 hPa level) and hence to long-lasting cold conditions of the polar vortex.
</summary>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Status of Vernier Acuity Following Late Sight Onset</title>
<link href="https://hdl.handle.net/1721.1/162812" rel="alternate"/>
<author>
<name>Vogelsang, Lukas</name>
</author>
<author>
<name>Gupta, Priti</name>
</author>
<author>
<name>Vogelsang, Marin</name>
</author>
<author>
<name>Shah, Pragya</name>
</author>
<author>
<name>Tiwari, Kashish</name>
</author>
<author>
<name>Verma, Dhun</name>
</author>
<author>
<name>Yadav, Mrinalini</name>
</author>
<author>
<name>Raja, Sruti</name>
</author>
<author>
<name>Ganesh, Suma</name>
</author>
<author>
<name>Sinha, Pawan</name>
</author>
<id>https://hdl.handle.net/1721.1/162812</id>
<updated>2026-03-08T03:24:50Z</updated>
<published>2025-02-05T00:00:00Z</published>
<summary type="text">The Status of Vernier Acuity Following Late Sight Onset
Vogelsang, Lukas; Gupta, Priti; Vogelsang, Marin; Shah, Pragya; Tiwari, Kashish; Verma, Dhun; Yadav, Mrinalini; Raja, Sruti; Ganesh, Suma; Sinha, Pawan
We possess a remarkably acute ability to detect even small misalignments between extended line segments. This “vernier acuity” significantly exceeds our “resolution acuity”—the ability to resolve closely separated stimuli—and is generally considered a “hyperacuity,” since the detectable misalignments are markedly finer than the diameter of single retinal cones. Vernier acuity has, thus, often been proposed to reflect spatial organization and multi-unit cortical processing, rendering it an important index of visual function. Notably, vernier acuity exhibits a characteristic developmental signature: it is inferior to resolution acuity early in life but eventually exceeds it by up to one order of magnitude. However, vernier acuity may be disproportionately sensitive to developmental disruptions. Here, we examined the resilience of acquiring this visual proficiency to early-onset, prolonged deprivation by longitudinally tracking vernier and resolution acuities in children with dense congenital cataracts who gained sight late in life as part of Project Prakash. Our data reveal marked longitudinal improvements in both acuity measures and also demonstrate that, like the normally-sighted, late-sighted individuals’ vernier acuity exceeds their resolution acuity, thereby rendering it a hyperacuity. However, the extent of this hyperacuity is weaker than observed in normally-sighted controls, pointing to partial limitations in postsurgical skill acquisition. Despite these constraints, our findings point to the feasibility of forming some integrative circuits in the visual system even when inputs are severely compromised, and to the availability of some residual plasticity late in childhood, with implications for the rehabilitation prospects of children following treatment for congenital cataracts.
</summary>
<dc:date>2025-02-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laser‐Enabled Fabrication of Flexible Printed Electronics with Integrated Functional Devices</title>
<link href="https://hdl.handle.net/1721.1/162811" rel="alternate"/>
<author>
<name>Babatain, Wedyan</name>
</author>
<author>
<name>Park, Christine</name>
</author>
<author>
<name>Ishii, Hiroshi</name>
</author>
<author>
<name>Gershenfeld, Neil</name>
</author>
<id>https://hdl.handle.net/1721.1/162811</id>
<updated>2026-03-08T03:24:49Z</updated>
<published>2025-03-04T00:00:00Z</published>
<summary type="text">Laser‐Enabled Fabrication of Flexible Printed Electronics with Integrated Functional Devices
Babatain, Wedyan; Park, Christine; Ishii, Hiroshi; Gershenfeld, Neil
The demand for flexible and printed electronics in wearable and soft robotics applications has increased the need for scalable, additive manufacturing processes. However, traditional printed circuit board manufacturing involves complex, multistep processes, is limited to certain substrates, and faces challenges in integrating functional devices. Here, an additive, laser-enabled process is introduced for fabricating flexible, double-sided printed electronics leveraging laser-induced graphene (LIG) as a seed layer for selective copper electrodeposition (E-LIG). This technique enables precise conductive circuit patterning down to 50 µm and reliable via formation in a single streamlined process. E-LIG supports transfer to various substrates, allowing for large-area electronics up to 100 cm², broadening applications in large-scale interfaces. Functional LIG device integration, including sensors and actuators, directly interfaced with control circuits on a single substrate is demonstrated. Applications such as real-time graphical output and interactive interfacing showcase the method’s versatility. E-LIG exhibits repairability for on-demand restoration of damaged circuits, enhancing durability and offering a scalable, cost-effective solution for multifunctional printed electronics.
</summary>
<dc:date>2025-03-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding housing market responses to stringent energy codes</title>
<link href="https://hdl.handle.net/1721.1/162810" rel="alternate"/>
<author>
<name>Muzio, Maria Jimena</name>
</author>
<author>
<name>Niu, Dongxiao</name>
</author>
<author>
<name>Steil, Justin</name>
</author>
<author>
<name>Zheng, Siqi</name>
</author>
<id>https://hdl.handle.net/1721.1/162810</id>
<updated>2026-03-08T03:24:48Z</updated>
<published>2025-03-16T00:00:00Z</published>
<summary type="text">Understanding housing market responses to stringent energy codes
Muzio, Maria Jimena; Niu, Dongxiao; Steil, Justin; Zheng, Siqi
Increased energy efficiency in buildings is essential to reducing carbon emissions and addressing climate change. Massachusetts' Green Communities Act of 2008, aiming for a 50% reduction in carbon emissions by 2030 and net-zero by 2050, mandates the Stretch Energy Code for eligibility for state funding. This code requires new residential constructions to meet stringent Home Energy Rating System (HERS) Index scores. While these requirements benefit the environment, they may increase construction costs, affecting housing production and affordability. Using the staggered municipal adoption of the Stretch Energy Code to tease out causal relationships, we analyze the effects of the Stretch Energy Code on housing quantity and price across municipalities in Massachusetts. The results indicate that more energy-efficient single-family properties command a sales price premium of 4.0%, and the Stretch Energy Code adoption is associated with a decrease in the quantity of new single-family housing starts. Approximately 45.5% of the price increase is due to higher willingness to pay for energy-efficient homes, with the remainder attributed to reduced housing supply. Our article is particularly relevant as policymakers seek to balance the objectives and address the tensions between “E” and “S” in their “ESG” policy packages.
</summary>
<dc:date>2025-03-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Knowing to infinity: Full knowledge and the margin‐for‐error principle</title>
<link href="https://hdl.handle.net/1721.1/162809" rel="alternate"/>
<author>
<name>Fiat, Yonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/162809</id>
<updated>2026-03-08T03:24:47Z</updated>
<published>2025-03-18T00:00:00Z</published>
<summary type="text">Knowing to infinity: Full knowledge and the margin‐for‐error principle
Fiat, Yonathan
Let’s say that I fully know that &#119901; if I know that &#119901;, I know that I know that &#119901;, I know that I know that I know that &#119901;, and so on. Let’s say that I partially know that &#119901; if I know that &#119901; but I don’t fully know that &#119901;. What, if anything, do I fully know? What, if anything, do I partially know? One response in the literature is that I fully know everything that I know; partial knowledge is impossible. This response is in tension with a plausible margin-for-error principle on knowledge. A different response in the literature is that I don’t fully know anything; everything that I know, I partially know. Recently, Goldstein (forthcoming, 2024) defended a third view, according to which I fully know some things and I partially know other things. While this seems plausible, Goldstein’s account is based on denying the margin-for-error principle. In this paper, I show that the possibility of both full knowledge and partial knowledge is consistent with the margin-for-error principle. I also argue that the resulting picture of knowledge is well-motivated.
</summary>
<dc:date>2025-03-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cleavable Strand‐Fusing Cross‐Linkers as Additives for Chemically Deconstructable Thermosets with Preserved Thermomechanical Properties</title>
<link href="https://hdl.handle.net/1721.1/162808" rel="alternate"/>
<author>
<name>Zhang, Shuyi</name>
</author>
<author>
<name>Xu, Zhenchuang</name>
</author>
<author>
<name>Husted, Keith E. L.</name>
</author>
<author>
<name>Lundberg, David J.</name>
</author>
<author>
<name>Brown, Christopher M.</name>
</author>
<author>
<name>Wang, Yuyan</name>
</author>
<author>
<name>Shieh, Peyton</name>
</author>
<author>
<name>Ko, Kwangwook</name>
</author>
<author>
<name>Moore, Jeffrey S.</name>
</author>
<author>
<name>Johnson, Jeremiah A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162808</id>
<updated>2026-03-08T03:24:50Z</updated>
<published>2025-03-27T00:00:00Z</published>
<summary type="text">Cleavable Strand‐Fusing Cross‐Linkers as Additives for Chemically Deconstructable Thermosets with Preserved Thermomechanical Properties
Zhang, Shuyi; Xu, Zhenchuang; Husted, Keith E. L.; Lundberg, David J.; Brown, Christopher M.; Wang, Yuyan; Shieh, Peyton; Ko, Kwangwook; Moore, Jeffrey S.; Johnson, Jeremiah A.
Permanently cross-linked polymer networks—thermosets—are often difficult to chemically deconstruct. The installation of cleavable bonds into the strands of thermosets using cleavable comonomers as additives can facilitate thermoset deconstruction without replacement of permanent cross-links, but such monomers can lead to reduced thermomechanical properties and require high loadings to function effectively, motivating the design of new and optimal cleavable additives. Here, we introduce “strand-fusing cross-linkers” (SFCs), which fuse two network strands via a four-way cleavable cross-link. SFCs enable deconstruction of model polydicyclopentadiene (pDCPD) thermosets with as little as one-fifth of the molar loading needed to achieve deconstruction using traditional cleavable comonomers. SFCs function under traditional oven curing as well as low-energy frontal ring-opening metathesis polymerization (FROMP) conditions and lead to improved thermomechanical properties, for example, glass transition temperatures, compared to prior cleavable comonomer designs. This work motivates the development of increasingly improved cleavable additives to enable thermoset deconstruction without compromising material performance.
</summary>
<dc:date>2025-03-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Great Observatory for Long Wavelengths (GO-LoW) NIAC Phase I Final Report</title>
<link href="https://hdl.handle.net/1721.1/162807" rel="alternate"/>
<author>
<name>Knapp, Mary</name>
</author>
<author>
<name>Paritsky, Lenny</name>
</author>
<author>
<name>Kononov, Ekaterina</name>
</author>
<author>
<name>Kao, Melodie M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162807</id>
<updated>2025-09-27T03:01:06Z</updated>
<published>2024-02-21T00:00:00Z</published>
<summary type="text">Great Observatory for Long Wavelengths (GO-LoW) NIAC Phase I Final Report
Knapp, Mary; Paritsky, Lenny; Kononov, Ekaterina; Kao, Melodie M.
</summary>
<dc:date>2024-02-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neo-Panamax Decarbonization via Microreactor Propulsion Conversion</title>
<link href="https://hdl.handle.net/1721.1/162806" rel="alternate"/>
<author>
<name>Kang, Richard</name>
</author>
<author>
<name>Izurieta Torres, Jose</name>
</author>
<author>
<name>O’Connor, Kristen</name>
</author>
<id>https://hdl.handle.net/1721.1/162806</id>
<updated>2026-03-08T03:25:00Z</updated>
<published>2025-01-01T00:00:00Z</published>
<summary type="text">Neo-Panamax Decarbonization via Microreactor Propulsion Conversion
Kang, Richard; Izurieta Torres, Jose; O’Connor, Kristen
This study presents a comprehensive feasibility assessment for retrofitting a Neo-Panamax (NPX) container vessel with nuclear microreactor propulsion to contribute to decarbonization of commercial shipping. The project selected a 12,000 TEU container vessel as a baseline hull and replaced its WinGD 7x92-B diesel engine and auxiliary generators with two MIT-designed Organically Cooled Reactors (OCRs), each paired with a 27 MW Mitsubishi steam turbine generator and a Leonardo DRS 36.5 MW direct-drive electric motor. Detailed Computer-Aided Design (CAD) modeling and Finite Element Analysis (FEA) were used to validate seakeeping performance, optimize system arrangements, and verify the structural integrity of deck reinforcements under static and buckling loads. Stability and damaged-condition survivability were evaluated using MAXSURF, demonstrating intact and damaged American Bureau of Shipping (ABS) compliance across operational load cases. Seakeeping analyses at sea states 4–9 confirmed that motions remain within recoverable righting-arm limits. A bottom-up financial analysis compared lifecycle costs over 25 years, showing that the retrofit’s $540M total cost—including capital, operations, maintenance, nuclear fuel, and nuclear insurance—is significantly lower than the $946M projected lifecycle cost of a conventional NPX and yields $405–806M in net savings when accounting for impending carbon taxes. Key regulatory challenges, including the absence of propulsion-specific nuclear regulations and port-entry protocols, were identified as primary non-technical hurdles, with emerging frameworks from industry consortia offering pathways to implementation. Nuclear microreactor retrofits can be technically and economically viable for large commercial vessels, positioning them as a potent strategy to meet the International Maritime Organization’s (IMO) net-zero targets by 2050.
</summary>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Center for Real Estate</title>
<link href="https://hdl.handle.net/1721.1/162803" rel="alternate"/>
<author>
<name>Zheng, Siqi</name>
</author>
<id>https://hdl.handle.net/1721.1/162803</id>
<updated>2025-09-26T03:10:15Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Center for Real Estate
Zheng, Siqi
This report contains the following sections: Executive Summary (Goals, Objectives, Priorities); Accomplishments; Administrative Initiatives; Finances and Funding; Personnel Information; Teaching and Curriculum; and Research Activities.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and testing of flat-panel pixel electrospray thrusters</title>
<link href="https://hdl.handle.net/1721.1/162802" rel="alternate"/>
<author>
<name>Nachtigal, Catherine J.</name>
</author>
<author>
<name>Lozano, Paulo C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162802</id>
<updated>2025-09-25T07:31:46Z</updated>
<published>2025-04-30T00:00:00Z</published>
<summary type="text">Design and testing of flat-panel pixel electrospray thrusters
Nachtigal, Catherine J.; Lozano, Paulo C.
Electrospray thrusters are a promising form of electric propulsion due to their compactness and high mass efficiency, making them advantageous in most mission scenarios, especially for small spacecraft. These thrusters operate through the emission of charged particles from an electrically-conductive liquid flowing inside an array of capillaries or sharp permeable structures by applying a potential difference between the liquid and a downstream extractor electrode. Emission is most efficient when operated in the pure ionic regime (PIR), with recent designs utilizing sharp porous structures to transport the liquid and provide electric field enhancement to induce ion evaporation. However, these structures are often difficult to manufacture uniformly at the scales required to ensure stable PIR emission. Existing electrospray thrusters also suffer in reliability due to the monolithic nature of their extractor design, which is prone to induce full array failure upon the short-circuiting of a single emitter structure. These issues can be mitigated by a design that utilizes (1) a flat-panel array configuration, where the geometry and arrangement of each emitter element meets the physical requirements that ensure consistent manufacturing and PIR operation, and (2) a series of fuses interconnecting individual extractor rings for each emitter structure, which would break upon a short circuit, protecting the rest of the extractors in the array in case of a single emitter short. These fuses would allow each emitter to function as a pixel on an LED screen, where the outage of a single pixel does not prevent the remaining pixels from producing the rest of the image. Through this research, an emitter design is fabricated with properties that favor PIR emission, as a capillary fabricated on top of a porous glass substrate. The required starting voltage based on this approach is simulated and a preliminary characterization is performed using a non-integrated extractor.
Though degradation of the emitter is experienced over time due to the preliminary extractor set-up, it is found that the emitter capillary can properly wick propellant and operate at moderate voltages for tens of minutes.
</summary>
<dc:date>2025-04-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>TutorUp: What If Your Students Were Simulated? Training Tutors to Address Engagement Challenges in Online Learning</title>
<link href="https://hdl.handle.net/1721.1/162801" rel="alternate"/>
<author>
<name>Pan, Sitong</name>
</author>
<author>
<name>Schmucker, Robin</name>
</author>
<author>
<name>Garcia Bulle Bueno, Bernardo</name>
</author>
<author>
<name>Llanes, Salome Aguilar</name>
</author>
<author>
<name>Albo Alarcón, Fernanda</name>
</author>
<author>
<name>Zhu, Hangxiao</name>
</author>
<author>
<name>Teo, Adam</name>
</author>
<author>
<name>Xia, Meng</name>
</author>
<id>https://hdl.handle.net/1721.1/162801</id>
<updated>2025-09-25T07:31:44Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">TutorUp: What If Your Students Were Simulated? Training Tutors to Address Engagement Challenges in Online Learning
Pan, Sitong; Schmucker, Robin; Garcia Bulle Bueno, Bernardo; Llanes, Salome Aguilar; Albo Alarcón, Fernanda; Zhu, Hangxiao; Teo, Adam; Xia, Meng
With the rise of online learning, many novice tutors lack experience engaging students remotely. We introduce TutorUp, a Large Language Model (LLM)-based system that enables novice tutors to practice engagement strategies with simulated students through scenario-based training. Based on a formative study involving two surveys (N1 = 86, N2 = 102) on student engagement challenges, we summarize scenarios that mimic real teaching situations. To enhance immersion and realism, we employ a prompting strategy that simulates dynamic online learning dialogues. TutorUp provides immediate and asynchronous feedback by referencing tutor–student online session dialogues and evidence-based teaching strategies from learning science literature. In a within-subject evaluation (N = 16), participants rated TutorUp significantly higher than a baseline system without simulation capabilities regarding effectiveness and usability. Our findings suggest that TutorUp provides novice tutors with more effective training to learn and apply teaching strategies to address online student engagement challenges.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ionic liquid electrospray beam target performance characterization</title>
<link href="https://hdl.handle.net/1721.1/162800" rel="alternate"/>
<author>
<name>Arestie, Steven M.</name>
</author>
<author>
<name>Marrese-Reading, Colleen M.</name>
</author>
<author>
<name>Shaik, Saba Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/162800</id>
<updated>2025-09-25T07:31:42Z</updated>
<published>2025-04-01T00:00:00Z</published>
<summary type="text">Ionic liquid electrospray beam target performance characterization
Arestie, Steven M.; Marrese-Reading, Colleen M.; Shaik, Saba Z.
Electrospray thruster ground testing, with well understood facility effects, is of critical importance to qualify the technology for long duration flight missions. While there has been substantial work to understand the beam physics and plume dynamics of electrospray thrusters and the implications thereof on performance and lifetime, work to understand the impact of facility effects has been neglected until recently. Interactions between an electrospray plume and the vacuum chamber test facility have implications on both performance and lifetime. Therefore, any effort to characterize electrospray thruster performance and lifetime must be undertaken with an understanding of facility effects. In some ways, this is no different than the significant investment that has been made to understand the facility effects for plasma thruster testing. However, there are different challenges with the management of positively charged, negatively charged, and neutral propellant particles across a distribution of particle charge and mass when testing electrospray thrusters in a vacuum chamber. The focus of this paper is to characterize the significance of secondary particles from the impact of ionic liquid electrosprays with a beam target, and the influence of a novel beam target design and biasing. Results on secondary current and mass flux measurements are presented with some initial results on secondary time-of-flight measurements from the beam target. Additionally, beam target modeling results are presented to support the experiments and interpretation of the results. The results revealed secondary particles with an average charge-to-mass ratio as low as 31 C/kg, and that an improperly biased beam target, or no beam target, can artificially inflate emitted current due to electron back streaming by as much as 20%. The experimental and modeling results suggest an optimized beam target and screen voltage of -100 V and -200 V, respectively.
If no consideration of facility effects is included in testing electrospray thrusters, performance, reliability, and lifetime can be adversely affected, and premature thruster failure may result. The work presented here improves our understanding of facility effects and our capabilities to mitigate them to successfully qualify and acceptance test electrospray thrusters for flight.
</summary>
<dc:date>2025-04-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Semi‐Automated, High‐Throughput Approach for the Synthesis and Identification of Highly Photo‐Cytotoxic Iridium Complexes</title>
<link href="https://hdl.handle.net/1721.1/162799" rel="alternate"/>
<author>
<name>Kench, Timothy</name>
</author>
<author>
<name>Rahardjo, Arielle</name>
</author>
<author>
<name>Terrones, Gianmarco G</name>
</author>
<author>
<name>Bellamkonda, Adinarayana</name>
</author>
<author>
<name>Maher, Thomas E</name>
</author>
<author>
<name>Storch, Marko</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Vilar, Ramon</name>
</author>
<id>https://hdl.handle.net/1721.1/162799</id>
<updated>2025-09-25T07:31:40Z</updated>
<published>2024-02-26T00:00:00Z</published>
<summary type="text">A Semi‐Automated, High‐Throughput Approach for the Synthesis and Identification of Highly Photo‐Cytotoxic Iridium Complexes
Kench, Timothy; Rahardjo, Arielle; Terrones, Gianmarco G; Bellamkonda, Adinarayana; Maher, Thomas E; Storch, Marko; Kulik, Heather J; Vilar, Ramon
The discovery of new compounds with pharmacological properties is usually a lengthy, laborious and expensive process. Thus, there is increasing interest in developing workflows that allow for the rapid synthesis and evaluation of libraries of compounds with the aim of identifying leads for further drug development. Herein, we apply combinatorial synthesis to build a library of 90 iridium(III) complexes (81 of which are new) over two synthesise‐and‐test cycles, with the aim of identifying potential agents for photodynamic therapy. We demonstrate the power of this approach by identifying highly active complexes that are well‐tolerated in the dark but display very low nM phototoxicity against cancer cells. To build a detailed structure–activity relationship for this class of compounds we have used density functional theory (DFT) calculations to determine some key electronic parameters and study correlations with the experimental data. Finally, we present an optimised semi‐automated synthesise‐and‐test protocol to obtain multiplex data within 72 hours.
</summary>
<dc:date>2024-02-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Amplified HF Release and Polymer Deconstruction Cascades Triggered by Mechanical Force</title>
<link href="https://hdl.handle.net/1721.1/162798" rel="alternate"/>
<author>
<name>Hu, Yixin</name>
</author>
<author>
<name>Wang, Liqi</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Wang, Shu</name>
</author>
<author>
<name>Chiou, Chun-Yu</name>
</author>
<author>
<name>Shieh, Peyton</name>
</author>
<author>
<name>Lin, Yangju</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Johnson, Jeremiah A</name>
</author>
<author>
<name>Craig, Stephen L</name>
</author>
<id>https://hdl.handle.net/1721.1/162798</id>
<updated>2025-09-25T07:31:37Z</updated>
<published>2024-03-30T00:00:00Z</published>
<summary type="text">Self-Amplified HF Release and Polymer Deconstruction Cascades Triggered by Mechanical Force
Hu, Yixin; Wang, Liqi; Kevlishvili, Ilia; Wang, Shu; Chiou, Chun-Yu; Shieh, Peyton; Lin, Yangju; Kulik, Heather J; Johnson, Jeremiah A; Craig, Stephen L
Hydrogen fluoride (HF) is a versatile reagent for material transformation, with applications in self-immolative polymers, remodeled siloxanes, and degradable polymers. The responsive in situ generation of HF in materials therefore holds promise for new classes of adaptive material systems. Here, we report the mechanochemically coupled generation of HF from alkoxy-gem-difluorocyclopropane (gDFC) mechanophores derived from the addition of difluorocarbene to enol ethers. Production of HF involves an initial mechanochemically assisted rearrangement of gDFC mechanophore to α-fluoro allyl ether whose regiochemistry involves preferential migration of fluoride to the alkoxy-substituted carbon, and ab initio steered molecular dynamics simulations reproduce the observed selectivity and offer insights into the mechanism. When the alkoxy gDFC mechanophore is derived from poly(dihydrofuran), the α-fluoro allyl ether undergoes subsequent hydrolysis to generate 1 equiv of HF and cleave the polymer chain. The hydrolysis is accelerated via acid catalysis, leading to self-amplifying HF generation and concomitant polymer degradation. The mechanically generated HF can be used in combination with fluoride indicators to generate an optical response and to degrade polybutadiene with embedded HF-cleavable silyl ethers (11 mol %). The alkoxy-gDFC mechanophore thus provides a mechanically coupled mechanism of releasing HF for polymer remodeling pathways that complements previous thermally driven mechanisms.
</summary>
<dc:date>2024-03-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Thermally Stable SO2-Releasing Mechanophore: Facile Activation, Single-Event Spectroscopy, and Molecular Dynamic Simulations</title>
<link href="https://hdl.handle.net/1721.1/162797" rel="alternate"/>
<author>
<name>Sun, Yunyan</name>
</author>
<author>
<name>Neary, William J</name>
</author>
<author>
<name>Huang, Xiao</name>
</author>
<author>
<name>Kouznetsova, Tatiana B</name>
</author>
<author>
<name>Ouchi, Tetsu</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Wang, Kecheng</name>
</author>
<author>
<name>Chen, Yingying</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Craig, Stephen L</name>
</author>
<author>
<name>Moore, Jeffrey S</name>
</author>
<id>https://hdl.handle.net/1721.1/162797</id>
<updated>2025-09-25T07:31:35Z</updated>
<published>2024-04-06T00:00:00Z</published>
<summary type="text">A Thermally Stable SO2-Releasing Mechanophore: Facile Activation, Single-Event Spectroscopy, and Molecular Dynamic Simulations
Sun, Yunyan; Neary, William J; Huang, Xiao; Kouznetsova, Tatiana B; Ouchi, Tetsu; Kevlishvili, Ilia; Wang, Kecheng; Chen, Yingying; Kulik, Heather J; Craig, Stephen L; Moore, Jeffrey S
Polymers that release small molecules in response to mechanical force are promising candidates as next-generation on-demand delivery systems. Despite advancements in the development of mechanophores for releasing diverse payloads through careful molecular design, the availability of scaffolds capable of discharging biomedically significant cargos in substantial quantities remains scarce. In this report, we detail a nonscissile mechanophore built from an 8-thiabicyclo[3.2.1]octane 8,8-dioxide (TBO) motif that releases one equivalent of sulfur dioxide (SO2) from each repeat unit. The TBO mechanophore exhibits high thermal stability but is activated mechanochemically using solution ultrasonication in either organic solvent or aqueous media with up to 63% efficiency, equating to 206 molecules of SO2 released per 143.3 kDa chain. We quantified the mechanochemical reactivity of TBO by single-molecule force spectroscopy and resolved its single-event activation. The force-coupled rate constant for TBO opening reaches ∼9.0 s–1 at ∼1520 pN, and each reaction of a single TBO domain releases a stored length of ∼0.68 nm. We investigated the mechanism of TBO activation using ab initio steered molecular dynamic simulations and rationalized the observed stereoselectivity. These comprehensive studies of the TBO mechanophore provide a mechanically coupled mechanism of multi-SO2 release from one polymer chain, facilitating the translation of polymer mechanochemistry to potential biomedical applications.
</summary>
<dc:date>2024-04-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving gas adsorption modeling for MOFs by local calibration of Hubbard U parameters</title>
<link href="https://hdl.handle.net/1721.1/162796" rel="alternate"/>
<author>
<name>Cho, Yeongsu</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162796</id>
<updated>2025-09-25T07:31:33Z</updated>
<published>2024-04-16T00:00:00Z</published>
<summary type="text">Improving gas adsorption modeling for MOFs by local calibration of Hubbard U parameters
Cho, Yeongsu; Kulik, Heather J
While computational screening with density functional theory (DFT) is frequently employed for the screening of metal–organic frameworks (MOFs) for gas separation and storage, commonly applied generalized gradient approximations (GGAs) exhibit self-interaction errors, which hinder the predictions of adsorption energies. We investigate the Hubbard U parameter to augment DFT calculations for full periodic MOFs, targeting a more precise modeling of gas molecule–MOF interactions, specifically for N2, CO2, and O2. We introduce a calibration scheme for the U parameter, which is tailored for each MOF, by leveraging higher-level calculations on the secondary building unit (SBU) of the MOF. When applied to the full periodic MOF, the U parameter calibrated against hybrid HSE06 calculations of SBUs successfully reproduces hybrid-quality calculations of the adsorption energy of the periodic MOF. The mean absolute deviation of adsorption energies reduces from 0.13 eV for a standard GGA treatment to 0.06 eV with the calibrated U, demonstrating the utility of the calibration procedure when applied to the full MOF structure. Furthermore, attempting to use coupled cluster singles and doubles with perturbative triples calculations of isolated SBUs for this calibration procedure shows varying degrees of success in predicting the experimental heat of adsorption. It improves accuracy for N2 adsorption for cases of overbinding, whereas its impact on CO2 is minimal, and ambiguities in spin state assignment hinder consistent improvements of O2 adsorption. Our findings emphasize the limitations of cluster models and advocate the use of full periodic MOF systems with a calibrated U parameter, providing a more comprehensive understanding of gas adsorption in MOFs.
</summary>
<dc:date>2024-04-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative Electron Beam‐Single Atom Interactions Enabled by Sub‐20‐pm Precision Targeting</title>
<link href="https://hdl.handle.net/1721.1/162795" rel="alternate"/>
<author>
<name>Roccapriore, Kevin M.</name>
</author>
<author>
<name>Ross, Frances M.</name>
</author>
<author>
<name>Klein, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/162795</id>
<updated>2025-09-25T07:31:30Z</updated>
<published>2025-06-25T00:00:00Z</published>
<summary type="text">Quantitative Electron Beam‐Single Atom Interactions Enabled by Sub‐20‐pm Precision Targeting
Roccapriore, Kevin M.; Ross, Frances M.; Klein, Julian
The ability to probe and control matter at the picometer scale is essential for advancing quantum and energy technologies. Scanning transmission electron microscopy offers powerful capabilities for materials analysis and modification, but sample damage, drift, and scan distortions hinder single atom analysis and deterministic manipulation. Materials analysis and modification via electron–solid interactions can be transformed by precise delivery of electrons to a specified atomic location, maintaining the beam position despite drift, and minimizing collateral dose. Here a fast, low-dose, sub-20-pm precision electron beam positioning technique is developed, “atomic lock-on” (ALO), which offers the ability to position the beam on a specific atomic column without previously irradiating that column. This technique is used to lock onto a single selected atomic location to repeatedly measure its weak electron energy loss signal despite sample drift. Moreover, electron beam-matter interactions in single atomic events are measured with μs time resolution. This enables observation of single-atom dynamics, such as atomic bistability, revealing partially bonded atomic configurations and recapture phenomena. This opens prospects for using electron microscopy for high-precision measurements and deterministic control of matter for quantum technologies.
</summary>
<dc:date>2025-06-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing resilience with natural growth targeting</title>
<link href="https://hdl.handle.net/1721.1/162794" rel="alternate"/>
<author>
<name>Orphanides, Athanasios</name>
</author>
<id>https://hdl.handle.net/1721.1/162794</id>
<updated>2025-09-25T07:31:31Z</updated>
<published>2025-02-18T00:00:00Z</published>
<summary type="text">Enhancing resilience with natural growth targeting
Orphanides, Athanasios
Despite a number of helpful changes, including the adoption of an inflation target, the Fed's monetary policy strategy proved insufficiently resilient in recent years. While the Fed eased policy appropriately during the pandemic, it fell behind the curve during the post-pandemic recovery. During 2021, the Fed kept easing policy while the inflation outlook was deteriorating and the economy was growing considerably faster than the economy's natural growth rate—the sum of the Fed's 2% inflation goal and the growth rate of potential output. The resilience of the Fed's monetary policy strategy could be enhanced, and such errors avoided, with guidance from a simple natural growth targeting rule that prescribes that the federal funds rate during each quarter be raised (cut) when projected nominal income growth exceeds (falls short of) the economy's natural growth rate. An illustration with real-time data and forecasts since the early 1990s shows that Fed policy has not persistently deviated from this simple rule, with the notable exception of the period coinciding with the Fed's post-pandemic policy error.
</summary>
<dc:date>2025-02-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>N‐Heterocyclic Carbene‐Based Copolymers for Templated Synthesis and Stabilization of Gold Nanoparticles</title>
<link href="https://hdl.handle.net/1721.1/162793" rel="alternate"/>
<author>
<name>Nguyen, Suong T.</name>
</author>
<author>
<name>Brown, Christopher M.</name>
</author>
<author>
<name>Zhang, Wenxu</name>
</author>
<author>
<name>Kilgallon, Landon J.</name>
</author>
<author>
<name>Johnson, Jeremiah A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162793</id>
<updated>2025-09-25T07:31:23Z</updated>
<published>2025-02-17T00:00:00Z</published>
<summary type="text">N‐Heterocyclic Carbene‐Based Copolymers for Templated Synthesis and Stabilization of Gold Nanoparticles
Nguyen, Suong T.; Brown, Christopher M.; Zhang, Wenxu; Kilgallon, Landon J.; Johnson, Jeremiah A.
Surface functionalization and colloidal stability are pivotal for numerous applications of gold nanoparticles (Au-NPs). Over the past decade, N-heterocyclic carbenes (NHCs) have emerged as promising ligands for stabilizing Au-NPs owing to their ease of synthesis, structural diversity, and strong metal-ligand bonds. Here, we introduce new Au(I)–NHC copolymer scaffolds as precursors to multidentate NHC-protected Au-NPs. Ring-opening metathesis copolymerization of a norbornene-appended Au(I)−NHC complex with another functionalized norbornene comonomer provides NHC–Au(I) copolymers with modular compositions and structures. Upon reduction, these copolymers yield multidentate polyNHC-coated Au-NPs with varied properties and corona functionalities dictated by the secondary monomer. These nanoparticles exhibit excellent size homogeneity and stability against aggregation in various buffers, cell culture media, and under exposure to electrolytes, oxidants, and exogenous thiols over extended periods. Moreover, we demonstrate post-synthetic surface functionalization reactions of polyNHC−Au-NPs while maintaining colloidal stability, highlighting their robustness and potential for applications such as bioconjugation. Overall, these findings underscore the potential of ROMP-derived NHC-containing copolymers as highly tunable and versatile multidentate ligands that may be suitable for other inorganic colloids and flat surfaces.
</summary>
<dc:date>2025-02-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simultaneous 3D quantitative magnetization transfer imaging and susceptibility mapping</title>
<link href="https://hdl.handle.net/1721.1/162792" rel="alternate"/>
<author>
<name>Jang, Albert</name>
</author>
<author>
<name>Chan, Kwok‐Shing</name>
</author>
<author>
<name>Mareyam, Azma</name>
</author>
<author>
<name>Stockmann, Jason</name>
</author>
<author>
<name>Huang, Susie Yi</name>
</author>
<author>
<name>Wang, Nian</name>
</author>
<author>
<name>Jang, Hyungseok</name>
</author>
<author>
<name>Lee, Hong‐Hsi</name>
</author>
<author>
<name>Liu, Fang</name>
</author>
<id>https://hdl.handle.net/1721.1/162792</id>
<updated>2025-09-25T07:31:27Z</updated>
<published>2025-03-17T00:00:00Z</published>
<summary type="text">Simultaneous 3D quantitative magnetization transfer imaging and susceptibility mapping
Jang, Albert; Chan, Kwok‐Shing; Mareyam, Azma; Stockmann, Jason; Huang, Susie Yi; Wang, Nian; Jang, Hyungseok; Lee, Hong‐Hsi; Liu, Fang
Purpose: Introduce a unified acquisition and modeling strategy to simultaneously quantify magnetization transfer (MT), tissue susceptibility (χ), and T2*.
Theory and Methods: Magnetization transfer is induced through the application of off-resonance irradiation between excitation and acquisition of an RF-spoiled gradient-echo scheme, where free pool spin–lattice relaxation (T1F), macromolecular proton fraction (f), and magnetization exchange rate (kF) were calculated by modeling the magnitude of the MR signal using a binary spin-bath MT model with B1+ inhomogeneity correction via the Bloch-Siegert shift. Simultaneously, a multi-echo acquisition is incorporated into this framework to measure the time evolution of both signal magnitude and phase, which was further modeled for estimating T2* and tissue susceptibility. In this work, we demonstrate the feasibility of this new acquisition and modeling strategy in vivo on brain tissue.
Results: In vivo brain experiments were conducted on five healthy subjects to validate our method. Utilizing an analytically derived signal model, we simultaneously obtained 3D T1F, f, kF, χ, and T2* maps of the whole brain. Our results from the brain regional analysis show good agreement with those previously reported in the literature, which used separate MT and QSM methods.
Conclusion: A unified acquisition and modeling strategy based on an analytical signal model that fully leverages both the magnitude and phase of the acquired signals was demonstrated and validated for simultaneous MT, susceptibility, and T2* quantification free from B1+ bias.
</summary>
<dc:date>2025-03-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Development of Carbon Markets in Upper‐Middle‐Income Countries</title>
<link href="https://hdl.handle.net/1721.1/162791" rel="alternate"/>
<author>
<name>Stek, Pieter E.</name>
</author>
<author>
<name>Lima‐de‐Oliveira, Renato</name>
</author>
<author>
<name>Vasudhevan, Thessa</name>
</author>
<id>https://hdl.handle.net/1721.1/162791</id>
<updated>2025-09-25T07:31:18Z</updated>
<published>2025-03-05T00:00:00Z</published>
<summary type="text">The Development of Carbon Markets in Upper‐Middle‐Income Countries
Stek, Pieter E.; Lima‐de‐Oliveira, Renato; Vasudhevan, Thessa
Upper-middle-income economies face a specific set of trade-offs when reducing carbon emissions, which differ from the trade-offs faced in low- and high-income economies. To mobilize domestic funds, middle-income countries are developing carbon markets to attract private sector investment. This study advances a theoretical framework for carbon market development and explores the process in Brazil, Indonesia, and Malaysia. The case of Malaysia is examined in depth due to the slow development of its carbon market compared to its peers. Analysis reveals that Malaysia faces a carbon market dilemma due to high domestic emissions and internal challenges related to energy market regulation and land ownership, which have hindered the emergence of a pro-carbon market coalition. In contrast, Brazil and Indonesia have been more active in the international voluntary carbon market and have implemented key regulations with domestic political support. This study provides insights into the challenges and opportunities of carbon market development in middle-income economies, highlighting the importance of resource endowments and an enabling coalition for successful implementation.
</summary>
<dc:date>2025-03-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quality Disclosure and Regulation: Scoring Design in Medicare Advantage</title>
<link href="https://hdl.handle.net/1721.1/162790" rel="alternate"/>
<author>
<name>Vatter, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162790</id>
<updated>2025-09-25T07:31:25Z</updated>
<published>2025-06-10T00:00:00Z</published>
<summary type="text">Quality Disclosure and Regulation: Scoring Design in Medicare Advantage
Vatter, Benjamin
Policymakers and market intermediaries often use quality scores to alleviate asymmetric information about product quality. Scores affect the demand for quality and, in equilibrium, its supply. Equilibrium effects break the rule whereby more information is always better, and the optimal design of scores must account for them. In the context of Medicare Advantage, I find that consumers' information is limited, and quality is inefficiently low. A simple design alleviates these issues and increases total welfare by 3.7 monthly premiums. More than half of the gains stem from scores' effect on quality rather than information. Scores can outperform full-information outcomes by regulating inefficient oligopolistic quality provision, and a binary certification of quality attains 98% of this welfare. Scores are informative even when coarse; firms' incentives are to produce quality at the scoring threshold, which consumers know. The primary design challenge of scores is to dictate thresholds and thus regulate quality.
</summary>
<dc:date>2025-06-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Chemiresistive Behavior in Conductive Polymer/MOF Composites</title>
<link href="https://hdl.handle.net/1721.1/162789" rel="alternate"/>
<author>
<name>Roh, Heejung</name>
</author>
<author>
<name>Kim, Dong‐Ha</name>
</author>
<author>
<name>Cho, Yeongsu</name>
</author>
<author>
<name>Jo, Young‐Moo</name>
</author>
<author>
<name>del Alamo, Jesús A</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Dincă, Mircea</name>
</author>
<author>
<name>Gumyusenge, Aristide</name>
</author>
<id>https://hdl.handle.net/1721.1/162789</id>
<updated>2025-09-25T07:31:29Z</updated>
<published>2024-04-17T00:00:00Z</published>
<summary type="text">Robust Chemiresistive Behavior in Conductive Polymer/MOF Composites
Roh, Heejung; Kim, Dong‐Ha; Cho, Yeongsu; Jo, Young‐Moo; del Alamo, Jesús A; Kulik, Heather J; Dincă, Mircea; Gumyusenge, Aristide
Metal-organic frameworks (MOFs) are promising materials for gas sensing but are often limited to single-use detection. A hybridization strategy is demonstrated synergistically deploying conductive MOFs (cMOFs) and conductive polymers (cPs) as two complementary mixed ionic-electronic conductors in high-performing stand-alone chemiresistors. This work presents significant improvement in i) sensor recovery kinetics, ii) cycling stability, and iii) dynamic range at room temperature. The effect of hybridization across well-studied cMOFs is demonstrated based on 2,3,6,7,10,11-hexahydroxytriphenylene (HHTP) and 2,3,6,7,10,11-hexaiminotriphenylene (HITP) ligands with varied metal nodes (Co, Cu, Ni). A comprehensive mechanistic study is conducted to relate energy band alignments at the heterojunctions between the MOFs and the polymer with sensing thermodynamics and binding kinetics. The findings reveal that hole enrichment of the cMOF component upon hybridization leads to selective enhancement in desorption kinetics, enabling significantly improved sensor recovery at room temperature, and thus long-term response retention. This mechanism is further supported by density functional theory calculations on sorbate–analyte interactions. It is also found that alloying cPs and cMOFs enables facile thin film co-processing and device integration, potentially unlocking the use of these hybrid conductors in diverse electronic applications.
</summary>
<dc:date>2024-04-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Internal Catalysis in Dynamic Hydrogels with Associative Thioester Cross-Links</title>
<link href="https://hdl.handle.net/1721.1/162788" rel="alternate"/>
<author>
<name>Zhang, Vivian</name>
</author>
<author>
<name>Ou, Carrie</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Hemmingsen, Christina M</name>
</author>
<author>
<name>Accardo, Joseph V</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Kalow, Julia A</name>
</author>
<id>https://hdl.handle.net/1721.1/162788</id>
<updated>2025-09-25T07:31:16Z</updated>
<published>2024-05-03T00:00:00Z</published>
<summary type="text">Internal Catalysis in Dynamic Hydrogels with Associative Thioester Cross-Links
Zhang, Vivian; Ou, Carrie; Kevlishvili, Ilia; Hemmingsen, Christina M; Accardo, Joseph V; Kulik, Heather J; Kalow, Julia A
Thioesters are an essential functional group in biosynthetic pathways, which has motivated their development as reactive handles in probes and peptide assembly. Thioester exchange is typically accelerated by catalysts or elevated pH. Here, we report the use of bifunctional aromatic thioesters as dynamic covalent cross-links in hydrogels, demonstrating that at physiologic pH in aqueous conditions, transthioesterification facilitates stress relaxation on the time scale of hundreds of seconds. We show that intramolecular hydrogen bonding is responsible for accelerated exchange, evident in both molecular kinetics and macromolecular stress relaxation. Drawing from concepts in the vitrimer literature, this system exemplifies how dynamic cross-links that exchange through an associative mechanism enable tunable stress relaxation without altering stiffness.
</summary>
<dc:date>2024-05-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>CH−π Interactions Are Required for Human Galectin-3 Function</title>
<link href="https://hdl.handle.net/1721.1/162787" rel="alternate"/>
<author>
<name>Diehl, Roger C</name>
</author>
<author>
<name>Chorghade, Rajeev S</name>
</author>
<author>
<name>Keys, Allison M</name>
</author>
<author>
<name>Alam, Mohammad Murshid</name>
</author>
<author>
<name>Early, Stephen A</name>
</author>
<author>
<name>Dugan, Amanda E</name>
</author>
<author>
<name>Krupkin, Miri</name>
</author>
<author>
<name>Ribbeck, Katharina</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Kiessling, Laura L</name>
</author>
<id>https://hdl.handle.net/1721.1/162787</id>
<updated>2025-09-25T07:31:05Z</updated>
<published>2024-07-18T00:00:00Z</published>
<summary type="text">CH−π Interactions Are Required for Human Galectin-3 Function
Diehl, Roger C; Chorghade, Rajeev S; Keys, Allison M; Alam, Mohammad Murshid; Early, Stephen A; Dugan, Amanda E; Krupkin, Miri; Ribbeck, Katharina; Kulik, Heather J; Kiessling, Laura L
Glycan-binding proteins, or lectins, recognize distinct structural elements of polysaccharides, to mediate myriad biological functions. Targeting glycan-binding proteins involved in human disease has been challenging due to an incomplete understanding of the molecular mechanisms that govern protein-glycan interactions. Bioinformatics and structural studies of glycan-binding proteins indicate that aromatic residues with the potential for CH-π interactions are prevalent in glycan-binding sites. However, the contributions of these CH-π interactions to glycan binding and their relevance in downstream function remain unclear. An emblematic lectin, human galectin-3, recognizes lactose and &lt;i&gt;N&lt;/i&gt;-acetyllactosamine-containing glycans by positioning the electropositive face of a galactose residue over the tryptophan 181 (W181) indole forming a CH-π interaction. We generated a suite of galectin-3 W181 variants to assess the importance of these CH-π interactions to glycan binding and function. As determined experimentally and further validated with computational modeling, variants with smaller or less electron-rich aromatic side chains (W181Y, W181F, W181H) or sterically similar but nonaromatic residues (W181M, W181R) showed poor or undetectable binding to lactose and attenuated ability to bind mucins or agglutinate red blood cells. The latter functions depend on multivalent binding, highlighting that weakened CH-π interactions cannot be overcome by avidity. Two galectin-3 variants with disrupted hydrogen bonding interactions (H158A and E184A) showed similarly impaired lactose binding. Molecular simulations demonstrate that all variants have decreased binding orientation stability relative to native galectin-3. Thus, W181 collaborates with the endogenous hydrogen bonding network to enhance binding affinity for lactose, and abrogation of these CH-π interactions is as deleterious as eliminating key hydrogen bonding interactions. 
These findings underscore the critical roles of CH-π interactions in carbohydrate binding and lectin function and will aid the development of novel lectin inhibitors.
</summary>
<dc:date>2024-07-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Predictions of Spin-Crossover Complex Properties through DFT Calculations with a Local Hybrid Functional</title>
<link href="https://hdl.handle.net/1721.1/162786" rel="alternate"/>
<author>
<name>Rajpurohit, Sangeeta</name>
</author>
<author>
<name>Vennelakanti, Vyshnavi</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162786</id>
<updated>2026-03-08T03:24:44Z</updated>
<published>2024-10-03T00:00:00Z</published>
<summary type="text">Improving Predictions of Spin-Crossover Complex Properties through DFT Calculations with a Local Hybrid Functional
Rajpurohit, Sangeeta; Vennelakanti, Vyshnavi; Kulik, Heather J
We conducted a study on the performance of the local hybrid exchange-correlation functional PBE0r for a set of 95 experimentally characterized iron spin-crossover (SCO) complexes [Vennelakanti, V.; &lt;i&gt;J. Chem. Phys.&lt;/i&gt; 2023, 159, 024120]. The PBE0r functional is a variant of PBE0 where the exchange correction is restricted to on-site terms formulated on the basis of local orbitals. We determine the free parameters of the PBE0r functional against the experimental data and other hybrid functionals. With a Hartree-Fock (HF) exchange factor of 4%, the PBE0r functional accurately reproduces the electronic and free-energy trends predicted in prior DFT studies for these 95 complexes by using the B3LYP functional. Larger values of HF exchange stabilize high-spin states. The PBE0r-predicted bond lengths tend to exceed the experimental bond lengths, although bond lengths are less sensitive to HF exchange than in global hybrids. The predicted SCO transition temperatures &lt;i&gt;T&lt;/i&gt;&lt;sub&gt;1/2&lt;/sub&gt; from PBE0r correlate moderately with the experimental transition temperatures, showing a slight improvement compared to the previous modB3LYP-predicted &lt;i&gt;T&lt;/i&gt;&lt;sub&gt;1/2&lt;/sub&gt;. This study suggests that the PBE0r functional is computationally cost-effective and offers the possibility of simulating larger complexes with accuracy comparable to global hybrid functionals, provided the HF-exchange parameter is carefully optimized.
</summary>
<dc:date>2024-10-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ligand‐Mediated Quantum Yield Enhancement in 1‐D Silver Organothiolate Metal–Organic Chalcogenolates</title>
<link href="https://hdl.handle.net/1721.1/162785" rel="alternate"/>
<author>
<name/>
</author>
<id>https://hdl.handle.net/1721.1/162785</id>
<updated>2026-03-08T03:24:45Z</updated>
<published>2024-12-01T00:00:00Z</published>
<summary type="text">Ligand‐Mediated Quantum Yield Enhancement in 1‐D Silver Organothiolate Metal–Organic Chalcogenolates
X‐ray free electron laser (XFEL) microcrystallography and synchrotron single‐crystal crystallography are used to evaluate the role of organic substituent position on the optoelectronic properties of metal–organic chalcogenolates (MOChas). MOChas are crystalline 1D and 2D semiconducting hybrid materials that have varying optoelectronic properties depending on composition, topology, and structure. While MOChas have attracted much interest, small crystal sizes impede routine crystal structure determination. A series of constitutional isomers where the aryl thiol is functionalized by either methoxy or methyl ester are solved by small molecule serial femtosecond X‐ray crystallography (smSFX) and single crystal rotational crystallography. While all the methoxy examples have a low quantum yield (0‐1%), the methyl ester in the &lt;i&gt;ortho&lt;/i&gt; position yields a high quantum yield of 22%. The proximity of the oxygen atoms to the silver inorganic core correlates to a considerable enhancement of quantum yield. Four crystal structures are solved at a resolution range of 0.8–1.0 Å revealing a collapse of the 2D topology for functional groups in the 2‐ and 3‐ positions, resulting in needle‐like crystals. Further analysis using density functional theory (DFT) and many‐body perturbation theory (MBPT) enables the exploration of complex excitonic phenomena within easily prepared material systems.
</summary>
<dc:date>2024-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Department of Materials Science and Engineering</title>
<link href="https://hdl.handle.net/1721.1/162784" rel="alternate"/>
<author>
<name>Anikeeva, Polina Olegovna</name>
</author>
<id>https://hdl.handle.net/1721.1/162784</id>
<updated>2025-09-26T19:02:26Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Department of Materials Science and Engineering
Anikeeva, Polina Olegovna
This report contains the following sections: Undergraduate education, Graduate education, Graduate and postdoc career support, Student organizations, Facilities, Fundraising, Personnel changes and promotions, Research highlights, Awards and honors, and Future plans.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Professional Education</title>
<link href="https://hdl.handle.net/1721.1/162783" rel="alternate"/>
<author>
<name>Pant, Bhaskar</name>
</author>
<id>https://hdl.handle.net/1721.1/162783</id>
<updated>2025-09-24T03:13:20Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Professional Education
Pant, Bhaskar
This report contains the following sections: Current Goals, Objectives, and Priorities; Accomplishments by Program; Recognition; Funding; Challenges / Prospective Solutions; and Personnel.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of the Deployable HF Vector Sensor for the AERO-VISTA Spacecraft</title>
<link href="https://hdl.handle.net/1721.1/162782" rel="alternate"/>
<author>
<name>Silver, Mark</name>
</author>
<author>
<name>Lopez, Alai</name>
</author>
<author>
<name>Howe, Daniel</name>
</author>
<author>
<name>Thompson, Erik</name>
</author>
<author>
<name>Morris, Alexander</name>
</author>
<author>
<name>Fenn, Alan</name>
</author>
<author>
<name>Knapp, Mary</name>
</author>
<author>
<name>Erickson, Philip</name>
</author>
<author>
<name>Lind, Frank</name>
</author>
<author>
<name>Paritsky, Lenny</name>
</author>
<author>
<name>Masterson, Rebecca</name>
</author>
<author>
<name>Ammons, Kristen</name>
</author>
<author>
<name>Belsten, Nicholas</name>
</author>
<author>
<name>Kononov, Ekaterina</name>
</author>
<author>
<name>Payne, Cadence</name>
</author>
<id>https://hdl.handle.net/1721.1/162782</id>
<updated>2026-03-08T03:24:43Z</updated>
<published>2024-05-13T00:00:00Z</published>
<summary type="text">Development of the Deployable HF Vector Sensor for the AERO-VISTA Spacecraft
Silver, Mark; Lopez, Alai; Howe, Daniel; Thompson, Erik; Morris, Alexander; Fenn, Alan; Knapp, Mary; Erickson, Philip; Lind, Frank; Paritsky, Lenny; Masterson, Rebecca; Ammons, Kristen; Belsten, Nicholas; Kononov, Ekaterina; Payne, Cadence
The Auroral Emissions Radio Observer (AERO) and Vector Interferometry Space Technology using AERO (VISTA) CubeSat missions will use two identical 6U CubeSats developed to measure HF auroral emissions from Low Earth Orbit for NASA’s Space Mission Directorate (SMD) for Heliophysics. Each CubeSat employs a unique antenna, called a Vector Sensor Antenna (VSA), to measure all six electromagnetic degrees of freedom of incoming HF radiation via a combination of loop, dipole and monopole antennas. The VSA payload stows into a compact volume within the 6U spacecraft, and through a series of deployments, makes a 4 m by 4 m by 2.3 m antenna array. The relatively large antenna element deployment from such a small initial volume is achieved using fiberglass composite tape springs which unroll to form the antenna elements. These tape springs fall into a class of structural elements called High Strain Composites, which are becoming more commonly used in space missions. This paper describes the development, integration and testing of the AERO-VISTA VSA payload prototype.
2024 IEEE Aerospace Conference, Big Sky, MT, USA, 2-9 March
</summary>
<dc:date>2024-05-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accessibility for Whom? Perceptions of Mobility Barriers Across Disability Groups and Implications for Designing Personalized Maps</title>
<link href="https://hdl.handle.net/1721.1/162781" rel="alternate"/>
<author>
<name>Li, Chu</name>
</author>
<author>
<name>Pang, Rock Yuren</name>
</author>
<author>
<name>Labbé, Delphine</name>
</author>
<author>
<name>Eisenberg, Yochai</name>
</author>
<author>
<name>Hosseini, Maryam</name>
</author>
<author>
<name>Froehlich, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/162781</id>
<updated>2026-03-08T03:22:01Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Accessibility for Whom? Perceptions of Mobility Barriers Across Disability Groups and Implications for Designing Personalized Maps
Li, Chu; Pang, Rock Yuren; Labbé, Delphine; Eisenberg, Yochai; Hosseini, Maryam; Froehlich, Jon
Today’s mapping tools fail to address the varied experiences of different mobility device users. This paper presents a large-scale online survey exploring how five mobility groups—users of canes, walkers, mobility scooters, manual wheelchairs, and motorized wheelchairs—perceive sidewalk barriers and differences therein. Using 52 sidewalk barrier images, respondents evaluated their confidence in navigating each scenario. Our findings (N=190) reveal variations in barrier perceptions across groups, while also identifying shared concerns. To further demonstrate the value of this data, we showcase its use in two custom prototypes: a visual analytics tool and a personalized routing tool. Our survey findings and open dataset advance work in accessibility-focused maps, routing algorithms, and urban planning.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Many-body expansion based machine learning models for octahedral transition metal complexes</title>
<link href="https://hdl.handle.net/1721.1/162780" rel="alternate"/>
<author>
<name>Meyer, Ralf</name>
</author>
<author>
<name>Chu, Daniel BK</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162780</id>
<updated>2026-03-08T03:24:41Z</updated>
<published>2025-01-06T00:00:00Z</published>
<summary type="text">Many-body expansion based machine learning models for octahedral transition metal complexes
Meyer, Ralf; Chu, Daniel BK; Kulik, Heather J
Graph-based machine learning (ML) models for material properties show great potential to accelerate virtual high-throughput screening of large chemical spaces. However, in their simplest forms, graph-based models do not include any 3D information and are unable to distinguish stereoisomers such as those arising from different orderings of ligands around a metal center in coordination complexes. In this work we present a modification to revised autocorrelation descriptors, a molecular graph featurization method, for predicting spin state dependent properties of octahedral transition metal complexes (TMCs). Inspired by analytical semi-empirical models for TMCs, the new modeling strategy is based on the many-body expansion (MBE) and allows one to tune the captured stereoisomer information by changing the truncation order of the MBE. We present the necessary modifications to include this approach in two commonly used ML methods, kernel ridge regression and feed-forward neural networks. On a test set composed of all possible isomers of binary TMCs, the best MBE models achieve mean absolute errors (MAEs) of 2.75 kcal mol−1 on spin-splitting energies and 0.26 eV on frontier orbital energy gaps, a 30%–40% reduction in error compared to models based on our previous approach. We also observe improved generalization to previously unseen ligands where the best-performing models exhibit MAEs of 4.00 kcal mol−1 (i.e. a 0.73 kcal mol−1 reduction) on the spin-splitting energies and 0.53 eV (i.e. a 0.10 eV reduction) on the frontier orbital energy gaps. Because the new approach incorporates insights from electronic structure theory, such as ligand additivity relationships, these models exhibit systematic generalization from homoleptic to heteroleptic complexes, allowing for efficient screening of TMC search spaces.
</summary>
<dc:date>2025-01-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mixed-Chalcogen 2D Silver Phenylchalcogenides (AgE1–xExPh; E = S, Se, Te)</title>
<link href="https://hdl.handle.net/1721.1/162779" rel="alternate"/>
<author>
<name>Lee, Woo Seok</name>
</author>
<author>
<name>Cho, Yeongsu</name>
</author>
<author>
<name>Paritmongkol, Watcharaphol</name>
</author>
<author>
<name>Sakurada, Tomoaki</name>
</author>
<author>
<name>Ha, Seung Kyun</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<id>https://hdl.handle.net/1721.1/162779</id>
<updated>2026-03-08T03:24:46Z</updated>
<published>2024-12-12T00:00:00Z</published>
<summary type="text">Mixed-Chalcogen 2D Silver Phenylchalcogenides (AgE1–xExPh; E = S, Se, Te)
Lee, Woo Seok; Cho, Yeongsu; Paritmongkol, Watcharaphol; Sakurada, Tomoaki; Ha, Seung Kyun; Kulik, Heather J; Tisdale, William A
Alloying is a powerful strategy for tuning the electronic band structure and optical properties of semiconductors. Here, we investigate the thermodynamic stability and excitonic properties of mixed-chalcogen alloys of two-dimensional (2D) hybrid organic–inorganic silver phenylchalcogenides (AgEPh; E = S, Se, Te). Using a variety of structural and optical characterization techniques, we demonstrate that the AgSePh-AgTePh system forms homogeneous alloys (AgSe1–xTexPh, 0 ≤ x ≤ 1) across all compositions, whereas the AgSPh-AgSePh and AgSPh-AgTePh systems exhibit distinct miscibility gaps. Density functional theory calculations reveal that chalcogen mixing is energetically unfavorable in all cases but comparable in magnitude to the ideal entropy of mixing at room temperature. Because AgSePh and AgTePh have the same crystal structure (which is different from AgSPh), alloying is predicted to be thermodynamically preferred over phase separation in the case of AgSePh-AgTePh, whereas phase separation is predicted to be more favorable than alloying for both the AgSPh-AgSePh and AgSPh-AgTePh systems, in agreement with experimental observations. Homogeneous AgSe1–xTexPh alloys exhibit continuously tunable excitonic absorption resonances in the ultraviolet–visible range, while the emission spectrum reveals competition between exciton delocalization (characteristic of AgSePh) and localization behavior (characteristic of AgTePh). Overall, these observations provide insight into the thermodynamics of 2D silver phenylchalcogenides and the effect of lattice composition on electron–phonon interactions in 2D hybrid organic–inorganic semiconductors.
</summary>
<dc:date>2024-12-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of ARPA-E Energy Storage Program: Capability and Capacity to Solve Battery Waste Issues</title>
<link href="https://hdl.handle.net/1721.1/162778" rel="alternate"/>
<author>
<name>Lubeck, Mila A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162778</id>
<updated>2026-03-08T03:20:52Z</updated>
<published>2025-05-22T00:00:00Z</published>
<summary type="text">Assessment of ARPA-E Energy Storage Program: Capability and Capacity to Solve Battery Waste Issues
Lubeck, Mila A.
Society today relies on batteries to power our devices, electric vehicles, and, at growing rates, grid-scale energy storage. As the demand for batteries increases, so does the amount of waste produced. The Advanced Research Projects Agency-Energy (ARPA-E) has tried to tackle the battery waste issue through its energy storage program with a project called Catalyzing Innovative Research for Circular Use of Long-Lived Advanced Rechargeable (CIRCULAR). The program intends to introduce electric vehicle (EV) battery technology with longer lifespans and driving ranges into a circular supply chain, and to integrate an EV battery health monitor into circular supply chain practices. It also intends to determine, through analytics, the project's ability to commercialize at scale. This article reviews previous ARPA-E efforts to solve the battery waste issue through a circular supply chain and develops a proposed innovation policy framework for a circular battery economy. The framework is separated into five categories, which identify emerging technologies and create a system of federally funded waste and recycling sites. We propose integrating support mechanisms and using neoclassical economic tools to induce innovation. We also recommend collaborating with the appropriate agencies for the creation, continuation, and oversight of facilities. Lastly, we include technology transfer of emerging technology for testing and validation upon hand-off. The article uses the proposed framework to guide policy recommendations and contribute one possible solution to the battery waste issue through a national system of transport and collection for material recovery, reuse, and cascaded use.
</summary>
<dc:date>2025-05-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>First-principles study of SiO2/MoS2 and SiO2/WS2 interfaces: A comparative analysis of surface terminations, van der Waals corrections, and functionals</title>
<link href="https://hdl.handle.net/1721.1/162777" rel="alternate"/>
<author>
<name>Fotopoulos, Vasileios</name>
</author>
<author>
<name>Siebenhofer, Matthäus</name>
</author>
<author>
<name>Huang, Mantao</name>
</author>
<author>
<name>Xu, Longlong</name>
</author>
<author>
<name>Yildiz, Bilge</name>
</author>
<id>https://hdl.handle.net/1721.1/162777</id>
<updated>2026-03-08T03:21:08Z</updated>
<published>2025-05-19T00:00:00Z</published>
<summary type="text">First-principles study of SiO2/MoS2 and SiO2/WS2 interfaces: A comparative analysis of surface terminations, van der Waals corrections, and functionals
Fotopoulos, Vasileios; Siebenhofer, Matthäus; Huang, Mantao; Xu, Longlong; Yildiz, Bilge
This study presents a first-principles investigation of SiO2/MoS2 and SiO2/WS2 interfaces, examining how surface terminations, van der Waals (vdW) corrections, and functional choices impact structural stability and electronic properties. Using density functional theory with generalized gradient approximation (GGA; PBE, PBEsol, revPBE), meta-GGA (SCAN, r2SCAN), and hybrid (PBE0) functionals, we assess the effect of vdW correction schemes (D2, D3, Tkatchenko-Scheffler) on interfacial energetics and separation. The results show that vdW corrections are essential for accurate GGA descriptions, while meta-GGAs yield similar accuracy even without them, enabling efficient modeling of SiO2/2D heterostructures. Additionally, SiO2 surface morphology plays a significant role, with fully saturated interfaces showing lower energy and greater interlayer separations. In both SiO2/MoS2 and SiO2/WS2 systems, band gap predictions using PBE0 closely match the experimental values, underscoring the value of hybrid functionals for accurate electronic structure calculations.
</summary>
<dc:date>2025-05-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optoionics: New opportunity for ionic conduction-based radiation detection</title>
<link href="https://hdl.handle.net/1721.1/162776" rel="alternate"/>
<author>
<name>Defferriere, Thomas</name>
</author>
<author>
<name>Tuller, Harry L.</name>
</author>
<id>https://hdl.handle.net/1721.1/162776</id>
<updated>2026-03-08T03:20:58Z</updated>
<published>2025-05-13T00:00:00Z</published>
<summary type="text">Optoionics: New opportunity for ionic conduction-based radiation detection
Defferriere, Thomas; Tuller, Harry L.
Optoionics, involving light-modulated ionic transport in ionic solids, parallels optoelectronics in semiconductors and offers novel device design opportunities across various fields. Among these opportunities, grain boundary phenomena related to radiation-induced electron/hole pair generation and charge trapping at the boundaries causing a modulation in ionic current could enable fast, sensitive, and reversible radiation detectors. The robustness of ionic solids in chemical, structural, and thermal aspects in turn makes them scalable and robust alternatives to traditional semiconductor detectors. This article explores the theoretical underpinnings, experimental breakthroughs, and design considerations needed to optimize such optoionic devices.
</summary>
<dc:date>2025-05-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations</title>
<link href="https://hdl.handle.net/1721.1/162775" rel="alternate"/>
<author>
<name>Danry, Valdemar</name>
</author>
<author>
<name>Pataranutaporn, Pat</name>
</author>
<author>
<name>Groh, Matthew</name>
</author>
<author>
<name>Epstein, Ziv</name>
</author>
<id>https://hdl.handle.net/1721.1/162775</id>
<updated>2026-03-08T03:22:08Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Deceptive Explanations by Large Language Models Lead People to Change their Beliefs About Misinformation More Often than Honest Explanations
Danry, Valdemar; Pataranutaporn, Pat; Groh, Matthew; Epstein, Ziv
Advanced Artificial Intelligence (AI) systems, specifically large language models (LLMs), have the capability to generate not just misinformation, but also deceptive explanations that can justify and propagate false information and discredit true information. We examined the impact of deceptive AI-generated explanations on individuals’ beliefs in a pre-registered online experiment with 11,780 observations from 589 participants. We found that, in addition to being more persuasive than accurate and honest explanations, AI-generated deceptive explanations can significantly amplify belief in false news headlines and undermine belief in true ones, compared to AI systems that simply classify the headline incorrectly as true/false. Moreover, our results show that logically invalid explanations are deemed less credible, diminishing the effects of deception. This underscores the importance of teaching logical reasoning and critical-thinking skills to identify logically invalid arguments, fostering greater resilience against advanced AI-driven misinformation.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Voxel Invention Kit: Reconfigurable Building Blocks for Prototyping Interactive Electronic Structures</title>
<link href="https://hdl.handle.net/1721.1/162774" rel="alternate"/>
<author>
<name>Smith, Miana</name>
</author>
<author>
<name>Forman, Jack</name>
</author>
<author>
<name>Abdel-Rahman, Amira</name>
</author>
<author>
<name>Wang, Sophia</name>
</author>
<author>
<name>Gershenfeld, Neil</name>
</author>
<id>https://hdl.handle.net/1721.1/162774</id>
<updated>2026-03-08T03:22:03Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Voxel Invention Kit: Reconfigurable Building Blocks for Prototyping Interactive Electronic Structures
Smith, Miana; Forman, Jack; Abdel-Rahman, Amira; Wang, Sophia; Gershenfeld, Neil
Prototyping large, electronically integrated structures is challenging and often results in unwieldy wiring, weak mechanical properties, expensive iterations, or limited reusability. While many electronics prototyping kits exist for small-scale objects, relatively few methods exist to freely iterate large and sturdy structures with integrated electronics. To address this gap, we present the Voxel Invention Kit (VIK), which uses reconfigurable blocks that assemble into high-stiffness, lightweight structures with integrated electronics. We do this by creating cubic blocks composed of PCBs that carry electrical routing and components and can be (re)configured with simple tools into a variety of structures. To ensure structural stability without expertise, we created a tool to configure structures and simulate applied loads, which we validated with mechanical testing data. Using VIK, we produced devices reconfigured from a shared set of voxels: multiple iterations of a customizable AV lounge seat, a dance floor game, and a force-sensing bridge.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>The energetic landscape of CH–π interactions in protein–carbohydrate binding</title>
<link href="https://hdl.handle.net/1721.1/162773" rel="alternate"/>
<author>
<name>Keys, Allison M</name>
</author>
<author>
<name>Kastner, David W</name>
</author>
<author>
<name>Kiessling, Laura L</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162773</id>
<updated>2026-03-08T03:24:42Z</updated>
<published>2024-12-03T00:00:00Z</published>
<summary type="text">The energetic landscape of CH–π interactions in protein–carbohydrate binding
Keys, Allison M; Kastner, David W; Kiessling, Laura L; Kulik, Heather J
CH–π interactions between carbohydrates and aromatic amino acids play an essential role in biological systems that span all domains of life. Quantifying the strength and importance of these CH–π interactions is challenging because these interactions involve several atoms and can exist in many distinct orientations. To identify an orientational landscape of CH–π interactions, we constructed a dataset of close contacts formed between β-D-galactose residues and the aromatic amino acids, tryptophan, tyrosine, and phenylalanine, across crystallographic structures deposited in the Protein Data Bank. We carried out quantum mechanical calculations to quantify their interaction strengths. The data indicate that tryptophan-containing CH–π interactions have more favorable interaction energies than those formed by tyrosine or phenylalanine. The energetic differences between these amino acids are caused by the aromatic ring system electronics and size. We use individual distance and angle features to train random forest models to successfully predict the first-principles computed energetics of CH–π interactions. Using insights from our models, we define a tradeoff in CH–π interaction strength arising from the proximity of galactose carbons 1 and 2 versus carbons 4 and 6 to the aromatic amino acid. Our work demonstrates that a feature of CH–π stacking interactions is that numerous orientations allow for highly favorable interaction strengths.
</summary>
<dc:date>2024-12-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Space-Based Solar Power: Implications for Operational Robustness in Lunar EVAs and Exploration Architectures</title>
<link href="https://hdl.handle.net/1721.1/162772" rel="alternate"/>
<author>
<name>MacRobbie, Madelyn</name>
</author>
<author>
<name>Tretiakova, Anna</name>
</author>
<author>
<name>Chen, Vanessa</name>
</author>
<author>
<name>Ma, Clara</name>
</author>
<id>https://hdl.handle.net/1721.1/162772</id>
<updated>2026-03-08T03:21:03Z</updated>
<published>2025-06-01T00:00:00Z</published>
<summary type="text">Space-Based Solar Power: Implications for Operational Robustness in Lunar EVAs and Exploration Architectures
MacRobbie, Madelyn; Tretiakova, Anna; Chen, Vanessa; Ma, Clara
Human exploration of the lunar surface has large power requirements for both the lunar base and for rover exploration. NASA’s recent contract awards indicate a reliance on fission surface power. While nuclear options provide reliable power to lunar base locations, they have a limited reach that restricts exploration capacity. The Space Exploration Vehicle’s 125-mile range only allows coverage of 0.34% of the lunar surface. A constellation of space-based solar power (SBSP) satellites paired with pressurized rovers allows 24-h, full-surface coverage on excursions from the lunar base. A case study is conducted of the constellation design, system cost, operational lifetime, and power provided using SBSP. Results of the case study demonstrate that SBSP provides an additional 20 kW/h of emergency power and extends EVA range from 125 to 1000 km to cover 26 of the lunar geologic units, at an added lifecycle cost of less than 1% of the baseline mission cost. Addition of a SBSP constellation for rovers provides operational flexibility, safety, and robustness to enable multiple lunar exploration architectures beyond that enabled by surface power infrastructures, and should be further explored for lunar missions.
</summary>
<dc:date>2025-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards optimal energy efficiency: analysing generalized and tailored retrofitting decisions</title>
<link href="https://hdl.handle.net/1721.1/162771" rel="alternate"/>
<author>
<name>Castro, Wilamy</name>
</author>
<author>
<name>Barrelas, Joana</name>
</author>
<author>
<name>Mendes, Maria P.</name>
</author>
<author>
<name>Reinhart, Christoph</name>
</author>
<author>
<name>Silva, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/162771</id>
<updated>2026-03-08T03:21:04Z</updated>
<published>2025-07-12T00:00:00Z</published>
<summary type="text">Towards optimal energy efficiency: analysing generalized and tailored retrofitting decisions
Castro, Wilamy; Barrelas, Joana; Mendes, Maria P.; Reinhart, Christoph; Silva, Ana
A building’s energy performance, in terms of thermal comfort, energy demand, cost and CO2 emissions, is considerably affected by its envelope. Enhancing energy efficiency through maintenance and retrofitting is essential to reduce consumption and emissions, thereby mitigating climate change. However, selecting the most cost-effective retrofitting solution remains challenging for decision-makers. Analysing real data across multiple scenarios provides valuable insights, supporting informed decision-making. This study discusses the impact of thermal retrofitting decisions on the energy efficiency of an existing single-family home, by analysing multiple scenarios concerning the implementation of measures on external walls, roof and windows. Both generalized and tailored approaches, particularly for external walls, are evaluated. Options include different insulation materials for the roof and façades—with the latter employing an external thermal insulation composite system (ETICS)—and various framing materials with double-glazing for window replacement. Various scenarios are discussed based on thermal simulations, implementation costs, and cost-benefit analysis. Additionally, multi-criteria (MCA) and sensitivity (SA) analyses are conducted to determine the optimal retrofitting solution. The most effective combined strategy applies ETICS with rock wool on the external walls, extruded polystyrene panels on the roof, and aluminium-framed windows with a thermal break, balancing energy efficiency, costs, durability, and sustainability. Although not part of the optimal solution, tailored retrofitting of façade F2 presents a viable alternative under cost constraints.
</summary>
<dc:date>2025-07-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thrust Density in Porous Electrospray Thrusters</title>
<link href="https://hdl.handle.net/1721.1/162770" rel="alternate"/>
<author>
<name>Corrado, Matthew N.</name>
</author>
<author>
<name>Lozano, Paulo C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162770</id>
<updated>2026-03-08T03:24:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Thrust Density in Porous Electrospray Thrusters
Corrado, Matthew N.; Lozano, Paulo C.
A path for increasing thrust density in electrospray thrusters is through fabrication of denser arrays of emitters. Conventional arguments assume thrust to scale linearly with the emitter number, but there has not been a critical analysis to examine the behavior of this trend at very high densities. Here, we describe a model for thruster current as a function of array density which considers how hydraulic losses change as density increases, and we find that the ideal scaling is a poor approximation. In the optimistic cases, the current increases monotonically with density but with diminishing returns. In the worst cases, packing more emitters into the same space is detrimental as hydraulic losses dominate over gains in the number of emitters. Under certain conditions there is an optimum density which maximizes the net output. We also describe the fabrication and testing of a family of porous electrospray emitters featuring pore sizes in the 10 nm to 100 nm range, with the purpose of leveraging the high precision and uniformity afforded by these materials to develop a platform suitable for experimentally validating the density models. A set of test results from two of these thrusters is presented, both having a 450 µm pitch but with different pore sizes. The 100 nm pore thruster shows characteristics similar to other porous electrosprays, emitting in the pure-ion mode at currents up to 400 µA and exhibiting current-temperature behavior commensurate with the liquid viscosity. The 10 nm pore thruster appears to be greatly flow-restricted, producing about an order of magnitude less current at analogous conditions and showing negligible response to changes in temperature.
39th International Electric Propulsion Conference, Imperial College London, London, United Kingdom 14-19 September 2025
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tractoriae and the logistics of Carolingian entourages</title>
<link href="https://hdl.handle.net/1721.1/162769" rel="alternate"/>
<author>
<name>Goldberg, Eric J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162769</id>
<updated>2026-03-08T03:24:40Z</updated>
<published>2025-03-19T00:00:00Z</published>
<summary type="text">Tractoriae and the logistics of Carolingian entourages
Goldberg, Eric J.
Entourages played a central role in Carolingian politics and military organization. Yet historians have neglected the important question of how kings and magnates supplied their retinues. This article investigates that topic by examining an overlooked genre of evidence: tractoriae, or royal letters of requisition. Louis the Pious revived the use of these late Roman and Merovingian documents to authorize magnates to collect supplies for their followers and horses. The provisions enumerated in tractoriae give us rare insight into the composition and scale of ninth-century retinues and armies. Their disappearance during the reign of Charles the Bald was bound up with larger transformations of late Carolingian politics.
</summary>
<dc:date>2025-03-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, History Faculty</title>
<link href="https://hdl.handle.net/1721.1/162768" rel="alternate"/>
<author>
<name>Ghachem, Malick</name>
</author>
<id>https://hdl.handle.net/1721.1/162768</id>
<updated>2025-09-20T03:08:29Z</updated>
<published>2025-09-19T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, History Faculty
Ghachem, Malick
This report contains the following sections: Highlights (Arrivals &amp; Departures, Promotions, History and HASTS, the History Office, Teaching and Curriculum, the History of Now), Faculty and Staff Updates.
</summary>
<dc:date>2025-09-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships</title>
<link href="https://hdl.handle.net/1721.1/162767" rel="alternate"/>
<author>
<name>Boggust, Angie</name>
</author>
<author>
<name>Bang, Hyemin</name>
</author>
<author>
<name>Strobelt, Hendrik</name>
</author>
<author>
<name>Satyanarayan, Arvind</name>
</author>
<id>https://hdl.handle.net/1721.1/162767</id>
<updated>2026-03-08T03:22:09Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Abstraction Alignment: Comparing Model-Learned and Human-Encoded Conceptual Relationships
Boggust, Angie; Bang, Hyemin; Strobelt, Hendrik; Satyanarayan, Arvind
While interpretability methods identify a model’s learned concepts, they overlook the relationships between concepts that make up its abstractions and inform its ability to generalize to new data. To assess whether models have learned human-aligned abstractions, we introduce abstraction alignment, a methodology to compare model behavior against formal human knowledge. Abstraction alignment externalizes domain-specific human knowledge as an abstraction graph, a set of pertinent concepts spanning levels of abstraction. Using the abstraction graph as a ground truth, abstraction alignment measures the alignment of a model’s behavior by determining how much of its uncertainty is accounted for by the human abstractions. By aggregating abstraction alignment across entire datasets, users can test alignment hypotheses, such as which human concepts the model has learned and where misalignments recur. In evaluations with experts, abstraction alignment differentiates seemingly similar errors, improves the verbosity of existing model-quality metrics, and uncovers improvements to current human abstractions.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Flows Transmitted by Forced Surface Gravity Waves</title>
<link href="https://hdl.handle.net/1721.1/162766" rel="alternate"/>
<author>
<name>Pizzo, Nick</name>
</author>
<author>
<name>Wagner, Gregory L.</name>
</author>
<id>https://hdl.handle.net/1721.1/162766</id>
<updated>2026-03-08T03:21:12Z</updated>
<published>2025-03-26T00:00:00Z</published>
<summary type="text">Deep Flows Transmitted by Forced Surface Gravity Waves
Pizzo, Nick; Wagner, Gregory L.
We examine a two-dimensional deep-water surface gravity wave packet generated by a pressure disturbance in the Lagrangian reference frame. The pressure disturbance has the form of a narrow-banded weakly nonlinear deep-water wave packet. During forcing, the vorticity equation implies that the momentum resides entirely in the near-surface Lagrangian-mean flow, which in this context is often called the “Stokes drift”. After the forcing turns off, the wave packet propagates away from the forcing region, carrying with it most of the energy imparted by the forcing. These waves together with their induced long wave response have no momentum in a depth-integrated sense, in agreement with the classical results of Longuet-Higgins and Stewart (Deep Sea Research and Oceanographic Abstracts 11, 529−562) and McIntyre (Journal of Fluid Mechanics 106, 331−347). The total flow associated with the propagating packet has no net momentum. In contrast with the finite-depth scenario discussed by McIntyre (Journal of Fluid Mechanics 106, 331−347), however, momentum imparted to the fluid during forcing resides in a dipolar structure that persists in the forcing region—rather than being carried away by shallow-water waves. We conclude by examining waves propagating from deep to shallow water and show that wave packets, which initially have no momentum, may have non-zero momentum in finite-depth water through reflected and trapped long waves. This explains how deep water waves acquire momentum as they approach shore. The artificial form of the parameterized forcing from the wind facilitates the thought experiments considered in this paper, as opposed to striving to model more realistic wind forcing scenarios.
</summary>
<dc:date>2025-03-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>SpeakEasy: Enhancing Text-to-Speech Interactions for Expressive Content Creation</title>
<link href="https://hdl.handle.net/1721.1/162765" rel="alternate"/>
<author>
<name>Brade, Stephen</name>
</author>
<author>
<name>Anderson, Sam</name>
</author>
<author>
<name>Kumar, Rithesh</name>
</author>
<author>
<name>Jin, Zeyu</name>
</author>
<author>
<name>Truong, Anh</name>
</author>
<id>https://hdl.handle.net/1721.1/162765</id>
<updated>2026-03-08T03:22:31Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">SpeakEasy: Enhancing Text-to-Speech Interactions for Expressive Content Creation
Brade, Stephen; Anderson, Sam; Kumar, Rithesh; Jin, Zeyu; Truong, Anh
Novice content creators often invest significant time recording expressive speech for social media videos. While recent advancements in text-to-speech (TTS) technology can generate highly realistic speech in various languages and accents, many struggle with unintuitive or overly granular TTS interfaces. We propose simplifying TTS generation by allowing users to specify high-level context alongside their script. Our Wizard-of-Oz system, SpeakEasy, leverages user-provided context to inform and influence TTS output, enabling iterative refinement with high-level feedback. This approach was informed by two 8-subject formative studies: one examining content creators’ experiences with TTS, and the other drawing on effective strategies from voice actors. Our evaluation shows that participants using SpeakEasy were more successful in generating performances matching their personal standards, without requiring significantly more effort than leading industry interfaces.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Giant, non-perturbative tuning of light-matter interaction of embedded quantum dots in semiconducting matrices</title>
<link href="https://hdl.handle.net/1721.1/162764" rel="alternate"/>
<author>
<name>Wu, Ming-Chung</name>
</author>
<author>
<name>Hsiao, Kai-Chi</name>
</author>
<author>
<name>Fu, Chuliang</name>
</author>
<author>
<name>Lin, Ting-Han</name>
</author>
<author>
<name>Chang, Yin-Hsuan</name>
</author>
<author>
<name>Huang, Yu-Ching</name>
</author>
<author>
<name>Nieh, Mu-Ping</name>
</author>
<author>
<name>Su, Wei-Fang</name>
</author>
<author>
<name>Li, Mingda</name>
</author>
<id>https://hdl.handle.net/1721.1/162764</id>
<updated>2026-03-08T03:20:29Z</updated>
<published>2025-06-21T00:00:00Z</published>
<summary type="text">Giant, non-perturbative tuning of light-matter interaction of embedded quantum dots in semiconducting matrices
Wu, Ming-Chung; Hsiao, Kai-Chi; Fu, Chuliang; Lin, Ting-Han; Chang, Yin-Hsuan; Huang, Yu-Ching; Nieh, Mu-Ping; Su, Wei-Fang; Li, Mingda
Embedding quantum dots (QDs) in a solid-state matrix represents a promising hybrid platform that offers great flexibility and tunability. However, the lack of a clear underlying design principle and the presence of a large design space make the design process rely heavily on trial-and-error methods. Here we present a new principle that can drastically tailor the light-matter interaction of the matrix via matrix-mediated QD interactions. We show that conducting matrices like P3HT can mediate non-perturbative inter-QD interactions that lead to qualitatively distinct properties, including enhanced carrier lifetimes and enhanced binding energies with increased QD densities, which cannot be explained by conventional perturbative scattering theories and stand in sharp contrast to independent embedded QDs in an insulating matrix like PMMA. An effective quantum field theory is developed, showing qualitative agreement with experiments. Our study serves as a foundation for the predictive design of advanced hybrid materials aimed at optimizing functionalities.
</summary>
<dc:date>2025-06-21T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine learning used to study risk factors for chronic diseases: A scoping review</title>
<link href="https://hdl.handle.net/1721.1/162763" rel="alternate"/>
<author>
<name>Shergill, Mahek</name>
</author>
<author>
<name>Durant, Steve</name>
</author>
<author>
<name>Birdi, Sharon</name>
</author>
<author>
<name>Rabet, Roxana</name>
</author>
<author>
<name>Ziegler, Carolyn</name>
</author>
<author>
<name>Ali, Shehzad</name>
</author>
<author>
<name>Buckeridge, David</name>
</author>
<author>
<name>Ghassemi, Marzyeh</name>
</author>
<author>
<name>Gibson, Jennifer</name>
</author>
<author>
<name>John-Baptiste, Ava</name>
</author>
<author>
<name>Macklin, Jillian</name>
</author>
<author>
<name>McCradden, Melissa</name>
</author>
<author>
<name>McKenzie, Kwame</name>
</author>
<author>
<name>Naraei, Parisa</name>
</author>
<id>https://hdl.handle.net/1721.1/162763</id>
<updated>2026-03-08T03:20:55Z</updated>
<published>2025-06-11T00:00:00Z</published>
<summary type="text">Machine learning used to study risk factors for chronic diseases: A scoping review
Shergill, Mahek; Durant, Steve; Birdi, Sharon; Rabet, Roxana; Ziegler, Carolyn; Ali, Shehzad; Buckeridge, David; Ghassemi, Marzyeh; Gibson, Jennifer; John-Baptiste, Ava; Macklin, Jillian; McCradden, Melissa; McKenzie, Kwame; Naraei, Parisa
Objectives Machine learning (ML) has received significant attention for its potential to process and learn from vast amounts of data. Our aim was to perform a scoping review to identify studies that used ML to study risk factors for chronic diseases at a population level, notably those that incorporated methods to mitigate algorithmic bias. We focused on ML applications for the most common risk factors for chronic disease: tobacco use, alcohol use, unhealthy eating, physical activity, and psychological stress. Methods We searched the peer-reviewed, indexed literature using Medline (Ovid), Embase (Ovid), Cochrane Central Register of Controlled Trials and Cochrane Database of Systematic Reviews (Ovid), Scopus, ACM Digital Library, INSPEC, and Web of Science’s Science Citation Index, Social Sciences Citation Index, and Emerging Sources Citation Index. Among the included studies, we examined whether bias was considered and identified strategies employed to mitigate bias. Synthesis The search identified 10,329 studies, and 20 met our inclusion criteria. The studies we identified used ML for a wide range of goals, from prediction of chronic disease development to automating the classification of data to identifying new associations between risk factors and disease. Nine studies (45%) included some discussion of algorithmic bias. Studies that incorporated a broad array of sociodemographic variables did so primarily to improve the performance of an ML model rather than to mitigate potential harms to populations made vulnerable by social and economic policies. Conclusion This work contributes to our understanding of how ML can be used to advance population and public health.
</summary>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>eaSEL: Promoting Social-Emotional Learning and Parent-Child Interaction through AI-Mediated Content Consumption</title>
<link href="https://hdl.handle.net/1721.1/162762" rel="alternate"/>
<author>
<name>Shen, Jocelyn</name>
</author>
<author>
<name>King Chen, Jennifer</name>
</author>
<author>
<name>Findlater, Leah</name>
</author>
<author>
<name>Dietz Smith, Griffin</name>
</author>
<id>https://hdl.handle.net/1721.1/162762</id>
<updated>2026-03-08T03:22:23Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">eaSEL: Promoting Social-Emotional Learning and Parent-Child Interaction through AI-Mediated Content Consumption
Shen, Jocelyn; King Chen, Jennifer; Findlater, Leah; Dietz Smith, Griffin
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis Facilities for the HL-LHC White Paper</title>
<link href="https://hdl.handle.net/1721.1/162761" rel="alternate"/>
<author>
<name>Ciangottini, D.</name>
</author>
<author>
<name>C. Forti, A.</name>
</author>
<author>
<name>Heinrich, L.</name>
</author>
<author>
<name>Skidmore, N.</name>
</author>
<author>
<name>Alpigiani, C.</name>
</author>
<author>
<name>Aly, M.</name>
</author>
<author>
<name>Benjamin, D.</name>
</author>
<author>
<name>Bockelman, B.</name>
</author>
<author>
<name>Bryant, L.</name>
</author>
<author>
<name>Catmore, J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162761</id>
<updated>2026-03-08T03:20:47Z</updated>
<published>2025-07-13T00:00:00Z</published>
<summary type="text">Analysis Facilities for the HL-LHC White Paper
Ciangottini, D.; C. Forti, A.; Heinrich, L.; Skidmore, N.; Alpigiani, C.; Aly, M.; Benjamin, D.; Bockelman, B.; Bryant, L.; Catmore, J.
This white paper presents the current status of the R&amp;D for Analysis Facilities (AFs) and summarizes views on the future direction of these facilities. These views have been collected through the High Energy Physics (HEP) Software Foundation’s (HSF) Analysis Facilities forum (HSF Analysis Facilities Forum), established in March 2022; the Analysis Ecosystems II workshop (Analysis Ecosystems Workshop II), which took place in May 2022; and the WLCG/HSF pre-CHEP workshop (WLCG–HSF pre-CHEP Workshop), which took place in May 2023. The paper aims to cover all aspects of an analysis facility.
</summary>
<dc:date>2025-07-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>SymbolFit: Automatic Parametric Modeling with Symbolic Regression</title>
<link href="https://hdl.handle.net/1721.1/162760" rel="alternate"/>
<author>
<name>Tsoi, Ho F.</name>
</author>
<author>
<name>Rankin, Dylan</name>
</author>
<author>
<name>Caillol, Cecile</name>
</author>
<author>
<name>Cranmer, Miles</name>
</author>
<author>
<name>Dasu, Sridhara</name>
</author>
<author>
<name>Duarte, Javier</name>
</author>
<author>
<name>Harris, Philip</name>
</author>
<author>
<name>Lipeles, Elliot</name>
</author>
<author>
<name>Loncar, Vladimir</name>
</author>
<id>https://hdl.handle.net/1721.1/162760</id>
<updated>2026-03-08T03:20:50Z</updated>
<published>2025-07-01T00:00:00Z</published>
<summary type="text">SymbolFit: Automatic Parametric Modeling with Symbolic Regression
Tsoi, Ho F.; Rankin, Dylan; Caillol, Cecile; Cranmer, Miles; Dasu, Sridhara; Duarte, Javier; Harris, Philip; Lipeles, Elliot; Loncar, Vladimir
We introduce SymbolFit (API:  https://github.com/hftsoi/symbolfit ), a framework that automates parametric modeling by using symbolic regression to perform an automated search for functions that fit the data while simultaneously providing uncertainty estimates in a single run. Traditionally, constructing a parametric model to accurately describe binned data has been a manual and iterative process, requiring an adequate functional form to be determined before the fit can be performed. The main challenge arises when the appropriate functional forms cannot be derived from first principles, especially when there is no underlying true closed-form function for the distribution. In this work, we develop a framework that automates and streamlines the process by utilizing symbolic regression, a machine learning technique that explores a vast space of candidate functions without requiring a predefined functional form, because the functional form itself is treated as a trainable parameter, making the process far more efficient and effortless than traditional regression methods. We demonstrate the framework in high-energy physics experiments at the CERN Large Hadron Collider (LHC) using five real proton-proton collision datasets from new physics searches, including background modeling in resonance searches for high-mass dijet, trijet, paired-dijet, diphoton, and dimuon events. We show that our framework can flexibly and efficiently generate a wide range of candidate functions that fit a nontrivial distribution well using a simple fit configuration that varies only by random seed, and that the same fit configuration, which defines a vast function space, can also be applied to distributions of different shapes, whereas achieving a comparable result with traditional methods would have required extensive manual effort.
</summary>
<dc:date>2025-07-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Loss remakes you</title>
<link href="https://hdl.handle.net/1721.1/162759" rel="alternate"/>
<author>
<name>Edoh, Amah</name>
</author>
<id>https://hdl.handle.net/1721.1/162759</id>
<updated>2026-03-08T03:24:38Z</updated>
<published>2025-02-24T00:00:00Z</published>
<summary type="text">Loss remakes you
Edoh, Amah
This article tells the story of my research on Dutch wax cloth, a highly prized textile and cultural artifact in Togo, my home country. I examine the fate of the cloth and of the Togolese women who made it into an object of great significance in the wake of political upheaval starting in the late 1980s, the same upheaval that led to my family’s permanent departure from Togo in 1991. Tracking my trajectory through the research as a Togolese émigrée, I come to see clearly for the first time that the cloth’s story and my own were not only shaped by the same historical forces but that they also traced similar arcs. Told together, the stories weave a tale of belonging, rupture, and of what comes after; a story of how loss remakes us, and how we remake ourselves in the face of loss. Autoethnography emerges as a tool for unearthing the personal agendas that so often guide our choice of research topics as anthropologists. And research on topics that are close to home proves to be as likely to reawaken old wounds as it is to open pathways to some measure of resolution.
</summary>
<dc:date>2025-02-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multidimensional Labeling of Gesture in Communication: the M3D Proposal</title>
<link href="https://hdl.handle.net/1721.1/162758" rel="alternate"/>
<author>
<name>Rohrer, Patrick L.</name>
</author>
<author>
<name>Tütüncübasi, Ulya</name>
</author>
<author>
<name>Florit-Pons, Júlia</name>
</author>
<author>
<name>Vilà-Giménez, Ingrid</name>
</author>
<author>
<name>Esteve-Gibert, Núria</name>
</author>
<author>
<name>Ren-Mitchell, Ada</name>
</author>
<author>
<name>Shattuck-Hufnagel, Stefanie</name>
</author>
<author>
<name>Prieto, Pilar</name>
</author>
<id>https://hdl.handle.net/1721.1/162758</id>
<updated>2026-03-08T03:21:06Z</updated>
<published>2025-06-26T00:00:00Z</published>
<summary type="text">Multidimensional Labeling of Gesture in Communication: the M3D Proposal
Rohrer, Patrick L.; Tütüncübasi, Ulya; Florit-Pons, Júlia; Vilà-Giménez, Ingrid; Esteve-Gibert, Núria; Ren-Mitchell, Ada; Shattuck-Hufnagel, Stefanie; Prieto, Pilar
Communication is multimodal in that speakers use not only their voices, but also co-speech gestures to communicate. Recent insights suggest that gestural behavior has a strong association with prosodic structure and that a single gesture can communicate various semantic and pragmatic meanings. This highlights the importance of developing a comprehensive, flexible, and transparent approach to gesture annotation that accounts for multiple dimensions of gesture, including a gesture’s form, prosodic properties, and semantic and pragmatic contributions. To address this need for an increasingly dimensionalized approach to multimodal data annotation, the main goal of this paper is to present and describe a novel labeling system for manual gestures. The MultiModal MultiDimensional (M3D) system consists of an open access package that has been developed in coordination with five different labs working on gesture and its interaction with speech. The package includes a set of reliable annotation guidelines, a validated training program, and two annotated audiovisual corpora that represent over 60 minutes of lecture-style speech.
</summary>
<dc:date>2025-06-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Why axis inversion? Optimizing interactions between users, interfaces, and visual displays in 3D environments</title>
<link href="https://hdl.handle.net/1721.1/162757" rel="alternate"/>
<author>
<name>Corbett, Jennifer E.</name>
</author>
<author>
<name>Munneke, Jaap</name>
</author>
<id>https://hdl.handle.net/1721.1/162757</id>
<updated>2026-03-08T03:20:13Z</updated>
<published>2025-06-23T00:00:00Z</published>
<summary type="text">Why axis inversion? Optimizing interactions between users, interfaces, and visual displays in 3D environments
Corbett, Jennifer E.; Munneke, Jaap
From video games to laparoscopic surgeries, differences in users’ abilities to adapt to new control schemes can have significant, even deadly impacts on performance. Starting with the question of why some video game players invert the y-axis on their console controllers, this work aims to provide a foundation for future investigations of how control schemes can significantly impact performance. We argue that fragmented research across disciplines hinders a unified understanding of how the spatial relationships between users, interfaces, and visual displays affect performance. Therefore, we begin with a multidisciplinary literature synthesis, clarifying existing findings, and identifying methodological inconsistencies that contribute to conflicting results. We then explore the relationship between key behavioral and cognitive factors and y-axis inversion preference in a group of experienced 3rd person gamers. Based on these preliminary results, we propose a “general purpose” framework to systematically investigate how control inversion and visual input influence perception and performance across various movement goals. We demonstrate how this framework can be used to evaluate performance in the context of a common and challenging laparoscopic procedure, and how it can be generalized to assess and predict sensorimotor compatibility effects across a wide variety of real-world situations.
</summary>
<dc:date>2025-06-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Realism Drives Interpersonal Reciprocity but Yields to AI-Assisted Egocentrism in a Coordination Experiment</title>
<link href="https://hdl.handle.net/1721.1/162756" rel="alternate"/>
<author>
<name>Shirado, Hirokazu</name>
</author>
<author>
<name>Shimizu, Kye</name>
</author>
<author>
<name>Christakis, Nicholas</name>
</author>
<author>
<name>Kasahara, Shunichi</name>
</author>
<id>https://hdl.handle.net/1721.1/162756</id>
<updated>2026-03-08T03:22:11Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Realism Drives Interpersonal Reciprocity but Yields to AI-Assisted Egocentrism in a Coordination Experiment
Shirado, Hirokazu; Shimizu, Kye; Christakis, Nicholas; Kasahara, Shunichi
Virtual reality technologies that enhance realism and artificial intelligence (AI) systems that assist human behavior are increasingly interwoven in social applications. However, how these technologies might jointly influence interpersonal coordination remains unclear. We conducted an experiment with 240 participants in 120 pairs who interacted through remote-controlled robot cars in a physical space or virtual cars in a digital space, with or without autosteering assistance, using the chicken game, an established model of interpersonal coordination. We find that both realism and AI assistance help improve user performance but through opposing mechanisms. Real-world contexts enhanced communication, fostering reciprocal actions and collective benefits. In contrast, autosteering assistance diminished the need for interpersonal coordination, shifting participants’ focus towards self-interest. Notably, when combined, the egocentric effects of autosteering assistance outweighed the prosocial effects of realism. The design of HCI systems that involve social coordination will, we believe, need to take such effects into account.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>InteRecon: Towards Reconstructing Interactivity of Personal Memorable Items in Mixed Reality</title>
<link href="https://hdl.handle.net/1721.1/162755" rel="alternate"/>
<author>
<name>Li, Zisu</name>
</author>
<author>
<name>Li, Jiawei</name>
</author>
<author>
<name>Xiong, Zeyu</name>
</author>
<author>
<name>Zhang, Shumeng</name>
</author>
<author>
<name>Faruqi, Faraz</name>
</author>
<author>
<name>Mueller, Stefanie</name>
</author>
<author>
<name>Liang, Chen</name>
</author>
<author>
<name>Ma, Xiaojuan</name>
</author>
<author>
<name>Fan, Mingming</name>
</author>
<id>https://hdl.handle.net/1721.1/162755</id>
<updated>2026-03-08T03:22:19Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">InteRecon: Towards Reconstructing Interactivity of Personal Memorable Items in Mixed Reality
Li, Zisu; Li, Jiawei; Xiong, Zeyu; Zhang, Shumeng; Faruqi, Faraz; Mueller, Stefanie; Liang, Chen; Ma, Xiaojuan; Fan, Mingming
Digital capturing of memorable personal items is a key way to archive personal memories. Although current digitization methods (e.g., photos, videos, 3D scanning) can replicate the physical appearance of an item, they often cannot preserve its real-world interactivity. We present Interactive Digital Item (IDI), a concept of reconstructing both the physical appearance and, more importantly, the interactivity of an item. We first conducted a formative study to understand users’ expectations of IDI, identifying key physical interactivity features, including geometry, interfaces, and embedded content of items. Informed by these findings, we developed InteRecon, an AR prototype enabling personal reconstruction functions for IDI creation. An exploratory study was conducted to assess the feasibility of using InteRecon and explore the potential of IDI to enrich personal memory archives. Results show that InteRecon is feasible for IDI creation, and the concept of IDI brings new opportunities for augmenting personal memory archives.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic refinement of experimental practices to improve repeatability in flow battery cycling</title>
<link href="https://hdl.handle.net/1721.1/162754" rel="alternate"/>
<author>
<name>O’Connor, Hugh</name>
</author>
<author>
<name>Quinn, Alexander H.</name>
</author>
<author>
<name>Brushett, Fikile R.</name>
</author>
<author>
<name>Istrate, Oana</name>
</author>
<author>
<name>Glover, Stephen</name>
</author>
<author>
<name>Bailey, Josh J.</name>
</author>
<author>
<name>Nockemann, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/162754</id>
<updated>2026-03-08T03:21:08Z</updated>
<published>2025-06-11T00:00:00Z</published>
<summary type="text">Systematic refinement of experimental practices to improve repeatability in flow battery cycling
O’Connor, Hugh; Quinn, Alexander H.; Brushett, Fikile R.; Istrate, Oana; Glover, Stephen; Bailey, Josh J.; Nockemann, Peter
Flow batteries represent one of the leading options for large-scale, long-duration energy storage. In recent years, research into this technology has accelerated, with numerous innovative studies focusing on electrolytes, membranes, and electrode materials. Despite this, there is presently no clear set of testing protocols followed during full-cell testing of flow batteries, and the experimental techniques detailed in the published literature are often insufficient to reproduce results. Furthermore, testing to quantify the repeatability of experiments is not often reported. In this work, various aspects of an experimental procedure developed from the peer-reviewed literature are refined, with voltage efficiency, coulombic efficiency, energy efficiency, and electrolyte utilization used as indicators of repeatability. A set of improved testing protocols is presented for researchers to consider when conducting charge–discharge testing, and additional factors to be reported and studied in the context of repeatability are suggested.
</summary>
<dc:date>2025-06-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Interpretable Multimodal Framework for Regional Organ Transplantation Outcomes</title>
<link href="https://hdl.handle.net/1721.1/162752" rel="alternate"/>
<author>
<name>Lee, Ju Young</name>
</author>
<id>https://hdl.handle.net/1721.1/162752</id>
<updated>2025-09-19T04:50:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Interpretable Multimodal Framework for Regional Organ Transplantation Outcomes
Lee, Ju Young
The demand for kidney transplants continues to outpace supply, with over 89,792 patients on the waitlist as of September 2024, yet only 27,332 transplants performed in 2023 [1], and 28% of recovered kidneys going non-utilized [2]. In this thesis, we highlight the use of large language model (LLM) embeddings combined with structured tabular data to build a predictive classifier that estimates offer outcomes for kidney donor-recipient matches. For each predictive model deployed, we provide further analysis on the interpretability of these black-box models using a custom-designed SHAP analysis framework. Our study focuses on three distinct U.S. regions (Regions 1, 2, and 3) with markedly different demographics and amounts of data on organ acceptances (Region 1: 43,126 offers with 2.19% acceptance rate; Region 2: 394,640 offers with 1.57% acceptance rate; Region 3: 169,342 offers with 2.23% acceptance rate, in years 2016-2019). Among the baseline XGBoost models, Region 3 achieved the highest performance, with a precision-accept score of 0.929 and accuracy of 0.993 on the test data. Building on this strong foundation, the multimodal TabText model in Region 3 achieved the best performance overall, with a precision-accept score of 0.959 and accuracy of 0.993 after fine-tuning for six epochs. Our findings suggest that increasing the number of text features, extending training epochs, and incorporating explicit numerical values led to improved model performance in Region 3. In Regions 1 and 2, the baseline model outperformed the TabText model, suggesting that data sparsity in these regions may have limited the effectiveness of the multimodal approach and that further hyperparameter tuning is needed. We also present several visualization techniques to enhance model interpretability. Specifically, we developed a novel SHAP explainer that illustrates feature interactions between multimodal inputs, including both tabular and textual data.
Additionally, we explored methods to identify regions of high and low model fidelity by mapping per-sample prediction errors onto t-SNE embeddings. Overall, this thesis introduces new directions for transplant research in the context of transformer-based models and interpretable AI. Leveraging data-driven decision-support tools and refining allocation policies are essential steps toward addressing the persistent gap between supply and demand in the kidney transplant landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Medium Access Control Protocol for Satellite Constellations</title>
<link href="https://hdl.handle.net/1721.1/162751" rel="alternate"/>
<author>
<name>Li, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/162751</id>
<updated>2025-09-19T04:50:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Medium Access Control Protocol for Satellite Constellations
Li, Brian
Satellite internet constellations have emerged as a promising solution for providing global internet connectivity, especially in regions underserved by terrestrial infrastructure. However, as user demand increases, especially in densely populated urban areas, existing Medium Access Control (MAC) protocols face significant scalability challenges and fail to take advantage of advanced antenna processing techniques, including phased array nulling, as well as capacity sharing via inter-satellite links.
We present both an offline linear program and a novel online greedy MAC protocol to assign satellite resources to users using either sequential service, capacity sharing, or interference-aware nulling. Our offline formulation provides an upper bound on system performance, and while our online protocol is sub-optimal compared to this optimum, it is designed to be implementable on a real-time system. Simulations demonstrate that incorporating nulling can increase effective capacity by up to 25 times, substantially boosting profit in high-demand scenarios. We further quantify the performance gap between the online protocol and the offline optimum under varying demand distributions, showing that our online approach achieves near-optimal results in low-peakiness settings and gracefully degrades under more extreme conditions. These results highlight the importance of spatial processing at the MAC layer and offer practical design insights for future satellite internet constellations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Copilot Tutor: Automated Software Engineering Practice Augmented with LLMs</title>
<link href="https://hdl.handle.net/1721.1/162750" rel="alternate"/>
<author>
<name>Kong, Blisse</name>
</author>
<id>https://hdl.handle.net/1721.1/162750</id>
<updated>2025-09-19T04:50:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Copilot Tutor: Automated Software Engineering Practice Augmented with LLMs
Kong, Blisse
In recent years, large language models (LLMs) have become more ubiquitous in the workplace. In software engineering, they are often realized as “copilots” which produce code given a prompt or existing code. Programmers using these tools to increase their coding productivity need to be proficient in inspecting and understanding these copilots’ outputs. As engineers incorporate these tools to accelerate their workflows, they have a parallel opportunity to accelerate learning new programming languages. This thesis presents a tutor interface where students with some programming experience in an origin language can learn a target language while practicing how to critically read and fix a copilot’s output to write correct, safe programs. This work also introduces the automatic generation of exercises teaching syntax and semantics on which a programmer experienced in the origin language but not the target language should focus.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Physical Withholding of Renewable Energy Generators</title>
<link href="https://hdl.handle.net/1721.1/162749" rel="alternate"/>
<author>
<name>Irvine, Paul M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162749</id>
<updated>2025-09-19T04:50:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Physical Withholding of Renewable Energy Generators
Irvine, Paul M.
Renewable generators may have incentives to strategically withhold energy output in electricity markets, either to exercise market power or to avoid congestion pricing caused by transmission constraints. Although academic work often treats renewables as not downward dispatchable, renewable generators technically can, at least in principle, reduce their output by self-curtailing. This paper shows that a firm with a large, diverse portfolio could find it profit-maximizing to withhold renewables over conventional thermal generators once it accounts for constraints on ramp rates and minimum generation, as well as the costs associated with starting up generators and the probability of detection by market monitoring authorities, which varies by generator type. Long-term forward contracts like pay-as-produced Power Purchase Agreements (PPAs) can blunt incentives to exercise market power by insulating individual generators from wholesale prices; however, since generators under PPAs typically bid into the wholesale market and influence competitive prices, they may actually encourage renewable withholding if contract prices are sufficiently low and the parent firm’s portfolio is exposed to wholesale prices. To screen for renewable withholding, this paper proposes three methods: (1) examining the distribution of aggregate output across export interfaces for suspicious bunching, (2) testing deviations from ex-ante forecasts, and (3) identifying the time intervals where generators encounter model structural changes compared to a benchmark presumed free of withholding. Together, this work prepares academics and regulators to more accurately model the behavior of renewable generators in electricity markets and to screen for potential market abuses.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Argos: Verifiable FHE Using Commodity Hardware</title>
<link href="https://hdl.handle.net/1721.1/162748" rel="alternate"/>
<author>
<name>Jepsen, Fisher</name>
</author>
<id>https://hdl.handle.net/1721.1/162748</id>
<updated>2025-09-19T04:49:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Argos: Verifiable FHE Using Commodity Hardware
Jepsen, Fisher
We present Argos, a simple approach for adding verifiability to fully homomorphic encryption (FHE) schemes using trusted hardware. Traditional approaches to verifiable FHE require expensive cryptographic proofs, which incur an overhead of up to seven orders of magnitude on top of FHE, making them impractical. With Argos, we show that trusted hardware can be securely used to provide verifiability for FHE computations, with minimal overhead relative to the baseline FHE computation. An important contribution of Argos is showing that the major security pitfall associated with trusted hardware, microarchitectural side channels, can be completely mitigated by excluding any secrets from the CPU and the memory hierarchy. This is made possible by focusing on building a platform that only enforces program and data integrity and not confidentiality (which is sufficient for verifiable FHE, since all data remain encrypted at all times). All secrets related to the attestation mechanism are kept in a separate coprocessor (e.g., a TPM)—inaccessible to any software-based attacker. Relying on a discrete TPM typically incurs significant performance overhead, which is why (insecure) software-based TPMs are used in practice. As a second contribution, we show that for FHE applications, the attestation protocol can be adapted to only incur a fixed cost. Argos requires no dedicated hardware extensions and is supported on commodity processors from 2008 onward. Our prototype implementation introduces 3% overhead for FHE evaluation, and 8% for more complex protocols. In particular, we show that Argos can be used for real-world applications of FHE, such as private information retrieval (PIR) and private set intersection (PSI), where providing verifiability is imperative. By demonstrating how to combine cryptography with trusted hardware, Argos paves the way for widespread deployment of FHE-based protocols beyond the semi-honest setting, without the overhead of cryptographic proofs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework</title>
<link href="https://hdl.handle.net/1721.1/162747" rel="alternate"/>
<author>
<name>Kumar, Aryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162747</id>
<updated>2025-09-19T04:49:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework
Kumar, Aryan
BuildIt allows users to write C++ programs that can execute in multiple stages, where the output of one stage is the program source for the next stage, ending with some final output produced. This is particularly useful for writing specialized code and generating code for domain-specific languages. While there are other approaches to multi-stage programming, BuildIt has several advantages: it takes a library-based approach (so it requires no modifications to the compiler and is thus highly portable), and it has excellent ease of use as all the user has to do is change the declared types of variables in their C++ program. The goal of this thesis is to further improve BuildIt’s ease of use by simplifying this step: in particular, by developing a tool that will automatically convert existing C and C++ programs to the BuildIt framework. We show how to use Clang tooling in conjunction with modifications to the Clang compiler to perform non-trivial modifications to source, namely type-modification, to automatically convert code to its unstaged BuildIt equivalent. As the unstaged BuildIt code can be specialized by staging certain variables, this tool will ultimately enable more easily staging and optimizing C/C++ repositories with the BuildIt framework.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients</title>
<link href="https://hdl.handle.net/1721.1/162746" rel="alternate"/>
<author>
<name>Jung, Emma Yejoo</name>
</author>
<id>https://hdl.handle.net/1721.1/162746</id>
<updated>2025-09-19T04:49:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients
Jung, Emma Yejoo
Glucagon-like peptide-1 receptor agonists (GLP-1RA), whose use has surged recently, have shown promise in reducing cardiovascular events and improving kidney function in patients with type 2 diabetes. Encouraged by these results, kidney transplant recipients (KTRs) have started using GLP-1RA. However, their effects in KTRs remain largely unstudied. This thesis uses a large-scale Electronic Health Record (EHR) database to perform a retrospective cohort analysis of the association between GLP-1RA use and kidney and cardiovascular outcomes among stable KTRs. Primary outcomes include all-cause mortality, major adverse kidney events (MAKE), and major adverse cardiac events (MACE). Among stable KTRs, GLP-1RA users show reduced risk for all-cause mortality (adjusted hazard ratio [aHR]: 0.45; 95% confidence interval [CI]: 0.32-0.62) and MAKE (aHR: 0.69; 95% CI: 0.58-0.81), but no significant difference for MACE (aHR: 0.84; 95% CI: 0.67-1.05). In addition, users show increased risk for irritable bowel syndrome (IBS) (aHR: 2.11; 95% CI: 1.07-4.15) and urinary tract infection (UTI) (aHR: 1.53; 95% CI: 1.27-1.85). These results indicate the potential of GLP-1RA to reduce mortality and adverse kidney outcomes, and to increase IBS and UTI risk, in KTRs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hardware Acceleration for Real-Time Compression of 3D Gaussians</title>
<link href="https://hdl.handle.net/1721.1/162745" rel="alternate"/>
<author>
<name>Kahler, Kailas B.</name>
</author>
<id>https://hdl.handle.net/1721.1/162745</id>
<updated>2025-09-19T04:49:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hardware Acceleration for Real-Time Compression of 3D Gaussians
Kahler, Kailas B.
3D Gaussian Splatting (3DGS) is a technique for novel view synthesis, where images of a scene from a specific viewpoint are generated using images from different viewpoints, that has gained popularity for its reduced computational overhead, resulting in faster training and rendering times compared to other methods like Neural Radiance Fields (NeRFs). Its applications outside of strictly novel view synthesis have also been explored, with monocular simultaneous localization and mapping (SLAM) in robotics being an emergent application. However, because of limited on-board battery capacity, the computer hardware used in small robots is much less capable than the high-powered GPUs that the 3DGS algorithm was originally developed on, having less compute as well as lower memory capacity and bandwidth. While there has been work developing specialized compute for the rendering pipeline of 3DGS, memory remains an obstacle to deployment. The Gaussian map can occupy from 1 MB to 700 MB in memory, which is too large to store on-chip within micro-robots and means that moving Gaussians from memory to compute can dominate power consumption. While there has been prior work on algorithms for compressing Gaussian representations, they are not yet capable of running in real time on the hardware present in these robots, as would be required for SLAM. Thus, this thesis explores the limits of these compression methods on current hardware, resulting in an optimized CUDA implementation with better than 100× the throughput of prior work and achieving real-time operation on workstation-class hardware. However, after concluding that custom hardware is necessary for further improvement, this thesis also presents a hardware accelerator that nears real-time compression performance within a reduced power budget, outperforming an NVIDIA Jetson Orin Nano with 64% higher throughput while using 1/16th of the multipliers and drawing 38% of the power when running at 100 MHz on an AMD UltraScale+ FPGA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Personalization of AI Tutor Based on Knowledge Graphs</title>
<link href="https://hdl.handle.net/1721.1/162744" rel="alternate"/>
<author>
<name>Huang, Sheng</name>
</author>
<id>https://hdl.handle.net/1721.1/162744</id>
<updated>2025-09-19T04:49:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Personalization of AI Tutor Based on Knowledge Graphs
Huang, Sheng
Personalized tutoring, tailored to the specific knowledge and needs of individual students, has been shown to significantly enhance academic performance. Research by Schmidt and Moust, for example, highlights that tutors who engage with students on a personal level are more effective in guiding them toward higher academic achievement [1]. Inspired by this principle, the Axiom group at the MIT Media Lab developed an AI tutor for their Intro to Programming courses. The initial version of the tutor, RAGS, relied on analyzing past conversations between students and the tutor, as well as course content, to generate personalized responses. While this approach showed promise, it faced scalability challenges, such as the need to store an ever-growing volume of conversation history and the risk of exceeding token limits in prompt context windows. Additionally, the model occasionally struggled with over-generalization, particularly when responding to vague questions based solely on historical interactions. To address these limitations, this thesis introduces a new approach: a student knowledge graph. Rather than relying on an expanding archive of past conversations, the knowledge graph uses weighted nodes to represent a student’s understanding of each concept. A weight of -8 indicates subpar understanding, while a weight of 8 signifies mastery. After pre-processing the course data, the graph maintains a fixed size, eliminating the need for additional storage over time. This innovation solves two critical problems: &#13;
1. Scalability: By leveraging a fixed-size PostgreSQL database, the student knowledge graph avoids the storage challenges associated with saving endless conversation histories. &#13;
2. Improved Personalization: Instead of sifting through old conversations, the tutor uses concept weights to generate more precise and contextually relevant responses, even to vague questions. &#13;
Testing and evaluation of the implemented system demonstrate its effectiveness in both scalability and response quality. Over 60% of survey participants reported that the knowledge graph-enhanced tutor provided clearer and more relevant guidance, particularly when building on concepts they already understood. Additionally, over 80% of respondents noted improvements in the tutor’s ability to address weak areas and provide targeted practice, especially when preparing for quizzes or exams.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation</title>
<link href="https://hdl.handle.net/1721.1/162743" rel="alternate"/>
<author>
<name>Hadjiivanov, Michael D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162743</id>
<updated>2025-09-19T04:49:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation
Hadjiivanov, Michael D.
Large language models (LLMs) excel at generating fluent answers but are prone to hallucination when the prompt fails to anchor them to verifiable facts. Retrieval-augmented generation (RAG) mitigates this risk, yet existing graph-based retrievers either return bloated neighborhoods or incur prohibitive latency on large knowledge graphs (KGs). We introduce SPIRAL—Supervised Prior + Iterative Reinforcement with Adaptive Labelling—a lightweight two-stage framework that constructs compact, tree-shaped evidence subgraphs. This differs from previous work in its use of a trained, iterative policy network built on top of a prior over triples, delivering improved performance on multi-hop question answering tasks. Stage 1 trains a single-label GLASS-GNN on shortest-path heuristics, producing frozen, question-aware node embeddings at negligible runtime cost with significant local topology awareness around question entities. Stage 2 layers a GLASS policy—which re-labels the partial subgraph at each step—on top of these embeddings and optimizes it with proximal policy optimization. The policy scores only the 1-hop frontier, enabling sub-second inference even on million-edge graphs. On the multi-hop KGQA benchmark WebQSP, SPIRAL attains 0.95 triple recall and 0.97 answer recall while retrieving at most 50 triples—doubling the sampling efficiency of the strongest prior work. Coupled with Llama 3.1-8B, the retrieved trees boost Hit@1 by 2.5 % over SubgraphRAG. Ablation studies confirm that adaptive labels are critical for multi-hop reasoning. SPIRAL demonstrates that accurate and concise retrieval is achievable without resorting to massive models or expensive graph crawls, opening the door to real-time, KG-grounded assistants on modest hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL</title>
<link href="https://hdl.handle.net/1721.1/162742" rel="alternate"/>
<author>
<name>Choi, Justin J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162742</id>
<updated>2025-12-09T18:27:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL
Choi, Justin J.
This work examines the current state of using large language models (LLMs) to solve Text-to-SQL tasks on databases in an enterprise setting. Benchmarks on publicly available datasets do not fully capture the difficulty and complexity of this task in a real-world, enterprise setting. This study examines the critical steps needed to work with enterprise data as well as using knowledge-injection to enhance the performance of LLMs on Text-to-SQL tasks. We begin by evaluating the baseline performance of LLMs on enterprise databases, revealing that a predominant source of failure stems from a lack of domain-specific knowledge. To improve performance, we explore knowledge-injection: the process of incorporating internal and external knowledge. Internal knowledge consists of database-specific information such as join logic, while external knowledge refers to institutional acronyms or group names. We present a hybrid retrieval pipeline that combines embedding-based and text-based search with LLM-guided ranking to supply models with relevant external knowledge during Text-to-SQL generation. We evaluate the impact of knowledge-injection by testing the performance of LLMs on the table retrieval task after being augmented with appropriate external knowledge. We demonstrate that knowledge-injection significantly improves accuracy on table retrieval using BEAVER, an enterprise-level Text-to-SQL benchmark. Our findings highlight the importance of domain-specific knowledge-injection and retrieval augmentation in bringing LLMs closer to deployment in enterprise-grade database systems, as well as common failure modes that occur when executing enterprise Text-to-SQL.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators</title>
<link href="https://hdl.handle.net/1721.1/162741" rel="alternate"/>
<author>
<name>Chomphoochan, Thanadol</name>
</author>
<id>https://hdl.handle.net/1721.1/162741</id>
<updated>2025-09-19T04:49:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators
Chomphoochan, Thanadol
As single-thread performance plateaus, modern systems increasingly rely on parallelism to scale throughput. Yet, efficiently managing concurrency—particularly in transactional systems—remains a major bottleneck. This thesis explores the feasibility of accelerating transaction scheduling via hardware, leveraging FPGAs to offload scheduling logic from the CPU. We revisit Puppetmaster, a hardware transaction scheduler, and present a redesigned architecture emphasizing deployability, modularity, and evaluation. We implement both an optimized software baseline and a Bluespec-based hardware design, evaluating their performance across synthetic YCSB-style workloads with varying contention levels. Our hardware prototype demonstrates competitive throughput, achieving over 90% of peak throughput even under high-contention workloads. These results validate the potential of transaction scheduling as a target for hardware acceleration and highlight promising directions for future hybrid hardware-software concurrency-control systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Exploration of Thermodynamic Models of Geological CO₂ Injection</title>
<link href="https://hdl.handle.net/1721.1/162740" rel="alternate"/>
<author>
<name>Edelman, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/162740</id>
<updated>2025-09-19T04:49:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Exploration of Thermodynamic Models of Geological CO₂ Injection
Edelman, Jonathan
This thesis investigates the behavior of carbon dioxide flow in porous media through high-fidelity computational modeling, with a specific focus on the impact of the Span-Wagner equation of state (EOS). Accurate modeling of CO₂ transport in subsurface environments is essential for applications such as carbon capture and storage (CCS). We model the entire flow from injection, down through a vertical pipe, and into a porous reservoir. To this end, we utilize the MOOSE (Multiphysics Object-Oriented Simulation Environment) framework developed by Idaho National Laboratory to perform finite element simulations. A key contribution of this work is the successful coupling of a porous rock domain with a one-dimensional pipe flow simulation in Julia, enabling a broader representation of injection scenarios. The study examines how the thermodynamic accuracy of the Span-Wagner EOS influences flow characteristics, in comparison to the ideal gas EOS. Through a series of coupled pipe-reservoir simulations, we assess variations in pressure and density as CO₂ is injected from the pipe into the porous medium. The model can detect phase change conditions, allowing us to predict the maximum mass flux that can be achieved below the liquefaction threshold, as defined by the binodal curve in the CO₂ phase diagram at a given temperature. The results highlight the importance of EOS selection in predicting multiphase flow behavior, especially under conditions relevant to geological storage. Furthermore, we find that the ideal gas EOS underpredicts injection rates under the same conditions. This integrated modeling approach advances the understanding of thermodynamic effects in coupled subsurface flow systems and supports the development of reliable tools for large-scale carbon storage applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data</title>
<link href="https://hdl.handle.net/1721.1/162739" rel="alternate"/>
<author>
<name>Dahleh, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/162739</id>
<updated>2025-09-19T04:49:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data
Dahleh, Omar
This thesis presents a novel approach to the de-identification of clinical notes from Organ Procurement Organization (OPO) records, leveraging advanced natural language processing (NLP) methodologies. Specifically, we employ in-context learning using large language models (LLMs) to effectively identify and remove protected health information (PHI), aiming to maintain high data utility post-redaction. Our work systematically evaluates the performance of the LLM-based method against established baseline techniques, including traditional Named Entity Recognition (NER) and rules-based systems. Through a series of experiments, we assess the strengths and limitations of each method regarding precision and recall. This work will contribute to a uniquely extensive dataset, comprising millions of de-identified OPO clinical notes, which will facilitate ethical healthcare research and enhance compliance with contemporary data protection standards. Ultimately, this dataset holds significant potential for improving processes and outcomes within the field of organ donation and procurement.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Novel Energy Catalyst Discovery Using Automation, Active Learning, and AI</title>
<link href="https://hdl.handle.net/1721.1/162738" rel="alternate"/>
<author>
<name>Ren, Zhichu</name>
</author>
<id>https://hdl.handle.net/1721.1/162738</id>
<updated>2025-09-19T03:05:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Accelerating Novel Energy Catalyst Discovery Using Automation, Active Learning, and AI
Ren, Zhichu
The discovery of novel energy catalysts is a critical challenge in the field of materials science. Traditional methods for materials discovery are labor-intensive and time-consuming, hindering the rapid development of new catalysts. To address this issue, we introduce a comprehensive approach that integrates automation, active learning, and artificial intelligence (AI) to accelerate the discovery process.&#13;
&#13;
Our approach introduces the Copilot for Real-world Experimental Scientist (CRESt) system, which combines a large multimodal model (LMM) with an active learning-guided robotic system. CRESt streamlines the workflow of composition selection, high-throughput materials synthesis, electrochemical screening and characterization for the optimization of high-entropy alloy catalysts. The system allows researchers, regardless of their programming skills, to interact with the robotic platform using voice commands, making it highly accessible and user-friendly.&#13;
&#13;
We demonstrate the effectiveness of our approach by experimentally exploring over 700 chemistries and 1300 samples. The optimized 8-dimensional alloy (Pd-Pt-Cu-Au-Ir-Ce-Nb-Cr) achieved approximately 10 times the cost-specific performance of commercial catalysts for the direct formate fuel cell. This breakthrough highlights the potential of our approach to accelerate the discovery of novel energy catalysts across various domains.&#13;
&#13;
Furthermore, we discuss the challenges and considerations associated with implementing active learning in real-world experiments. We provide guidance on addressing model-centric and data-centric issues, such as model customization and data irreproducibility, to ensure the successful application of active learning in materials research projects.&#13;
&#13;
Looking ahead, we explore the role of human experimentalists in the era of AI-driven discovery. While AI and automation are poised to transform many aspects of experimental research, we argue that human experimentalists remain irreplaceable for now. Our ability to exercise critical thinking and engage in complex real-world interactions sets us apart from abiotic intelligence. However, as AI becomes more deeply integrated into research practices, the experimental landscape is bound to undergo significant changes.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient ML Inference via Matrix-Vector Approximations</title>
<link href="https://hdl.handle.net/1721.1/162737" rel="alternate"/>
<author>
<name>Li, Daniel D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162737</id>
<updated>2025-09-19T04:49:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient ML Inference via Matrix-Vector Approximations
Li, Daniel D.
Efficient inference is a growing priority in deep learning, where large model sizes and increasing deployment demands pose challenges for latency, memory, and energy usage. This thesis presents a unified framework for evaluating approximation methods that accelerate inference by modifying weight matrices. We model each method as a function f_c(A) that approximates a weight matrix A under a compression rate c, and assess its impact on both matrix–vector accuracy and downstream task performance. We conduct empirical evaluations across two representative models, AlexNet on CIFAR10 and DistilBERT on AG News, comparing quantization, sparsification, and low-rank approximations. Our analysis spans four perspectives: (1) how different methods trade off ℓ₂ error and compression, (2) how weight statistics and input distributions shape error, (3) how well ℓ₂ error predicts classification accuracy, and (4) how idealized compression differs from real memory savings. We find that sparsification offers a strong trade-off between storage and accuracy, particularly because it preserves task-relevant structure in the weights. We also show that ℓ₂ error is not always a reliable proxy for accuracy, especially when input data lie on low-dimensional manifolds. These results suggest that approximation quality must be evaluated not only by global distortion metrics, but also by how the method interacts with model structure and input distributions. Our findings offer practical guidance for deploying efficient deep learning models and shed light on how compression affects performance in real-world settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning</title>
<link href="https://hdl.handle.net/1721.1/162736" rel="alternate"/>
<author>
<name>Lee, Jimin</name>
</author>
<id>https://hdl.handle.net/1721.1/162736</id>
<updated>2025-09-19T04:49:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning
Lee, Jimin
Effective reasoning often requires more than text or language. It requires visualizing, drawing, gesturing, and interacting for both humans and artificial intelligence (AI). Specifically in educational subjects, such as geometry and graphs, visual tools like auxiliary annotations and drawings can greatly help students understand abstract theories. This thesis explores and suggests how multimodal interaction between humans and AI helps humans engage with the system more naturally and effectively, leading to improved problem-solving in mathematical settings. Recent large multimodal models (LMMs) have the ability to facilitate collaborative reasoning by supporting textual, visual, and interactive inputs, diversifying methods of communication between humans and AI. Utilizing such advancements, this thesis also dives into the development of Interactive Sketchpad, a tutoring system that combines language-based explanations with interactive visualizations to enhance learning. It also reviews findings from user studies with Interactive Sketchpad, demonstrating that multimodality contributes to user task comprehension and engagement levels. Together, these contributions can reframe the role of AI in education as a visual and interactive collaborator that supports deeper reasoning rather than simply providing answers. Furthermore, this work demonstrates the potential of multimodal human-AI systems in fostering engagement and scaling personalized, visual learning across domains.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast and Scalable Subgraph Learning</title>
<link href="https://hdl.handle.net/1721.1/162735" rel="alternate"/>
<author>
<name>Liang, Derrick</name>
</author>
<id>https://hdl.handle.net/1721.1/162735</id>
<updated>2025-09-19T04:49:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fast and Scalable Subgraph Learning
Liang, Derrick
Graph Neural Networks (GNNs) are a powerful framework for learning over structured data, enabling predictive modeling across domains such as bioinformatics, recommendation systems, and financial fraud detection. While scalable systems like SALIENT++ have advanced the training of node-level GNN tasks at industrial scale, they do not support an emerging class of workloads: subgraph classification, which is increasingly common in real-world applications. Prior implementations address this gap by modifying both the data pipeline and the model architecture—but at the cost of composability, creating tightly coupled systems that slow further development. This thesis introduces MOSAIC, a lightweight data transformation that reframes subgraph classification as nodewise prediction by augmenting the graph with representative nodes. This approach enables direct compatibility with SALIENT++ and other nodewise systems while decoupling workload format, dataloader design, and model architecture. I demonstrate that MOSAIC enables modular reuse of architectures like GraphSAGE and subgraph-aware components from GLASS, while preserving SALIENT++’s system-level scalability. On the large-scale Elliptic2 dataset, this integration reduces training memory usage by 2.8× and epoch runtime from over 90 minutes to 0.4 seconds—while improving classification performance. I implement MOSAIC as a succinct (&lt;100-line), reusable preprocessing script, enabling integration of the GLASS architecture into SALIENT++ in &lt;10 lines of code, compared to Wang et al.’s tightly coupled 500+ line design. These results highlight the feasibility of scalable, composable experimentation for subgraph learning tasks in high-performance GNN systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Scene Editing via Semantically Trained 3D Gaussians</title>
<link href="https://hdl.handle.net/1721.1/162734" rel="alternate"/>
<author>
<name>Lam, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/162734</id>
<updated>2025-09-19T04:49:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dynamic Scene Editing via Semantically Trained 3D Gaussians
Lam, Jordan
Image-based 3D scene reconstruction continues to be a challenge as it involves solving both the sufficient 3D representation problem and the 3D reconstruction itself. One approach to tackle the rendering problem is 3D Gaussian Splatting because of its potential to produce fast and realistic renders via a 3D Gaussian representation. With many applications in the entertainment industry, there is motivation to use 3D Gaussian Splatting not only for reconstructing dynamic 3D scenes but also for editing them. However, extending the problem to dynamic 3D scenes proves to be a challenging task as it involves discerning the correct representation of a 3D scene while maintaining the capability to render in real time. State-of-the-art work has proposed methods that reconstruct dynamic scenes or edit static scenes, but the problem of editing dynamic scenes is still underexplored. This thesis analyzes the feasibility of editing semantically trained Gaussians for dynamic 3D scene editing. By training 3D Gaussians to represent the semantics across the time steps of a dynamic 3D scene, these primitives can be combined with an image editing pipeline to perform real-time, realistic 3D scene editing. Results show that editing segmented 3D Gaussians produces higher-quality and more efficient renders as compared to editing without segmentation. However, when evaluated for mainstream applications, results show the impracticality of this pipeline and draw focus to memory and editing limitations that need to be further researched for future advances in 3D Gaussian Splatting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized AI for Methylation Data with Applications to Precision Health</title>
<link href="https://hdl.handle.net/1721.1/162733" rel="alternate"/>
<author>
<name>Jamee, Mehrab S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162733</id>
<updated>2025-09-19T04:49:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralized AI for Methylation Data with Applications to Precision Health
Jamee, Mehrab S.
Advances in precision health rely on integrating large-scale genomic data to identify biomarkers and predict health outcomes. However, sharing sensitive patient data between institutions like hospitals poses significant privacy and security challenges, limiting collaboration and the development of robust machine learning models. This thesis proposes a decentralized artificial intelligence framework for analyzing DNA methylation data, enabling institutions to collaboratively train models without exchanging sensitive information. By taking advantage of generative deep learning techniques and federated learning paradigms, the framework aims to impute missing biomarkers in fragmented datasets and improve the accuracy of downstream predictive tasks, such as predicting chronological age, mortality, and cancer outcomes. Two intermediate models are implemented and evaluated in this thesis. The first predicts age from DNA methylation data and can be used to evaluate the imputation model. The second is an imputation model that uses a conditional autoencoder architecture to reconstruct missing biomarker data in clinical datasets; it is designed to take advantage of contextual methylation embeddings made available by recently published pretrained epigenomics foundation models. This work seeks to advance the use of decentralized AI in epigenomics, with the ultimate goal of improving personalized healthcare while preserving patient privacy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Smallholder Field Delineation</title>
<link href="https://hdl.handle.net/1721.1/162732" rel="alternate"/>
<author>
<name>Janjigian, Lily T.</name>
</author>
<id>https://hdl.handle.net/1721.1/162732</id>
<updated>2025-09-19T04:49:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring Smallholder Field Delineation
Janjigian, Lily T.
Accurate crop field delineation from satellite imagery is a critical component of agricultural monitoring. However, most existing models are developed and evaluated in large-scale, industrial agricultural regions, where field boundaries are relatively regular and high-quality annotated data is more readily available. In contrast, smallholder regions—where fields are smaller, more irregularly shaped, and often lack precise geospatial labels—remain underrepresented in both data and model performance. This thesis investigates model architectures, loss functions, and learning paradigms for improving segmentation performance in smallholder settings. Using datasets from Austria, India, and Rwanda, we evaluate several model configurations including ResUNet++ with Dice+BCE and Tanimoto+BCE losses, a meta-learned ResUNet++ using Model-Agnostic Meta-Learning (MAML), and SAM2 ViT-H, a large vision transformer released by Meta, evaluated in a zero-shot setting. We introduce a data processing pipeline that converts vector field boundaries from the FTW dataset into high-resolution image–mask pairs suitable for supervised learning. Quantitative and qualitative results reveal that models trained on industrial-scale data perform poorly in smallholder regions without adaptation. SAM2 exhibits strong zero-shot performance, especially on larger fields, while ResUNet++ models trained directly on India perform more consistently across small- to medium-sized fields. MAML yielded underwhelming performance under resource constraints, highlighting the need for further tuning. These findings underscore the importance of geographically diverse, well-aligned training data and support the case for developing globally representative agricultural segmentation datasets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>You Only Look Twice: An Ensemble Deep Learning Model for Wildfire Detection Using Terrestrial Camera Networks</title>
<link href="https://hdl.handle.net/1721.1/162731" rel="alternate"/>
<author>
<name>Jones, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162731</id>
<updated>2025-09-19T04:49:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">You Only Look Twice: An Ensemble Deep Learning Model for Wildfire Detection Using Terrestrial Camera Networks
Jones, John M.
Wildfires represent a growing global threat that requires rapid detection and response to minimize environmental damage, economic losses, and human casualties. In the United States, California stands out as a particularly common wildfire hot spot. Recent fire seasons have shattered historical records and been particularly devastating. This work investigates innovative methods for classifying and localizing wildfires through terrestrial cameras positioned on elevated terrain, aimed at improving early detection capabilities and response times while maintaining computational efficiency and reliability for the U.S. Space Force in Southern California. We present YOL2, a novel ensemble approach that combines a fine-tuned ConvNeXt Convolutional Neural Network incorporating a Dynamic Tanh normalization layer with a fine-tuned YOLO11 model for precise localization. Using a comprehensive dataset of 33,636 time-sequenced images from terrestrial cameras across the United States and Europe, our system achieves 98% fire detection accuracy and 55% localization mean average precision [50:95]. The implementation of Dynamic Tanh normalization—applied for the first time in wildfire detection—enhances computational efficiency without sacrificing performance. The images used capture the spread of incipient fires over time, with most containing bounding boxes denoting the approximate location of fire, allowing our system to identify fires quickly while minimizing false positives. Importantly, our spatiotemporal system operates effectively without requiring individual models to rely on multiple time steps as input, enabling modular component replacement and adaptation. The use of pan, tilt, and zoom cameras in concert with our YOLO model provides a more computationally efficient confirmation of fire than alternative methods, showing that extracting better results from less information is possible. 
Beyond wildfire applications, the YOL2 ensemble methodology demonstrates profound implications for remote sensing more broadly. This work establishes a foundation for highly efficient visual detection systems applicable across numerous domains requiring rapid and accurate object identification and localization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards transparent representations: on internal structure and external world modeling in LLMs</title>
<link href="https://hdl.handle.net/1721.1/162730" rel="alternate"/>
<author>
<name>Hariharan, Kaivalya</name>
</author>
<id>https://hdl.handle.net/1721.1/162730</id>
<updated>2025-09-19T04:49:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards transparent representations: on internal structure and external world modeling in LLMs
Hariharan, Kaivalya
Large language models (LLMs) generalize far beyond their training distribution, enabling impressive downstream performance in domains vastly different from their pretraining data. In this thesis, we develop a data-centric view on machine learning. We suggest that the deep generalization of LLMs is best understood through studying the relationships between the four fundamental components of this data generalization: pretraining data, test-time inputs, model outputs, and internal structure. Of these, we present two full research studies characterizing test-time inputs and internal structure. Chapter 1 develops the data-centric view of machine learning and outlines the thesis. Chapter 2 presents Breakpoint, a method of generating difficult coding tasks for models at a large scale that attempts to disambiguate the factors that make test-time problems difficult. Chapter 3 analyzes the structure of gradient-based jailbreaks (GBJs) in LLMs. We argue that even though GBJs are more out of distribution than random text, they induce a low-rank, structured change in models. Finally, Chapter 4 discusses the recent rise of reasoning models and proposes some lines of future work in the data-centric view toward developing a more robust understanding of LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools</title>
<link href="https://hdl.handle.net/1721.1/162729" rel="alternate"/>
<author>
<name>Hong, Stephen S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162729</id>
<updated>2025-09-19T04:49:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools
Hong, Stephen S.
Optical tracking technology in sports has advanced rapidly in recent years, enabling new opportunities for data-driven analysis and tools to enhance the game. This study presents a framework for processing and analyzing a new skeletal tracking dataset collected from NBA basketball games. The methodology includes biomechanical joint validation, anomaly detection, and region-based consistency analysis to assess the integrity of player motion data. Joint movement anomalies are used to detect tracking errors, while court region and stadium-level evaluations help identify where the optical tracking system may be underperforming. These patterns can guide data providers toward specific areas that require refinement, offering a clearer starting point for improving system accuracy. After cleaning the dataset of 117 NBA games, two action recognition models—a transformer-based model and a temporal graph neural network—are implemented to classify player actions, specifically dribbling, passing, shooting, and rebounding, from sequences of skeletal tracking frames. The objective is to establish a baseline for developing tools to support officiating decisions in the NBA. By leveraging spatiotemporal representations of joint motion, this work improves the reliability of skeletal tracking data and contributes to the advancement of automated decision support in professional sports officiating.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Enhanced Proposals for PINN-Based Neural Sampler Training</title>
<link href="https://hdl.handle.net/1721.1/162728" rel="alternate"/>
<author>
<name>Erives, Ezra</name>
</author>
<id>https://hdl.handle.net/1721.1/162728</id>
<updated>2025-09-19T04:49:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Enhanced Proposals for PINN-Based Neural Sampler Training
Erives, Ezra
Sampling from distributions whose density is known up to a normalizing constant is an important problem with a wide range of applications including Bayesian posterior inference, statistical physics, and structural biology. Annealing-based neural samplers seek to amortize sampling from unnormalized distributions by training neural networks to transport a family of densities interpolating from source to target. A crucial design choice in the training phase of such samplers is the proposal distribution by which locations are generated at which to evaluate the loss. Previous work has obtained such a proposal distribution by combining a partially learned vector field with annealed Langevin dynamics. However, isolated modes and other pathological properties of the annealing path imply that such proposals achieve insufficient exploration and thereby lower performance post training. In this work we extend existing work and characterize new families of proposals based on controlled Langevin dynamics. In particular, we propose continuously tempered diffusion samplers, which leverage exploration techniques developed in the context of molecular dynamics to improve proposal distributions. Specifically, a family of distributions across different temperatures is introduced to lower energy barriers at higher temperatures and drive exploration at the lower temperature of interest. We additionally explore proposals based on Langevin dynamics involving non-Newtonian kinetic energies. We empirically validate improved sampler performance driven by extended exploration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Sketch to Stage: Tools for Prototyping and Exporting Collaborative DMIs on the Web</title>
<link href="https://hdl.handle.net/1721.1/162727" rel="alternate"/>
<author>
<name>Luchko, Yaro</name>
</author>
<id>https://hdl.handle.net/1721.1/162727</id>
<updated>2025-09-19T04:49:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Sketch to Stage: Tools for Prototyping and Exporting Collaborative DMIs on the Web
Luchko, Yaro
This thesis presents tools and ideas for prototyping and exporting collaborative digital music instruments (DMIs) on the web, the primary purpose of which is to lower the barrier to making music and to enable easier collaboration. This is done in the context of the Creativitas website, which has become a tool of the MIT 21M.080 "Introduction to Music Technology" class to learn about music technology and audio on the web, and a tool for FaMLE (the Fabulous MIT Laptop Ensemble) to use in live performances. The website allows creators to execute code within an editor code box and partake in a practice known as live coding, ultimately creating both sound and visuals. Audio is primarily created with the Tone.js interactive web audio framework, and visuals are drawn on a provided canvas using p5.js. This thesis extends the Creativitas website by providing functionality for exporting the written code as a standalone website. The exported standalone websites serve as DMIs, with standard controls such as volume, tempo, and start and stop buttons. Furthermore, we discuss and implement strategies for synchronizing timing and instrument values. This includes state-of-the-art strategies, as well as ideas for creating extendable interfaces that can include more strategies as they are developed. We end with two examples of exported DMIs, which can be effectively used in performances.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning</title>
<link href="https://hdl.handle.net/1721.1/162726" rel="alternate"/>
<author>
<name>Lei, Si Liang</name>
</author>
<id>https://hdl.handle.net/1721.1/162726</id>
<updated>2025-09-19T04:49:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning
Lei, Si Liang
Background. Programmable expressive features—such as speech, facial expressions, and chatbot-style dialogue—are often promoted as tools to enhance engagement in educational robotics. While prior research shows benefits in socially-oriented tasks like storytelling or group collaboration, it remains unclear how student-controlled expressive blocks affect learning when the task itself is non-social. This study isolates the impact of such features in a context where expressiveness is not instructionally required. Method. We conducted a controlled, two-cohort study with 41 middle school students (ages 10–12) during a one-day AI-and-robotics workshop using the Doodlebot platform. Students in the experimental group had access to optional blocks enabling the robot to speak, emote, and use GPT-based responses. These features were hidden from the control group. All participants completed identical programming tasks (e.g., maze navigation, visual classification) that did not require social interaction. Data sources included pre/post surveys, facilitator notes, and student code. We applied the Mann–Whitney U test [1, 2] and reflexive thematic analysis [3, 4] to examine outcomes. Results. The expressive condition showed no significant gains in programming confidence or peer trust, but performed significantly worse on the post-workshop concept quiz (p = .007, r = .41). Qualitative data revealed that students in this group often used expressive blocks for entertainment rather than learning, leading to distraction, off-task behavior, and increased reliance on adult facilitation. Contributions. This study contributes (i) empirical evidence on the limitations of robot expressiveness in non-social learning contexts, (ii) a mixed-methods protocol for analyzing classroom robot deployments, and (iii) design guidance for aligning robot behavior with pedagogical intent. Implications. Expressiveness in educational robots should be contextually deployed—not assumed beneficial by default. 
In technical, goal-driven tasks that do not involve social reasoning, unscaffolded expressiveness may introduce cognitive overhead or divert attention. We propose a “dial-a-sociality” model, where robot behavior can be flexibly tuned to match the demands of the learning environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar</title>
<link href="https://hdl.handle.net/1721.1/162724" rel="alternate"/>
<author>
<name>Kuka, Adrian</name>
</author>
<id>https://hdl.handle.net/1721.1/162724</id>
<updated>2025-09-19T04:49:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar
Kuka, Adrian
The past few years have witnessed growing interest in using millimeter-wave signals for non-line-of-sight (NLOS) perception tasks, with applications in robotics, augmented reality, and smart homes. However, existing systems suffer from a lack of large mmWave datasets, resulting in limited accuracy and generalizability compared to their line-of-sight, camera-based counterparts. We present the design, implementation, and evaluation of mmSim, a new, high-speed millimeter-wave (mmWave) simulator capable of producing large synthetic datasets to help drive the field of mmWave-based NLOS perception. mmSim introduces two main contributions to improve the speed over existing mmWave simulators. First, it pre-selects the areas of the object that will produce reflections toward each simulated antenna location, allowing it to minimize subsequent computation. Second, it introduces a coarse-to-fine approach that allows early, less critical steps to operate at lower resolutions, while maintaining the high resolution in later steps required for high-accuracy images. These techniques, combined with other performance optimizations, allow mmSim to achieve an over 24x improvement in speed over state-of-the-art mmWave simulators.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards AI Safety via Interpretability and Oversight</title>
<link href="https://hdl.handle.net/1721.1/162723" rel="alternate"/>
<author>
<name>Kantamneni, Subhash</name>
</author>
<id>https://hdl.handle.net/1721.1/162723</id>
<updated>2025-09-19T04:49:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards AI Safety via Interpretability and Oversight
Kantamneni, Subhash
In this thesis, we advance AI safety through mechanistic interpretability and oversight methodologies across three key areas: mathematical reasoning in large language models (LLMs), the validity of sparse autoencoders, and scalable oversight. First, we reverse-engineer addition within mid-sized LLMs and discover that LLMs represent numbers as helices. We demonstrate that LLMs perform addition via the manipulation of these helices using a "Clock" algorithm, providing the first representation-level explanation of mathematical reasoning in LLMs, verified through causal interventions on model activations. Next, we rigorously evaluate sparse autoencoders (SAEs), a popular interpretability tool, by testing their effectiveness on the downstream task of probing. We test SAEs under challenging probing conditions, including data scarcity, class imbalance, label noise, and covariate shift. While SAEs occasionally outperform baseline methods, they fail to consistently enhance task performance, underscoring a potentially critical limitation of SAEs. Lastly, we introduce a quantitative framework to evaluate scalable oversight - a promising idea where weaker AI systems supervise stronger ones - as a function of model intelligence. Applying our framework to four oversight games ("Mafia," "Debate," "Backdoor Code," and "Wargames"), we identify clear scaling patterns and extend our findings through a theoretical analysis of Nested Scalable Oversight (NSO), deriving conditions for optimal oversight structures. Together, these studies advance our understanding of AI interpretability and alignment, providing insights and frameworks to progress AI safety.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metagradient Descent: Differentiating Large-Scale Training</title>
<link href="https://hdl.handle.net/1721.1/162722" rel="alternate"/>
<author>
<name>Chen, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162722</id>
<updated>2025-09-19T04:49:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metagradient Descent: Differentiating Large-Scale Training
Chen, Benjamin
A major challenge in training large-scale machine learning models is configuring the training process to maximize model performance, i.e., finding the best training setup from a vast design space. In this work, we unlock a gradient-based approach to this problem. We first introduce an algorithm for efficiently calculating metagradients -- gradients through model training -- at scale. We then introduce a "smooth model training" framework that enables effective optimization using metagradients. With metagradient descent (MGD), we greatly improve on existing dataset selection methods, outperform accuracy-degrading data poisoning attacks by an order of magnitude, and automatically find competitive learning rate schedules.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A simplified approach to calculating personalized estimates for electric vehicle charging delays</title>
<link href="https://hdl.handle.net/1721.1/162721" rel="alternate"/>
<author>
<name>Chen, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/162721</id>
<updated>2025-09-19T04:49:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A simplified approach to calculating personalized estimates for electric vehicle charging delays
Chen, Helen
In the past decade, electric vehicles (EVs) have gained traction as a cleaner alternative to internal combustion engine vehicles, commonly referred to as gas-powered vehicles. To promote EV adoption, the government has implemented various regulations and incentives to support the transition to cleaner transportation. However, EV adoption in the United States has progressed more slowly than expected, with EVs accounting for less than 10 percent of new vehicle sales in 2023. Recent surveys indicate that a significant barrier is the perceived inconvenience and uncertainty surrounding EV charging, particularly the additional time required to charge during active use, which we call charging delay. Currently, there exist some models for estimating these charging delays, but these models require users to input a significant amount of information, such as their daily driving schedules, locations of charging stations, and exact distances of trips taken each year, which many users may not even remember. These more complex models are likely to overwhelm users, especially those who may be entirely new to EVs. To fill this gap, this thesis introduces a simplified model for estimating personalized annual EV charging delay using a set of easy-to-provide inputs, including typical driving behavior and access to home and work charging. The model logic captures delay from both routine usage, such as weekly driving patterns or typical trips, and occasional, high-energy long-distance trips, which, while not routine, are still important to account for. For weekly trips, the model considers four scenarios based on combinations of home and work charging access to determine driving and charging schedules. For long-distance travel, the model uses data from the 2022 National Household Travel Survey (NHTS) and performs multiple iterations of bootstrap resampling to create synthetic distributions of long-distance trips within a year. 
Data related to individual routine vehicle usage and charging delay is unavailable, so we are unable to validate the model’s performance through accuracy calculations. Instead, we performed a one-at-a-time sensitivity analysis to better understand how charging delay is affected by different factors. We found that access to private charging, such as home or work charging, improves charging delay robustness for regular weekly trips, with the exception that relying solely on work charging on workdays can cause stepwise increases in non-workday delays. Additionally, long-distance trip delays are not affected by private charging access and follow a stepwise pattern based on vehicle range. In general, the simplified approach presented in this thesis offers a more accessible way for current and prospective EV owners to clearly understand their own expected experience of EV ownership.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards</title>
<link href="https://hdl.handle.net/1721.1/162720" rel="alternate"/>
<author>
<name>Li, Zhening</name>
</author>
<id>https://hdl.handle.net/1721.1/162720</id>
<updated>2025-12-05T17:48:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards
Li, Zhening
Skills are temporal abstractions that are intended to improve reinforcement learning (RL) performance through hierarchical RL. Despite our intuition about the properties of an environment that make skills useful, there has been little theoretical work aimed to characterize these properties precisely. This work studies the utility of skills in sparse-reward environments with a discrete state space and finite action space. We show, both theoretically and empirically, that RL performance gains from skills are worse in environments where successful trajectories are less compressible. In environments with a highly incompressible distribution of successful trajectories, using unexpressive skills such as macroactions will provably worsen RL performance. We hope our findings can guide research on automatic skill discovery and help RL practitioners better decide when and how to use skills.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Supervised ECG Learning for Multimodal Clinical Tasks</title>
<link href="https://hdl.handle.net/1721.1/162719" rel="alternate"/>
<author>
<name>Chen, Peilin</name>
</author>
<id>https://hdl.handle.net/1721.1/162719</id>
<updated>2025-09-19T04:49:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Self-Supervised ECG Learning for Multimodal Clinical Tasks
Chen, Peilin
We present a multimodal clinical AI framework that integrates time series, images, and text to support robust diagnostic reasoning across diverse input combinations. We first introduce ECG-JEPA, a self-supervised encoder pretrained on multiple ECG datasets to learn generalizable time series representations. This unimodal pretraining improves ECG classification, achieving a 23-point AUC gain on the underrepresented Ga dataset. We then align and fuse these ECG embeddings with chest X-rays and EHR text using a vision–language model backbone, enabling end-to-end multimodal inference. Our results show that incorporating ECG signals meaningfully improves diagnostic performance, highlighting the value of multitask time series pretraining and modular fusion for clinical AI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)</title>
<link href="https://hdl.handle.net/1721.1/162718" rel="alternate"/>
<author>
<name>Huang, Roderick W.</name>
</author>
<id>https://hdl.handle.net/1721.1/162718</id>
<updated>2025-09-19T04:49:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)
Huang, Roderick W.
The use of Mean-Variance Portfolio Optimization (MVO) in Modern Portfolio Theory (MPT) has been a long-standing method to guide investment decisions for market-traded assets like stocks and bonds. Recent research shows that portfolio optimization developed using MPT could prove useful in investment decisions for technology projects. Traditionally, empirical data from past projects and statistically driven technology trends are used to predict the risk-return model necessary for MPT. This thesis introduces a new methodology, Optimizing Portfolios in Technologies Investments Methodology with Hierarchy (OPTIM-H), which extends MPT to make investment decisions within a hierarchical organizational structure of technology projects. An integrated dataset was developed to demonstrate this methodology, combining 19,000 data records from Techport and Small Business Innovation Research (SBIR) datasets. The dataset captures investment trends and maturity pathways across 17 taxonomy areas, revealing that most projects begin at Technology Readiness Levels (TRLs) 2–4, with average funding amounts near $300,000. OPTIM-H effectively distinguishes between broader technology groups and their subcategories, showing the impact of community interest on investment decisions. Furthermore, this work investigates k-means clustering as a tool for classifying technology projects for targeted investment, with the analysis identifying seven clusters and achieving a mean utility score of 0.595 with a standard deviation of 0.651.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Canvas with a Large-Scale Social Annotation Platform</title>
<link href="https://hdl.handle.net/1721.1/162717" rel="alternate"/>
<author>
<name>Heiberger, Henry R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162717</id>
<updated>2025-09-19T04:49:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Canvas with a Large-Scale Social Annotation Platform
Heiberger, Henry R.
The last decade has seen a growing interest in the use of collaborative annotation systems, educational tools that allow multiple users to asynchronously comment, highlight, and discuss digital content directly on the source material, transforming traditional classroom readings into a more engaging group activity. Originally developed by MIT CSAIL’s Haystack Group in 2012 under the direction of Professor David Karger, Nota Bene (NB) is a particular collaborative annotation tool that allows students to have annotated online discussions in the margins of textbooks, papers, and even webpages [1]. Though various studies have already proven its ability to succeed in a classroom setting, conversations with key stakeholders have revealed that the tool is missing a key feature found in many other popular collaborative annotation solutions: integration with the Canvas learning management system (LMS) [1–3]. Thus, this work sought to integrate the classroom management features that Canvas provides into the NB platform by supporting Canvas account linking, class importation and roster synchronization, and automatic grade uploading. By doing this, we hoped to improve NB’s quality as a classroom tool, enhancing its value to institutions, encouraging its wider adoption across the academic landscape, and aligning with a much broader trend of creating more integrated, efficient, and user-friendly educational technology solutions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Passive-Scoping as a method for Large Language Model Robustness to Jailbreaks and Adversarial Examples</title>
<link href="https://hdl.handle.net/1721.1/162716" rel="alternate"/>
<author>
<name>Hernandez, Adriano</name>
</author>
<id>https://hdl.handle.net/1721.1/162716</id>
<updated>2025-09-19T04:49:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Passive-Scoping as a method for Large Language Model Robustness to Jailbreaks and Adversarial Examples
Hernandez, Adriano
Artificial Intelligence (AI) and large language models (LLMs) present not only a challenge for adversarial robustness but also the natural emergence of unwanted capabilities. Current approaches to safeguarding AI and LLMs predominantly rely on explicitly restricting known instances of these threats. However, this places a burden on model developers, because they cannot anticipate all potential attacks and undesirable capabilities. To solve this problem, we leverage interdisciplinary knowledge. In the field of information security, the principle of least privilege provides guidance on how to defend against unknown threats. In AI, the principle could be implemented by ensuring that developers specify the knowledge and capabilities an AI system should retain, restricting all others by default. We call this application of the principle of least privilege passive scoping. Our thesis makes two claims:
1. We argue that (a) passive scoping mitigates concerns about adversarial robustness and loss of control of AI systems and (b) passive scoping that edits the weights and activations at post-training time is underexplored in the literature.
2. Of possible approaches, our sparse autoencoder (SAE) filters can implement this underexplored type of passive scoping. They increase safety relative to LoRA finetuning and prompt engineering, but leave room for improvement.
The thesis is structured as follows:
1. Chapter 2 elucidates the challenges of adversarial robustness and loss-of-control risk. Chapter 3 puts forward a conceptual argument for the benefits of passive scoping and then analyzes the extent to which passive scoping has been attempted. These two chapters together defend claims 1a and 1b.
2. Chapter 4 defines our optimization problem. Chapter 5 defines our experimental methodology and metrics. These two chapters define our success criteria for claim 2. Chapter 6 finalizes our defense of claim 2 based on our results.
3. Chapter 7 explores related work, Chapter 8 engages in a broader discussion, and Chapter 9 summarizes the contributions of this thesis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explorations in AI and Creative Learning: New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch</title>
<link href="https://hdl.handle.net/1721.1/162715" rel="alternate"/>
<author>
<name>Huang, Alexis</name>
</author>
<id>https://hdl.handle.net/1721.1/162715</id>
<updated>2025-09-19T04:49:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explorations in AI and Creative Learning: New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch
Huang, Alexis
As generative AI tools become increasingly prevalent in young people’s lives, these technologies have a growing influence over the way that children learn. While much of the early work at the intersection of AI and education has focused on the development of intelligent tutoring systems designed to deliver content more efficiently, this thesis explores how generative AI might be used to support the creative learning process by sparking curiosity, encouraging exploration, and helping young people express themselves creatively. In this thesis, I explore ways of integrating generative AI with Scratch, the world's largest programming community for children, while remaining aligned with the core values of Scratch: creativity, playfulness, and self-expression. I designed three tools that extend the Scratch ecosystem: Scratch Connect, which explores using generative AI to help Scratchers discover projects that inspire them to create while opening the black box of recommendation systems; scrAItch, which investigates how people can iterate with generative AI by using text-based inputs to create and tinker with Scratch projects; and Scratch Spark, which reimagines the new learner experience by using generative AI to help users create personally meaningful “spark projects.” This thesis describes the process of imagining, creating, and reflecting on these tools, including many of the challenges and tensions that we encountered along the way. I discuss observations and feedback from creative workshops with young people, and conclude by reflecting on open questions and opportunities for future work in designing generative AI tools that support creative learning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Hardware Design Choices on Neural Network&#13;
Accuracy in Analog Inference Accelerators</title>
<link href="https://hdl.handle.net/1721.1/162714" rel="alternate"/>
<author>
<name>Forsythe, Eyan</name>
</author>
<id>https://hdl.handle.net/1721.1/162714</id>
<updated>2025-09-19T04:49:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Effects of Hardware Design Choices on Neural Network&#13;
Accuracy in Analog Inference Accelerators
Forsythe, Eyan
Analog accelerators can enable energy-efficient, high-throughput deep neural network (DNN) computation by computing in memory. Unfortunately, device and circuit non-idealities in these accelerators, such as noise and quantization, can also lead to low DNN inference accuracy through the computation errors they introduce. These errors are largely a function of both the choice of DNN workload and the hardware design choices made, such as circuit topology and DNN operand encoding. Because hardware design choices also affect the energy, throughput, and area of the system, it is important to understand how they interact with DNN inference accuracy. However, there is a lack of a systematic understanding of how each of these hardware design decisions affects accuracy and how they interact with other design decisions. To address these issues, we model how hardware design choices can lead to analog errors such as noise and quantization. Then, we explore how these errors affect inference accuracy in analog accelerators and how tradeoffs can be made between inference accuracy, energy efficiency, area, and throughput. We find that analog errors generated from hardware design decisions can cause different amounts of accuracy loss depending on which layer of a DNN is subject to them. As a result, the structure of the DNN, especially its individual layers, has a significant impact on how hardware design choices affect inference accuracy. We use knowledge of the relationships between device and circuit non-idealities to improve the accuracy of published analog accelerators and analyze the energy and area costs of the increased accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of High-Resolution SAR ADC for Detection of&#13;
Sub-Cortical Neuron Action Potentials for BMI&#13;
Applications</title>
<link href="https://hdl.handle.net/1721.1/162713" rel="alternate"/>
<author>
<name>Guobadia, Omozusi E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162713</id>
<updated>2025-09-19T04:49:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design of High-Resolution SAR ADC for Detection of&#13;
Sub-Cortical Neuron Action Potentials for BMI&#13;
Applications
Guobadia, Omozusi E.
The advancement of brain-machine interfaces (BMIs) requires neural signal acquisition systems that are capable of resolving both fast, low-amplitude action potentials (APs) and slow, higher-amplitude local field potentials (LFPs) under stringent power and area constraints. This thesis presents the design and simulation of a high-resolution, low-power successive approximation register (SAR) analog-to-digital converter (ADC) tailored for sub-cortical neural signal detection. To optimize dynamic range and reduce power consumption, a novel adaptive zoom-and-tracking architecture is introduced, enabling the ADC to dynamically adjust its reference window based on LFP trends while maintaining high-resolution capture of APs. The proposed system integrates a bootstrapped track-and-hold circuit, a differential capacitive DAC, and a strong-arm comparator in the analog front-end, alongside a digital FIR filter and SAR logic with zoom-range control in the digital domain. Simulations validate the functionality of each subsystem independently and in concert, demonstrating the system’s ability to dynamically isolate APs from LFP-dominated baselines while reducing analog power draw by over 60% compared to fixed-range ADCs. This work offers a promising approach for scalable, energy-efficient neural recording architectures suited to future BMI applications.
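The successive-approximation logic at the heart of such a converter can be illustrated with a minimal sketch (in Python, for exposition only; the thesis's actual zoom-and-tracking design is more involved, and the `zoom_convert` helper shown here is a hypothetical illustration of the reference-window idea, not taken from the source):

```python
def sar_convert(vin, vref, bits=12):
    """Ideal successive-approximation conversion: a binary search over
    the reference window [0, vref). Returns the digital code for vin."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)                   # tentatively set this bit
        if vin >= (trial / (1 << bits)) * vref:   # comparator decision
            code = trial                          # keep the bit
    return code

def zoom_convert(vin, lo, hi, bits=12):
    """Hypothetical 'zoomed' conversion: rescale the input into a narrower
    window [lo, hi) tracked from the slow LFP baseline, so the full code
    range resolves the small, fast AP signal riding on top of it."""
    return sar_convert(vin - lo, hi - lo, bits)
```

A zoomed window trades absolute range for effective resolution: the same bit count resolves a much smaller voltage span around the tracked baseline.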
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformer-Based Prediction of Coronary Artery Lumen&#13;
Expansion Post Angioplasty Using Optical Coherence&#13;
Tomography</title>
<link href="https://hdl.handle.net/1721.1/162712" rel="alternate"/>
<author>
<name>Gupta, Shreya</name>
</author>
<id>https://hdl.handle.net/1721.1/162712</id>
<updated>2025-09-19T04:49:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transformer-Based Prediction of Coronary Artery Lumen&#13;
Expansion Post Angioplasty Using Optical Coherence&#13;
Tomography
Gupta, Shreya
Coronary artery disease is the leading cause of mortality globally, resulting in an urgent and critical need to better understand both vessel morphology and the processes of intervention. Angioplasty is an intervention which causes a previously constricted vessel to expand via placement of a stent, and is affected by numerous characteristics of the vessel such as calcium eccentricity and size, wall thickness, and prior lumen size. Being able to accurately assess whether a stent will properly expand allows cardiologists to pursue pre-stenting calcium lesion modification strategies that help avoid dangerous complications of improper stenting. This work introduces a pipeline for post-stenting lumen area prediction from pre-stenting optical coherence tomography (OCT) images. This pipeline includes morphological correction of OCT image segmentations, explainable feature extraction from OCT segmentations, and a predictive transformer network that combines morphological features with injected stent information. The aim is for such a pipeline to be used to support clinical decision making.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Complete Visual and Geometric Object Reconstruction&#13;
via Autonomous Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/162711" rel="alternate"/>
<author>
<name>Fu, Evelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/162711</id>
<updated>2025-09-19T04:49:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Complete Visual and Geometric Object Reconstruction&#13;
via Autonomous Robotic Manipulation
Fu, Evelyn
Accurately simulating object dynamics from real-world perception inputs has wide applications in digital twins and robotic manipulation. Yet doing so requires practitioners to carefully measure and reconstruct the dynamic and geometric properties of the objects, which is time-consuming and requires domain expertise. This project proposes an automatic pipeline for constructing 3D representations of a collection of real objects, which can then be used to generate assets with accurate visual texture and collision geometry for use in simulation. The pipeline is designed to have minimal hardware requirements and to minimize the time spent on physical actuation, maximizing data collection on minimal hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-based Planning for Efficient Task Execution</title>
<link href="https://hdl.handle.net/1721.1/162710" rel="alternate"/>
<author>
<name>Ding, Wenqi</name>
</author>
<id>https://hdl.handle.net/1721.1/162710</id>
<updated>2025-09-19T04:49:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Model-based Planning for Efficient Task Execution
Ding, Wenqi
Robotic agents navigating 3D environments must continuously decide their next moves by reasoning about both visual observations and high-level language instructions. However, they plan in a high-dimensional latent space that is opaque to human collaborators, making it difficult for humans to understand the agent’s decision-making process. This lack of interpretability hinders effective collaboration between humans and robots. The key question this thesis tries to answer is: can we build a unified planning framework that fuses visual and language information into a single, interpretable representation, so that humans can interpret robots’ decisions? We propose a model-based planning framework built around pretrained vision-language models (VLMs). We show that VLMs can be used to plan in a unified embedding space in which visual and language representations can be decoded back into human-interpretable forms. Empirical evaluation on vision-language navigation benchmarks demonstrates both improved sample efficiency and transparent decision making, enabling human-in-the-loop planning and more effective human-robot collaboration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global Non-Convex Optimization with Integer Variables</title>
<link href="https://hdl.handle.net/1721.1/162709" rel="alternate"/>
<author>
<name>Kriezis, Demetrios C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162709</id>
<updated>2025-09-19T04:49:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Global Non-Convex Optimization with Integer Variables
Kriezis, Demetrios C.
Non-convex optimization refers to solving problems whose objective or constraints are non-convex. Historically, such problems have been very difficult to solve to global optimality, with traditional solvers often settling for approximate solutions. Bertsimas et al. [1] introduce a novel approach for solving continuous non-convex optimization problems to provable optimality, called the Relaxation Perspectification Technique - Branch and Bound (RPT-BB). In this thesis, we extend the RPT-BB approach to the binary, mixed-binary, integer, and mixed-integer variable domains. We outline a novel branch-and-bound algorithm that makes use of the Relaxation Perspectification Technique (RPT), as well as binary, integer, and eigenvector cuts. We demonstrate the performance of this approach on two representative non-convex problems, as well as two real-world non-convex optimization problems, and we benchmark it against BARON and SCIP, two state-of-the-art solvers for non-convex mixed-integer problems. We observe that our algorithm, despite being more general, outperforms the state-of-the-art solvers on many problem instances.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing AI Agents for Automated Software&#13;
Engineering with Palimpzest</title>
<link href="https://hdl.handle.net/1721.1/162708" rel="alternate"/>
<author>
<name>Li, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/162708</id>
<updated>2025-09-19T04:49:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing AI Agents for Automated Software&#13;
Engineering with Palimpzest
Li, Jason
The deployment of large language models (LLMs) as autonomous agents is transforming the software development landscape. Increasingly, engineers use natural language agents to expedite and guide development workflows, while large organizations invest heavily in building agentic systems for tasks such as code generation and code repair. A key challenge in developing such systems is tuning agent hyperparameters: settings that affect performance, such as the choice of model, temperature, and context window size. As system complexity grows, the hyperparameter space expands, complicating optimization under real-world compute and time constraints. In this work, we present Palimpzest [1] as an agentic optimizer able to balance cost and performance objectives by tuning agent hyperparameters. We demonstrate that Palimpzest can tune our agent hyperparameters at 8.5 times lower cost and with 24 times greater time efficiency compared to conventional grid search. By integrating our custom-built Debugger and Code Editor Agents as new operators within Palimpzest, we enhance the system’s ability to resolve real-world GitHub issues. To facilitate hyperparameter selection, we also introduce File Coverage, Report Accuracy, and Patch Similarity, alongside the traditional SWE-Bench score, as quality evaluation methods used by Palimpzest’s optimization loop. When evaluated on the SWE-Bench Lite [2] benchmark, our optimized system achieves a 15% score at significantly lower cost than previous approaches.
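The grid-search baseline that the optimizer is compared against can be sketched as follows (a generic Python illustration; the `evaluate` callback and its (quality, cost) return shape are assumptions for exposition, not Palimpzest's API):

```python
from itertools import product

def grid_search(evaluate, grid):
    """Exhaustive sweep over a hyperparameter grid: every combination is
    evaluated once, which is why cost grows multiplicatively with each
    added hyperparameter. `evaluate` maps a params dict to (quality, cost)."""
    best = None
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        quality, cost = evaluate(params)
        if best is None or quality > best[0]:
            best = (quality, cost, params)
    return best  # (best quality, its cost, its params)
```

With, say, 4 models, 3 temperatures, and 3 context sizes, this already means 36 full benchmark runs, which is the combinatorial blow-up an optimizer that prunes the search can avoid.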
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Gradient Boosting and Generative Models:&#13;
Hybrid Approach to Address Class Imbalance and&#13;
Evaluation Gaps in Real-World Systems</title>
<link href="https://hdl.handle.net/1721.1/162707" rel="alternate"/>
<author>
<name>Lau, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/162707</id>
<updated>2025-09-19T04:49:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Gradient Boosting and Generative Models:&#13;
Hybrid Approach to Address Class Imbalance and&#13;
Evaluation Gaps in Real-World Systems
Lau, Mary
Anomaly detection remains a persistent challenge in machine learning due to extreme class imbalance, the high cost of false negatives, and the need to regulate false positives in real-world settings at scale. This thesis introduces Tail-end FPR Max Recall, a business-aware evaluation framework designed for such constrained environments. Using this framework, we benchmark LightGBM, a gradient boosting method known for its computational efficiency and predictive accuracy, on an imbalanced dataset, comparing its performance against standard academic evaluation criteria. Our results demonstrate that Tail-end FPR Max Recall fills critical gaps left by standard academic criteria, providing a more realistic assessment of model performance that aims to maximize recall while enforcing a false-positive-rate budget. Beyond benchmarking, we propose two strategies that incorporate deep learning methods to augment the already strong performance of gradient boosting: (1) using generative models to produce synthetic minority-class samples that outperform traditional oversampling techniques, and (2) using neural embeddings to improve feature representation for anomaly detection. Together, these contributions offer a methodology for evaluating and improving anomaly detection pipelines in domains where rare, high-impact events must be detected while meeting strict operational demands.
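The core idea of maximizing recall under a false-positive-rate budget can be sketched as follows (a generic Python illustration; the function name and the quantile-based thresholding are assumptions for exposition, not the thesis's exact Tail-end FPR Max Recall definition):

```python
import numpy as np

def max_recall_at_fpr(y_true, scores, fpr_budget=0.001):
    """Recall at the most permissive score threshold whose false-positive
    rate stays within the budget. Higher scores mean 'more anomalous'."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    neg = scores[y_true == 0]
    pos = scores[y_true == 1]
    # Threshold at the (1 - budget) quantile of negative scores, so at
    # most a fpr_budget fraction of negatives would be flagged.
    thr = np.quantile(neg, 1.0 - fpr_budget)
    return float(np.mean(pos > thr))
```

Unlike AUC-style metrics that average over all operating points, this reports performance only at the tail-end operating point a production system would actually run at.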
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FPGA Based Data Acquisition System for Cryogenic&#13;
Device Verification</title>
<link href="https://hdl.handle.net/1721.1/162706" rel="alternate"/>
<author>
<name>Kandeh, Stephen</name>
</author>
<id>https://hdl.handle.net/1721.1/162706</id>
<updated>2025-09-19T04:49:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">FPGA Based Data Acquisition System for Cryogenic&#13;
Device Verification
Kandeh, Stephen
In this work, a system of processors connected to an FPGA is interfaced with a custom analog frontend and used to create a verification environment for cryogenic devices. In particular, this thesis focuses on the technical structure of that system. Current validation efforts often rely on commercially available arbitrary waveform generators (AWGs) and oscilloscopes, which, while highly capable, are often prohibitively expensive and poorly suited for large-scale or parallelized testing environments. As noted in industry reports, scaling such instrumentation introduces significant challenges in cost, calibration, and signal synchronization, making it inefficient for high-resolution or high-speed analyses in multi-channel systems [1]. An FPGA, on the other hand, provides the necessary performance to increase parallelism without a proportional increase in cost, greatly improving testing resolution and speed. When augmented with a set of processors, it offers a level of accessibility and automatability not currently present in commercial products. To be clear, while the board was designed with the testing of nanowires in mind (and is not capable of measuring DC voltages), it can still be combined with separate lab equipment to interact with Josephson-junction-based devices. That said, the flexibility of this system allows for generalized application to any electronic device that demands a specialized testing procedure involving arbitrary signal processing and generation. The money, time, and energy this system saves on cryogenic electronics validation should significantly accelerate the development of these technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy Efficient Real-time Operating Systems on Chip</title>
<link href="https://hdl.handle.net/1721.1/162705" rel="alternate"/>
<author>
<name>Kang, Ezra H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162705</id>
<updated>2025-09-19T04:49:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Energy Efficient Real-time Operating Systems on Chip
Kang, Ezra H.
Autonomous micro-robots are crucial for several tasks, such as search and rescue, no-knowledge mapping, and navigation. Without an external power connection, these robots are constrained by their on-platform energy capacity. The power consumption of the actuation systems used in micro-robots is of the same order of magnitude as that of the compute system, so the remaining factor enabling these micro-robots is the design of energy-efficient compute systems. The energy usage of compute systems is typically dominated by memory operations, which previous efforts have attempted to mitigate with memory-efficient software and hardware. These efforts rely on the software/hardware interface, which is implemented as an Operating System (OS). However, Operating Systems for energy-efficient platforms have not been fully explored. Current approaches use full general-purpose Operating Systems such as Linux, which can incur large memory and compute overhead penalties. These overheads not only consume the typically limited memory resources of energy-efficient systems but also increase the number of memory accesses and CPU cycles, both of which are significant contributors to energy consumption. To address these concerns, we propose the design of a computation- and memory-efficient Real-time Operating System (RTOS). Our RTOS is designed to minimize both memory footprint and compute-cycle overhead. It achieves this primarily through direct physical memory access, cycle-efficient task scheduling, and minimal runtime services that avoid unnecessary processing. Additionally, the modular RTOS kernel includes only the components required by an application in the final binary, reducing code size and memory usage without compromising functionality. The design enables the use of energy-efficient hardware accelerators and software, allowing robotics workloads to execute with minimal memory and cycle overhead.
When comparing robotics algorithms implemented on our proposed RTOS and baseline OSes, our design was able to achieve a 99% reduction in memory footprint. Additionally, it achieved up to a 47% increase in throughput. Thus, our design demonstrates a direct reduction in memory and CPU cycle overhead, which in turn lowers total system memory and energy consumption. The proposed design was demonstrated and verified on a resource constrained system-on-chip on the AMD Virtex Ultrascale+ VCU118 FPGA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty and Generality of Transfer Learning Models&#13;
in Predicting Signaling History</title>
<link href="https://hdl.handle.net/1721.1/162704" rel="alternate"/>
<author>
<name>Lu, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/162704</id>
<updated>2025-09-19T04:49:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Uncertainty and Generality of Transfer Learning Models&#13;
in Predicting Signaling History
Lu, Claire
Proper cell-cell communication is essential for multicellular development, from embryogenesis to stem cell differentiation. To map these networks, we developed IRIS (Intracellular Response to Infer Signaling state), a semi-supervised deep learning method that fits conditional variational autoencoders (CVAE) to single-cell RNA sequencing (scRNA-seq) data. IRIS is able to annotate cellular signaling states of individual cells using only their gene expression. Currently, IRIS has been validated in developmental contexts, including gastrulation, early endoderm organogenesis, and mesoderm lineages in mouse embryos. However, its predictions often show extremely high or extremely low confidence, suggesting a need for methods to prevent overconfidence and better account for uncertainty. To generalize IRIS to broader cell-cell communication problems, we combined engineering and experimental approaches, integrating uncertainty quantification techniques with new biological datasets. We implemented three approaches for estimating uncertainty in IRIS predictions: stochastic sampling, Monte Carlo dropout, and ensemble prediction. These approaches were evaluated on two new endoderm and mesenchyme combinatorial perturbation screens. Across all methods, uncertainty values reliably reflected the varying difficulty of predicting different signaling pathways, driven by both biological complexity and dataset representation. Moreover, higher uncertainty was consistently associated with lower prediction accuracy, confirming uncertainty as a useful proxy for model confidence. All three methods identified similar high-uncertainty cell populations, supporting their consistency and validity. By incorporating uncertainty quantification into IRIS, we provide more robust and interpretable predictions that can guide future experiments and enhance the model’s applicability across diverse biological contexts.
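Of the three uncertainty approaches, Monte Carlo dropout is the simplest to sketch (a generic NumPy illustration on a toy two-layer network; the weight names, shapes, and softmax head are assumptions for exposition, not IRIS's actual CVAE architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, p=0.5, T=100):
    """Monte Carlo dropout: keep dropout ON at inference, run T stochastic
    forward passes, and report the mean softmax output as the prediction
    and its variance across passes as the uncertainty."""
    outs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
        mask = rng.random(h.shape) >= p      # fresh dropout mask each pass
        h = h * mask / (1.0 - p)             # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())    # numerically stable softmax
        outs.append(e / e.sum())
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.var(axis=0)
```

Inputs whose predictions swing between masks get high variance, which is the overconfidence signal the thesis uses as a proxy for prediction difficulty.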
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Core Material Evaluation for Magnetic Energy Harvester&#13;
Applications</title>
<link href="https://hdl.handle.net/1721.1/162703" rel="alternate"/>
<author>
<name>Le, Khang D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162703</id>
<updated>2025-09-19T04:49:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Core Material Evaluation for Magnetic Energy Harvester&#13;
Applications
Le, Khang D.
Current transformer magnetic energy harvesters (CTMEHs) harvest magnetic energy from an AC current-carrying conductor and convert this energy into usable electrical energy for various low-power devices, such as sensors and microcontrollers. The amount of power harvested by CTMEHs is determined by the primary current passing through the conductor; however, variables such as the magnetic core’s dimensions, magnetic properties, and turn count also influence performance. Previous works have focused mainly on analytical or numerical modeling of CTMEH behavior or on improving power harvesting given a specific magnetic core material. Some existing research has compared the effects of different core materials on CTMEH power harvesting in a limited fashion, but a comprehensive comparative study of high-permeability, high-saturation-flux-density CTMEHs had yet to be conducted. This thesis establishes core material as the primary independent variable, alongside primary current and frequency, to isolate the effects of magnetic properties on how much power a magnetic core can harvest under different current conditions. The thesis concludes that nanocrystalline material excels in lower-current applications, while silicon steel offers better performance in higher-current applications across all frequencies when used in CTMEHs, giving system designers enticing material choices depending on the nature of the application.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eliciting Visualization Attitudes with Repertory Grids</title>
<link href="https://hdl.handle.net/1721.1/162702" rel="alternate"/>
<author>
<name>Hua, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/162702</id>
<updated>2025-09-19T04:49:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Eliciting Visualization Attitudes with Repertory Grids
Hua, Dana
Research in public data communication typically focuses on improving the processes of encoding and decoding, answering the question of how to design a visualization to best communicate information to an audience. However, by treating visual communications as mere conduits for information, we ignore an important aspect of how people interact with them: the attitudes (the thoughts, feelings, and intentions toward action) a person may form from communicative artifacts based on their personal values and experiences. Recent research has demonstrated that, much as with natural language, readers of visualizations make social attributions: inferences about the identities and characteristics of an artifact’s makers, modes of distribution, and tools of production. In this thesis, I contribute a method to systematically map the visualization attitudes of an individual and the associated ideologies of their sociocultural group by adapting the repertory grid technique from clinical psychology to the context of data visualization. I demonstrate the effectiveness of this mixed-methods approach by eliciting both the attitudes toward a visualization most salient to an individual and the design features of the visualization that inform each attitude. This method offers a new way of exploring the content and latent structure of visualization attitudes, opening new avenues for socioculturally informed and intervention-driven research in data visualization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing scheduling for stream structured programming for StreamIt</title>
<link href="https://hdl.handle.net/1721.1/162701" rel="alternate"/>
<author>
<name>Dow, Nicholas Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/162701</id>
<updated>2025-09-19T04:49:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing scheduling for stream structured programming for StreamIt
Dow, Nicholas Lee
As straightforward performance gains on general-purpose CPUs slow down, the shift to application-specific implementations and hardware has accelerated. This shift toward specialization improves performance, but often at the cost of developer productivity in learning new tools. StreamIt is a Domain-Specific Language developed to increase the performance of streaming applications while remaining relatively user-friendly. Although designed to be parallelized easily, the scheduling backend of the StreamIt compiler is not adapted to the heterogeneous and distributed nature of new accelerator hardware. This thesis details the design and development of a scheduler interface that enables hardware-customized schedulers to be developed quickly. The interface allows schedulers to take advantage of the unique compiler optimizations enabled by StreamIt’s structure. Two schedulers, one search-based and one heuristic-based, are built using this interface to schedule StreamIt workloads while optimizing differing metrics such as throughput and latency. Our experiments evaluate the performance of these workloads, and we detail future directions for expanding the interface and the scheduler designs that could take advantage of it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions</title>
<link href="https://hdl.handle.net/1721.1/162700" rel="alternate"/>
<author>
<name>Flynn, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162700</id>
<updated>2025-09-19T04:49:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions
Flynn, John M.
Portable, low-field MRI broadens access and enables numerous new applications, such as point-of-care imaging. Operating outside an RF-shielded room introduces electromagnetic interference (EMI), further degrading a signal-to-noise ratio (SNR) that is already diminished by the lower magnetic fields used in portable imaging. Existing methods to reduce EMI perform well in simple noise environments but can struggle with more complex profiles. Relaxing their linear assumptions is hypothesized to yield more robust mitigation algorithms. A system-wide characterization of SNR challenges was carried out on a rebuilt 800 G scanner, existing techniques were validated, and new signal processing approaches were explored to improve image quality. Various analytical approaches showed promise, including dynamic coils/preamps, averaging methods, calibration, and smoothing methods. Groundwork was laid for learning-based methods throughout the pipeline. This work serves as an important baseline for the numerous experiments necessary for full-system optimization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Atom-Light Scattering in the Quantum Regime</title>
<link href="https://hdl.handle.net/1721.1/162699" rel="alternate"/>
<author>
<name>Lu, Yu-Kun</name>
</author>
<id>https://hdl.handle.net/1721.1/162699</id>
<updated>2025-09-19T03:05:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring Atom-Light Scattering in the Quantum Regime
Lu, Yu-Kun
Ultracold atoms and molecules are promising platforms for exploring modern quantum science and technologies, such as quantum simulation and quantum computation. Here, light is the essential tool to manipulate and probe these systems. However, unlike in condensed matter systems where scattering experiments are routinely employed to characterize materials, ultracold atom and molecule systems are usually probed by imaging and not by light scattering.&#13;
&#13;
In this thesis, I present a systematic investigation of atom-light scattering under various scenarios. When atoms are confined in optical lattices, light scattering can be used to explore single-body, two-body, and many-body physics. Focusing on single-atom physics, I study coherent and incoherent light scattering of single-atom wavepackets and its relation to which-way information. For two atoms tightly localized to a 20 nm region on the same lattice site, I demonstrate the strong electric dipolar interactions between them, which result in large momentum transfers and a spectroscopic shift of the resonance. On the many-body side, I show how light scattering can reveal distinct quantum phases at thermal equilibrium or defect generation in dynamical ramps. For atoms released from the optical lattice, I demonstrate that light scattering can read out the quantum statistical information and initial density correlations hidden in the interference of atomic wavepackets.&#13;
&#13;
When atoms move freely in the form of degenerate quantum gases, I investigate how quantum statistics, phase transitions, and interactions modify the atomic pair correlation and consequently the light scattering. For a thermal gas at high density, I demonstrate nonlinear optical effects arising from high optical density and a high scattering rate.&#13;
&#13;
Finally, I describe our recent efforts on manipulating atoms at subwavelength length scales. I discuss our attempts in optical tweezers and in optical lattices, and the prospect of observing magnetic pairing between two distant layers under attractive dipolar interaction.&#13;
&#13;
The techniques presented in this thesis should be of general use for pursuing quantum science and technology with ultracold atoms and molecules.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings</title>
<link href="https://hdl.handle.net/1721.1/162698" rel="alternate"/>
<author>
<name>Goel, Abhinav</name>
</author>
<id>https://hdl.handle.net/1721.1/162698</id>
<updated>2025-12-09T18:18:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings
Goel, Abhinav
The inclusion of symmetries as an inductive bias, known as “equivariance”, often improves generalization on geometric data (e.g. grids, sets, and graphs). However, equivariant architectures are usually highly constrained, designed for pre-chosen symmetries, and cannot be applied to datasets with different symmetries. This work constructs a single model that is simultaneously equivariant to several groups, simply by regulating a certain input feature. Starting with a permutation-equivariant base model respecting the full Sₙ symmetry group, we can obtain subgroup G ⊆ Sₙ equivariance by using a symmetry-breaking input that is G-symmetric. Under mild conditions, the resultant network is only G-equivariant. Finding an input with automorphism group exactly G is computationally hard, but this can be overcome by relaxing exact symmetry breaking to approximate symmetry breaking, leveraging the notion of 2-closure to derive fast algorithms. The method is validated in symmetry-selection, multitask, and transfer-learning settings, demonstrating that a single network equivariant to multiple permutation subgroups outperforms both separate equivariant models and a single non-equivariant model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Precision Successive-approximation-register&#13;
Analog-to-digital Converters for Digital Root-mean-square&#13;
Calculation</title>
<link href="https://hdl.handle.net/1721.1/162697" rel="alternate"/>
<author>
<name>Choi, Sun Mee</name>
</author>
<id>https://hdl.handle.net/1721.1/162697</id>
<updated>2025-09-19T04:49:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Application of Precision Successive-approximation-register&#13;
Analog-to-digital Converters for Digital Root-mean-square&#13;
Calculation
Choi, Sun Mee
The advancement of semiconductor manufacturing processes has made powerful microcontrollers available at lower costs, granting system designers the flexibility to choose between analog and digital signal processing techniques. Enabled by recent developments in low-power successive approximation register (SAR) analog-to-digital converter (ADC) technology, a digital approach to root-mean-square (RMS) measurement is proposed. The work begins with an explicit accumulation-and-averaging approach, on which a set of improvements is designed to increase measurement accuracy and reliability. Algorithms are compared using the metrics of error, power efficiency, latency, and digital overhead. High-performing, power-efficient digital RMS measurement methods could be valuable for decentralized instrumentation systems such as smart grids and factory automation, where long-lasting handheld and portable solutions are becoming critical.
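The explicit accumulation-and-averaging approach that the work starts from can be sketched in a few lines. This is an illustrative sketch only, not the thesis implementation; the window size and test waveform are assumptions.

```python
import math

def digital_rms(samples):
    """Explicit approach: square each ADC sample, average, take the root."""
    acc = 0.0
    for s in samples:
        acc += s * s          # accumulate squared samples
    return math.sqrt(acc / len(samples))

# A unit-amplitude sine sampled at 8 points per period has RMS 1/sqrt(2).
period = [math.sin(2 * math.pi * n / 8) for n in range(8)]
print(round(digital_rms(period), 4))  # prints 0.7071
```

In a real SAR-ADC pipeline the accumulation would run over a fixed window synchronized to the signal period, which is one of the accuracy considerations the algorithms above are compared on.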
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hosting LLMs on Shared GPUs</title>
<link href="https://hdl.handle.net/1721.1/162696" rel="alternate"/>
<author>
<name>Choi, Kenneth K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162696</id>
<updated>2025-09-19T04:49:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hosting LLMs on Shared GPUs
Choi, Kenneth K.
Large language models (LLMs) have emerged as powerful tools for a wide array of applications. Serving multiple LLMs on shared GPUs has gained increasing attention as single providers need to support multiple applications (summarization, chat, code generation), different model versions (A/B testing), and various types of customers. However, multi-model serving is particularly challenging: static memory partitioning can lead to severe under-utilization, fragmentation, and latency spikes, while dynamic loading of model weights can cause unacceptable downtime due to high model-loading overheads. To address these issues, we introduce hierarchical paging, a novel key-value (KV) cache management strategy, and we implement it within the vLLM serving engine. Hierarchical paging organizes GPU memory into a two-level hierarchy: large contiguous memory blocks allocated to individual models, which are then subdivided into smaller blocks allocated to the different requests issued to that model. Our design enables dynamic memory sharing across models, improving model throughput and overcoming key problems of existing approaches. We detail our implementation and present end-to-end experiments that showcase these throughput improvements under different workloads. Further evaluations of the runtime overheads of our hierarchical paging implementation show that they are insignificant. Finally, we demonstrate that hierarchical paging is easy to implement, keeping implementation effort and maintenance costs low.
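The two-level hierarchy described above can be illustrated with a toy allocator: a GPU-wide pool of large blocks, each split into small per-request pages when handed to a model. All names and sizes here are hypothetical; this is a sketch of the idea, not vLLM's actual KV-cache code.

```python
PAGES_PER_BLOCK = 16   # small pages per large block (assumed size)

class HierarchicalPager:
    def __init__(self, num_large_blocks):
        self.free_large = list(range(num_large_blocks))  # GPU-wide pool
        self.model_pages = {}   # model name mapped to its free small pages

    def grow_model(self, model):
        """Give a model one large contiguous block, split into small pages."""
        blk = self.free_large.pop()
        pages = [(blk, i) for i in range(PAGES_PER_BLOCK)]
        self.model_pages.setdefault(model, []).extend(pages)

    def alloc_page(self, model):
        """Allocate one small page to a request; grow the model on demand."""
        if not self.model_pages.get(model):
            self.grow_model(model)   # dynamic sharing: blocks move between models
        return self.model_pages[model].pop()

pager = HierarchicalPager(num_large_blocks=4)
print(pager.alloc_page("chat-model"))  # prints (3, 15): block 3, page 15
```

Because models acquire and release whole large blocks from a shared pool, memory shifts between models as load changes, avoiding the under-utilization of a static partition.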
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation</title>
<link href="https://hdl.handle.net/1721.1/162695" rel="alternate"/>
<author>
<name>Cheng, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162695</id>
<updated>2025-09-19T04:49:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation
Cheng, Emily
Synthesizing realistic tabular data is crucial for many analytical applications, including policy evaluation related to household energy use. However, detailed household-level consumption data, necessary for such evaluation, are scarce at fine geographic scales, as public surveys like the U.S. Residential Energy Consumption Survey (RECS) provide too few observations. We address this gap by developing a topology-guided diffusion-based generative model that produces realistic synthetic household data. Our approach handles two key challenges in this setting: (1) mixed continuous and discrete features and (2) strong hierarchical dependencies among variables. To handle categorical features, we build upon recent advancements in discrete diffusion, particularly TabDDPM [1] and TabDiff [2], which discretize the diffusion process through noise transition matrices, effectively extending diffusion methods to discrete tabular domains. To address hierarchical dependence, we include (i) a structure-aware noise schedule that injects noise from the leaves to the root along an approximate Chow–Liu tree constructed from the variables and (ii) a masked self-attention denoiser that aligns with the same graphical structure. Extensive experiments show that our structured diffusion model outperforms the baseline TabDiff on data with tree-like dependencies, due to the inductive bias from our structure-aware noise schedule. On data that only approximately follows a tree, such as the RECS dataset, our model maintains competitive performance, only slightly outperforming standard diffusion methods. These results highlight the potential for future work to further optimize the tradeoff between structural approximation and estimation accuracy, and for applications beyond the energy domain.
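The leaves-to-root noise ordering in point (i) can be sketched over a fixed dependency tree: compute each variable's depth below the root, then noise deeper variables first. The example tree, variable names, and schedule form are assumptions for illustration; this is not the thesis code.

```python
from collections import deque

# Hypothetical Chow-Liu-style tree over 5 household variables
# (each key maps a parent to its children).
children = {"region": ["house_type"], "house_type": ["sqft", "fuel"],
            "sqft": ["kwh"], "fuel": [], "kwh": []}

def depths(root):
    """BFS depth of every variable below the root."""
    d, q = {root: 0}, deque([root])
    while q:
        u = q.popleft()
        for v in children[u]:
            d[v] = d[u] + 1
            q.append(v)
    return d

# Structure-aware schedule: deeper (leaf-side) variables are noised first,
# so corruption flows from the leaves up toward the root.
d = depths("region")
noise_order = sorted(d, key=lambda v: -d[v])
print(noise_order)  # leaves first, root last
```

Reversing this order for the denoising pass means the root is reconstructed first and each child is denoised conditioned on an already-clean parent, which is the inductive bias the schedule is meant to supply.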
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Dynamic Treatment Regimes: Collaborative Search&#13;
and LLM-Driven Decision Trees</title>
<link href="https://hdl.handle.net/1721.1/162694" rel="alternate"/>
<author>
<name>Gregory, Cale</name>
</author>
<id>https://hdl.handle.net/1721.1/162694</id>
<updated>2025-09-19T04:49:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Dynamic Treatment Regimes: Collaborative Search&#13;
and LLM-Driven Decision Trees
Gregory, Cale
This thesis evaluates the validity of current dynamic treatment regime algorithms and presents a novel data structure for extracting treatment decisions from unstructured clinical notes. The main contribution is the Clinical Decision Tree (CDT), which uses large language models (LLMs) to extract key decisions in chronic-disease treatment. This addresses the main pain points of dynamic treatment regimes: low interpretability and the reliance of traditional machine learning methods on poorly collected data. This work contains extensive experiments on mortality prediction, time series forecasting, and synthetic patient modeling. Experiments show that vital-based representations do not capture enough meaningful data about a patient to accurately predict and evaluate new treatment methods. Experiments using latent embeddings and vector search show that patients' collected vitals fail to differentiate the outcomes of related patients. Conversely, the clinical notes contain complex and substantial information about clinical decision making, and LLMs enable valuable knowledge extraction from this unstructured data. Experimental results and expert evaluation indicate that CDTs can extract and distill interpretable treatment decisions. Thus, CDTs are a valuable tool that can be refined to increase confidence in treatment decisions and to identify rare medical practices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova</title>
<link href="https://hdl.handle.net/1721.1/162693" rel="alternate"/>
<author>
<name>Han, Aileen</name>
</author>
<id>https://hdl.handle.net/1721.1/162693</id>
<updated>2025-09-19T04:49:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova
Han, Aileen
Agent-based modeling is a technique that allows students to reason about and create models of real-life phenomena. However, the programmatic implementations of this technique, such as StarLogo Nova, often introduce “friction”; students may get stuck on the syntactical details of the implementation before being able to engage in the mechanistic thinking behind their models. In order to shift students’ focus towards the goal of understanding the systems they are building, we set out to create an AI-powered assistant for StarLogo Nova that can explain and debug students’ code. After identifying and experimenting with various parameters of AI models in an attempt to improve their performance, we were able to build the StarLogo Turtle Helper, an easily accessible assistant integrated into the platform that can produce accurate responses to StarLogo-related questions. Through this process, we discovered two key properties of these models: first, the method through which these models use provided documentation (called retrieval-augmented generation, or RAG) is quite rudimentary, so any background knowledge should be included in the prompt or the model’s system instructions instead. Second, these models perform best if they are designed to only serve one purpose, so creating multiple models and chaining them together may be the best way to achieve more complex functionality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease</title>
<link href="https://hdl.handle.net/1721.1/162692" rel="alternate"/>
<author>
<name>Li, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/162692</id>
<updated>2025-09-19T04:49:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease
Li, Jonathan
This work focuses on the progression from metabolic dysfunction-associated fatty liver to metabolic dysfunction-associated steatohepatitis, a more serious diagnosis that can lead to liver failure and death. Additional adverse progressed outcomes include hepatic failure, fibrosis, cirrhosis, and malignant neoplasm of the liver and intrahepatic bile ducts. We explore the use of different machine learning techniques, including logistic regression, XGBoost, random forest, and decision trees, to predict the likelihood of progression. We use data from Mass General Brigham to train our models, incorporating demographics, physical measurements, lab results, and doctor notes. Our best model was an XGBoost classifier with an AUROC of 0.800, with random forest achieving similar performance (AUROC 0.786). However, all of our models had low AUPRC and sensitivity, indicating both overfitting and an imbalanced dataset.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Traceability via OTrace Concepts and Implementation</title>
<link href="https://hdl.handle.net/1721.1/162691" rel="alternate"/>
<author>
<name>Farooq, Ashar</name>
</author>
<id>https://hdl.handle.net/1721.1/162691</id>
<updated>2025-09-19T04:49:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data Traceability via OTrace Concepts and Implementation
Farooq, Ashar
Financial transactions are commonplace in the modern world. Every day, consumers make purchases on e-commerce sites and rely on third-party financial services to predict their credit scores, obtain customized budget recommendations, or find the loan that best fits their needs. These services often require financial information from the consumer, and how that information is used is not always clear to them; in other words, consumer data are being used without the consumer's knowledge and consent. The proposed solution, a traceability protocol called OTrace, aims to let consumers know where their data resides and what is being done with it. This work bolsters OTrace into a protocol that consumers can actually use as a service and that financial institutions can trust to reveal which third-party financial services hold consumer data. Concretely, it develops a more general specification for traceable and accountable data sharing that layers OTrace on top of OAuth, adds the corresponding new OTrace API endpoints, and provides an entirely new OTrace web implementation with accompanying analysis, advancing the state of data traceability, data privacy, and open banking. A model deployment of an OTrace service on top of an OAuth protocol demonstrates its use by multiple parties, and can ultimately be scaled up to address unintended data usage and the lack of transparency about where consumer data is held.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equivariant Autoregressive Models for Molecular Generation</title>
<link href="https://hdl.handle.net/1721.1/162690" rel="alternate"/>
<author>
<name>Kim, Song Eun</name>
</author>
<id>https://hdl.handle.net/1721.1/162690</id>
<updated>2025-09-19T04:48:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Equivariant Autoregressive Models for Molecular Generation
Kim, Song Eun
In-silico generation of diverse molecular structures has emerged as a promising method to navigate the complex chemical landscape, with direct applications to inverse material design and drug discovery. However, 3D molecular structure generation comes with several unique challenges: generated structures must be invariant under rotations and translations in 3D space, and must satisfy basic chemical bonding rules. Recently, E(3)-equivariant neural networks that utilize higher-order rotationally-equivariant features have shown improved performance on a wide range of atomistic tasks, including structure generation. Previously, we developed Symphony, an E(3)-equivariant autoregressive generative model for 3D structures of small molecules. At each sampling iteration, a single focus atom is selected and used to decide the next atom’s position within its neighborhood. Symphony built on previous autoregressive models by using message-passing with higher-order equivariant features, allowing a novel representation of probability distributions via spherical harmonic signals. Symphony’s performance approached that of state-of-the-art diffusion models while remaining relatively lightweight. However, it continued to face challenges with error accumulation and determining bond lengths, and it was only evaluated on small organic molecules. Here, we expand Symphony’s capabilities and make it more compatible with larger atomic structures. We improve the embedders, split the radial and angular components when predicting atom positions, and increase the radial cutoff for atomic neighborhoods considered during prediction. We also increase Symphony’s training and inference speeds through a new implementation in PyTorch, making inference nearly 4x faster than before. In addition, we demonstrate its effectiveness across a variety of tasks, including small molecule and protein backbone generation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions</title>
<link href="https://hdl.handle.net/1721.1/162689" rel="alternate"/>
<author>
<name>Das, Gaurab</name>
</author>
<id>https://hdl.handle.net/1721.1/162689</id>
<updated>2025-12-09T18:09:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions
Das, Gaurab
Although advances in security have strengthened defenses in digital financial systems, attackers increasingly rely on social engineering to achieve their goals. These attacks are difficult to detect and prevent with existing security measures. To address this, we propose Vigilis, a fraud-protection application that employs advanced language models to counter such attacks in calls, texts, and payments. We first collect and make available a corpus of fraudulent calls from the Internet and train lightweight transformer-based models that achieve fraud detection accuracies of up to 94% and 87% on transcript and audio modalities, respectively. We integrate these models into a real-time call system within Vigilis that operates entirely on-device, enabling accurate fraud detection in an efficient and privacy-preserving manner. We then extend Vigilis to incorporate context-aware transaction authentication, where the underlying social context behind a transaction is determined from calls, texts, and browsing history and used to infer the transaction’s validity. By uniquely incorporating social concepts into traditional cybersecurity techniques, we aim to counter and mitigate social engineering attacks in financial fraud.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GDSVD: Scalable k-SVD via Gradient Descent</title>
<link href="https://hdl.handle.net/1721.1/162688" rel="alternate"/>
<author>
<name>Gan, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162688</id>
<updated>2025-09-19T04:48:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GDSVD: Scalable k-SVD via Gradient Descent
Gan, Emily
We show that gradient descent with a simple, universal step-size rule provably finds the k-SVD, i.e., the k ≥ 1 largest singular values and corresponding singular vectors, of any matrix, despite nonconvexity. Substantial progress has been made on this problem in the past few years, with existing results establishing such guarantees for the exact-parameterized and over-parameterized settings under an oracle-provided step size. But guarantees for the generic setting, with a step-size selection that requires no oracle-provided information, have remained a challenge. We overcome this challenge and establish that gradient descent with an appealingly simple adaptive step size (akin to preconditioning) and random initialization enjoys global linear convergence in the generic setting. Our convergence analysis reveals that the gradient method has an attracting region, within which it behaves like Heron’s method (a.k.a. the Babylonian method). Empirically, we validate the theoretical results. The emergence of a modern compute infrastructure for iterative optimization, coupled with this work, is likely to provide a means of solving k-SVD for very large matrices.
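The flavor of a preconditioned gradient step for low-rank factorization can be illustrated as follows. This sketch minimizes 0.5 * ||M - U V^T||_F^2 with Gram-matrix preconditioning standing in for an adaptive step size; it is an illustrative stand-in under assumed dimensions, not the thesis's exact rule or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 20))   # generic matrix (assumed size)
k = 3                               # target rank

U = rng.standard_normal((30, k)) * 0.1   # random initialization
V = rng.standard_normal((20, k)) * 0.1
for _ in range(500):
    # Gradient of 0.5 * ||M - U V^T||_F^2 in U, preconditioned by (V^T V)^-1;
    # the preconditioner acts like a per-direction adaptive step size.
    GU = (U @ V.T - M) @ V
    U = U - GU @ np.linalg.inv(V.T @ V + 1e-8 * np.eye(k))
    GV = (U @ V.T - M).T @ U
    V = V - GV @ np.linalg.inv(U.T @ U + 1e-8 * np.eye(k))

# U V^T should approach the best rank-k approximation of M, whose error
# is the norm of the tail singular values.
tail = np.linalg.svd(M, compute_uv=False)[k:]
ratio = float(np.linalg.norm(M - U @ V.T) / np.linalg.norm(tail))
print(round(ratio, 3))  # approaches 1.0 at convergence
```

Note that with a full Gram-inverse preconditioner and unit step, each update coincides with an alternating least-squares step; the thesis's contribution is precisely a step-size rule and analysis that need no such oracle structure.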
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology</title>
<link href="https://hdl.handle.net/1721.1/162687" rel="alternate"/>
<author>
<name>Chen, Tina T.</name>
</author>
<id>https://hdl.handle.net/1721.1/162687</id>
<updated>2025-09-19T04:48:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology
Chen, Tina T.
Transcription is a dynamic process with a multitude of characteristics, including transcript level, burst frequency, amplitude, and variability. Single-cell RNA sequencing data analysis often focuses on comparing transcription levels. However, these analyses capture only a portion of the wealth of information conveyed by transcription. The quantification and analysis of transcriptional variability offers an opportunity to study transcription and gene regulation from a new angle. Transcriptional variability has already been implicated in a number of biological processes, including immune system development and aging. Yet the most appropriate method for measuring transcriptional variability in single-cell data has remained relatively unclear. Here, we simulated single-cell data with varying dispersion and dataset size to assess the relative responsiveness of the Gini index, variance-to-mean ratio, variance, and Shannon entropy to variability in single-cell counts. We found that the variance-to-mean ratio scales approximately linearly with increasing dispersion, and that it is scale-invariant. The Gini index displayed paradoxical behavior, and Shannon entropy was not scale-invariant. Thus, we applied the variance-to-mean ratio to measure transcriptional variability in two publicly available datasets studying congenital heart defects in mouse models. We first found that change in transcriptional variability does not correlate with gene characteristics such as transcript level and evolutionary gene age. We also found that using change in transcriptional variability to focus GSEA and TF motif enrichment analyses revealed both genes with known involvement in cardiomyopathy and new genes and pathways as potential targets for future study. Notably, many of the genes and pathways identified through transcriptional variability analysis were not found by differential expression analysis, suggesting that transcriptional variability can provide additional biologically relevant information beyond what is observed from studying mean expression alone.
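Two of the dispersion metrics compared above, the variance-to-mean ratio and the Gini index, can be computed directly from a gene's per-cell counts. The toy count vectors below are invented for illustration; this is a sketch of the metrics themselves, not the thesis's simulation pipeline.

```python
import statistics

def vmr(counts):
    """Variance-to-mean ratio (Fano factor); equals 1 for Poisson counts."""
    return statistics.pvariance(counts) / statistics.mean(counts)

def gini(counts):
    """Gini index of non-negative counts, 0 for perfectly even expression."""
    xs = sorted(counts)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

low  = [5, 5, 6, 4, 5, 5]      # hypothetical low-dispersion gene
high = [0, 0, 1, 10, 2, 17]    # hypothetical bursty, high-dispersion gene
print(round(vmr(low), 3), round(vmr(high), 3), round(gini(high), 3))
# The bursty gene has a much larger VMR and Gini than the stable gene.
```

Comparing such metrics across genes with identical means but different dispersion is the kind of responsiveness test the simulations above formalize.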
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform</title>
<link href="https://hdl.handle.net/1721.1/162686" rel="alternate"/>
<author>
<name>Heiberger, Harry G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162686</id>
<updated>2025-09-19T04:48:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform
Heiberger, Harry G.
In recent years, social annotation systems have become a popular and effective tool for hosting collaborative discussions on assigned readings. One such tool, created by our lab, is NB. Over the last twelve years, hundreds of instructors have incorporated NB into their classes, with over 50,000 students leaving millions of annotations [1]. While feedback for NB has been mostly positive, one major limitation is its difficulty annotating documents with nested media types. As multimodal forms of learning beyond plain text become increasingly common in educational assignments, the ability to annotate beyond simple text documents would greatly increase the utility of NB in the modern classroom. This work seeks to remedy this issue by expanding the types of documents NB can successfully annotate, focusing on three mixed-media cases: independently moving text components, image annotation, and video annotation. We explore the design space of possible implementation strategies for these features and discuss the specific design decisions made when adding them to NB. We hope that by increasing the types of documents NB can annotate, we will better fulfill its goal of enhancing student engagement and learning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pareto Task Inference Analysis of Single-Cell RNA Sequencing of Human Placenta Reveals Biological Insights into Adverse Pregnancy Outcomes</title>
<link href="https://hdl.handle.net/1721.1/162685" rel="alternate"/>
<author>
<name>Eppinger, Aria R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162685</id>
<updated>2025-09-19T04:48:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pareto Task Inference Analysis of Single-Cell RNA Sequencing of Human Placenta Reveals Biological Insights into Adverse Pregnancy Outcomes
Eppinger, Aria R.
Adverse pregnancy outcomes (APOs), such as preeclampsia, fetal growth restriction, and preterm birth, occur in 10-15% of pregnancies. There is limited knowledge of how the cellular states in the placenta and decidua tissues are altered in women with particular APOs or may contribute to APOs. Single-cell RNA sequencing (scRNAseq) approaches have characterized cellular populations and interactions at the maternal-fetal interface using traditional dimensionality-reduction methods such as UMAP-based clustering. However, these techniques may generate limited representations of nuanced cellular functions and biological relationships among and within cell clusters. Pareto Task Inference (ParTI), a dimensionality reduction technique that fits data to an n-dimensional polygon or polytope, models how cells optimize among multiple biological functions and transition between states. We applied ParTI to assess its ability to identify nuanced cellular states and intercellular relationships and to highlight biological mechanisms underlying specific APOs. We analyzed scRNAseq data from 50 whole placental homogenates collected from healthy pregnancies and those complicated by fetal growth restriction (FGR), preterm preeclampsia (PrePET), spontaneous preterm birth (PTB), term preeclampsia or gestational hypertension (TermPET/GHTN), or type 1 diabetes (DM1). ParTI was applied to the dataset with 1) all main cell lineages (B-cells, trophoblasts, stromal, endothelial, Hofbauer, T-NK, maternal myeloid cells) and 2) syncytiotrophoblasts (SCTs), a sublineage of trophoblasts. Marker-gene and gene set enrichment analyses of the ParTI polytope vertices, called archetypes, were performed to assess the biological states associated with the archetypes. We demonstrated that the ParTI polytope can separate both broad cell lineages and sublineages, suggesting that iteratively applying ParTI can serve as an alternative clustering approach when cell-lineage marker genes are previously known.
Additionally, ParTI applied to SCTs separated healthy controls from pregnancies complicated by specific APOs. Gene set enrichment analysis of the cells proximal to the archetypes suggests biological differences in SCTs with specific APOs compared to the controls. Thus, ParTI can identify biological mechanisms underlying specific APOs and be applied to additional datasets to uncover biological relationships among and within cell-type clusters.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal</title>
<link href="https://hdl.handle.net/1721.1/162684" rel="alternate"/>
<author>
<name>Cuevas, Elie E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162684</id>
<updated>2025-09-19T04:49:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal
Cuevas, Elie E.
Recursive algorithms are a natural and expressive way to traverse complex data structures, but they often miss optimization opportunities in modern compiler infrastructures like LLVM. This thesis explores a novel technique that temporarily transforms recursive traversals into synthetic loop-like structures, enabling existing loop-specific optimizations to apply, before transforming them back. By extending Clang’s semantic analysis and implementing a custom LLVM transformation pass, recursive traversals are initially restructured into synthetic loops that can benefit from existing loop analyses and optimizations. After these optimizations are applied, the transformation restores the original recursive semantics, preserving program behavior while retaining the performance gains. Evaluation across custom microbenchmarks shows that while general recursive traversals suffer a modest overhead, workloads designed to benefit from specific loop-focused optimizations achieve up to a 30% performance improvement. This demonstrates that, even though the approach requires temporarily "misrepresenting" code to the compiler, selective exposure of recursive patterns to loop-based optimization infrastructure is practical and effective. This work establishes a proof-of-concept for compiler transformations that bridge recursion and iteration, paving the way for future systems that better optimize real-world recursive code without sacrificing clarity or maintainability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grounding Time Series in Language: Interpretable Reasoning with Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/162683" rel="alternate"/>
<author>
<name>Chen, Lily</name>
</author>
<id>https://hdl.handle.net/1721.1/162683</id>
<updated>2025-12-09T18:21:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Grounding Time Series in Language: Interpretable Reasoning with Large Language Models
Chen, Lily
Can large language models (LLMs) classify time-series data by reasoning like a domain expert—if given the right language? We propose a method that expresses statistical time-series features in natural language, enabling LLMs to perform classification with structured, interpretable reasoning. By grounding low-level signal descriptors in semantic context, our approach reframes time-series classification as a language-based reasoning task. We evaluate this method across 23 diverse univariate datasets spanning biomedical, sensor, and human activity domains. Despite requiring no fine-tuning, it achieves competitive accuracy compared to traditional and foundation model baselines. Our method also enables models to generate expert-style justifications, providing interpretable insights into their decision-making process. We present one of the first large-scale analyses of LLM reasoning over statistical time-series features, examining calibration, explanation structure, and reasoning behavior. This work highlights the potential of language-native interfaces for interpretable and trustworthy time-series classification.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>National crop field delineation for the United States</title>
<link href="https://hdl.handle.net/1721.1/162681" rel="alternate"/>
<author>
<name>Chen, Zitong</name>
</author>
<id>https://hdl.handle.net/1721.1/162681</id>
<updated>2025-09-19T04:48:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">National crop field delineation for the United States
Chen, Zitong
Comprehensive and accurate crop field boundary maps are crucial for digital agriculture, land management, and environmental monitoring. However, no high-quality field boundary dataset is publicly available in the United States. This thesis addresses this gap by creating a new, large dataset and training a deep learning model capable of mapping field boundaries. We built a dataset of over 15,000 image-mask pairs using high-resolution National Agriculture Imagery Program (NAIP) satellite imagery and curated field boundary labels. This dataset covers a variety of leading agricultural states and includes images taken at different scales to capture a wide variety of field sizes and layouts. We used this dataset to train an adapted ResUNet++ neural network model designed to segment crop fields. The trained model achieved a pixel-level accuracy of around 0.8, showing that it can generally identify field areas well. However, its performance in matching predicted individual field instances to ground-truth instances (measured by mean instance Intersection over Union, or mIoU) was around 0.5. This lower instance score was largely due to the post-processing step, which converts the model’s probability predictions into separate field instances. Despite this, the field polygons produced by our approach are visually coherent with satellite field images and can be readily used with geospatial tools like Google Earth Engine. Our work provides a practical starting point for future research on mapping fields across the contiguous U.S. Potential directions for improvement include developing sharper boundary predictions, exploring direct instance segmentation models, refining post-processing methods, and expanding the dataset to include more challenging areas.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex</title>
<link href="https://hdl.handle.net/1721.1/162679" rel="alternate"/>
<author>
<name>Hanly, Bianca Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/162679</id>
<updated>2025-09-19T04:48:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex
Hanly, Bianca Marie
A Self-Interference Canceller (SIC) is the principal component that enables Simultaneous Transmit And Receive (STAR) in radio signal broadcasting. Previous research and designs by other groups have resulted in systems that either operate at high powers or are capable of cancellation over a wide bandwidth, but not both. This work builds upon previous research to design an analog SIC capable of both high-power (∼100 W) and wide-instantaneous-bandwidth (∼1 GHz) cancellation. The system is designed as a vector modulator using off-the-shelf hybrid couplers and switches with a custom variable attenuator designed using PIN diodes in a Waugh attenuator architecture. The system was fabricated on a four-layer PCB and measured with a network analyzer. Simulated results for the variable attenuator and the overall vector modulator are presented.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of Semantic SLAM on a Mobile Manipulator System</title>
<link href="https://hdl.handle.net/1721.1/162678" rel="alternate"/>
<author>
<name>Francis, Zachary R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162678</id>
<updated>2025-09-19T04:48:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Implementation of Semantic SLAM on a Mobile Manipulator System
Francis, Zachary R.
In the field of robotics, the development of household robots capable of performing everyday tasks continues to be a major area of research and practical interest. Many domestic chores—such as picking up and moving objects from one location to another—have been successfully performed by stationary robotic manipulators paired with visual perception systems. However, accomplishing more complex, varied, and spatially distributed tasks in real-world home environments requires a mobile platform with a more human-like form factor. These tasks demand greater flexibility, spatial awareness, and interaction capabilities than fixed systems can typically provide. This work focuses on the RBY1 robot from Rainbow Robotics, a humanoid platform designed to support advanced manipulation and mobility. A range of tools and modules were developed to enhance its functionality, including software for semantic perception, task execution, and environment interaction. This thesis provides a technical overview of these tools, highlighting their roles in collecting new datasets that can be used for semantic SLAM research. In the future, these tools can enable the robot to operate more effectively in domestic settings, towards the ultimate goal of enabling more capable home-assistive robots.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Introductory Computer Science Students’ Programming Process When Using a Generative AI Tutor (PyTutor)</title>
<link href="https://hdl.handle.net/1721.1/162677" rel="alternate"/>
<author>
<name>Cunningham, Caroline K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162677</id>
<updated>2025-09-19T04:48:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving Introductory Computer Science Students’ Programming Process When Using a Generative AI Tutor (PyTutor)
Cunningham, Caroline K.
This thesis examined students’ programming process while using PyTutor, a generative AI tutor for introductory computer science students. It addressed three research questions: (1) How does the process of test case creation, with or without PyTutor’s Test Case Runner, impact students’ programming process while using PyTutor? (2) How can prompt engineering of PyTutor’s system prompt be leveraged to improve AI Chat response quality with respect to: (a) reducing the amount of code revealed in the answer, (b) improving the conciseness of responses, and (c) having the AI chat give the student test cases as a tool to understand code correctness? (3) How do PyTutor’s responses from the updated prompt affect the programming process for computer science students? A key finding from a focus group in the first stage (n=9), apart from the test-case findings, was that the majority of participants who asked PyTutor questions received at least three lines of code in response, which is not ideal for PyTutor’s pedagogical purpose. This discovery motivated the next phase of the thesis, prompt engineering PyTutor, which resulted in an updated prompt. Responses from both the updated prompt and the original prompt were scored using an evaluation rubric. For the “Students thinking through problem” category of the rubric, the distribution of points for responses from the updated prompt was statistically significantly greater than that for responses from the original prompt. Finally, participants were asked to solve a programming problem using either PyTutor with the updated prompt (n=10) or PyTutor with the original prompt (n=2). Across the focus groups from the first and final stages, I found that fewer participants who used PyTutor with the updated prompt received at least three lines of code. Furthermore, participants who used PyTutor with the updated prompt required more messages before first receiving three lines of code.
Additionally, all four participants who received at least three lines of code from PyTutor with the updated prompt asked mostly high-level questions. As participant feedback suggested that PyTutor’s responses to high-level questions could be repetitive, this data highlights a new direction: improving PyTutor’s responses to high-level questions to benefit students’ programming process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>More with less: topology optimization strategies for structural glass design</title>
<link href="https://hdl.handle.net/1721.1/162676" rel="alternate"/>
<author>
<name>Jewett, Jackson L.</name>
</author>
<author>
<name>Koniari, Anna M.</name>
</author>
<author>
<name>Andriotis, Charalampos P.</name>
</author>
<author>
<name>Oikonomopoulou, Faidra</name>
</author>
<author>
<name>Bristogianni, Telesilla</name>
</author>
<author>
<name>Carstensen, Josephine V.</name>
</author>
<id>https://hdl.handle.net/1721.1/162676</id>
<updated>2025-09-18T03:08:17Z</updated>
<published>2025-05-30T00:00:00Z</published>
<summary type="text">More with less: topology optimization strategies for structural glass design
Jewett, Jackson L.; Koniari, Anna M.; Andriotis, Charalampos P.; Oikonomopoulou, Faidra; Bristogianni, Telesilla; Carstensen, Josephine V.
Advances in structural glass have enabled a new paradigm in expressive and transparent architecture. Cast glass can further extend the possibilities of structural glass by allowing for more complex and sophisticated shapes than the current planar geometries of structural float glass. However, the use of cast glass is currently limited because of the lengthy annealing process, making massive component sizes impractical to fabricate. Topology optimization (TO) has been proposed as a solution to this problem, as it is known to generate structurally efficient designs with a low volume of material. If tailored appropriately, TO can reduce component sizes and thereby diminish the total annealing time needed, while intelligently placing material in the areas where it will be utilized most effectively. For TO of glass to be successful, algorithms must properly capture glass’s specific material behavior. This research proposes a suite of TO algorithmic frameworks that design specifically for structural glass. These algorithms are demonstrated in a 2D design space, and the resulting geometries are fabricated using cut float glass and tested for experimental comparison on a 4-point bending load case. The results of these experiments provide valuable insights into the development of TO for structural glass, and help inform future research in TO of large-scale cast glass structures.
</summary>
<dc:date>2025-05-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of an effective machine learning-integrated science curriculum for high school youth in an informal learning setting</title>
<link href="https://hdl.handle.net/1721.1/162675" rel="alternate"/>
<author>
<name>Rabinowitz, Gabrielle</name>
</author>
<author>
<name>Moore, Katherine S.</name>
</author>
<author>
<name>Ali, Safinah</name>
</author>
<author>
<name>Weckel, Mark</name>
</author>
<author>
<name>Lee, Irene</name>
</author>
<author>
<name>Gupta, Preeti</name>
</author>
<author>
<name>Chaffee, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/162675</id>
<updated>2025-09-18T03:08:14Z</updated>
<published>2025-04-19T00:00:00Z</published>
<summary type="text">Study of an effective machine learning-integrated science curriculum for high school youth in an informal learning setting
Rabinowitz, Gabrielle; Moore, Katherine S.; Ali, Safinah; Weckel, Mark; Lee, Irene; Gupta, Preeti; Chaffee, Rachel
Abstract Purpose This study evaluates the effectiveness of a machine learning (ML) integrated science curriculum implemented within the Science Research Mentorship Program (SRMP) for high school youth at the American Museum of Natural History (AMNH) over 2 years. The 4-week curriculum focused on ML knowledge gain, skill development, and self-efficacy, particularly for under-represented youth in STEM. Background ML is increasingly prevalent in STEM fields, making early exposure to ML methods and artificial intelligence (AI) literacy crucial for youth pursuing STEM careers. However, STEM fields, particularly those focused on AI research and development, suffer from a lack of diversity. Learning experiences that support the participation of under-represented groups in STEM and ML are essential to addressing this gap. Results Participant learning was assessed through pre- and post-surveys measuring ML knowledge, skills, and self-efficacy. Results from the implementation of the curriculum show that participants gained understanding of ML knowledge and skills (p &lt; 0.001, d = 1.083) and self-efficacy in learning ML concepts (p = 0.004, d = 0.676). On average, participants who identified as female and non-white showed greater learning gains than their white male peers (ML knowledge: p &lt; 0.001, d = 1.191; self-efficacy: p = 0.006, d = 0.631), decreasing gaps in ML knowledge, skills, and self-efficacy identified in pre-survey scores. Conclusions The ML-integrated curriculum effectively enhances students’ understanding and confidence in ML concepts, especially for under-represented groups in STEM, and provides a model for future ML education initiatives in informal science settings. We suggest that policy makers and school leaders take into account that high school age youth can learn ML concepts through integrated curricula while maintaining an awareness that curriculum effectiveness varies across demographic groups.
</summary>
<dc:date>2025-04-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ancestry inferences from DNA testing results: The problem of sociogenetic essentialism</title>
<link href="https://hdl.handle.net/1721.1/162674" rel="alternate"/>
<author>
<name>Kampourakis, Kostas</name>
</author>
<author>
<name>Fux, Michal</name>
</author>
<id>https://hdl.handle.net/1721.1/162674</id>
<updated>2025-09-18T03:08:16Z</updated>
<published>2025-05-16T00:00:00Z</published>
<summary type="text">Ancestry inferences from DNA testing results: The problem of sociogenetic essentialism
Kampourakis, Kostas; Fux, Michal
Millions of people have now taken DNA ancestry tests, many of them looking for information about their origins or even their ethnic identity. However, all these tests can do is provide a probabilistic estimate of a person’s similarity to a reference group. This is often based on research in population genetics that studies human genetic variation by identifying ancestry informative markers, that is, DNA markers that are found more often in one population than in others. Whereas these markers are not the criteria for membership in a group, they can serve as indicia for it. However, a confusion of indicia for criteria can emerge, supported by a particular form of intuitive thinking: psychological essentialism. It consists of a set of interrelated beliefs: (a) Particular categories distinguish between fundamentally different kinds of people; (b) The boundaries that separate these categories are strict and absolute; (c) These categories have internal homogeneity and differ fundamentally from one another; (d) All this is due to internal essences that make the members of each category what they are. When our genome or DNA is perceived to be these essences and when this kind of thinking is applied to social categories such as race and ethnicity, a view that we call “sociogenetic essentialism”, it can be highly problematic, as it can form the basis for discrimination and exclusion. We argue that the use of and reference to ancestry informative markers, unless clearly explained, may be misinterpreted, due to a sociogenetic essentialist bias, as confirming the genetic basis of social groups.
</summary>
<dc:date>2025-05-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>RiD-kit: software package designed to do enhanced sampling using reinforced dynamics</title>
<link href="https://hdl.handle.net/1721.1/162673" rel="alternate"/>
<author>
<name>Fan, Jiahao</name>
</author>
<author>
<name>Wang, Yanze</name>
</author>
<author>
<name>Wang, Dongdong</name>
</author>
<author>
<name>Zhang, Linfeng</name>
</author>
<id>https://hdl.handle.net/1721.1/162673</id>
<updated>2025-09-18T03:08:19Z</updated>
<published>2025-06-24T00:00:00Z</published>
<summary type="text">RiD-kit: software package designed to do enhanced sampling using reinforced dynamics
Fan, Jiahao; Wang, Yanze; Wang, Dongdong; Zhang, Linfeng
Background Developing efficient methods to accelerate molecular dynamics is a central theme in the field of molecular simulation. One category among these methods is collective-variable-based methods, which rely on predefined collective variables. The difficulty of selecting a few important collective variables makes these methods hard to apply to large systems. Method Here we present RiD-kit, which can utilize a large number of collective variables for enhanced sampling. The method can be applied to various kinds of systems, including biomolecules, chemical reactions, and materials. In this protocol, we guide users through all phases of the RiD-kit workflow, from preparing the input files and setting the simulation parameters to analyzing the results. Discussion The RiD-kit workflow provides an efficient and user-friendly command-line tool that can submit jobs to various kinds of platforms, including high-performance computing platforms, cloud servers, and local machines.
</summary>
<dc:date>2025-06-24T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Norman B. Leventhal Center for Advanced Urbanism</title>
<link href="https://hdl.handle.net/1721.1/162672" rel="alternate"/>
<author>
<name>Williams, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/162672</id>
<updated>2025-09-18T03:09:54Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Norman B. Leventhal Center for Advanced Urbanism
Williams, Sarah
This report contains the following sections: Finance and Funding, Accomplishments, Administrative Initiatives, Personnel Updates, Teaching Impacts, Research Activities, Conferences and Presentations, Press and Publications, and Affiliated Faculty.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing VGOS observations using an SNR-based scheduling approach</title>
<link href="https://hdl.handle.net/1721.1/162671" rel="alternate"/>
<author>
<name>Schartner, Matthias</name>
</author>
<author>
<name>Petrachenko, Bill</name>
</author>
<author>
<name>Titus, Mike</name>
</author>
<author>
<name>Krásná, Hana</name>
</author>
<author>
<name>Barrett, John</name>
</author>
<author>
<name>Hoak, Dan</name>
</author>
<author>
<name>Mondal, Dhiman</name>
</author>
<author>
<name>Xu, Ming H.</name>
</author>
<author>
<name>Soja, Benedikt</name>
</author>
<id>https://hdl.handle.net/1721.1/162671</id>
<updated>2025-09-18T03:08:04Z</updated>
<published>2025-05-07T00:00:00Z</published>
<summary type="text">Optimizing VGOS observations using an SNR-based scheduling approach
Schartner, Matthias; Petrachenko, Bill; Titus, Mike; Krásná, Hana; Barrett, John; Hoak, Dan; Mondal, Dhiman; Xu, Ming H.; Soja, Benedikt
The geodetic and astrometric very long baseline interferometry (VLBI) community is in the process of upgrading its existing infrastructure with the VLBI Global Observing System (VGOS). The primary objective of VGOS is to substantially boost the number of scans per hour for enhanced parameter estimation. However, the current observing strategy results in fewer scans than anticipated. During 2022, six 24-h VGOS Research and Development (R&amp;D) sessions were conducted to demonstrate a proof-of-concept aimed at addressing this shortcoming. The new observation strategy centers on a signal-to-noise-ratio (SNR)-based scheduling approach combined with eliminating overhead times in existing VGOS sessions. Two SNR-based scheduling approaches were tested during these sessions: one utilizing inter-/extrapolation of existing S/X source flux density models and another based on a newly derived source flux density catalog at VGOS frequencies. Both approaches proved effective, leading to a 2.3-fold increase in the number of scheduled scans per station and a 2.6-fold increase in the number of observations per station while maintaining a high observation success rate of approximately 90 % to 95 %. Consequently, both strategies succeeded in the main objective of these sessions by successfully increasing the number of scans per hour. The strategies described in this work can be easily applied to operational VGOS observations. Besides outlining and discussing the observation strategy, we further provide insight into the resulting signal-to-noise ratios, and discuss the impact on the precision of the estimated geodetic parameters. Monte Carlo simulations predicted a roughly 50 % increase in geodetic precision compared to operational VGOS sessions. The analysis confirmed that the formal errors in estimated station coordinates were reduced by 40 % to 50 %. In addition, Earth orientation parameters showed significant improvement, with a 40 % to 50 % reduction in formal errors.
</summary>
<dc:date>2025-05-07T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging AI-Generated Emotional Self-Voice to Nudge People towards their Ideal Selves</title>
<link href="https://hdl.handle.net/1721.1/162670" rel="alternate"/>
<author>
<name>Fang, Cathy Mengying</name>
</author>
<author>
<name>Chua, Phoebe</name>
</author>
<author>
<name>Chan, Samantha</name>
</author>
<author>
<name>Leong, Joanne</name>
</author>
<author>
<name>Bao, Andria</name>
</author>
<author>
<name>Maes, Pattie</name>
</author>
<id>https://hdl.handle.net/1721.1/162670</id>
<updated>2025-09-18T03:08:24Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Leveraging AI-Generated Emotional Self-Voice to Nudge People towards their Ideal Selves
Fang, Cathy Mengying; Chua, Phoebe; Chan, Samantha; Leong, Joanne; Bao, Andria; Maes, Pattie
Emotions, shaped by past experiences, significantly influence decision-making and goal pursuit. Traditional cognitive-behavioral techniques for personal development rely on mental imagery to envision ideal selves, but may be less effective for individuals who struggle with visualization. This paper introduces Emotional Self-Voice (ESV), a novel system combining emotionally expressive language models and voice cloning technologies to render customized responses in the user’s own voice. We investigate the potential of ESV to nudge individuals towards their ideal selves in a study with 60 participants. Across all three conditions (ESV, text-only, and mental imagination), we observed an increase in resilience, confidence, motivation, and goal commitment, and the ESV condition was perceived as uniquely engaging and personalized. We discuss the implications of designing generated self-voice systems as a personalized behavioral intervention for different scenarios.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>ReMirrorFugue: Examining the Emotional Experience of Presence and (Illusory) Communications Across Time</title>
<link href="https://hdl.handle.net/1721.1/162669" rel="alternate"/>
<author>
<name>Xiao, Xiao</name>
</author>
<author>
<name>Noh, Hayoun</name>
</author>
<author>
<name>Lefevre, Adrien</name>
</author>
<author>
<name>Li, Lucy</name>
</author>
<author>
<name>McKee, Holly</name>
</author>
<author>
<name>Algargoosh, Alaa</name>
</author>
<author>
<name>Ishii, Hiroshi</name>
</author>
<id>https://hdl.handle.net/1721.1/162669</id>
<updated>2025-09-18T03:08:27Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">ReMirrorFugue: Examining the Emotional Experience of Presence and (Illusory) Communications Across Time
Xiao, Xiao; Noh, Hayoun; Lefevre, Adrien; Li, Lucy; McKee, Holly; Algargoosh, Alaa; Ishii, Hiroshi
This paper examines how strategies for simulating social presence across distance can evoke a sense of presence and facilitate illusory interactions across time. We conducted a mixed-methods study with 28 participants, exploring their emotional experience of interacting with decade-old recorded piano performances on MirrorFugue—a player piano enhanced with life-sized projections of the pianist’s hands and body, creating the illusion of a virtual reflection playing the instrument. Data were collected via wearable sensors, questionnaires, and interviews.
Results showed that participants felt a strong presence of past pianists, with some experiencing the illusion of two-way communication and an overall increase in connection. The emotional experience was significantly influenced by the participant’s relationship with the recorded pianist and the pianist’s vital status. These findings suggest that telepresence technologies can foster connections with the past, offering spaces for memory recall, self-reflection, and a sense of “time travel.”
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Why Is the Monsoon Coastal Upwelling Signal Subdued in the Bay of Bengal?</title>
<link href="https://hdl.handle.net/1721.1/162668" rel="alternate"/>
<author>
<name>Abbott, Kathleen</name>
</author>
<author>
<name>Mahadevan, Amala</name>
</author>
<id>https://hdl.handle.net/1721.1/162668</id>
<updated>2025-09-18T03:08:32Z</updated>
<published>2024-12-10T00:00:00Z</published>
<summary type="text">Why Is the Monsoon Coastal Upwelling Signal Subdued in the Bay of Bengal?
Abbott, Kathleen; Mahadevan, Amala
The Indian summer monsoon, which brings heavy precipitation to the densely populated Indian subcontinent, plays an important role in the development of a coastal upwelling circulation that brings colder, nutrient‐rich water to the surface. Although the western shores of the Arabian Sea (AS) and Bay of Bengal (BoB) both experience upwelling‐favorable winds during June‐August, only the AS coastline exhibits significant surface cooling. In contrast, the BoB remains warm and its upwelling is characterized by a transient, weak sea surface temperature (SST) response confined to the east coast of India. A weaker mean alongshore wind stress and coastal circulation do not sufficiently explain the lack of SST response in the BoB. Here, we examine other reasons for the differing behavior of these two coastal margins. Firstly, we show that while winds are persistently upwelling‐favorable in the western AS, intraseasonal wind variability in the BoB induces intermittent upwelling. Secondly, the vertical density stratification is controlled by salinity in the BoB, and upwelled waters are saltier, but only marginally cooler than surface waters. By contrast, the density in the AS is temperature‐controlled, and upwelled waters are substantially colder than the surface. Additionally, satellite‐based SST in the BoB does not adequately resolve the upwelling signal. Using a numerical model, we find that salinity stratification has a greater influence on the mean SST, while wind frequency alters near‐shore SST and its temporal variability. This work has implications for the sensitivity of upwelling regions and their response to wind stress and stratification in a warming climate.
</summary>
<dc:date>2024-12-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autonomous observations enhance our ability to observe the biological carbon pump across diverse carbon export regimes</title>
<link href="https://hdl.handle.net/1721.1/162667" rel="alternate"/>
<author>
<name>Traylor, Shawnee</name>
</author>
<author>
<name>Nicholson, David P</name>
</author>
<author>
<name>Clevenger, Samantha J</name>
</author>
<author>
<name>Buesseler, Ken O</name>
</author>
<author>
<name>D'Asaro, Eric</name>
</author>
<author>
<name>Lee, Craig M</name>
</author>
<id>https://hdl.handle.net/1721.1/162667</id>
<updated>2025-09-18T03:08:34Z</updated>
<published>2025-08-28T00:00:00Z</published>
<summary type="text">Autonomous observations enhance our ability to observe the biological carbon pump across diverse carbon export regimes
Traylor, Shawnee; Nicholson, David P; Clevenger, Samantha J; Buesseler, Ken O; D'Asaro, Eric; Lee, Craig M
The expansion of autonomous observation platforms offers vast opportunities for analyzing ocean ecosystems and their role in carbon export. As part of the EXport Processes in the Ocean from RemoTe Sensing campaign, we autonomously measured the productivity regimes in two contrasting end-member ecosystem states. The first campaign occurred in the subpolar North Pacific near Ocean Station Papa (Site 1), characterized by iron limitation and a highly regenerative regime. The second captured a springtime bloom in the North Atlantic (Site 2), which typically drives efficient export of productivity. Using a combination of floats and gliders carrying biogeochemical sensors, we quantified gross primary productivity, net community production, and organic carbon export potential (fCorg) to assess biological carbon pump strength. Site 2 demonstrated higher cruise-period productivity, with roughly 5× the gross primary productivity and 13× the euphotic zone net community production seen at Site 1. Greater export efficiency at Site 2 was reflected in numerous indices, such as the ratio of new production to net primary productivity (ef-ratio; Site 1: 0.33; Site 2: 0.73), the ratio of sinking particulate organic carbon to net primary productivity (ez-ratio; Site 1: 0.24; Site 2: 0.69), and mean daily fCorg (Site 1: 3.4 ± 0.7; Site 2: 20.3 ± 2.3 mmol C m−2 d−1). Together with particulate organic carbon flux derived from thorium-234 measurements, we infer that observed low net community production was almost entirely routed to sinking particulate organic carbon at Site 1, while the much higher net community production at Site 2 resulted in near-equal proportions routed to dissolved organic carbon production and sinking particulate organic carbon.
</summary>
<dc:date>2025-08-28T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transitive Array: An Efficient GEMM Accelerator with Result Reuse</title>
<link href="https://hdl.handle.net/1721.1/162666" rel="alternate"/>
<author>
<name>Guo, Cong</name>
</author>
<author>
<name>Wei, Chiyue</name>
</author>
<author>
<name>Tang, Jiaming</name>
</author>
<author>
<name>Duan, Bowen</name>
</author>
<author>
<name>Han, Song</name>
</author>
<author>
<name>Li, Hai</name>
</author>
<author>
<name>Chen, Yiran</name>
</author>
<id>https://hdl.handle.net/1721.1/162666</id>
<updated>2025-09-17T07:42:17Z</updated>
<published>2025-06-20T00:00:00Z</published>
<summary type="text">Transitive Array: An Efficient GEMM Accelerator with Result Reuse
Guo, Cong; Wei, Chiyue; Tang, Jiaming; Duan, Bowen; Han, Song; Li, Hai; Chen, Yiran
Deep Neural Networks (DNNs) and Large Language Models (LLMs) have revolutionized artificial intelligence, yet their deployment faces significant memory and computational challenges, especially in resource-constrained environments. Quantization techniques have mitigated some of these issues by reducing data precision, primarily focusing on General Matrix Multiplication (GEMM). This study introduces a novel sparsity paradigm, transitive sparsity, which leverages the reuse of previously computed results to substantially minimize computational overhead in GEMM operations. By representing transitive relations using a directed acyclic graph, we develop an efficient strategy for determining optimal execution orders, thereby overcoming inherent challenges related to execution dependencies and parallelism. Building on this foundation, we present the Transitive Array, a multiplication-free accelerator designed to exploit transitive sparsity in GEMM. Our architecture effectively balances computational workloads across multiple parallel lanes, ensuring high efficiency and optimal resource utilization. Comprehensive evaluations demonstrate that the Transitive Array achieves approximately 7.46× and 3.97× speedup and 2.31× and 1.65× energy reduction compared to state-of-the-art accelerators such as Olive and BitVert while maintaining comparable model accuracy on LLaMA models.
ISCA ’25, Tokyo, Japan
</summary>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sonora: Human-AI Co-Creation of 3D Audio Worlds and its Impact on Anxiety and Cognitive Load</title>
<link href="https://hdl.handle.net/1721.1/162665" rel="alternate"/>
<author>
<name>De La Torre, Fernanda</name>
</author>
<author>
<name>Hernandez, Javier</name>
</author>
<author>
<name>Wilson, Andrew</name>
</author>
<author>
<name>Amores, Judith</name>
</author>
<id>https://hdl.handle.net/1721.1/162665</id>
<updated>2025-09-17T07:42:22Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Sonora: Human-AI Co-Creation of 3D Audio Worlds and its Impact on Anxiety and Cognitive Load
De La Torre, Fernanda; Hernandez, Javier; Wilson, Andrew; Amores, Judith
Soundscapes are widely used for relaxation, but their potential for personalized, navigable experiences remains under-explored. To address this, we developed Sonora, an AI tool that enables real-time generation of synthetic, spatialized soundscapes, allowing users to navigate immersive auditory environments and customize soundscapes using voice commands. Sonora’s architecture integrates audio diffusion models and LLMs within Unity3D. A between-subjects study with 32 participants investigated its effects on anxiety and user experience, compared to a control condition involving passive listening to a soundscape. Participants who interacted with Sonora reported higher entertainment than the control group. A positive correlation was found between state anxiety and user requests for Sonora, suggesting anxious users engaged more. Participants with moderate to high trait anxiety experienced significant reductions in state anxiety across both conditions, with no significant difference in cognitive load. Our findings highlight Sonora’s potential to promote relaxation, emphasizing the value of personalized experiences for mental health.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion</title>
<link href="https://hdl.handle.net/1721.1/162664" rel="alternate"/>
<author>
<name>Nasr-Esfahany, Arash</name>
</author>
<author>
<name>Alizadeh, Mohammad</name>
</author>
<author>
<name>Lee, Victor</name>
</author>
<author>
<name>Alam, Hanna</name>
</author>
<author>
<name>Coon, Brett</name>
</author>
<author>
<name>Culler, David</name>
</author>
<author>
<name>Dadu, Vidushi</name>
</author>
<author>
<name>Dixon, Martin</name>
</author>
<author>
<name>Levy, Henry</name>
</author>
<author>
<name>Pandey, Santosh</name>
</author>
<author>
<name>Ranganathan, Parthasarathy</name>
</author>
<author>
<name>Yazdanbakhsh, Amir</name>
</author>
<id>https://hdl.handle.net/1721.1/162664</id>
<updated>2025-09-17T07:42:08Z</updated>
<published>2025-06-20T00:00:00Z</published>
<summary type="text">Concorde: Fast and Accurate CPU Performance Modeling with Compositional Analytical-ML Fusion
Nasr-Esfahany, Arash; Alizadeh, Mohammad; Lee, Victor; Alam, Hanna; Coon, Brett; Culler, David; Dadu, Vidushi; Dixon, Martin; Levy, Henry; Pandey, Santosh; Ranganathan, Parthasarathy; Yazdanbakhsh, Amir
Cycle-level simulators such as gem5 are widely used in microarchitecture design, but they are prohibitively slow for large-scale design space explorations. We present Concorde, a new methodology for learning fast and accurate performance models of microarchitectures. Unlike existing simulators and learning approaches that emulate each instruction, Concorde predicts the behavior of a program based on compact performance distributions that capture the impact of different microarchitectural components. It derives these performance distributions using simple analytical models that estimate bounds on performance induced by each microarchitectural component, providing a simple yet rich representation of a program’s performance characteristics across a large space of microarchitectural parameters. Experiments show that Concorde is more than five orders of magnitude faster than a reference cycle-level simulator, with about 2% average Cycles-Per-Instruction (CPI) prediction error across a range of SPEC, open-source, and proprietary benchmarks. This enables rapid design-space exploration and performance sensitivity analyses that are currently infeasible, e.g., in about an hour, we conducted a first-of-its-kind fine-grained performance attribution to different microarchitectural components across a diverse set of programs, requiring nearly 150 million CPI evaluations.
ISCA ’25, Tokyo, Japan
</summary>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>What's in a Query: Polarity-Aware Distribution-Based Fair Ranking</title>
<link href="https://hdl.handle.net/1721.1/162663" rel="alternate"/>
<author>
<name>Balagopalan, Aparna</name>
</author>
<author>
<name>Wang, Kai</name>
</author>
<author>
<name>Salaudeen, Olawale</name>
</author>
<author>
<name>Biega, Asia</name>
</author>
<author>
<name>Ghassemi, Marzyeh</name>
</author>
<id>https://hdl.handle.net/1721.1/162663</id>
<updated>2025-09-17T07:42:25Z</updated>
<published>2025-04-22T00:00:00Z</published>
<summary type="text">What's in a Query: Polarity-Aware Distribution-Based Fair Ranking
Balagopalan, Aparna; Wang, Kai; Salaudeen, Olawale; Biega, Asia; Ghassemi, Marzyeh
Machine learning-driven rankings, where individuals (or items) are ranked in response to a query, mediate search exposure or attention in a variety of safety-critical settings. Thus, it is important to ensure that such rankings are fair. Under the goal of equal opportunity, attention allocated to an individual on a ranking interface should be proportional to their relevance across search queries. In this work, we examine amortized fair ranking -- where relevance and attention are cumulated over a sequence of user queries to make fair ranking more feasible in practice. Unlike prior methods that operate on expected amortized attention for each individual, we define new divergence-based measures for attention distribution-based fairness in ranking (DistFaiR), characterizing unfairness as the divergence between the distribution of attention and relevance corresponding to an individual over time. This allows us to propose new definitions of unfairness, which are more reliable at test time. Second, we prove that group fairness is upper-bounded by individual fairness under this definition for a useful class of divergence measures, and experimentally show that maximizing individual fairness through an integer linear programming-based optimization is often beneficial to group fairness. Lastly, we find that prior research in amortized fair ranking ignores critical information about queries, potentially leading to a fairwashing risk in practice by making rankings appear more fair than they actually are.
WWW ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</summary>
<dc:date>2025-04-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Computational Advantage of MIP* Vanishes in the Presence of Noise</title>
<link href="https://hdl.handle.net/1721.1/162662" rel="alternate"/>
<author>
<name>Dong, Yangjing</name>
</author>
<author>
<name>Fu, Honghao</name>
</author>
<author>
<name>Natarajan, Anand</name>
</author>
<author>
<name>Qin, Minglong</name>
</author>
<author>
<name>Xu, Haochen</name>
</author>
<author>
<name>Yao, Penghui</name>
</author>
<id>https://hdl.handle.net/1721.1/162662</id>
<updated>2025-09-17T07:42:34Z</updated>
<published>2025-08-16T00:00:00Z</published>
<summary type="text">The Computational Advantage of MIP* Vanishes in the Presence of Noise
Dong, Yangjing; Fu, Honghao; Natarajan, Anand; Qin, Minglong; Xu, Haochen; Yao, Penghui
Quantum multiprover interactive proof systems with entanglement MIP* are much more powerful than its classical counterpart MIP (Babai et al. '91, Ji et al. '20): while MIP = NEXP, the quantum class MIP* is equal to RE, a class including the halting problem. This is because the provers in MIP* can share unbounded quantum entanglement. However, recent works of Qin and Yao '21 and '23 have shown that this advantage is significantly reduced if the provers' shared state contains noise. This paper attempts to exactly characterize the effect of noise on the computational power of quantum multiprover interactive proof systems. We investigate the quantum two-prover one-round interactive system MIP*[poly, O(1)], where the verifier sends polynomially many bits to the provers and the provers send back constantly many bits. We show noise completely destroys the computational advantage given by shared entanglement in this model. Specifically, we show that if the provers are allowed to share arbitrarily many noisy EPR states, where each EPR state is affected by an arbitrarily small constant amount of noise, the resulting complexity class is equivalent to NEXP = MIP. This improves significantly on the previous best-known bound of NEEEXP (nondeterministic triply exponential time) by Qin and Yao '21. We also show that this collapse in power is due to the noise, rather than the O(1) answer size, by showing that allowing for noiseless EPR states gives the class the full power of RE = MIP*[poly, poly]. Along the way, we develop two technical tools of independent interest. First, we give a new, deterministic tester for the positivity of an exponentially large matrix, provided it has a low-degree Fourier decomposition in terms of Pauli matrices. Secondly, we develop a new invariance principle for smooth matrix functions having bounded third-order Fr&amp;#233;chet derivatives or which are Lipschitz continuous.
</summary>
<dc:date>2025-08-16T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inherent Bias in Electronic Health Records: A Scoping Review of Sources of Bias</title>
<link href="https://hdl.handle.net/1721.1/162661" rel="alternate"/>
<author>
<name>Perets, Oriel</name>
</author>
<author>
<name>Stagno, Emanuela</name>
</author>
<author>
<name>Ben Yehuda, Eyal</name>
</author>
<author>
<name>McNichol, Megan</name>
</author>
<author>
<name>Celi, Leo</name>
</author>
<author>
<name>Rappoport, Nadav</name>
</author>
<author>
<name>Dorotic, Matilda</name>
</author>
<id>https://hdl.handle.net/1721.1/162661</id>
<updated>2025-09-17T07:42:31Z</updated>
<published>2024-08-05T00:00:00Z</published>
<summary type="text">Inherent Bias in Electronic Health Records: A Scoping Review of Sources of Bias
Perets, Oriel; Stagno, Emanuela; Ben Yehuda, Eyal; McNichol, Megan; Celi, Leo; Rappoport, Nadav; Dorotic, Matilda
Biases inherent in electronic health records (EHRs), a common data source for training medical AI models, may exacerbate health inequities and hinder the adoption of ethical, responsible AI in healthcare. These biases originate from various sources, including implicit clinician biases, data collection and labeling practices, medical devices, and tools used for data processing. Such biases undermine data reliability, influence clinical decisions, and worsen healthcare disparities. When EHR data is used to develop data-driven solutions, biases can further propagate, creating systems that perpetuate inequities. This scoping review categorizes the primary sources of bias in EHRs. We conducted a literature search on PubMed and Web of Science (January 19, 2023) for English-language studies published between 2016 and 2023, following the PRISMA methodology. From 430 initial papers, 27 duplicates were removed, and 403 studies were screened for eligibility. After title, abstract, and full-text reviews, 116 articles were included in the final analysis. Existing studies often focus on isolated biases in EHRs but lack a comprehensive taxonomy. To address this gap, we propose a systematic classification framework encompassing six key sources of bias: (a) biases from prior clinical trials; (b) data-related biases, such as missing or incomplete information; (c) implicit clinician bias; (d) referral and admission bias; (e) diagnosis or risk disparity biases; and (f) biases in medical devices and algorithms. This taxonomy, outlined in Table 1, provides a foundation for evaluating and addressing these issues. While machine learning has transformative potential in healthcare, its effectiveness depends on the integrity of its inputs. Current evidence predominantly addresses data-related biases, with less attention to human or device-related biases, which are often anecdotal or underexplored.
For example, racial biases in EHRs are well-documented, but gender-related, sexual orientation, and socially induced biases remain less studied. Compounding biases from these diverse sources can significantly impact AI recommendations, clinical decisions, and patient outcomes. Our review underscores the prevalence of data, human, and machine biases in healthcare and their role in amplifying disparities. To mitigate these challenges, we recommend adopting a "bias-in-mind" approach when designing data-driven solutions, along with developing safeguards and generating more empirical evidence on bias impacts. This holistic understanding is essential for ensuring equitable and reliable AI applications in healthcare.
</summary>
<dc:date>2024-08-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing and Categorizing Emerging Cybersecurity Regulations</title>
<link href="https://hdl.handle.net/1721.1/162660" rel="alternate"/>
<author>
<name>Marotta, Angelica</name>
</author>
<author>
<name>Madnick, Stuart</name>
</author>
<id>https://hdl.handle.net/1721.1/162660</id>
<updated>2025-09-17T07:42:36Z</updated>
<published>2028-09-08T00:00:00Z</published>
<summary type="text">Analyzing and Categorizing Emerging Cybersecurity Regulations
Marotta, Angelica; Madnick, Stuart
As cyber-attacks become more frequent, sophisticated, and impactful, governments worldwide are responding by introducing or proposing new cybersecurity regulations. This paper examines over 170 recent regulations and trends in cybersecurity across various regions, including the United States, Europe, and beyond. It identifies 17 key features in many of these regulations, which we have grouped into 5 categories, analyzes observed patterns, and proposes areas for improvement. This paper's primary objective is to significantly contribute to the cybersecurity compliance domain by helping researchers understand the structure of these regulations and helping organizations to assess and mitigate their cyber risk within an increasingly complex and regulated cybersecurity environment. Our findings provide valuable direction both to organizations navigating the flood of new cybersecurity regulations and to the governments enacting them.
</summary>
<dc:date>2028-09-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Tech Lash to Tech Fash: Strategic Reflections on a Decade of Collective Organizing in Computing</title>
<link href="https://hdl.handle.net/1721.1/162659" rel="alternate"/>
<author>
<name>Huber, Linda</name>
</author>
<author>
<name>Reynolds-Cuéllar, Pedro</name>
</author>
<author>
<name>DeVrio, Alicia</name>
</author>
<author>
<name>Raihan, Jensine</name>
</author>
<author>
<name>Sum, Cella</name>
</author>
<author>
<name>Dombrowski, Lynn</name>
</author>
<author>
<name>Zhang, Justine</name>
</author>
<author>
<name>Becker, Christoph</name>
</author>
<author>
<name>Irani, Lilly</name>
</author>
<author>
<name>Krafft, P M</name>
</author>
<author>
<name>Hughes, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/162659</id>
<updated>2026-03-08T03:24:31Z</updated>
<published>2025-08-30T00:00:00Z</published>
<summary type="text">From Tech Lash to Tech Fash: Strategic Reflections on a Decade of Collective Organizing in Computing
Huber, Linda; Reynolds-Cuéllar, Pedro; DeVrio, Alicia; Raihan, Jensine; Sum, Cella; Dombrowski, Lynn; Zhang, Justine; Becker, Christoph; Irani, Lilly; Krafft, P M; Hughes, Margaret
Computing is a field plagued with presentism, oriented towards the new in ways that limit our design and research practices - as well as our capacity to understand and collectively respond to emerging crises. To improve our sensemaking and strategizing about today’s crises, this workshop explores what Tamara Kneese has deemed the last decade’s shift from “techlash” to “tech fash.” What have we learned from the era of misinformation and bias, of “surveillance capitalism” and tech worker organizing that can inform our struggle against the increasing power of a techno-fascist oligarchy? We will also look towards previous generations of computing professionals and activists, who likewise sought to address the harms of emerging automated systems and the complicity of computing within violent, imperialist projects. This workshop will create space for participants to explore these questions collectively, bridging past and present moments in an effort to devise strategies moving forward.
</summary>
<dc:date>2025-08-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using experimental data in computationally guided rational design of inorganic materials with machine learning</title>
<link href="https://hdl.handle.net/1721.1/162658" rel="alternate"/>
<author>
<name>Kulik, Heather J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162658</id>
<updated>2026-03-08T03:21:09Z</updated>
<published>2025-04-08T00:00:00Z</published>
<summary type="text">Using experimental data in computationally guided rational design of inorganic materials with machine learning
Kulik, Heather J.
While the impact of machine learning (ML) has been felt everywhere, its effect has been most transformative where large, high-quality datasets are available. For promising materials spaces, such as transition metal coordination complexes and metal–organic frameworks, the large chemical diversity has not yet been matched by similarly large datasets, and computational datasets (e.g., from density functional theory) may not be predictive. Extraction of experimental data from the literature represents an alternative approach to the data-driven design of materials. This perspective will describe efforts in (i) extracting experimental data; (ii) associating extracted data with known chemical structures; (iii) leveraging data in ML and screening; (iv) designing materials with enriched stability; and (v) using experimental data to improve high-throughput workflows. I will summarize some of the outstanding challenges and opportunities for data enrichment with high-throughput experimentation and large language models.
</summary>
<dc:date>2025-04-08T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced-Order Modeling for Physical Simulation: From the Classical to the Neural</title>
<link href="https://hdl.handle.net/1721.1/162657" rel="alternate"/>
<author>
<name>Levin, David IW</name>
</author>
<author>
<name>Chen, Peter Yichen</name>
</author>
<author>
<name>Grinspun, Eitan</name>
</author>
<id>https://hdl.handle.net/1721.1/162657</id>
<updated>2026-03-08T03:24:30Z</updated>
<published>2025-08-19T00:00:00Z</published>
<summary type="text">Reduced-Order Modeling for Physical Simulation: From the Classical to the Neural
Levin, David IW; Chen, Peter Yichen; Grinspun, Eitan
This workshop aims to explore the evolution of subspace methods in physical simulation, tracing their origins from classical engineering formulations to cutting-edge neural techniques. By gathering leading researchers, students, and practitioners, the session will serve as a platform for cross-disciplinary dialogue, education, and community building around model reduction techniques in graphics and simulation.
SIGGRAPH Frontiers ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drawing and Sketching: Art, Psychology, and Computer Graphics</title>
<link href="https://hdl.handle.net/1721.1/162656" rel="alternate"/>
<author>
<name>Vinker, Yael</name>
</author>
<author>
<name>Tang, Mia</name>
</author>
<author>
<name>Hertzmann, Aaron</name>
</author>
<author>
<name>Fan, Judith</name>
</author>
<author>
<name>Agrawala, Maneesh</name>
</author>
<author>
<name>Chandra, Kartik</name>
</author>
<author>
<name>Fu, Hongbo</name>
</author>
<author>
<name>Schaldenbrand, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/162656</id>
<updated>2026-03-08T03:24:28Z</updated>
<summary type="text">Drawing and Sketching: Art, Psychology, and Computer Graphics
Vinker, Yael; Tang, Mia; Hertzmann, Aaron; Fan, Judith; Agrawala, Maneesh; Chandra, Kartik; Fu, Hongbo; Schaldenbrand, Peter
Sketching is a fundamental form of expression that supports visual thinking, conceptual exploration, and communication across cultures, generations, and disciplines [Fan et al. 2023; Goel 1995; Hertzmann 2021; Tversky 2002; 2011; Tversky et al. 2003]. Whether through quick marks or detailed renderings, it externalizes ideas into tangible visual form, serving as both a creative act and a cognitive tool. For example, designers use sketches to explore new ideas [Goldschmidt 1992; Tversky et al. 2003], scientists employ them to formulate problems [Kaiser 2019; Nasim 2019], and children engage in sketching to learn and express themselves [Fiorella and Kuhlmann 2020; Forbus et al. 2011]. This central role has made drawing and sketching a long-standing topic of interest in computer graphics, computer vision, and machine learning [Bénard and Hertzmann 2018; Berger et al. 2013; Canny 1986; DeCarlo et al. 2003; Ha and Eck 2017; Hertzmann 2003; Judd et al. 2007; Vinker et al. 2022; Winnemöller et al. 2012; Xie and Tu 2017; Xu et al. 2020].
SIGGRAPH Frontiers ’25, Vancouver, BC, Canada
</summary>
</entry>
<entry>
<title>Towards Interoperability: Pursuing an ontology for data exchange between deliberative democratic platforms</title>
<link href="https://hdl.handle.net/1721.1/162655" rel="alternate"/>
<author>
<name>Hughes, Margaret</name>
</author>
<author>
<name>DeSota, Elianna</name>
</author>
<author>
<name>Victor, Matthew</name>
</author>
<author>
<name>Lynn, Stuart</name>
</author>
<author>
<name>Stormonth-Darling, John</name>
</author>
<author>
<name>Barry, Liz</name>
</author>
<id>https://hdl.handle.net/1721.1/162655</id>
<updated>2026-03-08T03:24:30Z</updated>
<published>2025-08-30T00:00:00Z</published>
<summary type="text">Towards Interoperability: Pursuing an ontology for data exchange between deliberative democratic platforms
Hughes, Margaret; DeSota, Elianna; Victor, Matthew; Lynn, Stuart; Stormonth-Darling, John; Barry, Liz
In response to the fragmented state of civic engagement tools and the urgent challenges facing democratic systems, this paper introduces a shared, contributor-driven ontology to connect diverse civic tech platforms, emerging from the work of the Interoperable Deliberative Tool cohort at Metagov. By integrating platforms like Voice to Vision, Assemblis, and Decidim, we enable the flow of deliberative data across contexts, supporting more cohesive decision-making. This approach helps bridge gaps between input, analysis, and action, enhancing democratic resilience in crisis moments. Through our work, we demonstrate how interoperability can strengthen civic engagement and provide a foundation for more responsive, collaborative governance.
AAR Adjunct 2025, Aarhus N, Denmark
</summary>
<dc:date>2025-08-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calorimetric wire detector for measurement of atomic hydrogen beams</title>
<link href="https://hdl.handle.net/1721.1/162654" rel="alternate"/>
<author>
<name>Astaschov, M.</name>
</author>
<author>
<name>Bhagvati, S.</name>
</author>
<author>
<name>Böser, S.</name>
</author>
<author>
<name>Brandsema, M. J.</name>
</author>
<author>
<name>Cabral, R.</name>
</author>
<author>
<name>Claessens, C.</name>
</author>
<author>
<name>de Viveiros, L.</name>
</author>
<author>
<name>Enomoto, S.</name>
</author>
<author>
<name>Fenner, D.</name>
</author>
<author>
<name>Fertl, M.</name>
</author>
<author>
<name>Formaggio, J. A.</name>
</author>
<author>
<name>Foust, B. T.</name>
</author>
<author>
<name>Gaison, J. K.</name>
</author>
<author>
<name>Harmston, P.</name>
</author>
<author>
<name>Heeger, K. M.</name>
</author>
<author>
<name>Hüneborn, M. B.</name>
</author>
<author>
<name>Huyan, X.</name>
</author>
<author>
<name>Jones, A. M.</name>
</author>
<author>
<name>Jones, B. J. P.</name>
</author>
<author>
<name>Karim, E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162654</id>
<updated>2026-03-08T03:19:43Z</updated>
<published>2025-05-26T00:00:00Z</published>
<summary type="text">Calorimetric wire detector for measurement of atomic hydrogen beams
Astaschov, M.; Bhagvati, S.; Böser, S.; Brandsema, M. J.; Cabral, R.; Claessens, C.; de Viveiros, L.; Enomoto, S.; Fenner, D.; Fertl, M.; Formaggio, J. A.; Foust, B. T.; Gaison, J. K.; Harmston, P.; Heeger, K. M.; Hüneborn, M. B.; Huyan, X.; Jones, A. M.; Jones, B. J. P.; Karim, E.
A calorimetric detector for minimally disruptive measurements of atomic hydrogen beams is described. The calorimeter measures heat released by the recombination of hydrogen atoms into molecules on a thin wire. As a demonstration, the angular distribution of a beam with a peak intensity of ≈10¹⁶ atoms/(cm² s) is measured by translating the wire across the beam. The data agree well with an analytic model of the beam from the thermal hydrogen atom source. Using the beam shape model, the relative intensity of the beam can be determined to 5% precision or better at any angle.
</summary>
<dc:date>2025-05-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Artificial Intelligence for Tactical Network Troubleshooting</title>
<link href="https://hdl.handle.net/1721.1/162653" rel="alternate"/>
<author>
<name>Jaimes, Rafael</name>
</author>
<author>
<name>Mendez, Maximillian</name>
</author>
<id>https://hdl.handle.net/1721.1/162653</id>
<updated>2025-09-13T03:09:38Z</updated>
<published>2025-09-12T00:00:00Z</published>
<summary type="text">Artificial Intelligence for Tactical Network Troubleshooting
Jaimes, Rafael; Mendez, Maximillian
The tactical network is a key component of most United States Marine Corps missions. It is critical to expeditiously stand up a robust communications architecture for both voice and data transmissions across a variety of classification levels. However, when there are unforeseen or induced faults in network configurations, the establishment time can increase by hours if not days. The research described in this report sought to determine if a large language model (LLM), when provided the correct baseline network configurations, would be able to identify errors in active working network configurations and reduce network establishment time. A/B testing was conducted to see whether teams assisted by artificial intelligence (AI) or control teams with no AI assistance could establish the network faster. The LLM hosted by NIPRGPT decreased the establishment time by 50 percent (p &lt; 0.05) compared to warfighters unaided by AI. The results conclude that AI agents such as LLMs can be useful in providing commanders with a course of action to establish command, control, communications, and computers (C4) faster.
The Department of the Air Force Artificial Intelligence Accelerator
</summary>
<dc:date>2025-09-12T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal transport for generating transition states in chemical reactions</title>
<link href="https://hdl.handle.net/1721.1/162652" rel="alternate"/>
<author>
<name>Duan, Chenru</name>
</author>
<author>
<name>Liu, Guan-Horng</name>
</author>
<author>
<name>Du, Yuanqi</name>
</author>
<author>
<name>Chen, Tianrong</name>
</author>
<author>
<name>Zhao, Qiyuan</name>
</author>
<author>
<name>Jia, Haojun</name>
</author>
<author>
<name>Gomes, Carla P</name>
</author>
<author>
<name>Theodorou, Evangelos A</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162652</id>
<updated>2026-03-08T03:24:33Z</updated>
<published>2025-04-23T00:00:00Z</published>
<summary type="text">Optimal transport for generating transition states in chemical reactions
Duan, Chenru; Liu, Guan-Horng; Du, Yuanqi; Chen, Tianrong; Zhao, Qiyuan; Jia, Haojun; Gomes, Carla P; Theodorou, Evangelos A; Kulik, Heather J
Transition states (TSs) are transient structures that are key to understanding reaction mechanisms and designing catalysts but challenging to capture in experiments. Many optimization algorithms have been developed to search for TSs computationally. Yet, the cost of these algorithms driven by quantum chemistry methods (usually density functional theory) is still high, posing challenges for their applications in building large reaction networks for reaction exploration. Here we developed React-OT, an optimal transport approach for generating unique TS structures from reactants and products. React-OT generates highly accurate TS structures with a median structural root mean square deviation of 0.053 Å and median barrier height error of 1.06 kcal mol⁻¹, requiring only 0.4 s per reaction. The root mean square deviation and barrier height error are further improved by roughly 25% through pretraining React-OT on a large reaction dataset obtained with a lower level of theory, GFN2-xTB. We envision that the remarkable accuracy and rapid inference of React-OT will be highly useful when integrated with the current high-throughput TS search workflow. This integration will facilitate the exploration of chemical reactions with unknown mechanisms.
</summary>
<dc:date>2025-04-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Blueprints for the Geometric Control of N-Heterocyclic Carbene–Carbodiimide Isomers</title>
<link href="https://hdl.handle.net/1721.1/162651" rel="alternate"/>
<author>
<name>Day, Craig S</name>
</author>
<author>
<name>Grabicki, Niklas</name>
</author>
<author>
<name>Chu, Daniel BK</name>
</author>
<author>
<name>Keys, Allison</name>
</author>
<author>
<name>Singhal, Avni</name>
</author>
<author>
<name>Vennelakanti, Vyshnavi</name>
</author>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Gómez‐Bombarelli, Rafael</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Johnson, Jeremiah</name>
</author>
<id>https://hdl.handle.net/1721.1/162651</id>
<updated>2026-03-08T03:24:35Z</updated>
<published>2025-05-20T00:00:00Z</published>
<summary type="text">Blueprints for the Geometric Control of N-Heterocyclic Carbene–Carbodiimide Isomers
Day, Craig S; Grabicki, Niklas; Chu, Daniel BK; Keys, Allison; Singhal, Avni; Vennelakanti, Vyshnavi; Kevlishvili, Ilia; Gómez‐Bombarelli, Rafael; Kulik, Heather J; Johnson, Jeremiah
Rational control of the 3D presentation of atoms—stereochemistry—lies at the heart of synthetic organic and materials chemistries. Here, researchers report detailed computational studies on conformational isomerism in N-heterocyclic carbene–carbodiimide (NHC–CDI) zwitterionic adducts. By varying the steric and electronic parameters of the NHC and CDI components, criteria for controlling isomerization thermodynamics and predicting energetically favorable conformations are identified. These criteria are validated experimentally using a novel synthetic approach to NHC–CDIs, which exploits the thermodynamic equilibrium between sterically unencumbered NHC dimers to access NHC–CDI adducts with low barriers to conformational isomerization, including the first example of an (E/E)-NHC–CDI.
</summary>
<dc:date>2025-05-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>CoRE MOF DB: A curated experimental metal-organic framework database with machine-learned properties for integrated material-process screening</title>
<link href="https://hdl.handle.net/1721.1/162650" rel="alternate"/>
<author>
<name>Zhao, Guobin</name>
</author>
<author>
<name>Brabson, Logan M</name>
</author>
<author>
<name>Chheda, Saumil</name>
</author>
<author>
<name>Huang, Ju</name>
</author>
<author>
<name>Kim, Haewon</name>
</author>
<author>
<name>Liu, Kunhuan</name>
</author>
<author>
<name>Mochida, Kenji</name>
</author>
<author>
<name>Pham, Thang D</name>
</author>
<author>
<name>Prerna</name>
</author>
<author>
<name>Terrones, Gianmarco G</name>
</author>
<author>
<name>Yoon, Sunghyun</name>
</author>
<author>
<name>Zoubritzky, Lionel</name>
</author>
<author>
<name>Coudert, François-Xavier</name>
</author>
<author>
<name>Haranczyk, Maciej</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Moosavi, Seyed Mohamad</name>
</author>
<author>
<name>Sholl, David S</name>
</author>
<author>
<name>Siepmann, J Ilja</name>
</author>
<author>
<name>Snurr, Randall Q</name>
</author>
<author>
<name>Chung, Yongchul G</name>
</author>
<id>https://hdl.handle.net/1721.1/162650</id>
<updated>2025-09-12T05:30:07Z</updated>
<published>2025-06-04T00:00:00Z</published>
<summary type="text">CoRE MOF DB: A curated experimental metal-organic framework database with machine-learned properties for integrated material-process screening
Zhao, Guobin; Brabson, Logan M; Chheda, Saumil; Huang, Ju; Kim, Haewon; Liu, Kunhuan; Mochida, Kenji; Pham, Thang D; Prerna; Terrones, Gianmarco G; Yoon, Sunghyun; Zoubritzky, Lionel; Coudert, François-Xavier; Haranczyk, Maciej; Kulik, Heather J; Moosavi, Seyed Mohamad; Sholl, David S; Siepmann, J Ilja; Snurr, Randall Q; Chung, Yongchul G
We present an updated version of the Computation-Ready, Experimental (CoRE) Metal-Organic Framework (MOF) database, which includes a curated set of computation-ready MOF crystal structures designed for high-throughput computational materials discovery. Data collection and curation procedures were improved from the previous version to enable more frequent updates in the future. Machine-learning-predicted properties, such as stability metrics and heat capacities, are included in the dataset to streamline screening activities. An updated version of MOFid was developed to provide detailed information on metal nodes, organic linkers, and topologies of an MOF structure. DDEC6 partial atomic charges of MOFs were assigned based on a machine-learning model. Gibbs ensemble Monte Carlo simulations were used to classify the hydrophobicity of MOFs. The finalized dataset was subsequently used to perform integrated material-process screening for various carbon-capture conditions using high-fidelity temperature-swing adsorption (TSA) simulations. Our workflow identified multiple MOF candidates that are predicted to outperform CALF-20 for these applications.
</summary>
<dc:date>2025-06-04T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Scalable Learning-Based Optical Restoration</title>
<link href="https://hdl.handle.net/1721.1/162649" rel="alternate"/>
<author>
<name>Huang, Siyong</name>
</author>
<author>
<name>Song, Qingyu</name>
</author>
<author>
<name>Yu, Kexin</name>
</author>
<author>
<name>Wang, Zhaoning</name>
</author>
<author>
<name>Zhong, Zhizhen</name>
</author>
<author>
<name>Xiang, Qiao</name>
</author>
<author>
<name>Shu, Jiwu</name>
</author>
<id>https://hdl.handle.net/1721.1/162649</id>
<updated>2026-03-08T03:24:29Z</updated>
<published>2025-08-06T00:00:00Z</published>
<summary type="text">Toward Scalable Learning-Based Optical Restoration
Huang, Siyong; Song, Qingyu; Yu, Kexin; Wang, Zhaoning; Zhong, Zhizhen; Xiang, Qiao; Shu, Jiwu
The increasing scale and dynamic nature of modern optical networks present significant challenges to the scalability and adaptability of fault recovery. Existing state-of-the-art (SOTA) optical restoration methods rely primarily on offline pre-computation for each fault scenario, followed by online traffic reallocation. Their scalability to large network topologies is limited by the reliance on traditional solvers and imprecise modeling of potential faults.&#13;
This paper proposes LBOR, an optical restoration system built on multi-agent reinforcement learning (MARL) and integrated with a traffic allocation framework. We introduce a sequential restoration workflow for each failed IP link, employing two agents dedicated to path selection and wavelength assignment, respectively. In addition, we develop a randomized assignment ordering strategy to mitigate premature convergence to local optima and an action masking mechanism to prune the MARL search space. Experiments conducted on a large topology with 70 nodes indicate that LBOR achieves up to a 1000 × speedup compared to the SOTA approach, with only a slight reduction in allocation precision.
APNET 2025, Shanghai, China
</summary>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Words: An Experimental Study of Signaling in Crowdfunding</title>
<link href="https://hdl.handle.net/1721.1/162648" rel="alternate"/>
<author>
<name>Dambanemuya, Henry</name>
</author>
<author>
<name>Choi, Eunseo</name>
</author>
<author>
<name>Gergle, Darren</name>
</author>
<author>
<name>Horvát, Emőke-Ágnes</name>
</author>
<id>https://hdl.handle.net/1721.1/162648</id>
<updated>2026-03-08T03:24:37Z</updated>
<published>2025-06-14T00:00:00Z</published>
<summary type="text">Beyond Words: An Experimental Study of Signaling in Crowdfunding
Dambanemuya, Henry; Choi, Eunseo; Gergle, Darren; Horvát, Emőke-Ágnes
Increasingly, crowdfunding is transforming financing for many people worldwide. Yet we know relatively little about how, why, and when funding outcomes are impacted by signaling between funders. We conduct two studies of N=500 and N=750 participants involved in crowdfunding to investigate the effect of crowd signals, i.e., certain characteristics deduced from the amounts and timing of contributions, on the decision to fund. In our first study, we find that, under a variety of conditions, contributions of heterogeneous amounts arriving at varying time intervals are significantly more likely to be selected than homogeneous contribution amounts and times. The impact of signaling is strongest among participants who are susceptible to social influence. The effect is remarkably general across different project types, fundraising goals, participant interest in the projects, and participants' altruistic attitudes. Our second study using less strict controls indicates that the role of crowd signals in decision-making is typically unrecognized by participants. Our results underscore the fundamental nature of social signaling in crowdfunding. They highlight the importance of designing around these crowd signals and inform user strategies both on the project creator and funder side.
</summary>
<dc:date>2025-06-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Runtime Bounds for a Coevolutionary Algorithm on Classes of Potential Games</title>
<link href="https://hdl.handle.net/1721.1/162647" rel="alternate"/>
<author>
<name>Hevia Fajardo, Mario Alejandro</name>
</author>
<author>
<name>Toutouh, Jamal</name>
</author>
<author>
<name>Hemberg, Erik</name>
</author>
<author>
<name>O'Reilly, Una-May</name>
</author>
<author>
<name>Lehre, Per Kristian</name>
</author>
<id>https://hdl.handle.net/1721.1/162647</id>
<updated>2026-03-08T03:24:28Z</updated>
<summary type="text">Runtime Bounds for a Coevolutionary Algorithm on Classes of Potential Games
Hevia Fajardo, Mario Alejandro; Toutouh, Jamal; Hemberg, Erik; O'Reilly, Una-May; Lehre, Per Kristian
Coevolutionary algorithms are a family of black-box optimisation algorithms with many applications in game theory. We study a coevolutionary algorithm on an important class of games in game theory: potential games. In these games, a real-valued function defined over the entire strategy space encapsulates the strategic choices of all players collectively. We present the first theoretical analysis of a coevolutionary algorithm on potential games, showing a runtime guarantee that holds for all exact potential games, some weighted and ordinal potential games, and certain non-potential games. Using this result, we show a polynomial runtime on singleton congestion games. Furthermore, we show that there exist games for which coevolutionary algorithms find Nash equilibria exponentially faster than best or better response dynamics, and games for which coevolutionary algorithms find better Nash equilibria as well. Finally, we conduct experimental evaluations showing that our algorithm can outperform widely used algorithms, such as better response on random instances of singleton congestion games, as well as fictitious play, counterfactual regret minimisation (CFR), and external sampling CFR on dynamic routing games.
FOGA ’25, Leiden, Netherlands
</summary>
</entry>
<entry>
<title>MeshTorrent: A Community-Driven P2P System for AI-Generated 3D Model Creation and Distribution</title>
<link href="https://hdl.handle.net/1721.1/162646" rel="alternate"/>
<author>
<name>Lewis, Ryan Hardesty</name>
</author>
<id>https://hdl.handle.net/1721.1/162646</id>
<updated>2026-03-08T03:24:27Z</updated>
<published>2025-08-10T00:00:00Z</published>
<summary type="text">MeshTorrent: A Community-Driven P2P System for AI-Generated 3D Model Creation and Distribution
Lewis, Ryan Hardesty
MeshTorrent is a peer-to-peer platform for automated 3D content creation and exchange, inspired by BitTorrent-style file sharing. By merging AI-based text-to-3D generation with swarm-based distribution, MeshTorrent harnesses the combined bandwidth and storage resources of its users, enabling scalable and decentralized sharing of 3D assets. This paper describes the core design of MeshTorrent, including an AI workflow for generating fresh .glb files, metadata management via a distributed hash table, partial previews for quick inspection, and specialized extensions for 2D sprites (SpriteTorrent) and rigged character models (RigTorrent). Preliminary tests show faster content download times than single-host alternatives, reduced server costs, and robust resilience to network churn, advancing an open ecosystem for collaborative 3D model exchange.
SIGGRAPH Labs ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-08-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graphics4Science: Computer Graphics for Scientific Impacts</title>
<link href="https://hdl.handle.net/1721.1/162645" rel="alternate"/>
<author>
<name>Chen, Peter Yichen</name>
</author>
<author>
<name>Guo, Minghao</name>
</author>
<author>
<name>Pfister, Hanspeter</name>
</author>
<author>
<name>Lin, Ming</name>
</author>
<author>
<name>Freeman, William</name>
</author>
<author>
<name>Huang, Qixing</name>
</author>
<author>
<name>Shen, Han-Wei</name>
</author>
<author>
<name>Matusik, Wojciech</name>
</author>
<id>https://hdl.handle.net/1721.1/162645</id>
<updated>2026-03-08T03:24:26Z</updated>
<published>2025-08-14T00:00:00Z</published>
<summary type="text">Graphics4Science: Computer Graphics for Scientific Impacts
Chen, Peter Yichen; Guo, Minghao; Pfister, Hanspeter; Lin, Ming; Freeman, William; Huang, Qixing; Shen, Han-Wei; Matusik, Wojciech
Computer graphics, often associated with films, games, and visual effects, has long been a powerful tool for addressing scientific challenges—from its origins in 3D visualization for medical imaging to its role in modern computational modeling and simulation. This course explores the deep and evolving relationship between computer graphics and science, highlighting past achievements, ongoing contributions, and open questions that remain. We show how core methods, such as geometric reasoning and physical modeling, provide inductive biases that help address challenges in both fields, especially in data-scarce settings. To that end, we aim to reframe graphics as a modeling language for science by bridging vocabulary gaps between the two communities. Designed for both newcomers and experts, Graphics4Science invites the graphics community to engage with science, tackle high-impact problems where graphics expertise can make a difference, and contribute to the future of scientific discovery. Additional details are available on the course website: https://graphics4science.github.io.
SIGGRAPH Courses ’25, Vancouver, BC, Canada
</summary>
<dc:date>2025-08-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mobile Underwater Backscatter Networking</title>
<link href="https://hdl.handle.net/1721.1/162642" rel="alternate"/>
<author>
<name>Wang, Purui</name>
</author>
<author>
<name>Afzal, Sayed Saad</name>
</author>
<author>
<name>Adib, Fadel</name>
</author>
<id>https://hdl.handle.net/1721.1/162642</id>
<updated>2025-09-11T03:09:31Z</updated>
<published>2025-08-27T00:00:00Z</published>
<summary type="text">Mobile Underwater Backscatter Networking
Wang, Purui; Afzal, Sayed Saad; Adib, Fadel
Underwater backscatter is a promising technology for ultra-low-power underwater networking, but existing systems break down in mobile scenarios. This paper presents EchoRider, the first system to enable reliable underwater backscatter networking under mobility.&#13;
EchoRider introduces three key components. First, it incorporates a robust and energy-efficient downlink architecture that uses chirp-modulated transmissions at the reader and a sub-Nyquist chirp decoder on backscatter nodes—bringing the resilience of LoRa-style signaling to underwater backscatter while remaining ultra-low-power. Second, it introduces a NACK-based full-duplex retransmission protocol, enabling efficient, reliable packet delivery. Third, it implements a Doppler-resilient uplink decoding pipeline that includes adaptive equalization, polar coding, and dynamic retraining to combat channel variation.&#13;
We built a full EchoRider prototype and evaluated it across over 1,200 real-world mobile experiments. EchoRider improves bit error rate by over 125× compared to a state-of-the-art baseline and maintains underwater goodput of 0.8 kbps at speeds up to 2.91 knots. In contrast, the baseline fails at speeds as low as 0.17 knots. Finally, we demonstrate EchoRider in end-to-end deployments involving mobile drones and sensor nodes, showing its effectiveness in practical underwater networked applications.
SIGCOMM ’25, Coimbra, Portugal
</summary>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>PreTE: Traffic Engineering with Predictive Failures</title>
<link href="https://hdl.handle.net/1721.1/162641" rel="alternate"/>
<author>
<name>Miao, Congcong</name>
</author>
<author>
<name>Zhong, Zhizhen</name>
</author>
<author>
<name>Zhao, Yiren</name>
</author>
<author>
<name>Gupta, Arpit</name>
</author>
<author>
<name>Zhang, Ying</name>
</author>
<author>
<name>Li, Sirui</name>
</author>
<author>
<name>He, Zekun</name>
</author>
<author>
<name>Zou, Xianneng</name>
</author>
<author>
<name>Wang, Jilong</name>
</author>
<id>https://hdl.handle.net/1721.1/162641</id>
<updated>2025-09-11T03:09:33Z</updated>
<published>2025-08-27T00:00:00Z</published>
<summary type="text">PreTE: Traffic Engineering with Predictive Failures
Miao, Congcong; Zhong, Zhizhen; Zhao, Yiren; Gupta, Arpit; Zhang, Ying; Li, Sirui; He, Zekun; Zou, Xianneng; Wang, Jilong
Fiber links in wide-area networks (WANs) are exposed to complicated environments and hence are vulnerable to failures like fiber cuts. The conventional approach of using static probabilistic failures falls short in fiber-cut scenarios because these fiber cuts are rare but disruptive, making it difficult for network operators to balance network utilization and availability in WAN traffic engineering. Our large-scale measurements of per-second optical-layer data reveal that the fiber's failure probability increases by several orders of magnitude when experiencing a rare and ephemeral degradation state. Therefore, we present a novel traffic engineering (TE) system called PreTE to factor in the dynamic fiber cut probabilities directly into TE systems. At the core of the PreTE system, fiber degradation facilitates failure predictions and traffic tunnels to be proactively updated, followed by traffic allocation optimizations among updated tunnels. We evaluate PreTE using a production-level WAN testbed and large-scale simulations. The testbed evaluation quantifies PreTE's runtime to demonstrate the feasibility to implement in large-scale WANs. Our large-scale simulation results show that PreTE can support up to 2× more demand at the same level of availability as compared to existing TE schemes.
SIGCOMM ’25, September 8–11, 2025, Coimbra, Portugal
</summary>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Carbon- and Precedence-Aware Scheduling for Data Processing Clusters</title>
<link href="https://hdl.handle.net/1721.1/162640" rel="alternate"/>
<author>
<name>Lechowicz, Adam</name>
</author>
<author>
<name>Shenoy, Rohan</name>
</author>
<author>
<name>Bashir, Noman</name>
</author>
<author>
<name>Hajiesmaili, Mohammad</name>
</author>
<author>
<name>Wierman, Adam</name>
</author>
<author>
<name>Delimitrou, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/162640</id>
<updated>2025-09-11T03:09:23Z</updated>
<published>2025-08-27T00:00:00Z</published>
<summary type="text">Carbon- and Precedence-Aware Scheduling for Data Processing Clusters
Lechowicz, Adam; Shenoy, Rohan; Bashir, Noman; Hajiesmaili, Mohammad; Wierman, Adam; Delimitrou, Christina
As large-scale data processing workloads continue to grow, their carbon footprint raises concerns. Prior research on carbon-aware schedulers has focused on shifting computation to align with the availability of low-carbon energy, but these approaches assume that each task can be executed independently. In contrast, data processing jobs have precedence constraints that complicate decisions, since delaying an upstream "bottleneck" task to a low-carbon period also blocks downstream tasks, impacting makespan. In this paper, we show that carbon-aware scheduling for data processing benefits from knowledge of both time-varying carbon and precedence constraints. Our main contribution is PCAPS, a carbon-aware scheduler that builds on state-of-the-art scoring or probability-based techniques; in doing so, it explicitly relates the structural importance of each task to the time-varying characteristics of carbon intensity. To illustrate gains due to fine-grained task-level scheduling, we also study CAP, a wrapper for any carbon-agnostic scheduler that generalizes the provisioning ideas of PCAPS. Both techniques allow a user-configurable priority between carbon and makespan, and we give basic analytic results to relate the trade-off between these objectives. Our prototype on a 100-node Kubernetes cluster shows that a moderate configuration of PCAPS reduces carbon footprint by up to 32.9% without significantly impacting total efficiency.
SIGCOMM ’25, Coimbra, Portugal
</summary>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>MixNet: A Runtime Reconfigurable Optical-Electrical Fabric for Distributed Mixture-of-Experts Training</title>
<link href="https://hdl.handle.net/1721.1/162639" rel="alternate"/>
<author>
<name>Liao, Xudong</name>
</author>
<author>
<name>Sun, Yijun</name>
</author>
<author>
<name>Tian, Han</name>
</author>
<author>
<name>Wan, Xinchen</name>
</author>
<author>
<name>Jin, Yilun</name>
</author>
<author>
<name>Wang, Zilong</name>
</author>
<author>
<name>Ren, Zhenghang</name>
</author>
<author>
<name>Huang, Xinyang</name>
</author>
<author>
<name>Li, Wenxue</name>
</author>
<author>
<name>Tse, Kin Fai</name>
</author>
<author>
<name>Zhong, Zhizhen</name>
</author>
<author>
<name>Liu, Guyue</name>
</author>
<author>
<name>Zhang, Ying</name>
</author>
<author>
<name>Ye, Xiaofeng</name>
</author>
<author>
<name>Zhang, Yiming</name>
</author>
<author>
<name>Chen, Kai</name>
</author>
<id>https://hdl.handle.net/1721.1/162639</id>
<updated>2025-09-11T03:09:28Z</updated>
<published>2025-08-27T00:00:00Z</published>
<summary type="text">MixNet: A Runtime Reconfigurable Optical-Electrical Fabric for Distributed Mixture-of-Experts Training
Liao, Xudong; Sun, Yijun; Tian, Han; Wan, Xinchen; Jin, Yilun; Wang, Zilong; Ren, Zhenghang; Huang, Xinyang; Li, Wenxue; Tse, Kin Fai; Zhong, Zhizhen; Liu, Guyue; Zhang, Ying; Ye, Xiaofeng; Zhang, Yiming; Chen, Kai
Mixture-of-Experts (MoE) models outperform conventional models by selectively activating different subnets, named experts, on a per-token basis. This gated computation generates dynamic communications that cannot be determined beforehand, challenging the existing GPU interconnects that remain static during distributed training. In this paper, we advocate for a first-of-its-kind system, called MixNet, that unlocks topology reconfiguration during distributed MoE training. Towards this vision, we first perform a production measurement study and show that the MoE dynamic communication pattern has strong locality, alleviating the need for global reconfiguration. Based on this, we design and implement a regionally reconfigurable high-bandwidth domain that augments existing electrical interconnects using optical circuit switching (OCS), achieving scalability while maintaining rapid adaptability. We build a fully functional MixNet prototype with commodity hardware and a customized collective communication runtime. Our prototype trains state-of-the-art MoE models with in-training topology reconfiguration across 32 A100 GPUs. Large-scale packet-level simulations show that MixNet achieves performance comparable to a non-blocking fat-tree fabric while boosting the networking cost efficiency (e.g., performance per dollar) of four representative MoE models by 1.2×–1.5× and 1.9×–2.3× at 100 Gbps and 400 Gbps link bandwidths, respectively.
SIGCOMM ’25, Coimbra, Portugal
</summary>
<dc:date>2025-08-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Political Prediction and the Wisdom of Crowds</title>
<link href="https://hdl.handle.net/1721.1/162638" rel="alternate"/>
<author>
<name>Sethi, Rajiv</name>
</author>
<author>
<name>Seager, Julie</name>
</author>
<author>
<name>Morstatter, Fred</name>
</author>
<author>
<name>Benjamin, Daniel</name>
</author>
<author>
<name>Hammell, Anna</name>
</author>
<author>
<name>Liu, Tianshuo</name>
</author>
<author>
<name>Patel, Sachi</name>
</author>
<author>
<name>Subramanian, Ramya</name>
</author>
<id>https://hdl.handle.net/1721.1/162638</id>
<updated>2025-09-11T03:09:26Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">Political Prediction and the Wisdom of Crowds
Sethi, Rajiv; Seager, Julie; Morstatter, Fred; Benjamin, Daniel; Hammell, Anna; Liu, Tianshuo; Patel, Sachi; Subramanian, Ramya
We evaluate the relative forecasting performance of three statistical models and a prediction market for several outcomes decided during the November 2024 elections in the United States—the winner of the presidency, the popular vote, fifteen competitive states in the Electoral College, eleven Senate races, and thirteen House races. We argue that conventional measures of predictive accuracy such as the average daily Brier score reward modeling flaws that result in predictable reversals, as long as such movements are in a direction that is aligned with the eventual outcome. Instead, we adopt a test based on the idea that the strength of a model can be measured by the profitability of a trader who believes its forecasts and bets on the market based on this belief. The results of this test depend on the risk preferences with which the trader is endowed, but we show that within a large parameter range this does not lead to ranking reversals. We find that all models failed to beat the market in the headline contract but some did so convincingly in contracts referencing less visible races.
CI 2025, San Diego, CA, USA
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolutionary and Coevolutionary Multi-Agent Design Choices and Dynamics</title>
<link href="https://hdl.handle.net/1721.1/162637" rel="alternate"/>
<author>
<name>Hemberg, Erik</name>
</author>
<author>
<name>Moskal, Stephen</name>
</author>
<author>
<name>O'Reilly, Una-May</name>
</author>
<author>
<name>Liu</name>
</author>
<author>
<name>Fuller</name>
</author>
<id>https://hdl.handle.net/1721.1/162637</id>
<updated>2025-09-11T03:09:14Z</updated>
<published>2025-08-11T00:00:00Z</published>
<summary type="text">Evolutionary and Coevolutionary Multi-Agent Design Choices and Dynamics
Hemberg, Erik; Moskal, Stephen; O'Reilly, Una-May; Liu; Fuller
We investigate two representation alternatives for the controllers of teams of cyber agents. We combine these controller representations with different evolutionary algorithms, one of which introduces a novel LLM-supported mutation operator. Using a cyber security scenario, we evaluate agent learning when one side is trained to compete against a side that does not evolve and when two sides coevolve with each other. This allows us to quantify the relative merits and tradeoffs of representation and algorithm combinations in terms of team performance. The scenario also allows us to compare the performance impact and dynamics of coevolution versus evolution under different combinations. Across the algorithms and representations, we observe that coevolution reduces the performance highs and lows of both sides while it induces fluctuations on both sides. In contrast, when only one side is optimized, performance peaks are higher and more sustained than when both sides are optimized with coevolution.
GECCO ’25 Companion, July 14–18, 2025, Malaga, Spain
</summary>
<dc:date>2025-08-11T00:00:00Z</dc:date>
</entry>
<entry>
<title>MEDS: Building Models and Tools in a Reproducible Health AI Ecosystem</title>
<link href="https://hdl.handle.net/1721.1/162636" rel="alternate"/>
<author>
<name>McDermott, Matthew</name>
</author>
<author>
<name>Xu, Justin</name>
</author>
<author>
<name>Bergamaschi, Teya</name>
</author>
<author>
<name>Jeong, Hyewon</name>
</author>
<author>
<name>Lee, Simon</name>
</author>
<author>
<name>Oufattole, Nassim</name>
</author>
<author>
<name>Rockenschaub, Patrick</name>
</author>
<author>
<name>Steinberg, Ethan</name>
</author>
<author>
<name>Sun, Jimeng</name>
</author>
<author>
<name>Water, Robin</name>
</author>
<author>
<name>Wornow, Michael</name>
</author>
<author>
<name>Wu, John</name>
</author>
<author>
<name>Wu, Zhenbang</name>
</author>
<author>
<name>Stankevičiūtė, Kamilė</name>
</author>
<id>https://hdl.handle.net/1721.1/162636</id>
<updated>2025-09-11T03:09:29Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">MEDS: Building Models and Tools in a Reproducible Health AI Ecosystem
McDermott, Matthew; Xu, Justin; Bergamaschi, Teya; Jeong, Hyewon; Lee, Simon; Oufattole, Nassim; Rockenschaub, Patrick; Steinberg, Ethan; Sun, Jimeng; Water, Robin; Wornow, Michael; Wu, John; Wu, Zhenbang; Stankevičiūtė, Kamilė
Health AI suffers from a systemic reproducibility crisis that irreparably hinders research in this space across academia and industry. To combat this and empower researchers in the health AI space, we propose a comprehensive interactive tutorial introducing the ''Medical Event Data Standard'' (MEDS) and its growing open-source ecosystem. Working in MEDS allows you to more easily build AI models over public or private longitudinal EHR datasets and to readily benchmark existing, published models against contributions on local datasets and tasks. MEDS simplifies the construction of AI models on longitudinal Electronic Health Record (EHR) datasets and enables straightforward benchmarking against established models. Reflecting its growing adoption, MEDS is utilized at over 15 institutions across 8 countries, features 7+ open-source tools, supports 10+ published models, and provides publicly available Extract-Transform-Load (ETL) pipelines for major public EHR datasets. A KDD tutorial offering practical experience with MEDS will significantly enhance reproducibility and comparability in health AI research.&#13;
In this tutorial, we will teach attendees how to (1) transform datasets into the MEDS format, (2) pre-process MEDS data for modeling needs, (3) build highly effective, efficient AI models for diverse predictive tasks on their datasets, and (4) contribute their results to MEDS-DEV, a decentralized benchmark enabling robust evaluation against meaningful baselines. Participants will engage in collaborative, minimal-dependency Jupyter notebook exercises, guided through each step by structured instruction and practical coding sessions. Attendees will leave equipped with practical knowledge to build reproducible, state-of-the-art AI models within the MEDS ecosystem.
KDD ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Guide to Misinformation Detection Data and Evaluation</title>
<link href="https://hdl.handle.net/1721.1/162635" rel="alternate"/>
<author>
<name>Thibault, Camille</name>
</author>
<author>
<name>Tian, Jacob-Junqi</name>
</author>
<author>
<name>Péloquin-Skulski, Gabrielle</name>
</author>
<author>
<name>Curtis, Taylor</name>
</author>
<author>
<name>Zhou, James</name>
</author>
<author>
<name>Laflamme, Florence</name>
</author>
<author>
<name>Guan, Luke Yuxiang</name>
</author>
<author>
<name>Rabbany, Reihaneh</name>
</author>
<author>
<name>Godbout, Jean-François</name>
</author>
<author>
<name>Pelrine, Kellin</name>
</author>
<id>https://hdl.handle.net/1721.1/162635</id>
<updated>2025-09-11T03:09:24Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">A Guide to Misinformation Detection Data and Evaluation
Thibault, Camille; Tian, Jacob-Junqi; Péloquin-Skulski, Gabrielle; Curtis, Taylor; Zhou, James; Laflamme, Florence; Guan, Luke Yuxiang; Rabbany, Reihaneh; Godbout, Jean-François; Pelrine, Kellin
Misinformation is a complex societal issue, and mitigating solutions are difficult to create due to data deficiencies. To address this, we have curated the largest collection of (mis)information datasets in the literature, totaling 75. From these, we evaluated the quality of 36 datasets that consist of statements or claims, as well as the 9 datasets that consist of data in purely paragraph form. We assess these datasets to identify those with solid foundations for empirical work and those with flaws that could result in misleading and non-generalizable results, such as spurious correlations, or examples that are ambiguous or otherwise impossible to assess for veracity. We find the latter issue is particularly severe and affects most datasets in the literature. We further provide state-of-the-art baselines on all these datasets, but show that regardless of label quality, categorical labels may no longer give an accurate evaluation of detection model performance. Finally, we propose and highlight Evaluation Quality Assurance (EQA) as a tool to guide the field toward systemic solutions rather than inadvertently propagating issues in evaluation. Overall, this guide aims to provide a roadmap for higher quality data and better grounded evaluations, ultimately improving research in misinformation detection. All datasets and other artifacts are available at misinfo-datasets.complexdatalab.com. The extended paper, including the appendices, can be accessed via arXiv at arxiv.org/abs/2411.05060.
KDD ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>A KNOWLEDGE GRAPH IS ALL YOU NEED</title>
<link href="https://hdl.handle.net/1721.1/162634" rel="alternate"/>
<author>
<name>Streilen, William</name>
</author>
<author>
<name>Brooks, Nicholas</name>
</author>
<author>
<name>Burill, Daniel</name>
</author>
<author>
<name>Smith, Corey</name>
</author>
<id>https://hdl.handle.net/1721.1/162634</id>
<updated>2025-09-11T03:11:03Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">A KNOWLEDGE GRAPH IS ALL YOU NEED
Streilen, William; Brooks, Nicholas; Burill, Daniel; Smith, Corey
The Department of the Air Force (DAF) faces unique challenges in adopting Large Language Models (LLMs). Commercially available models often lack the domain-specific knowledge necessary to support airmen, as this information is not inherently embedded. To maintain a competitive edge, the integration of LLMs to improve efficiency and decision making is a critical priority. This presentation explores two innovative methodologies designed to better integrate domain-specific knowledge into language models and improve the discovery of relevant information. The first is EntiGraph Continuous Pretraining, which leverages continuous training to embed specialized knowledge into language models. The second is the GFM-RAG Graph RAG Framework, a novel approach to knowledge retrieval and synthesis that enhances model performance by improving multi-hop retrieval and complex information connections.
Through both quantitative and qualitative evaluations, we assess their impact on retrieval accuracy and response relevance. Our findings demonstrate the potential of these customized approaches to streamline information access, improve decision making, and better support the operational needs of the DAF.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>LLM-Based Entity Extraction for Cyber Threat Reports</title>
<link href="https://hdl.handle.net/1721.1/162632" rel="alternate"/>
<author>
<name>Alperin, Kenneth</name>
</author>
<author>
<name>de Silva, Alexis</name>
</author>
<id>https://hdl.handle.net/1721.1/162632</id>
<updated>2025-09-11T03:11:04Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">LLM-Based Entity Extraction for Cyber Threat Reports
Alperin, Kenneth; de Silva, Alexis
As the cyber threat landscape and the capabilities of advanced persistent threats continue to expand, applying cutting-edge technology to the domain of cyber intelligence is necessary for the United States Space Force to keep pace in the Great Power Competition. Cyber intelligence analysts spend an estimated 840 man-hours annually on the extraction and validation of relevant intelligence from cyber threat reports (CTRs). Named entity recognition (NER) is a natural language processing technique capable of automatically extracting and labeling all relevant information from a given text. Although not a novel idea, this paper aims to expand the current but limited research on the applications of NER to the domain of cyber intelligence. This study uses a new openly licensed dataset, AnnoCTR, to fine-tune a cybersecurity-specific, transformer-based model, CYBERT. The performance of the model is compared to the models from the existing literature. Although the results showed an F1 score of 0.733, a less optimal performance compared to previous models, more work remains to reduce the production time of intelligence analysis by half.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Democratizing Data: An Intelligent Querying System for Marine Corps Data</title>
<link href="https://hdl.handle.net/1721.1/162631" rel="alternate"/>
<author>
<name>Johnson, Lane</name>
</author>
<author>
<name>Nam, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/162631</id>
<updated>2025-09-11T03:11:08Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Democratizing Data: An Intelligent Querying System for Marine Corps Data
Johnson, Lane; Nam, Kevin
This research presents the development and implementation of a text-to-Structured Query Language (SQL) system tailored for Marine Corps logistics, capitalizing upon the proven capabilities of Large Language Models (LLMs). By fine-tuning an open-source LLM on a curated Global Combat Support System - Marine Corps supply and maintenance dataset, we demonstrate how non-technical users can intuitively interact with Marine Corps data through natural language queries, enhancing data accessibility and operational decision-making. Our approach assumes a resource-constrained environment, demonstrating that fine-tuning and deploying the model on a single NVIDIA A100 graphics processing unit (GPU) is not only feasible, but also highlights the potential for local or edge-based artificial intelligence (AI) solutions. We further identify the critical importance of high-quality, representative datasets and propose a hybrid approach combining prompt engineering with fine-tuning to improve performance. Our findings culminate in concrete recommendations for the Marine Corps regarding data governance, AI integration, and workforce development.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pixels to Places: Improving Zero-Shot Image Geolocalization using Prior Knowledge</title>
<link href="https://hdl.handle.net/1721.1/162630" rel="alternate"/>
<author>
<name>Cha, Miriam</name>
</author>
<author>
<name>Borg, Trent</name>
</author>
<id>https://hdl.handle.net/1721.1/162630</id>
<updated>2025-09-11T03:10:57Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Pixels to Places: Improving Zero-Shot Image Geolocalization using Prior Knowledge
Cha, Miriam; Borg, Trent
The ability to predict the geographic origin of a photo is critical for open-source investigation applications. However, image geolocalization is highly challenging due to the vast diversity of images captured worldwide. While vision transformer-based approaches have demonstrated success, even outperforming grandmasters in geolocation games like GeoGuessr, their performance does not generalize well to unseen locations. Prior methods rely solely on visual cues, neglecting broader contextual knowledge that image analysts typically employ. To bridge this gap, our research integrates the contextual understanding of geographic regions that imagery analysts possess into the geolocalization model. Specifically, we develop a variant of StreetCLIP, which embeds CLIP within geolocalization tasks and facilitates the incorporation of user-supplied prior knowledge such as continental or national boundaries. Our results on the IM2GPS3K benchmark dataset demonstrate a 10.66% improvement in regional prediction (within 200 km) and a 15.27% improvement in country-level prediction (within 750 km) over baseline models. Our results suggest that human-provided supervision can enhance image geolocalization accuracy, highlighting the potential of interactive systems where human expertise and AI work collaboratively to refine predictions.
Index Terms: image geolocalization, CLIP, human-machine teaming, vision transformers
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved Automatic Electronic Intelligence Collection System for Internal and External Forward Fusion and Collaborative Geolocation of Adversary Emitters</title>
<link href="https://hdl.handle.net/1721.1/162629" rel="alternate"/>
<author>
<name>Botero, Joey</name>
</author>
<author>
<name>Benge, Arianne</name>
</author>
<author>
<name>Heisey, Curtis</name>
</author>
<id>https://hdl.handle.net/1721.1/162629</id>
<updated>2025-09-11T03:11:09Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Improved Automatic Electronic Intelligence Collection System for Internal and External Forward Fusion and Collaborative Geolocation of Adversary Emitters
Botero, Joey; Benge, Arianne; Heisey, Curtis
With the 2022 National Defense Strategy shifting focus from counterinsurgency operations to near-peer adversaries, airborne ISR platforms within the USAF and DoD must be improved for effectiveness in a near-peer conflict. They need to be able to operate quickly and effectively in contested environments with longer-range threats, act as a forward-edge intelligence node for blue forces, and provide DoD Research and Development efforts with cutting-edge data regarding new adversary signals and technology.
To aid in tackling these challenges, this project introduces a Machine Learning (ML)-driven approach that revamps the Automatic Electronic Intelligence Collection System (ACS) on U.S. Airborne ISR platforms in four ways. First, by providing nodal analysis to the user in real time, automatically aggregating existing data across the aircraft for decreased operator cognitive load. Second, by augmenting internal aircraft database information with external intelligence database information to increase confidence in targeting. Third, by providing automatic signal anomaly detection utilizing a support vector machine algorithm that cues operators to potential signals of interest based on previous activity and pattern-of-life prediction. Lastly, by providing better surface-versus-air identification through utilization of cone angle to help operators with faster threat warning and situational awareness of the environment.
Findings include Support Vector Machines being the most effective tested binary classifier for single-signal anomaly detection at 84% AUC, and a rule-based method of averages successfully classifying 1089 surface-versus-air ELINT samples with a success rate of 89%, compared to other tested methods such as Gaussian Mixture Models at 68% and K-Nearest Neighbor at 66%.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>Artificial Intelligence for Derivative Security Classification: Applications to DoD</title>
<link href="https://hdl.handle.net/1721.1/162628" rel="alternate"/>
<author>
<name>Gelbard, Andrew</name>
</author>
<author>
<name>Hamilton, Lei</name>
</author>
<id>https://hdl.handle.net/1721.1/162628</id>
<updated>2025-09-11T03:11:08Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">Artificial Intelligence for Derivative Security Classification: Applications to DoD
Gelbard, Andrew; Hamilton, Lei
The accurate classification of government documents according to their sensitivity (e.g., UNCLASSIFIED, SECRET, TOP SECRET) is critical for national security, yet historically has relied on time-intensive manual review. The current manual classification process consumes millions of labor hours annually within the U.S. government, significantly diverting skilled personnel from essential analytical tasks. This research explores automating this security classification task using recently available declassified materials from the DISC dataset [1], addressing practical challenges such as noisy Optical Character Recognition (OCR) output, imbalanced data distributions, and potential leakage of explicit classification markers within document text. This dataset contains declassified government documents sourced from the Digital National Security Archive, providing authentic textual examples representative of actual classification scenarios. We evaluate both traditional machine learning approaches and advanced transformer-based language models to classify documents accurately across multiple sensitivity levels. Our results highlight that transformer-based models, particularly DeBERTa, effectively improve identification of the minority but critical TOP SECRET class, achieving recall over 70% and an overall balanced performance (macro F1 score of 0.75), while traditional methods exhibit similar overall accuracy but struggle with minority class recall. Despite promising findings, we caution that conclusions drawn here remain constrained by limited training data size and inherent uncertainties in human-labeled documents. We emphasize the need for larger, rigorously preprocessed datasets and suggest future research integrating authoritative classification guidelines directly into model training, potentially via retrieval-augmented methods. This work thus contributes a foundational, reproducible framework that demonstrates significant potential for machine-assisted security classification, guiding future research and practical applications in the information security domain.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Area-of-Measurable-Performance (AOMP) Method Standard as a Foundational Archetype for the Cyclical Enhancement of the State of the Art Joint Simulation Environment (JSE) Technology</title>
<link href="https://hdl.handle.net/1721.1/162627" rel="alternate"/>
<author>
<name>Li, William</name>
</author>
<author>
<name>Johnson, Kevin</name>
</author>
<author>
<name>Picardo, Christopher</name>
</author>
<author>
<name>Ambion, Francis</name>
</author>
<id>https://hdl.handle.net/1721.1/162627</id>
<updated>2025-09-11T03:11:05Z</updated>
<published>2025-09-10T00:00:00Z</published>
<summary type="text">The Area-of-Measurable-Performance (AOMP) Method Standard as a Foundational Archetype for the Cyclical Enhancement of the State of the Art Joint Simulation Environment (JSE) Technology
Li, William; Johnson, Kevin; Picardo, Christopher; Ambion, Francis
The Department of the Air Force (DAF) envisions the need to incorporate Artificial Intelligence and Machine Learning (AI/ML) models into the novel systems it develops in order to meet its primary goal of maintaining total air superiority [2]. There is currently a need for a standard process for designing successful AI/ML models capable of enhancing these novel systems. In this white paper we introduce the Area of Measurable Performance (AOMP) Method Standard and apply it to the Joint Simulation Environment (JSE) Technology, a state-of-the-art system of systems under test, to identify AOMPs and their modular requirements [3] and metrics. These lead to the accurate characterization of modular AI/ML models through a process that offers a high degree of trust and reuse, resulting in a method standard that organically promotes the development of successful modular AI/ML models for improving the performance of the JSE technology or other systems of systems [4] under test.
</summary>
<dc:date>2025-09-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>CH−π interactions confer orientational flexibility in protein–carbohydrate binding sites</title>
<link href="https://hdl.handle.net/1721.1/162626" rel="alternate"/>
<author>
<name>Keys, Allison M</name>
</author>
<author>
<name>Kastner, David W</name>
</author>
<author>
<name>Kiessling, Laura L</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162626</id>
<updated>2025-09-10T07:26:14Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">CH−π interactions confer orientational flexibility in protein–carbohydrate binding sites
Keys, Allison M; Kastner, David W; Kiessling, Laura L; Kulik, Heather J
Protein-carbohydrate binding plays an essential role in biological processes including cellular recognition and immune signaling. However, glycans are hydrophilic with limited hydrophobic surfaces, a challenge for selective recognition by proteins. CH-π stacking interactions are pervasive in protein-carbohydrate binding sites and have emerged as critical drivers of protein-carbohydrate recognition. These interactions are highly favorable and have a broad orientational landscape. However, it is unknown how the orientations of CH-π stacking interactions are influenced by the protein environment; their functional interplay with hydrogen bonds in protein-carbohydrate binding is also unclear. Here, we employ well-tempered metadynamics simulations to obtain binding free energy landscapes for a set of protein-β-D-galactoside complexes with CH-π stacking interactions. Our data show that the favored orientation of a CH-π stacking interaction is controlled by the location of hydrogen bonds in the protein binding site. Complexes with extended carbohydrate ligands that form additional hydrogen bonds have more specific orientational dependencies, while protein variant complexes with fewer hydrogen bonds have broader free energy landscapes with glycan ligands adopting multiple CH-π stacking interaction orientations. We also show that forming multiple CH-π stacking interactions facilitates the dynamics necessary for the translocation of oligosaccharide ligands within a processive enzyme. Our findings underscore the cooperative nature of hydrogen bonds and CH-π stacking interactions, demonstrating that tuning the number and positions of these interactions through evolution or protein engineering can alter ligand recognition or support ligand movement.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Throughput Discovery of Ferrocene Mechanophores with Enhanced Reactivity and Network Toughening</title>
<link href="https://hdl.handle.net/1721.1/162625" rel="alternate"/>
<author>
<name>Kevlishvili, Ilia</name>
</author>
<author>
<name>Vakil, Jafer</name>
</author>
<author>
<name>Kastner, David W</name>
</author>
<author>
<name>Huang, Xiao</name>
</author>
<author>
<name>Craig, Stephen L</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<id>https://hdl.handle.net/1721.1/162625</id>
<updated>2025-09-10T07:26:16Z</updated>
<published>2025-08-01T00:00:00Z</published>
<summary type="text">High-Throughput Discovery of Ferrocene Mechanophores with Enhanced Reactivity and Network Toughening
Kevlishvili, Ilia; Vakil, Jafer; Kastner, David W; Huang, Xiao; Craig, Stephen L; Kulik, Heather J
Mechanophores are molecules that undergo chemical changes in response to mechanical force, offering unique opportunities in chemistry, materials science, and drug delivery. However, many potential mechanophores remain unexplored. For example, ferrocenes are attractive targets as mechanophores due to their combination of high thermal stability and mechanochemical lability. However, the mechanochemical potential of ferrocene derivatives remains dramatically underexplored despite the synthesis of thousands of structurally diverse complexes. Herein, we report the computational, machine learning guided discovery of synthesizable ferrocene mechanophores. We identify over one hundred potential target ferrocene mechanophores with wide-ranging mechanochemical activity and use data-driven computational screening to identify a select number of promising complexes. We highlight design principles to alter their mechanochemical activation, including regio-controlled transition state stabilization through bulky groups and a change in mechanism through noncovalent ligand–ligand interactions. The computational screening is validated experimentally both at the polymer strand level through sonication experiments and at the network level, where a computationally discovered ferrocene mechanophore cross-linker leads to greater than 4-fold enhancement in material tearing energy. This work establishes a generalizable framework for the high-throughput discovery and rational design of mechanophores and offers insights into structure–activity relationships in mechanically responsive materials.
</summary>
<dc:date>2025-08-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MOSS: Multi-Objective Optimization for Stable Rule Sets</title>
<link href="https://hdl.handle.net/1721.1/162624" rel="alternate"/>
<author>
<name>Liu, Brian</name>
</author>
<author>
<name>Mazumder, Rahul</name>
</author>
<id>https://hdl.handle.net/1721.1/162624</id>
<updated>2025-09-10T07:26:06Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">MOSS: Multi-Objective Optimization for Stable Rule Sets
Liu, Brian; Mazumder, Rahul
We present MOSS, a multi-objective optimization framework for constructing stable sets of decision rules. MOSS incorporates three important criteria for interpretability: sparsity, accuracy, and stability, into a single multi-objective optimization framework. Importantly, MOSS allows a practitioner to rapidly evaluate the trade-off between accuracy and stability in sparse rule sets in order to select an appropriate model. We develop a specialized cutting plane algorithm in our framework to rapidly compute the Pareto frontier between these two objectives, and our algorithm scales to problem instances beyond the capabilities of commercial optimization solvers. Our experiments show that MOSS outperforms state-of-the-art rule ensembles in terms of both predictive performance and stability.
KDD ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>FlanS: A Foundation Model for Free-Form Language-based Segmentation in Medical Images</title>
<link href="https://hdl.handle.net/1721.1/162623" rel="alternate"/>
<author>
<name>Da, Longchao</name>
</author>
<author>
<name>Wang, Rui</name>
</author>
<author>
<name>Xu, Xiaojian</name>
</author>
<author>
<name>Bhatia, Parminder</name>
</author>
<author>
<name>Kass-Hout, Taha</name>
</author>
<author>
<name>Wei, Hua</name>
</author>
<author>
<name>Xiao, Cao</name>
</author>
<id>https://hdl.handle.net/1721.1/162623</id>
<updated>2025-09-10T07:25:59Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">FlanS: A Foundation Model for Free-Form Language-based Segmentation in Medical Images
Da, Longchao; Wang, Rui; Xu, Xiaojian; Bhatia, Parminder; Kass-Hout, Taha; Wei, Hua; Xiao, Cao
Medical imaging is crucial for diagnosing a patient's health condition, and accurate segmentation of these images is essential for isolating regions of interest to ensure precise diagnosis and treatment planning. Existing methods primarily rely on bounding boxes or point-based prompts, while few have explored text-related prompts, despite clinicians often describing their observations and instructions in natural language. To address this gap, we first propose a RAG-based free-form text prompt generator that leverages the domain corpus to generate diverse and realistic descriptions. Then, we introduce FLanS, a novel medical image segmentation model that handles various free-form text prompts, including professional anatomy-informed queries, anatomy-agnostic position-driven queries, and anatomy-agnostic size-driven queries. Additionally, our model also incorporates a symmetry-aware canonicalization module to ensure consistent, accurate segmentations across varying scan orientations and reduce confusion between the anatomical position of an organ and its appearance in the scan. FLanS is trained on a large-scale dataset of over 100k medical images from 7 public datasets. Comprehensive experiments demonstrate the model's superior language understanding and segmentation precision, along with a deep comprehension of the relationship between them, outperforming SOTA baselines on both in-domain and out-of-domain datasets.
KDD ’25, August 3–7, 2025, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>RL4CO: An Extensive Reinforcement Learning for Combinatorial Optimization Benchmark</title>
<link href="https://hdl.handle.net/1721.1/162622" rel="alternate"/>
<author>
<name>Berto, Federico</name>
</author>
<author>
<name>Hua, Chuanbo</name>
</author>
<author>
<name>Park, Junyoung</name>
</author>
<author>
<name>Luttmann, Laurin</name>
</author>
<author>
<name>Ma, Yining</name>
</author>
<author>
<name>Bu, Fanchen</name>
</author>
<author>
<name>Wang, Jiarui</name>
</author>
<author>
<name>Ye, Haoran</name>
</author>
<author>
<name>Kim, Minsu</name>
</author>
<author>
<name>Choi, Sanghyeok</name>
</author>
<author>
<name>Zepeda, Nayeli</name>
</author>
<author>
<name>Hottung, André</name>
</author>
<author>
<name>Zhou, Jianan</name>
</author>
<author>
<name>Bi, Jieyi</name>
</author>
<author>
<name>Hu, Yu</name>
</author>
<author>
<name>Liu, Fei</name>
</author>
<author>
<name>Kim, Hyeonah</name>
</author>
<author>
<name>Son, Jiwoo</name>
</author>
<author>
<name>Kim, Haeyeon</name>
</author>
<author>
<name>Angioni, Davide</name>
</author>
<author>
<name>Kool, Wouter</name>
</author>
<id>https://hdl.handle.net/1721.1/162622</id>
<updated>2025-09-10T07:26:12Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">RL4CO: An Extensive Reinforcement Learning for Combinatorial Optimization Benchmark
Berto, Federico; Hua, Chuanbo; Park, Junyoung; Luttmann, Laurin; Ma, Yining; Bu, Fanchen; Wang, Jiarui; Ye, Haoran; Kim, Minsu; Choi, Sanghyeok; Zepeda, Nayeli; Hottung, André; Zhou, Jianan; Bi, Jieyi; Hu, Yu; Liu, Fei; Kim, Hyeonah; Son, Jiwoo; Kim, Haeyeon; Angioni, Davide; Kool, Wouter
Combinatorial optimization (CO) is fundamental to several real-world applications, from logistics and scheduling to hardware design and resource allocation. Deep reinforcement learning (RL) has recently shown significant benefits in solving CO problems, reducing reliance on domain expertise and improving computational efficiency. However, the absence of a unified benchmarking framework leads to inconsistent evaluations, limits reproducibility, and increases engineering overhead, raising barriers to adoption for new researchers. To address these challenges, we introduce RL4CO, a unified and extensive benchmark with in-depth library coverage of 27 CO problem environments and 23 state-of-the-art baselines. Built on efficient software libraries and best practices in implementation, RL4CO features modularized implementation and flexible configurations of diverse environments, policy architectures, RL algorithms, and utilities with extensive documentation. RL4CO helps researchers build on existing successes while exploring and developing their own designs, facilitating the entire research process by decoupling science from heavy engineering. We finally provide extensive benchmark studies to inspire new insights and future work. RL4CO has already attracted numerous researchers in the community and is open-sourced at https://github.com/ai4co/rl4co.
KDD ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>SPARTA: An Optimization Framework for Differentially Private Sparse Fine-Tuning</title>
<link href="https://hdl.handle.net/1721.1/162621" rel="alternate"/>
<author>
<name>Makni, Mehdi</name>
</author>
<author>
<name>Behdin, Kayhan</name>
</author>
<author>
<name>Afriat, Gabriel</name>
</author>
<author>
<name>Xu, Zheng</name>
</author>
<author>
<name>Vassilvitskii, Sergei</name>
</author>
<author>
<name>Ponomareva, Natalia</name>
</author>
<author>
<name>Mazumder, Rahul</name>
</author>
<author>
<name>Hazimeh, Hussein</name>
</author>
<id>https://hdl.handle.net/1721.1/162621</id>
<updated>2025-09-10T07:26:08Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">SPARTA: An Optimization Framework for Differentially Private Sparse Fine-Tuning
Makni, Mehdi; Behdin, Kayhan; Afriat, Gabriel; Xu, Zheng; Vassilvitskii, Sergei; Ponomareva, Natalia; Mazumder, Rahul; Hazimeh, Hussein
Differentially private stochastic gradient descent (DP-SGD) is broadly considered to be the gold standard for training and fine-tuning neural networks under differential privacy (DP). With the increasing availability of high-quality pre-trained model checkpoints (e.g., vision and language models), fine-tuning has become a popular strategy. However, despite recent progress in understanding and applying DP-SGD for private transfer learning tasks, significant challenges remain, most notably the performance gap between models fine-tuned with DP-SGD and their non-private counterparts. Sparse fine-tuning on private data has emerged as an alternative to full-model fine-tuning; recent work has shown that privately fine-tuning only a small subset of model weights while keeping the rest fixed can lead to better performance. In this work, we propose a new approach for sparse fine-tuning of neural networks under DP. Existing work on private sparse fine-tuning often used a fixed choice of trainable weights (e.g., updating only the last layer) or relied on the public model's weights to choose the subset of weights to modify; such choices remain suboptimal. In contrast, we explore an optimization-based approach, where our selection method makes use of the private gradient information while using off-the-shelf privacy accounting techniques. Our numerical experiments on several computer vision models and datasets show that our parameter selection method leads to better prediction accuracy compared to full-model private fine-tuning or existing private sparse fine-tuning approaches. Our code is available here: https://github.com/mazumder-lab/SPARTA/tree/main
KDD ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Heterophily Meets Heterogeneity: Challenges and a New Large-Scale Graph Benchmark</title>
<link href="https://hdl.handle.net/1721.1/162620" rel="alternate"/>
<author>
<name>Lin, Junhong</name>
</author>
<author>
<name>Guo, Xiaojie</name>
</author>
<author>
<name>Zhang, Shuaicheng</name>
</author>
<author>
<name>Zhu, Yada</name>
</author>
<author>
<name>Shun, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/162620</id>
<updated>2025-09-10T07:26:10Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">When Heterophily Meets Heterogeneity: Challenges and a New Large-Scale Graph Benchmark
Lin, Junhong; Guo, Xiaojie; Zhang, Shuaicheng; Zhu, Yada; Shun, Julian
Graph mining has become crucial in fields such as social science, finance, and cybersecurity. Many large-scale real-world networks exhibit both heterogeneity, where multiple node and edge types exist in the graph, and heterophily, where connected nodes may have dissimilar labels and attributes. However, existing benchmarks primarily focus on either heterophilic homogeneous graphs or homophilic heterogeneous graphs, leaving a significant gap in understanding how models perform on graphs with both heterogeneity and heterophily. To bridge this gap, we introduce H2GB, a large-scale node-classification graph benchmark that brings together the complexities of both the heterophily and heterogeneity properties of real-world graphs. H2GB encompasses 9 real-world datasets spanning 5 diverse domains, 28 baseline models, and a unified benchmarking library with a standardized data loader, evaluator, unified modeling framework, and an extensible framework for reproducibility. We establish a standardized workflow supporting both model selection and development, enabling researchers to easily benchmark graph learning methods. Extensive experiments across 28 baselines reveal that current methods struggle with heterophilic and heterogeneous graphs, underscoring the need for improved approaches. Finally, we present a new variant of the model, H2G-former, developed following our standardized workflow, that excels at this challenging benchmark. Both the benchmark and the framework are publicly available at Github and PyPI, with documentation hosted at https://junhongmit.github.io/H2GB.
KDD ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Initial segments in ordinal recursion theory.</title>
<link href="https://hdl.handle.net/1721.1/162619" rel="alternate"/>
<author>
<name>Dorer, David John.</name>
</author>
<id>https://hdl.handle.net/1721.1/162619</id>
<updated>2025-10-06T17:04:14Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Initial segments in ordinal recursion theory.
Dorer, David John.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 1979; Vita.; Bibliography: leaf 49.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tool and chip temperatures in machine shop practice</title>
<link href="https://hdl.handle.net/1721.1/162618" rel="alternate"/>
<author>
<name>Shore, Henry.</name>
</author>
<id>https://hdl.handle.net/1721.1/162618</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1924-01-01T00:00:00Z</published>
<summary type="text">Tool and chip temperatures in machine shop practice
Shore, Henry.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1924; Includes bibliographical references.
</summary>
<dc:date>1924-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The oxidation of sulphur dioxide in Cottrell precipitators of a contact sulphuric acid plant</title>
<link href="https://hdl.handle.net/1721.1/162617" rel="alternate"/>
<author>
<name>Haberstoh, Robert H.</name>
</author>
<author>
<name>Milligan, Sydney.</name>
</author>
<author>
<name>Roever, Paul H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162617</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1931-01-01T00:00:00Z</published>
<summary type="text">The oxidation of sulphur dioxide in Cottrell precipitators of a contact sulphuric acid plant
Haberstoh, Robert H.; Milligan, Sydney.; Roever, Paul H.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1931
</summary>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Production of carbon black by the decomposition of methane with electrically heated wires</title>
<link href="https://hdl.handle.net/1721.1/162616" rel="alternate"/>
<author>
<name>Donatello, Dominic G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162616</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1939-01-01T00:00:00Z</published>
<summary type="text">Production of carbon black by the decomposition of methane with electrically heated wires
Donatello, Dominic G.
Thesis: B.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1939; Includes bibliographical references (leaf 24).
</summary>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Models for investigating the unreliability of freight shipments by rail.</title>
<link href="https://hdl.handle.net/1721.1/162615" rel="alternate"/>
<author>
<name>Folk, Joseph Frederick.</name>
</author>
<id>https://hdl.handle.net/1721.1/162615</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Models for investigating the unreliability of freight shipments by rail.
Folk, Joseph Frederick.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Vita.; Bibliography: leaves 279-284.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A set-theoretic approach to state estimation.</title>
<link href="https://hdl.handle.net/1721.1/162614" rel="alternate"/>
<author>
<name>Hnyilicza, Esteban.</name>
</author>
<id>https://hdl.handle.net/1721.1/162614</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1969-01-01T00:00:00Z</published>
<summary type="text">A set-theoretic approach to state estimation.
Hnyilicza, Esteban.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Bibliography: leaves 112-113.
</summary>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a voltage-limiting device using SIC nonlinear resistors.</title>
<link href="https://hdl.handle.net/1721.1/162613" rel="alternate"/>
<author>
<name>Asamoah, William Kafui.</name>
</author>
<id>https://hdl.handle.net/1721.1/162613</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Design of a voltage-limiting device using SIC nonlinear resistors.
Asamoah, William Kafui.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1974; Includes bibliographical references.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analog circuit simulator for the Connection Machine</title>
<link href="https://hdl.handle.net/1721.1/162612" rel="alternate"/>
<author>
<name>De Beus, Eric.</name>
</author>
<id>https://hdl.handle.net/1721.1/162612</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">An analog circuit simulator for the Connection Machine
De Beus, Eric.
Thesis: B.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1987; Includes bibliographical references.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boiling heat transfer in rotating channels with reference to gas turbine blade cooling</title>
<link href="https://hdl.handle.net/1721.1/162611" rel="alternate"/>
<author>
<name>Mudawar, Issam Abdallah.</name>
</author>
<id>https://hdl.handle.net/1721.1/162611</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Boiling heat transfer in rotating channels with reference to gas turbine blade cooling
Mudawar, Issam Abdallah.
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Includes bibliographical references.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the developments in the construction, equipment and operation of street railway cars</title>
<link href="https://hdl.handle.net/1721.1/162610" rel="alternate"/>
<author>
<name>French, Grant Keith.</name>
</author>
<id>https://hdl.handle.net/1721.1/162610</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1920-01-01T00:00:00Z</published>
<summary type="text">A study of the developments in the construction, equipment and operation of street railway cars
French, Grant Keith.
Thesis: B.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1920
</summary>
<dc:date>1920-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President year ended June 30, 2025, Office of the Secretary of the Corporation</title>
<link href="https://hdl.handle.net/1721.1/162609" rel="alternate"/>
<author>
<name>Donahue, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/162609</id>
<updated>2025-09-10T07:27:36Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President year ended June 30, 2025, Office of the Secretary of the Corporation
Donahue, Rachel
This report contains the following sections: Office of the Secretary of the Corporation; Corporation Meetings; Annals of Corporation Membership, 2024-2025; Corporation Committees; Corporation Development Committee; MIT Investment Management Company Board; Corporation Screening Committee for Nomination of Recent Graduates; Corporation Joint Advisory Committee on Institute-Wide Affairs; Corporation Visiting Committees; and Office Activities and Personnel.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Microsystems Technology Laboratories</title>
<link href="https://hdl.handle.net/1721.1/162608" rel="alternate"/>
<author>
<name>Palacios, Tomas</name>
</author>
<id>https://hdl.handle.net/1721.1/162608</id>
<updated>2025-09-29T13:43:04Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Microsystems Technology Laboratories
Palacios, Tomas
This report contains the following sections: MTL and RLE: A Successful Integration &amp; Key Outcomes, Industry Engagements, MTL Research: Advancing Innovation Through Collaborative Centers, MTL Activities in the Context of the CHIPS and Science Act, 2025 Research Highlights, MTL/MIT.nano Collaboration and Facilities Update, MTL Outreach and Educational Activities, MTL Core Faculty Promotions, Awards, and Honors.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Pathways Measurement Toolkit</title>
<link href="https://hdl.handle.net/1721.1/142753.2" rel="alternate"/>
<author>
<name>Gralla, Erica</name>
</author>
<author>
<name>Downing, Tristan</name>
</author>
<author>
<name>Blair, Courtney</name>
</author>
<author>
<name>Goentzel, Jarrod</name>
</author>
<author>
<name>Russell, Timothy Edward</name>
</author>
<author>
<name>Wetmore, Finley</name>
</author>
<author>
<name>Peters, Megan</name>
</author>
<author>
<name>Wiseman, Michaela</name>
</author>
<author>
<name>Miles, Jillian</name>
</author>
<author>
<name>Reinker, Madison</name>
</author>
<author>
<name>Steinberg, Sophie</name>
</author>
<id>https://hdl.handle.net/1721.1/142753.2</id>
<updated>2025-10-06T19:35:14Z</updated>
<published>2022-05-25T00:00:00Z</published>
<summary type="text">System Pathways Measurement Toolkit
Gralla, Erica; Downing, Tristan; Blair, Courtney; Goentzel, Jarrod; Russell, Timothy Edward; Wetmore, Finley; Peters, Megan; Wiseman, Michaela; Miles, Jillian; Reinker, Madison; Steinberg, Sophie
This toolkit’s purpose is to support the measurement of system status and change in systems-oriented development projects. Measuring change in a market system (or another complex development system) is challenging because of the system’s complexity: it is difficult (1) to know which parts of the system to measure and (2) to interpret what a collection of diverse measurements tells us about change in the system.&#13;
&#13;
To address both of these challenges, the System Pathways Measurement Toolkit relies on a system map to capture the structure and interconnections of the system, then layers measurements onto the system map to enable the collective interpretation of diverse data on wide-ranging parts of the system. Tools are provided to interpret the measured map by zooming in and out to understand the progression of change in the system, diagnose problems, and explain success. Guidance is provided in deciding which parts of a complex system to measure and in developing indicators that can be interpreted easily on the map, based on either available or to-be-collected data.
</summary>
<dc:date>2022-05-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeding metascience in open and equitable scholarship: An Environmental Scan</title>
<link href="https://hdl.handle.net/1721.1/162607" rel="alternate"/>
<author>
<name>Heidbrick, Amber</name>
</author>
<author>
<name>Ratan, Kristen</name>
</author>
<id>https://hdl.handle.net/1721.1/162607</id>
<updated>2025-09-06T03:04:54Z</updated>
<published>2024-12-01T00:00:00Z</published>
<summary type="text">Seeding metascience in open and equitable scholarship: An Environmental Scan
Heidbrick, Amber; Ratan, Kristen
This document synthesizes the findings of a landscape analysis of open scholarship, equity research, and their intersection. In our research we found that those areas were still incredibly broad, and focused our efforts on the most impactful levers for change, listed below.&#13;
&#13;
Areas of focus:&#13;
Policy (governments, funders, institutions)&#13;
Funding priorities and allocations (governments, funders, institutions) and receiving funding (researchers within academia and outside it)&#13;
Research and research communication (journals, non-journals, all stakeholders)&#13;
Research assessment, career advancement (researchers within academia and outside it, how credit gets assigned)
This paper was part of an NSF EAGER grant titled "Developing a Model for Integrating Research in Open and Equitable Scholarship into Open Science Platform"
</summary>
<dc:date>2024-12-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cross-disciplinary fellowships are a key to rigorous open and equitable science; position paper</title>
<link href="https://hdl.handle.net/1721.1/162606" rel="alternate"/>
<author>
<name>Kriegsman, Suzanne A</name>
</author>
<author>
<name>Altman, Micah</name>
</author>
<id>https://hdl.handle.net/1721.1/162606</id>
<updated>2025-09-06T03:05:11Z</updated>
<published>2025-09-05T00:00:00Z</published>
<summary type="text">Cross-disciplinary fellowships are a key to rigorous open and equitable science; position paper
Kriegsman, Suzanne A; Altman, Micah
In this position paper, we describe how interdisciplinary fellowships can play a pivotal role in the understanding and practice of open and equitable science. To achieve a substantial and durable impact, fellowships will serve a dual role. First, fellows will contribute systematically to the empirical evidence of the effects of open science policies and practices by embedding in highly-active, well-instrumented research environments to conduct highly-targeted, time-limited research projects. Second, the fellowship program will foster the development of leadership and capacity for open science practices within existing communities of scientific practice by recruiting scholars across multiple disciplines who have the capacity and interest to perform systematic empirical metascientific analysis, supporting them in executing and publishing high-impact science-of-science research while simultaneously facilitating their professional advancement within their primary field.
This paper was part of an NSF EAGER grant titled "Developing a Model for Integrating Research in Open and Equitable Scholarship into Open Science Platform"
</summary>
<dc:date>2025-09-05T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Media Lab</title>
<link href="https://hdl.handle.net/1721.1/162605" rel="alternate"/>
<author>
<name>Sweeney, David</name>
</author>
<id>https://hdl.handle.net/1721.1/162605</id>
<updated>2025-09-06T03:09:43Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Media Lab
Sweeney, David
This report contains the following sections: Introduction; Research Updates; New Lab-Level Programs; Publications; Awards and Recognitions; Industry Collaboration; Development; Media Reach; and Conclusion. Aligned with prior President’s Reports, these map to Current Goals, Objectives, and Priorities (Introduction, Conclusion); Accomplishments (Research Updates, Publications, Awards and Recognitions); Administrative Initiatives (New Lab-Level Programs, Industry Collaboration, Media Reach); Finances and Funding (Development); Personnel Information (honors for faculty, students, and alumni within Awards and Recognitions); Teaching and Curriculum (educational innovation noted in Research Updates and Lab-Level Programs); and Research Activities (Research Updates, New Lab-Level Programs, Publications).
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Haystack Observatory</title>
<link href="https://hdl.handle.net/1721.1/162604" rel="alternate"/>
<author>
<name>Erickson, Philip J</name>
</author>
<id>https://hdl.handle.net/1721.1/162604</id>
<updated>2025-09-06T03:09:38Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Haystack Observatory
Erickson, Philip J
This report contains the following sections: Introduction, Astronomy, Geodesy, Geospace, Space research and technology, and Education and public outreach.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inflammation awakens dormant cancer cells by modulating the epithelial–mesenchymal phenotypic state</title>
<link href="https://hdl.handle.net/1721.1/162603" rel="alternate"/>
<author>
<name>Zhang, Jingwei</name>
</author>
<author>
<name>Zhang, Jingwen</name>
</author>
<author>
<name>Han, Longfei</name>
</author>
<author>
<name>Wu, Shiyi</name>
</author>
<author>
<name>Li, Jie</name>
</author>
<author>
<name>Eaton, Elinor Ng</name>
</author>
<author>
<name>Yuan, Bingbing</name>
</author>
<author>
<name>Reinhardt, Ferenc</name>
</author>
<author>
<name>Li, Hao</name>
</author>
<author>
<name>Strasser, Patrick C.</name>
</author>
<author>
<name>Das, Sunnny</name>
</author>
<author>
<name>Donaher, Joana Liu</name>
</author>
<author>
<name>Khalil, Md Imtiaz</name>
</author>
<author>
<name>Jiang, Haiping</name>
</author>
<author>
<name>Deuschel, Alexander</name>
</author>
<author>
<name>Lin, Danni</name>
</author>
<author>
<name>Sebastiany, Carolin</name>
</author>
<author>
<name>Maranga, Mariana</name>
</author>
<author>
<name>Shubitidze, Salomé</name>
</author>
<author>
<name>Liu, Xiaofei</name>
</author>
<author>
<name>Lambert, Arthur W.</name>
</author>
<author>
<name>Zhang, Yun</name>
</author>
<author>
<name>Liu, Yana</name>
</author>
<author>
<name>Sui, Lufei</name>
</author>
<author>
<name>Elmiligy, Sarah</name>
</author>
<author>
<name>Pozza, Umberto</name>
</author>
<author>
<name>Günsay, Rauf</name>
</author>
<author>
<name>Mishra, Ranjan</name>
</author>
<author>
<name>Velarde, Jose</name>
</author>
<author>
<name>Iyer, Sonia</name>
</author>
<author>
<name>Henry, Whitney S.</name>
</author>
<author>
<name>Weiskopf, Kipp</name>
</author>
<author>
<name>Feng, Guihai</name>
</author>
<author>
<name>Oni, Tobiloba E.</name>
</author>
<author>
<name>Watnick, Randolph S.</name>
</author>
<author>
<name>Li, Xin</name>
</author>
<author>
<name>Weinberg, Robert A</name>
</author>
<id>https://hdl.handle.net/1721.1/162603</id>
<updated>2026-03-08T03:24:34Z</updated>
<published>2025-09-03T00:00:00Z</published>
<summary type="text">Inflammation awakens dormant cancer cells by modulating the epithelial–mesenchymal phenotypic state
Zhang, Jingwei; Zhang, Jingwen; Han, Longfei; Wu, Shiyi; Li, Jie; Eaton, Elinor Ng; Yuan, Bingbing; Reinhardt, Ferenc; Li, Hao; Strasser, Patrick C.; Das, Sunnny; Donaher, Joana Liu; Khalil, Md Imtiaz; Jiang, Haiping; Deuschel, Alexander; Lin, Danni; Sebastiany, Carolin; Maranga, Mariana; Shubitidze, Salomé; Liu, Xiaofei; Lambert, Arthur W.; Zhang, Yun; Liu, Yana; Sui, Lufei; Elmiligy, Sarah; Pozza, Umberto; Günsay, Rauf; Mishra, Ranjan; Velarde, Jose; Iyer, Sonia; Henry, Whitney S.; Weiskopf, Kipp; Feng, Guihai; Oni, Tobiloba E.; Watnick, Randolph S.; Li, Xin; Weinberg, Robert A
The awakening of dormant disseminated cancer cells appears to be responsible for the clinical relapses of patients whose primary tumors have been successfully cured months and even years earlier. In the present study, we demonstrate that dormant breast cancer cells lodged in the lungs reside in a highly mesenchymal, nonproliferative phenotypic state. The awakening of these cells is not triggered by a cancer cell-autonomous process. Instead, lung inflammation induced by the chemotherapeutic agent bleomycin effectively awakens dormant cancer cells, providing useful models for studying metastatic awakening. Mechanistically, the awakened cells shift from a highly mesenchymal to a quasi-mesenchymal phenotypic state in which they acquire tumorigenicity and proliferative ability. Once awakened, these cells can stably reside in this quasi-mesenchymal state and maintain their tumor-initiating ability, doing so without ongoing heterotypic signaling from the lung microenvironment. Epidermal growth factor receptor ligands released by the cells of the injured tissue microenvironment, including notably M2 type macrophages, promote dormant cancer cells to move toward this quasi-mesenchymal state, a transition that is critical for the awakening process. An understanding of the mechanisms of metastatic awakening may lead in the future to treatment strategies designed to prevent such awakening and resulting metastatic relapse.
</summary>
<dc:date>2025-09-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, MIT Energy Initiative</title>
<link href="https://hdl.handle.net/1721.1/162602" rel="alternate"/>
<author>
<name>Green, William H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162602</id>
<updated>2025-09-05T03:09:34Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, MIT Energy Initiative
Green, William H.
This report contains the following sections: 2025 Highlights, FY25 Research Accomplishments and Updates, FY25 Education Accomplishments and Updates, FY25 Outreach Accomplishments and Updates, Organization, and Affiliated Groups.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Picower Institute for Learning and Memory</title>
<link href="https://hdl.handle.net/1721.1/162601" rel="alternate"/>
<author>
<name>Tsai, Li-Huei</name>
</author>
<id>https://hdl.handle.net/1721.1/162601</id>
<updated>2025-09-05T03:09:45Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Picower Institute for Learning and Memory
Tsai, Li-Huei
This report contains the following sections: Awards and Honors, Research Advances, Personnel, Resource Development, Media Recognition, Programs and Activities, Research Initiatives, and Faculty Research Summaries.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Computer Science and Artificial Intelligence Laboratory (CSAIL)</title>
<link href="https://hdl.handle.net/1721.1/162600" rel="alternate"/>
<author>
<name>Rus, Daniela L</name>
</author>
<id>https://hdl.handle.net/1721.1/162600</id>
<updated>2025-09-29T13:47:13Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Computer Science and Artificial Intelligence Laboratory (CSAIL)
Rus, Daniela L
This report contains the following sections: Overview, CSAIL Growth and Research, Industrial Outreach, PI Spotlight, Laboratory Sponsored Activities, CSAIL Hosted Lecture Series, Organizational Changes, Awards and Honors, and Key Statistics for Academic Year 2025.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causality - Exploiting Multi-Modal Data</title>
<link href="https://hdl.handle.net/1721.1/162599" rel="alternate"/>
<author>
<name>Uhler, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/162599</id>
<updated>2025-09-03T03:27:31Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">Causality - Exploiting Multi-Modal Data
Uhler, Caroline
Massive data collection holds the promise of a better understanding of complex phenomena and ultimately, of better decisions. Representation learning has become a key driver of deep learning applications, since it allows learning latent spaces that capture important properties of the data without requiring any supervised annotations. While representation learning has been hugely successful in predictive tasks, it can fail miserably in causal tasks including predicting the effect of an intervention. This calls for a marriage between representation learning and causal inference. An exciting opportunity in this regard stems from the growing availability of multi-modal and interventional data (in medicine, advertisement, education, etc.). However, these datasets are still minuscule compared to the action spaces of interest in these applications (e.g. interventions can take on continuous values like the dose of a drug or can be combinatorial as in combinatorial drug therapies). In this talk, we will present a statistical and computational framework for causal representation learning from multi-modal data and its application towards optimal intervention design.
KDD '25, August 3–7, 2025, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Survey on Trustworthy LLM Agents: Threats and Countermeasures</title>
<link href="https://hdl.handle.net/1721.1/162598" rel="alternate"/>
<author>
<name>Yu, Miao</name>
</author>
<author>
<name>Meng, Fanci</name>
</author>
<author>
<name>Zhou, Xinyun</name>
</author>
<author>
<name>Wang, Shilong</name>
</author>
<author>
<name>Mao, Junyuan</name>
</author>
<author>
<name>Pan, Linsey</name>
</author>
<author>
<name>Chen, Tianlong</name>
</author>
<author>
<name>Wang, Kun</name>
</author>
<author>
<name>Li, Xinfeng</name>
</author>
<author>
<name>Zhang, Yongfeng</name>
</author>
<author>
<name>An, Bo</name>
</author>
<author>
<name>Wen, Qingsong</name>
</author>
<id>https://hdl.handle.net/1721.1/162598</id>
<updated>2025-09-03T03:27:16Z</updated>
<published>2025-08-03T00:00:00Z</published>
<summary type="text">A Survey on Trustworthy LLM Agents: Threats and Countermeasures
Yu, Miao; Meng, Fanci; Zhou, Xinyun; Wang, Shilong; Mao, Junyuan; Pan, Linsey; Chen, Tianlong; Wang, Kun; Li, Xinfeng; Zhang, Yongfeng; An, Bo; Wen, Qingsong
With the rapid evolution of Large Language Models (LLMs), LLM-based agents and Multi-agent Systems (MAS) have significantly expanded the capabilities of LLM ecosystems. This evolution stems from empowering LLMs with additional modules such as memory, tools, environment, and even other agents. However, this advancement has also introduced more complex issues of trustworthiness, which previous research focusing solely on LLMs could not cover. In this survey, we propose the TrustAgent framework, a comprehensive study on the trustworthiness of agents, characterized by modular taxonomy, multi-dimensional connotations, and technical implementation. By thoroughly investigating and summarizing newly emerged attacks, defenses, and evaluation methods for agents and MAS, we extend the concept of Trustworthy LLM to the emerging paradigm of Trustworthy Agent. In TrustAgent, we begin by deconstructing and introducing various components of the Agent and MAS. Then, we categorize their trustworthiness into intrinsic (brain, memory, and tool) and extrinsic (user, agent, and environment) aspects. Subsequently, we delineate the multifaceted meanings of trustworthiness and elaborate on the implementation techniques of existing research related to these internal and external modules. Finally, we present our insights and outlook on this domain, aiming to provide guidance for future endeavors. For easy reference, we categorize all the studies mentioned in this survey according to our taxonomy, available at: https://github.com/Ymm-cll/TrustAgent.
KDD ’25, Toronto, ON, Canada
</summary>
<dc:date>2025-08-03T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hopps: Leveraging Sparsity to Accelerate Automata Processing</title>
<link href="https://hdl.handle.net/1721.1/162597" rel="alternate"/>
<author>
<name>Du, Xingran</name>
</author>
<author>
<name>Emer, Joel</name>
</author>
<author>
<name>Sanchez, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/162597</id>
<updated>2025-09-03T03:27:29Z</updated>
<published>2025-08-06T00:00:00Z</published>
<summary type="text">Hopps: Leveraging Sparsity to Accelerate Automata Processing
Du, Xingran; Emer, Joel; Sanchez, Daniel
Automata processing (AP) is a key kernel in data analytics and scientific computing. AP workloads process a stream of symbols with many automata (FSMs) in parallel, e.g., pattern-matching network traffic against many malicious strings.&#13;
The need for high-performance AP has sparked the design of specialized accelerators. But prior AP accelerators are inefficient: AP workloads have substantial sparsity, but accelerators exploit no or limited sparsity. Specifically, each AP workload can be expressed as the concurrent traversal of all automata, which are encoded as graphs. But state-of-the-art accelerators store these graphs uncompressed, using bitsets. This allows the use of specialized memory crossbars that provide high parallelism and efficiency when graphs are dense. But many graphs are highly sparse, making crossbar-based accelerators inefficient.&#13;
We present Hopps, the first automata processing accelerator that exploits sparse data representations. Hopps combines two types of processing units: one represents data uncompressed, which achieves high throughput but is space-inefficient, while the other uses a compressed-sparse representation, which achieves high space efficiency but lower and more variable throughput. To use Hopps well, we present a novel automata mapping algorithm that maps most work to high-throughput units, while keeping a large fraction of state in space-efficient units. Hopps's hybrid design relaxes several constraints in crossbar-based designs, allowing for more efficient high-throughput units (e.g., by using a large number of smaller crossbars). Thus, by making the uncommon case cheap, Hopps makes the common case even faster.&#13;
We evaluate Hopps on AutomataZoo benchmarks. Hopps outperforms prior state-of-the-art accelerators Impala and SpAP by gmean 2.5x and 2.2x when using equal area.
ASPLOS ’25, Rotterdam, Netherlands
</summary>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning</title>
<link href="https://hdl.handle.net/1721.1/162596" rel="alternate"/>
<author>
<name>Chia, Nai-Hui</name>
</author>
<author>
<name>Gilyen, Andras Pal</name>
</author>
<author>
<name>Li, Tongyang</name>
</author>
<author>
<name>Lin, Han-Hsuan</name>
</author>
<author>
<name>Tang, Ewin</name>
</author>
<author>
<name>Wang, Chunhao</name>
</author>
<id>https://hdl.handle.net/1721.1/162596</id>
<updated>2025-09-03T03:27:27Z</updated>
<published>2022-10-27T00:00:00Z</published>
<summary type="text">Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning
Chia, Nai-Hui; Gilyen, Andras Pal; Li, Tongyang; Lin, Han-Hsuan; Tang, Ewin; Wang, Chunhao
We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang’s breakthrough quantum-inspired algorithm for recommendation systems [STOC’19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC’19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all prior results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ2-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
</summary>
<dc:date>2022-10-27T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interaction Is Necessary for Distributed Learning with Privacy or Communication Constraints</title>
<link href="https://hdl.handle.net/1721.1/162595" rel="alternate"/>
<author>
<name>Dagan, Yuval</name>
</author>
<author>
<name>Feldman, Vitaly</name>
</author>
<id>https://hdl.handle.net/1721.1/162595</id>
<updated>2025-09-03T03:27:56Z</updated>
<published>2020-06-22T00:00:00Z</published>
<summary type="text">Interaction Is Necessary for Distributed Learning with Privacy or Communication Constraints
Dagan, Yuval; Feldman, Vitaly
Local differential privacy (LDP) is a model where users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model the protocol consists of a single round in which a server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest.&#13;
Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction this lower bound implies an exponential lower bound for stochastic convex optimization and specifically, for learning linear models with a convex, Lipschitz and smooth loss. These results answer the questions posed by Smith, Thakurta, and Upadhyay (IEEE Symposium on Security and Privacy 2017) and Daniely and Feldman (NeurIPS 2019). Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large margin hyperplane. These lower bounds also hold in the model where communication from each user is limited and follow from a lower bound on learning using non-adaptive statistical queries.
STOC ’20, June 22–26, 2020, Chicago, IL, USA
</summary>
<dc:date>2020-06-22T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception</title>
<link href="https://hdl.handle.net/1721.1/162594" rel="alternate"/>
<author>
<name>Feick, Martin</name>
</author>
<author>
<name>Tang, Xuxin</name>
</author>
<author>
<name>Garcia-Martin, Raul</name>
</author>
<author>
<name>Luchianov, Alexandru</name>
</author>
<author>
<name>Huang, Roderick</name>
</author>
<author>
<name>Xiao, Chang</name>
</author>
<author>
<name>Siu, Alexa</name>
</author>
<author>
<name>Dogan, Mustafa Doga</name>
</author>
<id>https://hdl.handle.net/1721.1/162594</id>
<updated>2026-03-08T03:22:14Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
Feick, Martin; Tang, Xuxin; Garcia-Martin, Raul; Luchianov, Alexandru; Huang, Roderick; Xiao, Chang; Siu, Alexa; Dogan, Mustafa Doga
Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, virtual content can be embedded through direct links (e.g., QR codes); however, this impacts the aesthetics of the paper print and limits the available visual content space. To address this problem, we present Imprinto, an infrared inkjet watermarking technique that allows for invisible content embedding using only off-the-shelf IR inks and a camera. Imprinto was established through a psychophysical experiment, studying how much IR ink can be used while remaining invisible to users regardless of background color. We demonstrate that we can detect invisible IR content through our machine learning pipeline, and we developed an authoring tool that optimizes the amount of IR ink on the color regions of an input document for machine and human detectability. Finally, we demonstrate several applications, including augmenting paper documents and objects.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Environment, Health, and Safety Office</title>
<link href="https://hdl.handle.net/1721.1/162593" rel="alternate"/>
<author>
<name>Durak, Tolga</name>
</author>
<id>https://hdl.handle.net/1721.1/162593</id>
<updated>2025-08-30T03:11:23Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Environment, Health, and Safety Office
Durak, Tolga
This report contains an overview of the office, a summary of accomplishments, and the following sections: By the Numbers, Recognition and Awards, and Organization and Professional Development.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Office of Public Safety</title>
<link href="https://hdl.handle.net/1721.1/162592" rel="alternate"/>
<author>
<name>DiFava, John</name>
</author>
<id>https://hdl.handle.net/1721.1/162592</id>
<updated>2025-08-30T03:11:17Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Office of Public Safety
DiFava, John
This report contains information about the Office of Public Safety. It includes projects, personnel changes, community engagement initiatives, and noteworthy investigations by the MIT Police Department. It also includes projects, initiatives, and accomplishments by the Office of Emergency Management, International Safety and Security, and MIT EMS.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Report to the President for year ended June 30, 2025, Institute Discrimination and Harassment Response Office</title>
<link href="https://hdl.handle.net/1721.1/162591" rel="alternate"/>
<author>
<name>Rankin, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/162591</id>
<updated>2025-08-30T03:11:22Z</updated>
<published>2025-06-30T00:00:00Z</published>
<summary type="text">Report to the President for year ended June 30, 2025, Institute Discrimination and Harassment Response Office
Rankin, Sarah
This report contains the following sections: Academic year 2025 overview, Incident reports, Initiatives, Committee progress, and IDHR staff updates.
</summary>
<dc:date>2025-06-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Models in Qualitative Research: Uses, Tensions, and Intentions</title>
<link href="https://hdl.handle.net/1721.1/162590" rel="alternate"/>
<author>
<name>Schroeder, Hope</name>
</author>
<author>
<name>Randazzo, Casey</name>
</author>
<author>
<name>Mimno, David</name>
</author>
<author>
<name>Schoenebeck, Sarita</name>
</author>
<author>
<name>Le Quéré, Marianne Aubin</name>
</author>
<id>https://hdl.handle.net/1721.1/162590</id>
<updated>2026-03-08T03:22:29Z</updated>
<published>2025-04-25T00:00:00Z</published>
<summary type="text">Large Language Models in Qualitative Research: Uses, Tensions, and Intentions
Schroeder, Hope; Randazzo, Casey; Mimno, David; Schoenebeck, Sarita; Le Quéré, Marianne Aubin
Qualitative researchers use tools to collect, sort, and analyze their data. Should qualitative researchers use large language models (LLMs) as part of their practice? LLMs could augment qualitative research, but it is unclear if their use is appropriate, ethical, or aligned with qualitative researchers’ goals and values. We interviewed twenty qualitative researchers to investigate these tensions. Many participants see LLMs as promising interlocutors with attractive use cases across the stages of research, but wrestle with their performance and appropriateness. Participants surface concerns regarding the use of LLMs while protecting participant interests, and call attention to an urgent lack of norms and tooling to guide the ethical use of LLMs in research. We document the rapid and broad adoption of LLMs across surfaces, which can interfere with intentional use vital to qualitative research. We use the tensions surfaced by our participants to outline recommendations for researchers considering using LLMs in qualitative research and design principles for LLM-assisted qualitative research tools.
CHI ’25, Yokohama, Japan
</summary>
<dc:date>2025-04-25T00:00:00Z</dc:date>
</entry>
<entry>
<title>Slow but Steady: Progress Toward Accessibility-Focused Initiatives in Computer Science Education</title>
<link href="https://hdl.handle.net/1721.1/162589" rel="alternate"/>
<author>
<name>Jimenez, Yerika</name>
</author>
<author>
<name>Daily, Shaundra</name>
</author>
<author>
<name>Washington, A. Nicki</name>
</author>
<author>
<name>Sadler, Cecilé</name>
</author>
<id>https://hdl.handle.net/1721.1/162589</id>
<updated>2026-03-08T03:22:37Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Slow but Steady: Progress Toward Accessibility-Focused Initiatives in Computer Science Education
Jimenez, Yerika; Daily, Shaundra; Washington, A. Nicki; Sadler, Cecilé
Accessibility remains insufficiently integrated in computer science (CS) education, despite its recognized importance. This paper examines how the 3C Fellows Program, a two-year professional development program, facilitated and supported the incorporation of identity-inclusive topics, namely disability, into the postsecondary CS education space. Through analysis of participant interviews and deliverable documentation, findings reveal that through the program, participants deepened their understanding of how disability impacts and is impacted by computing, leading to the design and implementation of five unique accessibility-focused educational initiatives. Results demonstrate that professional development can effectively increase accessibility-focused content in CS education.
RESPECT 2025, Newark, NJ, USA
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Space: Dis/ability and the Scratch Online Community</title>
<link href="https://hdl.handle.net/1721.1/162588" rel="alternate"/>
<author>
<name>Sadler, Cecilé</name>
</author>
<author>
<name>Trapp, Jaleesa</name>
</author>
<id>https://hdl.handle.net/1721.1/162588</id>
<updated>2026-03-08T03:22:39Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Making Space: Dis/ability and the Scratch Online Community
Sadler, Cecilé; Trapp, Jaleesa
Dis/abled youth often face barriers to participation in computational making spaces. This paper examines how youth engage with the Scratch online community to share projects and discussions around dis/ability, creating meaningful connections through creative self-expression. Through counter-storytelling examples, we demonstrate how young people leverage Scratch not only as a programming platform but as a space to build community and celebrate dis/ability identity. Our findings uplift the ways in which young people engage in these spaces to highlight how creative computing environments foster inclusion and connection, dispelling deficit-based narratives in computer science education.
RESPECT 2025, July 14–16, 2025, Newark, NJ, USA
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redefining Data Science: Where Transformative Youth Organizing Meets Arts-Based Abolitionist Education</title>
<link href="https://hdl.handle.net/1721.1/162587" rel="alternate"/>
<author>
<name>Walker, Raechel</name>
</author>
<author>
<name>Cruse, Brady</name>
</author>
<author>
<name>Cora, Aisha</name>
</author>
<author>
<name>Breazeal, Cynthia</name>
</author>
<id>https://hdl.handle.net/1721.1/162587</id>
<updated>2026-03-08T03:21:56Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Redefining Data Science: Where Transformative Youth Organizing Meets Arts-Based Abolitionist Education
Walker, Raechel; Cruse, Brady; Cora, Aisha; Breazeal, Cynthia
Data science courses often exclude engagement with minoritized groups, discouraging these students from pursuing this field. Our Data Activism Program for African American students integrated arts-based abolitionist education and transformative youth organizing. Students collaborated with four community organizations, conducting interviews and surveys to engage with their community and highlight racial disparities in environmental injustice. Post-course surveys and interviews showed an increase in students' ability to apply transformative youth organizing to data science, demonstrating real-world impact. They found the program accessible and meaningful, transforming data science into a tool for self-expression, critical analysis, and activism rather than just an academic subject.
RESPECT 2025, Newark, NJ, USA
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Liberatory Computing: Empowering African American Students Through Data Activism</title>
<link href="https://hdl.handle.net/1721.1/162586" rel="alternate"/>
<author>
<name>Walker, Raechel</name>
</author>
<id>https://hdl.handle.net/1721.1/162586</id>
<updated>2026-03-08T03:21:57Z</updated>
<published>2025-07-14T00:00:00Z</published>
<summary type="text">Liberatory Computing: Empowering African American Students Through Data Activism
Walker, Raechel
Computing curricula often inadvertently reinforce a harmful, singular narrative about African American communities, focusing solely on stories that emphasize crime prediction and policing [4, 8, 9]. This reinforces the harmful stereotype that African American communities are primarily sites of criminal activity rather than centers of innovation, creativity, and resilience [1, 5, 7]. In contrast, the framework I developed, "liberatory computing", offers a guideline that can be integrated into computing curricula precisely to counter these clichés [13]. Composed of Dr. Aaliyah El-Amin's five pillars of liberation (a sound racial identity, critical consciousness, collective obligation, a liberation-centered academic identity, and activism skills), liberatory computing empowers students to challenge and mitigate systemic oppression through computing [2]. My research applies this framework as a way to empower African American students to address embedded racism through data activism, in which I created two Data Activism Programs [10]. The first taught students how to use data science to support the minoritized communities of the participants, while the second incorporated collaboration with community organizers, increasing the inclusion of desire-based research [12].&#13;
My first Data Activism program engaged 12 high school students of color; the second included 24 students of African descent who partnered with Greater Boston community organizations on projects involving data, geospatial, and qualitative analysis, as well as artistic expression. Pre- and post-surveys showed increased awareness of data science's role in addressing racism and enhanced advocacy skills [12]. Interviews revealed that working to challenge systemic oppression inspired students to continue integrating data activism into their futures.
RESPECT 2025, Newark, NJ, USA
</summary>
<dc:date>2025-07-14T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Agentic AI for Science Hypothesis Generation, Comprehension, Quantification, and Validation</title>
<link href="https://hdl.handle.net/1721.1/162585" rel="alternate"/>
<author>
<name>Buehler, Markus</name>
</author>
<id>https://hdl.handle.net/1721.1/162585</id>
<updated>2026-03-08T03:21:58Z</updated>
<published>2025-05-23T00:00:00Z</published>
<summary type="text">Towards Agentic AI for Science Hypothesis Generation, Comprehension, Quantification, and Validation
Buehler, Markus
AI is revolutionizing scientific discovery by connecting seemingly unrelated fields – from mechanics to biology, and science to art. However, how can we build AI models that don’t merely retrieve information but make new discoveries, going beyond interpolation to extrapolate and reason over never-before-seen scenarios and concepts? In this talk we describe how a new generation of physics-aware AI is breaking traditional boundaries through:&#13;
• Innovative graph-based generative AI combining physics and data-driven modeling&#13;
• Biologically inspired neural structures that adapt dynamically&#13;
• Multi-agent systems that mirror natural systems&#13;
Through practical case studies, I will present how this technology transforms materials science across scales – from silk and collagen to biomineralized materials – with direct applications in medicine, food systems, and agriculture. The versatility of agent development allows for expertise in diverse domains, including knowledge retrieval, protein structure analysis, physics-based simulations, and results analysis. The dynamic collaboration between agents, empowered by LLMs that can reason over sequences, data, images, and text, provides a versatile approach to tackling protein design and analysis problems, as demonstrated through diverse examples in this study.
WWW Companion '25, April 28-May 2, 2025, Sydney, NSW, Australia
</summary>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>TIME 2025: 1st International Workshop on Transformative Insights in Multi-faceted Evaluation</title>
<link href="https://hdl.handle.net/1721.1/162584" rel="alternate"/>
<author>
<name>Wang, Lei</name>
</author>
<author>
<name>Hossain, Md Zakir</name>
</author>
<author>
<name>Islam, Syed</name>
</author>
<author>
<name>Gedeon, Tom</name>
</author>
<author>
<name>Alghowinem, Sharifa</name>
</author>
<author>
<name>Yu, Isabella</name>
</author>
<author>
<name>Bono, Serena</name>
</author>
<author>
<name>Zhu, Xuanying</name>
</author>
<author>
<name>Nguyen, Gennie</name>
</author>
<author>
<name>Haldar, Nur Al Hasan</name>
</author>
<author>
<name>Jalali, Seyed Mohammad Jafar</name>
</author>
<author>
<name>Razzaque, Md Abdur</name>
</author>
<author>
<name>Razzak, Imran</name>
</author>
<author>
<name>Islam, Md Rafiqul</name>
</author>
<author>
<name>Uddin, Shahadat</name>
</author>
<author>
<name>Janjua, Naeem</name>
</author>
<author>
<name>Krishna, Aneesh</name>
</author>
<author>
<name>Ashraf, Manzur</name>
</author>
<id>https://hdl.handle.net/1721.1/162584</id>
<updated>2026-03-08T03:21:58Z</updated>
<published>2025-05-23T00:00:00Z</published>
<summary type="text">TIME 2025: 1st International Workshop on Transformative Insights in Multi-faceted Evaluation
Wang, Lei; Hossain, Md Zakir; Islam, Syed; Gedeon, Tom; Alghowinem, Sharifa; Yu, Isabella; Bono, Serena; Zhu, Xuanying; Nguyen, Gennie; Haldar, Nur Al Hasan; Jalali, Seyed Mohammad Jafar; Razzaque, Md Abdur; Razzak, Imran; Islam, Md Rafiqul; Uddin, Shahadat; Janjua, Naeem; Krishna, Aneesh; Ashraf, Manzur
Our workshop brings together domain experts and research students to share insights, practical guidance, and evaluations on key topics, including social network analysis, graph algorithms, web mining, semantics and knowledge, security, privacy, fairness, and ethics on the web. We invite survey, evaluation, or review papers that critically analyze models and datasets from diverse perspectives. These papers serve as essential resources by (i) providing quick reference guides for researchers and practitioners, (ii) enhancing accessibility for newcomers, and (iii) distilling key insights into actionable knowledge. Complementing these contributions, invited talks from experts and industry leaders will offer practical perspectives, fostering cross-domain collaboration in web technologies. Through thought-provoking discussions and networking opportunities, the workshop bridges research and real-world applications, setting a new standard for interdisciplinary exchange in the field.
WWW Companion ’25, April 28-May 2, 2025, Sydney, NSW, Australia
</summary>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Spatial Disparity in Urban Prediction Using Residual-Aware Spatiotemporal Graph Neural Networks: A Chicago Case Study</title>
<link href="https://hdl.handle.net/1721.1/162583" rel="alternate"/>
<author>
<name>Zhuang, Dingyi</name>
</author>
<author>
<name>Xu, Hanyong</name>
</author>
<author>
<name>Guo, Xiaotong</name>
</author>
<author>
<name>Zheng, Yunhan</name>
</author>
<author>
<name>Wang, Shenhao</name>
</author>
<author>
<name>Zhao, Jinhua</name>
</author>
<id>https://hdl.handle.net/1721.1/162583</id>
<updated>2026-03-08T03:22:02Z</updated>
<published>2025-05-23T00:00:00Z</published>
<summary type="text">Mitigating Spatial Disparity in Urban Prediction Using Residual-Aware Spatiotemporal Graph Neural Networks: A Chicago Case Study
Zhuang, Dingyi; Xu, Hanyong; Guo, Xiaotong; Zheng, Yunhan; Wang, Shenhao; Zhao, Jinhua
Urban prediction tasks, such as forecasting traffic flow, temperature, and crime rates, are crucial for efficient urban planning and management. However, existing Spatiotemporal Graph Neural Networks (ST-GNNs) often rely solely on accuracy, overlooking spatial and demographic disparities in their predictions. This oversight can lead to imbalanced resource allocation and exacerbate existing inequities in urban areas. This study introduces a Residual-Aware Attention (RAA) Block and an equality-enhancing loss function to address these disparities. By adapting the adjacency matrix during training and incorporating spatial disparity metrics, our approach aims to reduce local segregation of residuals and errors. We applied our methodology to urban prediction tasks in Chicago, utilizing travel demand datasets as an example. Our model achieved a significant 48% improvement in fairness metrics with only a 9% increase in error metrics. Spatial analysis of residual distributions revealed that models with RAA Blocks produced more equitable prediction results, particularly by reducing errors clustered in central regions, supporting more balanced and equitable urban planning and policy-making.
WWW Companion ’25, Sydney, NSW, Australia
</summary>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commonsense AI in the History of the Web</title>
<link href="https://hdl.handle.net/1721.1/162582" rel="alternate"/>
<author>
<name>Kejriwal, Mayank</name>
</author>
<author>
<name>McGuinness, Deborah</name>
</author>
<author>
<name>Lieberman, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/162582</id>
<updated>2026-03-08T03:22:16Z</updated>
<published>2025-05-23T00:00:00Z</published>
<summary type="text">Commonsense AI in the History of the Web
Kejriwal, Mayank; McGuinness, Deborah; Lieberman, Henry
Machine common sense (MCS), the challenge of enabling computers to grasp everyday human knowledge, has been a grand challenge in Artificial Intelligence (AI) since the 1950s. While recent advances in large language models have led to impressive progress, there is still no consensus on how much common sense today's AI actually possesses. In this brief review, we revisit the historical development of MCS in the context of the Web, examining how the Web's evolution, from early knowledge representation efforts to knowledge graphs, the Semantic Web, and crowdsourcing, has shaped MCS research. We argue that key breakthroughs in Web technologies were instrumental in addressing longstanding challenges of scale and coverage in commonsense reasoning. At the same time, MCS research has influenced the development of core Web applications, including intelligent agents, plausibility-based reasoning, and robust evaluation of black-box AI systems.
WWW Companion ’25, Sydney, NSW, Australia
</summary>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Artificial Intelligence for Complex Network: Potential, Methodology and Application</title>
<link href="https://hdl.handle.net/1721.1/162581" rel="alternate"/>
<author>
<name>Ding, Jingtao</name>
</author>
<author>
<name>Zheng, Yu</name>
</author>
<author>
<name>Wang, Huandong</name>
</author>
<author>
<name>Cannistraci, Carlo Vittorio</name>
</author>
<author>
<name>Gao, Jianxi</name>
</author>
<author>
<name>Li, Yong</name>
</author>
<author>
<name>Shi, Chuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162581</id>
<updated>2026-03-08T03:22:31Z</updated>
<published>2025-05-23T00:00:00Z</published>
<summary type="text">Artificial Intelligence for Complex Network: Potential, Methodology and Application
Ding, Jingtao; Zheng, Yu; Wang, Huandong; Cannistraci, Carlo Vittorio; Gao, Jianxi; Li, Yong; Shi, Chuan
This tutorial will explore the fascinating domain of empirical network modeling through artificial intelligence (AI) techniques, with applications across social media, web systems, and urban environments. Participants will gain valuable insights into incorporating advanced AI methods—such as graph machine learning, deep reinforcement learning, and generative models—within complex network science. The goal is to provide a comprehensive understanding of how these models can effectively represent, predict, and control empirical networked systems with heterogeneous structures and dynamic processes. The tutorial will begin by introducing essential background knowledge, outlining motivations and challenges, exploring recent methodological advances, and highlighting key applications.
WWW Companion ’25, Sydney, NSW, Australia
</summary>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wikipedia Contributions in the Wake of ChatGPT</title>
<link href="https://hdl.handle.net/1721.1/162580" rel="alternate"/>
<author>
<name>Lyu, Liang</name>
</author>
<author>
<name>Siderius, James</name>
</author>
<author>
<name>Li, Hannah</name>
</author>
<author>
<name>Acemoglu, Daron</name>
</author>
<author>
<name>Huttenlocher, Daniel</name>
</author>
<author>
<name>Ozdaglar, Asuman</name>
</author>
<id>https://hdl.handle.net/1721.1/162580</id>
<updated>2026-03-08T03:22:41Z</updated>
<published>2025-05-23T00:00:00Z</published>
<summary type="text">Wikipedia Contributions in the Wake of ChatGPT
Lyu, Liang; Siderius, James; Li, Hannah; Acemoglu, Daron; Huttenlocher, Daniel; Ozdaglar, Asuman
How has Wikipedia activity changed for articles with content similar to ChatGPT following its introduction? We estimate the impact using differences-in-differences models, with dissimilar Wikipedia articles as a baseline for comparison, to examine how changes in voluntary knowledge contributions and information-seeking behavior differ by article content. Our analysis reveals that newly created, popular articles whose content overlaps with ChatGPT 3.5 saw a greater decline in editing and viewership after the November 2022 launch of ChatGPT than dissimilar articles did. These findings indicate heterogeneous substitution effects, where users selectively engage less with existing platforms when AI provides comparable content. This points to potential uneven impacts on the future of human-driven online knowledge contributions.
WWW Companion ’25, Sydney, NSW, Australia
</summary>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Constant-Rate Entanglement Distillation for Fast Quantum Interconnects</title>
<link href="https://hdl.handle.net/1721.1/162579" rel="alternate"/>
<author>
<name>Pattison, Christopher</name>
</author>
<author>
<name>Baranes, Gefen</name>
</author>
<author>
<name>Bonilla Ataides, Juan Pablo</name>
</author>
<author>
<name>Lukin, Mikhail D.</name>
</author>
<author>
<name>Zhou, Hengyun</name>
</author>
<id>https://hdl.handle.net/1721.1/162579</id>
<updated>2026-03-08T03:21:48Z</updated>
<published>2025-06-20T00:00:00Z</published>
<summary type="text">Constant-Rate Entanglement Distillation for Fast Quantum Interconnects
Pattison, Christopher; Baranes, Gefen; Bonilla Ataides, Juan Pablo; Lukin, Mikhail D.; Zhou, Hengyun
Distributed quantum computing allows the modular construction of large-scale quantum computers and enables new protocols for blind quantum computation. However, such applications in the large-scale, fault-tolerant regime place stringent demands on the fidelity and rate of entanglement generation, which are not met by existing methods for quantum interconnects.&#13;
In this work, we develop constant-rate entanglement distillation methods to address this bottleneck in the setting of noisy local operations. By using a sequence of two-way entanglement distillation protocols based on quantum error detecting codes with increasing rate, and combining with standard fault tolerance techniques, we achieve constant-rate entanglement distillation. We show that the scheme has a constant rate in expectation, and further numerically optimize to achieve low practical overhead under memory constraints. We find that compared to existing quantum interconnect schemes, our methods reduce the communication overhead by more than 10× in relevant regimes, leading to a direct speed-up in the execution of distributed quantum algorithms.
ISCA ’25, Tokyo, Japan
</summary>
<dc:date>2025-06-20T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing the Science of Teaching with Tutoring Data: A Collaborative Workshop with the National Tutoring Observatory</title>
<link href="https://hdl.handle.net/1721.1/162578" rel="alternate"/>
<author>
<name>Thomas, Danielle R.</name>
</author>
<author>
<name>Demszky, Dorottya</name>
</author>
<author>
<name>Koedinger, Kenneth R.</name>
</author>
<author>
<name>Marland, Joshua</name>
</author>
<author>
<name>Pietrzak, Doug</name>
</author>
<author>
<name>Reich, Justin</name>
</author>
<author>
<name>Slama, Rachel</name>
</author>
<author>
<name>Toutziaridi, Amalia</name>
</author>
<author>
<name>Kizilcec, René</name>
</author>
<id>https://hdl.handle.net/1721.1/162578</id>
<updated>2026-03-08T03:21:51Z</updated>
<published>2025-07-17T00:00:00Z</published>
<summary type="text">Advancing the Science of Teaching with Tutoring Data: A Collaborative Workshop with the National Tutoring Observatory
Thomas, Danielle R.; Demszky, Dorottya; Koedinger, Kenneth R.; Marland, Joshua; Pietrzak, Doug; Reich, Justin; Slama, Rachel; Toutziaridi, Amalia; Kizilcec, René
Effective teaching is among the most powerful influences on student learning, but scientific progress in understanding effective teaching moves has been held back by insufficient data on teaching. Despite extensive research efforts, progress is hindered by persistent challenges related to data de-identification and preprocessing, annotation and segmentation, multimodal analysis, and predictive and causal modeling of student outcomes. Addressing these barriers requires a concerted, interdisciplinary approach. The National Tutoring Observatory (NTO) is a first-of-its-kind research infrastructure designed to unite researchers, developers, tutoring providers, and educational organizations in tackling common barriers to uncovering the dynamics of effective tutoring moves. The NTO is spearheading the creation of the Million Tutor Moves dataset, the largest open-access collection of tutoring interactions, leveraging artificial intelligence to unlock insights that accelerate the science of teaching at scale. This workshop aims to bring together the Learning at Scale community to share progress, identify common challenges, and explore collaborative solutions. The agenda will feature presentations of accepted papers, interactive demos, and a moderated panel bringing together researchers, developers, and tutoring providers. This workshop aims to advance a shared vision for uncovering the fundamental principles of impactful tutoring and teaching through the power of collaborative research and data-driven discovery.
L@S ’25, Palermo, Italy
</summary>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors</title>
<link href="https://hdl.handle.net/1721.1/162577" rel="alternate"/>
<author>
<name>Ahmad, Mak</name>
</author>
<author>
<name>Ravi, Prerna</name>
</author>
<author>
<name>Karger, David</name>
</author>
<author>
<name>Facciotti, Marc</name>
</author>
<id>https://hdl.handle.net/1721.1/162577</id>
<updated>2026-03-08T03:21:53Z</updated>
<published>2025-07-17T00:00:00Z</published>
<summary type="text">How Adding Metacognitive Requirements in Support of AI Feedback in Practice Exams Transforms Student Learning Behaviors
Ahmad, Mak; Ravi, Prerna; Karger, David; Facciotti, Marc
Providing personalized, detailed feedback at scale in large undergraduate STEM courses remains a persistent challenge. We present an empirically evaluated practice exam system that integrates AI-generated feedback with targeted textbook references, deployed in a large introductory biology course. Our system specifically aims to encourage metacognitive behavior by asking students to explain their answers and declare their confidence. It uses OpenAI's GPT-4o to generate personalized feedback based on this information, while directing students to relevant textbook sections. Through detailed interaction logs from consenting participants across three midterms (541, 342, and 413 students, respectively), totaling 28,313 question-student interactions across 146 learning objectives, along with 279 post-exam surveys and 23 semi-structured interviews, we examined the system's impact on learning outcomes and student engagement. Analysis showed that across all midterms, the different feedback types showed no statistically significant differences in performance, though there were some trends suggesting potential benefits worth further investigation. The system's most substantial impact emerged through its required confidence ratings and explanations, which students reported transferring to their actual exam strategies. Approximately 40% of students engaged with textbook references when prompted by feedback, significantly higher than traditional reading compliance rates. Survey data revealed high student satisfaction (M=4.1/5), with 82.1% reporting increased confidence on midterm topics they had practiced, and 73.4% indicating they could recall and apply specific concepts from practice sessions. Our findings demonstrate how thoughtfully designed AI-enhanced systems can scale formative assessment while promoting sustainable study practices and self-regulated learning behaviors, suggesting that embedding structured reflection requirements may be more impactful than sophisticated feedback mechanisms.
L@S ’25, Palermo, Italy
</summary>
<dc:date>2025-07-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Scientific Knowledge Gap and Reproducibility: A Survey of Provenance, Assertion and Evidence Ontologies</title>
<link href="https://hdl.handle.net/1721.1/162576" rel="alternate"/>
<author>
<name>Chhetri, Tek Raj</name>
</author>
<author>
<name>Halchenko, Yaroslav</name>
</author>
<author>
<name>Jarecka, Dorota</name>
</author>
<author>
<name>Trivedi, Puja</name>
</author>
<author>
<name>Ghosh, Satrajit</name>
</author>
<author>
<name>Ray, Patrick</name>
</author>
<author>
<name>Ng, Lydia</name>
</author>
<id>https://hdl.handle.net/1721.1/162576</id>
<updated>2026-03-08T03:21:54Z</updated>
<published>2025-05-23T00:00:00Z</published>
<summary type="text">Bridging the Scientific Knowledge Gap and Reproducibility: A Survey of Provenance, Assertion and Evidence Ontologies
Chhetri, Tek Raj; Halchenko, Yaroslav; Jarecka, Dorota; Trivedi, Puja; Ghosh, Satrajit; Ray, Patrick; Ng, Lydia
The rapid growth of scientific publications and evolving experimental paradigms create significant challenges in staying up-to-date with current advances. Assertions are often unstructured and have limited provenance, which hinders reproducibility. Ontologies and knowledge graphs (KGs) offer structured solutions by capturing assertions, evidence, and provenance to support reproducibility. This paper reviews 23 ontologies -- 13 focused on assertions and evidence and 10 on provenance -- providing an overview of the current landscape while highlighting key challenges and opportunities for improvement.
WWW Companion ’25, Sydney, NSW, Australia
</summary>
<dc:date>2025-05-23T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracing the stepwise Darwinian evolution of a plant halogenase.</title>
<link href="https://hdl.handle.net/1721.1/162575" rel="alternate"/>
<author>
<name>Kim, Colin Y</name>
</author>
<author>
<name>Kastner, David W</name>
</author>
<author>
<name>Mitchell, Andrew J</name>
</author>
<author>
<name>Gutierrez, Michael A</name>
</author>
<author>
<name>Yao, Jocelyn S</name>
</author>
<author>
<name>Neumann, Edwin N</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Weng, Jing-Ke</name>
</author>
<id>https://hdl.handle.net/1721.1/162575</id>
<updated>2026-03-08T03:24:19Z</updated>
<published>2025-08-13T00:00:00Z</published>
<summary type="text">Tracing the stepwise Darwinian evolution of a plant halogenase.
Kim, Colin Y; Kastner, David W; Mitchell, Andrew J; Gutierrez, Michael A; Yao, Jocelyn S; Neumann, Edwin N; Kulik, Heather J; Weng, Jing-Ke
Biohalogenation is rare in plant metabolism, with the Menispermaceae's chloroalkaloid acutumine being an exception. This involves a specialized dechloroacutumine halogenase (DAH) from the iron- and 2-oxoglutarate-dependent dioxygenase (2ODD) family. While DAH is presumed to have evolved from an ancestral 2ODD, how enzyme specialization arises through Darwinian processes remains a fundamental question in understanding metabolic evolution. Here, we investigate the evolutionary history of DAH using the chromosomal-level genome of &lt;i&gt;Menispermum canadense&lt;/i&gt;. Phylogenomic dating and synteny analyses reveal DAH evolution through tandem duplication of an ancestral flavonol synthase (FLS) gene, followed by neofunctionalization and gene loss events. Structural modeling, molecular dynamics, and site-directed mutagenesis identify mutations enabling the catalytic switch from FLS to DAH. This required traversing a complex evolutionary landscape with deep fitness valleys separating intermediate states captured in the &lt;i&gt;M. canadense&lt;/i&gt; genome. Our findings illustrate how enzymatic functions evolve through lineage-specific pathways, reshaping active sites and enabling catalytic mechanism-switching mutations.
</summary>
<dc:date>2025-08-13T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic Bandgap Engineering of a 2D Organic–Inorganic Chalcogenide Semiconductor via Ligand Modification</title>
<link href="https://hdl.handle.net/1721.1/162574" rel="alternate"/>
<author>
<name>Sakurada, Tomoaki</name>
</author>
<author>
<name>Paritmongkol, Watcharaphol</name>
</author>
<author>
<name>Cho, Yeongsu</name>
</author>
<author>
<name>Lee, Woo Seok</name>
</author>
<author>
<name>Chatsiri, Petcharaphorn</name>
</author>
<author>
<name>Oppenheim, Julius J</name>
</author>
<author>
<name>Wan, Ruomeng</name>
</author>
<author>
<name>Su, Annlin</name>
</author>
<author>
<name>Samulewicz, Nicholas</name>
</author>
<author>
<name>Wannakan, Khemika</name>
</author>
<author>
<name>Müller, Peter</name>
</author>
<author>
<name>Dincă, Mircea</name>
</author>
<author>
<name>Kulik, Heather J</name>
</author>
<author>
<name>Tisdale, William A</name>
</author>
<id>https://hdl.handle.net/1721.1/162574</id>
<updated>2026-03-08T03:24:21Z</updated>
<published>2025-08-19T00:00:00Z</published>
<summary type="text">Systematic Bandgap Engineering of a 2D Organic–Inorganic Chalcogenide Semiconductor via Ligand Modification
Sakurada, Tomoaki; Paritmongkol, Watcharaphol; Cho, Yeongsu; Lee, Woo Seok; Chatsiri, Petcharaphorn; Oppenheim, Julius J; Wan, Ruomeng; Su, Annlin; Samulewicz, Nicholas; Wannakan, Khemika; Müller, Peter; Dincă, Mircea; Kulik, Heather J; Tisdale, William A
Hybrid organic–inorganic semiconductors present new opportunities for optoelectronic materials design not available in all-organic or all-inorganic materials. One example is silver phenylselenide (AgSePh) – or “mithrene” – a blue-emitting 2D organic–inorganic semiconductor exhibiting strong optical and electronic anisotropy. Here, we show that the bandgap of mithrene can be systematically tuned by introducing electron-donating and electron-withdrawing groups to the phenyl ligands. We synthesized nine mithrene variants, eight of which formed 2D van der Waals crystals analogous to those of AgSePh. Density functional theory calculations reveal that these 2D mithrene variants are direct-gap or nearly direct gap semiconductors. Furthermore, we identify correlations between the optical gap and three experimental observables – the Hammett constant, 77Se chemical shift, and selenium partial charge – offering predictive power for bandgap tuning. These findings highlight new opportunities for applying the tools of chemical synthesis to semiconductor materials design.
</summary>
<dc:date>2025-08-19T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Open Reaction Database</title>
<link href="https://hdl.handle.net/1721.1/162573" rel="alternate"/>
<author>
<name>Kearnes, Steven M</name>
</author>
<author>
<name>Maser, Michael R</name>
</author>
<author>
<name>Wleklinski, Michael</name>
</author>
<author>
<name>Kast, Anton</name>
</author>
<author>
<name>Doyle, Abigail G</name>
</author>
<author>
<name>Dreher, Spencer D</name>
</author>
<author>
<name>Hawkins, Joel M</name>
</author>
<author>
<name>Jensen, Klavs F</name>
</author>
<author>
<name>Coley, Connor W</name>
</author>
<id>https://hdl.handle.net/1721.1/162573</id>
<updated>2026-03-08T03:24:18Z</updated>
<published>2021-11-02T00:00:00Z</published>
<summary type="text">The Open Reaction Database
Kearnes, Steven M; Maser, Michael R; Wleklinski, Michael; Kast, Anton; Doyle, Abigail G; Dreher, Spencer D; Hawkins, Joel M; Jensen, Klavs F; Coley, Connor W
Chemical reaction data in journal articles, patents, and even electronic laboratory notebooks are currently stored in various formats, often unstructured, which presents a significant barrier to downstream applications, including the training of machine-learning models. We present the Open Reaction Database (ORD), an open-access schema and infrastructure for structuring and sharing organic reaction data, including a centralized data repository. The ORD schema supports conventional and emerging technologies, from benchtop reactions to automated high-throughput experiments and flow chemistry. The data, schema, supporting code, and web-based user interfaces are all publicly available on GitHub. Our vision is that a consistent data representation and infrastructure to support data sharing will enable downstream applications that will greatly improve the state of the art with respect to computer-aided synthesis planning, reaction prediction, and other predictive chemistry tasks.
</summary>
<dc:date>2021-11-02T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automation and Microfluidics for the Efficient, Fast, and Focused Reaction Development of Asymmetric Hydrogenation Catalysis</title>
<link href="https://hdl.handle.net/1721.1/162572" rel="alternate"/>
<author>
<name>van Putten, Robbert</name>
</author>
<author>
<name>Eyke, Natalie S</name>
</author>
<author>
<name>Baumgartner, Lorenz M</name>
</author>
<author>
<name>Schultz, Victor L</name>
</author>
<author>
<name>Filonenko, Georgy A</name>
</author>
<author>
<name>Jensen, Klavs F</name>
</author>
<author>
<name>Pidko, Evgeny A</name>
</author>
<id>https://hdl.handle.net/1721.1/162572</id>
<updated>2026-03-08T03:24:15Z</updated>
<published>2022-04-26T00:00:00Z</published>
<summary type="text">Automation and Microfluidics for the Efficient, Fast, and Focused Reaction Development of Asymmetric Hydrogenation Catalysis
van Putten, Robbert; Eyke, Natalie S; Baumgartner, Lorenz M; Schultz, Victor L; Filonenko, Georgy A; Jensen, Klavs F; Pidko, Evgeny A
Automation and microfluidic tools potentially enable efficient, fast, and focused reaction development of complex chemistries, while minimizing resource- and material consumption. The introduction of automation-assisted workflows will contribute to the more sustainable development and scale-up of new and improved catalytic technologies. Herein, the application of automation and microfluidics to the development of a complex asymmetric hydrogenation reaction is described. Screening and optimization experiments were performed using an automated microfluidic platform, which enabled a drastic reduction in the material consumption compared to conventional laboratory practices. A suitable catalytic system was identified from a library of RuII-diamino precatalysts. In situ precatalyst activation was studied with 1H/31P nuclear magnetic resonance (NMR), and the reaction was scaled up to multigram quantities in a batch autoclave. These reactions were monitored using an automated liquid-phase sampling system. Ultimately, in less than a week of total experimental time, multigram quantities of the target enantiopure alcohol product were provided by this automation-assisted approach.
</summary>
<dc:date>2022-04-26T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Optimization of Computer-Proposed Multistep Synthetic Routes on an Automated Robotic Flow Platform</title>
<link href="https://hdl.handle.net/1721.1/162571" rel="alternate"/>
<author>
<name>Nambiar, Anirudh MK</name>
</author>
<author>
<name>Breen, Christopher P</name>
</author>
<author>
<name>Hart, Travis</name>
</author>
<author>
<name>Kulesza, Timothy</name>
</author>
<author>
<name>Jamison, Timothy F</name>
</author>
<author>
<name>Jensen, Klavs F</name>
</author>
<id>https://hdl.handle.net/1721.1/162571</id>
<updated>2026-03-08T03:24:14Z</updated>
<published>2022-06-10T00:00:00Z</published>
<summary type="text">Bayesian Optimization of Computer-Proposed Multistep Synthetic Routes on an Automated Robotic Flow Platform
Nambiar, Anirudh MK; Breen, Christopher P; Hart, Travis; Kulesza, Timothy; Jamison, Timothy F; Jensen, Klavs F
Computer-aided synthesis planning (CASP) tools can propose retrosynthetic pathways and forward reaction conditions for the synthesis of organic compounds, but the limited availability of context-specific data currently necessitates experimental development to fully specify process details. We plan and optimize a CASP-proposed and human-refined multistep synthesis route toward an exemplary small molecule, sonidegib, on a modular, robotic flow synthesis platform with integrated process analytical technology (PAT) for data-rich experimentation. Human insights address catalyst deactivation and improve yield by strategic choices of order of addition. Multi-objective Bayesian optimization identifies optimal values for categorical and continuous process variables in the multistep route involving 3 reactions (including heterogeneous hydrogenation) and 1 separation. The platform's modularity, robotic reconfigurability, and flexibility for convergent synthesis are shown to be essential for allowing variation of downstream residence time in multistep flow processes and controlling the order of addition to minimize undesired reactivity. Overall, the work demonstrates how automation, machine learning, and robotics enhance manual experimentation through assistance with idea generation, experimental design, execution, and optimization.
</summary>
<dc:date>2022-06-10T00:00:00Z</dc:date>
</entry>
<entry>
<title>REEV SENSE IMUs for Gait Analysis in Stroke: A Clinical Study on Lower Limb Kinematics</title>
<link href="https://hdl.handle.net/1721.1/162570" rel="alternate"/>
<author>
<name>Marsan, Thibault</name>
</author>
<author>
<name>Clauzade, Sacha</name>
</author>
<author>
<name>Zhang, Xiang</name>
</author>
<author>
<name>Grandin, Nicolas</name>
</author>
<author>
<name>Urman, Tatiana</name>
</author>
<author>
<name>Linton, Evan</name>
</author>
<author>
<name>Elsayed-Aly, Ingy</name>
</author>
<author>
<name>Ricciardi, Catherine E.</name>
</author>
<author>
<name>Temporelli, Robin</name>
</author>
<id>https://hdl.handle.net/1721.1/162570</id>
<updated>2026-03-08T03:24:21Z</updated>
<published>2025-08-18T00:00:00Z</published>
<summary type="text">REEV SENSE IMUs for Gait Analysis in Stroke: A Clinical Study on Lower Limb Kinematics
Marsan, Thibault; Clauzade, Sacha; Zhang, Xiang; Grandin, Nicolas; Urman, Tatiana; Linton, Evan; Elsayed-Aly, Ingy; Ricciardi, Catherine E.; Temporelli, Robin
Human gait analysis is essential for clinical evaluation and rehabilitation monitoring, particularly in post-stroke individuals, where joint kinematics provide valuable insights into motor recovery. While optical motion capture (OMC) is the gold standard, its high cost and restricted use in laboratory settings limit its accessibility. This study aimed to evaluate the accuracy of REEV SENSE, a novel magnetometer-free inertial measurement unit (IMU), in capturing knee and ankle joint angles during overground walking in post-stroke individuals using assistive devices. Twenty participants with chronic stroke walked along a 10-m walkway with their usual assistive device (cane or walker), while joint kinematics were simultaneously recorded using OMC and IMUs. Agreement between the systems was assessed using the mean absolute error, root mean square error, 95% confidence intervals, and Pearson’s correlation coefficient. Knee angles measured with the IMUs showed a strong correlation with the OMC (r &gt; 0.9) and low errors (MAE &lt; 5°), consistent with clinical acceptability. Ankle angle accuracy was lower for participants using walkers, while knee measurements remained stable regardless of the assistive device. These findings demonstrate that REEV SENSE IMUs provide clinically relevant kinematic data and support their use as a practical wearable tool for gait analysis in real-world or remote clinical settings.
</summary>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-Economic Analysis of Decarbonized Backup Power Systems Using Scenario-Based Stochastic Optimization</title>
<link href="https://hdl.handle.net/1721.1/162569" rel="alternate"/>
<author>
<name>Schweiger, Jonas</name>
</author>
<author>
<name>Macdonald, Ruaridh</name>
</author>
<id>https://hdl.handle.net/1721.1/162569</id>
<updated>2026-03-08T03:24:22Z</updated>
<published>2025-08-18T00:00:00Z</published>
<summary type="text">Techno-Economic Analysis of Decarbonized Backup Power Systems Using Scenario-Based Stochastic Optimization
Schweiger, Jonas; Macdonald, Ruaridh
Energies 2025, 18(16), 4388; https://doi.org/10.3390/en18164388
In the context of growing concerns about power disruptions, grid reliability and the need for decarbonization, this study evaluates a broad range of clean backup power systems (BPSs) to replace traditional emergency diesel generators. A scenario-based stochastic optimization framework using actual load profiles and outage probabilities is proposed to assess the most promising options from a pool of 27 technologies. This framework allows a comparison of the cost effectiveness and environmental impact of individual technologies and hybrid BPSs across various scenarios. The results highlight the trade-off between total annual system cost and emissions. Significant emission reductions can be achieved at moderate cost increases but deep decarbonization levels incur higher costs. Primary and secondary batteries are included in optimal clean fuel-based systems across all decarbonization levels, combining cost-effective power delivery and long-term storage benefits. The findings highlight the often-overlooked importance of fuel replacement on both emissions and costs. Among the assessed technologies, ammonia generators and hydrogen fuel cells combined with secondary iron–air batteries emerge as cost-effective solutions for achieving decarbonization goals. To ensure a broad range of applicability, the study outlines the impact of emergency fuel purchases, varying demand patterns and demand response options on the optimal BPS. The research findings are valuable for optimizing the design of clean BPSs to economically meet the needs of many facility types and decarbonization targets.
</summary>
<dc:date>2025-08-18T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time-Marching Quantum Algorithm for Simulation of Nonlinear Lorenz Dynamics</title>
<link href="https://hdl.handle.net/1721.1/162568" rel="alternate"/>
<author>
<name>Koukoutsis, Efstratios</name>
</author>
<author>
<name>Vahala, George</name>
</author>
<author>
<name>Soe, Min</name>
</author>
<author>
<name>Hizanidis, Kyriakos</name>
</author>
<author>
<name>Vahala, Linda</name>
</author>
<author>
<name>Ram, Abhay K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162568</id>
<updated>2026-03-08T03:24:23Z</updated>
<published>2025-08-17T00:00:00Z</published>
<summary type="text">Time-Marching Quantum Algorithm for Simulation of Nonlinear Lorenz Dynamics
Koukoutsis, Efstratios; Vahala, George; Soe, Min; Hizanidis, Kyriakos; Vahala, Linda; Ram, Abhay K.
Simulating nonlinear classical dynamics on a quantum computer is an inherently challenging task due to the linear operator formulation of quantum mechanics. In this work, we provide a systematic approach to alleviate this difficulty by developing an explicit quantum algorithm that implements the time evolution of a second-order time-discretized version of the Lorenz model. The Lorenz model is a celebrated system of nonlinear ordinary differential equations that has been extensively studied in the contexts of climate science, fluid dynamics, and chaos theory. Our algorithm possesses a recursive structure and requires only a linear number of copies of the initial state with respect to the number of integration time-steps. This provides a significant improvement over previous approaches, while preserving the characteristic quantum speed-up in terms of the dimensionality of the underlying differential equations system, which similar time-marching quantum algorithms have previously demonstrated. Notably, by classically implementing the proposed algorithm, we showcase that it accurately captures the structural characteristics of the Lorenz system, reproducing both regular attractors (limit cycles) and the chaotic attractor within the chosen parameter regime.
</summary>
<dc:date>2025-08-17T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Formal Definition of Scale-Dependent Complexity and the Multi-Scale Law of Requisite Variety</title>
<link href="https://hdl.handle.net/1721.1/162567" rel="alternate"/>
<author>
<name>Siegenfeld, Alexander F.</name>
</author>
<author>
<name>Bar-Yam, Yaneer</name>
</author>
<id>https://hdl.handle.net/1721.1/162567</id>
<updated>2026-03-08T03:24:24Z</updated>
<published>2025-08-06T00:00:00Z</published>
<summary type="text">A Formal Definition of Scale-Dependent Complexity and the Multi-Scale Law of Requisite Variety
Siegenfeld, Alexander F.; Bar-Yam, Yaneer
Ashby’s law of requisite variety allows a comparison of systems with their environments, providing a necessary (but not sufficient) condition for system efficacy: A system must possess at least as much complexity as any set of environmental behaviors that require distinct responses from the system. However, to account for the dependence of a system’s complexity on the level of detail—or scale—of its description, a multi-scale generalization of Ashby’s law is needed. We define a class of complexity profiles (complexity as a function of scale) that is the first, to our knowledge, to exhibit a multi-scale law of requisite variety. This formalism provides a characterization of multi-scale complexity and generalizes the law of requisite variety’s single constraint on system behaviors to a class of multi-scale constraints. We show that these complexity profiles satisfy a sum rule, which reflects a tradeoff between smaller- and larger-scale degrees of freedom, and we extend our results to subdivided systems and systems with a continuum of components.
</summary>
<dc:date>2025-08-06T00:00:00Z</dc:date>
</entry>
<entry>
<title>High Precision Binary Trait Association on Phylogenetic Trees</title>
<link href="https://hdl.handle.net/1721.1/162565" rel="alternate"/>
<author>
<name>Balogun, Ishaq O.</name>
</author>
<id>https://hdl.handle.net/1721.1/162565</id>
<updated>2025-08-28T03:07:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High Precision Binary Trait Association on Phylogenetic Trees
Balogun, Ishaq O.
Understanding how genetic variation drives microbial phenotypes is fundamental to advancing microbiology, particularly in pathogenicity, drug resistance, and host adaptation. Traditional genome-wide association study (GWAS) methods fail to account for shared evolutionary history, confounding association analyses. Microbial GWAS approaches emerged to address this, but modern methods often lack the statistical power to detect associations while controlling false discoveries, and face computational limits at scale. Here, we present SimPhyNI (Simulation-based Phylogenetic iNteraction Inference), a computational framework for detecting binary trait-trait associations in microbial populations.

SimPhyNI uses stochastic simulations of trait evolution on phylogenetic trees to detect positive and negative associations with high precision and recall. Benchmarking on large synthetic datasets, SimPhyNI achieved a precision-recall AUC (PR AUC) of 0.987 and 0.975 for positive and negative interactions, respectively, indicating near-perfect discrimination of true from neutral associations. Competing methods showed substantially lower performance, especially for negative associations. We further applied SimPhyNI to empirical datasets, recovering known biology and generating plausible hypotheses for novel mechanisms.

Though tested here on binary traits, SimPhyNI's design supports future extension to multi-state and continuous traits using generalized models. Its high recall also makes it well-suited for constructing gene interaction networks and identifying co-evolving trait modules. By combining evolutionary modeling with scalable statistics, SimPhyNI advances our ability to uncover the genetic interactions that drive microbial function, ecology, and disease.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices</title>
<link href="https://hdl.handle.net/1721.1/162564" rel="alternate"/>
<author>
<name>Rajan, Neena E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162564</id>
<updated>2025-08-28T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices
Rajan, Neena E.
The medical device industry, governed by a tight regulatory landscape, often relies heavily on structured Product Development Processes (PDPs) to bring innovative solutions to market. These structured processes create significant challenges when integrating technological innovations that emerge in the later stages of the development cycle. This study explores the complexities of this "innovation paradox" within large United States-based medical device corporations, examining how the rigidity of traditional PDP models affects the incorporation of innovative changes to in-flight projects. Drawing upon insights from a comprehensive literature review and a quantitative analysis utilizing a Monte Carlo simulation, this research highlights the impact of integrating an innovative change on the overall project timeline and cost. The simulation results show that introducing innovative changes to the PDP typically extends project timelines and increases total net present costs, with both effects influenced by the timing of the change and its technological maturity. Introducing changes in later project phases significantly increases both duration and cost compared to earlier phases. Changes with lower technological maturity lead to greater duration and cost escalations, especially when introduced late in the development cycle. To balance regulatory requirements and PDP agility, large medical device companies can adopt hybrid PDP models, establish dedicated innovation assessment teams, create flexible product designs, and focus on value-driven innovations that meet patient and market needs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Savaal: A system for automatically generating high-quality questions from unseen documents</title>
<link href="https://hdl.handle.net/1721.1/162563" rel="alternate"/>
<author>
<name>Chandler, Joseph A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162563</id>
<updated>2025-08-28T03:07:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Savaal: A system for automatically generating high-quality questions from unseen documents
Chandler, Joseph A.
Assessing human understanding through exams and quizzes is fundamental to learning and advancement in both educational and professional settings. However, current solutions to automate the generation of challenging questions from educational materials and documents are insufficient, resulting in superficial or often irrelevant questions. While LLMs have been shown to excel at tasks like question answering, their use for question generation remains underexplored for general domains and at scale. This work presents Savaal, a scalable question-generation system that generates higher-order questions from documents, along with a real-world system implementation for general use. Savaal accomplishes three goals: (i) scalability, generating hundreds of questions from any document; (ii) depth of understanding, synthesizing higher-order concepts to test learners' understanding of the material; and (iii) domain independence, generalizing broadly to any field. Rather than naively providing the entire document in context to an LLM, Savaal breaks down question generation into a three-stage pipeline. We demonstrate that Savaal outperforms a direct-prompting baseline as evaluated by 76 human experts on 71 documents across conference papers and PhD dissertations. We additionally contribute a general system for serving Savaal in real-world scenarios. We demonstrate that our system is scalable, enabling fault-tolerant, horizontal scaling of each individual component in response to fluctuations in usage. Moreover, our architecture supports interactive use and group collaboration, reflecting real-world organizations such as classrooms or enterprises. We hope that the system enables scalable question generation for educational and corporate use cases.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation</title>
<link href="https://hdl.handle.net/1721.1/162562" rel="alternate"/>
<author>
<name>Terakado, Daiki</name>
</author>
<id>https://hdl.handle.net/1721.1/162562</id>
<updated>2025-08-28T03:07:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation
Terakado, Daiki
This thesis presents a new integrated framework for evaluating in-space refueling architectures, focusing on their application to human space missions such as Artemis. The framework tightly couples vehicle sizing with a boil-off control model, allowing the evaluation of various combinations of propellant types, refueling locations, and boil-off controls. The model captures the dynamic interdependence among the components of the refueling system, the transport vehicle, the refueler, and the depot, using an iterative approach to ensure consistent mass estimates across configurations.&#13;
&#13;
The framework is applied to analyze human landing system (HLS) architectures with refueling in cis-lunar space. The key findings highlight the mass-savings benefits of cryocoolers, the benefits of the high Isp of LOX/LH2, the acceptable ΔV requirements enabled by NRHO refueling, and the positive and negative effects of reusability on mass and mission time. Furthermore, the study indicates that the number of required refueling events is more sensitive to payload and refueler capacity than to boil-off losses.&#13;
&#13;
To extend the framework toward long-term, scalable transportation solutions, the thesis compiles a comprehensive set of figures of merit (FoMs) and discusses future model extensions, including risk, ISRU, and electric propulsion. Limitations such as the lack of reusable-configuration flexibility and insufficient support for Mars mission parameters are identified as areas for future development. This work provides a foundational framework for the exploration of refueling architectures and solid next steps toward designing sustainable and scalable human space transportation systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedded Software-Defined Radio Architectures for 6G Cellular Networks</title>
<link href="https://hdl.handle.net/1721.1/162561" rel="alternate"/>
<author>
<name>Urbonas, Jonas</name>
</author>
<id>https://hdl.handle.net/1721.1/162561</id>
<updated>2025-08-28T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Embedded Software-Defined Radio Architectures for 6G Cellular Networks
Urbonas, Jonas
Over the past decades, the widespread adoption of wireless communication technologies in the industrial, scientific, medical, defense, and commercial sectors has resulted in substantial advancements in digital radio technologies. Each new generation of cellular technology, beginning with 1G, has introduced novel use-case scenarios that have challenged the performance of the prevailing digital radio architectures. The newly proposed scenarios for 5G-Advanced and for the upcoming 6G cellular networks, due to be standardized by 2030, are no exception. The emerging 6G network components, such as space-air-ground integrated cell-less networks and the artificial-intelligence-native network architecture, drive the demand for flexible, fully reconfigurable radio units supporting multi-GHz instantaneous signal bandwidths, frequency-agile radio architectures covering multi-octave frequency ranges, and highly sensitive receivers.&#13;
&#13;
To support these requirements, software-defined radios (SDRs) are becoming an essential building block of next-generation radio networks. This thesis presents a review of software-defined radio technology, examines its history, proposes requirements for SDR units in 6G cellular networks, and presents a quantitative performance analysis of over 2 million distinct SDR architectures that could be used in 6G communication networks. It does so by defining the key system architectural decisions and their options, including data converter, filter, mixer, and amplifier technologies. It also examines different radio transmitter and receiver architectural topologies, including baseband sampling, IF sampling, direct RF sampling, and fully digital RFSoC, and constructs a multi-attribute utility (MAU) to quantify system performance. The MAU is used to build a tradespace of SDR architectures, enabling identification of the Pareto frontier. Analysis of SDR system architectures on the Pareto frontier reveals that the performance of direct RF sampling architectures is highly competitive with industry-standard IF sampling. The tradespace is also used to analyze the sensitivity of system performance to individual architectural decisions via a main-effects analysis, allowing quantification of the connectivity and sensitivity of available architectural decisions. The sensitivity analysis reveals that system performance is highly sensitive to receiver architectural decisions, particularly analog-to-digital converters, indicating the need for continued advances in this technology to produce high-performance SDR systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Design of Architected Lattices for Construction Applications</title>
<link href="https://hdl.handle.net/1721.1/162560" rel="alternate"/>
<author>
<name>Leamon, Sophie</name>
</author>
<id>https://hdl.handle.net/1721.1/162560</id>
<updated>2025-08-28T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Design of Architected Lattices for Construction Applications
Leamon, Sophie
Architected lattices have been utilized in aerospace and research applications for their modularity, scalability, reconfigurability, and high strength-to-weight properties. However, voxels have yet to find widespread integration in the residential or commercial construction industry because of the industry’s distinct system needs. This study identifies the pain points unique to the construction industry that have slowed or prevented the adoption of new practices, highlighting the reliance on known materials and methods, as well as the need for transparency in the design process, as major hurdles to innovation in the industry. It then presents a computational approach to designing architected lattices that addresses these core issues by making building with architected lattice structures agnostic to material and manufacturing methodology. Three open-source computational approaches to architectural design are proposed: 1) integration of support structures for additively manufactured structures; 2) parametric design of voxels from 2D material, their manufacturing molds, and optional alignment features; and 3) generation of two-dimensional cut files for assembly with 3D-printable joinery. These files are computationally designed and arranged for instantaneous production to demystify the lattice architectural design process, establish a pathway for utilizing all available materials in lattice construction, reduce the overhead costs of experimenting with lattice structures, and eliminate barriers to fabrication by enabling accessible manufacturing methods.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations</title>
<link href="https://hdl.handle.net/1721.1/162559" rel="alternate"/>
<author>
<name>Delkowski, Michal</name>
</author>
<id>https://hdl.handle.net/1721.1/162559</id>
<updated>2025-08-28T03:07:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations
Delkowski, Michal
This thesis examines the strategic, technical, and economic feasibility of China’s two flagship low Earth orbit (LEO) satellite megaconstellation programs, Guowang and Qianfan, in the context of the rapidly evolving global satellite communication (Satcom) market. Against the backdrop of SpaceX’s Starlink dominance and intensifying geopolitical competition, China’s efforts represent not only a telecommunications infrastructure push but also a broader assertion of technological sovereignty and global influence. This study uses a scenario-based analysis that integrates system throughput analysis and financial forecasting. Three deployment scenarios (base, optimistic, and pessimistic) are analyzed, accounting for satellite production rates, launch capabilities, and regional adoption patterns, particularly across Belt and Road Initiative (BRI) markets. The study also evaluates "system-of-systems" integration with China’s military objectives and spectrum coordination challenges. Key findings reveal that Guowang becomes marginally viable only in the optimistic scenario, assuming deployment of at least 9,000 satellites, reduced satellite unit costs (targeting ~$300,000 per satellite), expanded gateway infrastructure, and realization of these targets by 2035, while remaining unviable in the base and pessimistic cases. Qianfan faces greater commercial risk, achieving viability only with early adoption in BRI countries and government dual-use contracts, and incurring a pessimistic-case NPV loss exceeding $76B. Resource allocation problem (RAP) modeling suggests that projected throughput may saturate early without major gateway expansion. Both constellations require China to scale reusable rockets and sustain a combined annual launch rate exceeding 1,000 satellites by the early 2030s. Neither constellation meets China’s 2030 rural broadband targets under base-case conditions: over 40% of the 336M unconnected citizens remain underserved without terminal subsidies.
Ultimately, China’s LEO Satcom strategy depends not on satellite count alone but on coordinated progress in launch economics, affordability, dual-use policy, and international partnerships.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of CPG budgets in Retailer-led marketing programs</title>
<link href="https://hdl.handle.net/1721.1/162558" rel="alternate"/>
<author>
<name>Gandhi, Abhinav</name>
</author>
<id>https://hdl.handle.net/1721.1/162558</id>
<updated>2025-08-28T03:07:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization of CPG budgets in Retailer-led marketing programs
Gandhi, Abhinav
Grocery retailers and Consumer Packaged Goods (CPG) companies have a symbiotic relationship. Retailers need CPGs to supply the products, and CPGs need retailers’ customers to grow their brands. Since shelf space is limited, CPGs offer trade and marketing funds to prominently feature their brands.&#13;
As part of loyalty programs, retailers offer customers coupons that are often funded by CPGs. In return, CPGs expect a return on their investment (ROI). Since budgets are limited and are also expected to be fully utilized, the retailer faces the challenge of finding the right size of mailer that balances costs and relevance to customers. This thesis explores how knapsack problems can be used in a non-adaptive setting to help maximize the reach of print and email campaigns.&#13;
Drawing inspiration from existing literature, multiple simulations were set up to evaluate budget-constrained allocation and compare two approaches: the multiple-choice knapsack (MCK) and a greedy algorithm. To account for uncertainty in redemption, the newsvendor model was also explored to assess whether over-allocation can improve budget utilization and increase reach. The preliminary findings are promising and provide a setting for further research.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration of design strategies and optimization for efficient mass timber structures as a function of column position</title>
<link href="https://hdl.handle.net/1721.1/162557" rel="alternate"/>
<author>
<name>Gerken, Christoph</name>
</author>
<id>https://hdl.handle.net/1721.1/162557</id>
<updated>2025-08-28T03:07:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploration of design strategies and optimization for efficient mass timber structures as a function of column position
Gerken, Christoph
The building sector is responsible for a large share of global carbon emissions. As the load-bearing structure is particularly material-intensive, a decisive shift can be achieved by improving its design and decreasing its volume. This thesis examines how structural mass timber floor systems can be designed in an efficient, low-waste manner through a design-oriented approach that is immediately applicable within the context of conventional construction techniques and building practices. Reducing material in timber structures has economic and ecological benefits: reduced timber demand entails significant cost savings and decreased building weight, which considerably cuts embodied carbon.&#13;
Since common floor systems act mainly in bending, this work focuses on reducing moment forces in standard setups comprised of timber slabs, beams, and columns. In principle, bending forces in beams and slabs can be reduced by moving the supports inwards, leading to overhanging structural elements. The original method presented in this thesis shows how this approach applies to conventional mass timber floor systems. This work provides an understanding of how informed column positioning can take advantage of this behavior and allows for material and embodied carbon reduction through design. The consequent architectural implications of the resulting irregular column grid are explored in a floor plan design suggestion.&#13;
Material demand and embodied carbon are evaluated as a function of column position through finite element analysis and optimization as part of a computational model. By consulting a mass timber manufacturer’s catalogue to assign appropriate products to structural members, this approach enables material reduction in the design process rather than in production. Bypassing slow-changing, inert fabrication procedures, this method can be realized immediately.&#13;
This work identifies the optimal support position for reducing bending forces in beams and slabs as 41% of the distance from the element’s edge to its midspan. Furthermore, this research finds that the impact of ideal column positioning on material efficiency depends on the required minimum effective spans. While negligible in the absence of constraints, informed column positioning can reduce timber demand by 20% and embodied carbon by 16% when subjected to a minimum effective span requirement of 6 m – a common span in timber construction – in a building of 30x30 m and five floors. Building dimensions are found to have an insignificant impact on these results.&#13;
This thesis illustrates the potential for architects and engineers to enhance structural efficiency of mass timber floor systems merely by deviating from the usual, regular column grid and taking advantage of straightforward structural principles through design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for the Condition Assessment of Concrete Bridges</title>
<link href="https://hdl.handle.net/1721.1/162556" rel="alternate"/>
<author>
<name>Fayad, Fred</name>
</author>
<id>https://hdl.handle.net/1721.1/162556</id>
<updated>2025-08-28T03:07:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning for the Condition Assessment of Concrete Bridges
Fayad, Fred
The assessment of concrete bridge conditions is critical for ensuring structural integrity and public safety. Traditional inspection methods, which rely heavily on visual inspections and manual assessments, are time-consuming, subjective, and prone to human error. With the increasing number of aging bridges worldwide, there is a growing need for more efficient and accurate methods to assess bridge health. This thesis aims to explore the application of machine learning techniques for automating the bridge condition assessment process and improving the accuracy and reliability of bridge evaluations.&#13;
 This study investigates the development and implementation of a model consisting of two machine learning algorithms to predict the condition of concrete bridges based on data collected from various public sources. The first algorithm appraises the structural health of a bridge based on its bridge rating, and the second assesses the condition of a bridge after a specific failure mechanism. Specifically, this work applies classification algorithms such as Random Forest (RF), XGBoost, and Neural Networks (NN) to both tasks.&#13;
 The results of this study demonstrate that machine learning models can perform reasonably well in predicting bridge conditions; the overall model achieved a testing accuracy of 79%. This research contributes to the field of civil engineering by showcasing the potential of machine learning in infrastructure management. By automating the assessment process, the proposed models can help reduce the time and cost of inspections while providing more accurate data to guide maintenance planning and bridge rehabilitation efforts. Future work will focus on further optimizing the models, incorporating additional data sources, and deploying the system for real-time bridge monitoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics</title>
<link href="https://hdl.handle.net/1721.1/162555" rel="alternate"/>
<author>
<name>Van Note, Lana</name>
</author>
<id>https://hdl.handle.net/1721.1/162555</id>
<updated>2025-08-28T03:08:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics
Van Note, Lana
Nutrient cycling is an important component of plants’ immune systems, largely driven by the exudation of environmentally influential metabolites from roots. Root exudation may be driven by multiple distinct mass-transport mechanisms, including active and passive transport, though the latter is not well studied despite being labelled a significant driver of low-molecular-weight metabolite exudation. This research investigates the generally accepted assumption that low-molecular-weight metabolites, including iron-fixing coumarins (scopoletin, fraxetin, etc.), are primarily exuded passively, while high-molecular-weight metabolites follow an active exudation route. Scopoletin and scopolin exudation from Arabidopsis thaliana in low-iron and replete conditions is quantified to determine whether the hypothesized passive diffusion mechanism is a significant contributor to coumarin exudation. LC-MS analysis suggests that passive diffusion of scopoletin and scopolin from roots plays a significant role in total coumarin exudation. Further research should investigate the implications of passive coumarin exudation for long-term iron storage and soil health, in addition to the relationship between coumarin production and exudation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Offshore floating solar with compressed air storage as a baseload power plant for a data center</title>
<link href="https://hdl.handle.net/1721.1/162554" rel="alternate"/>
<author>
<name>Athanasopoulos, Panagiotis Rafail</name>
</author>
<id>https://hdl.handle.net/1721.1/162554</id>
<updated>2025-08-28T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Offshore floating solar with compressed air storage as a baseload power plant for a data center
Athanasopoulos, Panagiotis Rafail
This thesis presents the conceptual design, technical modeling, and economic analysis of a novel offshore floating solar energy system integrated with Compressed Air Energy Storage (CAES) for reliable baseload power delivery to coastal data centers. The system architecture is modular, consisting of multiple “powercells,” each comprising a 5×5 photovoltaic (PV) array mounted above a matrix of submerged compressed air storage cylinders anchored below the floating platform, addressing the energy resilience and spatial constraints of coastal computing infrastructure. This scalable configuration enables distributed energy collection and localized storage, tailored to meet site-specific demands. Detailed thermodynamic modeling of both charging and discharging cycles is conducted, with analytical solutions validated against a full numerical implementation. Results show that under realistic operating assumptions, the temperature inside the storage vessels remains nearly isothermal due to the long charging duration and large heat exchange surface, enabling a simplified energy balance model.&#13;
&#13;
A techno-economic analysis evaluates both structural steel requirements and photovoltaic investment, benchmarked against market data from 2024. Key metrics such as structural cost per unit energy ($/kWh) and per rated power output ($/kW) are derived. The hybrid system is found to be economically competitive with lithium-ion (Li-ion) battery alternatives, offering extended lifespan (20–30 years), lower material costs, and enhanced sustainability through avoidance of critical minerals. Environmental and mooring considerations for offshore deployment are also addressed, demonstrating the feasibility of integrating energy generation, storage, and maritime infrastructure. This work advances the development of resilient, decarbonized energy systems aligned with global renewable energy targets and the rising demand for sustainable data center operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis</title>
<link href="https://hdl.handle.net/1721.1/162552" rel="alternate"/>
<author>
<name>Brower, Braden C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162552</id>
<updated>2025-08-28T03:08:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis
Brower, Braden C.
United States Navy Refueling and Complex Overhauls (RCOHs), and other extended maintenance availabilities, present uniquely demanding environments where Sailors face elevated risks for destructive behaviors, including suicide and substance abuse. Prolonged exposure to harsh industrial conditions, significantly degraded Quality of Service, demanding workloads, and critical manning shortfalls create cumulative stress distinct from operational duty. These destructive behaviors severely impact personnel’s well-being, erode force readiness through attrition and morale issues, and indicate systemic contributing factors as highlighted by recent investigations into carrier suicides during shipyard periods.&#13;
&#13;
This thesis utilizes Causal Analysis based on Systems Theory (CAST), grounded in systems thinking, to analyze the USS George Washington RCOH events and identify the underlying safety control structure flaws that contributed to this hazardous environment. Insights from the CAST analysis were then integrated with a qualitative System Dynamics model to better understand the feedback loops and dynamic interactions driving system behavior, particularly revealing a capability trap dynamic exacerbated by resource constraints and personnel pressures.&#13;
&#13;
The analysis identified critical, interacting systemic flaws across multiple organizational levels that contributed to the accident: (a) inadequate strategic resourcing and manning prioritization for RCOH personnel support, (b) deficient planning, risk management, and oversight processes that were ineffective at protecting Sailor well-being amidst budget and schedule pressures, (c) ineffective feedback mechanisms that prevented critical information from reaching decision-makers, and (d) reliance on flawed assumptions regarding the RCOH environment, Sailor resilience, and standard process adequacy. Based on these findings, the thesis provides actionable, systemically focused recommendations aimed at strengthening the Navy's safety control structure by improving decision makers’ mental models, enhancing feedback and oversight, enforcing well-being constraints, and fostering organizational learning. Combined, these recommendations empower leaders to proactively manage risks, reduce destructive behaviors, and ensure a safer, more resilient environment during future RCOHs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers</title>
<link href="https://hdl.handle.net/1721.1/162551" rel="alternate"/>
<author>
<name>Hoyt, Thomas S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162551</id>
<updated>2025-08-28T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers
Hoyt, Thomas S.
Flooding events pose a significant and growing threat to communities in the United States, particularly as climate change alters weather patterns and sea levels continue to rise. This thesis examines how the U.S. Army Corps of Engineers (USACE) can enhance community preparedness for flood emergencies through improved risk communication strategies. Focusing on the New England District as a representative case, it integrates data from the Federal Emergency Management Agency’s (FEMA) National Household Survey and the National Flood Insurance Program (NFIP) claims archive to develop and calibrate a System Dynamics model of flood risk perception and preparedness.&#13;
The model built in this thesis incorporates key variables and captures the feedback loops that influence community preparedness over time. Scenario testing demonstrates that monthly to quarterly engagements by USACE help sustain risk awareness and reduce flood-related damage, whereas less frequent engagement yields minimal improvement over the baseline. By contrast, barriers to action, such as complex procedures or limited access to information, can substantially slow the adoption of preparedness measures. High levels of trust in authorities further amplify the effectiveness of risk communication and foster community engagement.&#13;
This model quantifies the importance of frequent engagement, low barriers to action, and trust-building initiatives in reducing flood impact. Through calibration against historical claims and survey data, the model provides a robust framework that can guide USACE and partner agencies in refining their own flood risk communication strategies to bolster community resilience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement</title>
<link href="https://hdl.handle.net/1721.1/162550" rel="alternate"/>
<author>
<name>Stribos, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/162550</id>
<updated>2025-08-28T03:08:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement
Stribos, Sophia
Concrete remains one of the most widely used construction materials due to its strength, durability, and availability. However, it is responsible for a large share of global carbon emissions: within the 40% of global emissions attributed to the building sector, the production of cement, a key component of concrete, alone accounts for 5-8%. As the construction industry seeks innovations toward sustainable practices, alternative beam designs that improve material efficiency and introduce nontraditional reinforcement systems are emerging as promising options. However, accurate structural models capable of predicting and validating the performance of these innovative beams are often lacking, limiting their implementation in industry, primarily due to safety and code-compliance concerns.&#13;
This thesis bridges this gap by developing and validating a structural engineering model to predict the shear and flexural capacities and the deflection of irregular, efficiently shaped concrete beams, including those with alternative reinforcement and formwork. The model discretizes a 3D beam geometry into 2D sections to perform a geometric and structural cross-sectional analysis along the beam’s length. The structural engineering model is applied to two case studies: a topology-optimized steel-reinforced concrete beam and an integrated knit textile reinforced concrete beam, using experimentally measured material properties and beam testing data. The predicted engineering model results are compared against experimental data to validate the model’s accuracy.&#13;
While the model accurately captured the behavior of the topology-optimized steel-reinforced beam, it slightly overestimated the strength of the knit-textile-reinforced beam. For the topology-optimized beam, the engineering model closely matched the flexural capacity and gave slightly conservative estimates of shear and deflection due to the nature of the design equations. For the integrated knit textile beam, however, the model showed a minor overprediction of flexural capacity and deflection. These discrepancies were linked to inaccurate material properties, experimental imperfections, and prestressing effects. Further beam analyses using this model are needed to establish its accuracy and reliability.&#13;
This research advances structural design by offering a tool for predicting the capacity and serviceability of irregular, efficiently shaped concrete beams, including those with alternative reinforcement. This thesis enables designers to validate and optimize their innovative beam designs and support their ideas as sustainable solutions within the concrete construction industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Decarbonization Pathways of Japan</title>
<link href="https://hdl.handle.net/1721.1/162549" rel="alternate"/>
<author>
<name>Suto, Sadami</name>
</author>
<id>https://hdl.handle.net/1721.1/162549</id>
<updated>2025-08-28T03:08:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Assessment of Decarbonization Pathways of Japan
Suto, Sadami
Developing realistic pathways for decarbonization is crucial for the success of climate change mitigation actions. To evaluate Japan’s pathways toward achieving carbon neutrality, this study enhances the MIT Economic Projection and Policy Analysis (EPPA) model and analyzes a suite of policy scenarios that combine domestic mitigation measures, such as emissions targets from Japan’s updated Nationally Determined Contribution (NDC), power mix goals, and availability of carbon capture and storage (CCS), with international emissions trading. The impacts on CO₂ emissions, GDP, consumption, carbon prices, and sectoral output in Japan between 2030 and 2050 are assessed.&#13;
&#13;
Under the baseline scenario, emissions remain flat over time at about 1,000 MtCO₂e, far exceeding the carbon neutrality goal. Even when Japan’s 2030 and 2040 NDC targets for CO₂ and the power mix are fully achieved, residual emissions of 100 – 200 MtCO₂e remain, creating a need for carbon offsets. Relying on domestic-only measures is costly for Japan: in high-ambition domestic-only scenarios without CCS, carbon prices soar to over $46,000/tCO₂ by 2050, leading to GDP losses exceeding $1.5 trillion (23% of GDP) and significant contractions in key sectors of the economy.&#13;
&#13;
In contrast, scenarios incorporating international emissions trading enable Japan to achieve comparable total emissions reductions by partially relying on imported carbon credits. This mechanism significantly lowers marginal abatement costs, allowing carbon prices to stabilize at $20–$30/tCO₂ and reducing GDP losses to about $100 billion (1.6% of GDP) by 2050.&#13;
&#13;
Scenarios that emphasize domestic reductions while flexibly using international credits emerge as manageable pathways. These scenarios achieve domestic emissions reductions of 40–60% by 2050, with carbon prices ranging from $140 to $340/tCO₂ and GDP losses contained between $150 and $290 billion (2.3% and 4.3% of GDP). Importantly, these scenarios incorporate the deployment of CCS, which plays a critical role in reducing marginal costs and enabling deeper abatement in hard-to-decarbonize sectors. Most industrial sectors maintain stable output, while carbon-intensive sectors undergo gradual structural transitions.&#13;
&#13;
Overall, these findings suggest that Japan can achieve carbon neutrality through an integrated strategy that combines strengthened domestic action, technological deployment, and international cooperation. This study provides a robust quantitative foundation for designing feasible, equitable, and cost-effective climate policies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience</title>
<link href="https://hdl.handle.net/1721.1/162548" rel="alternate"/>
<author>
<name>Ren, Daisy</name>
</author>
<id>https://hdl.handle.net/1721.1/162548</id>
<updated>2025-08-28T03:08:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience
Ren, Daisy
Due to the rise in global traffic in recent years, bridge failures caused by impact effects are an increasing concern, especially for aging infrastructure. Following the recent collapse of the Francis Scott Key Bridge, questions about bridge vulnerabilities and design deficiencies arose, highlighting the need for better design codes and protection for bridge piers. This study addresses these issues by examining bridges' impact-related structural failure mechanisms and developing a comprehensive optimization framework to enhance the resilience of structures to dynamic impact forces in three phases: (i) statistical analysis of bridge failure data from the Multidisciplinary Center for Earthquake Engineering Research (MCEER), focusing on the frequency, bridge types, and bridge material trends associated with different bridge failures across the United States; (ii) development of a compliance-based truss optimization in MATLAB, applied to 2D representations of pier structures for different truss configurations (2×3, 3×4, 3×5) under stress, load, and volume constraints to simulate large-magnitude impact conditions; and (iii) design and validation of the optimization results through mathematical calculations of compliance and strain energy to ensure consistency between numerical results and structural mechanics principles. Both fail-safe and shape optimization strategies are employed and compared across all truss configurations, revealing distinct design methodologies between maximum- and minimum-compliance optimizations and the trade-offs between stiffness and energy dissipation. Maximum-compliance designs demonstrate increased redundancy and strain energy capacity, while minimum-compliance designs show increased efficiency but are more prone to brittle failure. 
The final study utilizing volume constraints further examined material distribution under realistic impact loads and highlighted the importance of distributed load paths and deformation capacity in structural performance. This work provides a design framework for energy-absorbing pier geometries and aims to offer insight into improving current design standards for pier designs to account for extreme events and help guide retrofitting efforts that could prevent future failures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computing Economic Equilibria and Their Applications to&#13;
Market Games</title>
<link href="https://hdl.handle.net/1721.1/162547" rel="alternate"/>
<author>
<name>Bruce, Samuel G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162547</id>
<updated>2025-08-28T03:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computing Economic Equilibria and Their Applications to&#13;
Market Games
Bruce, Samuel G.
The emergence of new technologies such as e-payments, tokenized assets, distributed ledgers, smart contracts, and encryption has created new opportunities for improving access and equity in financial institutions. These new tools can be used to build better infrastructure and improve economic efficiency, especially in previously underdeveloped countries. Using these tools in various applications, however, requires an intimate link between economics and computer science to ensure an implementation that is both computationally efficient and improves social welfare. There has been significant research in computer science on the computation of economic equilibria, specifically Nash Equilibria and Correlated Equilibria. These algorithms, however, have not been used in many financial applications. Further, while research exists on various methods of computing Correlated Equilibria, little exploration has been done evaluating the quality of these equilibria in terms of economic efficiency in specific mechanisms. This work provides a sweeping view of the existing literature on equilibrium computation as well as an analysis of the economic and algorithmic tradeoffs of different approaches. The discussion begins with simple 2-player, finite-action games, then moves to more complex machine-learning-based methods for equilibrium computation in difficult settings. One of these methods is then extended to a limit-order market game explicitly described by Dubey [1] and implemented, with small modifications, by SPEEDEX [2]. This limit-order game offers a continuous, vector-valued action space with complex payoff functions, causing tension with many of the equilibrium computation algorithms explored previously. This paper identifies these tensions, then offers modifications to these algorithms that allow tractable, welfare-improving approximate Coarse Correlated Equilibrium computation. 
Finally, there is a discussion on future work which aims to generalize the developed framework. The code corresponding to the equilibria computation will be released publicly in this repository [3].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads</title>
<link href="https://hdl.handle.net/1721.1/162546" rel="alternate"/>
<author>
<name>Chang, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162546</id>
<updated>2025-08-28T03:08:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads
Chang, Ryan
SigmaOS is a multi-tenant cloud operating system designed for efficient orchestration of fault-tolerant, burst-parallel workloads. It provides users with isolated cloud environments called realms, where resources are accessed through a Unix-like filesystem interface, and supports applications built from procs—lightweight, rapidly-spawnable programs that can be either short-lived for bursty tasks or long-running and stateful for persistent services. However, the current prototype exhibits performance bottlenecks that hinder its scalability for larger, more demanding applications. This thesis addresses these limitations by introducing two key optimizations: (1) a rearchitected watch API, enhancing its efficiency and scalability for monitoring directory changes crucial for inter-proc coordination and event notification, and (2) a new ft/task server, providing a robust and high-performance mechanism for managing fault-tolerant bags of tasks, essential for applications like MapReduce. Through these enhancements, this work demonstrates significant improvements in SigmaOS’s performance on the MapReduce benchmark, showcasing improved scaling capabilities for larger cluster deployments, larger inputs, and more granular tasks. These optimizations are crucial steps towards enabling SigmaOS to effectively realize its vision as a scalable and performant platform for complex cloud workloads.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can a Global Climate Model Reproduce a Tornado Outbreak Atmospheric Pattern? Methodology and a Case Study</title>
<link href="https://hdl.handle.net/1721.1/162544" rel="alternate"/>
<author>
<name>Ćwik, Paulina</name>
</author>
<author>
<name>McPherson, Renee A.</name>
</author>
<author>
<name>Li, Funing</name>
</author>
<author>
<name>Furtado, Jason C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162544</id>
<updated>2026-03-08T03:24:15Z</updated>
<published>2025-07-30T00:00:00Z</published>
<summary type="text">Can a Global Climate Model Reproduce a Tornado Outbreak Atmospheric Pattern? Methodology and a Case Study
Ćwik, Paulina; McPherson, Renee A.; Li, Funing; Furtado, Jason C.
Tornado outbreaks can cause substantial damage, injuries, and fatalities, highlighting the need to understand their characteristics for assessing present and future risks. However, global climate models (GCMs) lack the resolution to explicitly simulate tornado outbreaks. As an alternative, researchers examine large-scale atmospheric ingredients that approximate tornado-conducive environments. Building on this approach, we tested whether patterns of covariability between WMAXSHEAR and 500-hPa geopotential height anomalies, previously identified in ERA5 reanalysis, could approximate major U.S. May tornado outbreaks in a GCM. We developed a proxy-based methodology by systematically testing pairs of thresholds for both variables to identify the combination that best reproduced the leading pattern selected for analysis. These thresholds were then applied to simulations from the high-resolution MPI-ESM1.2-HR model to assess its ability to reproduce the original pattern. Results show that the model closely mirrored the observed tornado outbreak pattern, as indicated by a low normalized root mean square error, high spatial correlation, and similar distributions. This study demonstrates a replicable approach for approximating tornado outbreak patterns, applied here to the leading pattern, within a GCM, providing a foundation for future research on how such environments might evolve in a warming climate.
</summary>
<dc:date>2025-07-30T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes</title>
<link href="https://hdl.handle.net/1721.1/162543" rel="alternate"/>
<author>
<name>Gomez, Samuel John</name>
</author>
<id>https://hdl.handle.net/1721.1/162543</id>
<updated>2025-08-28T03:07:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes
Gomez, Samuel John
When faced with complex disturbances, continuous manufacturing processes require robust control and adaptability to maintain product quality and operational efficiency. Although advanced control strategies such as linear quadratic regulator, model predictive control, and adaptive control have demonstrated strong performance, many industrial processes still rely predominantly on classical proportional-integral-derivative (PID) controllers because of their simplicity, ease of implementation, and sufficient results.&#13;
&#13;
This thesis investigates the effectiveness of data-driven modeling techniques in capturing system dynamics more accurately than traditional physics-based models. It further examines using a high-fidelity digital twin, constructed from experimental data via linear system identification and nonlinear deep learning (NARX) approaches, to optimize PID controller parameters through simulation-based gradient descent methods.&#13;
&#13;
A comprehensive experimental platform was developed to collect synchronized sensor and video data from a roll-to-roll continuous manufacturing system, specifically targeting disturbance scenarios that cause process interruptions. The digital twin created from these data was validated against physical experiments and shown to outperform conventional physics-based models when predicting the system’s dynamic response to disturbance inputs.&#13;
&#13;
Optimal control of the system was explored by implementing a virtual PID controller that closely replicates the physical controller. Optimal gain settings were identified through simulation and applied to the physical manufacturing process. The experimental results showed a significant reduction in the mean squared error and the maximum web deviation. These results demonstrate the substantial potential of digital twin-driven, data-centric control approaches in enhancing resilience, efficiency, and adaptability in manufacturing processes. This research also lays the foundation for the future development of real-time, adaptive, and autonomous control strategies in industrial applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse</title>
<link href="https://hdl.handle.net/1721.1/162542" rel="alternate"/>
<author>
<name>Maruyama, Shun</name>
</author>
<id>https://hdl.handle.net/1721.1/162542</id>
<updated>2025-08-28T03:08:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse
Maruyama, Shun
This paper analyzes Japan’s economic and technological history since the Meiji Restoration through the framework of Power and Progress proposed by Acemoglu and Johnson (2023), focusing on the concepts of the direction of technology and the productivity bandwagon. A historical review reveals that technological progress and the distribution of its benefits were not determined solely by market mechanisms or technological inevitability, but were shaped by the power dynamics among governments, companies, workers, and others. Periods when workers held strong bargaining power and inclusive social institutions were in place saw the emergence of a virtuous cycle, in which the direction of technology moved toward broad-based innovation and the productivity bandwagon functioned effectively. Conversely, after the collapse of the bubble economy, a shift in the power balance in favor of companies led to a rise in short-term cost-cutting, resulting in a divergence from inclusiveness and innovation in the direction of technology, as well as a breakdown of the productivity bandwagon. This ultimately undermined Japan’s ability to leverage the strengths of its production system and led to a decline in technological capabilities. Currently, a new wave of technological innovation centered on AI is emerging. However, its impact remains heavily dependent on existing employment practices and corporate behavior models, making a short-term shift in direction unlikely. In the medium-to-long term, though, societal will and collective action may create an opportunity to rebuild a virtuous cycle. This paper proposes action guidelines for companies, workers, and the government, and argues that realizing true prosperity from technological progress requires reassessing existing power structures and actively choosing new pathways as a society.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog</title>
<link href="https://hdl.handle.net/1721.1/162541" rel="alternate"/>
<author>
<name>Chan, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/162541</id>
<updated>2025-08-28T03:08:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog
Chan, Martin
The Language Server Protocol (LSP) and popularity of VS Code have facilitated the current ubiquity of smart code editing features like hover or goto-definition. These features are powered by language servers, which are programs that perform compiler-like functions at keystroke latency on potentially incomplete code. Mainstream languages like Rust or Python have the large userbases to motivate the creation of bespoke language servers like Rust Analyzer or Pylance. However, smaller languages like Bluespec SystemVerilog, used in computer architecture classes at MIT, often need to make do without a language server. As students come to expect smart code editing features, they may miss the convenience when working with languages like Bluespec. In this thesis, we present a Bluespec Language Server forked from Rust Analyzer. This involved adapting the Rust Analyzer parser, HIR, and other internals to work for Bluespec SystemVerilog. The resulting artifact supports the full suite of typical smart editing features for classroom-grade Bluespec projects and continues to mostly work for industrial-grade projects. We discuss the many changes and challenges required to adapt this language server to work for a different language than it was designed for. Further, to address the current gap in the literature covering language server implementation, we include thorough discussion of the overall system architecture and several important subsystems with significant overlap with Rust Analyzer's internals. Finally, we conclude with a discussion of current limitations of our language server and directions for future work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout</title>
<link href="https://hdl.handle.net/1721.1/162540" rel="alternate"/>
<author>
<name>Andrade, Marco A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162540</id>
<updated>2025-08-28T03:07:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout
Andrade, Marco A.
Hydrogen gas (H₂) is considered a promising source of environmentally friendly and sustainable energy that can benefit global decarbonization. However, given the flammable and explosive nature of H₂, highly sensitive and selective detection systems with fast response are needed to enable leakage monitoring and ensure safe deployment and use. To address this need, we propose a microelectromechanical (MEMS) platform for H₂ sensing with the aim of achieving sub-1-ppm sensitivity. Our platform employs a MEMS structure with H₂-responsive palladium (Pd) features. When exposed to H₂, the Pd lattice expands as hydrogen diffuses into it. This results in the structural deflection of a mechanically mobile feature, in particular a cantilever. This deflection is measured using piezoresistors, which are embedded in the cantilever using a spin-on glass doping process. Piezoresistors enable rapid, high-accuracy detection and quantification of H₂, as shown in this thesis through a combination of modeling, sensor development, sensor fabrication, and basic experimental characterization. In this thesis, we have successfully developed a fabrication plan, demonstrated the two key aspects of our fabrication, namely beam release and piezoresistor fabrication, shown beam bending driven by absorption of hydrogen by palladium, and shown that our piezoresistors respond to beam bending. Our physical results match our theoretical predictions for a beam of size 100 µm by 20 µm and a resistor with resistance 115 kΩ fabricated on SOI chips. This beam could be used to detect H₂ below 1 ppm.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool</title>
<link href="https://hdl.handle.net/1721.1/162539" rel="alternate"/>
<author>
<name>Dale, William</name>
</author>
<id>https://hdl.handle.net/1721.1/162539</id>
<updated>2025-08-28T03:07:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool
Dale, William
The convergence of artificial intelligence and entrepreneurship education has opened a novel frontier in pedagogical innovation. The deployment of Orbit—a bespoke generative AI tool—within MIT’s 15.390 entrepreneurship course, which follows the structured Disciplined Entrepreneurship framework, is examined through a System-of-Systems perspective. This approach reveals how the tool functions not as an isolated feature but as an integrated element within a multifaceted educational ecosystem. Drawing on quantitative usage data across three consecutive academic semesters (Spring 2024-Spring 2025) complemented by course evaluation metrics, our mixed-methods approach reveals the multidimensional impact of AI-enhanced entrepreneurial education. The findings demonstrate that Orbit, particularly in its refined v2 iteration, functions as a powerful External Enabler that significantly reduces both the opacity and agency-intensity inherent in complex entrepreneurial frameworks. This enabling function manifested through measurable increases in student adoption, idea generation, and iterative engagement with critical DE steps. Beyond efficiency gains, we identify a substantive Transformation of Learning where students developed distinctly different engagement patterns—characterized by increased iteration, greater willingness to tackle complex entrepreneurial challenges, and enhanced overall course experiences. This transformation appears to deepen rather than merely accelerate learning, as evidenced by improved course evaluations alongside increased time investment in coursework. However, our analysis reveals that this transformation operates within the constraints of what we term AI’s "Jagged Frontier"—an uneven landscape of capabilities leading to differentiated impacts across DE tasks and student segments. The evolution from Orbit v1 to v2 underscores how thoughtful system design and curriculum integration critically influence the effectiveness of educational AI tools. 
This research contributes a nuanced understanding of how specialized AI tools can enhance entrepreneurship education while highlighting that their benefits depend on deliberate design choices, strategic pedagogical integration, and recognition of current technological limitations. The SoS framework proves instrumental in capturing these emergent dynamics, offering valuable insights for educational technologists, entrepreneurship educators, and institutions navigating the AI-enhanced learning landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band</title>
<link href="https://hdl.handle.net/1721.1/162538" rel="alternate"/>
<author>
<name>Alsehali, Mohammed S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162538</id>
<updated>2025-08-28T03:07:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band
Alsehali, Mohammed S.
This thesis presents a system design framework for evaluating spectrum management architectures enabling co-primary access in the 37 GHz band. Motivated by increasing demand for mid-band and mmWave spectrum, and recent policy directions for federal-commercial sharing, this research investigates the trade-offs between utilization efficiency, coordination overhead, and interference performance across thousands of feasible spectrum management systems.&#13;
&#13;
Using a morphological matrix, eight key architectural decisions were defined, including coordination topology, licensing mechanism, frequency planning, sensing mode, and access priority. A parametric event-driven simulation model was developed in Python to evaluate 2,808 valid architectures under low, medium, and high spectrum demand scenarios. The performance metrics, Spectrum Utilization Efficiency (SUE), Coordination Index (Cindex), and Blocking Probability (BP), were used to generate multi-dimensional tradespaces and identify Pareto-optimal solutions.&#13;
&#13;
Results indicate that semi-dynamic spectrum management systems with decentralized or hybrid coordination topologies consistently dominate the Pareto frontier across all demand levels. Compared to fully dynamic systems, semi-dynamic designs achieve 80–90% of the utilization efficiency at less than 50% of the coordination cost. &#13;
&#13;
The results validate key hypotheses about performance trade-offs and offer actionable insights for regulators and system designers. This thesis recommends semi-dynamic, co-primary frameworks for initial 37 GHz implementation and proposes future research directions, including agent-based modeling, economic behavior integration, and accurate physics modeling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology</title>
<link href="https://hdl.handle.net/1721.1/162537" rel="alternate"/>
<author>
<name>Jezewska, Martyna</name>
</author>
<id>https://hdl.handle.net/1721.1/162537</id>
<updated>2025-08-28T03:07:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology
Jezewska, Martyna
The Mayo Clinic, a renowned non-profit organization, has long been at the forefront of healthcare innovation. This thesis explores the implementation of digital pathology within the Mayo Clinic, focusing on its potential to enhance diagnostic accuracy, increase efficiency, enable remote collaboration, and ultimately improve patient care. By leveraging the Architecting Innovative Enterprise Strategy (ARIES) framework, this research provides a comprehensive analysis of the socio-technical aspects of digital pathology implementation. The study begins with a literature review on innovation and its application in healthcare,&#13;
followed by an in-depth case study of the Mayo Clinic's journey with digital pathology. Key findings highlight the importance of organizational design, stakeholder engagement, and continuous improvement in successfully integrating digital pathology into existing healthcare systems. The research concludes with recommendations for future innovations and insights on how healthcare institutions can better prepare for and adapt to disruptive technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration</title>
<link href="https://hdl.handle.net/1721.1/162536" rel="alternate"/>
<author>
<name>Suresh, Nithyaharini</name>
</author>
<id>https://hdl.handle.net/1721.1/162536</id>
<updated>2025-08-28T03:07:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration
Suresh, Nithyaharini
The rapid increase in wind energy deployment is critical to achieving net-zero carbon emissions in the United States. However, conventional Horizontal Axis Wind Turbines (HAWTs) face deployment constraints due to their large spatial requirements, stemming from both their physical size and the turbine spacing needed to accommodate wake interference. Their large footprint makes them impractical to deploy in densely populated and restricted areas, such as military zones and urban regions. This results in the underutilization of available wind resources, limiting wind energy’s full potential. To overcome these constraints, Vertical Axis Wind Turbines (VAWTs) offer a spatially compact alternative, enabling deployment in space-constrained areas. This study investigates the feasibility of VAWTs as a complementary wind technology by integrating them into a renewable energy siting optimization framework. The framework considers HAWTs, Solar Photovoltaics (PV), battery storage, and other resources within the New England region, assuming a 100% decarbonized power system. The model minimizes total system costs to assess VAWTs under varying capital expenditures and land-use restrictions. A novel feature of this study is the introduction of a land availability cutoff and land restriction cases that realistically mimic the real-world land-use constraints influencing wind turbine siting. The land availability cutoff defines the minimum usable area within a parcel for it to be considered for HAWT and Solar PV deployment, given their larger spatial footprint. Parcels below this cutoff are excluded from those technologies and considered only for VAWTs, representing constrained regions. This methodology offers a more detailed modeling of spatial constraints for renewable energy siting and allows for a realistic assessment of VAWT feasibility. 
Results indicate that, at current commercial costs, VAWTs are less competitive with HAWTs and solar PV, primarily due to their early stage of technology development and their significantly higher CAPEX, which is approximately ten times that of HAWTs. Even with hypothetical utility-scale costs, where VAWT costs fall within the range of $1,300–$1,500/kW, the model still preferentially selects HAWTs due to their higher capacity factors. However, when the model applies different land-use restriction cases to VAWT technology, as compared to HAWTs and Solar PV, VAWTs become significantly more viable. VAWT placement becomes notable in these cases, increasing its share of the energy mix by 2.61% to 10.32% under favorable conditions. At high levels of land availability on a per-parcel scale, specifically when more than 70% of the land identified as technically suitable remains available for any deployment, high-quality sites with favorable wind resources and high capacity factors continue to support HAWTs as the dominant technology, given their lower Levelized Cost of Energy (LCOE). However, when the land availability cutoff increases beyond 70%, reducing siting opportunities for HAWTs and solar PV, reliance shifts towards VAWTs, amplifying the impact of their higher LCOE on overall system costs and making cost differentials between technologies more critical. These findings emphasize that while CAPEX reductions are critical to scaling VAWTs and improving their competitiveness, land-use policies and spatial constraints are the primary determinants of deployment feasibility. The study highlights the need for targeted policy intervention, flexible siting policies, and continued research to optimize VAWT deployment strategies, ultimately enhancing wind energy integration in land-constrained regions within New England and maximizing wind resource potential.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Targeted Codon Optimization and Translation with Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/162535" rel="alternate"/>
<author>
<name>Chemparathy, Anugrah</name>
</author>
<id>https://hdl.handle.net/1721.1/162535</id>
<updated>2025-08-28T03:07:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Targeted Codon Optimization and Translation with Deep Learning
Chemparathy, Anugrah
Codon optimization—the task of recoding a protein’s underlying DNA sequence to maximize expression in a target organism—is a complicated biological optimization problem. Each gene brings a dynamic combination of local and long-range dependencies along with globally imposed constraints specific to the organism. While most existing tools for systematic codon optimization are restricted to optimizing under the constraint of a fixed amino acid sequence, recent architectural advancements in deep learning have made it possible to introduce partial modifications to the amino acid sequence without affecting protein function during the codon optimization process. Such approaches greatly increase the search space of feasible sequences, potentially opening up pathways to previously unconsidered DNA sequences with significantly greater expression rates. In this thesis, we seek to understand and improve the inverse-folding codon optimization model CodonMPNN, the behavior and performance of which have not yet been fully evaluated. We present a detailed empirical evaluation of CodonMPNN, characterizing its performance across reconstruction and translation tasks and demonstrating that it captures higher-order codon usage patterns. We produce evidence that CodonMPNN’s training has successfully captured nontrivial aspects of the codon distribution for 1000 unique organisms, and we better characterize the optimal tasks that CodonMPNN’s non-synonymous nature may be able to solve. Then, by combining improved pretraining with a new inference-time evolutionary algorithm, we modestly improve the base performance of CodonMPNN over its original publication. Together, these contributions yield a measurable improvement in CodonMPNN’s practical performance and provide actionable guidance for its application in constrained codon design.
More broadly, this work highlights the importance of application-aware evaluation when deploying machine learning models in synthetic biology and motivates the design of future architectures that are better aligned with real-world usage constraints.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The First Signs of Vision</title>
<link href="https://hdl.handle.net/1721.1/162534" rel="alternate"/>
<author>
<name>Chang, Cathy</name>
</author>
<id>https://hdl.handle.net/1721.1/162534</id>
<updated>2025-08-28T03:07:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The First Signs of Vision
Chang, Cathy
There has been extensive research on the evolution of eyes through the lens of biology; however, there has been a distinct lack of research simulating what animals saw as their eyes evolved. This project aims to create interactive simulations of the evolution of animal vision from the Cambrian Explosion to the present day through the use of extended reality (XR) environments. Our goal is to communicate and educate about the evolutionary timescale, helping our audience understand 1) the history of vision and intelligence and 2) how vision came to be and why it is the way it is. In addition, we want to bridge the gap between technology and vision research to help people better understand and visualize this evolutionary process. We have also collaborated with the Museum of Science and the MIT Museum to display this work at events at their venues.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Steerability of Generative Models: Towards Bicycles&#13;
for the Mind</title>
<link href="https://hdl.handle.net/1721.1/162533" rel="alternate"/>
<author>
<name>Bentley, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/162533</id>
<updated>2025-08-28T03:07:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Steerability of Generative Models: Towards Bicycles&#13;
for the Mind
Bentley, Sarah
Generative models have rapidly advanced in their ability to produce diverse, high-quality outputs. Yet their practical utility often falls short: users frequently struggle to guide models toward desired outputs, even when the model is capable of producing those outputs. This thesis argues that unlocking the full potential of generative AI requires not only improving what models can produce (producibility), but also how effectively users can guide them toward producible outputs (steerability). In short, how can we make the entire producible sets of generative models easily accessible to humans? Our contributions are fourfold. First, we formally define steerability and introduce a framework for evaluating it independently of producibility. Second, we instantiate this framework through benchmarks on the steerability of text-to-image and language models. We find that not only is steerability poor, but steering doesn’t reliably improve with more attempts. Third, we propose a framework for designing and optimizing steering mechanisms – tools that help users articulate and achieve their goals with models – and introduce Reinforcement Learning for Human Steering (RLHS) to systematically optimize these mechanisms. Finally, we instantiate this framework in a new steering mechanism for image generation that enables users to steer via images rather than text prompts. This mechanism achieves over 2x improvement over traditional text-based prompting on our benchmark. Our mathematical framework provides a generalizable path forward for measuring and improving the steerability of generative models, while our implementations of that framework empirically demonstrate its utility and viability. Overall, we define a new axis – steerability – upon which we can vastly improve generative models not only as tools for automation, but as bicycles for the mind.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lentiviral Vector Engineering for High-Throughput Immune Profiling</title>
<link href="https://hdl.handle.net/1721.1/162532" rel="alternate"/>
<author>
<name>Dobson, Connor S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162532</id>
<updated>2025-08-28T03:04:25Z</updated>
<published>2022-02-01T00:00:00Z</published>
<summary type="text">Lentiviral Vector Engineering for High-Throughput Immune Profiling
Dobson, Connor S.
The ability to decipher immune recognition is critical to understanding a broad range of diseases, including cancer, infection, and autoimmunity, as well as for the development of countermeasures such as vaccines and immunotherapy. Efforts to do so have been hampered by a lack of technologies that are capable of scaling to simultaneously capture the complexity of the adaptive immune repertoire and the landscape of potential antigens. Each individual’s immune repertoire consists of tens of millions of unique receptors that are responsible for surveying the trillions of possible antigens that might be encountered in one’s lifetime. As a result, there has been intense focus on the development of tools for screening large antigen sets or large collections of potential immune receptors, but most of these capture complexity on only one side of the interaction. We have therefore used synthetic virology approaches to engineer a “lentivirus surface display” platform capable of screening complex antigen mixtures against the full complexity of the adaptive immune repertoire. In Chapter 2 of this thesis, we describe our molecular engineering approaches that enabled the development of VSVGmut, an efficient and modular targeted pseudotyping strategy. In Chapter 3, we leverage VSVGmut and further advances to enable one-pot library-on-library antigen identification screens for T cells by displaying antigens on the surface of lentiviruses and encoding their identity in the viral genome. Antigen-specific viral infection of cells allows readout of both antigen and receptor identities via single-cell sequencing. In Chapters 4 and 5, we extend our approaches to B cells and present preliminary data for applications in both cellular and humoral profiling. Taken together, our approaches represent a new class of tools for identifying the molecular targets of the adaptive immune response at scale.
</summary>
<dc:date>2022-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA</title>
<link href="https://hdl.handle.net/1721.1/162531" rel="alternate"/>
<author>
<name>Suzuki, Wataru</name>
</author>
<id>https://hdl.handle.net/1721.1/162531</id>
<updated>2025-08-28T03:07:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA
Suzuki, Wataru
In Japan, the Tokaido Shinkansen, a major high-speed rail corridor, plans to introduce Grade of Automation 2 (GoA2) through Semi-Automatic Train Operation (STO). While partial automation promises advantages such as reduced driver workload and enhanced efficiency, it also creates new risks due to increasingly complex interactions among automated control systems, human operators, and physical infrastructure.&#13;
This thesis aims to systematically identify and address potential hazards arising from STO in high-speed rail. By using the Tokaido Shinkansen’s announced plan as a model case, the research seeks to uncover scenarios in which normal, non-failed system behaviors can still lead to unsafe outcomes, and to propose design solutions that mitigate those risks early in development. To achieve this, the study applies Systems-Theoretic Process Analysis (STPA). Rather than analyzing hardware and functional failures in isolation, STPA models the entire system as a hierarchical control structure, examining each controller’s possible unsafe actions and their feedback pathways. &#13;
The analysis reveals hazard scenarios that traditional failure-based methods might overlook. Examples include cases where a passenger is not detected between the train and platform doors at departure, or where verbal and signal instructions conflict and delay the driver’s response. These scenarios can happen even without any component failure. Drawing on these insights, the thesis recommends a variety of design improvements, such as new monitoring functions for subsystems, modifying instruction interfaces, and strengthening the software logic of automation systems.&#13;
These findings demonstrate the value of conducting a holistic safety analysis using STPA at the conceptual design stage, before late-stage changes become more expensive. Moreover, this research provides a comprehensive, system-level railway hazard analysis, and the proposed measures can be broadly applicable to high-speed rail systems with automation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps</title>
<link href="https://hdl.handle.net/1721.1/162530" rel="alternate"/>
<author>
<name>Taylor, Benjamin F.</name>
</author>
<id>https://hdl.handle.net/1721.1/162530</id>
<updated>2025-08-28T03:07:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps
Taylor, Benjamin F.
The efficient generation and transfer of energy in the golf swing has long been a subject of biomechanical interest, with a particular focus on the concept of the kinematic sequence, which is the coordinated segmental rotation of the pelvis, torso, arms, and club.  While previous studies have modeled aspects of this sequence using high-end laboratory setups or proprietary systems, few have provided open, quantifiable, and time-resolved measurements of angular kinematics across the full swing cycle.  This thesis seeks to address this gap by implementing a markerless temporal skeletal tracking approach built on the open-source MeTRAbs computer vision framework to model and measure joint angles and angular velocities throughout the golf swing.  Using two-dimensional video footage of right-handed golfers performing driver swings, the MeTRAbs pose estimation model and supplemental cross-frame temporal motion sequencing code were used to reconstruct three-dimensional joint trajectories and compute rotational kinematics of key body segments.&#13;
This study demonstrates the feasibility of using markerless pose estimation to extract golf swing signatures and angular velocity profiles without requiring expensive or inaccessible motion capture equipment. Preliminary analysis suggests that joint coordination patterns and temporal characteristics of body segment angular velocities may reveal quantifiable insights into the kinematic sequence, laying the groundwork for further research and instructional applications. Ultimately, this thesis contributes a replicable and cost-effective framework for analyzing golf swing biomechanics using open-source tools and computer vision.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunities in Advanced Wireless Integrated Circuits</title>
<link href="https://hdl.handle.net/1721.1/162529" rel="alternate"/>
<author>
<name>Fareed, Mo</name>
</author>
<id>https://hdl.handle.net/1721.1/162529</id>
<updated>2025-08-28T03:07:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Opportunities in Advanced Wireless Integrated Circuits
Fareed, Mo
The continued evolution of wireless communications, novel compact radars, and power electronics has driven demand for high-performance semiconductor materials capable of operating at higher power densities, faster switching speeds, and greater efficiency. Gallium Nitride (GaN) has emerged as a leading candidate due to its superior electrical properties compared to traditional silicon (Si), silicon carbide (SiC), and gallium arsenide (GaAs). GaN’s high power density, thermal stability, and high-frequency operation make it an ideal candidate for applications in 5G/6G infrastructure, satellite communications, defense radar, electric vehicles, and power electronics. However, widespread commercial adoption of GaN faces significant barriers, including high production costs, supply chain constraints, and integration challenges within existing silicon-based fabrication processes.&#13;
&#13;
This thesis explores the opportunities and challenges associated with GaN-based integrated circuits (ICs) in the context of advanced wireless systems by utilizing Dr. Eugene Fitzgerald’s innovation framework – Technology, Markets, and Implementation (TMI). A comparative analysis of monolithic vs. board-level GaN integration is conducted. The research highlights that scaling GaN wafer production to approximately 10,000 wafers per year (200 mm wafers) is necessary to achieve cost-effective monolithic integration, yet current defense-driven demand is insufficient to drive economies of scale. Instead, commercial applications—such as telecommunications, power electronics, and consumer RF devices—are the markets best positioned to take advantage of monolithic integration at high volume. &#13;
&#13;
The findings indicate that while defense applications have led non-monolithic GaN adoption (that is, discrete GaN transistor adoption), they cannot sustain large-scale production alone due to their low volumes. The semiconductor industry must navigate manufacturing bottlenecks, cost reduction strategies, and foundry availability to ensure GaN’s transition from a niche, high-cost technology to a commercially viable solution. By mapping the TMI intersections and addressing economic and technical barriers, this thesis provides strategic insights into how GaN technology can achieve scalable production, unlock new market opportunities, and shape the future of advanced wireless integrated circuits.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stock-constrained design of pseudo-standard walls&#13;
from studs off-cuts</title>
<link href="https://hdl.handle.net/1721.1/162528" rel="alternate"/>
<author>
<name>Fontaine, Anouk</name>
</author>
<id>https://hdl.handle.net/1721.1/162528</id>
<updated>2025-08-28T03:07:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stock-constrained design of pseudo-standard walls&#13;
from studs off-cuts
Fontaine, Anouk
The AEC industry is responsible for 40% of global greenhouse gas emissions and 38% of EU waste, much of which is landfilled. AEC waste represents an immense portion of resources that could be used instead of new materials. Many ongoing research projects have explored ways of reusing irregular components in construction, from whole steel trusses to single elements, triangulated subparts, or even irregular wood offcuts, in order to mitigate intensive recycling and deconstruction processes. However, the research has focused on general methodologies or one-off prototypes. This paper introduces a systematic approach to repurpose discarded steel and timber studs - components that account for up to 10% of waste on local sites (Parigi, 2021) - into modular, steel-frame, load-bearing walls, providing a way to build new structures for the growing global demand for housing and infrastructure while minimizing new emissions through the use of waste elements. Through a top-down and stock-constrained design approach, geometry optimization through a matching algorithm is combined with topology optimization to generate and evaluate various configurations that minimize new emissions and maximize structural efficiency. A human-scale prototype built from the available inventory further assesses costs, architectural and structural flexibility, construction feasibility, robotic efficiency, and embodied emissions, and provides data on the workflow, offering a promising pathway for sustainable construction through effective waste reuse. This approach pairs the existing waste stock with the growing demand for infrastructure and minimizes embodied emissions through structurally efficient resource use, pushing forward a systematic implementation of reuse in common construction practices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems</title>
<link href="https://hdl.handle.net/1721.1/162527" rel="alternate"/>
<author>
<name>Kumar, Prashant</name>
</author>
<id>https://hdl.handle.net/1721.1/162527</id>
<updated>2025-08-28T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems
Kumar, Prashant
Electricity is set to become the central pillar of both energy production and consumption in the global effort to achieve net-zero emissions. As key sectors—transportation, chemicals, and heavy industry—seek to decarbonize by electrifying their operations, industrialized nations face mounting strain on their electricity systems. This strain is further compounded by the rising demand for electricity driven by data centers and artificial intelligence applications, heralding a future of potentially unrelenting load growth.&#13;
In such a context, it becomes not merely prudent but essential to approach decisions regarding investment and operation in the electricity sector with analytical rigor. Advanced capacity expansion models provide the tools for this task. In this thesis, the GenX model is employed to study Taiwan’s electricity system—an islanded, industrially intensive grid—evaluating the evolution of its capacity mix, generation profile, prices, emissions, and overall costs.&#13;
Our findings suggest that a reliable path to decarbonization lies in a considered combination of natural gas-fired generation with carbon capture, utilization, and storage (CCUS), renewable sources such as solar and wind, and energy storage systems. Furthermore, this study finds that integration of nuclear and geothermal technologies significantly improves the cost-effectiveness of achieving decarbonization targets.&#13;
This thesis also addresses the “missing money” problem endemic to energy-only electricity markets, examining how the introduction of a capacity market influences both investment and operational outcomes. We find that the efficacy of capacity markets is highly sensitive to the design parameters of the demand curve and the capacity credit values of the resources. For islanded systems such as Taiwan’s, a pragmatic approach to ensuring security of supply may involve retaining existing natural gas infrastructure as a strategic reserve, paired with a capacity market design that avoids excessive conservatism, leveraging the absence of policy interactions and competition with neighboring electricity markets, as observed in Europe.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids</title>
<link href="https://hdl.handle.net/1721.1/162526" rel="alternate"/>
<author>
<name>Anastos, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/162526</id>
<updated>2025-08-28T03:06:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids
Anastos, Daniel
One of the largest existential challenges the US and other countries face is climate change, and perhaps no system is more crucial to combating it than the grid. Ever more demands have been placed on the transmission and distribution grids to play a larger role than they have in the past: AI, EVs, residential solar, electrification of heat, decarbonization of buildings, rising energy rates, and aging infrastructure. Improving the grid is a necessity for decarbonization and innovation. However, utilities, backed by state regulation, usually (though not always) use traditional techniques to expand grid capacity and increase resiliency, rather than investing in modern grid technology that would more quickly enable future innovation and decarbonization. These technologies, or techniques, are broadly called grid enhancing technologies, or GETs. There are rational reasons why GETs are not used more often. Utilities are, correctly, highly risk-averse, because they must safely and adequately supply power directly to people. Utilizing new technologies, even proven ones, can be a risk that utilities are unwilling, or not allowed, to take given their role and responsibility. But these risks are largely avoided with the technologies discussed in this paper, and one could argue they could not only make the grid cheaper to expand but also make it more resilient. This paper explores how a particular grid section can increase its solar penetration by avoiding traditional hosting-capacity limitations, using not novel GETs but GETs that are largely tested and proven. Traditionally, at some limit, the utility will stop allowing solar in an area due to various grid constraints. This paper explores how a utility may resolve these constraints using new methods, avoiding large grid-expansion CAPEX while utilizing new technologies and techniques. 
Some of the techniques explored here are commercial scale energy storage support at substations, PV curtailment, and volt-var optimization control.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geothermal Energy Planning Considerations for Military Operational Energy Demands</title>
<link href="https://hdl.handle.net/1721.1/162525" rel="alternate"/>
<author>
<name>Seckfort, Cody L.</name>
</author>
<id>https://hdl.handle.net/1721.1/162525</id>
<updated>2025-08-28T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Geothermal Energy Planning Considerations for Military Operational Energy Demands
Seckfort, Cody L.
Contingency locations are temporary military bases that are often established in austere or contested environments. These locations rely heavily on diesel fuel for electrical power, which creates logistical vulnerabilities and increases the risk to personnel conducting fuel resupply missions. While the Department of Defense has made progress in adopting renewable energy technologies, many of these systems remain too large, inefficient, or underdeveloped for widespread use in operational environments. Geothermal energy presents a promising but underexplored alternative for generating reliable, on-site electrical power without the need for continuous fuel resupply.&#13;
This thesis evaluates the feasibility of geothermal energy systems for military operational energy demands and introduces a modified power planning process that incorporates geothermal considerations. The research focuses on closed-loop geothermal systems, utilizing an example system called the “Mil-Loop”, which is designed to minimize the system surface footprint and support remote installations. The planning process integrates existing geothermal tools, including GEOMAP/TEST for resource estimation and GEOPHIRES for system modeling and performance analysis. The Mil-Loop System Model incorporates each step of the planning process to produce a site-specific power system profile. &#13;
A case study using site-specific data from Bagram Airfield was used to assess the performance of a hybrid geothermal-diesel power system. The results suggest that geothermal system integration could reduce diesel fuel consumption by up to 42.9 percent over a 40-year site lifecycle. A sensitivity analysis indicates that geothermal system power output, drilling time, and installation costs are the most critical parameters affecting system viability. Advances in drilling technology and heat extraction have the potential to reduce installation costs and timelines, making geothermal more competitive with diesel generation. This thesis also identifies a gap in military energy planning resources, specifically the lack of frameworks that include geothermal options for operational environments. It recommends that the DoD begin integrating geothermal technologies into its energy planning strategies and develop modular systems that can be deployed in contested or resource-constrained areas. &#13;
While this research is limited by simplified power demand modeling and generalized tool assumptions, it offers a practical framework for evaluating geothermal viability in future defense applications. This study demonstrates that geothermal energy systems, particularly closed-loop configurations, can serve as a viable and strategically beneficial power source for military operations. When paired with targeted technology development and thoughtful integration into planning processes, geothermal systems can reduce logistical burdens, improve energy resilience, and enhance mission success in operational environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO&#13;
fuel in hydride and beryllium-based composite moderators in open&#13;
and closed fuel cycles</title>
<link href="https://hdl.handle.net/1721.1/162524" rel="alternate"/>
<author>
<name>Balla, Sai Prasad</name>
</author>
<id>https://hdl.handle.net/1721.1/162524</id>
<updated>2025-08-28T03:07:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO&#13;
fuel in hydride and beryllium-based composite moderators in open&#13;
and closed fuel cycles
Balla, Sai Prasad
This study provides a comprehensive techno-economic evaluation of a specific class of nuclear batteries—high-temperature gas-cooled 10 MWth microreactors (HTGRs) with TRISO fuel in prismatic- and pebble-bed cores—using four composite moderator concepts (MgO–Be, MgO–BeO, MgO–YH, MgO–ZrH). These options are compared against a prismatic graphite benchmark, under both once-through and continuous-recycle fuel cycles.&#13;
&#13;
In once-through prismatic systems, hydride-based moderators can reduce overall fuel-cycle costs by up to about 20% relative to graphite, whereas beryllium-based moderators may remain 40–50% costlier due to higher raw material expenses. Shifting from prismatic blocks to pebble beds decreases moderator usage and increases burnup, thus making advanced moderator options more competitive. &#13;
&#13;
Adopting a continuous-recycle strategy replaces enrichment with reprocessing and can further lower fuel-cycle costs by roughly 30%. Coupling a sodium-cooled fast reactor (SFR) to supply transuranics further reduces costs: SFR driver fabrication and reprocessing can account for the bulk of total costs, rendering microreactor-level variations comparatively minor. Meanwhile, pebble-bed designs promise ultra-high burnups and extended residence times, which could yield significant economic gains, contingent on demonstrated long-term TRISO fuel integrity.&#13;
&#13;
Waste handling also factors into the analysis. Deconsolidation—removing the inert moderator before disposal—can shrink spent-fuel volumes by more than 90%, easing repository demands. Continued R&amp;D into advanced additive manufacturing, high-burnup TRISO performance, and streamlined waste management will be crucial for capitalizing on these potential cost advantages.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Dexterous Manipulation Enabled by Learning at Scale in Simulation</title>
<link href="https://hdl.handle.net/1721.1/162523" rel="alternate"/>
<author>
<name>Bhatia, Jagdeep Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/162523</id>
<updated>2025-08-28T03:07:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Robust Dexterous Manipulation Enabled by Learning at Scale in Simulation
Bhatia, Jagdeep Singh
Robots with robust bimanual dexterity have the potential to transform industries such as manufacturing and healthcare by performing complex tasks at human-level proficiency. While end-to-end learning methods have shown promise in achieving this goal, scaling these approaches remains challenging. Existing paradigms suffer from high costs associated with collecting large-scale, high-quality demonstrations on physical systems and face performance saturation due to reliance on offline data. We propose a task-agnostic pipeline that leverages robotics simulation to overcome these limitations. In particular, we introduce DART, a cost-effective, augmented reality, robot teleoperation platform for scalable data collection. We demonstrate through a user study that it enables twice the throughput of existing systems. We also present a learning algorithm that integrates real-world demonstrations with reinforcement learning to surpass performance plateaus. Finally, we design a method that zero-shot transfers policies trained in simulation to real robots using only RGB input. Together, these contributions provide a practical and scalable path toward achieving general-purpose dexterous robot manipulation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Image Registration and Gantry Tracking System of Clytia hemisphaerica</title>
<link href="https://hdl.handle.net/1721.1/162522" rel="alternate"/>
<author>
<name>Bunch, Bradley</name>
</author>
<id>https://hdl.handle.net/1721.1/162522</id>
<updated>2025-08-28T03:08:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Image Registration and Gantry Tracking System of Clytia hemisphaerica
Bunch, Bradley
Understanding nervous system function and evolution requires detailed behavioral analysis of model organisms such as the jellyfish Clytia hemisphaerica. However, its size and rapid, free-swimming nature pose significant tracking challenges. This work presents an XY gantry platform developed to overcome these hurdles for high-resolution behavioral monitoring. Separately, to prepare for downstream neural analysis, we developed an automated neuron segmentation pipeline tailored for image registration. Together, the tracking system and the analysis-preparation pipeline provide powerful, distinct tools for high-throughput behavioral quantification and facilitate future studies linking behavior to underlying neural dynamics in Clytia hemisphaerica.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin Technology Applied to Automotive Diagnostics</title>
<link href="https://hdl.handle.net/1721.1/162521" rel="alternate"/>
<author>
<name>Mwarage, Jessy Mbagara</name>
</author>
<id>https://hdl.handle.net/1721.1/162521</id>
<updated>2025-08-28T03:07:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Twin Technology Applied to Automotive Diagnostics
Mwarage, Jessy Mbagara
Digital Twin (DT) technology is currently attracting considerable interest. Organizations oriented around physical products are increasingly looking for ways to stay ahead of the technological innovation curve so as not to be disrupted by more agile entrants, and the promise of a technology like DT is therefore alluring for maintaining a competitive edge. This thesis explores the potential benefits of DT technology and the challenges that might be faced in implementing one. To this end, a problem statement is formulated in the field of automotive diagnostics, a key value-addition area for automotive companies seeking to better manage the diagnosis and repair of their automobiles in the field or the manufacturing environment. The problem is further concretized through a study of user-driven use cases and needs in a real automotive company. From these needs, a set of requirements is formulated to guide the architecture and design of a DT demonstration. The process of architecting and designing the DT is documented, including a deep dive into the modeling approaches considered, the solution space for the architecture, and the detailed design and implementation of a DT demonstration from a selected architectural concept. The DT demonstration is then operated under controlled conditions to showcase some of its capabilities. Finally, the effectiveness of the demonstration and the lessons learned about the implementation process are discussed. The results of the study and demonstration show promise for organizations seeking to adopt DT technology, in this case for automotive diagnostics. The benefits lie mainly in better system-architecture planning and the increased potential for incorporating lessons learned from products operating in the field back into the design process.
These benefits are weighed against the socio-technical challenges of implementing DTs from the outset of a system design exercise.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Acquisition for Enhancing Human-Informed Topology Optimization</title>
<link href="https://hdl.handle.net/1721.1/162520" rel="alternate"/>
<author>
<name>Wang, Zach</name>
</author>
<id>https://hdl.handle.net/1721.1/162520</id>
<updated>2025-08-28T03:07:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data Acquisition for Enhancing Human-Informed Topology Optimization
Wang, Zach
This thesis presents a survey application designed to support the future development of Human-Informed Topology Optimization (HiTop) toward deeper integration of optimization and real-world feasibility. Topology optimization produces high-performance designs by optimally distributing material, but its application in professional environments remains limited due to fabrication constraints and inflexible design workflows. To address this, the Carstensen Group developed HiTop, which integrates optimization algorithms with human experience, allowing engineers to modify the computer-generated design based on their professional judgment. The future development of HiTop therefore requires real-world data on human preferences. This project introduces a web-based survey app integrated with Qualtrics: it presents users with various design scenarios and computer-optimized designs, and records their modifications and reasoning. A preliminary survey collected responses from 13 professionals and engineering students. Preliminary findings suggest that engineers consistently focus on similar regions of interest, even when motivated by different reasons, although the sample size is too small to draw statistically significant conclusions. While the platform mostly performed as intended, a bug related to data storage was discovered during analysis; the issue has since been resolved, and the tool is now fully functional and ready for broader deployment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process</title>
<link href="https://hdl.handle.net/1721.1/162519" rel="alternate"/>
<author>
<name>Lauber, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162519</id>
<updated>2025-08-28T03:07:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process
Lauber, Emily
This research investigates the motivational drivers for companies and individuals to participate in the World Wide Web Consortium’s Web standards development process. Motivational drivers are identified through a literature review, primary sources, and interviews. Thirteen semi-structured interviews were conducted with questions related to participants’ experience with the World Wide Web broadly, Web standards in general, the organization of W3C, and game modeling of the process. W3C was selected as the case study of Web-related standards bodies because of its unique model of paid membership yet open standards available royalty-free. The W3C standards process requires consensus-building, horizontal review, and proof of implementation before the organization officially recommends the specification. Existing research documents the history and value of standardization across industries, the modeling of various Standards Development Organizations (SDOs) in information industries, and the negotiation of international Internet governance. This thesis does not attempt to prove a societal benefit of Web standards but instead focuses on an individual’s belief in societal benefit and how that belief drives their engagement with W3C.&#13;
&#13;
Initial findings point to members seeking economic, philosophic, and moral value through participation in Web standards development. A game-theory framework evaluates the economic value of different players within the ecosystem and identifies that Web browser vendors and long-time consortium members have greater power to achieve their preferred specification outcomes than Web developers or newcomers. Despite changes in the Web ecosystem over the past 30 years, W3C members continue to be drawn to the Web for the same philosophical aims for which Sir Tim Berners-Lee designed it. There are shared concerns, though, that the economic power players identified in the game modeling have damaged, or will threaten, the philosophy of an open, safe, accessible Web. Interviewees shared personal beliefs that there is a moral responsibility to engage in Web standards development and enable W3C’s mission of “empowering humanity”. Further research is required to catalogue more motivational drivers, evaluate drivers across other Web-related Standards Development Organizations, and rank the priority of motivations when the different drivers are in tension.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems</title>
<link href="https://hdl.handle.net/1721.1/162518" rel="alternate"/>
<author>
<name>Putnam, Rachael M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162518</id>
<updated>2025-08-28T03:07:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems
Putnam, Rachael M.
Designing perception systems for autonomous robots and vehicles requires balancing sensor performance against cost, complexity, and integration constraints. This thesis introduces GO4R (Generation and Optimization of Perception System Architectures for Robotics), a multi-objective framework that jointly designs sensor selection and placement against a volumetric, entropy-based utility metric H (-) and monetary cost M ($). Perception entropy H is formalized as a volumetric measure of uncertainty across a voxelized region of interest (ROI), which naturally rewards the coverage, overlap, and redundancy required for robust sensor fusion and calibration.&#13;
&#13;
NSGA-II is implemented with custom mixed-variable operators to handle both the continuous (e.g., sensor poses) and discrete (e.g., sensor type and count) decision variables found in this problem. Two case studies, long-range outdoor navigation on a Clearpath Jackal and short-range indoor navigation on ANYmal-C, demonstrate the framework’s ability to generate Pareto-optimal sensor architectures under vastly different ROI definitions and operating conditions. In the Jackal study, GO4R converges to a population of 11 novel Pareto-optimal designs, revealing sensitivity to voxel size and importance weighting. In the ANYmal-C study, the compact, uniformly weighted ROI yields a flatter Pareto front with 25 Pareto-optimal designs, underscoring how intrinsic sensor parameters (e.g., angular resolution and field of view) dominate design trade-offs when baseline coverage is already high.&#13;
&#13;
Key architectural decisions are analyzed, quantified by their impact on Pareto front shape, and ordered according to the GO4R method to successively reduce uncertainty. The resulting guidelines provide practitioners with a rigorous, reusable process for tailoring perception systems to task-specific requirements. Finally, GO4R provides a publicly available NVIDIA Isaac Sim extension to aid practitioners in following the GO4R method, regardless of their autonomy application. Future work will extend GO4R to dynamic environments, improve the fidelity of generated designs, and incorporate additional cost metrics such as computational load and maintainability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale</title>
<link href="https://hdl.handle.net/1721.1/162517" rel="alternate"/>
<author>
<name>Shao, Yu-Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/162517</id>
<updated>2025-08-28T03:07:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale
Shao, Yu-Tong
Crop residues are a widely available form of agricultural waste with several possible reuse applications, including use as biofertilizers, animal feed, biofuels, and for carbon sequestration. However, in many parts of the world, large quantities of these residues are still burned in the field, releasing significant amounts of greenhouse gases (GHGs) and air pollutants to the atmosphere. This study aims to evaluate alternative and carbon-efficient strategies for reusing crop residues – especially focusing on rice straw and wheat straw – by conducting life cycle assessments (LCA) of multiple utilization pathways. Different alternative scenarios for utilizing crop residues are assessed: incorporating residue in field, animal usage for feeding, pyrolysis for electricity generation, pyrolysis for carbon sequestration, and electricity generation through residue combustion. Specifically, for the scenarios of pyrolysis and electricity generation through residue combustion, the maximum feasible transport distances of crop residues from agricultural fields to processing facilities are modeled for different logistics methods, providing information for the 