| dc.contributor.author | Spangher, Lucas | |
| dc.contributor.author | Bonotto, Matteo | |
| dc.contributor.author | Arnold, William | |
| dc.contributor.author | Chayapathy, Dhruva | |
| dc.contributor.author | Gallingani, Tommaso | |
| dc.contributor.author | Spangher, Alexander | |
| dc.contributor.author | Cannarile, Francesco | |
| dc.contributor.author | Bigoni, Daniele | |
| dc.contributor.author | de Marchi, Eliana | |
| dc.contributor.author | Rea, Cristina | |
| dc.date.accessioned | 2025-11-19T15:44:47Z | |
| dc.date.available | 2025-11-19T15:44:47Z | |
| dc.date.issued | 2025-05-24 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/163763 | |
| dc.description.abstract | Plasma disruptions remain a major obstacle to sustained commercial operation of tokamak-based fusion devices. Although machine learning (ML) methods have shown promise for predicting disruptions, their performance and generalizability suffer from a lack of common benchmarks and comprehensive multi-device evaluations. To address this, we present DisruptionBench, a new benchmarking platform designed to standardize how ML-driven disruption prediction systems are trained and evaluated on multi-machine data. DisruptionBench spans three devices (Alcator C-Mod, DIII-D, and EAST) and includes tasks of varying difficulty: zero-shot, few-shot, and many-shot training regimes to assess each model’s ability to transfer learned representations to new or data-limited machines. We evaluate four state-of-the-art ML architectures. Two are re-implementations of notable prior work: a random forest (Cristina Rea in PPCF 60:084008, 2018) and the Hybrid Deep Learner (HDL) (Zhu in NC 61:026607, 2020). We also propose two new approaches tailored for disruption prediction: a transformer-based model inspired by GPT-2, capable of learning long-range temporal dependencies through self-attention, and a Continuous Convolutional Neural Network (CCNN) that leverages continuous kernels to capture subtle variations in plasma signals. Across the nine benchmarking tasks, the CCNN demonstrates consistently strong performance and achieves the highest overall Area Under the ROC Curve (AUC) in intra-machine tests (up to 0.97 on C-Mod). Nevertheless, the GPT-2-based approach and HDL can outperform the CCNN in specific transfer scenarios, particularly when the test machine is underrepresented in the training data. We further analyze the significance of memory length in capturing precursor phenomena, providing evidence that longer context windows can boost predictive accuracy. | en_US |
| dc.publisher | Springer US | en_US |
| dc.relation.isversionof | https://doi.org/10.1007/s10894-025-00495-2 | en_US |
| dc.rights | Creative Commons Attribution | en_US |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
| dc.source | Springer US | en_US |
| dc.title | DisruptionBench and Complimentary New Models: Two Advancements in Machine Learning Driven Disruption Prediction | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Spangher, L., Bonotto, M., Arnold, W. et al. DisruptionBench and Complimentary New Models: Two Advancements in Machine Learning Driven Disruption Prediction. J Fusion Energ 44, 26 (2025). | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Plasma Science and Fusion Center | en_US |
| dc.relation.journal | Journal of Fusion Energy | en_US |
| dc.identifier.mitlicense | PUBLISHER_CC | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
| eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
| dc.date.updated | 2025-07-18T15:31:49Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The Author(s) | |
| dspace.embargo.terms | N | |
| dspace.date.submission | 2025-07-18T15:31:49Z | |
| mit.journal.volume | 44 | en_US |
| mit.license | PUBLISHER_CC | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |