Forecasting Research Trends Using Knowledge Graphs and Large Language Models
Author(s)
Tomczak, Maciej; Park, Yang Jeong; Hsu, Chia‐Wei; Brown, Payden; Massa, Dario; Sankowski, Piotr; Li, Ju; Papanikolaou, Stefanos
Published version (5.993 MB)
Terms of use
Creative Commons Attribution
Abstract
Since ancient times, oracles (e.g., at Delphi) have been credited with the ability to provide useful visions of where society is headed, based on key event correlations and educated guesses. Today, foundation models can distill and analyze enormous text-based datasets to understand where societal components are headed in the future. This work investigates three large language models (LLMs) and their ability to aid research on nuclear materials. Using a large dataset of Journal of Nuclear Materials papers spanning 2001 to 2021, the models are evaluated and compared using perplexity, similarity of output, and knowledge graph metrics such as shortest path length. Models are benchmarked against the highest performer, OpenAI's GPT-3.5. LLM-generated knowledge graphs with more than 2 × 10⁵ nodes and 3.3 × 10⁵ links are analyzed per publication year, and temporal tracking leads to the identification of criteria for publication innovation, controversy, influence, and future research trends.
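As an illustrative sketch only (not the authors' pipeline), knowledge graph metrics such as shortest path length can be computed from LLM-extracted (subject, relation, object) triples grouped by publication year; the triples, concept names, and use of networkx below are assumptions for illustration.

```python
# Illustrative sketch: the triples and metric choice below are assumptions,
# not the actual data or code from the paper.
import networkx as nx

# Hypothetical LLM-extracted (subject, relation, object) triples per publication year.
triples_by_year = {
    2020: [
        ("zirconium alloy", "exhibits", "irradiation growth"),
        ("irradiation growth", "affects", "cladding integrity"),
        ("cladding integrity", "limits", "fuel burnup"),
    ],
    2021: [
        ("zirconium alloy", "exhibits", "irradiation growth"),
        ("accident-tolerant fuel", "improves", "cladding integrity"),
    ],
}

for year, triples in sorted(triples_by_year.items()):
    # Build a directed knowledge graph for this year's extracted triples.
    g = nx.DiGraph()
    for subj, rel, obj in triples:
        g.add_edge(subj, obj, relation=rel)

    # Shortest path length between two concepts, one of the graph metrics
    # named in the abstract (computed here on the undirected view).
    ug = g.to_undirected()
    src, dst = "zirconium alloy", "cladding integrity"
    if nx.has_path(ug, src, dst):
        d = nx.shortest_path_length(ug, src, dst)
        print(f"{year}: distance({src!r}, {dst!r}) = {d}")
    else:
        print(f"{year}: no path between {src!r} and {dst!r}")
```

Tracking such distances across yearly graphs is one way temporal shifts between concepts could be quantified.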
Date issued
2025-09-12
Department
Massachusetts Institute of Technology. Department of Nuclear Science and Engineering
Journal
Advanced Intelligent Systems
Publisher
Wiley
Citation
Maciej Tomczak, Yang Jeong Park, Chia-Wei Hsu, Payden Brown, Dario Massa, Piotr Sankowski, Ju Li, Stefanos Papanikolaou. Adv. Intell. Syst. 2025; 000, e2401124.
Version: Final published version