AI Commentator: Narrating Sports Games through Multimodal Perception and Large Language Models
Author(s)
Purohit, Sonia
Advisor
Oliva, Aude
Feris, Rogerio
Abstract
Automated visual understanding is an essential part of the sports industry, particularly during major tournaments. The sheer volume of video footage produced necessitates automated systems to surface insights and enhance fan experiences. One especially challenging area is commentary, which requires detailed play-by-play information about the action, a task that human commentators cannot carry out efficiently at scale.
We tackle this problem for Grand Slam tennis through an IBM partnership with The Championships, Wimbledon. This thesis introduces a novel system that uses computer vision to extract play-by-play metadata and convert it into fluent commentary using large language models. Our computer vision module relies on a single camera feed to understand every detail of the game – court and net detection, player and ball tracking, player poses, and fine-grained shot classification – all in near real time. This metadata is then combined with information from other modalities, such as crowd audio and radar-measured ball speed, and fed into a "data2text" large language model that generates commentary in natural language.
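As a rough illustration of the data2text step described above, the following Python sketch shows how per-point metadata from the vision and audio modules might be serialized into a prompt for a commentary-generation language model. All class, field, and function names here are hypothetical assumptions for illustration and are not drawn from the thesis's actual implementation.

```python
# Hypothetical sketch of the "data2text" step: structured play-by-play
# metadata is serialized into a prompt that a commentary-generation LLM
# could turn into natural-language commentary. The schema is illustrative.
from dataclasses import dataclass, asdict
import json


@dataclass
class PointMetadata:
    server: str             # serving player
    receiver: str           # receiving player
    shot_type: str          # fine-grained shot class, e.g. "backhand slice"
    rally_length: int       # number of shots in the rally
    serve_speed_kph: float  # radar-measured ball speed
    crowd_excitement: str   # derived from crowd-audio analysis, e.g. "high"
    outcome: str            # e.g. "winner down the line"


def build_prompt(point: PointMetadata) -> str:
    """Convert structured point metadata into a data2text prompt."""
    facts = json.dumps(asdict(point), indent=2)
    return (
        "You are a tennis commentator. Write one or two sentences of lively,\n"
        "broadcast-style commentary grounded only in these facts:\n"
        f"{facts}\n"
    )


if __name__ == "__main__":
    point = PointMetadata(
        server="Player A",
        receiver="Player B",
        shot_type="forehand cross-court",
        rally_length=9,
        serve_speed_kph=198.0,
        crowd_excitement="high",
        outcome="winner",
    )
    # The resulting prompt would be passed to the commentary LLM.
    print(build_prompt(point))
```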
Our system not only supports the narration of match content at scale, but also powers the collection of rich metadata that can drive further match insights in the future.
Date issued
2023-06
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology