dc.contributor.author | Liang, Jiaming | |
dc.contributor.author | Cao, Lei | |
dc.contributor.author | Madden, Samuel | |
dc.contributor.author | Ives, Zachary | |
dc.contributor.author | Li, Guoliang | |
dc.date.accessioned | 2024-04-04T17:42:23Z | |
dc.date.available | 2024-04-04T17:42:23Z | |
dc.date.issued | 2024-03-12 | |
dc.identifier.issn | 2836-6573 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/154072 | |
dc.description.abstract | Timeseries analytics is important in many real-world applications. Recently, the Transformer model, popular in natural language processing, has been leveraged to learn high-quality feature embeddings from timeseries: embeddings are key to the performance of various timeseries analytics tasks such as similarity-based timeseries queries within vector databases. However, quadratic time and space complexity limits Transformers' scalability, especially for long timeseries. To address this scalability issue, we develop RITA, a timeseries analytics tool that uses a novel attention mechanism named group attention. Group attention dynamically clusters the objects by similarity into a small number of groups and approximately computes the attention at the coarse group granularity. It thus significantly reduces time and space complexity, yet provides a theoretical guarantee on the quality of the computed attention. The dynamic scheduler of RITA continuously adapts the number of groups and the batch size during training, ensuring that group attention always uses the fewest groups needed to meet the approximation quality requirement. Extensive experiments on various timeseries datasets and analytics tasks demonstrate that RITA outperforms the state-of-the-art in accuracy and is significantly faster --- with speedups of up to 63X. | en_US |
dc.publisher | Association for Computing Machinery (ACM) | en_US |
dc.relation.isversionof | 10.1145/3639317 | en_US |
dc.rights | Creative Commons Attribution | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
dc.source | Association for Computing Machinery | en_US |
dc.title | RITA: Group Attention is All You Need for Timeseries Analytics | en_US |
dc.type | Article | en_US |
dc.identifier.citation | Jiaming Liang, Lei Cao, Samuel Madden, Zachary Ives, and Guoliang Li. 2024. RITA: Group Attention is All You Need for Timeseries Analytics. Proc. ACM Manag. Data 2, 1 (SIGMOD), Article 62 (February 2024), 28 pages. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory | |
dc.relation.journal | Proceedings of the ACM on Management of Data | en_US |
dc.identifier.mitlicense | PUBLISHER_CC | |
dc.eprint.version | Final published version | en_US |
dc.type.uri | http://purl.org/eprint/type/JournalArticle | en_US |
eprint.status | http://purl.org/eprint/status/PeerReviewed | en_US |
dc.date.updated | 2024-04-01T07:49:13Z | |
dc.language.rfc3066 | en | |
dc.rights.holder | The author(s) | |
dspace.date.submission | 2024-04-01T07:49:13Z | |
mit.journal.volume | 2 | en_US |
mit.journal.issue | 1 | en_US |
mit.license | PUBLISHER_CC | |
mit.metadata.status | Authority Work and Publication Information Needed | en_US |