
Researchers from Carnegie Mellon University and the University of Pennsylvania have introduced MOMENT: a family of open-source foundation models tailored for general-purpose time series analysis.

Pre-training large models on time series data is a persistent challenge due to the absence of a comprehensive public time series repository, the diversity of time series characteristics, and the immaturity of benchmarks for evaluating such models. Despite this, time series analysis remains integral to various fields, including weather forecasting, heart rate irregularity detection, and anomaly identification in software deployments. Pre-trained models from language, vision, and video analysis can be adapted to time series data, though they were not designed with its specific characteristics in mind.

The application of transformers to time series analysis faces its own constraints, notably the quadratic growth of self-attention cost with the number of input tokens. Treating time series sub-sequences (patches) as tokens can improve both efficiency and forecasting effectiveness, as the sketch below illustrates. ORCA, a recent methodology from language modeling, adapts pre-trained models to different modalities via align-then-refine fine-tuning. Earlier research used this approach to adapt language-pre-trained transformers to time series analysis; however, these models frequently require significant memory and computational resources.
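To make the token-count arithmetic concrete, here is a minimal sketch of patching; the function name `patchify` and the choice of `patch_len` are illustrative, not taken from the MOMENT codebase:

```python
import numpy as np

def patchify(series: np.ndarray, patch_len: int = 8) -> np.ndarray:
    """Split a univariate time series into fixed-length, non-overlapping patches.

    Each patch is later treated as one input token, so a series of length T
    yields only T / patch_len tokens, shrinking the quadratic cost of
    self-attention by roughly a factor of patch_len**2.
    """
    n_patches = len(series) // patch_len           # drop any trailing remainder
    return series[: n_patches * patch_len].reshape(n_patches, patch_len)

series = np.sin(np.linspace(0, 12 * np.pi, 512))   # toy signal, T = 512
patches = patchify(series, patch_len=8)
print(patches.shape)                               # (64, 8): 64 tokens instead of 512
```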

Researchers from Carnegie Mellon University and the University of Pennsylvania have developed MOMENT, an open-source set of foundation models for general-purpose time series analysis. Built on the Time Series Pile, a diverse collection of public time series data, MOMENT addresses time-series-specific challenges and enables large-scale multi-dataset pre-training. The transformer models are pre-trained extensively on a masked time series prediction task spanning multiple domains, yielding versatility and robustness across diverse time series analysis tasks.

MOMENT collects a varied range of public time series data, dubbed the Time Series Pile, consolidating datasets from different repositories to overcome the lack of a comprehensive time series corpus. The compiled datasets encompass multiple tasks, including long- and short-horizon forecasting, classification, and anomaly detection. MOMENT's design pairs a transformer encoder with a reconstruction head, pre-trained on a masked time series prediction task (sketched below). Depending on the task's needs, MOMENT is then fine-tuned for downstream operations such as forecasting, classification, anomaly detection, and imputation.
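The following PyTorch sketch shows the masked-prediction idea in miniature: random patches are replaced by a learnable mask embedding, a transformer encoder processes the sequence, and a reconstruction head is trained to recover the hidden values. The module names, layer sizes, and masking ratio are assumptions for illustration, not MOMENT's actual implementation:

```python
import torch
import torch.nn as nn

class MaskedPatchModel(nn.Module):
    """Toy encoder + reconstruction head for masked time series prediction.

    A hypothetical stand-in for MOMENT's setup: patch embeddings feed a
    transformer encoder, and a linear head maps each token back to raw values.
    """
    def __init__(self, patch_len=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learnable [MASK] embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)             # reconstruction head

    def forward(self, patches, mask):
        # patches: (batch, n_patches, patch_len); mask: (batch, n_patches) bool
        tokens = self.embed(patches)
        tokens[mask] = self.mask_token                        # hide the masked patches
        return self.head(self.encoder(tokens))

model = MaskedPatchModel()
patches = torch.randn(4, 64, 8)                  # a batch of patched series
mask = torch.rand(4, 64) < 0.3                   # mask ~30% of patches at random
recon = model(patches, mask)
loss = ((recon - patches)[mask] ** 2).mean()     # MSE on masked patches only
loss.backward()
```

Computing the loss only on masked positions forces the encoder to infer hidden values from surrounding context, which is what makes the learned representations transferable to downstream tasks like imputation and forecasting.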

The study behind MOMENT compares it with other deep learning and statistical machine learning models across a range of tasks. Particularly noteworthy is the superior performance of statistical and non-transformer-based methods: ARIMA for short-horizon forecasting, N-BEATS for long-horizon forecasting, and k-nearest neighbors for anomaly detection.

In conclusion, the research introduces MOMENT, the first open-source family of time series foundation models, developed through comprehensive data compilation, large-scale pre-training, and attention to time-series-specific challenges. Using the Time Series Pile and innovative pre-training strategies, MOMENT performs impressively across transformer models of various sizes. The research also proposes an experimental benchmark for evaluating time series foundation models and highlights the strong performance of smaller statistical and shallower deep learning methods. By releasing the Time Series Pile along with code and model weights, the researchers aim to foster collaboration and further advancements in time series analysis.
