Robust benchmarks are essential for researchers because they provide a rigorous framework for evaluating novel methods across an array of datasets. These benchmarks advance the field by fostering innovation and ensuring fair comparisons among competing methods. However, existing benchmarks for Time Series Forecasting (TSF) are limited both in their coverage of application domains and in their ability to guarantee fair comparisons.
Researchers from East China Normal University, Huawei Cloud Computing Technologies, and Aalborg University have introduced a new tool, the Time Series Forecasting Benchmark (TFB), specifically designed to address these limitations. TFB offers a curated, well-organized collection of complex and realistic datasets from diverse domains, making it a robust platform for evaluating forecasting methods while addressing dataset bias and limited coverage.
TFB has several key characteristics that are crucial for fair and rigorous TSF evaluation. It covers a broad range of existing methods, including statistical learning, machine learning, and deep learning approaches, and it supports an array of evaluation strategies and metrics. This enables comprehensive evaluations across methodologies and evaluation settings, thereby enriching TSF research. TFB's scalable and flexible pipeline improves the fairness of method comparisons by applying uniform evaluation strategies to standardized datasets, which reduces bias and leads to more accurate performance assessments.
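To make the idea of a uniform evaluation strategy concrete, here is a minimal sketch of how a benchmark might apply the same train/test split and the same point-forecast metrics to every method. This is illustrative only and does not reflect TFB's actual API; the function names and the naive baseline are assumptions for the example.

```python
# Illustrative sketch (NOT TFB's actual API): one fixed train/test split
# and standard metrics applied identically to every forecasting method,
# the kind of uniform evaluation strategy a benchmark enforces.
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error over the forecast horizon
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    # Mean squared error over the forecast horizon
    return float(np.mean((y_true - y_pred) ** 2))

def evaluate(series, forecaster, horizon):
    """Train on all but the last `horizon` points, forecast the rest,
    and score the forecast with the same metrics for every method."""
    train, test = series[:-horizon], series[-horizon:]
    preds = forecaster(train, horizon)
    return {"MAE": mae(test, preds), "MSE": mse(test, preds)}

# A naive "repeat the last observed value" forecaster as a stand-in method.
naive = lambda train, h: np.full(h, train[-1])

series = np.sin(np.linspace(0, 20, 200))
scores = evaluate(series, naive, horizon=24)
print(scores)
```

Because every method is scored through the same `evaluate` call on the same split, differences in reported numbers reflect the methods themselves rather than evaluation choices.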
By running experiments on TFB, the researchers uncovered valuable insights into how different TSF methods perform across datasets with varying characteristics. They found that statistical methods such as VAR and linear regression perform comparably to state-of-the-art methods, and that Transformer-based approaches are effective on datasets exhibiting strong seasonality and non-linear patterns.
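The finding about simple statistical baselines can be illustrated with a small autoregressive linear-regression forecaster: lagged values are the features, and a least-squares fit produces multi-step forecasts recursively. This is a hedged sketch of the general technique; the lag window, the synthetic series, and all function names are assumptions for the example, not the paper's exact setup.

```python
# Hedged sketch: an autoregressive linear-regression baseline of the kind
# the study found competitive with deep models. Lagged values serve as
# features; window sizes and names here are illustrative.
import numpy as np

def make_lag_matrix(series, n_lags):
    # Row t holds series[t : t + n_lags]; the target is the next value.
    X = np.column_stack(
        [series[i : len(series) - n_lags + i] for i in range(n_lags)]
    )
    y = series[n_lags:]
    return X, y

def fit_linear_ar(series, n_lags):
    # Least-squares fit with an intercept column appended.
    X, y = make_lag_matrix(series, n_lags)
    Xb = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def forecast(series, coef, n_lags, horizon):
    # Roll the model forward, feeding each prediction back in as a lag.
    history = list(series[-n_lags:])
    preds = []
    for _ in range(horizon):
        x = np.append(history[-n_lags:], 1.0)  # lags + intercept term
        yhat = float(x @ coef)
        preds.append(yhat)
        history.append(yhat)
    return np.array(preds)

t = np.arange(300)
series = np.sin(2 * np.pi * t / 24) + 0.01 * t  # seasonal signal + trend
coef = fit_linear_ar(series, n_lags=48)
preds = forecast(series, coef, n_lags=48, horizon=24)
```

On a clean seasonal-plus-trend signal like this one, such a baseline tracks the continuation closely, which is consistent with the benchmark's observation that simple linear methods remain hard to beat on many datasets.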
In conclusion, TFB is a significant advancement in the TSF field, providing researchers with a standardized platform for evaluating forecasting methods. By addressing fairness, dataset diversity, and method coverage, TFB aims to foster innovation and enable more robust comparisons among competing methodologies, pushing TSF research forward. The code is available on GitHub, and the paper is available for further reading.