Concatenation-Informer: Pre-Distilling and Concatenation Improve Efficiency and Accuracy
Abstract
Time series data are ubiquitous in the real world, and a large share of them are long time series, such as weather records and industrial production records. The long-term dependencies inherent in long time series place extremely high demands on a model's feature-extraction ability, and the sheer sequence length directly drives up computational cost, so the model must also be efficient. This paper proposes Concatenation-Informer, which contains a Pre-distilling operation and a Concatenation-Attention operation, to forecast long time series. The Pre-distilling operation shortens the sequence while effectively extracting context-related features. The Concatenation-Attention operation concatenates the attention mechanism's input and output to improve parameter efficiency. The total space complexity of Concatenation-Informer is lower than that of the Informer.
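The two operations named in the abstract can be sketched as follows. This is a minimal NumPy illustration of the idea, not the paper's implementation: the actual Pre-distilling step likely uses a learned convolution with an activation before pooling, whereas here it is reduced to a stride-2 max-pool over time, and all weight matrices and function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pre_distill(x):
    """Sketch of Pre-distilling: halve the sequence length by max-pooling
    over adjacent time steps (a stand-in for conv + pooling)."""
    length, d = x.shape
    return x.reshape(length // 2, 2, d).max(axis=1)

def concatenation_attention(x, wq, wk, wv, w_out):
    """Sketch of Concatenation-Attention: standard self-attention whose
    input and output are concatenated on the feature axis, then projected
    back to the model dimension."""
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d_k))        # (L, L) attention weights
    out = attn @ v                                # ordinary attention output
    combined = np.concatenate([x, out], axis=-1)  # reuse the raw input features
    return combined @ w_out                       # project 2*d_model -> d_model

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16
x = rng.standard_normal((seq_len, d_model))
wq, wk, wv = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
w_out = rng.standard_normal((2 * d_model, d_model)) * 0.1

x_short = pre_distill(x)                          # (8, 16) -> (4, 16)
y = concatenation_attention(x_short, wq, wk, wv, w_out)
print(x_short.shape, y.shape)                     # (4, 16) (4, 16)
```

Halving the sequence before attention shrinks the L×L attention map fourfold, which is where the memory saving over a plain Informer block would come from in this sketch.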