
Time Series Forecasting of Key Business Indicators: Methods, Models, and Practical Deployment

This article presents a comprehensive study on forecasting critical business metrics such as traffic, order volume, and GMV using traditional, machine‑learning, and deep‑learning time‑series models, detailing feature engineering, model design, experimental comparison, online deployment, and monitoring strategies.


Background – Accurate prediction of key internet‑industry metrics (traffic, orders, GMV) is essential for budgeting and strategic decisions. These metrics form a classic time‑series forecasting problem, especially challenging around holidays and other irregular events.

Problem Definition & Challenges – The target is a 30‑day forecast for multiple indicators, with special emphasis on holiday periods. Challenges include external shocks (policy, disasters), limited historical samples, and the need for multi‑feature, long‑horizon predictions.

Data Selection & Feature Construction – Daily historical data for several years were collected. Seven holiday‑related temporal features (e.g., is_holiday, day_of_week, days_to_next_holiday) and additional covariates were engineered, resulting in about 20 features.
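As a sketch of how such temporal features can be derived with pandas (the holiday calendar and helper name here are hypothetical, not taken from the article, which used its own internal calendar):

```python
import numpy as np
import pandas as pd

# Hypothetical holiday calendar for illustration only.
HOLIDAYS = pd.DatetimeIndex(["2023-01-01", "2023-05-01", "2023-10-01"]).sort_values()

def add_temporal_features(df, date_col="date"):
    """Attach three of the ~7 holiday-related temporal features described above."""
    out = df.copy()
    dates = pd.DatetimeIndex(pd.to_datetime(out[date_col]))
    out["day_of_week"] = dates.dayofweek                 # 0 = Monday
    out["is_holiday"] = dates.isin(HOLIDAYS).astype(int)
    # Index of the first holiday on or after each date.
    idx = np.minimum(HOLIDAYS.searchsorted(dates), len(HOLIDAYS) - 1)
    gap = (HOLIDAYS[idx] - dates).days
    out["days_to_next_holiday"] = np.clip(gap, 0, None)  # 0 past the last listed holiday
    return out
```

The remaining covariates (lags, rolling statistics, channel identifiers, etc.) would be appended the same way to reach the ~20-feature input described below.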

Model Overview

Traditional statistical models (moving average, ARIMA, exponential smoothing) are interpretable but usually univariate and suffer from error accumulation in multi‑step forecasts.
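The error-accumulation problem can be made concrete with a toy recursive forecaster: each one-step prediction is fed back in as the next input, so any one-step error is reused at every later step. A minimal AR(1) sketch (not the article's code):

```python
import numpy as np

def fit_ar1(y):
    """Least-squares fit of the toy one-step model y[t] = a * y[t-1] + b."""
    X = np.column_stack([y[:-1], np.ones(len(y) - 1)])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a, b

def recursive_forecast(y, steps):
    """Multi-step forecast by feeding each prediction back as input.
    Any one-step error compounds over the horizon."""
    a, b = fit_ar1(y)
    preds, last = [], y[-1]
    for _ in range(steps):
        last = a * last + b   # the model never sees a true observation again
        preds.append(last)
    return np.array(preds)
```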

Machine‑learning models (linear regression, tree‑based, Prophet) handle multivariate inputs but still face multi‑step error buildup.

Deep‑learning models (TCN, LSTM, Transformer) overcome these drawbacks. The following models were evaluated (Prophet is included again as a classical baseline):

Prophet – Decomposes a series into trend, seasonality, and holiday components: y(t) = g(t) + s(t) + h(t) + ϵ_t, where g(t) = kt + m in the simplest linear‑trend case.
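A toy numerical illustration of this additive structure (all component shapes and constants below are invented for the example; Prophet fits its components from data, with a Fourier series for s(t)):

```python
import numpy as np

def g(t, k=2.0, m=10.0):
    """Trend component, linear case: g(t) = k*t + m."""
    return k * t + m

def s(t, period=7, amp=3.0):
    """Seasonal component (a single Fourier term as a stand-in)."""
    return amp * np.sin(2 * np.pi * t / period)

def h(t, holidays=(30,), lift=5.0):
    """Holiday component: a flat lift on designated days."""
    return lift * np.isin(t, holidays)

def y_hat(t):
    """Additive forecast y(t) = g(t) + s(t) + h(t)."""
    return g(t) + s(t) + h(t)
```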

Informer – Transformer‑based model with sparse self‑attention (O(L log L)) and attention distillation for long sequences.

Autoformer – Introduces a deep decomposition architecture and an auto‑correlation mechanism to replace point‑wise attention, achieving state‑of‑the‑art results on long‑term forecasts.
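Auto-correlation itself can be computed efficiently via the FFT (Wiener–Khinchin theorem). A minimal sketch of delay discovery along these lines (a simplification, not Autoformer's actual per-head mechanism over learned representations):

```python
import numpy as np

def autocorrelation(x):
    """Autocorrelation of a series via FFT, the quantity Autoformer
    aggregates over instead of point-wise attention scores."""
    n = len(x)
    x = x - x.mean()
    f = np.fft.rfft(x, n=2 * n)                 # zero-pad to avoid circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]                         # normalize so acf[0] == 1

def top_lags(x, k=2):
    """The k delays with the strongest autocorrelation (lag 0 excluded)."""
    acf = autocorrelation(x)
    return list(np.argsort(acf[1:])[-k:][::-1] + 1)
```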

DLinear – Simplifies the pipeline by applying a decomposition layer followed by separate linear layers for trend and seasonal components.
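A rough numpy sketch of the DLinear forward structure (the weight matrices here are supplied externally for illustration; the real model learns them by gradient descent):

```python
import numpy as np

def decompose(x, kernel=5):
    """Moving-average decomposition: trend plus seasonal remainder."""
    pad = kernel // 2
    xp = np.pad(x, (pad, pad), mode="edge")
    trend = np.convolve(xp, np.ones(kernel) / kernel, mode="valid")
    return trend, x - trend

def dlinear_forward(x, W_trend, W_season):
    """One DLinear forward pass: a separate linear map (seq_len -> pred_len)
    per component, with the two projections summed back together."""
    trend, season = decompose(x)
    return W_trend @ trend + W_season @ season
```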

TimesNet – Extracts multiple periodicities via FFT, reshapes them into 2‑D tensors, and processes them with Inception‑style 2‑D convolutions before adaptive fusion.
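The period-discovery and reshaping steps can be sketched with a plain FFT (a simplified stand-in for TimesNet's learned, amplitude-weighted version):

```python
import numpy as np

def top_periods(x, k=2):
    """Pick the k strongest FFT frequencies and convert each to a period."""
    amp = np.abs(np.fft.rfft(x))
    amp[0] = 0                                  # drop the DC (mean) component
    freqs = np.argsort(amp)[-k:][::-1]          # strongest frequency first
    return [len(x) // f for f in freqs]

def fold_to_2d(x, period):
    """Reshape a 1-D series into (n_periods, period) so intra- and
    inter-period variation become the two axes of a 2-D tensor."""
    n = (len(x) // period) * period
    return x[:n].reshape(-1, period)
```

The 2-D tensors produced by `fold_to_2d` are what the Inception-style convolutions then process, one tensor per discovered period, before adaptive fusion.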

Experiments – Models were trained on the engineered features (input dimension 20) with various seq_len, label_len, and pred_len settings. The best configuration was seq_len=180, label_len=90. TimesNet achieved the lowest MSE/MAE, while Prophet was used in an ensemble to improve detection of sudden jumps.
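The seq_len / label_len / pred_len convention can be illustrated with a windowing helper (the pred_len=30 default is an assumption consistent with the 30-day horizon, not a value quoted by the article):

```python
import numpy as np

def make_windows(series, seq_len=180, label_len=90, pred_len=30):
    """Slice a (T, n_features) array into (encoder_input, decoder_target) pairs
    following the Informer-style convention: the decoder receives the last
    label_len steps of the encoder window plus pred_len future steps to predict."""
    X, Y = [], []
    for start in range(len(series) - seq_len - pred_len + 1):
        enc_end = start + seq_len
        X.append(series[start:enc_end])
        Y.append(series[enc_end - label_len:enc_end + pred_len])
    return np.array(X), np.array(Y)
```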

Deployment & Monitoring – Trained models were deployed online to generate daily T+30 forecasts. A monitoring system tracks prediction deviations at T+3, T+7, T+14, T+21, and T+30, producing reports for rapid model adjustment. Example results on a test channel showed average forecast bias of +1.03% during a holiday period.
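One way such deviation tracking might be implemented (the report shape and function names are hypothetical; only the checkpoint horizons and the signed-percentage bias metric come from the text above):

```python
import numpy as np

CHECKPOINTS = [3, 7, 14, 21, 30]   # horizons tracked by the monitoring system

def forecast_bias(actual, predicted):
    """Mean signed percentage bias, the form of the +1.03% figure quoted above."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    return float(np.mean((predicted - actual) / actual)) * 100

def horizon_report(actual, predicted):
    """Bias at each monitored horizon T+3 ... T+30."""
    return {f"T+{h}": forecast_bias(actual[:h], predicted[:h])
            for h in CHECKPOINTS if h <= len(actual)}
```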

Conclusion & Outlook – Deep learning has become dominant for time‑series forecasting, but its performance still depends on data volume. Future work includes richer feature engineering, confidence interval estimation, and hybrid models that combine traditional statistical insights with deep architectures.

Tags: deep learning, time series forecasting, business metrics, Prophet, Informer, Autoformer, TimesNet
Written by

Ctrip Technology

Official Ctrip Technology account, sharing and discussing growth.
