Strongest combination of directional accuracy (78%) and low error rates across the test set. It trains significantly faster than LSTM while matching its predictive performance on weekly S&P 500 data, making it the most practical choice for regular retraining.
The literature consistently shows that ensemble methods outperform single estimators on financial time series when the feature set is well engineered. XGBoost and Random Forest dominate on tabular data tasks, while LSTM models close the gap when longer temporal sequences are available. Linear Regression serves as a critical baseline: if no model can significantly beat it, the feature engineering likely needs revisiting. No model should be trusted in isolation; combining predictions across models gives a more reliable signal than any single forecast.
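A minimal sketch of the cross-model combination described above, assuming equal-weight averaging of point forecasts and a unanimity check on direction. The forecast values and `last_close` are illustrative placeholders, not results from the fitted models:

```python
import numpy as np

# Hypothetical next-week close forecasts; in practice these come from the
# fitted XGBoost, Random Forest, LSTM, and Linear Regression models.
forecasts = {
    "xgboost": 4510.0,
    "random_forest": 4495.0,
    "lstm": 4520.0,
    "linear_regression": 4480.0,
}

last_close = 4500.0  # last observed weekly close (illustrative)

# Simple equal-weight ensemble: average the point forecasts.
ensemble = float(np.mean(list(forecasts.values())))

# Directional signal: only act when every model agrees on the sign
# of the predicted move relative to the last close.
directions = {np.sign(p - last_close) for p in forecasts.values()}
unanimous = len(directions) == 1

print(ensemble, unanimous)
```

Weighted averaging (e.g. inverse-RMSE weights from a validation window) is a natural refinement once per-model error estimates are available.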
| Model | MAE | MAPE | RMSE | Dir. Acc. |
|---|---|---|---|---|
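For reproducibility, the table's metrics can be computed as follows. This is a sketch assuming `y_true` and `y_pred` are aligned weekly closing prices, with directional accuracy defined on week-over-week moves:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return MAE, MAPE (%), RMSE, and directional accuracy (%)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100
    rmse = np.sqrt(np.mean(err ** 2))
    # Directional accuracy: share of weeks where the predicted
    # week-over-week move has the same sign as the realized move.
    dir_acc = np.mean(
        np.sign(np.diff(y_pred)) == np.sign(np.diff(y_true))
    ) * 100
    return {"MAE": mae, "MAPE": mape, "RMSE": rmse, "Dir. Acc.": dir_acc}

# Toy example with made-up prices:
metrics = evaluate([100, 102, 101, 105], [101, 103, 100, 106])
print(metrics)
```

Note that MAPE is only well defined for strictly positive targets, which holds for index levels.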
| Model | Training Time | Complexity |
|---|---|---|