# Forecasting Error: Determining Forecast Fit

## What is Forecast Fit?

Forecast fit refers to how well your chosen forecast method fits your actuals. A good forecast fit captures the real patterns and trends in the data while ignoring random noise, producing a low percentage of forecasting error.

To determine whether your forecast method fits well, this post reviews the following key concepts:

• Forecast Fit – Residual Analysis
• Out of Sample Testing / Holdout Sample
• Forecast Error

At the bottom: beware the straight-line forecasting myth.

## Residual Analysis

Residuals indicate the difference between your chosen forecasting method and actuals. You can look at residuals over time and their distribution to understand how well the chosen forecast method fits to your historic data.

A residual time graph illustrates the difference between forecast values (red line) and your actual historical data (blue line) over time.

Ideally, a residual graph will look like noise. It should contain no discernible patterns or clusters of similar residuals. Otherwise, the graph indicates that the forecasting method is not picking up on seasonality, recurring promotions, price changes, or other events. You may also be missing exponential growth or decay.

The distribution of residuals should peak at or around zero. If it does not, your forecasting method is biased, meaning it is systematically over- or under-forecasting. For example, your data might contain a growing or declining trend but is being modeled with a level method. Alternatively, your data might contain exponential growth but is being modeled with a linear trend method.

If your residual distribution does not have a bell shape (a roughly symmetric hill), it most likely means that you have outliers skewing the curve, giving it a longer tail on one side than the other. In some cases, you might simply have too little data to produce a smooth distribution; in that case, ignore the graph until you have a more representative sample of residuals.
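The residual checks above can be sketched in a few lines. This is a minimal illustration with made-up numbers, not real demand data: it computes residuals and tests for bias by checking whether their mean sits near zero.

```python
# Residual analysis sketch: compare actuals to a hypothetical forecast
# and check for bias. All values are illustrative assumptions.

actuals   = [100, 112, 108, 121, 117, 130, 126, 138]
forecasts = [102, 109, 111, 118, 120, 127, 129, 135]

# Residual = actual minus forecast for each period.
residuals = [a - f for a, f in zip(actuals, forecasts)]

# A mean residual far from zero (relative to the scale of demand)
# suggests the method systematically over- or under-forecasts.
mean_residual = sum(residuals) / len(residuals)

print(f"residuals:     {residuals}")
print(f"mean residual: {mean_residual:.2f}")
```

Plotting `residuals` over time (or as a histogram) would reveal the patterns and skew discussed above; here the mean residual is close to zero, consistent with an unbiased method.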

## Out of Sample Testing / Holdout Sample

Out-of-sample testing is a popular way to test the likely accuracy of a forecasting method. It is unbiased since it’s stripped of all adjustments and filters.

To conduct this test, remove the most recent periods of demand history (the holdout sample) as if they did not exist. The number of periods you remove should correspond to your normal forecast horizon (e.g. if you forecast three months into the future, the holdout should be at least three months long). You can then apply different forecasting methods to the remaining history and measure each method's error against the holdout sample, for example using the Mean Absolute Deviation/Error (MAD or MAE). Holdout error will tend to be higher than in-sample error. The method with the lowest holdout MAD will likely be more accurate.

The holdout sample is strictly anecdotal data since it covers only a limited number of periods. So while it’s helpful to test different methods, it does not, by itself, determine which method is best.
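The procedure above can be sketched as follows. The history, the three-period horizon, and the two candidate methods (a level forecast and a naive trend extension) are all illustrative assumptions, not a prescribed implementation:

```python
# Out-of-sample test sketch: hold out the last 3 periods, forecast them
# with two simple methods, and compare MAD. Data is illustrative.

history = [100, 104, 109, 113, 118, 122, 127, 131, 136, 140, 145, 149]
horizon = 3

# Split off the holdout sample as if it did not exist.
train, holdout = history[:-horizon], history[-horizon:]

# Method A: "level" forecast -- repeat the last observed value.
level_forecast = [train[-1]] * horizon

# Method B: naive trend -- extend the average period-over-period change.
step = (train[-1] - train[0]) / (len(train) - 1)
trend_forecast = [train[-1] + step * (i + 1) for i in range(horizon)]

def mad(actual, forecast):
    """Mean Absolute Deviation between two equal-length series."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

print(f"level MAD: {mad(holdout, level_forecast):.2f}")
print(f"trend MAD: {mad(holdout, trend_forecast):.2f}")
```

Because this history has a steady upward trend, the trend method produces a much lower holdout MAD than the level method, which is exactly the comparison the test is designed to surface.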

## Forecasting Error

Forecasting error refers to the difference between your forecast and your actuals. It can be calculated in a variety of ways, including:

• Symmetric Mean Absolute Percent Error (SMAPE)
• Mean Absolute Percent Error (MAPE)
• Last Absolute Deviation Z-Score
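The first two metrics differ only in the denominator: MAPE divides each absolute error by the actual, while SMAPE divides by the average of the actual and the forecast. A minimal sketch with made-up values:

```python
# Error-metric sketch: MAPE and SMAPE over one forecast horizon.
# The actuals and forecasts are illustrative assumptions.

actuals   = [120, 135, 150, 160]
forecasts = [110, 140, 145, 170]

def mape(actual, forecast):
    # Mean Absolute Percent Error: average of |A - F| / |A|.
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

def smape(actual, forecast):
    # Symmetric MAPE: divides by the average of |A| and |F| instead,
    # which bounds the per-period error and softens the effect of
    # very small actuals.
    return 100 * sum(abs(a - f) / ((abs(a) + abs(f)) / 2)
                     for a, f in zip(actual, forecast)) / len(actual)

print(f"MAPE:  {mape(actuals, forecasts):.2f}%")
print(f"SMAPE: {smape(actuals, forecasts):.2f}%")
```

Note that both formulas break down when an actual (for MAPE) or both actual and forecast (for SMAPE) are zero, which is one reason multiple error measures exist.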