Vanguard Predictive Planning | SOLUTION BROCHURE
Forecast fit refers to how well your chosen forecast method fits your actuals. A forecast is considered a good fit if it captures the genuine patterns and trends in your data while ignoring random noise.
To determine whether your forecast method fits well, check out the following:
- Forecast Fit – Residual Analysis
- Out of Sample Testing / Holdout Sample
- Forecast Error
At the bottom: beware the straight line forecasting myth
Residuals indicate the difference between your chosen forecasting method and actuals. You can look at residuals over time and their distribution to understand how well the chosen forecast method fits to your historic data.
[Figure: residual time graph showing the difference between forecasts (red line) and actuals (blue line)]
Reading a Residual Time Graph
The residual time graph illustrates the difference between forecast values and your actual historical data over time.
Ideally, a residual graph will look like pure noise, with no discernible patterns or runs of same-sign residuals. If patterns remain, the graph indicates that the forecasting method is not picking up on seasonality, recurring promotions, price changes, or other events. You may also be missing exponential growth or decay.
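As a rough numeric proxy for the "residuals should look like noise" check, you can compute the lag-1 autocorrelation of the residual series. This is a minimal sketch with made-up residuals (not Vanguard's method; fuller diagnostics such as the Ljung-Box test examine many lags): residuals left by a method that missed a trend stay correlated from one period to the next, while noise-like residuals do not.

```python
def lag1_autocorrelation(residuals):
    """Correlation of the residual series with itself shifted by one period.
    Values near zero suggest noise; large values suggest leftover structure."""
    n = len(residuals)
    mean = sum(residuals) / n
    num = sum((residuals[i] - mean) * (residuals[i + 1] - mean)
              for i in range(n - 1))
    den = sum((r - mean) ** 2 for r in residuals)
    return num / den

# Made-up noise-like residuals: no strong lag-1 structure.
noise = [1.0, -0.5, -0.8, 1.2, 0.3, -1.1, 0.6, 0.9, -0.7, -0.2]
# Made-up residuals from a method that missed a trend: strongly correlated.
trending = [-4, -3, -2, -1, 0, 1, 2, 3, 4, 5]

print(lag1_autocorrelation(noise))     # small in magnitude
print(lag1_autocorrelation(trending))  # large and positive
```

A large positive value here is the numeric counterpart of the visible patterns described above: it tells you the method is leaving predictable structure behind in the residuals.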
Reading a Residual Distribution Graph
The peak of this graph should always be centered at or around zero. If it is not peaked near zero, it indicates that your forecasting method is biased, meaning it is systematically over- or under-forecasting. For example, your data might contain a growing or declining trend but is being modeled with a level method. Alternatively, your data might contain exponential growth but is being modeled with a linear trend method.
If your graph does not have a bell shape (i.e., a symmetric hill shape), it most likely means that you have outliers that are skewing your bell curve to have a longer tail on one side than the other. In some cases, you might simply have too little data to create a smooth distribution graph; if so, ignore the graph until you have a more representative sample of residuals.
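The two distribution checks above can be sketched numerically: the mean residual measures bias (the peak sitting away from zero), and a simple moment-based skewness flags a lopsided tail caused by outliers. All residual values below are made up for illustration.

```python
def mean(xs):
    return sum(xs) / len(xs)

def skewness(xs):
    """Standardized third moment; near 0 for a symmetric distribution,
    large and positive when the right tail is stretched by outliers."""
    m, n = mean(xs), len(xs)
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * var ** 1.5)

# Peak near zero: no systematic bias.
centered = [0.4, -0.6, 0.9, -1.1, 0.3, -0.2, 0.5, -0.3]
# Same shape shifted up, e.g. a level method applied to trending data.
shifted = [r + 3.0 for r in centered]
# One outlier stretches the right tail of the distribution.
with_outlier = [0.2, -0.3, 0.1, -0.2, 0.4, -0.1, 5.0]

print(mean(centered))        # near zero: unbiased
print(mean(shifted))         # far from zero: systematic under-forecasting
print(skewness(with_outlier))  # large and positive: long right tail
```

A mean residual well away from zero corresponds to the off-center peak described above, and a large skewness corresponds to the longer tail on one side.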
Out of Sample Testing / Holdout Sample
Out-of-sample testing is a popular way to test the likely accuracy of a forecasting method. It is unbiased since it’s stripped of all adjustments and filters.
To conduct this test, remove the most recent periods of demand history (the holdout sample) as if they did not exist. The number of time periods you remove should correspond to your normal forecast horizon (e.g., if you forecast three months into the future, the holdout should be at least three months long). You can then apply different forecasting methods and measure each one's error on the holdout sample, typically as Mean Absolute Deviation/Error (MAD or MAE). Holdout error will tend to be higher than in-sample error. The method with the lowest holdout MAD will likely be the most accurate.
The holdout sample is strictly anecdotal data since it covers only a limited number of periods. So while it’s helpful to test different methods, it does not, by itself, determine which method is best.
Forecast error refers to the difference between your forecast and your actuals. It can be calculated in a variety of ways, including:
- Symmetric Mean Absolute Percent Error (SMAPE)
- Mean Absolute Percent Error (MAPE)
- Last Absolute Deviation Z-Score
- Mean Absolute Deviation (MAD)
- Mean Absolute Error (MAE)
- Mean Absolute Deviation Percent (MADP)
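For concreteness, here are sketch implementations of three of the measures listed above, following their common textbook definitions (the numbers are made up; note that MAD and MAE are the same calculation under different names):

```python
def mad(forecast, actuals):
    """Mean Absolute Deviation / Mean Absolute Error (MAD / MAE)."""
    return sum(abs(f - a) for f, a in zip(forecast, actuals)) / len(actuals)

def mape(forecast, actuals):
    """Mean Absolute Percent Error (MAPE), in percent."""
    return 100 * sum(abs(f - a) / abs(a)
                     for f, a in zip(forecast, actuals)) / len(actuals)

def smape(forecast, actuals):
    """Symmetric MAPE (SMAPE), in percent; bounds the penalty for
    over-forecasting by scaling to the average of forecast and actual."""
    return 100 * sum(2 * abs(f - a) / (abs(f) + abs(a))
                     for f, a in zip(forecast, actuals)) / len(actuals)

forecast = [110, 105, 98]
actuals = [100, 100, 100]
print(mad(forecast, actuals))
print(mape(forecast, actuals))
print(smape(forecast, actuals))
```

Because the actuals here are all 100, MAD and MAPE coincide numerically (a deviation of 5.67 units is 5.67 percent), while SMAPE comes out slightly lower for the over-forecast periods.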
Straight Line Forecast Myth: A common misconception about forecasting is that straight-line forecasts are poor forecasts because your data is not a straight line. Your data usually has inherent variability, making it jump around. Sometimes that variability exhibits a trend or pattern, and sometimes it is just "noise." In the absence of real patterns, the best forecast might simply be a straight line. So what do you do with a straight-line forecast? That is where forecasting meets planning. A straight-line forecast is a clue to business planners that there are key trade-offs to measure.
Vanguard Software has 20 years of business forecasting and planning experience. Request a demo of the cloud IBP platform, Predictive Planning, to see how we enable better decision-making under uncertainty, and help companies achieve breakaway performance with advanced analytics forecasting.
Impact of Advanced Analytics Forecasting | RESEARCH REPORT
Vanguard Software introduced its first product for decision support analysis in 1995. Today, companies across every major industry and more than 60 countries rely on the Vanguard Predictive Planning platform. Vanguard Software is based in Cary, North Carolina.