
Forecast accuracy

742 views

What's the best method to measure the forecast accuracy when using a rolling forecast?

Extra info:
- Forecast is given on a monthly basis for the coming 12 months.
- Each month, the forecast for the next 12 months is reviewed.
- Production + delivery lead time is 5 months.

Forecasting
Planning
Analysis
Pieter Vissers
19 months ago

5 answers


Even if you are doing rolling updates of a forecast, I think it is important to 'freeze' a forecast prior to the start of the year and then use that as the benchmark. If your lead time is five months for orders, perhaps you can compare the frozen forecast with the revisions on a quarterly basis. Some companies I have worked with do a single reforecast of the official budget during the second quarter of the year so that there is a new forecast halfway through the year. There could be material events that completely throw the previous forecast into disarray; in that case, comparing with that now less relevant forecast is not that helpful.
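
For illustration, here is a minimal Python sketch of that comparison: actuals measured against the frozen benchmark and against a later revision. All numbers and month labels below are made up.

    # Hypothetical monthly actuals and two forecast snapshots (frozen vs. revised).
    actuals = {"2023-01": 100, "2023-02": 95, "2023-03": 110}
    frozen  = {"2023-01": 105, "2023-02": 100, "2023-03": 100}   # set before year start
    revised = {"2023-01": 102, "2023-02": 96,  "2023-03": 104}   # quarterly reforecast

    def mape(forecast, actuals):
        """Mean absolute percentage error over the months present in both series."""
        months = [m for m in actuals if m in forecast]
        return sum(abs(actuals[m] - forecast[m]) / actuals[m] for m in months) / len(months)

    print(f"MAPE vs frozen benchmark: {mape(frozen, actuals):.1%}")
    print(f"MAPE vs revised forecast: {mape(revised, actuals):.1%}")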

Barry Korn
19 months ago

If you're trying to assess the accuracy of a forecasting process, then your first question should be whether the method can forecast the past (i.e. run it against some known historical data). Without that, the future values are not useful no matter how finely you slice the forecast range.
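
For example, a rolling-origin backtest against known history might look like this (the 3-month average is just a placeholder method, and all the numbers are invented):

    # Hypothetical rolling-origin backtest: at each cut-off, forecast the next
    # month from history only, then compare with the known actual.
    history = [100, 98, 105, 110, 107, 112, 115, 111, 118, 120, 117, 125, 128, 124]

    def placeholder_forecast(past):
        """Placeholder method: average of the last 3 observations."""
        return sum(past[-3:]) / 3

    errors = []
    for cutoff in range(3, len(history)):
        predicted = placeholder_forecast(history[:cutoff])
        actual = history[cutoff]
        errors.append(abs(actual - predicted) / actual)

    print(f"Backtest MAPE: {sum(errors) / len(errors):.1%}")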

Douglas M. Brown
19 months ago

The classic best practice is to set the "plan" for the year using estimated demand values for each month.  Then, the forecast is the updated 'what you think is going to happen' versus this baseline plan of demand as the year progresses.  You want to store the following for each month:

  1. Original Baseline Plan Value - all 12 values for each month set at the beginning of the year
  2. Forecasted Value (also commonly called Projection) - updated, typically each month, based on what you know at the time.  You can save multiple versions of this, depending on whether your business needs it.
  3. Actual Value - as the month closes, enter actual values


The forecast is typically updated each month, but the baseline value remains static.  Most people measure Actual vs. Plan and Actual vs. Forecast; the Actual vs. Forecast comparison is a point-in-time measurement, since the forecast changes.

For example, at the start of the year I have my baseline plan of demand.  My forecast is typically 'the same' at this starting point as I know the same amount of information for both.

As the first month passes, perhaps a certain product starts to underperform in order volume.  I will likely downgrade my forecasted demand for this product over the remaining months.  But the plan is 'set'.  This is my first forecast.

If the product demand continues to degrade, I may downgrade the future months' forecasts even more, so the 'first' forecast is more bullish than the second, more downgraded forecast.  Some people store these various forecast iterations to measure forecast accuracy; others simply measure against the Plan and the latest Forecast entered.

If your lead time is long - and from the info it seems like it may be five months - then accurate forecasting is important, and measuring the changes across iterations of the forecast may be helpful.  So you may want to store your 11 versions of the Forecast, assuming you update it each month, and measure actuals against your original plan and then against the Forecast versions.  The version that is used to schedule production is the most critical, obviously.
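
As a rough sketch of that storage pattern in Python, keyed by snapshot month and target month (every number below is invented, and the 5-month offset just mirrors the lead time mentioned in the question):

    # Hypothetical store of forecast snapshots: snapshots[s][t] is the forecast
    # for target month t as it stood in snapshot month s (s = 0 is the start of
    # the year, i.e. the baseline plan).
    plan      = {1: 100, 2: 100, 3: 100, 4: 100, 5: 100, 6: 100}
    actuals   = {1: 96,  2: 92,  3: 90,  4: 88,  5: 85,  6: 83}
    snapshots = {
        0: {1: 100, 2: 100, 3: 100, 4: 100, 5: 100, 6: 100},  # start of year == plan
        1: {2: 95,  3: 95,  4: 95,  5: 95,  6: 95},           # after month 1 closed
        2: {3: 90,  4: 90,  5: 90,  6: 90},                   # after month 2 closed
    }

    def abs_pct_error(forecast, actual):
        return abs(actual - forecast) / actual

    # Actual vs. plan, and actual vs. the forecast that was current 5 months
    # before delivery (the version used to schedule production).
    for month, actual in actuals.items():
        snap = month - 5
        if snap in snapshots and month in snapshots[snap]:
            e_plan = abs_pct_error(plan[month], actual)
            e_fcst = abs_pct_error(snapshots[snap][month], actual)
            print(f"month {month}: vs plan {e_plan:.1%}, vs lag-5 forecast {e_fcst:.1%}")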

Autumn Bayles
19 months ago

Hi Pieter. Great info from Barry, Douglas & Autumn in their responses.

Understanding the purpose of the measurement may also expand responses to your question, i.e. are you benchmarking your specific forecast accuracy (FA) measuring technique for S&OP/management reporting, or are you trying to identify ways to fine-tune your accuracy levels? If the former, the responses above are great. If the latter, you may need to look beyond just FA and understand the characteristics of your current forecasting: am I using the right method for each type of product or customer category, do I have bias (positive/negative) in my forecast, do we understand the enrichment process and does it make sense for what I'm forecasting, what level are my safety stocks at and are they masking underlying FA issues, etc.
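
On the bias point in particular, it can help to track a signed error alongside an unsigned one; a quick sketch with made-up numbers:

    # Hypothetical check: bias keeps the sign of the error, accuracy does not.
    pairs = [(100, 92), (110, 101), (95, 90), (120, 108)]  # (forecast, actual) per month

    bias = sum(f - a for f, a in pairs) / sum(a for _, a in pairs)  # > 0 means over-forecasting
    mape = sum(abs(f - a) / a for f, a in pairs) / len(pairs)

    print(f"Bias (signed): {bias:+.1%}")
    print(f"MAPE:          {mape:.1%}")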

Matthew Theocharous
19 months ago

When measuring forecast accuracy, you have 2 critical dimensions:
1) Time to event: how long it is from the time you create your forecast until you have the actual results.
2) Method/Model: how did you create the forecast. This could be as simple as annotating that you use a 12-month rolling average or as complex as a specific regression type using specific independent variables.


The most important factor that usually gets overlooked in this tracking is who overrides what as part of the method detail. Normally, some level of "expert" adjustment is applied. It is imperative to track these adjustments to analyze whether they are value-added or not.

Once you have these 2 dimensions, you can quickly compare like forecast types over different forecast timelines or like forecast timelines over different forecast types and look at your error and see if there are any obvious drivers. Usually, you will find that the “expert” overrides are done in a manner that directly benefits the expert, which calls into question their validity. You can also do some simple statistical tests when your data is formatted like this to help support your findings.
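
A minimal sketch of that two-dimensional breakdown, bucketing error by lag, method, and an override flag (the rows below are invented for illustration):

    from collections import defaultdict

    # Hypothetical forecast log: one row per (lag in months, method, override
    # flag, forecast, actual) for a given target month.
    rows = [
        (5, "rolling_avg", False, 100, 90),
        (5, "rolling_avg", True,  110, 90),
        (2, "rolling_avg", False,  95, 90),
        (2, "regression",  True,   88, 90),
    ]

    buckets = defaultdict(list)
    for lag, method, overridden, forecast, actual in rows:
        buckets[(lag, method, overridden)].append(abs(forecast - actual) / actual)

    for (lag, method, overridden), errs in sorted(buckets.items()):
        print(f"lag={lag}m method={method:<12} override={overridden}: "
              f"MAPE {sum(errs) / len(errs):.1%}")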

Pat Lapomarda
18 months ago
