What's the best method to measure the forecast accuracy when using a rolling forecast?
- Forecast is given on a monthly basis for the coming 12 months.
- Each month, the forecast for the next 12 months is reviewed.
- Production plus delivery lead time is 5 months.
Even if you are doing rolling updates of a forecast, I think it is important to 'freeze' a forecast prior to the start of the year and then use that as the benchmark. If your lead time is five months for orders, perhaps you can compare the frozen forecast with the revisions on a quarterly basis. Some companies I have worked with do a single reforecast of the official budget during the second quarter of the year so that there is a new forecast halfway through the year. There could be material events that completely throw the previous forecast into disarray; in that case, comparing against that less relevant forecast is not that helpful.
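As a sketch of the frozen-benchmark idea (all figures below are hypothetical, and MAPE is used as the metric, though WAPE or a bias measure would work the same way):

```python
# Compare actuals against a forecast frozen before the year started,
# and against the latest rolling revision of the same months.
# All numbers are made up for illustration.

def mape(actuals, forecasts):
    """Mean absolute percentage error over paired actual/forecast values."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return sum(errors) / len(errors)

frozen = [100, 110, 120, 115]   # forecast frozen before the year started
revised = [102, 112, 118, 117]  # latest rolling revision of the same months
actual = [105, 108, 122, 119]   # what actually happened

print(f"Frozen MAPE:  {mape(actual, frozen):.1%}")
print(f"Revised MAPE: {mape(actual, revised):.1%}")
```

Running both metrics side by side each quarter shows whether the rolling revisions are actually improving on the frozen benchmark.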
If you're trying to assess the accuracy of a forecasting process, then your first question should be whether the method can forecast the past (i.e. run it against known historical data). Without that, the future values are not useful no matter how finely you slice the actual forecast range.
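The "forecast the past" check can be sketched as a rolling-origin backtest. The data is hypothetical, and a trivial 3-month moving average stands in for whatever method is actually being evaluated:

```python
# Rolling-origin backtest: repeatedly "fit" on history up to a cutoff,
# forecast the next point, and score the forecast against the known actual.
# The "model" here is a simple moving average, standing in for the real method.

history = [120, 125, 130, 128, 135, 140, 138, 145, 150, 148, 155, 160]

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` values."""
    return sum(series[-window:]) / window

errors = []
for cutoff in range(6, len(history)):          # start once 6 points are known
    forecast = moving_average_forecast(history[:cutoff])
    actual = history[cutoff]
    errors.append(abs(actual - forecast))

mae = sum(errors) / len(errors)
print(f"Backtest MAE over {len(errors)} origins: {mae:.1f}")
```

If the method cannot produce a reasonable error on data you already have, there is little reason to trust its forward-looking numbers.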
Hi Pieter. Great info from Barry, Douglas & Autumn in their responses.
Understanding the purpose of the measurement may also expand the responses to your question, i.e. are you benchmarking your specific FA measuring technique for S&OP/management reporting, or are you trying to identify ways to fine-tune your accuracy levels? If the former, the responses above are great. If the latter, you may need to look beyond just FA and understand the characteristics of your current forecasting: Am I using the right method for each type of product or customer category? Do I have bias (positive or negative) in my forecast? Do we understand the enrichment process, and does it make sense for what I'm forecasting? What level are my safety stocks at, and are they masking underlying FA issues?
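One of those diagnostics, the bias check, can be sketched with a signed-error summary (hypothetical figures, deliberately skewed high):

```python
# Bias check: MPE (mean percentage error, signed). A persistently positive
# value means over-forecasting; persistently negative means under-forecasting.
# Numbers are hypothetical and chosen to show an over-forecasting pattern.

actual = [100, 95, 110, 105, 98, 102]
forecast = [110, 104, 118, 112, 107, 111]

signed_pct_errors = [(f - a) / a for a, f in zip(actual, forecast)]
bias = sum(signed_pct_errors) / len(signed_pct_errors)
over = sum(1 for e in signed_pct_errors if e > 0)

print(f"Mean percentage error (bias): {bias:+.1%}")
print(f"Over-forecast in {over} of {len(signed_pct_errors)} periods")
```

A plain accuracy metric like MAPE hides this pattern, because absolute errors look the same whether you are consistently high or merely noisy.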
When measuring forecast accuracy, you have 2 critical dimensions:
1) Time to event: how long it is from the time when you create your forecast until you have the actual results.
2) Method/Model: how did you create the forecast. This could be as simple as annotating that you use a 12-month rolling average or as complex as a specific regression type using specific independent variables.
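These two dimensions fit naturally into one record per (creation month, target month, method) triple, from which the lag falls out by subtraction. A minimal sketch with hypothetical records:

```python
# Each forecast record captures when it was made, which month it targets,
# and how it was produced. Lag (time to event) is derived, not stored.
# All records and actuals below are hypothetical.

records = [
    # (created (yr, mo), target (yr, mo), method, forecast)
    ((2024, 1), (2024, 6), "12m-rolling-avg", 480),
    ((2024, 2), (2024, 6), "12m-rolling-avg", 495),
    ((2024, 1), (2024, 6), "regression", 505),
]
actuals = {(2024, 6): 500}

def lag_months(created, target):
    """Months between forecast creation and the month being forecast."""
    return (target[0] - created[0]) * 12 + (target[1] - created[1])

for created, target, method, fc in records:
    actual = actuals[target]
    ape = abs(actual - fc) / actual
    print(f"lag={lag_months(created, target)}m  method={method:16s} APE={ape:.1%}")
```

With a five-month production and delivery lead time, the lag-5 rows are the ones that matter most, since those are the forecasts your supply decisions were actually based on.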
The most important factor that usually gets overlooked in this tracking is recording who overrides what as part of the method detail. Normally, some level of "expert" adjustment is applied. It is imperative to track these adjustments to analyze whether they are value-added or not.
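Tracking whether adjustments add value is often called Forecast Value Added (FVA): keep both the statistical baseline and the adjusted number, then compare their errors. A sketch with hypothetical data:

```python
# Forecast Value Added: did the expert override beat the statistical baseline?
# Positive FVA means the adjustment reduced error; negative means it added error.
# All rows below are hypothetical.

rows = [
    # (statistical forecast, adjusted forecast, actual)
    (100, 120, 105),
    (200, 210, 195),
    (150, 150, 160),   # no override this period
]

def mape(pairs):
    """MAPE over (forecast, actual) pairs."""
    return sum(abs(a - f) / a for f, a in pairs) / len(pairs)

stat_mape = mape([(s, a) for s, _, a in rows])
adj_mape = mape([(j, a) for _, j, a in rows])
fva = stat_mape - adj_mape

print(f"Statistical MAPE: {stat_mape:.1%}")
print(f"Adjusted MAPE:    {adj_mape:.1%}")
print(f"FVA: {fva:+.1%}  ({'value added' if fva > 0 else 'value destroyed'})")
```

In this made-up example the overrides make accuracy worse, which is exactly the kind of finding this tracking exists to surface.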
Once you have these 2 dimensions, you can quickly compare like forecast types over different forecast timelines or like forecast timelines over different forecast types and look at your error and see if there are any obvious drivers. Usually, you will find that the “expert” overrides are done in a manner that directly benefits the expert, which calls into question their validity. You can also do some simple statistical tests when your data is formatted like this to help support your findings.
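One such simple test is a paired sign test, which needs nothing beyond the standard library: count how often one method's error is lower than the other's over the same target months, and ask whether that could be chance. The error series below are hypothetical:

```python
from math import comb

# Paired absolute errors for two forecast types over the same 10 target months.
# Hypothetical numbers: method A is, say, the statistical baseline and
# method B the expert-adjusted forecast.
errors_a = [4, 6, 3, 8, 5, 7, 2, 6, 4, 5]
errors_b = [6, 7, 5, 9, 6, 6, 4, 8, 5, 7]

wins_a = sum(1 for a, b in zip(errors_a, errors_b) if a < b)
n = sum(1 for a, b in zip(errors_a, errors_b) if a != b)  # ties excluded

# One-sided sign test: probability of at least wins_a wins out of n
# if the two methods were really equally good (a fair coin).
p_value = sum(comb(n, k) for k in range(wins_a, n + 1)) / 2 ** n

print(f"Method A had lower error in {wins_a} of {n} periods (p = {p_value:.3f})")
```

A small p-value here says the difference in error is unlikely to be noise, which is useful backup when telling an "expert" that their overrides are not helping.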