I recently attended a John Galt Systems User Conference event to train and speak. Before leaving the next day, I attended a morning session where the company discussed its Forecastability Analysis™, which provides information on how well various methods are able to forecast items in a product line. The analysis, for example, identifies high- versus low-“forecastable” products. The presenter showed a two-by-two matrix similar to the Alan L. Milliken (from BASF) matrix described in my Fall 2009 Journal of Business Forecasting (JBF) column, titled “Volume-Variance Analysis.” That column depicted a vertical axis labeled Sales Volume versus a horizontal axis labeled Variability of Sales (co-labeled: Coefficient of Variation = [(Standard Deviation of Period Sales or Forecast Error)/(Average Period Sales)]). BASF uses it to apply differing inventory management strategies depending on which quadrant a product falls into when the matrix is split into four High-Low quadrants.
Similarly, the John Galt presenter’s matrix depicted a vertical axis labeled Value versus a horizontal axis labeled Forecastability; the latter depends on the forecast accuracies of specified forecasting methods, such as time-series and life cycle. The company stated that it could be used to make decisions on “forecast, production, purchasing, and inventory policies tailored to the needs of product groups.” A variety of questions were asked at the end of the presentation before I raised my hand to make a point. It was: “Forecasters also really need this type of information to keep their jobs!”
I went on to say that, the day before, I had overheard someone mention that he was working with his boss to set his annual job performance review factors, which included a demand forecast accuracy target. I believe performance reviews based on achieving specific forecast error levels are generally risky propositions for forecasters. Eventually a forecast organization will fail to achieve an established target, so trying to do so year after year is job suicide. Demand forecast accuracy, for example in terms of Mean Absolute Percent Error (MAPE), is highly dependent on a company’s inherent demand variation, which differs greatly across companies. Companies with large demand variations will invariably have higher MAPEs, while those with low demand variations will have smaller MAPEs. In addition, companies with aggressive marketing and sales organizations will tend to experience lower forecast accuracy because heavily promoted and new products are fraught with much higher demand uncertainties, leading to increased MAPEs. Finally, in highly competitive industries, competitor actions add to demand variations as well.
To exaggerate, I said that forecasters could assure more accurate forecasts if, and only if, their company’s sales and marketing organizations would agree to stop all promotions and the use of aggressive sales tactics. Furthermore, forecasters could guarantee 100% forecast accuracy if their company would agree to shut its doors and go out of business. Then a forecast of zero demand would be 100% accurate! These, of course, are unrealistic demands for a forecaster to make. Instead, when faced with the proposition to include forecast accuracy targets in their job performance reviews, they should be prepared by knowing the forecastability of their product lines in advance. This should include the forecastability of various breakdowns of the product line, such as by demand drivers, product groups, markets, and product aggregations.
The Winter 98/99 issue of the JBF included my column titled “Forecasting Is About Understanding Variations.” In it I discussed the fact that forecasting is all about understanding why demand has varied in the past and using that knowledge to forecast how it will vary in the future. After all, if demand did not vary, forecasting would be an easy job: just use the naïve forecasting approach (e.g., this period’s forecast is last period’s actual demand). I proposed understanding a product’s inherent demand variations by computing its Mean Absolute Percent Variation (MAPV), defined as the average absolute deviation from the historical mean demand divided by the historical mean demand, expressed as a percent.
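The MAPV calculation described above can be sketched in a few lines of Python (the function name and the sample demand history here are my own illustrations, not from the column):

```python
def mapv(demand):
    """Mean Absolute Percent Variation: the average absolute deviation
    from the historical mean demand, divided by that mean, as a percent."""
    mean = sum(demand) / len(demand)
    mad = sum(abs(d - mean) for d in demand) / len(demand)
    return 100 * mad / mean

# Hypothetical monthly demand history
history = [100, 140, 60, 120, 80]
print(round(mapv(history), 1))  # 24.0
```

A product whose demand never varied would have a MAPV of 0%, which is why such a product would be trivially forecastable.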
A comparison of a product forecast’s MAPE with its MAPV gives an indication of how well one is doing in forecasting a product’s demand. For example, if a product’s MAPV were 40% and its MAPE were 20%, then the forecasting process is explaining half of the inherent variations. Meanwhile, if the MAPE were 60%, then the process is making things worse by adding more variation and uncertainty into the forecast.
An analysis that compares MAPE to MAPV calculates the Percent of Variation Explained (PVE), where

PVE = 100 × (1 − MAPE/MAPV)

It is an indicator of what portion of the inherent variation in demand is being explained. It is also a good measure of forecast accuracy over time because it can tell you whether your forecasting of a product is getting better or worse. Solely monitoring the MAPE for a product’s forecast, in isolation from its MAPV, might be adequate for most situations; however, it may not be when demand variation is changing. To illustrate the value of PVE and MAPV in contrast to MAPE alone, consider a case where a product’s MAPE has increased over time by 10%. On the face of it, it would seem that forecasting accuracy had deteriorated. However, if at the same time its MAPV had increased by 25%, this would actually result in a higher PVE; thus forecast accuracy would have actually improved, since a greater percent of the demand variation was explained. Note: MAPE would have increased by 25% if forecast accuracy (in terms of PVE) had stayed the same.
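The arithmetic behind that illustration can be checked with a short Python sketch (the starting values of 20% MAPE and 40% MAPV are hypothetical numbers of my own choosing):

```python
def pve(mape, mapv):
    """Percent of Variation Explained: PVE = 100 * (1 - MAPE/MAPV)."""
    return 100 * (1 - mape / mapv)

# Baseline: MAPE 20%, MAPV 40%
print(pve(20, 40))  # 50.0

# MAPE rises 10%, but MAPV rises 25%: PVE actually improves
print(round(pve(20 * 1.10, 40 * 1.25), 1))  # 56.0
```

Had MAPV risen 25% while PVE held at 50%, MAPE would have risen 25% as well (to 25%), which is the point made in the note above.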
PVE is also useful for benchmarking demand forecast errors. When people ask me what MAPE is “best-in-class” in order to benchmark against their own, I advise them that it may not be wise to compare the MAPEs of different companies. If one company’s demand is more volatile, it will naturally experience worse forecast accuracy when measured by MAPE. Take for example Company A that has a MAPE of 20% and a MAPV of 20% (i.e., a PVE of 0%) and Company B that has a MAPE of 30% and a MAPV of 50% (a PVE of 40%). Standard use of MAPE as an accuracy measure for benchmarking would say Company A is doing a better job of forecasting than Company B. However, Company B is the clear winner using PVE.
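Continuing the sketch above, the Company A/B benchmarking comparison shows how MAPE and PVE can pick different “winners” (the company figures are the hypothetical ones from the text):

```python
def pve(mape, mapv):
    """Percent of Variation Explained: PVE = 100 * (1 - MAPE/MAPV)."""
    return 100 * (1 - mape / mapv)

companies = {"A": (20, 20), "B": (30, 50)}  # name -> (MAPE %, MAPV %)

best_by_mape = min(companies, key=lambda c: companies[c][0])
best_by_pve = max(companies, key=lambda c: pve(*companies[c]))
print(best_by_mape, best_by_pve)  # A B
```

Ranking by raw MAPE favors Company A, but once each company’s inherent demand variation is accounted for, Company B is explaining far more of its variation.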
While PVE could be used to estimate a product line’s forecastability, other approaches have emerged since I wrote that column more than 16 years ago. John Galt’s Forecastability Analysis is one of these; it examines a variety of different forecast methods to gauge the forecastability of a product. In his book, The Business Forecasting Deal (John Wiley & Sons, Inc., 2010), Michael Gilliland proposed using a Forecast Value Added (FVA) analysis to assess how well a product is being forecasted. This is done by comparing the MAPE of a product’s forecast to one that would be generated using the naïve forecast method.
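A simplified sketch of that FVA comparison follows; it treats FVA as the naïve-benchmark MAPE minus the forecasting process’s MAPE, with the naïve forecast being last period’s actual. The function names and demand figures are my own illustrative assumptions, not Gilliland’s implementation:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percent Error over matched periods."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts)]
    return 100 * sum(errors) / len(errors)

def fva(actuals, forecasts):
    """Forecast Value Added: naive MAPE minus the process MAPE.
    Naive forecast for period t is the actual from period t-1.
    A positive FVA means the process beats the naive benchmark."""
    naive = actuals[:-1]
    return mape(actuals[1:], naive) - mape(actuals[1:], forecasts[1:])

actuals = [100, 120, 90, 110, 95]    # hypothetical demand history
process = [98, 115, 95, 105, 100]    # hypothetical statistical forecasts
print(round(fva(actuals, process), 1))
```

A process whose FVA hovers near zero (or goes negative) is adding no value over simply repeating last period’s demand, which is the warning sign FVA analysis is designed to surface.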
USING FORECASTABILITY ESTIMATES
Profiling a product line’s Forecastability is useful for learning how to improve forecasting and to make better planning decisions over time. Forecasters, for example, can identify which product groupings are being forecast well and which are not. This can help them identify which ones are worth spending more effort on in order to improve their forecast accuracy. In conjunction with the traditional A/B/C analysis—in which products are ranked by a factor such as corporate value and grouped into high, medium and low categories—they can identify which A, and possibly which B, products are best to work on in order to improve the overall forecast accuracy.
A forecastability profile of a product line can also be useful to forecasters for understanding the forecastability of the five major components that drive demand variability; namely, the Seasonal, Trend, Promotional/Event, Business Cycle, and Unknown/Random variation drivers. For example, current forecast methods might be doing a good job of forecasting seasonal impacts, but not as well in predicting variations due to trends. If so, less effort should be applied to trying to improve seasonal forecasting, and more should be spent on improving the forecasting of trends.
In addition, profiling a product line’s forecastability is useful to planners who need to use forecast errors for risk management tactics, such as supply-side planning for excess production capacity and for setting inventory “buffer stocks” to mitigate potential supply shortages, as well as for demand-side contingency planning to cover potential shortfalls in demand.
All the ways discussed above to use forecastability analysis to improve forecasting and planning were the primary focus of my column over 16 years ago. However, at the time, I failed to realize that more fully understanding your ability to forecast your product line might also be useful for keeping your job. My opinion is that a forecasting organization needs to continually hold the confidence of the executive team; the executives must understand that the forecasting organization is the only one that can provide the most accurate forecasts year after year. Thus “sustained credibility” should be the most important aspect of a forecaster’s annual job performance review. That said, some day your boss might propose including forecast accuracy targets as part of your annual job performance review. That discussion will go a lot more smoothly if you already know your product line’s forecastability!
This article was first published in the Journal of Business Forecasting Fall 2015.