The Whiplash Effect
Before vendors began selling demand planning software, forecasting was essentially a balancing act between competing factions within the enterprise. The marketing department would set a high target because it wanted the product to be a success, says Tom Burns, CFO for the enterprise network division of telecommunications equipment maker Alcatel. Salespeople, on the other hand, would come in with conservative forecasts since they wanted to keep their sales quotas low and manageable. "If marketing says we are going to sell $US150 million, and the sales guy says we are going to sell $US75 million, what do we tell the supply chain guys to build?" Burns asks.
One can see the appeal of a computerised system that could provide an objective answer to that question. Furthermore, the maths needed to build these systems has been around for nearly 75 years. It was Ronald Fisher, a British mathematician working after World War I, who first conceived of a system that could take numbers, look for patterns and then make predictions based on those patterns. The result, the classic regression model, is still used in 90 per cent of demand planning software today.
Regression essentially takes multiple variables, makes inferences about the relationships between them, and ultimately charts the result as a curve showing upward or downward trends. The curve can be extended to predict future results. For example, a regression study of the rate of death among people between the ages of 20 and 80 would, despite numerous exceptions, find a general trend that as people got older, the rate of death increased. You could then predict that an 81-, 85- or 90-year-old person would be even more likely to die than someone who is 80.
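The trend-fitting the article describes can be sketched in a few lines of Python. This is a minimal ordinary-least-squares example, not the method any particular vendor ships; the mortality figures are illustrative numbers made up for the sketch, not real actuarial data.

```python
# Hypothetical mortality rates (deaths per 1,000 per year) for ages 20-80.
# The figures are illustrative only.
ages = [20, 30, 40, 50, 60, 70, 80]
rates = [1.0, 1.4, 2.4, 4.9, 11.0, 24.0, 55.0]

n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(rates) / n

# Ordinary least squares: slope = covariance(x, y) / variance(x)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, rates))
         / sum((x - mean_x) ** 2 for x in ages))
intercept = mean_y - slope * mean_x

def predicted_rate(age):
    """Extend the fitted trend line past the observed data to predict new ages."""
    return slope * age + intercept
```

Because the fitted slope is positive, extending the line predicts a higher rate of death at 85 or 90 than at 80, which is exactly the extrapolation step the article describes.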
The problem is that regression analysis - and any other statistical model a demand forecasting system may use - requires clean data and a potential relationship among the variables, says Rob Cooley, US technical director at KXEN, a demand forecasting vendor. In Fisher's day there wasn't the computational power to consider more than a few variables, which made it easier to focus on the accuracy of a few data points, as in the rate-of-death example. But today, computerised systems make it possible to consider hundreds, if not thousands, of variables - anything from weather to time of day - and a correspondingly vast number of data points.
Most of these data points are inaccurate, or more specifically, are only an estimate of what actually happened. The most common example is guessing what consumers bought based on what the company itself sold. While a retail store knows how much of a product it sells, the manufacturer only knows how much the retailer orders - and more often than not there are distributors acting as middlemen to further muddy the transfer of information about sales.
Logistics executives at Procter & Gamble studied how this dynamic affected demand planning and found that the further the data sits from the point of sale, the more its accuracy decreases and the more forecasting errors increase. For example, P&G found that consumers bought its Pampers nappies at a fairly steady clip and that retailers' orders reflected this: orders had the moderate swings one would associate with relatively flat demand. The distributors, however, would react to moderate increases by not only increasing their orders but also upwardly adjusting their reserve stock, signalling a much larger increase in demand back to P&G. The manufacturer, in turn, would ramp up its Pampers production and pass the bullwhip effect along the supply chain. Ultimately, everyone would be left holding excess inventory.
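The amplification P&G observed can be illustrated with a toy simulation. This is a simplified sketch of the bullwhip dynamic described above - the `amplify` function, the 0.5 safety factor and the demand series are all assumptions made for the example, not P&G's actual model - in which each tier passes on the orders it receives but pads them with extra reserve stock whenever demand ticks up.

```python
import statistics

def amplify(orders, safety_factor=0.5):
    """One tier's outgoing orders: pass demand through, over-ordering on upticks.

    Whenever incoming orders rise, the tier adds extra reserve stock
    proportional to the increase, exaggerating the swing it passes upstream.
    """
    out = [orders[0]]
    for prev, cur in zip(orders, orders[1:]):
        bump = max(cur - prev, 0) * safety_factor  # extra reserve on increases
        out.append(cur + bump)
    return out

# Fairly steady consumer demand at the retail shelf (illustrative numbers).
consumer = [100, 102, 98, 103, 100, 105, 99, 101]
retailer = amplify(consumer)
distributor = amplify(retailer)
manufacturer = amplify(distributor)

# The variance of the order stream grows at every step away from the
# point of sale - the pattern P&G found with Pampers.
for name, series in [("consumer", consumer), ("retailer", retailer),
                     ("distributor", distributor), ("manufacturer", manufacturer)]:
    print(name, round(statistics.pvariance(series), 1))
```

Even with this crude over-ordering rule, the swings seen by the manufacturer are several times larger than the swings in actual consumer demand, which is why forecasts built on upstream order data go wrong.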