The Economist asks the question that never goes out of style: Who's good at forecasts? "That question is both obvious and critical. And yet, in most instances, we really don't know the answer." I prefer the view that no one and everyone is good at this guessing game. No one in the sense that the future is unknowable. Everyone because even a broken forecasting model can be right at times, albeit for the wrong reason: luck. It's the gray area between those extremes that's dangerously seductive and occasionally useful.
There's a fine line between reasonable forecasts that can help us think productively about the future and poorly designed projections that leave us worse off than random guesses. George E.P. Box's famous remark certainly resonates on this point: "All models are wrong, but some are useful."
The main challenge in building a forecasting system is recognizing what's reasonable and what's not. If you spend enough time with the data and compare predictions with actual results, you discover fairly quickly that some data sets and objectives are better suited to forecasting than others. Much depends on the numbers under scrutiny. For instance, your odds of success rise a bit as the time horizon for projections falls... sometimes. Estimating the likely path for industrial production over, say, the next three months is easier than predicting how industrial activity will fare in the fourth quarter of 2023. By contrast, predicting the stock market's return for the next week is generally a lot tougher than projecting the expected 10-year annualized return.
In the realm of macro, one of the more productive lessons from history is the value of combining forecasts, as a long line of research shows. One example can be found on these pages in regular installments via the monthly US Economic Profile posts (here's the current update). Applying a robust forecasting technique to a broad array of indicators and then aggregating the predictions yields reasonably good estimates of the macro trend in the course of evaluating business cycle risk. As an example, consider some recent vintage forecasts and how they compare with the actual results:
The red dots in the chart above represent the reported data for the Economic Trend Index (ETI), a diffusion index that's built on 14 financial and economic indicators. The monthly projections fluctuate through time, of course, as shown by the colored bars that reflect forecasts at various dates. But the variation between the forecasts and the actual results is relatively tight, as it has been for some time. The true test will come at a major turning point for the economy (i.e., the start of the next recession), but so far the results are encouraging.
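For readers who want a concrete sense of how a diffusion index works, here's a minimal sketch in Python. The post doesn't spell out the ETI's actual construction, so treat this as a generic illustration under assumed rules: each of 14 hypothetical monthly indicators scores 1 when its year-over-year change is positive and 0 otherwise, and the index is the share of indicators trending positive.

```python
import numpy as np

def diffusion_index(indicators: np.ndarray) -> float:
    """Share of indicators with a positive trend.

    indicators: 2-D array, shape (n_indicators, n_periods), of level data.
    Returns a value in [0, 1]; readings near 1 signal broad growth,
    readings near 0 signal broad contraction.
    """
    # Score each indicator 1 if its latest year-over-year change is
    # positive, 0 otherwise (12 periods back, assuming monthly data).
    signals = (indicators[:, -1] > indicators[:, -13]).astype(float)
    return float(signals.mean())

# Toy example: 14 hypothetical monthly indicators over 3 years,
# simulated as random walks with mild upward drift.
rng = np.random.default_rng(0)
data = np.cumsum(rng.normal(0.1, 1.0, size=(14, 36)), axis=1)
print(f"Diffusion index: {diffusion_index(data):.2f}")
```

The scoring rule here (year-over-year change versus zero) is an assumption for illustration; a production index might smooth each component or use a different threshold. The point is simply that a diffusion index summarizes the breadth of the trend rather than its magnitude.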
Ditto for my projections of the Chicago Fed National Activity Index, another broad measure of the economic trend (see the previous forecast here and the subsequent report here, for instance). The edge, so to speak, comes from analyzing several benchmarks that track the overall economic trend and applying a variety of econometric techniques to each. From that pool of predictions, the average forecast tends to provide a mostly reliable approximation of the actual result.
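As an illustration of the combination principle (not the author's actual models, which aren't specified), here's a sketch that generates one-step-ahead forecasts from three simple benchmarks and averages them with equal weights. The models, window length, and toy data are all hypothetical.

```python
import numpy as np

def naive_forecast(y: np.ndarray) -> float:
    """Last observed value carried forward."""
    return float(y[-1])

def moving_average_forecast(y: np.ndarray, window: int = 6) -> float:
    """Mean of the trailing `window` observations."""
    return float(y[-window:].mean())

def ar1_forecast(y: np.ndarray) -> float:
    """One-step forecast from an AR(1) fit by least squares."""
    # Regress y_t on y_{t-1} with an intercept; polyfit returns
    # coefficients highest degree first.
    slope, intercept = np.polyfit(y[:-1], y[1:], 1)
    return float(intercept + slope * y[-1])

def combined_forecast(y: np.ndarray) -> float:
    """Equal-weight average of the individual model forecasts."""
    forecasts = [naive_forecast(y), moving_average_forecast(y), ar1_forecast(y)]
    return float(np.mean(forecasts))

# Toy monthly series: trend plus noise (hypothetical data).
rng = np.random.default_rng(1)
series = 100 + 0.3 * np.arange(60) + rng.normal(0, 1.0, 60)
print(f"Combined one-step-ahead forecast: {combined_forecast(series):.2f}")
```

The equal-weight average is the simplest combination scheme; the research literature the post alludes to often finds simple averages surprisingly hard to beat with more elaborate weighting rules.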
The message is that one way to improve forecasts is to focus on reasonable objectives with time-tested modeling techniques. Aggregating forecasts in search of the big-picture macro trend lends itself to modeling, for instance. The model is still wrong (as all models are), but it's still useful.
Some analysts say that we can and should go much further, applying far more sophisticated models to project the future. The possibilities are surely broad and deep on this front. But beware of going too far. Here again a quote from Box (pdf) is appropriate:
Since all models are wrong the scientist cannot obtain a "correct" one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.
Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad.
Source: SeekingAlpha.com, http://seekingalpha.com/article/1877301-wrong-models-good-forecasts-and-the-search-for-useful-guidance?source=feed