We are getting a new forecast this week for tax revenues in North Dakota. Or so we are told. I’ve written about the problems with these forecasts in the past, but there is a further issue that needs discussion: a lack of good practice in the overall approach, particularly in how forecast results are disseminated.
The major issue is that the public does not receive the complete forecast for tax revenues. The various budget documents give summaries, such as the forecast growth rate of measures like taxes and income, but summaries are no substitute for the complete forecast, and this really needs to be corrected going forward. That is particularly true now, with frequent revisions to the revenue outlook.
Revisions are not the problem, in my opinion. Revising when new information becomes available, or when targets are not met, is standard practice in forecasting. It is something I do as frequently as possible in my own work. Forecasts need to be evaluated in light of new data, which can necessitate changes to models or simply an update of the existing model. However, the entirety of the update should be communicated, including how it changed estimates of recent past observations. Enough information should also be released to let us compare the revised forecast not just against actual tax collections, but also against prior forecasts. It is with this in mind that I offer the following graphs. All data are from the Rev-E-News publications of the North Dakota Office of Management and Budget, August 2013 to February 2017.
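To make concrete what that kind of comparison looks like, here is a minimal sketch in Python. The numbers are entirely made up for illustration; only the idea of scoring both the original and the revised vintage against the same actuals over their overlap comes from the discussion above.

```python
# Hypothetical illustration: comparing two forecast vintages against
# actual collections over their overlapping months. All values invented.

def forecast_errors(actual, forecast):
    """Percent error of each forecast value relative to the actual.
    Positive means the forecast overshot actual revenue."""
    return [round(100 * (f - a) / a, 1) for a, f in zip(actual, forecast)]

# Six overlapping months (say, Jan-Jun 2016), values in $ millions.
actual        = [180.0, 175.0, 190.0, 185.0, 170.0, 195.0]
may_2015_fcst = [210.0, 205.0, 215.0, 212.0, 208.0, 220.0]
jan_2016_fcst = [195.0, 190.0, 200.0, 198.0, 188.0, 205.0]

print(forecast_errors(actual, may_2015_fcst))
print(forecast_errors(actual, jan_2016_fcst))

# The revision itself: how far down did January 2016 move each month?
revision = [j - m for m, j in zip(may_2015_fcst, jan_2016_fcst)]
print(revision)
```

The point of the third line of output is the one made above: you want to see not only how each vintage did against actuals, but also how large the revision between vintages was.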
What are the forecast series? Since we do not get a complete forecast series to work with, I need to explain how these are constructed. First, there is the May 2013 legislative forecast. There is also a May 2015 legislative forecast. Then there were revisions to the forecast in January 2016, July 2016, and November 2016.
The May 2013 legislative forecast covers July 2013 to June 2015. The May 2015 forecast covers July 2015 to June 2016. I assume it actually ran through June 2017, but the data stopped being reported in June 2016. There is an overlap between the May 2015 forecast and the first available revision, from January 2016: a total of six months for which we have both the original and the revised numbers. The July 2016 forecast runs from July 2016 through the end of the current sample in January 2017. Another revision in November 2016 gives us three observations overlapping with the July 2016 revision.
The series “fcst1” is simply the May 2013 and May 2015 legislative forecasts joined into one series. The series “fcst2” is the May 2013 forecast, followed by the May 2015 forecast with the January 2016 revisions substituted in, with the July 2016 forecast appended at the end. There is also a “fcst3” series, which is the same as “fcst2” except that the November 2016 revisions are substituted in at the end, but I am not including that one right now.
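A minimal sketch of how this kind of splicing can be done, with hypothetical numbers. The vintage names and date ranges follow the description above; the rule that a later vintage overwrites overlapping months from an earlier one is my assumption about the construction.

```python
# Splicing forecast vintages into combined series like "fcst1"/"fcst2".
# Months are (year, month) tuples; all values are made-up placeholders.

def splice(*vintages):
    """Merge vintages in order; later vintages overwrite any
    overlapping months from earlier ones."""
    out = {}
    for vintage in vintages:
        out.update(vintage)
    return dict(sorted(out.items()))

# Each vintage maps month -> forecast value ($ millions, invented).
may_2013 = {(2013, 7): 150.0, (2015, 6): 160.0}  # Jul 2013 - Jun 2015
may_2015 = {(2015, 7): 200.0, (2016, 6): 210.0}  # Jul 2015 - Jun 2016
jan_2016 = {(2016, 1): 185.0, (2016, 6): 190.0}  # revision, overlaps May 2015
jul_2016 = {(2016, 7): 170.0, (2017, 1): 175.0}  # Jul 2016 - Jan 2017

fcst1 = splice(may_2013, may_2015)
fcst2 = splice(may_2013, may_2015, jan_2016, jul_2016)

# In fcst2 the January 2016 revision replaces the overlapping May 2015
# months, and the July 2016 vintage extends the series to January 2017.
print(fcst1[(2016, 6)], fcst2[(2016, 6)])
```

The design choice here mirrors the point of the post: once you keep each vintage as its own series rather than only the spliced result, the overlapping months are exactly what lets you measure the size of each revision.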
We can discuss the inaccuracies of the forecasts at a later time. We should also revisit the fact that forecasts underestimating actual revenues can create problems as well. What can we highlight with the following graph? What I take away is that the downward revision from the original legislative forecast is not that large, and was clearly not big enough. Maybe if we had the entirety of each series we could better understand the overall forecast revision and its process.
I know this was longer-winded and more technical than many would like, but that is what you do when you revise a forecast or the forecast process: you go through the gory details of the changes made in order to explain why the models and process needed to change.
Based on the track record so far, we would likely expect the revisions to continue to fall. I will go into the details of the forecasts for the specific revenue streams, not the total, in posts over the next few days.