Back in December, David Levinson put up a wonderful post with graphical comparisons of predictions against reality. The results aren’t good for the predictors: official forecasts keep calling for increased vehicle travel, while many places have seen stagnant or declining VMT (vehicle miles traveled). It’s not just a problem for traffic engineers, but for a variety of professions (I took note of similar challenges for airport traffic forecasts here previously).
Prediction is hard. What’s curious for cities is that, despite the inherent difficulty of producing an accurate forecast, we nonetheless bet the house on those numbers: expensive regulations (e.g. requiring enough off-street parking to meet projected demand) and expensive projects (building more road capacity to relieve forecast congestion), all resting on bad information and incorrect assumptions.
One of the books I’ve included in the reading list is Nate Silver’s The Signal and the Noise, his account of why most efforts at prediction fail. In Matt Yglesias’s review of the book, he summarizes Silver’s core argument: “For all that modern technology has enhanced our computational abilities, there are still an awful lot of ways for predictions to go wrong thanks to bad incentives and bad methods.”
Silver rose to prominence by successfully forecasting US elections from available polling data. In the process, he argued that the spin of pundits added nothing to the discussion, and that political analysts were seldom held accountable for their bad analysis. Yet, because of the incentives of punditry, analysts with poor track records continued to get work and airtime.
Traffic forecasts have a lot in common with political punditry: many of the projections are woefully incorrect, and the methods behind them rest more on ideology than on observation and analysis.
More troubling for city planning is the tendency to take these kinds of projections and enshrine them in our regulations, such as the way the ITE (Institute of Transportation Engineers) projections for parking demand are translated into zoning code requirements for on-site parking. Levinson again:
> But this requirement itself is odd, and leads to the construction of excess off-street parking, since at least some of that parking is vacant 300, 350, 360, or even 364 days per year depending on how tight you set the threshold and how flat the peak demand is seasonally. Is it really worth vacant paved impervious surface 364 days so that 1 day there is no spillover to nearby streets?
In other words, the ideology behind the requirement is to maximize parking.
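Levinson’s arithmetic is easy to sketch. Below is a minimal Python illustration with an invented daily demand series and arbitrary “design day” thresholds; none of the numbers come from ITE or from Levinson’s post. It sizes a lot for the Nth-busiest day of the year and counts how many days that capacity goes unused.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily parking demand at one site over a year: a steady base
# load plus a short seasonal peak. All numbers are invented for illustration.
base = rng.normal(loc=100, scale=10, size=365)
peak = np.zeros(365)
peak[330:345] = rng.normal(loc=60, scale=10, size=15)  # ~two weeks of peak season
demand = np.clip(base + peak, 0, None)

# "Design day" choices: size the lot for the busiest day of the year, the
# 10th-busiest, or the 30th-busiest, then count how often that capacity sits idle.
for nth_busiest in (1, 10, 30):
    required = np.sort(demand)[::-1][nth_busiest - 1]  # demand on the Nth-busiest day
    vacant_days = int((demand < required).sum())       # days with spare stalls
    print(f"Lot sized for day rank {nth_busiest}: "
          f"{required:.0f} stalls, spare capacity on {vacant_days} of 365 days")
```

Sized for the single busiest day, the last stalls sit empty on roughly 364 of 365 days, which is exactly the trade-off Levinson is questioning.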
It’s not just the ideology behind these projections that is suspect; the methods are also questionable at best. In the fall 2014 issue of Access, Adam Millard-Ball discusses the methodological flaws of ITE’s trip generation estimates (Streetsblog has a summary available). Millard-Ball notes that the “seemingly mundane” work of traffic analysis has enormous consequences for the shape of our built environment, because of the requirements it imposes on new development. Indeed, the trip generation estimates for any given project appear to massively overestimate the actual impact on traffic.
There are three big problems with the ITE estimates. First, they massively overestimate the actual traffic generated by new development, thanks to non-representative samples and small sample sizes. Second, the estimates confuse marginal and average trip generation: build a replacement courthouse, Millard-Ball notes, and you won’t generate new trips to the court; you’ll just move them. Third, the rates have a scale problem: are we counting trips to gauge the impact on a local street, a neighborhood, the city, or the region?
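The confusion of marginal and average trip generation is worth making concrete. Here is a back-of-the-envelope sketch with invented numbers; the trip rate, building size, and relocated share are assumptions for illustration, not ITE figures or Millard-Ball’s data.

```python
# Average vs. marginal trip generation for a hypothetical replacement courthouse.
# All numbers are invented for illustration; they are not ITE rates.

trip_rate_per_ksf = 25.0     # assumed daily trips per 1,000 sq ft of courthouse
building_sqft = 100_000      # assumed size of the replacement building

# Average-rate approach: every trip to the new building is counted as "new" traffic.
average_estimate = trip_rate_per_ksf * building_sqft / 1_000

# Marginal approach: most visits simply relocate from the courthouse being replaced.
relocated_share = 0.95       # assumed share of trips that already existed
marginal_estimate = average_estimate * (1 - relocated_share)

print(f"Average-rate estimate: {average_estimate:.0f} new daily trips")
print(f"Marginal estimate:     {marginal_estimate:.0f} new daily trips")
```

The gap between those two numbers is the phantom traffic that ends up justifying the requirements imposed on new development.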
What is clear is that these estimates aren’t accurate. Why do we continue to use them as the basis of important policy decisions? Why continue to make decisions based on bad information? A few hypotheses:
- Path dependence and sticky regulations: Once these kinds of regulations and procedures are in place, they are hard to change. Altering parking requirements in a zoning code can seem simple, but it can take a very long time. In DC, the 2006 Comprehensive Plan recommended a review and rewrite of the zoning code. That process started in earnest in 2007. Final action didn’t come until late 2014, with implementation still to come, and even then only after some serious alterations to the initial proposals.
- Leverage: Even if everyone knows these estimates are garbage, forecasts of large traffic impacts give cities and citizens useful leverage to extract improvements and other contributions from developers. As Let’s Go LA notes, “traffic forecasting works that way because politicians want it to work that way.”
- Rent seeking: There’s money to be made by consultants and others in developing these inaccurate estimates and then proposing remedies for the problems they predict.