The Conundrum of Event Impact Studies

Kevin Johnson
Geografia Company Blog
Nov 7, 2019


When public funds are directed towards putting on events, the intention is usually to stimulate local economic activity and, to demonstrate the value of this investment, an economic impact study is commissioned.

Unfortunately, there’s an incentive for the event proponent to encourage the people doing the study to produce the biggest possible value. And the methods we often use to measure event impact make it easy to do this. These might include ‘inflationary models’ that overestimate flow-on effects; opaque methods that can’t be scrutinised; and even rubbery survey figures. Of course, we don’t have to do this if we want a realistic measure. And that’s what this story is about.

Events for events’ sake?

It’s fair to say that many public events generate intangible benefits that can’t be easily (or even remotely) quantified. Furthermore, these intangible benefits may be more important than any local and short-term economic gain, because the truth is that most events simply reprioritise people’s spending. That is, they don’t necessarily generate more activity: they just move it from one weekend to another; one expenditure category to another; or one suburb to another.

Additionally, right now we’re definitely seeing a decline in discretionary spending in Australia, so we would expect the size of an event impact to be lower than it would have been last year.

And this should be okay. Putting public funds into an event that generates intangible community benefits is an admirable end in its own right.

Unfortunately, we’ve ended up in a place where we measure the value of everything in dollar terms, not least because it’s one of the easiest ways to measure something. Proving the value of an event comes down to proving how big the dollar value is.

Fine. But if we’re going to do this, then let’s at least get the measurement correct. Because, if we make these numbers up, we are guilty of two things:

  1. We risk taking funds away from what may be more effective efforts to stimulate the economy, something that is particularly problematic when purse strings are tight. It’s never a good investment to keep throwing money at something that is not delivering the returns we need. Even less so when we don’t have much money to throw around.
  2. If we believe our own claims about the value of these events, then when the promised benefits don’t appear in local tills, wallets and bank accounts, we undermine everybody’s confidence in our work, including, in the long run, the confidence of the grant funding bodies.

I recently heard directly from a regional tourism organisation that they would prefer inflated estimates of event impact because it is more likely to guarantee future funding for that event. That’s a self-defeating argument. If you’re trying to stimulate the economy and it eventually becomes clear you’re not doing a very cost effective job of it, then that funding will dry up and, what’s worse, your economy will be no better off. This is not good, responsible public policy decision-making.

We can do better.

So, let’s do better

We set out to test how well we, as an industry, perform impact analysis. We looked at 11 local tourism events across Australia, chosen because self-reported and published economic impact figures were available for them. We tested these figures using a standardised methodology we developed in-house, and data sourced from Spendmapp, a Geografia product that reports actual spending (based on bank transactions) rather than estimated spending.

The method

Table 1 shows the 11 events and the self-reported economic impact figures. Most of the details have been omitted or masked to protect the innocent (or guilty). These figures refer to direct consumer spending at the event (they don’t include any assumed flow-on effects).

Notably, it is not always made clear whether the impact being claimed is due to visitor spend only.

Table 1: Claimed Event Impact (source: Various)

To determine how reliable the estimates from Table 1 were, we:

  1. Built a model to predict the daily baseline economic activity (resident and visitor spending) for each of the 11 local areas using the Spendmapp data. We settled on something called a gradient boosted regression model. The inputs were historical spending (both resident and visitor), weather variables (daily minimum and maximum temperature, as well as rainfall), day of week, month of year, and public holidays¹. The model’s predictions represent economic activity on ‘ordinary’ days, when no unusual economic activity is present.
  2. We compared the model’s predictions against the economic activity directly observed in the Spendmapp data during the 11 historical tourism events. The difference between the model estimate and the actual spending figure represents the direct impact of the tourism event on the local economy (with random error of less than +/- 5% with 99% probability). That is:
    Spendmapp Observation - Model Prediction = Total Actual Event Impact
    (A simplified code sketch of steps 1 and 2 appears after this list.)
  3. We compared this figure with the event impact claims from Table 1. And then, just to be certain we are comparing our figures fairly, for each event we included a ‘bump-in’ and ‘bump-out’ period. That is, a few days leading up to and after the event when we would expect more than the usual number of visitors to be around, preparing for or extending their break after the event in question.
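
To make the method a little more concrete, here is a simplified sketch of steps 1 and 2 in Python. To be clear, this is an illustration rather than our production code: the file name, column names, event dates and model settings below are placeholders, and the real Spendmapp feature set and model tuning are more involved.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical daily spending table for one local area (a stand-in for Spendmapp output).
# Assumed columns: date, resident_spend, visitor_spend, temp_min, temp_max,
# rainfall, is_public_holiday.
daily = pd.read_csv("daily_spend.csv", parse_dates=["date"])

# Calendar features: day of week and month of year.
daily["day_of_week"] = daily["date"].dt.dayofweek
daily["month"] = daily["date"].dt.month

# A simple stand-in for 'historical spending' as an input: last week's spend.
daily["visitor_spend_lag7"] = daily["visitor_spend"].shift(7)

features = ["visitor_spend_lag7", "temp_min", "temp_max", "rainfall",
            "day_of_week", "month", "is_public_holiday"]
target = "visitor_spend"  # repeat for resident spending, or model total spend

# Event window, padded with a few 'bump-in' and 'bump-out' days either side.
event_window = daily["date"].between(pd.Timestamp("2019-03-01"),
                                     pd.Timestamp("2019-03-08"))

# Step 1: fit the baseline model on 'ordinary' (non-event) days only, so its
# predictions represent what spending would look like without the event.
train = daily[~event_window].dropna(subset=features)
model = GradientBoostingRegressor(random_state=0)
model.fit(train[features], train[target])

# Step 2: observed spending minus the modelled baseline over the event window
# gives the estimated direct event impact.
event_days = daily[event_window].dropna(subset=features).copy()
event_days["baseline"] = model.predict(event_days[features])
event_days["impact"] = event_days[target] - event_days["baseline"]

total_event_impact = event_days["impact"].sum()
print(f"Estimated direct event impact: ${total_event_impact:,.0f}")
```

The key design choice is that the baseline model is trained only on non-event days: it learns what ‘ordinary’ spending looks like for that place, so any lift above the baseline during the (padded) event window is attributed to the event itself.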

And the result?

Figure 1 shows the difference between the event impact claims and the observed impact expenditure (the biggest of the events is excluded only because it makes a mess of the chart, but the results are similar).

It’s not a pretty sight. Actual spending only exceeded the claimed spending for two events (an arts and cultural event and a sports event, both in highly popular tourist destinations that, obviously, know what they’re doing when it comes to putting on events). For the rest, the claims were inflated by an average of 57% (with overestimates ranging from 10% to 94%). The biggest differences were for sporting events.

Figure 1: Comparing Actual Impact with Claimed Impact (Source: Geografia, 2019)

Figure 2 gives you a sense of the scale of the difference by event type, with the difference between actual spending and the claimed impact ranging from 4% to 122%.

Figure 2: The Ratio of Claimed and Observed Expenditure by Event Type (Source: Geografia, 2019)

If it looks like a duck, quacks like a duck…

Now, there may be many reasons why the actual spending was so much lower than the claimed spending. For example, consider sporting events, where the differences between claimed and actual impacts tended to be high. Perhaps this is because sporting event organisers are more likely to outsource their ticketing and catering, which means ticket purchases and food and beverage spending will not be recorded in local spending data. So maybe the claimed impact is higher because it counts impacts that aren’t seen locally, whereas our actual impact figure does not include them.

But then, isn’t the point of sponsoring an event to stimulate local economic activity? After all, local traders and residents are the ones who have to deal with the inconvenience of an event, so shouldn’t they benefit from the windfall spending as well? If that’s not the case, then we need to be more honest about how we articulate the economic impact, making it clear we are really estimating the total impact on the state, or the nation. Or we could be a little more diligent in our methodology and calculate the local spending impact only.

Or it could be some events are cash-based. Although we weight our actual spending data to account for cash, the weighting won’t be sufficiently large if a higher than average proportion of visitors use cash. But, as far as we know, the events we chose aren’t known to be cash-based events, so even if we missed a bit of the cash transaction activity, it wouldn’t account for such large differences.

We can only conclude that, if it looks like a duck and quacks like a duck, maybe it is a duck.

We are pretty confident that event impact claims are, generally, just too high. I don’t think anyone in our industry is going to be surprised by that statement. Not only do we know we all do it, but, as I mentioned earlier, people have explicitly told me they want those higher numbers. Right or wrong, bigger numbers are better.

And that’s the conundrum. We all (mostly) want to do good analytical work. But there’s an incentive to gild the lily here.

The data is unequivocally telling us many events just don’t have the scale of effect you might hope for. And while this news might not always be welcome, if you’re in the business of long-term, sustainable, and meaningful economic development, then it’s important to know this. Events will always have intangible benefits we should be proud of. What we shouldn’t be proud of is making claims we can’t substantiate.

Isn’t it better to know how much money you’ve got coming into your local economy, rather than how much you wish you had coming in?

¹ For those who want to know, the model’s out-of-sample mean absolute percentage error across the 11 events is under 8%. The out-of-sample prediction errors were within +/- 5% for 99% of test data.
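
If you want to run the same checks yourself, both of these statistics can be calculated from a held-out set of ordinary (non-event) days. The snippet below assumes the `model`, `features` and `target` names from the sketch above, plus a hold-out DataFrame called `test`; again, this is illustrative rather than our actual validation code.

```python
import numpy as np

# Percentage prediction errors on held-out 'ordinary' (non-event) days.
pred = model.predict(test[features])
pct_error = (pred - test[target]) / test[target]

# Mean absolute percentage error (quoted above as under 8%).
mape = np.mean(np.abs(pct_error))

# Share of held-out days with prediction errors within +/- 5% (quoted as 99%).
within_5pct = np.mean(np.abs(pct_error) <= 0.05)

print(f"Out-of-sample MAPE: {mape:.1%}; within +/-5%: {within_5pct:.1%}")
```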
