I love the book Rise of the Revenue Marketer. In it, Debbie Qaqish describes the need for a change program to move your marketing department from being a cost centre (“We’re not sure what marketing do, but we need them to do the brochures”) to a revenue centre (“They’re responsible for generating a significant proportion of our company’s revenue”). Though the journey is easy to describe, it’s a long and arduous path to take.
We at Redgate have been on this path for a while now, and we’ve made enormous progress, particularly in the last 12 months. But one of the things that slowed us down was holding on to certain beliefs about how to measure marketing performance and the impact of marketing work – and holding on to those beliefs for too long, when perhaps they just weren’t true. Many of these ideas came from conferences, blogs and books, and they make a lot of sense on paper. But when you get to the real world of implementing something, the reality is not always as expected.
Here I’ll go through five beliefs that, in our experience, turned out to be myths. Of course, these come with big caveats – we’re one specific org, in a specific market, with particular advantages and disadvantages – so all of this should be taken with a pinch of salt. Still, with that caveat in mind, here are my five, starting with the most controversial:
Myth 1: Attribution Models are Useful
The idea of a marketing attribution model is that you can take every lead, opportunity or sale and somehow work out “What were all of the things we did in marketing that contributed to that outcome, and what value would we give to each of them?”. For example, if I’ve just generated a lead, I could go back through the path history of that individual and find that she clicked on a PPC ad, attended an event, did a Google search, interacted with us on Facebook, and so on. I then have some smart “multi-touch” model that assigns value to each of these touches (maybe the first or last get higher scores? There are lots of alternatives). If you then know the value of a lead (let’s say $10), you can work out the Return on Marketing Investment (ROMI) for each activity by comparing its “value” (e.g. maybe $3 for the PPC click) against the spend.
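To make the mechanics concrete, here’s a minimal sketch of how a position-based (“U-shaped”) multi-touch calculation might look. The channel names, weights, lead value and spend figures are all illustrative assumptions, not a real implementation:

```python
# A minimal sketch of a position-based ("U-shaped") multi-touch model.
# Channel names, weights, lead value and spend are illustrative assumptions.
from collections import defaultdict

LEAD_VALUE = 10.0  # assumed value of a lead, in dollars

# One lead's recorded path history (hypothetical channels)
path = ["ppc_ad", "event", "google_search", "facebook"]

def position_weights(n: int) -> list[float]:
    """40% each to the first and last touch, 20% split over the middle."""
    if n == 1:
        return [1.0]
    if n == 2:
        return [0.5, 0.5]
    middle = 0.2 / (n - 2)
    return [0.4] + [middle] * (n - 2) + [0.4]

# Share the lead's value out across the touchpoints
attributed = defaultdict(float)
for channel, weight in zip(path, position_weights(len(path))):
    attributed[channel] += weight * LEAD_VALUE

# Compare attributed value against (assumed) spend per touch to get ROMI
spend = {"ppc_ad": 2.0, "event": 5.0, "google_search": 0.0, "facebook": 1.0}
for channel, value in attributed.items():
    cost = spend[channel]
    romi = (value - cost) / cost if cost else float("inf")
    print(f"{channel}: attributed ${value:.2f}, spend ${cost:.2f}, ROMI {romi:+.0%}")
```

Note that the 40/20/40 weighting is essentially arbitrary – swap in a first-touch, last-touch or linear scheme and you get quite different ROMI figures for exactly the same path, which hints at the problem.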
But I think this is baloney. It’s a classic example of how, just because you can do the maths, it doesn’t mean the results are accurate or useful. The model is flawed for at least the following two reasons:
- Data. It’s impossible to get all of the data about an individual’s path history – everything they’ve done while interacting with your brand over the last few years. Not difficult, but impossible. You don’t know about their offline activity, you don’t know about the browsing they’ve done on their mobile or on their home laptop at the weekend, and you’re very unlikely to have a link to their activity from three years ago (when they actually discovered the brand). NB: some MarTech orgs promise they can deliver all of this, but I don’t believe them!
- An over-simplistic view of how customers learn about a brand. The reality is that an individual will have hundreds of different interactions with your brand, all of which build up to a given perspective. They’ll attend an event, they’ll speak to a specific person on your stand who may or may not be great, they’ll read hundreds of different pages on your site, they’ll talk to their colleagues about you, they’ll read third-party review sites, they’ll kick the tyres of the software, they’ll see an ad on a news site (without clicking on it!), they’ll remember a comment from their boss two years ago (“Oh, you should check out Redgate, see what they’ve got”), and so on. All of these things somehow add up to a favourable view of your org (or otherwise!), and trying to model that with a simple sequential attribution model isn’t, I think, valid. The best you can hope to do is make sure every interaction with your brand is awesome and have faith that this will lead to positive results.
Okay, maybe it’s not all baloney – but the approach is, I believe, significantly flawed. Nevertheless, there are some things that can be measured – which brings me to myth 2…
Myth 2: Everything should be Measured
Not sure this is controversial actually. To quote Seth Godin:
The approach here is as simple as it is difficult: If you’re buying direct marketing ads, measure everything. Compute how much it costs you to earn attention, to get a click, to turn that attention into an order. Direct marketing is action marketing, and if you’re not able to measure it, it doesn’t count.
If you’re buying brand marketing ads, be patient. Refuse to measure. Engage with the culture. Focus, by all means, but mostly, be consistent and patient. If you can’t afford to be consistent and patient, don’t pay for brand marketing ads.
The danger is that, in an effort to measure everything and show the return on everything, you stop activities because they’re fundamentally un-measurable. The myth is that “Because you need to show a repeatable, predictable and scalable revenue engine, you need to understand and measure the impact of everything you do”. But that takes the argument to an extreme – the reality is that there will always be spend in your budget that you won’t be able to tie to revenue. Ever.
Myth 3: You Need a Funnel
Perhaps controversial again. A traditional funnel implies a sequential path for a customer: something like “Awareness of problem”, then “Discovered our solution [to that problem]”, then “Evaluated our solution”, and finally “Becomes a customer [then perhaps evangelist etc.]”.
Again, we’ve never found this to represent reality. Of course all models are exactly that – models. They’re not perfect, but if they’re useful, that’s okay.
But I feel the funnel fundamentally misrepresents how real people actually interact with a brand. From talking to customers, what you find is that there are an enormous number of holes in this approach. For example:
- “Awareness of problem” is just too crude. Your content was very unlikely to be the way people became aware of the problem; in reality their knowledge has built up in a fragmented way over time, and they’re still learning all through the sale, even post-sale.
- The idea of “stages” like this just doesn’t make sense in general. Often people are already customers of yours – and they’re discovering new things you offer. Their understanding of your offering is forever a slow build-up (from a theoretical “nothing” many years ago to some partial understanding now), and it goes back and forth.
A funnel implies a single direction of travel, a path to enlightenment ending with purchasing your tool. But from talking to customers I find a much messier reality – people go back and forth, there are interruptions, and so on. We’ve found it almost impossible to actually classify people into different stages – in our experience, it’s too simplistic to be useful.
Myth 4: Conversion Rates Matter
Again, controversial. But our experience is that conversion rates are the lever you are least able to pull. Why? Because most orgs already have a pretty well-optimised process for converting leads at each stage. At Redgate, there are certain lead types that convert at a 70% rate within a two-week period – and that has been consistent for about 10 years, almost regardless of what we do! We’ve spent a lot of time and effort asking “Can we improve or optimise this?” – and generally we find we can’t. Of course you monitor it, to make sure it’s not dropping (e.g. because some leads got lost), but otherwise – stop worrying.
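If it helps, the monitoring side of that can be as simple as the following sketch – the baseline, threshold, window and sample data are all assumptions standing in for whatever your CRM actually provides:

```python
# A minimal sketch of monitoring (not optimising) a conversion rate.
# The baseline, threshold and sample data are assumptions for illustration.
from datetime import date, timedelta

BASELINE = 0.70          # the long-run conversion rate for this lead type
ALERT_THRESHOLD = 0.60   # investigate if we drop well below the baseline
WINDOW_DAYS = 14         # the two-week conversion window mentioned above

def recent_conversion_rate(leads: list[tuple[date, bool]]) -> float | None:
    """leads is a list of (created, converted) pairs, e.g. from a CRM export."""
    cutoff = date.today() - timedelta(days=WINDOW_DAYS)
    recent = [converted for created, converted in leads if created >= cutoff]
    return sum(recent) / len(recent) if recent else None

# Hypothetical recent leads: one per day, roughly two-thirds converting
leads = [(date.today() - timedelta(days=d), d % 3 != 0) for d in range(30)]

rate = recent_conversion_rate(leads)
if rate is None:
    print("No leads in the window to measure")
elif rate < ALERT_THRESHOLD:
    print(f"Conversion rate {rate:.0%} – check whether leads are getting lost")
else:
    print(f"Conversion rate {rate:.0%} in line with baseline {BASELINE:.0%} – stop worrying")
```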
Finally, myth 5…
Myth 5: This is an Impossible Task
I wanted to end on a positive. Two or three years ago, I thought that building out a “revenue engine” that was vaguely water-tight, believable and actionable was never going to happen. There were so many holes in the data, and it was so hard to link activities to outcomes, that I doubted we would ever get there.
I’m pleased to say that isn’t what happened. It’s been pretty arduous, but we are now on the brink of a model that allows us to:
- See the impact of many (but not all!) of our activities
- Track the resultant leads through to opportunity then revenue
- Match the activities with budgets to pull out ROMI (sketched after this list)
- Use this insight to stop certain activities (we’ve already cut a few things), start a few more, and adjust how we do others.
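As a rough illustration of the ROMI step, here’s a minimal sketch – the activities, revenue figures and budgets are invented for the example, not our real numbers:

```python
# A rough sketch of pulling ROMI per activity. All figures are invented.
activities = {
    # activity: (revenue from resulting leads, budget spent)
    "webinar_series":   (120_000, 30_000),
    "conference_booth": (80_000, 60_000),
    "ppc_campaign":     (45_000, 50_000),
}

for name, (revenue, budget) in activities.items():
    romi = (revenue - budget) / budget
    verdict = "keep or expand" if romi > 0 else "candidate to stop"
    print(f"{name}: ROMI {romi:+.0%} -> {verdict}")
```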
A simple example of the last point: in 2018 we ran a number of webinars of different sorts. We tracked the leads, opportunities and revenue from each of these and found that having a “star” present the webinar (someone big in our community) had a far bigger upside than expected – at little or no additional cost to us, other than the trouble of finding and convincing these stars. That is, one webinar with a star involved would generate more high-quality leads than two or three webinars without such a person. So this year we’re changing our program a little – fewer webinars, but each more impactful, with more big names presenting.
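The comparison behind that finding was essentially this simple – here’s a sketch with invented lead counts (the real analysis tracked leads through to opportunities and revenue):

```python
# A sketch of comparing webinar performance with and without a "star"
# presenter. The lead counts are invented for illustration.
webinars = [
    # (had_star_presenter, high_quality_leads)
    (True, 140), (True, 155),
    (False, 50), (False, 65), (False, 60), (False, 45),
]

def average(values: list[int]) -> float:
    return sum(values) / len(values)

star = average([leads for has_star, leads in webinars if has_star])
others = average([leads for has_star, leads in webinars if not has_star])

print(f"With a star: {star:.0f} high-quality leads per webinar")
print(f"Without:     {others:.0f} per webinar ({star / others:.1f}x difference)")
```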
Just a small example, but there are countless more – we’re building out a model where we know which levers we can pull (and which we can’t), and at what cost. It took a long time to get here, but it’s finally becoming real. Feel free to get in touch if you want to know more!