Tuesday, March 30, 2010
Memo from CFO: Ad Metrics Not Good Enough
Following is a sanitized version of an actual email from a CFO to a CMO in a global 1000 company…
TO: Susan James – Chief Marketing Officer
FROM: Amy Ivers – Chief Financial Officer
RE: Congratulations on your recent recognition for marketing efficiency
Susan –
Congratulations on being ranked in the top ten of Media Week’s most recent list of “most efficient media buying organizations.” It is a reflection of your ongoing commitment to getting the most out of every expense dollar, and well-earned recognition.
But I can’t help but wonder, what are we getting for all that efficiency?
Sure, we seem to be purchasing GRPs and click-thrus at a lower cost than most other companies, but what value is a GRP to us? How do we know that GRPs have any value at all for us, separate from what others are willing to pay for them? How much more/less would we sell if we purchased several hundred more/less GRPs?
And why can’t we connect GRPs to click-thrus? Don’t get me wrong, I love that we can see directly how click-thrus translate into sales dollars. And I understand that when we advertise broadly, our click-thrus increase. But what exactly is the relationship between the two? Would our click-thru rate double if we purchased twice as much offline advertising?
Also, I’m confused about online advertising and all the money we spend on both “online display” and “paid search”. I realize that we are generally able to get exposure for less online than offline, but I really don’t understand how much more exposure, or how much value, we get from that piece either.
In short, I think we need to look beyond these efficiency metrics and find a way to compare all these options on the basis of effectiveness. We need a way to reasonably relate our expenses to the actual impact they have on the business, not just on the reach and frequency we create amongst prospective customers. Until we can do this, I’m not comfortable supporting further purchases of advertising exposure either online or offline.
It seems to me that, if we put some of our best minds on the challenge, we could run a series of tests using different levels of advertising exposure (including none) in different markets, which might actually give us a better sense of the payback on our marketing expenditures. Certainly I understand that measuring only the short-term impact may be a bit short-sighted, but it seems to me that we should at the very least be able to determine where we get NO lift in sales in the short term, and safely conclude that we are unlikely to get the payback we seek in the longer term either.
Clearly I’m not an expert on this topic. But my experience tells me that we are not approaching our marketing programs with enough emphasis on learning how to increase the payback, and are at best just getting better at spending less to achieve the same results. While this benefit is helpful, it isn’t enough to propel us to our growth goals and, I believe, presents an increasing risk to our continued profitability over time as markets get more competitive.
I’d be delighted to spend some time discussing this with you further; we need a new way of looking at this problem if we’re going to find solutions. It’s time we stopped spending money without a clear idea of what result we’re getting. We owe it to each other as shareholders to make the best use of our available resources.
I’ll look forward to your reply.
Thank you.
So how would you respond? (To seed your thinking, two rough sketches of how Amy’s questions might be quantified follow below.) I’ll post the most creative/effective responses in two weeks.
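First, Amy’s GRP-to-click-thru question is, at heart, a regression question. Here’s a minimal sketch in Python using entirely made-up weekly numbers; the data, the linear form, and the variable names are illustrative assumptions, not anything from the memo:

```python
# Toy sketch of the CFO's GRP question: does offline GRP volume
# predict online click-thrus, and at what marginal rate?
# All numbers below are invented for illustration only.
import numpy as np

# Hypothetical ten weeks of data: GRPs purchased and click-thrus observed
grps = np.array([100, 150, 120, 200, 180, 90, 160, 210, 130, 170])
clickthrus = np.array([5200, 7100, 6000, 9300, 8400, 4800, 7600, 9800, 6300, 8000])

# Ordinary least squares: clickthrus ~ intercept + slope * grps
slope, intercept = np.polyfit(grps, clickthrus, 1)
corr = np.corrcoef(grps, clickthrus)[0, 1]

print(f"Estimated incremental click-thrus per GRP: {slope:.1f}")
print(f"Correlation between GRPs and click-thrus: {corr:.2f}")
# Caveat: a fit like this only describes the range of spend actually
# observed. "Would click-thrus double if we doubled GRPs?" cannot be
# answered from history alone -- which is where the test markets come in.
```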
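Second, her test-market proposal. Under the same caveat – the markets, sales indices, and significance threshold here are hypothetical – a bare-bones readout might look like this:

```python
# Bare-bones readout for the memo's test-market idea: compare sales
# in markets that received advertising against markets that received
# none. Market counts and sales indices are hypothetical.
import math
from statistics import mean, stdev

test_markets = [108.2, 111.5, 106.9, 113.0, 109.4]    # with advertising
control_markets = [101.1, 99.5, 102.3, 100.8, 98.9]   # no advertising

lift = mean(test_markets) - mean(control_markets)

# Two-sample t statistic (equal group sizes, pooled variance)
n = len(test_markets)
pooled_sd = math.sqrt((stdev(test_markets) ** 2 + stdev(control_markets) ** 2) / 2)
t_stat = lift / (pooled_sd * math.sqrt(2 / n))

print(f"Average lift: {lift:.1f} index points")
print(f"t-statistic: {t_stat:.1f} (very roughly, above ~2 suggests lift beyond noise)")
# Per the memo's logic: a clean "no lift" result here is a strong hint
# that the longer-term payback isn't there either.
```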
_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.
Wednesday, March 17, 2010
What 300 Years of Science Teaches Us About WOM Metrics
In the early 18th century, scientists were fascinated with questions about the age of the earth. Geometry and experimentation had already provided clues to the planet’s size and mass. But no one had yet figured out how old it was.
A few enterprising geologists began experimenting with ocean salinity. They measured the level of salt in the oceans as a benchmark, and then measured it every few months thereafter, believing that it might then be possible to work backwards to figure out how long it took to get to the present salinity level. Unfortunately, what they found was that the ocean salinity level fluctuates. So that approach didn’t work.
In the 19th century, another group of geologists working the problem hypothesized that if the earth was once in fact a ball of molten lava, then it must have cooled to its current temperature over time. So they designed experiments to heat various spheres to proportionate temperatures and then measure the rate of cooling. From this, they imagined, they could tell how long it took the earth to cool to its present temperature. Again, an interesting approach, but it led to estimates in the range of 75 million years. Skeptics argued that a quick look at the countryside around them provided evidence that those estimates couldn’t possibly be correct. But the theory persisted for nearly 100 years!
Then in the early part of the 20th century, astronomers devised a new approach to estimating the age of the earth through spectroscopy – they studied the speed at which the stars were moving away from earth (by measuring shifts in the light spectrum) and found a fairly uniform rate of recession. This allowed them to estimate that the earth was somewhere between 700 million and 1.2 billion years old. This seemed more plausible.
Not until 1956, building on the earlier discovery of radioactive half-lives, did physicists actually come up with the answer that we have today. Studying metals found in nature, they could measure how much lead had been produced by the decay of uranium, and then calculate backwards how long that decay had taken. They estimated that the earth was 4 to 5 billion years old.
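(As an aside for the quantitatively curious: the “calculate backwards” step is simple arithmetic once you have a half-life. The sketch below is a deliberately simplified illustration using the uranium-238 half-life; real geochronology works with isotope ratios and corrections this toy version ignores.)

```python
# Deliberately simplified "calculate backwards" step: given how much
# daughter product (lead) has accumulated relative to the remaining
# parent (uranium), solve N(t) = N0 * exp(-lambda * t) for t.
import math

HALF_LIFE_U238 = 4.468e9                  # years, uranium-238
decay_const = math.log(2) / HALF_LIFE_U238

def age_from_ratio(daughter_to_parent: float) -> float:
    """Age in years implied by a daughter/parent isotope ratio."""
    return math.log(1 + daughter_to_parent) / decay_const

# Sanity check: equal parts daughter and parent means one half-life has passed
print(f"{age_from_ratio(1.0):.2e} years")  # ~4.47e9
```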
Finally, physicists took radiometric readings from the Canyon Diablo meteorite, reasoning that the earth must be at least as old as the meteorite that struck it (seems logical), and dated it at 4.53 to 4.58 billion years old.
Thus we presently believe our planet’s age lies somewhere in that range. It took the collective learning of geologists, astronomers, and physicists (and a few chemists along the way), and more than 250 years, to crack the code. Thousands of man-years of experimentation worked through some smart and some not-so-smart theories, but we arrived at an answer that seems like a sound estimate based on all available data.
Why torture you with the science lecture? Because there are so many parallels to where we are today with marketing measurement. We’ve only really been studying it for about 50 years now, and only intensely so for the past 30 years. We have many theories of how it works, and a great many people collecting evidence to test those theories. Researchers, statisticians, ethnographers, and academics of all types are developing and testing theories.
Still, at best, we are somewhere between cooling spheres and radio spectroscopy in our understanding of things. We’re making best guesses based on our available science, and working hard to close the gaps and not get blinded by the easy answers.
I was reminded of this recently when I reviewed some of the excellent work done by the Keller Fay Group in their TalkTrack® study, which interviews thousands of people each week to find out what they’re talking about and how that word-of-mouth (WOM) affects brands. They have shown pretty clearly that only about 10% of total WOM activity occurs online. Further, they have established that in MOST categories (not all, but most), the online chatter is NOT representative of what is happening offline, at kitchen tables and office water coolers.
Yet many of the “marketing scientists” are still mistaking the measurability and sheer volume of online chatter for accurate information. It’s a conclusion of convenience for many marketers, and one that is likely to be misleading and potentially career-threatening.
History is full of examples of scientists who were seduced by lots of data and wound up wandering down the wrong path for decades. Let’s be cautious that we’re not just playing with cooling spheres here. Scientific progress has always been built on triangulation across multiple methods. And while hard work accelerates discovery all the time, silver bullets are best left to the dreamers.
For the rest of us, it’s back to the grindstone, testing our best hypotheses every day.
_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.
Thursday, March 04, 2010
Prescription for Premature Delegation
When responsibility for selecting critical marketing metrics gets delegated by the CMO to one of his or her direct reports (or even an indirect report once- or twice-removed), it sets off a series of unfortunate events reminiscent of Lemony Snicket in the boardroom.
First of all, the fundamental orientation for the process starts off on an "inside/out" track. Middle managers tend (emphasize tend) to view the role of marketing with a bias favoring their personal domain of expertise or responsibility. It's just natural. Sure, you can counterbalance by naming a team of managers who will supposedly neutralize each other's biases, but the result is often a recommendation derived primarily through compromise amongst peers whose first consideration is often a need to maintain good working relationships. Worse yet, it may exacerbate the extent to which the measurement challenge is viewed as an internal marketing project, and not a cross-organizational one.
Second, delegating raises the chances that the proposed metrics will be heavily weighted towards things that can be accomplished (and measured) within the delegatee's scope of autonomy. Intermediary metrics like awareness or leads generated are accorded greater weight because of the degree of control the recommender perceives they (or the marketing department) have over the outcome. The danger, of course, is that these may be the very same "marketing-babble" concepts that frustrate the other members of the executive committee today and undermine the perception that marketing really is adding value.
Third, when marketing measurement is delegated, reality is often a casualty. The more people who review the metrics before they are presented to the CMO, the greater the likelihood they will arrive "polished" in some more-or-less altruistic manner to slightly favor all of the good things that are going on, even if the underlying message is a disturbing one. Again, human nature.
The right role for the CMO in the process is to champion the need for an insightful, objective measurement framework, and then to engage their executive committee peers in framing and reviewing the evolution of it. Measurement of marketing begins with an understanding of the specific role of marketing within the organization. That's a big task for most CMOs to clarify, never mind hard-working folks who might not have the benefit of the broader perspective. And only someone with that vision can ruthlessly screen the proposed metrics to ensure they are focused on the key questions facing the business and not just reflecting the present perspectives or operating capabilities.
Finally, the CMO needs to be the lead agent of change, visibly and consistently reinforcing the need for rapid iteration towards the most insightful measures of effectiveness and efficiency, and promoting continuous improvement. In other words, they need to take a personal stake in the measurement framework and tie themselves visibly to it so others will more willingly accept the challenge.
There are some very competent, productive people working for the CMO who would love to take this kind of a project on and uncover all the areas for improvement. People who can do a terrific job of building insightful, objective measurement capabilities. But the CMO who delegates too much responsibility for directing measurement risks undermining both the insight and organizational value of the outcome -- not to mention the credibility of the approach in the eyes of the key stakeholders.
_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.