Tuesday, May 25, 2010

Ten Specific Ways Brand Investments Pay Back

One of the most frequent questions I get about measuring marketing is: “How do we measure the impact of our investments in brand development on the bottom line?”

If you’re really looking for an answer, here goes:

There are ten basic ways a stronger brand creates financial value.
  1. It can attract more customers, either directly or through stronger WOM.
  2. It can encourage customers to spend more with you, making them more receptive to other solutions you can offer, or just more likely to give you first shot at meeting their needs.
  3. It can influence the mix of products/services customers buy from you, since buyers normally hold strong brands in some degree of esteem and respect the “advice” of the brand.
  4. It can reduce the customer’s price sensitivity, allowing you to earn more margin from every dollar they spend with you.
  5. It can help you keep customers active longer, or at the very least act as a “safety net” to give you time or opportunity to fix problems that arise along the way.
  6. It can help you accelerate the customer’s buying process, reducing the probability that something happens to close the wallet before the spending happens.
  7. It can help you attract and retain better talent at lower recruiting and retention costs, since people want to be associated with attractive brands.
  8. It can reduce operating expenses by influencing supplier concessions from companies who want to be associated with top-tier brand partners.
  9. It can attract more/better channel partners.
  10. And if that’s not enough for your CFO, tell them how stronger brands can actually lower your organization’s cost of capital, since lenders see less risk (all other things being equal) in a company with strong brands. It’s not unlike how studies have consistently shown that taller people make more money than equally qualified people of average or lower height.
Most of the time, the business case for branding investments can be made in some combination of these ten elements. Of course, you’ll need some data (or at least some well-structured assumptions) to make the case credibly. But it can be done with even just a little data.

You’ll also need some idea of just when you expect to see these effects begin to occur, and what the early indicators of progress might be (e.g. shift in perceptions, web site engagement, etc.). Setting up your marketing metrics to monitor these milestones becomes more crucial to the cause as your timeframe for payback gets longer.

Now if you’re NOT really looking for an answer, but just want to muddy the waters on marketing measurement sufficiently to frustrate the people asking the question in the hopes they’ll go away, you can do that too (for a while). Just start rambling about how EVERYTHING is branding or related to the brand, and consequently NOTHING is measurable. This can work… for a little while. But eventually the other managers find ways to marginalize you so your budget winds up getting cut. So I’d only recommend this as a stalling strategy while you’re secretly negotiating for your next job.

For the rest of us, the case for brand investment gets clearer all the time. The tools are improving and the body of knowledge is growing fast.

In that vein, I’m sure I’ve missed something in my list above, so please feel free to remind me.


Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Monday, May 10, 2010

Newsflash: Biz Media Again Fooled by Bogus Research

I continue to see a number of highly intelligent business editors covering the marketing industry get fooled into running stories based on PR polls disguised as research. And when that happens, everyone in the marketing community gets hurt.

New research is exciting and relevant to marketers. It leads to new thinking and ideas. And it makes good copy. As an example, one major trade publication recently ran a story about a survey of “Chief Marketing Officers” and marketing measurement with the headline “Survey Finds Marketing Contributes to the Bottom Line.” This undoubtedly made it into countless PowerPoint presentations in budget meetings.

But scratch beneath the surface and this “survey” was composed of 423 “business executives and marketing professionals” who may have come from any industry, worked at companies with marketing budgets ranging from $10k to billions, or been anyone from CEOs to entry-level managers. In other words, contrary to what its title indicates, the sample reflects no particular group, aside from those willing to be surveyed.

Even more dangerously, the story went on to state that 39% of the respondents agreed that marketing is doing a good job of contributing to the financial condition of the business, up from 19% last year. Presumably this reflected better use of marketing metrics to measure marketing. But were these the same people surveyed last year? Were they selected to match the profile of people surveyed last year, so their responses would be scientifically valid? No. They were just this year’s batch of willing respondents, bearing little resemblance to last year’s group, thereby making any sort of trend analysis invalid.

Unfortunately, this is neither uncommon nor harmless. Marketing struggles every day to earn trust and credibility with finance, operations, sales, and other functions that have a more skeptical and discerning eye when it comes to research. But if the marketing media suggests it’s OK to accept PR polls as research, it indirectly encourages marketers to include some of these “findings” in rationalizing their recommendations to others.

In fairness to my hard-working friends in the media, most editors and staff writers (let alone marketers themselves) have not had the benefit of training in how to tell a bogus survey from a truly reliable one. They’re very busy trying to produce more content to feed both online and offline vehicles with smaller payroll and more pressure to get readers. So perhaps I can offer a few simple tips to separate the fluff from the real stuff:

  1. Before you even read a survey’s findings, ask to see a copy of the survey questionnaire and check out the profile of the respondent group so you know how to interpret what you’re being told. Get a clear sense of “who” is supposedly doing/thinking “what” and inquire as to how the respondents were incented. Then ask yourself if a reasonable person would really put the effort into answering completely and honestly.
  2. Check the similarity or differences of the survey respondents. If the vast majority of them share similar traits (e.g. company size, industry group, annual budget), then it’s fair to extrapolate the findings to the larger group they represent. But if no single characteristic (other than being in “marketing”) ties them together, they represent no one – regardless of the number of respondents. You’ll need to separate the responses by sub-group like larger marketers versus smaller ones, or B2B vs. B2C. In general, you’ll need at least 100+ respondents in any sub-segment to make it a valid result.
  3. Check to see if the sample has been consistent year-over-year. If it has, you can safely say that something has or hasn’t changed from one year to the next. But if the sample profile is substantially different year-to-year, comparisons aren’t valid due to differences in the perspectives or expertise of those responding.
  4. Ask about the margin of error. Just because 56% of some group say they feel/believe/do something doesn’t mean they actually do. EVERY survey comes with a margin of error. Most of the PR-driven polls in the marketing community use inexpensive data collection techniques which offer no ability to validate what people say and no mechanisms to keep respondents honest. Consequently, 56% may actually mean somewhere between 45% and 65%, which may change the interpretation of the findings. If the survey sponsor isn’t sure what their margin of error is, don’t publish the numbers.
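For readers who want to sanity-check a reported percentage themselves, the standard margin-of-error formula for a proportion is easy to apply. The sketch below assumes a simple random sample; the sample size of 100 is a hypothetical for illustration, not a figure from the survey discussed above:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p observed
    in a simple random sample of n respondents: z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p * (1 - p) / n)

# A reported 56% from a hypothetical sample of 100 respondents
# carries roughly a +/- 10-point margin, i.e. about 46% to 66%.
moe = margin_of_error(0.56, 100)
print(f"56% +/- {moe:.1%}")
```

Note that this formula assumes respondents were randomly sampled; the self-selected respondents typical of PR polls violate that assumption, so the true uncertainty is larger still.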

And if all that is too difficult, call me and I’ll give you an unbiased assessment for free.

It’s difficult to produce a survey that provides real insight and meaningful information. That’s why real research costs real money. And while there’s nothing wrong with polls being used to gather perspective from broadly defined populations, or PR folks using these for PR purposes – confusing PR for real research slowly poisons the well for all of us in the marketing community.

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com