Tuesday, August 24, 2010

Achieving Simplicity in Measurement

So often I hear that a given approach to measuring the payback on marketing is “too complex”. It often gets voiced as: “This is too complex for our executives to understand. Can’t we just make it simple?”

The answer is YES. We can make it simpler. To do so, we need to start by recognizing that there are several types of “complexity” that need to be managed:



Over time, I’ve come to learn a few things that help in trying to manage these complexities:
  1. Simplicity is the destination, not the starting point. If the answers were simple, you would have found them already. The challenge is to sift through the complexity to identify the real insights - e.g., discovering brand drivers, identifying customer segments, etc. The path towards marketing measurement excellence requires working through complexity to arrive at the core set of appropriate measures, processes and frameworks.

  2. Complexity in measurement is a reflection of complexity in the business. The measurement plan just reflects the nature of the business, and should be on-par with the complexity management already deals with daily. The goal of the measurement plan is to establish a more structured framework for systematically eliminating complexity over time by isolating the most meaningful areas of focus. If you knew those already, you wouldn’t need to search for them through your metrics.

  3. Complexity is only a problem when introduced to the BROAD organization. The totality of the measurement plan and activity will be contained within a small group of people who already manage the current complexity. It is the responsibility of this core group to communicate the comprehensiveness of the plan without adding any unnecessary complexity. Roll-out to the broader workforce will be gradual and likely limited by function.

  4. Simplicity, if not thoughtfully pursued, can inhibit competitive differentiation. If the real competitive insights were as obvious as picking up shells on the beach, everyone would have them. Try not to limit yourself to picking up the same shells as your competitors. Complexity is often a necessary part of meaningful innovation.
Specific to each type of complexity, there are some clear resolution strategies…


In the end, the goal of enhanced marketing measurement is to guide the organization to take bigger, smarter risks to achieve competitive advantage. Identifying and analyzing those risks is always complex. Your measurement approach needs to be based upon the processes, skills, and tools designed to make it SIMPLER in the future than it is today.
_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at http://www.marketingnpv.com/.

Tuesday, August 10, 2010

Dashboards - Huge Value or Big Expense?

I’m hearing more and more “dashboard bashing” these days. Seems that many have tried to implement them, and then drawn the conclusion that the view isn’t worth the climb. I’ve even been inside companies where you cannot even say “dashboard” in front of senior executives for fear of them throwing you out of the room. Likewise, I’ve been inside companies and heard things like “Dashboards are easy. We have hundreds of them.”

Having “written the book” on marketing dashboards several years ago, I think I own part of this problem. And I confess that there are far fewer comprehensive dashboards in use within Global 1000 marketing organizations than I thought there would be by now. In short, it seems it was a good idea in concept, but it just hasn’t “stuck” within most companies. Why?

Well for starters, dashboard design and implementation tends to get delegated down the hierarchy and get treated like campaign management or marketing automation tools. It’s a bit paradoxical, but if a CMO wants an insight-generating dashboard that saves them time, they need to put more time into nurturing its birth and evolution. EVERY successful dashboard implementation I’ve seen (and yes there are a few) shares a common foundation of senior management attention and high expectations. Without that, they are born of a thousand compromises and arrive neutered of their value.

Second, they are set up to fail by unrealistic expectations with regard to resources required. Sure the software to run dashboards is getting much cheaper all the time, but the effort to gather, align, and interpret data is significant. Not to mention the time required to train the staff how to USE the dashboard to THINK differently about the business than they did before. After all, if your dashboard doesn’t offer the prospect of causing people to think differently, why do it when you can just continue to rely on the existing mélange of reports flying around the building?

Third, there is pressure to execute in Excel in the belief that it will be easier and less expensive. In reality, the limitations it imposes undermine the potential to grab people’s imagination and draw them into interacting with data in new ways. The users can’t sense anything different from the reports they currently get. In short, penny wise and pound foolish.

And finally, dashboards tend to start with metrics which CAN be measured today (the pragmatist approach) instead of envisioning the spectrum of things which SHOULD be measured (the visionary approach) and forcing some amount of new learning exploration right from the start. Without this stretch exercise, many dashboards are launched with no prospect of new information, and therefore no compelling reason for anyone to take the time to learn to use them.

So to sum up what we’ve learned, I’d say if you're looking for great insight without expense, stay away from dashboards. You'll be disappointed. But don't burn your bridges behind you. After you've searched far and wide for true insight-generating solutions that meet the "good" and "cheap" criteria, you may just arrive back at the reality that insight is derived through dedicated effort over time. And while the current generation of dashboard software options is slick and inexpensive, they won’t perform the most important transformations for you – the process, skill, and culture ones.

_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.


Tuesday, August 03, 2010

Catching Lightning in a Bottle

What do these great marketing breakthroughs have in common?

• MasterCard “priceless”
• Energizer bunny
• “Got Milk?”
• Absolut ________

All were truly “breakthrough ideas”. All were “viral” ad campaigns before there was such a term. But they all were much more than ad campaigns – they were positioning strategies that effectively cemented their brands into the top echelon of their respective categories. They were marketing platforms that have lasted for many years and evolved to adapt effectively in dynamic environments.

Yet of all the marketing jargon that’s penetrated our brains, I think the concept of the “breakthrough idea” may be one of the most dangerous.

Would we all love to have one? Sure. When one comes along, can it revolutionize our business? Absolutely. So what’s the problem? Shouldn’t we all aspire to the same success?

Statistically speaking, most marketing organizations have a better chance of hitting the lottery than they do creating a breakthrough idea that’s more than just a short-lived ad campaign. Declines in both research budgets and internal competencies are primary causes. But internal politics and dynamic competitive environments play a role too. All of which is exacerbated by shorter timelines to produce demonstrable results.

Having spent the better part of the last 10 years crawling inside many Fortune 500 companies to help them measure marketing effectiveness, I have recently come to the (much overdue) conclusion that most measurement problems stem from the core evils of parity value propositions and absence of effective positioning. We’ve somehow managed to shift almost all our efforts from strategic insight development (which we’ve outsourced to consultants and research companies, and then put them on very tight budgetary leashes) to tactical execution in the mistaken belief that the only viable strategy for success in a two-year evaluation window is to catch lightning in a bottle in the form of a positively viral ad campaign. In other words, most unintentionally place themselves in a position where they are relying upon lightning to strike in a specific place during a short window of time.

True, most of the breakthroughs above were born in moments of pure inspiration on either the client or agency side. But those moments were carefully “engineered” to come about through insightful research and market study. They were successful outcomes of a diligent “R&D” process.

As you look ahead to your 2011 budget, how much have you allocated to “R&D”? Not surprisingly, even companies with multi-billion dollar R&D budgets for engineering and product development will likely have just a small fraction of their overall marketing budget dedicated to generating market/customer insights. Far more money will be allocated to unstructured and uncontrolled experimentation with communication tactics in support of messages which are neither “breakthrough” nor effective strategic positioning. Many will invest heavily in analytics to optimize the media mix of campaigns to get the biggest return for the tactical budget, yet will go to sleep at night wondering if they’re actually saying the right things to the right people to inspire the right behaviors.

Increasing the probability of success in marketing almost always comes down to executing against a process of hypothesize, test, learn, refine, repeat. Along the way, you can employ a few metrics to gauge your progress at improving. For example:

  • Relative Value Proposition Strength – a measure of the degree to which your core value proposition (unbundled from ad execution) is preferred by the target audience relative to the options they see themselves having. Tracking this on a regular basis helps diagnose the extent to which your core offering is driving/depressing results versus your execution of it.
  • Positioning Relevance – a measure of the degree to which your key positioning points resonate on relevance and materiality scales compared to other positioning strategies the customers/prospects are exposed to from competitors.
  • Message effectiveness – a measure of the degree to which your message execution is delivering the right message in a compelling and differentiated manner.
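As a simple illustration of how the first of these metrics might be tracked, here is a minimal sketch. The function name, the indexing approach, and every number are hypothetical assumptions for illustration, not a standard formula:

```python
def vp_strength_index(own_pref: float, competitor_prefs: list[float]) -> float:
    """Preference for your core value proposition (unbundled from ad
    execution) indexed against the strongest competing option.
    Above 1.0 means you lead; below 1.0 means you trail."""
    return own_pref / max(competitor_prefs)

# Hypothetical tracker wave: 34% of the target audience prefer our
# proposition, versus 28%, 22%, and 16% for the alternatives they see.
print(f"{vp_strength_index(0.34, [0.28, 0.22, 0.16]):.2f}")  # prints 1.21
```

Tracked wave over wave, a falling index flags a weakening core offering rather than weak execution of it.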

Finding out where you score high or low on these metrics will direct and focus your efforts at improvement. It may also help enlighten others around the company as to the need to invest more in developing stronger value propositions through product/service innovation.

Implementing a few structured steps like these can go a long way towards informing your understanding of where and when lightning is more likely to strike, so you can put your bottles in the right places.

_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.





Tuesday, June 22, 2010

Twitter Metrics? Why Bother?

There sure is lots of buzz these days about how to measure Twitter. One recent question submitted was typical… “How do we measure the value of the tweets we’re producing every day?”

Wrong question.

The right question is “SHOULD we measure the value of the tweets we’re producing every day?”

For the vast majority of companies out there, I think not.

It seems to me that Twitter is a productivity tool. You use it to efficiently communicate messages to people who have indicated a desire to hear them. In doing so, you also benefit from their willingness to re-tweet along to others they may know with similar interests.

As such, the inherent value proposition of Twitter is to REPLACE higher cost avenues of reaching interested parties with LOWER cost avenues. Consequently, the financial value of using Twitter for business is the cost savings of reaching the same people more efficiently and/or the now-affordable opportunity of communicating deeper into the universe of current and prospective customers. All of which is, to a reasonable degree, measurable.

So why am I skeptical about measuring tweets?

First off, there are no platform costs for tweeting. No software to buy. No hardware to install. Just use any web-enabled keyboard and you’re off. Everything you need to get started is free. If you’re not adding staff, and/or you’re not keeping staff to tweet when they would otherwise be expendable, then you have NO incremental cost. If this is your situation (and for most of you I suspect it is), then why bother measuring something that comes at no cost? Save your marketing measurement energy (and that of your management team) for bigger, more expensive issues with more meaningful marketing metrics.

But if you add or divert headcount (staff or contractor) to tweeting in a way that adds to cost, you should be prepared to forecast and measure the impact.

Your Twitter business case for adding additional headcount (aka “Chief Tweeting Officer”) is based on the premise that more/better tweeting will drive measurable impact on the business in some way. So you would compare the incremental headcount cost of the tweeters with the expected incremental impact on the business in terms of:

A) Incremental customer acquisition;
B) Incremental customer retention;
C) Incremental share-of-customer;
D) Incremental margin per customer or transaction;
E) Improvements in staff or channel partner performance;
F) Accelerated transactional value; or
G) Early indication of problems and the resulting benefit of acting quickly to fix them.

Each of these could be determined through a series of inexpensive experiments intended to prove/disprove the hypothesis that tweeting will somehow result in economically attractive behavior. Some might happen as a direct result of the tweeting. Others may be indirect results in association with combinations of additional marketing tactics (e.g. paid search or display advertising). Define your hypotheses clearly and succinctly, then monitor tweet consumption…
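To make the business case concrete, a rough break-even sketch compares the incremental headcount cost against the incremental margin from items A through G above. All figures here are hypothetical assumptions, not benchmarks:

```python
def tweeting_breakeven(annual_headcount_cost: float,
                       margin_per_customer: float) -> float:
    """Incremental customers the tweeting program must acquire or
    retain per year just to cover its incremental cost."""
    return annual_headcount_cost / margin_per_customer

# Hypothetical: one $80k/year "Chief Tweeting Officer" and an average
# annual margin of $120 per customer.
print(round(tweeting_breakeven(80_000, 120)))  # prints 667
```

If your experiments can’t plausibly attribute several hundred incremental customers to the tweeting, the case fails before it starts.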

Anyone rolling their eyes yet?

Bottom line is that tweeting, like all social media activities, is an engagement tool. We use it to try to engage current/prospective customers in further dialogue and investigation of the solutions we can offer them. So from a measurement perspective, that suggests we focus on what specific types of behavioral engagement we are trying to drive, and what economic impacts we anticipate as a result. Measure changes in those two elements and you’re well on your way to success.

There are always ways to measure marketing effectiveness. Everything in marketing can be measured. But the first and most important question to ask is “what would I do differently if I knew the answer to the question I’m asking?” Only then can you decide how to PRAGMATICALLY balance measurement with action.

My friend Scott Deaver, EVP at Avis car rental, is fond of saying “don’t bother if you can’t weigh it on a truck scale.” I think that applies very often to twittering away time measuring Twitter.

_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, June 08, 2010

How to Get the Budget You Need

As 2011 planning season approaches, here are a few things you can do to increase the probability that you’ll get the resources you need for marketing to help drive the business goals. Before you go in with your request, work your way down this checklist:

1. Ensure that you are aligned to the proper goals and objectives.
Your recommendations need to be linked back to specific ways they will help achieve company goals – revenue, profit, customer value, market share, etc. The more specifically you can demonstrate these links, the more likely your recommendations will be taken seriously. Don’t focus on the intermediary steps like awareness or brand preference. Keep the emphasis on the business outcomes you intend to influence.

2. Make sure you’ve squeezed every drop from the current spend patterns.
These days, zero-based budgeting is price-of-entry. Before you can ask for more, you need to show how you have “de-funded” things you previously did that either A) didn’t work; B) weren’t aligned to evolving company goals; or C) seem less important now than other initiatives. By offering up some of your own cuts to partially fund your recommendations, you demonstrate a strong sense of managing a portfolio of investments, and a willingness to make hard choices with company money.

3. Have a plan for how you will prioritize the new marketing funds strategically, not just tactically.
If you get another $1, where will you put it? Before you answer in terms describing programs or tactics, think about segments, geographies, channels, product groups, etc. Knowing where the best strategic leverage points are is far more important than tactical mix. You can always evolve the mix of tactics. But the best tactics applied against the wrong strategic needs won’t produce any results.

4. Identify the points of leverage you can exploit.
Results accrue when you place resources behind places of competitive leverage. Knowing where your leverage points are helps ensure you are spending where it will produce noticeable outcomes. Common leverage points are relative value proposition strength, channel dominance, message effectiveness, and customer switchability. There are others too. But spending without leverage is just playing into the hands of the competitive environment. Without leverage, you have no reasonable expectation of changing anything.

5. Demonstrate an understanding of how the business environment has changed.
Even if you have clear leverage opportunities, the business environment is powerful enough to neutralize just about any unilateral effort a given company might make. Sudden swings in the macro-economic spectrum or the regulatory environment could have you spending into an impenetrable headwind and dramatically reduce the expected impact of your investments. Identify the issues that could create the strongest headwind (or tailwind) for you – interest rates, employment rates, housing starts, currency fluctuations – and prepare an assessment of how they might impact your proposed results.

6. Proactively assess the risk of your plans.
As marketers, we plan like matadors, but have the track record of the bull. We spend so much time conceptualizing our plans, but comparatively little imagining what might go wrong. Which is unfortunate, given that something almost always does go wrong. So run your plan by your company’s “Debbie Downer” – the skeptical one who always sees the worst in everything. Let her tell you what might derail your plans, and then develop a plan to manage your risks accordingly. Being proactive about identifying and managing risks demonstrates your ability to dispassionately assess options and pursue realistic opportunities for success with your eyes wide open.

7. Propose “good” benchmarks and targets for your intended outcomes.
Every recommendation should come with expected performance outcomes. Even if you present a range of possible results, you’ll need something to demonstrate the baseline of performance you are starting from, and the yardstick by which you will measure success. This creates the perception of accountability, which appeals to the deeply human desire to trust in someone else where our own personal expertise leaves off.

These 7 pre-test elements will prepare you for every question that might arise in connection with your proposals. And while competing investments might ultimately attract the resources you were fighting for, this process ensures your reputation as a capable manager will grow even if your budget doesn’t.

Tuesday, May 25, 2010

Ten Specific Ways Brand Investments Pay Back

One of the most frequent questions I get about measuring marketing is: “How do we measure the impact of our investments in brand development on the bottom line?”

If you’re really looking for an answer, here goes:

There are ten basic ways a stronger brand creates financial value.
  1. It can attract more customers, either directly or through stronger WOM.
  2. It can encourage customers to spend more with you, making them more receptive to other solutions you can offer, or just more likely to give you first shot at meeting their needs.
  3. It can influence the mix of products/services customers buy from you, since buyers normally hold strong brands in some degree of esteem and respect the “advice” of the brand.
  4. It can reduce the customer’s price sensitivity, allowing you to earn more margin from every dollar they spend with you.
  5. It can help you keep customers active longer, or at the very least act as a “safety net” to give you time or opportunity to fix problems that arise along the way.
  6. It can help you accelerate the customer’s buying process, reducing the probability that something happens to close the wallet before the spending happens.
  7. It can help you attract and retain better talent at lower recruiting and retention costs as people want to be associated with attractive brands.
  8. It can reduce operating expenses by influencing supplier concessions from companies who want to be associated with top-tier brand partners.
  9. It can attract more/better channel partners.
  10. And if that’s not enough for your CFO, tell them how stronger brands can actually help lower your organization’s cost of capital, due to the lower risk of lending to a company with strong brands (all other things being equal). It’s not unlike how studies have consistently shown that taller people make more money than equally qualified people of average or lower height.
Most of the time, the business case for branding investments can be made in some combination of these ten elements. Of course, you’ll need some data (or at least some well structured assumptions) to make the case credibly. But it can be done with even just a little data.

You’ll also need some idea of just when you expect to see these effects begin to occur, and what the early indicators of progress might be (e.g. shift in perceptions, web site engagement, etc.). Setting up your marketing metrics to monitor these milestones becomes more crucial to the cause as your timeframe for payback gets longer.
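For example, a back-of-envelope NPV on the margin-related pathways above might look like the following sketch. Every figure is an illustrative assumption, not a benchmark:

```python
def npv(cash_flows, rate):
    """Net present value of a series of end-of-year cash flows
    discounted at the given annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical case: a $500k brand investment assumed to produce
# $150k/year of extra margin (price premium plus retention effects)
# for 5 years, discounted at a 10% cost of capital.
value = npv([150_000] * 5, rate=0.10) - 500_000
print(f"${value:,.0f}")  # prints $68,618
```

The assumptions behind that uplift stream are exactly where the early indicators mentioned above earn their keep: they tell you whether the assumed cash flows are materializing on schedule.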

Now if you’re NOT really looking for an answer, but just want to muddy the waters on marketing measurement sufficiently to frustrate the people asking the question in the hopes they’ll go away, you can do that too (for a while). Just start rambling about how EVERYTHING is branding or related to the brand, and consequently NOTHING is measurable. This can work… for a little while. But eventually the other managers find ways to marginalize you, so your budget winds up getting cut. So I’d only recommend this as a stalling strategy while you’re secretly negotiating for your next job.

For the rest of us, the case for brand investment gets clearer all the time. The tools are improving and the body of knowledge is growing fast.

In that vein, I’m sure I’ve missed something in my list above, so please feel free to remind me.

_________________________

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Monday, May 10, 2010

Newsflash: Biz Media Again Fooled by Bogus Research

I continue to see a number of highly intelligent business editors covering the marketing industry get fooled into running stories based on PR polls disguised as research. And when that happens, everyone in the marketing community gets hurt.

New research is exciting and relevant to marketers. It leads to new thinking and ideas. And it makes good copy. As an example, one major trade publication recently ran a story about a survey of “Chief Marketing Officers” and marketing measurement with the headline “Survey Finds Marketing Contributes to the Bottom Line.” This undoubtedly made it into countless PowerPoint presentations in budget meetings.

But scratch beneath the surface and this “survey” comprised 423 “business executives and marketing professionals” who may have come from any industry, be in companies with marketing budgets ranging from billions of dollars down to $10k, or be CEOs or entry-level managers. In other words, contrary to what its title indicates, the sample is reflective of no particular group, aside from those willing to be surveyed.

Even more dangerously, the story went on to state that 39% of the respondents agreed that marketing is doing a good job of contributing to the financial condition of the business, up from 19% last year. Presumably this was referring to better use of marketing metrics to measure marketing. But were these the same people surveyed last year? Were they selected to match the profile of people surveyed last year, so their responses would be scientifically valid? No. They were just this year’s batch of willing respondents, bearing little resemblance to last year’s group, thereby making any sort of trend analysis invalid.

Unfortunately, this is neither uncommon nor harmless. Marketing struggles every day to earn trust and credibility with finance, operations, sales, and other functions that have a more skeptical and discerning eye when it comes to research. But if the marketing media suggests it’s OK to accept PR polls as research, it indirectly encourages marketers to include some of these “findings” in rationalizing their recommendations to others.

In fairness to my hard-working friends in the media, most editors and staff writers (let alone marketers themselves) have not had the benefit of training in how to tell a bogus survey from a truly reliable one. They’re very busy trying to produce more content to feed both online and offline vehicles with smaller payroll and more pressure to get readers. So perhaps I can offer a few simple tips to separate the fluff from the real stuff:

  1. Before you even read a survey’s findings, ask to see a copy of the survey questionnaire and check out the profile of the respondent group so you know how to interpret what you’re being told. Get a clear sense of “who” is supposedly doing/thinking “what” and inquire as to how the respondents were incented. Then ask yourself if a reasonable person would really put the effort into answering completely and honestly.
  2. Check the similarity or differences of the survey respondents. If the vast majority of them share similar traits (e.g. company size, industry group, annual budget), then it’s fair to extrapolate the findings to the larger group they represent. But if no single characteristic (other than being in “marketing”) ties them together, they represent no one – regardless of the number of respondents. You’ll need to separate the responses by sub-group, like larger marketers versus smaller ones, or B2B vs. B2C. In general, you’ll need at least 100+ respondents in any sub-segment to make it a valid result.
  3. Check to see if the sample has been consistent year-over-year. If it has, you can safely say that something has or hasn’t changed from one year to the next. But if the sample profile is substantially different year-to-year, comparisons aren’t valid due to differences in the perspectives or expertise of those responding.
  4. Ask about the margin of error. Just because 56% of some group say they feel/believe/do something, doesn’t mean they actually do. EVERY survey comes with a margin of error. Most of the PR-driven polls in the marketing community use inexpensive data collection techniques which offer no ability to validate what people say and no mechanisms to keep respondents honest. Consequently, 56% may actually mean somewhere between 45% and 65% - which may change the interpretation of the findings. Ask the survey sponsor about margin of error. If they aren’t sure, don’t publish the numbers.

And if all that is too difficult, call me and I’ll give you an unbiased assessment for free.

It’s difficult to produce a survey that provides real insight and meaningful information. That’s why real research costs real money. And while there’s nothing wrong with polls being used to gather perspective from broadly defined populations, or PR folks using these for PR purposes – confusing PR for real research slowly poisons the well for all of us in the marketing community.

--------------------
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com

Tuesday, April 27, 2010

Measurement's Exponential Curve

I attended the Global Retail Marketing Association’s annual leadership conference recently and had the opportunity to interact with a few terrific speakers including Ray Kurzweil – renowned futurist in all matters of technology. Ray bombarded us with scientifically- and econometrically-based forecasts for where health, computing, and social technologies were headed.

Kurzweil has a pretty good track record in predicting these things (as evidenced by his success as an angel investor) based on a simple concept – that evolution in technology is not linear, but exponential. He cited examples of how this has been true across the technology spectrum time and time again, but also across the spectrum of life in general. By plotting the advent of major advances in life sciences, one can quickly see how the pace of innovation is accelerating. In fact, the average lifespan of a child born today will extend well into the upper 80s, and that number keeps climbing. (Do the math, and if those of us in middle age can just wait long enough, we may live forever.)

As I listened, it occurred to me how appropriate this concept was in the arena of marketing measurement too. While marketers have been seeking to improve the effectiveness and efficiency of their efforts almost since marketing was born as a functional discipline in the early 20th century, the pace of discontinuous innovation is accelerating. And when I speak about discontinuous innovation, I’m not referring to the introduction of internet-based research tools, but fundamental changes in the way we are learning to understand marketing’s impact on the consumer/customer.

For example, traditional survey-based research techniques are proving to be far less predictive of human behavior than bio-metric scanning methods that monitor brainwaves, heart rate, respiration changes, or skin temperature. Today, advertisers can expose their creative messages to prospective customers and read the immediate response in involuntary biometric systems which overcome the social and cultural biases that tend to filter logical survey responses.

Another example… many companies are dis-adopting regression-based marketing mix models in favor of artificial intelligence techniques and agent-based models which focus on replicating thought processes in the full context of competitive dynamics, instead of just looking for statistical relationships. True, these higher-intelligence techniques have been around for 30+ years and not yet found significant penetration in marketing applications, but the PACE of adoption is now increasing noticeably and innovation is driving relevance and applicability more quickly than ever.

All of this has me thinking these days that market research may be operating towards the end of its current lifecycle. The tools, methods, and techniques we use today will not persist more than another 10 to 20 years. We will learn to move past recording and clustering rational thought, and past our voyeuristic tendencies to predict future behavior based on past actions. And in the process, we will learn to reconcile what people say, think, and do with the powerfully innate drivers buried deep in our biological wiring.

Inevitably, we will see many instances of borderline unethical manipulation, which will slow this adoption curve a bit, but the amount of money and time being invested in these new areas of learning is far too great to be stalled by commercial missteps along the way.

So when you drag your finger across your touch-screen interface a few years from now, the underlying systems will capture not just what and where you touched, but your heart rate and body temperature at the time (along with possibly the size of your pupils). This information will feed logic-driven systems which will immediately adapt the type of images, sounds, and smells to your innate preferences to stimulate your interest far beyond anything we’ve achieved as marketers so far.

And just think about the impact this will all have on which metrics we adopt to measure performance…
_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.


Thursday, April 15, 2010

Memo from the CFO: The Best Response

Two weeks ago I posted a disguised version of an actual email from the CFO of a Global 1000 company to the CMO of that company, congratulating the CMO on doing a good job of improving marketing efficiency, but then raising questions about the effectiveness of that marketing. I invited readers of Metrics Insider to comment on how they would respond if they were the CMO.

Some of the responses were, not surprisingly, promotional messages for proprietary marketing measurement research or analytical methodologies being positioned as silver bullets to solve the problem. While I’m sure some of these solutions would be helpful to some degree, it’s naïve for ANY executive to pin their hopes of understanding marketing payback on any single tool, method, or metric. Many have tried this approach, but very few have succeeded as the “tool in lieu of disciplined evolution” tends to answer the immediate questions, but loses luster as dynamics (both external and internal) evolve. Besides, CFOs tend to be immediately suspicious of any tool packaged in hyperbole like “all you really need is…”

A few responses were pretty hostile. They came in the form of marketers berating the CFO (aka the “bean-counting techno-wonk”) for asking such questions in a way that implied the CMO should have had a much better handle on marketing effectiveness. (For a more humorous approach along these lines, see what Chris Koch wrote.) In truth, there was no malice in the questions posed, just a bit of naiveté on the part of the CFO with respect to the subtleties of marketing. Sometimes, that lack of understanding manifests itself in poorly-chosen words. But rarely does that mean the issuer of the words is “anti-marketing”. They are just playing one of the many roles they are paid to play: the role of risk management. If the CFO is to adequately assess corporate risks (including the risk of wasteful spending), they must have the confidence to challenge the evidence put forward in support of EVERY material investment. Ask the head of IT, and you’ll likely find that they too have felt the heat of the CFO’s microscope from time to time.

So if your first reaction was anger, get past it. Don’t let your own insecurities about marketing measurement taint your assessment of the logical questions that inquisitive but uninformed executives may ask.

I think, all things being equal, the best response would be:


To: Amy Ivers – CFO

From: Susan James – CMO

RE: Congratulations on your recent recognition for marketing efficiency


Amy –

Thanks for your note on measuring marketing effectiveness. You raise many good points that I too have been thinking about for some time. There are a number of ways we can approach answering these questions, but I’d need your help since some would inevitably require us to get comfortable with partial data sets, while others may necessitate a temporary step backwards in efficiency to enable some further testing. Together, we might be able to come up with an approach that John and the others on the executive committee find credible. But it might be a bit more involved than emails can adequately address.

I share your passion for better insight into marketing effectiveness. If you’d like to suggest a few possible dates/times, I’d enjoy getting together to bring you up to speed on what we’ve been able to do so far, where our current knowledge gaps are, and what we’re doing to try to close those gaps. I’d appreciate your critical assessment of what we’re doing, and any suggestions you may have for making us better.

Thanks for your input.

Susan.


In a nutshell, the best strategy in this type of situation is:
  1. Disarm and defuse. Take the emotion out of it, even if the frustration runs deep.
  2. Focus on defining the questions to be answered. Don’t jump into evidence-presentment mode until you have agreed on what the reasonable questions are; otherwise you’ll be shooting at a moving target.
  3. Prioritize the questions. Don’t assume they’re all equally important, or you’ll fracture your answering resources into ineffectively small parts.
  4. Decompose the questions into small pieces. Define the sub-components and assess what the company knows and what it doesn’t know with respect to each of the small pieces. Trying to boil the ocean is another sure way to accomplish nothing.
  5. Admit your knowledge limits. Be careful to label your assertions conservatively as facts, observations, or opinions derived from experience.
  6. Have a continuous improvement plan. Show your plan to improve the company’s marketing effectiveness in stages, and manage the expectation that it may take time to tackle all the pieces of the roadmap absent a significant boost in resources.

I realize that many of you are caught in situations where the CFO’s questions may in fact stem from some genuine malice. In those cases, use honest questions to understand how much they actually know (as distinct from what they think they know). Their path to self-realization is only as fast as your skill in engaging them to be part of the solution instead of just the identifier of the problem.

Using this approach, even the thorniest marketing/finance relationships can be improved by at least 50% (and I have the statistics to back it up).

Thanks for your comments.

Tuesday, March 30, 2010

Memo from CFO: Ad Metrics Not Good Enough


Following is a sanitized version of an actual email from a CFO to a CMO in a Global 1000 company…



TO: Susan James – Chief Marketing Officer


FROM: Amy Ivers – Chief Financial Officer

RE: Congratulations on your recent recognition for marketing efficiency

Susan –

Congratulations on being ranked in the top ten of Media Week’s most recent list of “most efficient media buying organizations.” It is a reflection of your ongoing commitment to getting the most out of every expense dollar, and well-earned recognition.

But I can’t help but wonder, what are we getting for all that efficiency?

Sure, we seem to be purchasing GRPs and click-thrus at a lower cost than most other companies, but what value is a GRP to us? How do we know that GRPs have any value at all for us, separate from what others are willing to pay for them? How much more/less would we sell if we purchased several hundred more/less GRPs?

And why can’t we connect GRPs to click-thrus? Don’t get me wrong, I love the direct relationship we can see of how click-thrus translate into sales dollars. And I understand that when we advertise broadly, our click-thrus increase. But what exactly is the relationship between these? Would our click-thru rate double if we purchased twice as much advertising offline?

Also, I’m confused about online advertising and all the money we spend on both “online display” and “paid search”. I realize that we are generally able to get exposure for less by spending online versus offline, but I really don’t understand how much more and what value we get for that piece either.

In short, I think we need to look beyond these efficiency metrics and find a way to compare all these options on the basis of effectiveness. We need a way to reasonably relate our expenses to the actual impact they have on the business, not just on the reach and frequency we create amongst prospective customers. Until we can do this, I’m not comfortable supporting further purchases of advertising exposure either online or offline.

It seems to me that, if we put some of our best minds on the challenge, we could create a series of test markets using different levels of advertising exposure (including none) in different markets which might actually give us some better sense of the payback on our marketing expenditures. Certainly I understand that just measuring the short-term impact may be a bit short-sighted, but it seems to me that we should be able (at the very least) to determine where we get NO lift in sales in the short term, and safely conclude that we are unlikely to get the payback we seek in the longer term either.

Clearly I’m not an expert on this topic. But my experience tells me that we are not approaching our marketing programs with enough emphasis on learning how to increase the payback, and are at best just getting better at spending less to achieve the same results. While this benefit is helpful, it isn’t enough to propel us to our growth goals and, I believe, presents an increasing risk to our continued profitability over time as markets get more competitive.

I’d be delighted to spend some time discussing this with you further, but we need a new way of looking at this problem to find solutions. It’s time we stop spending money without a clear idea of what result we’re getting. We owe it to each other as shareholders to make the best use of our available resources.

I’ll look forward to your reply.

Thank you.


So how would you respond? I’ll post the most creative/effective responses in two weeks.

Wednesday, March 17, 2010

What 300 Years of Science Teaches Us About WOM Metrics

In the early 18th century, scientists were fascinated with questions about the age of the earth. Geometry and experimentation had already provided clues to the size of the planet, and to its mass. But no one had yet figured out how old it was.

A few enterprising geologists began experimenting with ocean salinity. They measured the level of salt in the oceans as a benchmark, and then measured it every few months thereafter, believing that it might then be possible to work backwards to figure out how long it took to get to the present salinity level. Unfortunately, what they found was that the ocean salinity level fluctuates. So that approach didn’t work.

In the 19th century, another group of geologists working the problem hypothesized that if the earth was once in fact a ball of molten lava, then it must have cooled to its current temperature over time. So they designed experiments to heat various spheres to proportionate temperatures and then measure the rate of cooling. From this, they imagined, they could tell how long it took the earth to cool to its present temperature. Again, an interesting approach, but it led to estimates in the range of 75,000 years. Skeptics argued that a quick look at the countryside around them provided evidence that those estimates couldn’t possibly be correct. But the theory persisted for nearly 100 years!

Then in the early part of the 20th century, astronomers devised a new approach to estimating the age of the earth through spectroscopy – they studied the speed at which the stars were moving away from earth (by measuring shifts in the light-wave spectrum) and found a fairly uniform rate of recession. This allowed them to estimate that the earth was somewhere between 700 million and 1.2 billion years old. This seemed more plausible.

Not until 1956, armed with an established understanding of radioactive half-lives, did physicists actually come up with the answer we have today. By studying various metals found in nature, they could measure the amount of lead that had decayed from uranium, and then calculate backwards how long it had taken to reach its present level. They estimated, therefore, that the earth was 4 to 5 billion years old.
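The back-calculation rests on the exponential decay law. Here is a minimal sketch, assuming for simplicity that every decayed uranium-238 atom yields one lead-206 atom (real uranium-lead dating corrects for intermediate decay chains and primordial lead):

```python
import math

HALF_LIFE_U238 = 4.468e9  # half-life of uranium-238, in years

def age_from_lead_ratio(pb_to_u_ratio):
    """Estimate elapsed time from the ratio of daughter lead-206 to
    remaining uranium-238. From N(t) = N0 * (1/2)**(t / T), the
    accumulated ratio is Pb/U = 2**(t/T) - 1, so t = T * log2(1 + Pb/U)."""
    return HALF_LIFE_U238 * math.log2(1 + pb_to_u_ratio)

# A rock with equal parts lead-206 and uranium-238 has been decaying
# for exactly one half-life:
print(round(age_from_lead_ratio(1.0) / 1e9, 2))  # 4.47 (billion years)
```

The elegance of the method is that the clock is built into the sample itself; no assumptions about cooling rates or salinity budgets are needed.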

Finally, in 1959, geologists discovered the Canyon Diablo meteorite and the physicists realized that the earth must be older than the meteorite that hit it (seems logical). So they took radiological readings from the meteorite and dated it at 4.53 – 4.58 billion years old.

Thus we presently believe our planet’s age is somewhere in this range. It took the collective learnings of geologists, astronomers, and physicists (and a few chemists along the way), plus over 250 years, to crack the code. Thousands of man-years of experimentation chased some smart and some not-so-smart theories, but we arrived at an answer that seems like a sound estimate based on all available data.

Why torture you with the science lecture? Because there are so many parallels to where we are today with marketing measurement. We’ve only really been studying it for about 50 years now, and only intensely so for the past 30 years. We have many theories of how it works, and a great many people collecting evidence to test those theories. Researchers, statisticians, ethnographers, and academics of all types are developing and testing theories.

Still, at best, we are somewhere between cooling spheres and radio spectroscopy in our understanding of things. We’re making best guesses based on our available science, and working hard to close the gaps and not get blinded by the easy answers.

I was reminded of this recently when I reviewed some of the excellent research done by the Keller Fay Group, whose TalkTrack® research interviews thousands of people each week to find out what they’re talking about, and how that word-of-mouth (WOM) impacts brands. They have pretty clearly shown that only about 10% of total WOM activity occurs online. Further, they have established that in MOST categories (not all, but most), the online chatter is NOT representative of what is happening offline, at kitchen tables and office water coolers.

Yet many of the “marketing scientists” are still mistaking the measurability of large data sets of online chatter for accurate information. It’s a conclusion of convenience for many marketers, and one that is likely to be misleading and potentially career-threatening.

History is full of examples of scientists who were seduced by lots of data and wound up wandering down the wrong path for decades. Let’s be cautious that we’re not just playing with cooling spheres here. Scientific progress has always been built on the triangulation of multiple methods. And while accelerating discoveries happen all the time through hard work, silver bullets are best left to the dreamers.

For the rest of us, it’s back to the grindstone, testing our best hypotheses every day.

Thursday, March 04, 2010

Prescription for Premature Delegation

When responsibility for selecting critical marketing metrics gets delegated by the CMO to one of his or her direct reports (or even an indirect report once- or twice-removed), it sets off a series of unfortunate events reminiscent of Lemony Snicket in the boardroom.

First of all, the fundamental orientation for the process starts off on an "inside/out" track. Middle managers tend (emphasize tend) to view the role of marketing with a bias favoring their personal domain of expertise or responsibility. It's just natural. Sure, you can counterbalance by naming a team of managers who will supposedly neutralize each other's biases, but the result is often a recommendation derived primarily through compromise amongst peers whose first consideration is often a need to maintain good working relationships. Worse yet, it may exacerbate the extent to which the measurement challenge is viewed as an internal marketing project, and not a cross-organizational one.

Second, delegating elevates the chances that the proposed metrics will be heavily weighted towards things that can more likely be accomplished (and measured) within the autonomy scope of the “delegatee”. Intermediary metrics like awareness or leads generated are accorded greater weight because of the degree of control the recommender perceives they (or the marketing department) have over the outcome. The danger here is of course that these may be the very same "marketing-babble" concepts that frustrate the other members of the executive committee today and undermine the perception that marketing really is adding value.

Third, when marketing measurement is delegated, reality is often a casualty. The more people who review the metrics before they are presented to the CMO, the greater the likelihood they will arrive "polished" in some more-or-less altruistic manner to slightly favor all of the good things that are going on, even if the underlying message is a disturbing one. Again, human nature.

The right role for the CMO in the process is to champion the need for an insightful, objective measurement framework, and then to engage their executive committee peers in framing and reviewing the evolution of it. Measurement of marketing begins with an understanding of the specific role of marketing within the organization. That's a big task for most CMOs to clarify, never mind hard-working folks who might not have the benefit of the broader perspective. And only someone with that vision can ruthlessly screen the proposed metrics to ensure they are focused on the key questions facing the business and not just reflecting the present perspectives or operating capabilities.

Finally, the CMO needs to be the lead agent of change, visibly and consistently reinforcing the need for rapid iteration towards the most insightful measures of effectiveness and efficiency, and promoting continuous improvement. In other words, they need to take a personal stake in the measurement framework and tie themselves visibly to it so others will more willingly accept the challenge.

There are some very competent, productive people working for the CMO who would love to take this kind of a project on and uncover all the areas for improvement. People who can do a terrific job of building insightful, objective measurement capabilities. But the CMO who delegates too much responsibility for directing measurement risks undermining both the insight and organizational value of the outcome -- not to mention the credibility of the approach in the eyes of the key stakeholders.

Tuesday, February 16, 2010

Lose the Marketing Love Handles Without Lifting a Finger

I got an email the other day from a marketing technology company trumpeting their software’s ability to help me “improve marketing ROI without lifting a finger.” Wow. Incredible. Can’t be true, can it?

I asked for a demonstration copy to see if I could realize the incredible benefit, but no luck. They wouldn’t send me one. So to test the validity of the claim, I went to the center of all things factual – the internet – to see what else I could do without lifting a finger. The options are amazing. I can:
  • Lose 20 pounds
  • Find a high-paying new job
  • Earn a college degree
  • Write a book (someone else will do it for me)
  • Drive more traffic to my website
  • Be healthier
  • Look better
  • Attract more members of the opposite sex
  • And, my personal favorite, grow more hair.
I feel stupid. I’ve been spending so much time at the gym, writing my own books, taking my own college exams, choosing my own food carefully, and fretting over my hair. I could have spent all that time goofing off and gotten better results.

And I’m really pissed off about the effort I’ve wasted on measuring and improving marketing ROI. For seven years now, I’ve been working on improving marketing ROI all day every day; working with hundreds of marketing, finance, and sales managers in dozens of companies; overcoming obstacles of technical, structural, cultural, and political dimensions; making slow and steady progress.

NOW I discover that, had I just purchased the right software, I could have achieved much more with virtually NO effort. If my clients ever find out, I’m screwed.

On the whole, I think this magic ROI elixir software is really a good thing. It will:
  • Reinforce CMOs’ desire to believe that they can and should delegate ROI efforts even further down the org chart. After all, they have many more important things to worry about.
  • Give marketing managers something more tangible to point to when asked “what are you doing to improve the return on our marketing investment?” Of course, we’ve bought some software to fix that.
  • Postpone the question another year while the software winds its way through the procurement process and then gets passed around the IT department – all the while allowing the marketers to keep doing things the way they have been doing them.
  • Befuddle the finance department and get them off marketing’s back. You know how those finance guys love data. They’ll gladly wait awhile if they think there’s some data coming.
So forget all that phooey about aligning on metrics, implementing smart experiments, and methodically improving analytics. Don’t waste time on smarter marketing research. Just cut that shrink wrap on the software box, hit “install”, and off you go.

Then wait for the Easter Bunny to deliver your bonus check.

Hyperbole is a dangerous tool in the hands of marketers – particularly when it comes to measuring marketing ROI. It undermines our credibility with the more serious financial types who often are key influencers on how much we get in the way of resources and what we can do with it. It reinforces their perceptions of marketers as wild-eyed optimists willing to try anything new to deflect the gravity of the questions being asked. Besides, if there WERE a magic marketing ROI software, do you really think your progressively-minded organization would be amongst the very first to find it?

Bad news. There is still no substitute for diligent, disciplined work when it comes to measuring the payback on marketing. Technology enables, but vision and persistence win every time. Show me a company with the will to work at it, and I’ll show you the company that will get clear insights into their ROI long before the software buyers ever realize they’ve been misled.

Measuring and improving ROI is much more like going to the gym every day; watching what you eat; taking classes to earn a degree; and (take it from one who’s done it) writing a book yourself. Persistent, methodical effort is rewarded with great benefits.

So let’s get after those spending love handles and the marketing muffin top.


Tuesday, February 02, 2010

My friend and business partner, Dave Reibstein of Wharton, writes an advice column called Ask Dave where former students write in with questions on marketing metrics and he offers some sage advice. This recent correspondence caught my attention for its relevance and timeliness for all of you with new fiscal years beginning soon…

Dear Dave –

I’m a finance manager at a large industrial manufacturer. I often sit in the meetings where marketing comes in and shows us how low our marketing spend is compared to our competitors, and then asks for more money.

Maybe I’m wrong, but I don’t see that matching competitor spend is a path to success, is it? How should I coach our marketing team about what a better analysis would look like and what information they should bring to the table?

Sincerely –
Geoff D. in Chicago


Dear Geoff —

The right amount of spend is indeed a relative thing. Our product appeal, our pricing, our packaging, and just about every aspect of our value proposition are only important RELATIVE to the alternatives the customer has. Likewise, our spending is also, in part, only effective RELATIVE to what others are spending. But there are huge risks tied to making that relative spend the center of any strategy, as we are often following moves that deserved no response.

For example:
  • A while back there was a “holey war” between laundry iron manufacturers. One manufacturer first put holes in the bottom of its iron, allowing linens to be steamed as they were being ironed. The competitor responded by adding more holes to its own model. Then the race was on to see who could add more and more holes, until the number of holes was well beyond what the customer cared about.
  • Pepsi chased Coke into the low-carb soft drink market (half the calories and half the carbs of normal colas). Hundreds of millions were spent before anyone realized that consumers who cared about their carb intake wanted ZERO carbs and calories, not half.
  • P&G chased after Kimberly-Clark in the introduction of moistened toilet tissue. Both were greeted with failure in this market.
In each case, it was easier to approve the budgets for these programs once it was learned that competition was going to be moving there too. Fear of falling behind is a powerful motivator. The aversion to loss is much greater than the attraction of gain (see Prospect Theory). Marketing managers who use competitive spending as a primary justification for their own spending are (most often unknowingly) playing to this loss-aversion instinct.

But when we measure success on a relative basis (e.g., “share of voice” or “market share”), our behavior is aimed only at keeping up with the competition. Also, the notion that we shouldn’t spend on something until we witness the competition doing so implies that the competition knows more about the right action than we do, which is very often not true. So it results in the unwise being led by the equally unwise. Even worse, it results in never taking the steps to actually get ahead of the competition.

The proven path to success is careful analysis of what makes a difference. Rigorous testing of historical data, or experimentation with spending levels to see what really works, allows you to take progressive action without having to wait for the competition to move first. Smarter companies gauge success on the absolute impact marketing has on the financial goals of the firm.
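One simple form of the historical-data testing Dave describes is a spend-response fit. The sketch below uses invented numbers and a basic diminishing-returns model (sales = a + b·log(spend)); a real marketing-mix analysis would also control for seasonality, competitive activity, and carryover effects:

```python
import math

# Invented history of (marketing spend $K, sales $K), for illustration only
history = [(100, 520), (200, 610), (400, 690), (800, 780), (1600, 860)]

# Fit sales = a + b*log(spend) by ordinary least squares on x = log(spend)
xs = [math.log(spend) for spend, _ in history]
ys = [sales for _, sales in history]
n = len(history)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x

def predicted_sales(spend):
    return a + b * math.log(spend)

# Diminishing returns: each extra $1K of spend adds roughly b/spend in sales,
# so the marginal payback at $1,600K is a quarter of that at $400K
print(round(b / 400, 3), round(b / 1600, 3))
```

Under this toy fit, doubling spend from $800K to $1,600K buys only about b·ln 2 ≈ $85K of incremental sales; whether that clears the hurdle rate is exactly the absolute-impact question raised above, and it can be answered without reference to what competitors spend.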

It all comes down to what your mother always told you… “have a mind of your own”.


Tuesday, January 19, 2010

The Cost of NOT Branding

It’s a simple formula: recession requires more tactical spending. This year’s budget = online spend + social activity + lead generation campaigns – brand investment.

When the dollars get tight, spend shifts to more tangible, less expensive marketing programs with the promise of shorter-term returns (or at least lower costs). Not that there’s anything wrong with saving a few bucks wherever you can get the job done more efficiently. But when saving money becomes the goal instead of a guideline, something big always suffers… usually the brand.

While this is an important problem within the B2C community, it is absolutely URGENT within the B2B community. B2B marketers in large numbers have seen their marketing resources cut back dramatically for anything that isn’t expected to generate significant near-term flows of qualified sales leads. Why? Because absent good metrics to connect brand or longer-term asset development to actual financial value, these things were seen as strategic luxuries that could be postponed.

If I were a CFO looking for strategies to free up cash, I might have reached the same conclusion. Unless my marketing team could explain to me the cost of NOT investing in brand.

Here’s an example… A B2B enterprise technology player (Company X) dropped all marketing programs except those that A) specifically promoted product advantages or B) generated suitable numbers of qualified leads to offset the cost. After a few months, leads were on target, but the sales closing cycle was creeping up. What was originally a 6 to 9 month cycle was becoming 9 to 12 months. Further analysis and research amongst prospects and customers showed that indeed some of this delay was being caused by the general economic uncertainty and the need for buyers to rationalize their purchases internally with more people. But fully 45 days of this extended cycle (estimated by sales managers) was happening because the ultimate decision-makers weren’t sufficiently familiar with the strength of Company X’s product/service offering. (They thought Company X made small consumer electronics and was not a serious player in enterprise tech.) So the sales team had to make repeated visits and presentations just to work their way into the game to compete on feature/function/price/value.

In this case, the cost of NOT branding could be measured by the increased cost of direct sales associated with NOT branding. Specifically, if Company X measures the sales cost per dollar of contribution margin amongst accounts with strong brand consideration versus those with little-to-no brand perceptions, it should expect to see at least a 50% difference (nine months of effort vs. six), half of which would be attributable to low levels of brand consideration. Multiply that by the percentage of prospects in the addressable market with low levels of brand perception, and you can quickly derive a rough approximation of the cost of NOT branding, expressed either in terms of the additional sales headcount required to compensate for the lack of branding, or in terms of the sales opportunity cost of an underdeveloped brand. Either way, it is an eminently measurable problem that better illuminates the business case for investing in brand development.
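The Company X arithmetic can be made concrete with a back-of-the-envelope sketch. All figures below are hypothetical and invented for illustration; the method, not the numbers, is the point:

```python
# Back-of-the-envelope cost of NOT branding, per the Company X logic above.
# Every input is a hypothetical placeholder.

strong_brand_cycle = 6.0   # months to close when brand consideration is strong
weak_brand_cycle = 9.0     # months to close when brand perception is low

# Sales cost scales roughly with cycle length: 9 vs. 6 months is 50% more effort
cost_penalty = weak_brand_cycle / strong_brand_cycle - 1          # 0.50

# Per the analysis above, about half the extra effort traces to weak branding
brand_share_of_penalty = 0.5
brand_penalty = cost_penalty * brand_share_of_penalty             # 0.25

low_brand_share_of_market = 0.60       # hypothetical fraction of prospects
annual_direct_sales_cost = 10_000_000  # hypothetical direct sales cost ($)

cost_of_not_branding = (annual_direct_sales_cost
                        * brand_penalty
                        * low_brand_share_of_market)
print(f"${cost_of_not_branding:,.0f}")  # $1,500,000 a year under these assumptions
```

A rough number like this, stated in sales-cost dollars, is usually far more persuasive to a CFO than any awareness or consideration metric on its own.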

There are many other ways to measure the cost of NOT branding, including relative margin realized and strategic segment penetration. The right approach for you will depend upon your organization’s key business goals.

Now I’m NOT advocating branding as a solution in every circumstance. Nor am I a proponent of the idea that marketing should generally be spending more money versus less. But as a tireless advocate for marketing effectiveness and efficiency, I think we too often fail to examine the business case for NOT doing something as a means of pushing past cultural and political obstacles in our management teams. Remember, there are always two options: DO something, and NOT do something. Each is a definitive choice requiring its own payback analysis.

_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, January 05, 2010

Forecast for 2010: Better Forecasting

Yogi Berra said, “It’s tough to make predictions, especially about the future.” Yet the turn of the year (and the decade) makes forecasting an irresistible temptation. But what if forecasting is part of your job, not just a hobby? How do you make sure your forecasts are smart, relevant, and even (dare I say) accurate?

While advanced mathematics and enormous computational power have improved forecasting potential significantly, few would argue that forecasting is an exact science. That’s because at its core, forecasting is still mostly a human dynamic where accuracy is dependent upon…

  • asking the right people the right questions;
  • their willingness to answer truthfully and completely;
  • the ability to separate the meaningful elements from the noise; and,
  • the openness of the forecaster to suggestions of process improvement.

That last point is key: process improvement.

Consistently good forecasting isn’t a mathematical exercise performed at regular intervals (e.g., quarterly) as much as it’s an on-going process of gathering and evaluating dozens or hundreds of points of information into a decision framework. Then, when called upon, this decision framework can output the best forward-looking view grounded in the insights of the contributors. While software can facilitate process structure by prompting for specific fields of information to be included, it cannot make judgments on the quality of the information being input. Garbage in; garbage out.

As marketers, our job is to consistently prepare forecasts that help our companies conceive, plan, test, build and ultimately sell successful products and services. Sound forecasting processes form the foundation of an “early warning” system to alert the rest of the organization to the need to rethink its market orientation. In essence, forecasting becomes the rudder that can help your company stay the course, change directions, or navigate uncharted waters with confidence. As such, marketing migrates from being a tactical player to a strategic resource for the CEO when forecasts become more accurate, timely, and reliable.

Five Keys to Better Forecasting

Here are a few things I’ve learned over the years which result in much better forecasts:

  1. Be Specific. Define exactly what you are trying to forecast. If you say “sales”, what do you really mean? Revenue? Unit volume? Gross Margins? Net profit? The differences are substantial and might cause you to take very different approaches to forecasting. Likewise, having some sense of how far out you need to forecast (e.g. 3 months, 12, 36, etc.) and how accurate you need to be will guide you to use forecasting methods and processes better suited to your objectives.
  2. Be Structured. Being methodical in defining all of the dimensions, variables, facts, and assumptions will pay huge dividends in several ways, including explaining your forecast to skeptics and inspiring confidence that you’ve been comprehensive and credible in your approach.
  3. Be Quantitative – With or Without Data. Regardless of how little data you have, there are scientifically developed and proven ways of making better decisions. You may not have the raw materials for statistical regression forecasting, but you surely can use Delphi techniques or other judgmental calculus tools to transform perceptions and intuition of managers into data sets which can be more fully examined and questioned. Often, the process of quantifying the fuzzier logic uncovers great insights that were previously overlooked.
  4. Triangulate – Use multiple forecasting methods and see how the results differ. Chances are that the “reality” is somewhere within the triangle of results. That level of accuracy may be sufficient. But even if it isn’t, the multiple-method approach highlights weaknesses in any single method which might otherwise be overlooked – and that in itself leads to more accurate forecasts.
  5. KISS – Keep it simple, stupid. As with most things in life, simplicity is a virtue in forecasting. Einstein said that “things should be made as simple as possible, but no simpler.” In forecasting, we interpret that to mean that an accurate and reliable forecasting process should be comprehensive enough to identify the truly causal factors, but simple enough to explain to those who will need to make decisions upon it. There is no power in a forecast if those who need to trust it cannot understand or explain the logic and process behind it.
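Two of these keys — quantifying judgment even without data (#3) and triangulating across methods (#4) — can be illustrated in a short sketch. The sales history and manager estimates below are invented for illustration, and a real Delphi process would add iterative feedback rounds that this omits:

```python
import statistics

# Hypothetical monthly unit sales and three independent forecasts for next month.
history = [120, 132, 128, 140, 151, 149]

# Method 1: naive -- assume next month equals last month.
naive = history[-1]

# Method 2: simple linear trend, fit by least squares over the month index.
n = len(history)
xs = range(n)
x_bar = sum(xs) / n
y_bar = sum(history) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history)) / sum(
    (x - x_bar) ** 2 for x in xs
)
trend = y_bar + slope * (n - x_bar)  # project one step past the data

# Method 3: Delphi-style judgment -- the median of independent manager estimates,
# turning intuition into a data point that can be examined and questioned.
manager_estimates = [145, 160, 150, 155]
delphi = statistics.median(manager_estimates)

forecasts = {"naive": naive, "trend": round(trend, 1), "delphi": delphi}
print(forecasts)
# "Reality" is likely somewhere inside the triangle of results.
print(f"triangulated range: {min(forecasts.values())} to {max(forecasts.values())}")
```

When one method lands far outside the triangle formed by the others, that is usually the method whose assumptions deserve the hardest questioning.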

Recognizing forecasting to be a complex human decision process is the first step towards dramatically improving your batting average and improving the accuracy and reliability of the forecasts coming out of your department.

If you’re interested in learning more, here is some expanded forecasting insight, and some great sources of forecasting methods and tools.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – specialist advisors on marketing measurement and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, December 22, 2009

Measuring What Matters Most

What is the most important thing in your marketing plan to measure?
A) Campaign response.
B) Customer satisfaction.
C) Brand value.
D) Media mix efficiency.
E) All of the above.

The fact is that there are so many things to measure that more and more marketers are getting wrapped around the axle of measurement, wasting time, energy, and money chasing insight into the wrong things. Occasionally this is the result of prioritizing metrics based on what is easy to measure, in a well-intentioned but misguided attempt to just “start somewhere”. Sometimes it comes from an ambitious attempt to apply rocket-science mathematics to questionable data in search of definitive answers where none exist. But most often, the challenge is simply identifying which metrics are the most important. So here’s a way to isolate the things that are really critical, and thereby the most critical metrics.

Let’s say your company has identified a set of 5 year goals including targets for revenue, gross profit margin, new channel development, customer retention, and employee productivity. The logical first step is to make sure the goals are articulated in a form that facilitates measurement. For example, “opening new channels” isn’t a goal. It’s a wish. “Obtaining 30% market share in the wholesale distributor channel within five years” is a clear, measurable objective.

From those objective statements, you can quantitatively measure the size of the gap between where you are today and where you need to be in year X (the exercise of quantifying the objectives will see to that). But just measuring your progress on those specific measures might only serve to leave you well informed on your trip to nowheresville. To ensure success, you need to break each objective down into its component steps or stages. Working backwards, for example, achieving a 30% share goal in a new channel by year 5 might require that we have at least a 10% share by year 4. Getting to 10% might require that we have contracts signed with key distributors by year 3, which would mean having identified the right distributors and begun building relationships by year 2. And of course you would need all your market research, pricing, packaging, and supply chain plans completed by the end of year 1 so you could discuss the market potential intelligently with your prospective distributors.

When you reverse-engineer the success trajectory on each of your goals, you will find the critical building block components. These are the critical metrics. Monitor your progress towards each of these sub-goals and you have a much greater likelihood of hitting your longer-range objectives.
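The backward-planning logic above can be expressed as a simple milestone check. This is a hypothetical sketch using the new-channel example, not a prescribed tool — the years and milestone wording are placeholders:

```python
# Milestone map reverse-engineered from the 5-year, 30%-share channel goal.
# Each (year, description) pair is a critical building block; the metrics
# you monitor are whatever tests on-track performance against each one.
milestones = [
    (1, "research, pricing, packaging, supply-chain plans complete"),
    (2, "target distributors identified, relationships started"),
    (3, "contracts signed with key distributors"),
    (4, "10% channel share"),
    (5, "30% channel share"),
]

def off_track(current_year: int, completed: set) -> list:
    """Return every milestone due by now that hasn't been hit."""
    return [
        desc for year, desc in milestones
        if year <= current_year and year not in completed
    ]

# Example: end of year 3, but the year-3 milestone (contracts) isn't done yet.
print(off_track(3, completed={1, 2}))
# -> ['contracts signed with key distributors']
```

Anything this check flags is an early warning that the year-5 goal is drifting out of reach while there is still time to respond.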

Kaplan and Norton, the pair who brought you the Balanced Scorecard and Strategy Mapping, have a simple tool they call Success Mapping to help diagram this process of selecting key measures. Each goal is broken down into critical sub-goals. Each sub-goal has metrics that test your on-track performance. A sample diagram follows.

By focusing on your sub-goals, you can direct all measurement efforts to those things that really matter, and invest in measurement systems (read: people and processes, not just software) in a way that’s linked to your overall business plan, not as an afterthought.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – objective advisors on marketing measurement and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, December 08, 2009

Making the "Best" Business Case for Marketing Investments

More than ever before, the approval of any significant marketing initiative is dependent upon a compelling business case. A business case is meant to function as a logical framework for the organization of all of the benefit and cost information about a given project or investment.

Working with this definition, one might conclude that a “good” marketing business case is one that increases the quality of decision making. Yet many of us in marketing have come to believe that a good business case is one that predicts a significantly positive ROI, IRR, and/or NPV for a given investment. Strangely, we tend to water-down any assumptions that actually seem to make the case “too good,” lest someone in finance really begin questioning our assumptions. Have you ever found yourself…

  • Using aggressive, moderate, and conservative labels on business case scenarios to show how even the most conservative view provides a strong potential return, and anything beyond that is gravy?
  • Identifying the always-low break-even point at which expenses are recaptured fully, and showing how this point occurs even below the conservative outcome scenario?
  • Taking a “haircut” in assumptions to show how, “even if you cut the number in half, the result is still positive.”

Every time you use one of these approaches in an effort to build credibility with finance or other operating executives, you paradoxically wind up undermining it instead. These tactics all have been shown to communicate subtle messages of inherent bias and manipulative persuasion which, intended or not, are noticeable to non-marketing reviewers – even if only on an instinctive level versus a conscious one.

In my experience, business cases get rejected most often for one of the following reasons:

  • Bias – senior management perceives that marketing is trying to “sell” something rather than truly understand the risk/reward of the proposed spending recommendations.

  • Jumping to the Numbers – showing final forecasts which contradict executive intuition before they have had a chance to reconsider the validity of those instincts.

  • Credibility of Assumptions – forecasts seem to ignore the effect of key variables or predict unprecedented outcomes.

Successful business-case developers recognize that there is more at stake than just getting funding approved. In reality, there are several objectives which must all be achieved with every business case:

  1. Protecting personal credibility. Any one program or initiative may be killed for many possible reasons. But you will still need to come to work tomorrow and be effective with your executives, peers, and team members. Preserving (and strengthening) your personal credibility is therefore the paramount objective.

  2. Enhancing the role of marketing. If you have personal credibility, you will want to use it to take smart risks to help the company achieve its objectives, and to influence matters relating to strategy, products, markets, etc. In the process, you need to be thinking about the role of the marketing function; how it can best serve the firm; and how you need to evolve it from what it is today.

  3. Bringing attractive options to the CEO – the kind that forces him/her to make hard decisions choosing between financially appealing alternatives.

There are always two dimensions to business case quality – financial attractiveness, and credibility of assumptions. In the end, it takes more than just financial attractiveness for a successful business case. It takes:

  • Thoughtfulness: demonstrating keen understanding of the role marketing plays in driving business outcomes and reflecting the input of the most critical stakeholders throughout the organization.
  • Comprehensiveness: including all credible impacts of spending recommendations, and calculating benefits and costs at an appropriate level of granularity.
  • Transparency: Clearly labeling all assumptions as such and presenting them in a way that encourages healthy discussion and challenge.

There are many ways to build a successful business case. But the most important learning is to understand the context in which your proposal will be evaluated BEFORE you put the numbers on the table.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – objective advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Thursday, November 12, 2009

Let's Bury John Wanamaker

There were so many interesting aspects of this year’s Masters of Marketing conference. One in particular caught my attention right up-front on Friday morning…

In this, the 100th year of the ANA, there are still lots of questions surrounding which 50% of advertising is “wasted”. I find it astounding that 100 years later we’re still having this debate.

Maybe it’s because the very nature of advertising defies certainty.

Or maybe the definition of “wasted” is too broad.

Or maybe the reality is that the actual waste factor has been reduced to significantly less than 50%, but no one famous ever said that “15% of my advertising is wasted, I just don’t know which 15%.” And it wouldn’t make for a provocative PowerPoint slide even if they did.

It’s difficult to ignore the many signs of great progress we’ve made as an industry towards better understanding the financial payback of marketing and advertising. For example:

  • Research techniques have improved and the frequency of application has increased to provide better perspective on how actions affect brands and demand.

  • We’ve not only embraced analytical models in many categories, but have moved to 2nd and even 3rd generation tools that provide great insight.

  • We’ve adopted multivariate testing and experimental design to test and iterate towards effective communication solutions.

  • We’re learning to link brand investments to cash flow and asset value creation, so CFOs and CEOs can adopt more realistic expectations for payback timeframes.

All of this is very encouraging. Most of the presenters at this year’s conference included in their remarks evidence that they have been systematically improving the return they generate on their marketing spend by use of these and other techniques. So where is the remaining gap (if indeed one exists)?

First off, it seems that we’re often still applying the techniques in more of an ad-hoc than integrated manner. In other words, we appear to be analyzing this and researching that, but not actually connecting this to that in any structured way.

Second, while some of the leading companies with resources to invest in measurement are leading the charge, the majority of firms are under-resourced (not just by lack of funds, but people and management time too) to realistically push themselves further down the insight curve. In other words, the tools and techniques have been proven, but still require a substantial effort to implement and adopt.

Third, not everyone agrees with Eric Schmidt’s proclamation that “everything is measurable”. Some reject the basic premise, while others dismiss its applicability to their own very non-Google-like environments.

So what will it take to put John Wanamaker out of our misery before the 200th anniversary of the ANA?

  1. Training – exposing more marketing managers to more measurement techniques so they can apply their creative skills to the measurement challenge with greater confidence.
  2. A community-wide effort to push down the cost of more advanced measurement techniques, thereby putting them within reach of more marketing departments.
  3. An emphasis on “integrated measurement”. We’ve finally embraced the concept of “integrated marketing”. Now we have to apply the same philosophy to measurement. We need to do a better job of defining the questions we’re trying to answer up-front, and then architecting our measurement tools to answer the questions, instead of buying the tools and accepting whatever answers they offer while pleading poverty with respect to the remaining unanswered ones.
  4. We should eat a bit of our own dog food and develop external benchmarks of progress (much like we do with consumer research today). Let’s stop asking CMOs how they think their marketing teams are doing at measuring and improving payback, and work with members of the finance and academic communities to define a more objective yardstick with which we can measure real progress.

As we embark on the next 100 years, we have the wisdom, technology, and many of the tools to finally put John Wanamaker to rest. With a little concerted effort, we can close the remaining gaps to within a practical tolerance and dramatically boost marketing’s credibility in the process.


From Time’s a Wasting – for more information visit anamagazine.net.
--------------------

Pat LaPointe is managing partner at MarketingNPV – objective advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

"That is the Big Challenge" says Eric Schmidt

I had a chance to talk briefly with Eric Schmidt, CEO of Google, at last week’s ANA conference. He’d just finished sharing his take on marketing and advertising with 1200 of us representing marketers, agencies, and supporting service providers. He said:

  • Google backed away from managing radio and print advertising networks due to lack of “closed loop feedback”. In other words, they couldn’t tell an advertiser IF the consumer actually saw the ad or if they acted afterward. Efforts to embed unique commercial identifiers into radio ads exist, but are still immature. And in print, it’s still not possible to tell who (specifically) is seeing which ads – at least not until someone places sensors between every two pages of my morning newspaper.
  • Despite this limitation, Schmidt feels that Google will soon crack the code of massive multi-variate modeling of both online and offline marketing mix influences by incorporating “management judgment” into the models where data is lacking, thereby enabling advertisers to parse out the relative contribution of every element of the marketing mix to optimize both the spend level and allocation – even taking into account countless competitive and macro-environmental variables.
  • That “everything is measurable” and Google has the mathematicians who can solve even the most thorny marketing measurement challenges.
  • That the winning marketers will be those who can rapidly iterate and learn quickly to reallocate resources and attention to what is working at a hyper-local level, taking both personalization and geographic location into account.

On all these fronts, I agree with him. I’ve actually said these very things in this blog over the past few years.

So when I caught up with him in the hallway afterward, I asked two questions:

  1. How credible are these uber-models likely to be if they fail to account for “non-marketing” variables like operational changes affecting customer experience and/or the impact of ex-category activities on customers within a category (e.g., how purchase activity in one category may affect purchase interest in another)?

  2. At what point do these models become so complex that they exceed the ability of most humans to understand them, leading to skepticism and doubt fueled by a deep psychological need for self-preservation?

His answers:

  1. “If you can track it, we can incorporate it into the model and determine its relative importance under a variety of circumstances. If you can’t, we can proxy for it with managerial judgment.”

  2. “That is the big challenge, isn’t it?”

So my takeaway from this interaction is:

  • Google will likely develop a “universal platform” for market mix modeling, which in many respects will be more robust than most of the other tools on the market – particularly in terms of seamless integration of online and offline elements, and web-enabled simulation tools. While it may lack some of the subtle flexibility of a custom-designed model, it will likely be “close enough” in overall accuracy given that it could be a fraction of the cost of custom, if not free. And it will likely evolve faster to incorporate emerging dynamics and variables as their scale will enable them to spot and include such things faster than any other analytics shop.

  • If they have a vulnerability, it may be under-estimating the human variables of the underlying questions (e.g., how much should we spend and where/how should we spend it?) and of the potential solution.

Reflecting over a glass of Cabernet several hours later, I realized that this is generally a good thing for the marketing discipline as Google will once again push us all to accelerate our adoption of mathematical pattern recognition as inputs into managerial decisions. Besides, the new human dynamics this acceleration creates will also spur new business opportunities. So everyone wins.