Tuesday, December 14, 2010

Elves Dashboard; Santa Suffers

Dateline: North Pole

Santa slipped into a PowerPoint-induced coma today halfway through the annual Elfin Performance Dashboard presentation. “I guess we had a few too many metrics,” said Lenny, Chief Analytical Elf.

According to eyewitnesses, Santa’s head began bobbing on approximately slide 46, and by the time the Elves reached slide 101 his eyes had rolled back into his head and his breathing had become very slow and rhythmic. Mrs. Claus was heard to wonder aloud if it had anything to do with the copious quantities of beef ingested by her husband over the past few weeks in an effort to fill out his oversized red suit. However, the Elves had prepared for such possibilities by stocking the room with coffee, Red Bull, and lots of M&M’s.

“This year, we just had so much more data,” said Lenny. “Our new CBM (child-behavior-manager) application gave us much better information on who had been naughty and who had been nice. Combined with the state-of-the-art production management system and our world-class routing software, we wanted Santa to see just how efficient we’d been this year in the hopes that he would fund our plant expansion in 2011.”

Dashboard experts were called in to analyze the structure and concluded that there were just too many metrics. Comprehensiveness went way up, but relevance declined exponentially.

“Next year we’ll have it slimmed down to just the most relevant facts,” said Lenny. “And we’ll try to make it a bit more forward-looking so Santa can plan better to deliver more peace and happiness throughout the world.”

Here’s to a holiday that’s:
  • Simply
  • Filled with
  • Happiness
  • And Joy.

Tuesday, November 30, 2010

Send THIS to Your CFO (Anonymously)

Attention, all finance executives seeking to understand the ROI on marketing investments…

Over the years I’ve learned that if I come home to find something in the house broken or missing, I’m much more likely to get the truth if I ask my kids “does anyone know anything about ___?” than if I ask “who broke this?” or “who took my ___?” The latter approach immediately sends everyone into damage-control mode, while the former gives them a bit more latitude to respond in a responsible way. They sense somehow that I am more interested in addressing the problem than in finding someone to blame for it. Even the kids who know they never touched the object of my immediate interest learn from my approach and become more proactive in disclosing their borrowing or breaking events in the future.

There are similarities where finance and marketing interact. Finance is often perceived to have parental-like authority in its control of the budget and its audit/oversight responsibility. So it is an unfortunate truth that too often the journey towards better insight into marketing payback gets derailed right at the start, when finance asks what it believes to be a logical and simple question, but which marketing interprets as a challenge – e.g.
  • “Is our marketing generating any value for shareholders?” or
  • “How do we know that marketing is working?”
These questions have the immediate and profound effect of putting marketing into “justification” mode and encouraging them to respond defensively. And since marketers are pretty creative and articulate people, they usually answer with a long stream of ad-hoc evidence, anecdotes, and metaphors which individually may not be so convincing, but in the aggregate create enough uncertainty within the executive committee to neutralize the question and deflect the discussion. The result is a stalemate, in which the inherent subtleties of marketing are invoked with superior powers of persuasion to cast doubt on the wisdom of cutting marketing spend.

Of course this doesn’t help the organization get any smarter. In fact, it actually has a significant “insight opportunity cost” since all the resources that could have been directed towards the pursuit of true insight get diverted to “proving” that marketing works.

Successful marketing measurement, like many other challenging tasks within the company, is a function of effectively deploying constrained resources on a few key focal points rather than fracturing the effort in a broad search for the “preponderance of evidence”. Imagine the payback insight you seek is trapped inside a large wooden log, and splitting the log open is the only way to extract it. You can split the log with a sharpened axe striking the right point in a single blow (two at most), or you can endlessly pound it with a sledgehammer until it (or you) slowly turns to dust. Which approach would you prefer?

The CFO is much more likely to get the answers they’re seeking by approaching the dialogue on marketing payback from an angle that generates productive engagement rather than defensive deflection. Doing so requires three specific attitudinal changes in how most CFOs would normally pursue the answers:

  1. Acknowledge that good marketing always creates shareholder value. If necessary, suspend your disbelief and be willing to concede that if we did things better, we would see a beneficial result. Use questions intended to discover:
    • “What can we achieve with good marketing?”
    • “How well is our current marketing performing?” and
    • “How can we improve the payback we’re getting?”
  2. Embrace uncertainty – especially in the early stages of measurement when the unknowns will outnumber the knowns. Be patient with ambiguity and willing to accept “I don’t know…” as an answer from marketing in the near term, provided it is followed in short order by “…but here is what we can do to find out.” Premature demands for precision will backfire in the form of higher weighting of the more measurable marketing elements such as web site traffic and direct response programs – even if those aren’t the real drivers of your success in the marketplace.
  3. Exercise patience. The questions you’re asking will take some time to fully answer. Expect to see some progress made soon, and then more made in measured increments, but don’t assume that applying time pressure will speed the discovery. More likely, impatience will be met with passive-aggressive resistance which will surface many more complex obstacles than you or the rest of the finance team have the time or ability to conquer.

There are other more targeted questions you can ask of marketing to put the measurement effort on the right track. But if the spirit of your inquiry is interpreted as a quest for insight rather than an attack on the marketing organization, you’ll get much closer to the answers you’re seeking, and get there much faster.

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Wednesday, November 17, 2010

One of the most popular measures of relative marketing effectiveness continues to be monitoring share-of-voice (SOV) – a metric based on your total marketing/advertising spend as a percentage of the total spend in your category. Some even go a step further and look at an index of SOV to SOM (share of market). The argument for doing so is that if your SOV is less than your SOM, you are at risk of losing share. Presumably, the converse is then also true – that if your SOV is greater than SOM, you should expect to gain share.

For example, if your measured ad spend was $20MM in a category where total measured spend was $200MM, your SOV would be 10%. If your market share was 13%, you might argue that you are underspending on an SOV/SOM basis, and that more funding is required to maintain share. You may be right, but not for the reasons you cite. And even more importantly, more marketing credibility has been squandered on this simple argument than on perhaps any other single metric over the past 20 years.
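The SOV/SOM arithmetic above is trivial to compute. A minimal sketch in Python, using the figures from the example (the function and variable names are my own, purely illustrative):

```python
def share_of_voice(own_spend, total_category_spend):
    """Share of voice: your measured spend as a fraction of total category spend."""
    return own_spend / total_category_spend

# Figures from the example: $20MM spend in a $200MM category, 13% market share.
sov = share_of_voice(20e6, 200e6)
som = 0.13
sov_som_index = sov / som  # an index below 1.0 is read (rightly or wrongly) as underspending

print(f"SOV = {sov:.0%}, SOV/SOM index = {sov_som_index:.2f}")
# prints: SOV = 10%, SOV/SOM index = 0.77
```

An index of 0.77 is exactly the kind of number that gets waved around in budget meetings; as the rest of this post argues, the calculation is easy but the inference drawn from it is what gets torn apart.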

CEOs and CFOs see right through the SOV argument and regularly tear it to shreds in budget meetings. They tend to outright reject the premise that SOV drives SOM absent any clear data to the contrary.

If you understand the mindset of the CEO and CFO, you know they are very able to envision scenarios where spending even less money on marketing than our competitors could be beneficial if A) the existing core value proposition is better than the competitors’ and customers know it; B) the money would be better spent improving the core value proposition than investing in efforts to “sell” the current inadequate version; or C) the shareholders would benefit more by dropping the savings to the bottom line. Although not often stated, these intuitive expectations are almost always fueled by concerns about the relative effectiveness of the current marketing/advertising investments to begin with.

In this environment, the marketer who enters the meeting with an argument to raise spending levels to achieve some SOV target is actually heard to be saying “Johnny has more money, so I want more.” And as soon as that impression is created, you might as well polish your resume, because your influence over the marketing budget is now far smaller than even your most conservative hopes.

Nevertheless, relative spend pressure in the marketplace can be, and often is, shown to have an impact on how market share migrates. So how do you bridge this gap credibly?

First, understand the difference between SOV and “Effective SOV” (ESOV). ESOV begins with relative spend, but then adjusts it up or down based on the relative strength of your core value proposition and/or message execution. Taking our earlier example, if your spend was $20MM in a $200MM spend category, you would index your 10% SOV by looking at the relative strength of your ad copy execution. If copy testing told you your message was at parity with your competitors’, your ESOV would equal your SOV at 10%. But if your copy was 30% weaker or stronger than your competitors’, your ESOV could be 7% (10% * (1 - 0.3)) or 13% (10% * (1 + 0.3)). In other words, you may in fact need to spend more money to maintain an effective level of marketing pressure – even more than you originally believed. Alternatively, you may be benefiting from a strong message and providing an effectiveness dividend to shareholders by requiring less ad spend.
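The ESOV adjustment can be sketched in a few lines of Python. This mirrors the simple linear 10% * (1 ± 0.3) arithmetic in the example above; the names are illustrative, and a real implementation would source the strength figure from copy testing:

```python
def effective_sov(sov, relative_strength):
    """Adjust raw share of voice for relative message strength.

    relative_strength: 0.0 = copy at parity with competitors,
    +0.3 = 30% stronger, -0.3 = 30% weaker.
    """
    return sov * (1 + relative_strength)

sov = 0.10  # $20MM spend in a $200MM category
for strength in (0.0, -0.3, +0.3):
    print(f"{effective_sov(sov, strength):.0%}")
# prints: 10%, then 7%, then 13%
```

The same function works equally well if the strength index comes from value-proposition research rather than copy testing, which is the refinement suggested below.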

Calculating ESOV using ad effectiveness is good, but using the relative strength of your value proposition (the perceived appeal of your product/service vs. your competitors’ before taking ad execution into account) captures even more of the impact of marketing spend.

CEOs and CFOs see ESOV as a legitimate analysis of relative strengths and weaknesses. Consequently, using ESOV doesn’t cost you any credibility points. But it doesn’t unilaterally gain you any unless you are simultaneously able to explain the impact of ESOV on profitable share shifting. It’s one thing to know what your ESOV is as a starting point, but quite a bit more impressive if you know how a projected change in ESOV would translate into incremental revenue and margin flows.

There are several ways to determine the incremental impact of ESOV on financial outcomes. The first is with classic marketing mix models, which can help you better understand the historical relationship. If the category dynamics are relatively stable, this might be sufficient to project the outcomes into the future.

If you either cannot implement mix models OR are in a very dynamic category where the past is not a good predictor of the future, then you can use a combination of analytical and choice-options research techniques, studying both your own and your competitors’ advertising to better understand the relative behavioral impact of each.

Neither method is perfect. But by triangulating on estimates of the relationship between ESOV and share, then consistently measuring it periodically to refine your understanding, you ensure that the next budget meeting will be a much more intelligent and fact-based discussion where both you and your recommendations come out alive and healthy.

Tuesday, November 02, 2010

Tapping Into the Wisdom of Clouds

Prediction markets have been around for quite a while now. The Iowa Electronic Markets are world-renowned for accurately forecasting the outcomes of US elections, often many months prior to election day. Sports betting sites in the UK and elsewhere move billions of dollars of wagers on the basis of the collective expertise of those betting on outcomes.

More recently, companies like Consensus Point, Crowdcast, and InTrade (among others) have brought the tools and technologies to the boardroom that allow managers to tap into global pools of “experts” to attempt to predict the future.

  • Hollywood movie studios have made “prediction markets” a key component of their forecasting efforts in deciding how much money to spend on advertising campaigns. (Interestingly, movies rated very high or very low receive relatively little advertising, as WOM is expected to play its role at both ends of that spectrum; only movies in the middle range receive significant ad spend.)
  • Retailers use prediction markets to make decisions on which products to carry in inventory, as well as how to promote and price them.
  • Technology firms use them to decide which new platforms to bet on.
  • Pharma industry leaders use them to determine pricing strategies years in advance based on competitive pipelines and regulatory approval processes.
  • And B2B industrial companies use them to model the impacts of changes in sales force size, structure, and compensation.

Perfect? No. Helpful and insightful? Definitely, in several ways.

First, by identifying possibilities not previously within the imagination of your own executive team, and by considering factors which any small group of managers might overlook, they provide a more thorough and comprehensive assessment of uncertain outcomes.

Second, even if prediction markets cannot provide an exact answer (which they rarely can – being better at offering directional probabilities than precise forecasts), they can significantly reduce the uncertainty surrounding what a given market segment might respond to, or how a group of competitors might react to a significant change by one. This makes them good tools for setting performance targets and expectations in the absence of historical perspective.

Third, with the help of cloud computing and social networks, prediction markets are declining in cost to the point that they are often much faster-to-feedback and far less expensive than traditional survey-based research, while offering far greater flexibility to have respondents explore “what-if” scenarios.

Like any tool, they can be dangerous in the hands of amateurs. Garbage-in, garbage-out is a primary risk. So is being too confident in the absolute numbers, when the directional insights are often the most valid level of granularity.

Nevertheless, progressive marketing measurers are using prediction markets to help them better understand and act on the insights they’re deriving from their mix models, their web analytics platforms, and their customer satisfaction and referral studies.

There is a whole lot more to learn about prediction markets before you jump into using them. But in general, you can benefit from using them to answer questions you might be struggling to answer with your current data streams if/when:

  1. You have a suitable pool of “experts” to engage in your marketplace. These experts could be associates, customers, prospects, or industry monitors. The exact number required differs by purpose. Sometimes you can get pretty reliable data from as few as 20 participants; other uses would require hundreds (or thousands) of participants.
  2. You can define your questions in terms of “what would happen if…”
  3. You can engage expert participants by offering something of true value in exchange for their effort and energy. Offering monetary rewards, special recognition, unique access, or other benefits of great interest will help ensure a more vibrant and active prediction market that explores new ground.

Finally, two quick learnings about how to get the most from your prediction markets (based on experience):

  1. Include some “noise” traders who inject provocative suggestions or wagers to ensure you draw reactions (supportive or contrarian) from the smarter traders with better insight.
  2. Run your markets as shorter-term events, and not continuous commitments over extended periods. Request only short-bursts of participants’ time; provide feedback quickly; and progress continually towards a defined end-point.

There’s a great deal of untapped insight potential in the clouds. Creative approaches are generating terrific new insights into marketing effectiveness and efficiency at increasingly faster rates. And the subset of “difficult to answer” questions is getting smaller and smaller every day.

Wednesday, October 20, 2010

Metrics for Your "Special Purpose"

Question: What do Steve Martin (actor) and the ANA’s Masters of Marketing have in common? Read on for the answer.

I’m just back from the ANA Masters of Marketing conference this past week in Orlando.

Celebrating its 100th anniversary, the ANA once again did a superb job of bringing together 1,600 members of the marketing community to hear 20 or so CMOs share their stories of success. Most were entertaining. Some were very informative – particularly when ANA CEO Bob Liodice would ask them questions about how they were measuring their success. On the whole, it was clear that we (the metrics-loving community at large) have made some substantial progress in this regard, as most of the CMOs were able to answer intelligently about what they were measuring and how it related back to business decisions.

But not since Steve Martin starred in “The Jerk” have I heard as much talk about “finding a special purpose”. The unofficial theme of the event seems to have been marketers talking about re-discovering their company’s true purpose in serving customers and enhancing their lives. Some have done elaborate research on their brands to find their “special purpose”. Others went back to the founders or the archives to refresh their institutional memories.

While I applaud the drive for more meaningful connections with customers and prospects, I think this trend risks leading us astray unless we apply a few carefully chosen metrics in pursuit of purpose.

First, we need to ensure that our purpose is RELEVANT. Those we seek to attract must find our purpose to be consistent with their view of how they want to live their lives, and see the link as to how we can help them do so.

Second, it needs to be MATERIAL. Even if relevant, our purpose may fail to inspire any change in behavior unless it eliminates a significant pain or provides a measurable gain. Most people will live with some degree of pain or inconvenience (tangible or otherwise) until the effort involved in resolving it is clearly less than the expected gain.

Third, our purpose needs to be DISTINCTIVE. It must be seen as somehow uniquely ours to fill. If the needs we are targeting at the core of our purpose can be filled interchangeably by any of our competitors (or other companies, even in other industries), then we don’t have a true purpose; we have a slogan. An ad campaign. Some t-shirts. Pursuing a shallow-built purpose has historically been a terrific way to spend lots of money (and executive credibility) without actually achieving any value for shareholders – hard, tangible value in the form of revenue, profitability, share, customer loyalty, referral networks, channel power, etc.

I wonder just how many companies could find a clear “purpose” that meets all those criteria?

More likely, companies have some ingredients of value proposition to offer that would rise to those standards IF they could better execute against them consistently enough to be recognized in the marketplace.

Chances are that you won’t FIND your purpose by looking through reams of research data. However, if you think you have some clues as to what it MIGHT be, you should be thinking about using various modified conjoint/choice-options types of work to validate it on the dimensions above, and in direct comparisons to WHAT ELSE you might do instead. At least then you would have some more specific sense of the value of achieving your purpose.

Once found, there may be a big role for broad-based advertising to help spark recognition of the match between the needs of the market and the abilities of the firm, as well as to inspire thousands of employees around the world towards a common goal. But as we contemplate using paid, owned, and earned media to get the message of our purpose out, let’s first ensure that we have more than a clever advertising idea to base our hopes (and those of our customers) on.


Tuesday, October 05, 2010

Overcoming the Short-Term Bias in Marketing Measurement

At best, marketing measurement tends to slant towards short-term payback at the expense of longer-term brand and customer development. But when you add heavy doses of highly measurable online tactics to more quantitatively elusive offline approaches, the slant can become an outright bias. Unchecked, this can seriously impair the marketer’s ability to make smart decisions beyond the next quarter or two. This is particularly acute with respect to fully integrated programs designed not just for lead generation, but for brand and customer development.

This pressure for short-term payback exists in part because finance cannot afford to “trust” the marketer more than one or two periods into the future, and in part because the marketer cannot “prove” that the immediate impact understates the true value derived. Rising above the stalemate requires some new thinking in how marketers plan, execute, and measure their programs, not to mention the way they communicate their expectations and findings to finance.

So how do we stimulate that new thinking? One way is to employ a Brand Value Chain.

The Brand Value Chain (adapted from Kevin Keller of Dartmouth and Don Lehmann of Columbia) helps clarify and document, for all to see, the anticipated relationship between elements of an integrated marketing program and financial value created through stronger brand equity.

In the simplest version of the Value Chain, an integrated campaign leads to some evolution in brand image, which in turn leads to some change in “equity”, which then translates into financial value.

The best way to understand the Brand Value Chain is to begin with the end in mind. Specifically, what sort of financial value is the integrated campaign supposed to lead to? Is it intended to increase the incidence of purchase? To decrease price sensitivity? To open new distribution channels through superior category leverage? To project more powerful negotiating position to vendors and suppliers? Or some combination of the above?

The Brand Value Chain tests your ability to clarify your expectations logically and to define the specific dimensions upon which brand “equity” must evolve to achieve them. How are you expecting the thoughts, beliefs, attitudes, associations, and permissions people ascribe to the brand to change or grow? What do you believe precedes seeing the desired economic behavior?

Finally, the “image” results are the early indicators (e.g. salient awareness, attribute- or characteristic-specific awareness, or more accurate awareness of the brand’s points-of-parity and/or points-of-difference) of progress. While important, they are a necessary but insufficient condition for a profitable outcome. Acknowledging this works to establish the necessity of time to translate imagery into equity into financial gain.

Once you have the Brand Value Chain constructed in a way that reflects your hypotheses about the way things work, you can identify which links in the chain you are able to test/read/validate and which you cannot. This brings focus to the information gaps and raises the question of tradeoffs between the cost and value of further insight for all to assess. If finance is so keen to have precise insight into the financial outcomes of brand advertising, they should be willing to invest in the research, testing, and experiments that would have to go into properly tracking the flow of results through the value chain. Otherwise, they will have to accept informed assumptions and estimating processes which find the balance between cost and benefit.

The Brand Value Chain has one significant flaw as it appears here. It follows the now widely discredited “hierarchy of effects” theory, which prescribed that awareness leads to conscious consideration, which in turn precedes behavior. This linear model has been found to have only limited validity in the real world. However, the Value Chain does provide a great starting point for you to map out how you think your category dynamics operate so you can construct one in a format that is most relevant to your business.

Measuring the impact of integrated marketing over the long run is possible with the application of the right tools and processes. Research, experimental design, factor analysis, and continuous feedback mechanisms all play a role in reducing the unknowns down to comfortable risk levels. It just takes some clarity and precision in defining expectations for marketing’s payback by building financial bridges from short- to long-term value creation.


Tuesday, September 21, 2010

5 Things You Can Do TODAY to Improve Your Measurement Foundation

While the economy may be improving, CFOs will be cautious not to spend too far in advance of strong demand. This will continue to fan the flames under the question of the expected payback on marketing investments, and expose cracks in your measurement foundation.

So now, more than ever before, it’s critically important to improve your ability to measure and improve your marketing ROI, and your credibility in explaining it. But most marketers can’t afford to spread their resources too thin, so what will really make the most difference in elevating your measurement game?

Here are a few suggestions…

First, get the metrics right. Many marketers are still wrestling with a panoply of things that could be measured, instead of those which should be measured. They have too many metrics concentrated in the business outcomes (e.g. “revenue”) or in the marketing activities (e.g. “web pages implemented”), and not enough in the middle ground explaining the progression of the engagement and, ultimately, the buying process. Measuring your social media activities down to the millimeter doesn’t matter at all if you don’t have a clue about the financial return of your brand advertising.

The “right” metrics are the ones that A) give you the insights to answer the question “what do I do next?”; B) are calibrated to move up or down as your spend patterns change so you can learn what moves the needle; and C) elicit head-nodding from the CFO for their credibility and validity. They should account for the impact of the vast majority of your marketing spend activity, and always be pushing to link closer and closer to the ability to determine financial value created.

Second, experiment liberally. If you’re not spending at least 10% of your total budget experimenting, your knowledge foundation is crumbling. Deliberate, focused experimentation is the most credible and effective way to test unknowns about strategy, tactical execution, or resource allocation mix, and to do so in a “live” environment where all the boogeyman variables can and do impact the results. It creates a disciplined approach to continuous learning and improvement that cannot be achieved through research alone.

Third, standardize business case requirements. Each and every proposed initiative above some spending threshold should come with a business case using a common template that forces the proposer to specifically articulate their assumptions about how this investment will ultimately help improve shareholder value in some financial context. This not only helps the quality of the plans submitted, but also permits some portfolio management of options when the resources aren’t quite sufficient to cover all requests.

In the beginning, these business cases will be full of holes. But in time, validated assumptions will fill the holes and become a repository of institutional knowledge for reasonable expectations. Then you’ll be able to approve or reject ideas faster, and with less haggling with finance. Not to mention the insight you’ll gain into where you need to shore up your key assumptions.

Fourth, stop searching for “proof” and look for insight. When marketing measurement emanates from a defensive posture, it tends to solve small problems in sequence so as to win one argument at a time. The energy consumed by fighting these battles tends to blind CMOs to the pursuit of a broader understanding of the real drivers of business success.

A better approach is to work with finance, sales, SBU’s and other key stakeholders to lay out the series of key questions you would like to be able to answer, ranked in order of priority. Once you’ve finalized the list, you now have the framework against which to plan your assault on ignorance in a way that engages the whole organization on the path, and sets you on a foundation of credibility.

Fifth, practice transparency. Make every assumption plain for all to see and criticize. Label educated guesses as such. Invite anyone to challenge your knowledge and engage them in a discussion of the practical limits on what might be known at what cost in what timeframe. Present expectations in terms of probabilities and ranges where certainty is lacking, and allow the informed opinions of others to factor into setting those ranges so they begin to have some ownership of the uncertainty “problem” themselves and are thereby more supportive of you spending money to solve it.

Finally, remember that in measurement “no pain, no gain” is very real. It takes focus, effort, and resources to improve your knowledge. These five components will help you lay the right measurement foundation to build on.


Tuesday, September 07, 2010

Measurement Problems? Check Your Credibility Chain.

Which of the following is the biggest obstacle to better measurement of the payback on marketing investments?

A) Lack of data
B) Low measurement skills
C) Drowning in a thousand metrics
D) Low credibility in the eyes of key stakeholders

While data and skills are critically important components of measurement success, there is no single obstacle more formidable than lack of credibility. Data, after all, can be estimated within reason. Skills can be “rented” while they are being developed. But credibility either exists or it doesn’t. And if it doesn’t, the road back can be long and winding.

Credibility is crucial because so much of marketing measurement involves making assumptions: assumptions about the impact of tactics on prospects; assumptions about the interaction effects between tactics; assumptions about the effects of macroeconomic forces; and so on. For every assumption, there is an element of credibility in both the assumption and the assumer. In other words, there are dozens or even hundreds of places where cracks can form in the credibility of any measurement framework.

Managing the credibility of any measurement effort can be broken down into four elements, each of which is necessary to preserve the “credibility chain”.

First, measurement efforts must be seen to be aligned to the needs of the business. Key questions being answered need to be clearly linked to the success of the business in both the current and future perspectives. In addition, those key questions need to be seen as worthy of the resources being allocated to pursuing answers, and “material” to the decisions the business will face.

Once aligned, the measurement effort needs to be seen as comprehensive. Every relevant dimension must be explored fully to leave no stone unturned in the search for insight. If there are 100 stones that need to be turned over and you only explore 99, you get no credit. Rather, you figuratively get hit in the head with the 100th stone by those who feel you may have conveniently ignored it for fear it contained secrets that would undermine your arguments.

But even being comprehensive isn’t sufficient… you need to also be seen as objective in your assessment of what you find under each rock - identifying both the supporting and refuting evidence observed. Yet objectivity is difficult for marketers who tend to see the world through rose-colored glasses, hoping that things we try will work in the marketplace. That’s a natural psychological mechanism in a profession where small failures happen every day on the path to learning. It just needs to be counter-balanced by a concerted effort to question those optimistic tendencies, if for no other reason than to demonstrate that your primary concern is truth, not “proof”.

And finally, when that aligned, comprehensive, objective assessment gets translated into recommendations for how money should be spent, accountability is the perceived result. And people who are seen to be accountable are usually entrusted with more resources and responsibilities.

When you line each of these components up, the end result is credibility in measurement.

If your key stakeholders are questioning the credibility of your measurement efforts, chances are your credibility chain is broken somewhere. Take a look back to see if you have clear alignment on what questions you are answering and how they are prioritized. Ask yourself if there are any other “rocks” that could be turned over. Test your observations with others to see if they are seen as objective. And then check to ensure your recommendations are directly linked to the insights you’ve gained in the process.

It clearly takes more effort to build and preserve a credibility chain than it does to throw ad-hoc metrics and data at your key stakeholders. But it is almost always an investment that leads to a longer, more successful tenure in senior marketing roles.

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, August 24, 2010

Achieving Simplicity in Measurement

So often I hear that a given approach to measuring the payback on marketing is “too complex”. It often gets voiced as: “This is too complex for our executives to understand. Can’t we just make it simple?”

The answer is YES. We can make it simpler. To do so, we need to start by recognizing that there are several types of “complexity” that need to be managed.

Over time, I’ve come to learn a few things that help in trying to manage these complexities:
  1. Simplicity is the destination, not the starting point. If the answers were simple, you would have found them already. The challenge is to sift through the complexity to identify the real insights - e.g., discovering brand drivers, identifying customer segments, etc. The path towards marketing measurement excellence requires working through complexity to arrive at the core set of appropriate measures, processes and frameworks.

  2. Complexity in measurement is a reflection of complexity in the business. The measurement plan just reflects the nature of the business, and should be on-par with the complexity management already deals with daily. The goal of the measurement plan is to establish a more structured framework for systematically eliminating complexity over time by isolating the most meaningful areas of focus. If you knew those already, you wouldn’t need to search for them through your metrics.

  3. Complexity is only a problem when introduced to the BROAD organization. The totality of the measurement plan and activity will be contained within a small group of people who already manage the current complexity. It is the responsibility of this core group to communicate comprehensiveness without adding any unnecessary complexity. Roll-out to the broader workforce will be gradual and likely limited by function.

  4. Simplicity, if not thoughtfully pursued, can inhibit competitive differentiation. If the real competitive insights were as obvious as picking up shells on the beach, everyone would have them. Try not to limit yourself to picking up the same shells as your competitors. Complexity is often a necessary part of meaningful innovation.
Specific to each type of complexity, there are some clear resolution strategies…

In the end, the goal of enhanced marketing measurement is to guide the organization to take bigger, smarter risks to achieve competitive advantage. Identifying and analyzing those risks is always complex. Your measurement approach needs to be based upon the processes, skills, and tools designed to make it SIMPLER in the future than it is today.
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at http://www.marketingnpv.com/.

Tuesday, August 10, 2010

Dashboards - Huge Value or Big Expense?

I’m hearing more and more “dashboard bashing” these days. It seems that many have tried to implement them, and then drawn the conclusion that the view isn’t worth the climb. I’ve been inside companies where you cannot even say “dashboard” in front of senior executives for fear of being thrown out of the room. Likewise, I’ve been inside companies and heard things like “Dashboards are easy. We have hundreds of them.”

Having “written the book” on marketing dashboards several years ago, I think I own part of this problem. And I confess that there are far fewer comprehensive dashboards in use within Global 1000 marketing organizations than I thought there would be by now. In short, it seems it was a good idea in concept, but it just hasn’t “stuck” within most companies. Why?

Well, for starters, dashboard design and implementation tends to get delegated down the hierarchy and treated like campaign management or marketing automation tools. It’s a bit paradoxical, but if a CMO wants an insight-generating dashboard that saves them time, they need to put more time into nurturing its birth and evolution. EVERY successful dashboard implementation I’ve seen (and yes, there are a few) shares a common foundation of senior management attention and high expectations. Without that, dashboards are born of a thousand compromises and arrive neutered of their value.

Second, they are set up to fail by unrealistic expectations about the resources required. Sure, the software to run dashboards is getting cheaper all the time, but the effort to gather, align, and interpret data is significant. Not to mention the time required to train the staff how to USE the dashboard to THINK differently about the business than they did before. After all, if your dashboard doesn’t offer the prospect of causing people to think differently, why build it when you can just continue to rely on the existing mélange of reports flying around the building?

Third, there is pressure to execute in Excel in the belief that it will be easier and less expensive. In reality, the limitations it imposes undermine the potential to grab people’s imagination and draw them into interacting with data in new ways. The users can’t sense anything different from the reports they currently get. In short, penny wise and pound foolish.

And finally, many dashboards start with metrics that CAN be measured today (the pragmatist approach) instead of envisioning the spectrum of things that SHOULD be measured (the visionary approach) and forcing some amount of new learning and exploration right from the start. Without this stretch exercise, dashboards launch with no prospect of new information, and therefore no compelling reason for anyone to take the time to learn to use them.

So to sum up what we’ve learned, I’d say if you're looking for great insight without expense, stay away from dashboards. You'll be disappointed. But don't burn your bridges behind you. After you've searched far and wide for true insight-generating solutions that meet the "good" and "cheap" criteria, you may just arrive back at the reality that insight is derived through dedicated effort over time. And while the current generation of dashboard software options is slick and inexpensive, they won’t perform the most important transformations for you – the process, skill, and culture ones.

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, August 03, 2010

Catching Lightning in a Bottle

What do these great marketing breakthroughs have in common?

• MasterCard “priceless”
• Energizer bunny
• “Got Milk?”
• Absolut ________

All were truly “breakthrough ideas”. All were “viral” ad campaigns before there was such a term. But they all were much more than ad campaigns – they were positioning strategies that effectively cemented their brands into the top echelon of their respective categories. They were marketing platforms that have lasted for many years and evolved to adapt effectively in dynamic environments.

Yet of all the marketing jargon that’s penetrated our brains, I think the concept of the “breakthrough idea” may be one of the most dangerous.

Would we all love to have one? Sure. When one comes along, can it revolutionize our business? Absolutely. So what’s the problem? Shouldn’t we all aspire to the same success?

Statistically speaking, most marketing organizations have a better chance of hitting the lottery than they do creating a breakthrough idea that’s more than just a short-lived ad campaign. Declines in both research budgets and internal competencies are primary causes. But internal politics and dynamic competitive environments play a role too. All of which is exacerbated by shorter timelines to produce demonstrable results.

Having spent the better part of the last 10 years crawling inside many Fortune 500 companies to help them measure marketing effectiveness, I have recently come to the (much overdue) conclusion that most measurement problems stem from the core evils of parity value propositions and absence of effective positioning. We’ve somehow managed to shift almost all our efforts from strategic insight development (which we’ve outsourced to consultants and research companies, and then put them on very tight budgetary leashes) to tactical execution in the mistaken belief that the only viable strategy for success in a two-year evaluation window is to catch lightning in a bottle in the form of a positively viral ad campaign. In other words, most unintentionally place themselves in a position where they are relying upon lightning to strike in a specific place during a short window of time.

True, most of the breakthroughs above were born in moments of pure inspiration on either the client or agency side. But those moments were carefully “engineered” to come about through insightful research and market study. They were successful outcomes of a diligent “R&D” process.

As you look ahead to your 2011 budget, how much have you allocated to “R&D”? Not surprisingly, even companies with multi-billion dollar R&D budgets for engineering and product development will likely have just a small fraction of their overall marketing budget dedicated to generating market/customer insights. Far more money will be allocated to unstructured and uncontrolled experimentation with communication tactics in support of messages which are neither “breakthrough” nor effective strategic positioning. Many will invest heavily in analytics to optimize the media mix of campaigns to get the biggest return for the tactical budget, yet will go to sleep at night wondering if they’re actually saying the right things to the right people to inspire the right behaviors.

Increasing the probability of success in marketing almost always comes down to executing against a process of hypothesize, test, learn, refine, repeat. Along the way, you can employ a few metrics to gauge your progress at improving. For example:

  • Relative Value Proposition Strength – a measure of the degree to which your core value proposition (unbundled from ad execution) is preferred by the target audience relative to the options they see themselves having. Tracking this on a regular basis helps diagnose the extent to which your core offering is driving/depressing results versus your execution of it.
  • Positioning Relevance – a measure of the degree to which your key positioning points resonate on relevance and materiality scales compared to other positioning strategies the customers/prospects are exposed to from competitors.
  • Message Effectiveness – a measure of the degree to which your message execution delivers the right message in a compelling and differentiated manner.
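As a rough illustration of how the first of these metrics might be tracked, here is a minimal sketch in Python. The scoring approach, question design, and all figures are hypothetical assumptions for illustration, not a prescribed methodology:

```python
# Hypothetical sketch: "Relative Value Proposition Strength" tracked as the
# share of surveyed target-audience members who prefer your core value
# proposition (unbundled from ad execution) over the alternatives they
# perceive. All numbers below are invented for illustration.

def relative_vp_strength(prefer_ours: int, total_respondents: int) -> float:
    """Fraction of respondents preferring our core proposition."""
    return prefer_ours / total_respondents

# Tracked on a regular basis, a falling score flags a proposition problem
# even when execution metrics look healthy.
q1 = relative_vp_strength(312, 800)  # 0.39
q2 = relative_vp_strength(268, 800)  # 0.335
print(q1 > q2)  # prints True: proposition strength is eroding quarter over quarter
```

A declining trend here would point diagnosis toward the core offering rather than the execution of it, which is exactly the distinction the metric is meant to surface.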

Finding out where you score high or low on these metrics will direct and focus your efforts at improvement. It may also help enlighten others around the company as to the need to invest more in developing stronger value propositions through product/service innovation.

Implementing a few structured steps like these can go a long way towards informing your understanding of where and when lightning is more likely to strike, so you can put your bottles in the right places.

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring and improving the payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, June 22, 2010

Twitter Metrics? Why Bother?

There sure is lots of buzz these days about how to measure Twitter. One recent question submitted was typical… “How do we measure the value of the tweets we’re producing every day?”

Wrong question.

The right question is “SHOULD we measure the value of the tweets we’re producing every day?”

For the vast majority of companies out there, I think not.

It seems to me that Twitter is a productivity tool. You use it to efficiently communicate messages to people who have indicated a desire to hear them. In doing so, you also benefit from their willingness to re-tweet along to others they may know with similar interests.

As such, the inherent value proposition of Twitter is to REPLACE higher cost avenues of reaching interested parties with LOWER cost avenues. Consequently, the financial value of using Twitter for business is the cost savings of reaching the same people more efficiently and/or the now-affordable opportunity of communicating deeper into the universe of current and prospective customers. All of which is, to a reasonable degree, measurable.

So why am I skeptical about measuring tweets?

First off, there are no platform costs for tweeting. No software to buy. No hardware to install. Just use any web-enabled keyboard and you’re off. Everything you need to get started is free. If you’re not adding staff, and/or you’re not keeping staff to tweet when they would otherwise be expendable, then you have NO incremental cost. If this is your situation (and for most of you I suspect it is), then why bother measuring something that comes at no cost? Save your marketing measurement energy (and that of your management team) for bigger, more expensive issues with more meaningful marketing metrics.

But if you add or divert headcount (staff or contractor) to tweeting in a way that adds to cost, you should be prepared to forecast and measure the impact.

Your Twitter business case for adding additional headcount (aka “Chief Tweeting Officer”) is based on the premise that more/better tweeting will drive measurable impact on the business in some way. So you would compare the incremental headcount cost of the tweeters with the expected incremental impact on the business in terms of:

A) Incremental customer acquisition;
B) Incremental customer retention;
C) Incremental share-of-customer;
D) Incremental margin per customer or transaction;
E) Improvements in staff or channel partner performance;
F) Accelerated transactional value; or
G) Early indication of problems and the resulting benefit of acting quickly to fix them.

Each of these could be determined through a series of inexpensive experiments intended to prove/disprove the hypothesis that tweeting will somehow result in economically attractive behavior. Some might happen as a direct result of the tweeting. Others may be indirect results in association with combinations of additional marketing tactics (e.g. paid search or display advertising). Define your hypotheses clearly and succinctly, then monitor tweet consumption…
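To make the cost-benefit comparison concrete, here is a minimal sketch of the arithmetic. Every figure and the helper function are hypothetical assumptions for illustration only, not benchmarks:

```python
# Hypothetical sketch: comparing the incremental cost of tweeting headcount
# against the hypothesized incremental business impacts (categories A-G
# above). All dollar figures are invented for illustration.

def twitter_business_case(headcount_cost: int, impacts: dict) -> int:
    """Net expected annual impact: sum of hypothesized incremental
    benefits minus the incremental headcount cost."""
    return sum(impacts.values()) - headcount_cost

impacts = {
    "incremental_acquisition": 40_000,  # assumed margin from new customers
    "incremental_retention": 25_000,    # assumed margin from reduced churn
    "channel_cost_savings": 15_000,     # assumed spend replaced by tweets
}

net = twitter_business_case(headcount_cost=70_000, impacts=impacts)
print(net)  # prints 10000: positive under these assumptions
```

The point is not the spreadsheet math but the discipline: each impact line should be backed by one of the inexpensive experiments described above before it earns a place in the case.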

Anyone rolling their eyes yet?

Bottom line: tweeting, like all social media activity, is an engagement tool. We use it to try to engage current and prospective customers in further dialogue about the solutions we can offer them. So from a measurement perspective, that suggests we focus on what specific types of behavioral engagement we are trying to drive, and what economic impacts we anticipate as a result. Measure changes in those two elements and you’re well on your way to success.

There are always ways to measure marketing effectiveness. Everything in marketing can be measured. But the first and most important question to ask is “what would I do differently if I knew the answer to the question I’m asking?” Only then can you decide how to PRAGMATICALLY balance measurement with action.

My friend Scott Deaver, EVP at Avis car rental, is fond of saying “don’t bother if you can’t weigh it on a truck scale.” I think that applies very often to twittering away time measuring Twitter.

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, June 08, 2010

How to Get the Budget You Need

As 2011 planning season approaches, here are a few things you can do to increase the probability that you’ll get the resources you need for marketing to help drive the business goals. Before you go in with your request, work your way down this checklist:

1. Ensure that you are aligned to the proper goals and objectives.
Your recommendations need to be linked back to specific ways they will help achieve company goals – revenue, profit, customer value, market share, etc. The more specifically you can demonstrate these links, the more likely your recommendations will be taken seriously. Don’t focus on the intermediary steps like awareness or brand preference. Keep the emphasis on the business outcomes you intend to influence.

2. Make sure you’ve squeezed every drop from the current spend patterns.
These days, zero-based budgeting is price-of-entry. Before you can ask for more, you need to show how you have “de-funded” things you previously did that either A) didn’t work; B) weren’t aligned to evolving company goals; or C) seem less important now than other initiatives. By offering up some of your own cuts to partially fund your recommendations, you demonstrate a strong sense of managing a portfolio of investments, and a willingness to make hard choices with company money.

3. Have a plan for how you will prioritize the new marketing funds strategically, not just tactically.
If you get another $1, where will you put it? Before you answer in terms describing programs or tactics, think about segments, geographies, channels, product groups, etc. Knowing where the best strategic leverage points are is far more important than tactical mix. You can always evolve the mix of tactics. But the best tactics applied against the wrong strategic needs won’t produce any results.

4. Identify the points of leverage you can exploit.
Results accrue when you place resources behind places of competitive leverage. Knowing where your leverage points are helps ensure you are spending where it will produce noticeable outcomes. Common leverage points are relative value proposition strength, channel dominance, message effectiveness, and customer switchability. There are others too. But spending without leverage is just playing into the hands of the competitive environment. Without leverage, you have no reasonable expectation of changing anything.

5. Demonstrate an understanding of how the business environment has changed.
Even if you have clear leverage opportunities, the business environment is powerful enough to neutralize just about any unilateral effort a given company might make. Sudden swings in the macro-economic spectrum or the regulatory environment could have you spending into an impenetrable headwind and dramatically reduce the expected impact of your investments. Identify the issues that could create the strongest headwind (or tailwind) for you - interest rates, employment rates, housing starts, currency fluctuations – and prepare an assessment of how they might impact your proposed results.

6. Proactively assess the risk of your plans.
As marketers, we plan like matadors, but have the track record of the bull. We spend so much time conceptualizing our plans, but comparatively little imagining what might go wrong. Which is unfortunate, given that something almost always does go wrong. So run your plan by your company’s “Debbie Downer” – the skeptical one who always sees the worst in everything. Let her tell you what might derail your plans, and then develop a plan to manage your risks accordingly. Being proactive about identifying and managing risks demonstrates your ability to dispassionately assess options and pursue realistic opportunities for success with your eyes wide open.

7. Propose “good” benchmarks and targets for your intended outcomes.
Every recommendation should come with expected performance outcomes. Even if you present a range of possible results, you’ll need something to demonstrate the baseline of performance you are starting from, and the yardstick by which you will measure success. This creates the perception of accountability, which appeals to the deeply human desire to trust in someone else where our own personal expertise leaves off.

These 7 pre-test elements will prepare you for every question that might arise in connection with your proposals. And while competing investments might ultimately attract the resources you were fighting for, this process ensures your reputation as a capable manager will grow even if your budget doesn’t.

Tuesday, May 25, 2010

Ten Specific Ways Brand Investments Pay Back

One of the most frequent questions I get about measuring marketing is: “How do we measure the impact of our investments in brand development on the bottom line?”

If you’re really looking for an answer, here goes:

There are ten basic ways a stronger brand creates financial value.
  1. It can attract more customers, either directly or through stronger WOM.
  2. It can encourage customers to spend more with you, making them more receptive to other solutions you can offer, or just more likely to give you first shot at meeting their needs.
  3. It can influence the mix of products/services customers buy from you, since buyers normally hold strong brands in some degree of esteem and respect the “advice” of the brand.
  4. It can reduce the customer’s price sensitivity, allowing you to earn more margin from every dollar they spend with you.
  5. It can help you keep customers active longer, or at the very least act as a “safety net” to give you time or opportunity to fix problems that arise along the way.
  6. It can help you accelerate the customer’s buying process, reducing the probability that something happens to close the wallet before the spending happens.
  7. It can help you attract and retain better talent at lower recruiting and retention costs as people want to be associated with attractive brands.
  8. It can reduce operating expenses by influencing supplier concessions from companies who want to be associated with top-tier brand partners.
  9. It can attract more/better channel partners.
  10. And if that’s not enough for your CFO, tell them how stronger brands can actually lower your organization’s cost of capital, due to the lower risk of lending to a company with strong brands (all other things being equal). It’s not unlike how studies have consistently shown that taller people make more money than equally qualified people of average or lower height.
Most of the time, the business case for branding investments can be made in some combination of these ten elements. Of course, you’ll need some data (or at least some well structured assumptions) to make the case credibly. But it can be done with even just a little data.

You’ll also need some idea of just when you expect to see these effects begin to occur, and what the early indicators of progress might be (e.g. shift in perceptions, web site engagement, etc.). Setting up your marketing metrics to monitor these milestones becomes more crucial to the cause as your timeframe for payback gets longer.

Now if you’re NOT really looking for an answer, but just want to muddy the waters on marketing measurement sufficiently to frustrate the people asking the question in the hopes they’ll go away, you can do that too (for a while). Just start rambling about how EVERYTHING is branding or related to the brand, and consequently NOTHING is measurable. This can work… for a little while. But eventually the other managers find ways to marginalize you, and your budget winds up getting cut. So I’d only recommend this as a stalling strategy while you’re secretly negotiating for your next job.

For the rest of us, the case for brand investment gets clearer all the time. The tools are improving and the body of knowledge is growing fast.

In that vein, I’m sure I’ve missed something in my list above, so please feel free to remind me.


Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Monday, May 10, 2010

Newsflash: Biz Media Again Fooled by Bogus Research

I continue to see a number of highly intelligent business editors covering the marketing industry get fooled into running stories based on PR polls disguised as research. And when that happens, everyone in the marketing community gets hurt.

New research is exciting and relevant to marketers. It leads to new thinking and ideas. And it makes good copy. As an example, one major trade publication recently ran a story about a survey of “Chief Marketing Officers” and marketing measurement with the headline “Survey Finds Marketing Contributes to the Bottom Line.” This undoubtedly made it into countless PowerPoint presentations in budget meetings.

But scratch beneath the surface and this “survey” comprised 423 “business executives and marketing professionals” who may have come from any industry, be in companies with marketing budgets ranging from billions of dollars to $10k, or be CEOs or entry-level managers. In other words, contrary to what its title indicates, the sample is reflective of no particular group, aside from those willing to be surveyed.

Even more dangerously, the story went on to state that 39% of the respondents agreed that marketing is doing a good job of contributing to the financial condition of the business, up from 19% last year. Presumably this was referring to better use of marketing metrics to measure marketing. But were these the same people surveyed last year? Were they selected to match the profile of people surveyed last year, so their responses would be scientifically comparable? No. They were just this year’s batch of willing respondents, bearing little resemblance to last year’s group, thereby making any sort of trend analysis invalid.

Unfortunately, this is neither uncommon nor harmless. Marketing struggles every day to earn trust and credibility with finance, operations, sales, and other functions that have a more skeptical and discerning eye when it comes to research. But if the marketing media suggests it’s OK to accept PR polls as research, it indirectly encourages marketers to include some of these “findings” in rationalizing their recommendations to others.

In fairness to my hard-working friends in the media, most editors and staff writers (let alone marketers themselves) have not had the benefit of training in how to tell a bogus survey from a truly reliable one. They’re very busy trying to produce more content to feed both online and offline vehicles with smaller payroll and more pressure to get readers. So perhaps I can offer a few simple tips to separate the fluff from the real stuff:

  1. Before you even read a survey’s findings, ask to see a copy of the survey questionnaire and check out the profile of the respondent group so you know how to interpret what you’re being told. Get a clear sense of “who” is supposedly doing/thinking “what” and inquire as to how the respondents were incented. Then ask yourself if a reasonable person would really put the effort into answering completely and honestly.
  2. Check the similarity or differences of the survey respondents. If the vast majority of them share similar traits (e.g. company size, industry group, annual budget), then it’s fair to extrapolate the findings to the larger group they represent. But if no single characteristic (other than being in “marketing”) ties them together, they represent no one – regardless of the number of respondents. You’ll need to separate the responses by sub-group, like larger marketers versus smaller ones, or B2B vs. B2C. In general, you’ll need at least 100 respondents in any sub-segment to make it a valid result.
  3. Check to see if the sample has been consistent year-over-year. If it has, you can safely say that something has or hasn’t changed from one year to the next. But if the sample profile is substantially different year-to-year, comparisons aren’t valid due to differences in the perspectives or expertise of those responding.
  4. Ask about the margin of error. Just because 56% of some group say they feel/believe/do something, doesn’t mean they actually do. EVERY survey comes with a margin of error. Most of the PR-driven polls in the marketing community use inexpensive data collection techniques which offer no ability to validate what people say and no mechanisms to keep respondents honest. Consequently, 56% may actually mean somewhere between 45% and 65% - which may change the interpretation of the findings. Ask the survey sponsor about margin of error. If they aren’t sure, don’t publish the numbers.
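The margin-of-error arithmetic in the last point can be sketched with the standard formula for a simple random sample, z · √(p(1−p)/n). The respondent count below is an invented example, and real PR polls (which are not random samples) will have error at least this large:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a proportion p
    observed among n respondents, assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A "56% agree" finding from a hypothetical pool of only 80 respondents:
moe = margin_of_error(0.56, 80)
print(round(moe, 3))  # prints 0.109: the true figure could plausibly be anywhere from ~45% to ~67%
```

Note that this is the best case; self-selected online polls violate the random-sampling assumption, so their real uncertainty is wider still.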

And if all that is too difficult, call me and I’ll give you an unbiased assessment for free.

It’s difficult to produce a survey that provides real insight and meaningful information. That’s why real research costs real money. And while there’s nothing wrong with polls being used to gather perspective from broadly defined populations, or with PR folks using them for PR purposes, confusing PR with real research slowly poisons the well for all of us in the marketing community.

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, April 27, 2010

Measurement's Exponential Curve

I attended the Global Retail Marketing Association’s annual leadership conference recently and had the opportunity to interact with a few terrific speakers including Ray Kurzweil – renowned futurist in all matters technological. Ray bombarded us with scientifically- and econometrically-based forecasts for where health, computing, and social technologies were headed.

Kurzweil has a pretty good track record in predicting these things (as evidenced by his success as an angel investor) based on a simple concept – that evolution in technology is not linear, but exponential. He cited examples of how this has been true across the technology spectrum time and time again, but also across the spectrum of life in general. By plotting the advent of major advances in life sciences, one can quickly see how the pace of innovation is accelerating. In fact, a child born today can expect to live well into his or her upper 80s, and life expectancy itself is still climbing. (Do the math, and if those of us in middle age can just wait long enough, we may live forever.)

As I listened, it occurred to me how appropriate this concept was in the arena of marketing measurement too. While marketers have been seeking to improve the effectiveness and efficiency of their efforts almost since marketing was born as a functional discipline in the early 20th century, the pace of discontinuous innovation is accelerating. And when I speak about discontinuous innovation, I’m not referring to the introduction of internet-based research tools, but fundamental changes in the way we are learning to understand marketing’s impact on the consumer/customer.

For example, traditional survey-based research techniques are proving to be far less predictive of human behavior than biometric scanning methods that monitor brainwaves, heart rate, respiration changes, or skin temperature. Today, advertisers can expose their creative messages to prospective customers and read the immediate response in involuntary biometric systems which overcome the social and cultural biases that tend to filter logical survey responses.

Another example… many companies are dis-adopting regression-based marketing mix models in favor of artificial intelligence techniques and agent-based models which focus on replicating thought processes in the full context of competitive dynamics, instead of just looking for statistical relationships. True, these higher-intelligence techniques have been around for 30+ years without yet finding significant penetration in marketing applications, but the PACE of adoption is now increasing noticeably and innovation is driving relevance and applicability more quickly than ever.

All of this has me thinking these days that market research may be operating toward the end of its current lifecycle. The tools, methods, and techniques we use today will not persist more than another 10 to 20 years. We will learn to move past recording and clustering rational thought, and past our voyeuristic tendencies to predict future behavior based on past actions. And in the process, we will learn to reconcile what people say, think, and do with the powerfully innate drivers buried deep in our biological wiring.

Inevitably, we will see many instances of borderline unethical manipulation which will slow this adoption curve a bit, but the amount of money and time being invested in these new areas of learning is far too great to be stalled by commercial missteps along the way.

So when you drag your finger across your touch-screen interface a few years from now, the underlying systems will capture not just what and where you touched, but your heart rate and body temperature at the time (along with possibly the size of your pupils). This information will feed logic-driven systems which will immediately adapt the type of images, sounds, and smells to your innate preferences to stimulate your interest far beyond anything we’ve achieved as marketers so far.

And just think about the impact this will all have on which metrics we adopt to measure performance…

Thursday, April 15, 2010

Memo from the CFO: The Best Response

Two weeks ago I posted a disguised version of an actual email from the CFO of a Global 1000 company to the CMO of that company, congratulating the CMO on doing a good job of improving marketing efficiency, but then raising questions about the effectiveness of that marketing. I invited readers of Metrics Insider to comment on how they would respond if they were the CMO.

Some of the responses were, not surprisingly, promotional messages for proprietary marketing measurement research or analytical methodologies being positioned as silver bullets to solve the problem. While I’m sure some of these solutions would be helpful to some degree, it’s naïve for ANY executive to pin their hopes of understanding marketing payback on any single tool, method, or metric. Many have tried this approach, but very few have succeeded as the “tool in lieu of disciplined evolution” tends to answer the immediate questions, but loses luster as dynamics (both external and internal) evolve. Besides, CFOs tend to be immediately suspicious of any tool packaged in hyperbole like “all you really need is…”

A few responses were pretty hostile. They came in the form of marketers berating the CFO (aka the “bean-counting techno-wonk”) for asking such questions in a way that implied the CMO should have had a much better handle on marketing effectiveness. (For a more humorous approach along these lines, see what Chris Koch wrote.) In truth, there was no malice in the questions posed. Just a bit of naiveté on the part of the CFO with respect to the subtleties of marketing. Sometimes, that lack of understanding manifests itself in poorly chosen words. But rarely does that mean the issuer of the words is “anti-marketing”. They are just playing one of the many roles they are paid to play… the role of risk management. If the CFO is to adequately assess the corporate risks (including the risk of wasteful spending), they must have the confidence to challenge the evidence put forward in support of EVERY material investment. If you ask the head of IT, you’ll likely find that they too have felt the heat of the CFO’s microscope from time to time.

So if your first reaction was anger, get past it. Don’t let your own insecurities about marketing measurement taint your assessment of logical questions that inquisitive but uninformed executives may ask.

I think, all things being equal, the best response would be:

To: Amy Ivers – CFO

From: Susan James – CMO

RE: Congratulations on your recent recognition for marketing efficiency

Amy –

Thanks for your note on measuring marketing effectiveness. You raise many good points that I too have been thinking about for some time. There are a number of ways we can approach answering these questions, but I’d need your help since some would inevitably require us to get comfortable with partial data sets, while others may necessitate a temporary step backwards in efficiency to enable some further testing. Together, we might be able to come up with an approach that John and the others on the executive committee find credible. But it might be a bit more involved than emails can adequately address.

I share your passion for better insight into marketing effectiveness. If you’d like to suggest a few possible dates/times, I’d enjoy getting together to bring you up to speed on what we’ve been able to do so far, where our current knowledge gaps are, and what we’re doing to try to close those gaps. I’d appreciate your critical assessment of what we’re doing, and any suggestions you may have for making us better.

Thanks for your input.


In a nutshell, the best strategy in this type of situation is:
  1. Disarm and defuse. Take the emotion out of it, even if the history of frustration runs deep.
  2. Focus on defining the questions to be answered. Don’t jump into evidence-presentment mode until you have agreed on what the reasonable questions are; otherwise you’ll be shooting at a moving target.
  3. Prioritize the questions. Don’t assume they’re all equally important, or you’ll fracture your answering resources into ineffectively small parts.
  4. Decompose the questions into small pieces. Define the sub-components and assess what the company knows and what it doesn’t know with respect to each of the small pieces. Trying to boil the ocean is another sure way to accomplish nothing.
  5. Admit your knowledge limits. Clearly and conservatively label your assertions as facts, observations, or opinions derived from experience.
  6. Have a continuous improvement plan. Show your plan to improve the company’s marketing effectiveness in stages and manage the expectation that it might take time to tackle all the pieces of the roadmap absent a significant boost in resources.

I realize that many of you are caught in situations where the CFO’s questions may in fact be emanating from some apparent malice. In those cases, use honest questions to understand how much they actually know (as distinct from what they think they know). Their path to self-realization is only as fast as your skillful approach to engaging them to be part of the solution instead of just the identifier of the problem.

Using this approach, even the thorniest marketing/finance relationships can be improved by at least 50% (and I have the statistics to back it up).

Thanks for your comments.

Tuesday, March 30, 2010

Memo from CFO: Ad Metrics Not Good Enough

Following is a sanitized version of an actual email from a CFO to a CMO in a global 1000 company…

TO: Susan James – Chief Marketing Officer

FROM: Amy Ivers – Chief Financial Officer

RE: Congratulations on your recent recognition for marketing efficiency

Susan –

Congratulations on being ranked in the top ten of Media Week’s most recent list of “most efficient media buying organizations.” It is a reflection of your ongoing commitment to getting the most out of every expense dollar, and well-earned recognition.

But I can’t help but wonder, what are we getting for all that efficiency?

Sure, we seem to be purchasing GRPs and click-thrus at a lower cost than most other companies, but what value is a GRP to us? How do we know that GRPs have any value at all for us, separate from what others are willing to pay for them? How much more/less would we sell if we purchased several hundred more/less GRPs?

And why can’t we connect GRPs to click-thrus? Don’t get me wrong, I love the direct relationship we can see of how click-thrus translate into sales dollars. And I understand that when we advertise broadly, our click-thrus increase. But what exactly is the relationship between these? Would our click-thru rate double if we purchased twice as much advertising offline?

Also, I’m confused about online advertising and all the money we spend on both “online display” and “paid search”. I realize that we are generally able to get exposure for less by spending online versus offline, but I really don’t understand how much more and what value we get for that piece either.

In short, I think we need to look beyond these efficiency metrics and find a way to compare all these options on the basis of effectiveness. We need a way to reasonably relate our expenses to the actual impact they have on the business, not just on the reach and frequency we create amongst prospective customers. Until we can do this, I’m not comfortable supporting further purchases of advertising exposure either online or offline.

It seems to me that, if we put some of our best minds on the challenge, we could create a series of test markets using different levels of advertising exposure (including none) in different markets which might actually give us some better sense of the payback on our marketing expenditures. Certainly I understand that just measuring the short-term impact may be a bit short-sighted, but it seems to me that we should be able (at the very least) to determine where we get NO lift in sales in the short term, and safely conclude that we are unlikely to get the payback we seek in the longer term either.

Clearly I’m not an expert on this topic. But my experience tells me that we are not approaching our marketing programs with enough emphasis on learning how to increase the payback, and are at best just getting better at spending less to achieve the same results. While this benefit is helpful, it isn’t enough to propel us to our growth goals and, I believe, presents an increasing risk to our continued profitability over time as markets get more competitive.

I’d be delighted to spend some time discussing this with you further, but we need a new way of looking at this problem to find solutions. It’s time we stop spending money without a clear idea of what result we’re getting. We owe it to each other as shareholders to make the best use of our available resources.

I’ll look forward to your reply.

Thank you.

So how would you respond? I’ll post the most creative/effective responses in two weeks.

Wednesday, March 17, 2010

What 300 Years of Science Teaches Us About WOM Metrics

In the early 18th century, scientists were fascinated with questions about the age of the earth. Geometry and experimentation had already provided clues to the planet’s size and mass. But no one had yet figured out how old it was.

A few enterprising geologists began experimenting with ocean salinity. They measured the level of salt in the oceans as a benchmark, and then measured it every few months thereafter, believing that it might then be possible to work backwards to figure out how long it took to get to the present salinity level. Unfortunately, what they found was that the ocean salinity level fluctuates. So that approach didn’t work.

In the 19th century, another group of geologists working the problem hypothesized that if the earth had once in fact been a ball of molten lava, it must have cooled to its current temperature over time. So they designed experiments to heat various spheres to proportionate temperatures and then measured the rate of cooling. From this, they imagined, they could tell how long the earth had taken to cool to its present temperature. Again, an interesting approach, but it led to estimates in the range of 75 million years. Skeptics argued that a quick look at the countryside around them was evidence enough that those estimates couldn’t possibly be correct. But the theory persisted for nearly 100 years!

Then in the early part of the 20th century, astronomers devised a new approach to estimating the age of the earth through spectroscopy – they studied the speed with which the stars were moving away from earth (by measuring shifts in the light spectrum) and found a fairly uniform rate of recession. This allowed them to estimate that the earth was somewhere between 700 million and 1.2 billion years old. This seemed more plausible.

Not until 1956, shortly after the discovery of atomic half-lives, did physicists actually come up with the answer that we have today. By studying various metals found in nature, they could measure the amount of lead that had decayed from uranium, and then calculate backwards how long it had taken to reach its present level. They estimated, therefore, that the earth was 4 to 5 billion years old.
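The arithmetic behind that backward calculation is simple in principle. Here is a toy sketch of the idea (my own illustration, not the physicists’ actual isochron method; it assumes, unrealistically, that no lead was present at time zero):

```python
import math

U238_HALF_LIFE_YEARS = 4.468e9  # half-life of uranium-238

def age_from_ratio(daughter_to_parent, half_life=U238_HALF_LIFE_YEARS):
    """Years of decay implied by the ratio of daughter atoms (lead)
    to remaining parent atoms (uranium), assuming zero lead at the start."""
    return (half_life / math.log(2)) * math.log(1 + daughter_to_parent)

# Equal parts lead and uranium means exactly one half-life has elapsed:
print(f"{age_from_ratio(1.0):.3g} years")  # 4.47e+09
```

Run the ratio up to 3 parts lead per part uranium and the implied age doubles to two half-lives, which is the basic logic that put the earth’s age in the billions rather than the millions.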

Finally, in 1959, geologists discovered the Canyon Diablo meteorite and the physicists realized that the earth must be older than the meteorite that hit it (seems logical). So they took radiological readings from the meteorite and dated it at 4.53 – 4.58 billion years old.

Thus we presently believe our planet’s age is somewhere in this range. It took the collective learnings of geologists, astronomers, and physicists (and a few chemists along the way) and over 250 years to crack the code. Thousands of man-years of experimentation tested some smart and some not-so-smart theories, but we arrived at an answer that seems like a sound estimate based on all available data.

Why torture you with the science lecture? Because there are so many parallels to where we are today with marketing measurement. We’ve only really been studying it for about 50 years now, and only intensely so for the past 30 years. We have many theories of how it works, and a great many people collecting evidence to test those theories. Researchers, statisticians, ethnographers, and academics of all types are developing and testing theories.

Still, at best, we are somewhere between cooling spheres and spectroscopy in our understanding of things. We’re making best guesses based on our available science, and working hard to close the gaps and not get blinded by the easy answers.

I was reminded of this recently when I reviewed some of the excellent research done by Keller Fay Group in their TalkTrack® research which interviews thousands of people each week to find out what they’re talking about, and how that word-of-mouth (WOM) impacts brands. They have pretty clearly shown that only about 10% of total WOM activity occurs online. Further, they have established that in MOST categories (not all, but most), the online chatter is NOT representative of what is happening offline, at kitchen tables and office water coolers.

Yet many of the “marketing scientists” are still confusing measurability and large data sets of online chatter for accurate information. It’s a conclusion of convenience for many marketers. And one that is likely to be misleading and potentially career-threatening.

History is full of examples of how scientists were seduced by lots of data and wound up wandering down the wrong path for decades. Let’s be cautious that we’re not just playing with cooling spheres here. Scientific progress has always been built on triangulation of multiple methods. And while accelerating discoveries happen all the time through hard work, silver bullets are best left to the dreamers.

For the rest of us, it’s back to the grindstone, testing our best hypotheses every day.

Thursday, March 04, 2010

Prescription for Premature Delegation

When responsibility for selecting critical marketing metrics gets delegated by the CMO to one of his or her direct reports (or even an indirect report once- or twice-removed), it sets off a series of unfortunate events reminiscent of Lemony Snicket in the boardroom.

First of all, the fundamental orientation for the process starts off on an "inside/out" track. Middle managers tend (emphasize tend) to view the role of marketing with a bias favoring their personal domain of expertise or responsibility. It's just natural. Sure, you can counterbalance by naming a team of managers who will supposedly neutralize each other's biases, but the result is often a recommendation derived primarily through compromise amongst peers whose first consideration is often a need to maintain good working relationships. Worse yet, it may exacerbate the extent to which the measurement challenge is viewed as an internal marketing project, and not a cross-organizational one.

Second, delegating elevates the chances that the proposed metrics will be heavily weighted towards things that can more likely be accomplished (and measured) within the autonomy scope of the “delegatee”. Intermediary metrics like awareness or leads generated are accorded greater weight because of the degree of control the recommender perceives they (or the marketing department) have over the outcome. The danger here is of course that these may be the very same "marketing-babble" concepts that frustrate the other members of the executive committee today and undermine the perception that marketing really is adding value.

Third, when marketing measurement is delegated, reality is often a casualty. The more people who review the metrics before they are presented to the CMO, the greater the likelihood they will arrive "polished" in some more-or-less altruistic manner to slightly favor all of the good things that are going on, even if the underlying message is a disturbing one. Again, human nature.

The right role for the CMO in the process is to champion the need for an insightful, objective measurement framework, and then to engage their executive committee peers in framing and reviewing the evolution of it. Measurement of marketing begins with an understanding of the specific role of marketing within the organization. That's a big task for most CMOs to clarify, never mind hard-working folks who might not have the benefit of the broader perspective. And only someone with that vision can ruthlessly screen the proposed metrics to ensure they are focused on the key questions facing the business and not just reflecting the present perspectives or operating capabilities.

Finally, the CMO needs to be the lead agent of change, visibly and consistently reinforcing the need for rapid iteration towards the most insightful measures of effectiveness and efficiency, and promoting continuous improvement. In other words, they need to take a personal stake in the measurement framework and tie themselves visibly to it so others will more willingly accept the challenge.

There are some very competent, productive people working for the CMO who would love to take this kind of a project on and uncover all the areas for improvement. People who can do a terrific job of building insightful, objective measurement capabilities. But the CMO who delegates too much responsibility for directing measurement risks undermining both the insight and organizational value of the outcome -- not to mention the credibility of the approach in the eyes of the key stakeholders.