Tuesday, December 22, 2009
A) Campaign response.
B) Customer satisfaction.
C) Brand value.
D) Media mix efficiency.
E) All of the above.
The fact is that there are so many things to measure that more and more marketers are getting wrapped around the axle of measurement, wasting time, energy, and money chasing insight into the wrong things. Occasionally this is the result of prioritizing metrics based on what is easy to measure, in a well-intentioned but misguided attempt to just “start somewhere”. Sometimes it comes from an ambitious attempt to apply rocket-science mathematics to questionable data in search of definitive answers where none exist. But most often it is the challenge of even identifying what the most important metrics are. So here’s a way to isolate the things that are really critical, and thereby the most critical metrics.
Let’s say your company has identified a set of five-year goals including targets for revenue, gross profit margin, new channel development, customer retention, and employee productivity. The logical first step is to make sure the goals are articulated in a form that facilitates measurement. For example, “opening new channels” isn’t a goal. It’s a wish. “Obtaining 30% market share in the wholesale distributor channel within five years” is a clear, measurable objective.
From those objective statements, you can quantitatively measure the size of the gap between where you are today and where you need to be in year X (the exercise of quantifying the objectives will see to that). But just measuring your progress on those specific measures might only serve to leave you well informed on your trip to nowheresville. To ensure success, you need to break each objective down into its component steps or stages. Working backwards, for example, achieving a 30% share goal in a new channel by year 5 might require that we have at least a 10% share by year 4. Getting to 10% might require that we have contracts signed with key distributors by year 3, which would mean having identified the right distributors and begun building relationships by year 2. And of course you would need all your market research, pricing, packaging, and supply chain plans completed by the end of year 1 so you could discuss the market potential intelligently with your prospective distributors.
When you reverse-engineer the success trajectory on each of your goals, you will find the critical building block components. These are the critical metrics. Monitor your progress towards each of these sub-goals and you have a much greater likelihood of hitting your longer-range objectives.
Kaplan and Norton, the pair who brought you the Balanced Scorecard and Strategy Mapping, have a simple tool they call Success Mapping to help diagram this process of selecting key measures. Each goal is broken down into critical sub-goals. Each sub-goal has metrics that test your on-track performance. A sample diagram follows.
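To make the structure concrete, here is one way such a success map might be represented, using the channel-share example from above (the milestones and metrics are illustrative, not taken from Kaplan and Norton’s materials):

```python
# Hypothetical success map: one five-year goal decomposed, working
# backwards, into yearly sub-goals, each with a metric to monitor.
success_map = {
    "goal": "30% share of wholesale distributor channel by year 5",
    "sub_goals": [
        {"year": 1, "milestone": "Research, pricing, packaging, supply chain plans done",
         "metric": "% of launch-readiness checklist complete"},
        {"year": 2, "milestone": "Right distributors identified, relationships started",
         "metric": "# of qualified distributors in active discussions"},
        {"year": 3, "milestone": "Contracts signed with key distributors",
         "metric": "# of signed distributor contracts"},
        {"year": 4, "milestone": "10% channel share", "metric": "channel share %"},
        {"year": 5, "milestone": "30% channel share", "metric": "channel share %"},
    ],
}

# The critical metrics fall out of the map: one per sub-goal.
critical_metrics = [s["metric"] for s in success_map["sub_goals"]]
```

However you diagram it, the point is the same: the metrics attached to the intermediate sub-goals, not just the year-5 headline number, are what you monitor.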
By focusing on your sub-goals, you can direct all measurement efforts to those things that really matter, and invest in measurement systems (read: people and processes, not just software) in a way that’s linked to your overall business plan, not as an afterthought.
Pat LaPointe is managing partner at MarketingNPV – objective advisors on marketing measurement and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.
Tuesday, December 08, 2009
Working with this definition, one might conclude that a “good” marketing business case is one that increases the quality of decision making. Yet many of us in marketing have come to believe that a good business case is one that predicts a significantly positive ROI, IRR, and/or NPV for a given investment. Strangely, we tend to water-down any assumptions that actually seem to make the case “too good,” lest someone in finance really begin questioning our assumptions. Have you ever found yourself…
- Using aggressive, moderate, and conservative labels on business case scenarios to show how even the most conservative view provides a strong potential return, and anything beyond that is gravy?
- Identifying the always-low break-even point at which expenses are recaptured fully, and showing how this point occurs even below the conservative outcome scenario?
- Taking a “haircut” in assumptions to show how, “even if you cut the number in half, the result is still positive.”
Every time you use one of these approaches in an effort to build credibility with finance or other operating executives, you paradoxically wind up undermining it instead. These tactics all communicate subtle messages of inherent bias and manipulative persuasion which, intended or not, are noticeable to non-marketing reviewers – even if only at an instinctive rather than a conscious level.
In my experience, business cases get rejected most often for one of the following reasons:
- Bias – senior management perceives that marketing is trying to “sell” something rather than truly understand the risk/reward of the proposed spending recommendations.
- Jumping to the Numbers – showing final forecasts which contradict executive intuition before they have had a chance to reconsider the validity of those instincts.
- Credibility of Assumptions – forecasts seem to ignore the effect of key variables or predict unprecedented outcomes.
Successful business-case developers recognize that there is more at stake than just getting funding approved. In reality, there are several objectives which must all be achieved with every business case:
- Protecting personal credibility. Any one program or initiative may be killed for many possible reasons. But you will still need to come to work tomorrow and be effective with your executives, peers, and team members. Preserving (and strengthening) your personal credibility is therefore the paramount objective.
- Enhancing the role of marketing. If you have personal credibility, you will want to use it to take smart risks to help the company achieve its objectives, and to influence matters relating to strategy, products, markets, etc. In the process, you need to be thinking about the role of the marketing function; how it can best serve the firm; and how you need to evolve it from what it is today.
- Bringing attractive options to the CEO – the kind that forces him/her to make hard decisions choosing between financially appealing alternatives.
There are always two dimensions to business case quality – financial attractiveness, and credibility of assumptions. In the end, it takes more than just financial attractiveness for a successful business case. It takes:
- Thoughtfulness: demonstrating keen understanding of the role marketing plays in driving business outcomes and reflecting the input of the most critical stakeholders throughout the organization.
- Comprehensiveness: including all credible impacts of spending recommendations, and calculating benefits and costs at an appropriate level of granularity.
- Transparency: Clearly labeling all assumptions as such and presenting them in a way that encourages healthy discussion and challenge.
There are many ways to build a successful business case. But the most important learning is to understand the context in which your proposal will be evaluated BEFORE you put the numbers on the table.
Thursday, November 12, 2009
In this, the 100th year of the ANA, there are still lots of questions surrounding which 50% of advertising is “wasted”. I find it astounding that, 100 years later, we’re still having this debate.
Maybe it’s because the very nature of advertising defies certainty.
Or maybe the definition of “wasted” is too broad.
Or maybe the reality is that the actual waste factor has been reduced to significantly less than 50%, but no one famous ever said “15% of my advertising is wasted; I just don’t know which 15%.” And it wouldn’t make for a provocative PowerPoint slide even if they did.
It’s difficult to ignore the many signs of great progress we’ve made as an industry towards better understanding the financial payback of marketing and advertising. For example:
- Research techniques have improved and the frequency of application has increased to provide better perspective on how actions affect brands and demand.
- We’ve not only embraced analytical models in many categories, but have moved to 2nd and even 3rd generation tools that provide great insight.
- We’ve adopted multivariate testing and experimental design to test and iterate towards effective communication solutions.
- We’re learning to link brand investments to cash flow and asset value creation, so CFOs and CEOs can adopt more realistic expectations for payback timeframes.
All of this is very encouraging. Most of the presenters at this year’s conference included in their remarks evidence that they have been systematically improving the return they generate on their marketing spend by use of these and other techniques. So where is the remaining gap (if indeed one exists)?
First off, it seems that we’re often still applying the techniques in more of an ad-hoc than an integrated manner. In other words, we appear to be analyzing this and researching that, but not actually connecting this to that in any structured way.
Second, while some of the leading companies with resources to invest in measurement are leading the charge, the majority of firms are under-resourced (not just by lack of funds, but people and management time too) to realistically push themselves further down the insight curve. In other words, the tools and techniques have been proven, but still require a substantial effort to implement and adopt.
Third, not everyone agrees with Eric Schmidt’s proclamation that “everything is measurable”. Some reject the basic premise, while others dismiss its applicability to their own very non-Google-like environments.
So what will it take to put John Wanamaker out of our misery before the 200th anniversary of the ANA?
- Training – exposing more marketing managers to more measurement techniques so they can apply their creative skills to the measurement challenge with greater confidence.
- A community-wide effort to push down the cost of more advanced measurement techniques, thereby putting them within reach of more marketing departments.
- An emphasis on “integrated measurement”. We’ve finally embraced the concept of “integrated marketing”. Now we have to apply the same philosophy to measurement. We need to do a better job of defining the questions we’re trying to answer up-front, and then architecting our measurement tools to answer the questions, instead of buying the tools and accepting whatever answers they offer while pleading poverty with respect to the remaining unanswered ones.
- We should eat a bit of our own dog food and develop external benchmarks of progress (much like we do with consumer research today). Let’s stop asking CMOs how they think their marketing teams are doing at measuring and improving payback, and work with members of the finance and academic communities to define a more objective yardstick with which we can measure real progress.
As we embark on the next 100 years, we have the wisdom, technology, and many of the tools to finally put John Wanamaker to rest. With a little concerted effort, we can close the remaining gaps to within a practical tolerance and dramatically boost marketing’s credibility in the process.
From Time’s a Wasting – for more information visit anamagazine.net.
- Google backed away from managing radio and print advertising networks due to lack of “closed loop feedback”. In other words, they couldn’t tell an advertiser IF the consumer actually saw the ad or if they acted afterward. Efforts to embed unique commercial identifiers into radio ads exist, but are still immature. And in print, it’s still not possible to tell who (specifically) is seeing which ads – at least not until someone places sensors between every two pages of my morning newspaper.
- Despite this limitation, Schmidt feels that Google will soon crack the code of massive multi-variate modeling of both online and offline marketing mix influences by incorporating “management judgment” into the models where data is lacking, thereby enabling advertisers to parse out the relative contribution of every element of the marketing mix to optimize both the spend level and allocation – even taking into account countless competitive and macro-environmental variables.
- That “everything is measurable” and Google has the mathematicians who can solve even the most thorny marketing measurement challenges.
- That the winning marketers will be those who can rapidly iterate and learn quickly to reallocate resources and attention to what is working at a hyper-local level, taking both personalization and geographic location into account.
So when I caught up with him in the hallway afterward, I asked two questions:
- How credible are these uber-models likely to be if they fail to account for “non-marketing” variables like operational changes affecting customer experience and/or the impact of ex-category activities on customers within a category (e.g., how purchase activity in one category may affect purchase interest in another)?
- At what point do these models become so complex that they exceed the ability of most humans to understand them, leading to skepticism and doubt fueled by a deep psychological need for self-preservation?
- “If you can track it, we can incorporate it into the model and determine its relative importance under a variety of circumstances. If you can’t, we can proxy for it with managerial judgment.”
- “That is the big challenge, isn’t it?”
- Google will likely develop a “universal platform” for market mix modeling, which in many respects will be more robust than most of the other tools on the market – particularly in terms of seamless integration of online and offline elements, and web-enabled simulation tools. While it may lack some of the subtle flexibility of a custom-designed model, it will likely be “close enough” in overall accuracy given that it could be a fraction of the cost of custom, if not free. And it will likely evolve faster to incorporate emerging dynamics and variables as their scale will enable them to spot and include such things faster than any other analytics shop.
- If they have a vulnerability, it may be under-estimating the human variables of the underlying questions (e.g., how much should we spend and where/how should we spend it?) and of the potential solution.
Reflecting over a glass of Cabernet several hours later, I realized that this is generally a good thing for the marketing discipline as Google will once again push us all to accelerate our adoption of mathematical pattern recognition as inputs into managerial decisions. Besides, the new human dynamics this acceleration creates will also spur new business opportunities. So everyone wins.
Friday, October 30, 2009
- What are the specific goals for our marketing spending and how should we expect to connect that spending to incremental revenue and/or margins?
- What would be the short and long-term impacts on revenue and margins if we spent 20% more/less on marketing overall in the next 12 months?
- Compared to relevant benchmarks (historical, competitive, and marketplace), how effective are we at transforming marketing investments into profit growth?
- What are appropriate targets for improving our marketing leverage ($’s of profit per $ of marketing spend) in the next 1/3/5 year horizons, and what key initiatives are we counting on to get us there?
- What are the priority questions we need to answer with respect to informing our knowledge of the payback on marketing investments and what are we doing to close those knowledge gaps?
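The leverage ratio in the fourth question is plain division, which makes targets easy to frame. A sketch with entirely hypothetical figures:

```python
# Marketing leverage = dollars of incremental profit per dollar of
# marketing spend. All figures below are illustrative only.
def marketing_leverage(incremental_profit: float, marketing_spend: float) -> float:
    return incremental_profit / marketing_spend

current = marketing_leverage(incremental_profit=6_000_000,
                             marketing_spend=4_000_000)        # 1.5
year_3_target = marketing_leverage(incremental_profit=9_000_000,
                                   marketing_spend=4_500_000)  # 2.0

# The gap between current and target leverage (~33% here) is what the
# "key initiatives" in the question have to plausibly deliver.
improvement_needed = (year_3_target - current) / current
```

The arithmetic is trivial; the hard part, as the question implies, is naming the initiatives you are counting on to move the ratio.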
How you answer these five questions will get you promoted, fired, or worse: marginalized.
If you tend to answer in a dizzying array of highly conceptual (e.g., brand strength rankings compared to 100 other companies) and/or excruciatingly tactical (e.g., cost per conversion on website leads) “evidence”, stop. A preponderance of evidence doesn’t win in the court of business. Credible, structured attempts to answer the underlying financial questions do.
The five questions are all answerable, even in the most data-challenged environments. Provided, of course, that you build acceptance of the need for some substantial assumptions to be made in deriving the answers – much the same way as you would in building a business case for a new plant, or deploying a new IT infrastructure project. The key is to make the assumptions explicit and clearly define the boundaries of facts, anecdotes, opinions, and guesses. Think in terms of:
- Clarifying links between the company’s strategic plan and the role marketing plays in realizing it;
- Connecting every tactical initiative back to one or more of the strategic thrusts in a way that makes every expenditure transparent in its intended outcome, thereby promoting accountability for results at even the most junior levels of management;
- Defining the right metrics to gauge success, diagnose progress, and better forecast outcomes;
- Developing a more methodical (not “robotic”) learning process in which experiments, research, and analytics are used to triangulate on the very types of elusive insights which create competitive advantage; and
- Establishing a culture of continuous improvement that seeks to achieve quantifiably higher goals year after year.
As you push boldly into 2010, remember that 2011 is just around the corner. You may not have been asked these hard questions this year, but who knows whom your CFO might talk to next year (me, perhaps?) who will ratchet up his/her expectations. You can prepare by using these five questions as a framework to benchmark where your marketing organization is starting from; as a guide to ensure that sufficient resources are being allocated to promote continuous learning and improvement; and as a means of monitoring the performance of your marketing organization.
Wednesday, September 30, 2009
To: John Buleader
From: Barbara Researcher
Subject: How to improve our Net Promoter Scores
In response to your question of last week, I have considered several options for how we can improve our recently flagging Net Promoter scores and thereby increase that portion of our year-end bonus linked to that specific metric.
- We could restrict our sample of surveyed respondents to only those who have recently purchased from us, and ignore both those who didn’t like us enough to buy from us and those who bought from us a while ago but may be having second thoughts due to our poor reliability and service.
- We could change our sampling approach to only solicit surveys from those who buy online since our website is so slick and efficient. This has the added benefit of reducing our research expenses so we can still afford those front-row football tickets.
- We could change the way we calculate Net Promoter to take the percentage of customers who score us as 7s through 10s and subtract those who score us as 1s or 2s, since we know that the 3s through 6s are really the “marginal” middle group, and thereby take some credit for producing partial satisfaction.
- We can offer customers a $10 bonus coupon to allow our sales associates to “help” them complete the survey before they leave the store, thus providing both convenience and value to our customers.
- We can reduce the frequency of surveying from monthly to annually so as to make it virtually impossible to link our marketing or sales actions back to increases or decreases in the scores. This will create so much confusion over interpretation and causality that bonuses will have long been paid by the time anyone actually agrees on what to do next.
- We can have our sales reps do the surveying themselves. This will allow us to capture notations about body language of the respondents too (side benefit: see football reference above).
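For contrast, the standard Net Promoter calculation the memo is bending works on a 0-to-10 scale: promoters are 9s and 10s, detractors are 0s through 6s, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch with made-up responses:

```python
def net_promoter_score(scores: list) -> float:
    """Standard NPS: % promoters (9-10) minus % detractors (0-6).
    Scores of 7-8 are passives and count only in the denominator."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Illustrative sample: 4 promoters, 3 passives, 3 detractors -> NPS of 10
sample = [10, 9, 9, 10, 8, 7, 7, 6, 5, 3]
print(net_promoter_score(sample))  # 10.0
```

Redefining the cut points (as the memo proposes) or reshaping the sample changes the number without changing a single customer’s actual opinion, which is exactly the point of the satire.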
Any or all of these strategies could essentially ensure success. Provided our market share doesn’t fall too fast, we’re unlikely to draw any undue attention.
Please let me know how you would like to proceed. We can also adjust any of our other metrics in similar ways.
Friday, September 18, 2009
I’m sitting in a presentation at a meeting of a group of mid-level marketers representing some of America’s biggest ad budgets. The speaker, who is representing a media source (monopoly, government owned), is telling us that spending more on marketing in a recession is a good way to improve ROI. His argument: most marketers pull back in a recession, so if you maintain or increase your spending, your message will get through to more prospective customers, more often, with less clutter. He’s citing some examples of where this was the case.
If you believe this, I have a few lots in Florida I’d like to discuss with you. Waterfront. Tons of wildlife.
I know a few people who are big fans of pizza. Given the choice, they would eat pizza for breakfast, lunch, and dinner – and probably do on occasion. Interestingly, these people tend to be thin, with very low body fat and excellent muscle tone. So, by extrapolation, I can conclude that eating pizza makes them thin. Let’s all go eat more pizza.
Anyone who has studied the question of increasing spend in a recession (and done so objectively) will tell you that the evidence supporting higher spending in recessions is weak at best. There are many anecdotes and some success stories, but not enough clear evidence to convince even a moderately smart CFO that the reward outweighs the risks.
The unfortunate fact is that the definitive study on this question has never been done. No one has ever surveyed a broad sampling of marketers in different industries and different competitive scenarios, and had some randomly increase spend while holding others in control at lower levels. There are no legitimate studies that tell us how X% of marketers that increased spend had successful outcomes. And even those that have attempted to do a meta-study of all the many narrowly-focused probes on the topic conclude that there are no general rules of thumb that hold up across industries and companies.
If you think I’m saying that spending more is NOT a good strategy, you’re missing the point. Spending more MIGHT be the perfect strategy. Or it might be the last bad choice of your career. Success during recessionary (or slow recovery) times has less to do with level of spending than it does three simple factors:
- How strong is your product/service value proposition relative to your competitors or the alternatives your prospect may have? If it’s VERY strong, you might gain ground by spending more. If not, you might waste money just trying to buy share of voice in support of a solution which isn’t all that compelling.
- How responsive are your prospects to marketing spend? If they are very likely to respond to marketing stimulus, maybe more spend is good. But if marketing is just one of many things that cause them to buy, you may find that it would take a disproportionate increase in spend to achieve any noticeable shift in outcomes.
- How strong is your company balance sheet? Chances are, if you spend more, your competitors will try to match you to prevent losing share. It would be naïve to think you could get away with anything that would steal share without seeing some sort of blunting response. If you have the cash to withstand an escalation of a competitive war on marketing spend, go for it. But check with your CFO before you propose a strategy that might create far more risk than the company can undertake at this time.
If you worry about the consequences of eating too much pizza, you’re now better equipped to challenge broad assertions about spending more. And you’re more likely to preserve your credibility for the really important issues in the future.
Friday, July 24, 2009
One of the most common questions I’m getting these days is “how should I measure the value of all the social marketing things we’re doing, like Twitter, LinkedIn, Facebook, etc.?”
My answer: WHY are you doing them in the first place? If you can’t answer that, you’re wasting your time and the company’s money.
Sounds simple, I know, but I’m stunned at how unclear many marketers are about their intentions/expectations/hypotheses for how social media initiatives might actually help their business. In short, if you can’t describe in two sentences or less (no semi-colons) WHAT you hope to gain through use of social media, then WHY are you doing it? Measurement isn’t the problem. If you don’t know where you’re going, any measurement approach will work.
Here’s a framework for thinking about social measurement:
- Fill in the blanks: “Adding or swapping-in social media initiatives will impact ____________ by __________ extent over _____________ timeframe. And when that happens, the added value for the business will be $_____________, which will give me an ROI of ______________.” This forms your hypotheses about what you might achieve, and why the rest of the business should care.
- Identify all the assumptions implicit in your hypotheses and “flex” each assumption up/down by 50% to 100% to see under which circumstances the initiative becomes unprofitable.
- Identify the most sensitive assumption variables - those that tend to dramatically change the hypothesized payback by the greatest degree based on small changes in the assumption. These are your key uncertainties.
- Enhance your understanding of the sensitive assumptions through small-scale experiments constructed across broad ranges of the sensitive variables. Plan your experiments in ways you can safely FAIL, but mostly in ways to help you understand clearly what it would take to SUCCEED – even if that turns out to be unprofitable upon further analysis. That way, you will at least know what won’t work, and change your hypotheses in #1 above accordingly.
- Repeat steps 1 through 4 until you have a model that seems to work.
- In the process, the drivers of program success will become very obvious. Those become your key metrics to monitor.
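Steps 2 and 3 amount to a one-at-a-time sensitivity analysis. A rough sketch, assuming a deliberately simple payback model with invented numbers (reach, conversion rate, value per conversion, and program cost):

```python
# Flex each assumption up and down while holding the others at baseline,
# and rank assumptions by how much the payback swings. All numbers invented.
baseline = {"reach": 200_000, "conversion_rate": 0.01,
            "value_per_conv": 40.0, "cost": 50_000}

def payback(a: dict) -> float:
    # Toy model: revenue from conversions minus program cost.
    return a["reach"] * a["conversion_rate"] * a["value_per_conv"] - a["cost"]

def sensitivity(base: dict, flex: float = 0.5) -> dict:
    swings = {}
    for key in base:
        lo, hi = dict(base), dict(base)
        lo[key] *= (1 - flex)
        hi[key] *= (1 + flex)
        swings[key] = abs(payback(hi) - payback(lo))
    return swings

# Largest swings flag the most sensitive assumptions -- the key
# uncertainties worth probing with small-scale experiments.
ranked = sorted(sensitivity(baseline).items(), key=lambda kv: -kv[1])
```

In this toy model the three revenue drivers swing payback equally and the fixed cost least; a real business case would have messier interactions, but the ranking exercise is the same.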
In short, measuring the payback on social media requires a sound initial business case that lays out all the assumptions and uncertainties, then methodically iterates through tests to find the model(s) that work best. Plan to fail in small scale, but most importantly plan to LEARN quickly.
Measure social media like you should any other marketing investment: how did it perform versus your expectations of how it should have? If those expectations are rooted in principles of profit-generation, your measurement will be relevant and insightful.
Tuesday, July 14, 2009
My response was: “Compared to what? Walking naked through Times Square?” I was being asked to evaluate a proposed strategy without any sense of what the alternatives were.
Sure, I can come up with a means of estimating and tracking the ROI on almost anything. But if that ROI comes to 142%, so what? Is there a plan that might get us to 1000% (without just cutting cost and manipulating the formula)?
As I thought back on the hundreds of planning meetings I’ve been in over the last 10 years, it occurred to me that we marketers are not so good at identifying alternative ways of achieving objectives and systematically weighing the options to ensure we’re selecting the paths that best meet the organization’s needs strategically, financially, and otherwise.
On a relative basis, we spend far too much of our time measuring the tactical/executional performance of the things we have decided to do, and far too little measuring the comparative value of things we might decide to do. Scenario planning; options analysis; decision frameworks. You get the idea.
The importance of this up-front effort isn’t just in getting to better strategies, but in building further credibility throughout the organization. Finance, sales, and operations all see marketing investments as inherently risky due to A) the size of the expenditures; and B) the uncertain nature of the returns as compared to many of the things those other functions tend to spend money on. Impressing them with our thorough exploration of the landscape of options goes a long way toward demonstrating that we’ve considered risk (albeit implicitly) in our recommendations, and have done all the necessary homework to arrive at a reasonable conclusion. That doesn’t mean producing a 50-page deck, but rather simply stating which alternatives were considered, what the decision framework was, and how the ultimate selection was made. (This also builds trust through transparency.)
From a measurement perspective, we can then consider the relative potential value of doing A versus B versus C, and in the process raise the level of confidence that we are spending the company’s money wisely. We can then turn our attention to measuring the quality of the execution of the chosen path with confidence that we’re not just randomly measuring the trees while wandering in the forest.
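The “A versus B versus C” comparison can start as a crude expected-value table; every option name and number below is hypothetical:

```python
# Hypothetical comparison of alternatives: upside weighted by a rough
# probability of success, relative to the cost of each path.
options = {
    "A: expand paid search": {"cost": 500_000, "upside": 1_500_000, "p_success": 0.7},
    "B: new channel launch": {"cost": 800_000, "upside": 4_000_000, "p_success": 0.3},
    "C: retention program":  {"cost": 300_000, "upside":   900_000, "p_success": 0.8},
}

def expected_roi(o: dict) -> float:
    expected_return = o["p_success"] * o["upside"]
    return (expected_return - o["cost"]) / o["cost"]

# Rank the alternatives before debating execution details.
ranked = sorted(options, key=lambda name: expected_roi(options[name]), reverse=True)
```

A table this crude won’t settle the decision by itself, but making the probabilities and upsides explicit is precisely what invites the healthy challenge from finance that builds credibility.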
I’m not sure how many businesses might get a high ROI on walking naked through Times Square, but imagining that option certainly helps fuel creativity and underscores the importance of measuring strategic relevance, not just tactical performance.
Got any good stories about wandering naked?
Thursday, July 09, 2009
1. I will lead this process and not get dragged behind it.
2. I recognize that many of our business fundamentals may have recently changed, so I commit to anticipating the key questions likely to define our strategy and, using research, analytics, and experiments, gather as much insight into them as I can in advance of making recommendations.
3. I will approach my budget proposal from the ground up, with every element having a business case that estimates the payback and makes my assumptions clear for all to see.
4. I will not be goaded into squabbling over petty issues by pin-headed, myopic, fraidy-pants types in other departments, regardless of how ignorant or personally offensive I find them to be.
5. The person who wrote number 4 above has just been sacked.
6. I will proactively seek input from others in finance, sales, and business units as I assemble my plan, to ensure I understand their questions and concerns and incorporate the appropriate adjustments.
7. I will clearly and specifically define what "success" looks like before I propose spending money, and plan to implement the necessary measurement process with all attendant pre/post, test/control, and with/without analytics required to isolate (within reason) the expected relative contribution of each element of my plan.
8. I will analyze the alternatives to my recommendations, so I am prepared to answer the inevitable CEO question: "Compared to what?"
9. I will be more conscious of my inherent biases relative to the power of marketing, and try not to let my passion get in the way of my judgment when constructing my plan.
10. If all else above fails, I promise to be at least 10% more foresighted and open-minded than I was last year, as measured by my boss, my peers in finance, and my administrative assistant. My spouse, however, will not be asked for an opinion.
How are you preparing for planning season? I'd like to hear what your resolutions are.
Thursday, May 07, 2009
· 4 out of 5 respondents feel that marketing is a “dead” profession.
· 60% reported having little if any respect for the quality of marketing programs today.
· Fully 75% of those responding would rather be poked with a sharp stick straight into the eye than be forced to work in a marketing department.
In total, the survey panel reported a mean response of 27% when asked, “on a scale of 0% to 100%, how many marketers suck?”
This has been a test of the emergency BS system. Had this been a real, scientifically-based survey, you would have been instructed where to find the nearest bridge to jump off.
Actually, it was a “real” “survey”. I found 5 teenagers in a local shopping mall loitering around the local casual restaurant chain and asked them a few questions. Seem valid?
Of course not. But this one was OBVIOUS. Every day we marketers are bamboozled by far more subtle “surveys” and “research projects” which purport to uncover significant insights into what CEOs, CFOs, CMOs, and consumers think, believe, and do. Their headlines are written to grab attention:
- 34% of marketers see budgets cut.
- 71% of consumers prefer leading brands when shopping for …
And my personal favorite:
- 38% of marketers report significant progress in measuring marketing ROI, up 4% from last year.
Who are these “marketers”? Are they representative of any specific group? Do they have anything in common except the word “marketing” on their business cards?
Inevitably such surveys blend convenience samples (e.g. those willing to respond) of people from the very biggest billion dollar plus marketers to the smallest $100k annual budgeteers. They mix those with advanced degrees and 20 years of experience in with those who were transferred into a field marketing job last week because they weren’t cutting it in sales. They commingle packaged goods marketers with those selling industrial coatings and others providing mobile dog grooming.
If you look closely, the questions are often constructed in somewhat leading ways, and the inferences drawn from the results conveniently ignore the statistical margins of error which frequently wash away any actual findings whatsoever. There is also a strong tendency to draw year-over-year conclusions when the only thing in common from one year to the next was the survey sponsor.
As marketers, we do ourselves a great disservice whenever we grab one of these survey nuggets and embed it into a PowerPoint presentation to “prove” something to management. If we’re not trustworthy when it comes to vetting the quality of the research we cite, how can we reasonably expect others to accept our judgment on subjective matters?
So the next time you’re tempted to grab some headlines from a “survey” – even one done by a reputable organization – stop for a minute and read the fine print. Check to see if the conclusions being drawn are reasonable given the sample, the questions, and the margins of error. When in doubt, throw it out.
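Reading the fine print often comes down to one quick check: the margin of error implied by the sample size. As a sketch (the headline figure and respondent count below are hypothetical), the standard formula for a proportion at 95% confidence shows how little a small sample can actually tell you:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% confidence margin of error for a proportion p seen in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical "38% of marketers" headline built on 150 respondents:
moe = margin_of_error(0.38, 150)
print(f"38% +/- {moe:.1%}")  # roughly +/- 7.8 points
# A year-over-year "gain" of 4 points sits well inside that band,
# so the change may be nothing but sampling noise.
```

If the sample is a blended convenience panel on top of that, even this generous calculation overstates what the survey can support.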
If we want marketing to be taken seriously as a discipline within the company, we can’t afford to let the “marketers” play on our need for convenience and simplicity when reporting “research” findings. Our credibility is at stake. And by the way, please feel free to comment and post your examples of recent “research” you’ve found curious.
Friday, May 01, 2009
I think I agree with this. If there’s any hesitation to offer full-throated support it’s just that it seems to me to be a bit of a penetrating glance into the obvious. OF COURSE marketing programs cannot succeed without reliably strong sales execution. But let’s not put too many eggs in that basket just yet.
The hard reality is that, in the short term, there is only so much sales enablement one can achieve. Magic sell sheets do not turn mediocre sales reps into revenue superheroes. Online demo tools don’t revolutionize categories or spark unprecedented demand. Those sorts of changes come about through:
1. Sound sales management process.
2. Comp structures aligned to incent the “right” behaviors.
3. Methodical, continuous training based on observed effective practices.
4. Regular “pruning” of the bottom 25% of performers (and a few nearer the top who neglect the means for the end).
Those in marketing measurement who are familiar with the concept of “elasticity” understand that there are limits to just how much improvement we can achieve with sales enablement in any 3, 6, or even possibly 12 month horizon. So pressed for better results NOW, sales enablement may be more of a feel-good response than an effective one.
Don’t mistake the message here. Sales enablement is always important. But unless the foundational elements above are being properly managed, marketers’ efforts to “enable” sales may produce some great meetings and a few positive anecdotes, yet fail to achieve improvement on any scale or sustainable basis. So allocating more resources to “sales enablement” may not provide the expected returns.
If you want to get a better handle on the “elasticity” of your sales force, try performing a deeper analysis of recent buyers versus non-buyers, being sure to sample prospect leads from both high-performing and average-performing sales reps. Get a clearer understanding of why rejecters are not buying (or stalling longer) and analyze the data until you can clearly determine the factors which make the higher-performers more effective. If you conclude that those factors are mostly innate skill, personality or motivation characteristics, then the elasticity of your sales force is likely very low. This suggests that sales management has work to do in the fundamentals arena before marketing can help much.
If however you observe that better needs discovery, stronger communications capabilities, or enhanced preparation explain most of the difference, then your sales elasticity is likely higher and you should focus more on sales enablement in the near term.
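One simple way to start that buyer-versus-non-buyer analysis, assuming you can pull lead and win counts by rep group, is to check whether the conversion gap between high performers and average performers is larger than sampling noise. The counts below are hypothetical, and the two-proportion z-test is just one reasonable choice of yardstick:

```python
import math

def conversion_gap(wins_a, leads_a, wins_b, leads_b):
    """Gap between two groups' conversion rates, with a two-proportion
    z-statistic to gauge whether the gap exceeds sampling noise."""
    p_a, p_b = wins_a / leads_a, wins_b / leads_b
    pooled = (wins_a + wins_b) / (leads_a + leads_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / leads_a + 1 / leads_b))
    return p_a - p_b, (p_a - p_b) / se

# Hypothetical: top reps close 90 of 300 leads, average reps 60 of 300.
gap, z = conversion_gap(90, 300, 60, 300)
print(f"gap = {gap:.1%}, z = {z:.2f}")  # |z| above ~2 suggests a real gap
```

A statistically real gap only tells you the difference exists; explaining it (innate skill versus trainable behaviors) is the qualitative work described above.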
When times are tough and horizons are shorter, we all want to help as much as we can. But let’s not mistake hope for judgment when reallocating resources from marketing programs to sales enablement.
Tuesday, March 24, 2009
Sophisticated analytics? Check.
Compelling business case? Check.
Closing that one big hole that could torpedo your career? Uhhhhhhh.......
Most new marketing initiatives fail to achieve anything close to their business-case potential. Why? Unilateral analysis, or looking at the world only through your own company's eyes, as if there was no competition.
It sounds stupid, I know, yet most of us perform our analysis of the expected payback on marketing investments without even imagining how competitors might respond and what that response would likely do to our forecast results. Obviously, if we do something that gets traction in the market, they will respond to prevent a loss of share in volume or margin. But how do you factor that into a business case?
Scenario planning helps. Always "flex" your business case under at least three possible scenarios: A) competitors don't react; B) competitors react, but not immediately; C) competitors react immediately. Then work with a group of informed people from your sales, marketing, and finance groups to assess the probability of each of the three possibilities, and weight your business case outcomes accordingly.
If you want to be even more thorough, try adding other dimensions of "magnitude" of competitive response (low/proportionate/high) and "effectiveness" of the response (low/parity/high) relative to your own efforts. You then evaluate eight to 12 possible scenarios and see more clearly the exact circumstances under which your proposed program or initiative has the best and worst probable paybacks. Then if you decide to proceed, you can set in place listening posts to get early warnings of your competitor's reactions and hopefully stay one step ahead.
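The probability-weighting step above can be sketched in a few lines. The scenario paybacks and probabilities here are hypothetical placeholders for whatever your cross-functional group agrees on:

```python
# Hypothetical paybacks (in $k) for the three base scenarios, with the
# probabilities your sales/marketing/finance group assigned to each.
scenarios = {
    "no competitive reaction":        {"payback": 500, "prob": 0.20},
    "delayed competitive reaction":   {"payback": 300, "prob": 0.50},
    "immediate competitive reaction": {"payback": 100, "prob": 0.30},
}

# Sanity check: the probabilities must account for all outcomes.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

expected = sum(s["payback"] * s["prob"] for s in scenarios.values())
print(f"probability-weighted payback: ${expected:.0f}k")
```

Adding the magnitude and effectiveness dimensions just means more rows in the table; the weighting arithmetic stays the same.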
In the meantime, your CFO will be highly impressed with your comprehensive business case acumen. Check.
Thursday, March 12, 2009
Great news for TV sales reps. Likewise for TV production companies. But a sucker's bet for bank marketing executives who would rip out that page of Ad Age and run into their CFO's office to defend a recommendation to spend more on TV.
First, this "study" didn't isolate the non-media marketing variables which may have affected the outcome. Little things like customer service quality, direct marketing spending programs, message effectiveness, word-of-mouth, etc. Bet they didn't count the number of toasters given away either.
Nor does it appear to have accounted for other characteristics of the banks themselves which may have driven performance higher. Price perhaps. Or interest rates charged/paid. Or branch location demographics. Or in-branch cross-sell incentives. Or other things which would never show up in syndicated spend data.
So what can a bank marketing executive take away from this study? Nothing. They just measured what was easy to measure and didn't answer ANY of the open questions surrounding the payback on marketing investments beyond a reasonable doubt. Worse, it is the apple and marketers are Eve. It beckons with faint promises to fulfill the desire to believe that it may offer "evidence" of the beneficial impact of marketing.
If you value your credibility, don't circulate stuff like this within your marketing organization, and don't EVER use it in discussions with a savvy financial executive. When you see a headline like this, just pass it along to your trustworthy, naturally-skeptical research professional and ask them to find the flaws.
Stuff like this isn't research. It's PR. One bite of this apple will cost you your reputation - perhaps permanently.
We'll keep working on raising the standards in the media on what passes for good content.
Wednesday, March 11, 2009
Is it possible that the DMA is losing sight of its role and mission? In this current environment where attracting paying registrants to conferences is so difficult, they seem to have erred on the side of flash and glitz rather than meaningful substance and advancement of the trade.
It's hard to deny that Ivanka will put more butts in seats than the most compelling direct response case study. And perhaps more butts in seats for Ivanka actually will lead to more exposure of members to substantive content. But it's a sad day when such a long-valued industry organization can't find enough compelling content that they resort to the equivalent of Fonzie jumping the shark to draw "spectators". Maybe next year they can get Criss Angel to come make their relevance re-appear.
Tuesday, March 10, 2009
If you’re thinking “let’s spend more now to gain share”…
Good luck. Headline-grabbing stories of marketing heroes who have taken this approach tend to emphasize the few who have succeeded and gloss over the vast majority who have simply squandered more by throwing money into an economic hurricane. The fact is that there’s not much empirical data to prove the merits of this strategy beyond a reasonable doubt. Many “studies” have been done, but none have derived their conclusions from projectable samples which account for the primary risk factors, nor have any led to any high-probability “formula” for succeeding with this strategy. The margin of error between success and failure tends to be very narrow. It’s a roll of the dice against pretty long odds.
If you’re thinking “we’ve got to keep up our spend to maintain our share of voice”…
Be careful. Matching competitive levels of spend (or making decisions on the basis of “share of voice”) is most often seen by CEOs and CFOs as foolish logic. How do you know the competitor isn’t making an irrational decision? What do you know about the effectiveness of your spending versus theirs? How much ground would you lose if they outspent you by a substantial amount? If you don’t have specific answers to these questions, relying on anecdotal evidence won’t help. It may get you the spend levels you’re requesting in the near term, but if it doesn’t work out, the memory of your recommendations will undermine your credibility for years to come.
When times get tough, buyers re-evaluate the value propositions of what they buy. They make tradeoffs on the basis of what is or isn’t “necessary” any more. Shouting louder (or in more places) is unlikely to break through newly-erected austerity walls.
To make a sound case for spending more, tune into what the CEO is looking for… leverage. They want to find places to squeeze more profitability out of the business. To help, focus your thinking around:
- the relative strength of your value proposition, channel power, and response efficiencies versus your competitors.
- your assumptions about customer profitability and prospect switchability as buyers cut back.
- your price elasticity to find out where the traditional patterns may collapse or where opportunities may emerge.
- the relevance, clarity, and distinctiveness of your message strategy, and your ability to defend it from copycat claims.
And make sure to check with finance to see if the company’s balance sheet is strong enough to handle higher levels of risk exposure during revenue-stressed periods. If it’s not, the whole question of spending more is moot.
If your comparative strengths seem to offer an opportunity, then increasing spend may just be a smart idea. But even so you have to anticipate that competitors aren’t just going to let you walk away with their customers or their revenues. And that may just leave you both with higher costs in times of lower sales. In technical parlance, this is known as a “career-limiting outcome”.
Friday, March 06, 2009
He was (is) strategically brilliant and a fountain of ideas. His insight into his customers was formidable. His managerial capabilities were strong. His relationships with sales management were excellent. And the CEO really liked him. In fact, the CEO really supported the launch of the new ad campaign a few months back that substantially increased the company's spend.
But now, several months past the campaign's initial wave, there is no credible evidence or consensus that it had any positive effect on sales, profits, customer value, or any other meaningful dimension. And the fact that unaided brand awareness was up X% was little comfort.
So this particular VP of Marketing was let go, and marketing is now reporting to the EVP of Sales.
I spoke to the CEO who told me "I relied on (the VP Marketing) to make smart decisions about where and how we spent our marketing resources. In the end, it wasn't the lack of positive results that bothered me about that campaign as much as it was our inability to really learn anything important about why it didn't work and what we should do next. So I felt I needed to place my bets somewhere I get better transparency and feedback... sales. It may be short-sighted, but we need to demonstrate learning and improvement every day, or we're just spinning our wheels."
In better times, the VP may have gotten another chance to iterate to the magic marketing formula. But in the current environment, you only get one chance. To learn, that is, and to build confidence and credibility even when your efforts fail to achieve the desired outcome. Which, in marketing, happens quite often.
Wednesday, February 25, 2009
First, the “best” clearly define what “doing more with less” really means. The most common metric appears to be “marketing contribution efficiency” – an increase in the ratio of net marketing contribution per marketing dollar spent. That seems appropriate when budgets are falling (recognizing the need to monitor it over time, as it can be manipulated in the near term).
Second, when they cut, they do it strategically. Face it, most of us didn’t take Budget Cutting 101 in B-school. After eliminating travel and consultants and other easy stuff, bad decisions creep in under mounting political pressure. More about this in last week’s post.
Third, they watch the risk factors. CFOs want to cut marketing spend to increase the likelihood of (aka decrease risks against) making short-term profit goals. Yet when marketers try to do more with less, risk exposure rises in ways never imagined – especially if it wasn’t clear which elements of the marketing mix were working before the cuts. It’s the “risk paradox”. If you want to make sure your “less” really has a chance of doing “more”, manage the new risks that have silently crept into the plans.
Fourth, they avoid the ostrich effect. Just because there’s enormous pressure on today, the best don’t ignore the fact that tomorrow is right around the corner in the form of the 2010 plan. And when looking ahead, the only thing certain is that historical norms are no longer a reasonable guide. So the best are anticipating the key questions for the 2010 plan, and working on getting some answers now. They’re committed to leading the process, not getting dragged behind it.
Finally, the best push their marketing business case competency further, faster. The marketing skeptics and cynics have more political clout now. Un-tested assumptions, like ostriches, will not fly. Better business case discipline is the new currency of credibility.
We all have basically the same tools at our disposal to do more with less. The “best” seem to be able to apply their imagination most effectively in the use of those tools. I’m the world’s biggest proponent of the importance of creative inspiration and instinct, but the lesson here I think is to start the conversation these days with “what do we mean by ‘effective’?”
Thursday, February 12, 2009
Yes, we all should have been smart enough to build sufficiently robust measurement capabilities BEFORE the dramatic assault on our budgets began. Yes, we should have put some water in that bucket BEFORE the fire consumed so much of the house that marketing built.
But we didn’t. So what do we do now that we’re caught in the downward cutting spiral? Where do we turn once all the “fat” has long since been excised and all that’s left is muscle and bone?
First, get your head out of the emotional sand. You’ve lost the battle over the power of marketing to drive the business in the near term. But don’t let your fog of disappointment cost you the war. Suck it up and look ahead. And don’t take it so personally.
Second, define the objectives for making smart cuts.
1. Achieve the target reductions the CEO is asking for (most people stop right here).
2. Clarify the mid- to long-term strategy for competing successfully.
3. Conduct a thorough and unbiased analysis of the options.
4. Provide a comprehensive assessment of the near- and long-term implications of the cutting alternatives.
5. Preserve your credibility. Live to fight again another day.
If you’re not thinking about all 5, you’re likely suffering a very slow death by 1000 cuts yourself.
Third, frame your cutting analysis on the basis of strategic dimensions of competitiveness, NOT on the basis of what’s easiest to cut (e.g. travel and outside contractors), and for heaven’s sake do NOT cut proportionately across the board (which strengthens the hidden weaknesses in your plan while weakening the strengths). Think about the relative value/importance of customer segments; product groups; channels; or even geographic regions. Consider the marginal returns of a dollar spent in each one. Cut ruthlessly from the bottom of the importance rankings.
Fourth, engage people in finance, sales, or SBUs in your thought process. You have nothing to gain by being an island now.
Fifth, get comfortable with making educated guesses on expected impacts. You’re beyond the point where data-driven analysis is likely to help. Think about using monte carlo simulation and other probabilistic assessment methods to make intelligent guesses now (and loop back to “fourth” above).
Finally, present your findings with passion, but not bias. The time for “I believe…” is past. The mantra of the moment is “having run many options by the good people in finance and sales, we all feel that the smartest course of action is…”
And by the way, NOW is exactly the time to begin building that measurement capability you really wish you had over the past few months. If you need more help, start here.
Friday, February 06, 2009
You and I bought stocks and mutual funds (and you might have bought hedge funds). We expected above-average returns from those investment managers.
The investment managers made money by selling more shares in their funds. To do that, they needed to show higher-than-average returns. They had only a secondary interest in the long-term health of the companies they were buying (despite statements to the contrary in their prospectuses). They really just needed to show strong returns NOW to compete with other funds. That made their focus short-term even if they wouldn’t admit it.
Since CEOs need demand for their company stock to keep the price high (and keep the board happy), they obliged these fund investors by pushing to meet short-term earnings growth to increase the rationale for a higher P/E multiple. As a result, their decision process became somewhat perverted towards hitting every quarterly target they promised to Wall St. and the fund managers.
This perversion drove managers working for the CEOs onto a slippery slope of buying and selling things that had substantially higher risk profiles than they were used to, and many hidden risks that have only recently come to light. Altruistically in most cases, but not all.
The CEO was OK with this as A) it was driving earnings growth; and B) they were “trusting” their expert managers and consultants (who were also paid handsomely for making recommendations to participate in such behaviors).
The fund managers were OK keeping a blind eye to this, as long as the returns for their fund were above average.
You and I were happy as long as our investment portfolios were rising in value.
In short, we all got too focused on the WRONG metric of short-term growth in stock prices. It’s the same thing that happened in the dot-bomb era, only with a different rationale. Only this time, we managed to ensnare millions of homeowners in the process, destabilizing their confidence in spending. This, in turn, destabilized the climate for corporate investments, and increased layoffs. Thus the vicious cycle we’re in now.
I raise this as an example of how seriously wrong things can go if we’re not focused on the right metrics.
The answer isn’t higher levels of government oversight and regulation. It’s higher levels of transparency in companies reporting what they’re doing to hit earnings targets, combined with a closer monitoring of their productivity in generating organic growth. And it’s paying more attention to the leading indicator metrics of consumer behavior – security, liquidity, and confidence.
Just like many of our businesses, it often takes a knock upside the head for us to realize that we slowly lost focus on the right metrics.
Thursday, January 29, 2009
Some digital digerati (like Guy Kawasaki) suggest that in today’s world of blogging and tweeting, mass reach is the name of the game. Guy’s argument is that the internet and social media have eliminated or substantially reduced any semblance of information dissemination hierarchy. As such, if you extend your reach as far as possible through as many network nodes as possible, you will reach more prospective customers and thereby optimize your results. In this view, focusing on reaching “influentials” who might effectively distribute your message to an audience of more likely buyers is a waste of time. Just blog away and let anyone and everyone carry the message.
On the other side of the issue are people like Ed Keller of Keller Fay, who literally wrote the book on the influentials. Ed’s research into both online and offline WOM suggests that A) online WOM is still only a small fraction of offline WOM volume in most categories, and B) nothing is more effective at driving behavior than the objective recommendation of a known, credible source. This would suggest that pursuing sheer volume of reviews and opinions flying around the websphere may be a potentially distracting pursuit for the marketer seeking highly effective leverage of their limited resources.
I see some parallels in marketing history here to how first network television and then direct mail each boomed on the strength of message delivery efficiency, and then busted under the declining marginal returns as clutter and CPMs rose and response rates declined. Each respectively then fractured further (network TV to cable TV; direct mail into database marketing) in search of targeting efficiencies. The idea of targeting “influentials” was born out of a desire to focus the increasingly constrained marketing team resources on the points of greatest leverage in the market.
Granted, there are substantial differences in the evolution of web communications, not the least of which is the no/low cost of pushing out messages. But it strikes me that the real cost of communicating with a flat world is the time and energy it takes to respond to all the feedback you get, much of which is irrelevant (owing to the reverse-application of the flat world theory back on you). This is just one of the dimensions of measuring WOM effectively.
So I suspect that the futurists forecasting the death of the influential-centric strategy are just that, futurists (and, somewhat paradoxically, influentials themselves). If you’re selling Coke or Crest or something else that practically anyone in the world (including emerging economies) would buy, maybe the flat world model works. But until we have appropriate technology for effectively and efficiently sifting/sorting and managing the feedback from the flat world, most marketers would probably be better off concentrating their efforts on reaching the right “nodes of influence” within the websphere.
Presumably that’s what you and I are both doing right this very moment.
Tuesday, January 27, 2009
Now more than ever before, we need to build and nurture our brand assets.
Presumably this is intended to mean that in these times of great economic challenge, we cannot afford to let our brand standards slip, our brand equities become cloudy, or our brand experience decay.
Fair enough. But does that mean that we should be spending money to build these brand assets, even in the face of substantial cutbacks elsewhere? Or just be cautious not to cut things that would cause an undue decline in brand strength?
This had me wondering … under what conditions would we expect to be able to “harvest” some of the investment we’ve been making? How bad would things have to get before we expected the brand to “pay us back”? At what point would a CMO stand up and advocate “harvesting brand value”?
Sure, I understand that you should always be working on building your brand, and that done right, it is always paying you back. The flow is bi-directional and fluid. But it’s also transparent, and that’s the problem.
Assets, in a financial context, are a way of storing cash value for later use. You invest in stocks as assets, with the expectation that they will appreciate and return more cash to you later. Likewise, you invest in assets like manufacturing equipment, software, or other “tools” required to produce goods or services to sell. Brands could be said to play a similar role. Yet property, plant, and equipment are depreciated over time to reflect the decline of their useful life. Stocks and bonds are liquid assets for which there are markets to quickly buy and sell them, thus establishing their value.
Brands, on the other hand, aren’t depreciated. They can become “impaired” (accounting term meaning they are worth less than you paid for them, thereby triggering a write-down), but only if you purchased them from someone else. So if we marketers are going to rationalize some of our cash spending in good times by talking about “investing” in brand “assets”, at some point we are expected by the financial types to demonstrate how that asset value is being realized back into cash. I call it, “harvesting”.
So under what circumstances would you consider harvesting some of that brand equity?
Well, for starters, if you need to raise prices without adding any incremental costs associated with new features, benefits, or other value visible to the customer. In that case, you are relying on your brand asset to carry you past the danger of customer defection. To the degree that you averted attrition related to unilateral price increases (not matched by competitors immediately), you can legitimately claim that your brand “saved” you money. This is measurable.
Likewise, when a competitor announces a new product/feature/benefit that you cannot match, thereby taking an advantage in perceived value, you rely on your customers’ relationship with your brand to carry you through until you can once again restore your value proposition to its rightful state. This too is measurable.
And finally, when some aspect of your customer experience is deficient – a poor interaction with a call center agent, an inaccurate statement, or maybe a data privacy mishap – you rely on the strength of the overall brand relationship to carry you through. The value of this too is measurable.
So in this economy, while your budget is getting cut again and again, be sure to take the necessary steps to earn credit for how you’re now “spending” some of that “asset” value you built up over time. Done correctly, it will underscore what a good steward of company resources you are, and how far-sighted you’ve been all these years.
Just be careful not to overspend that brand asset account along the way (also measurable).
Saturday, January 17, 2009
Yes We Can apply more discipline to how we measure the payback on marketing investments without increasing the workload proportionately.
Yes We Can embrace this discipline without harming the creative energy so critical to marketing success.
Yes We Can measure those "softer" elements like branding, customer experience, innovation, and word-of-mouth, and link them to impacts on company cashflows.
Yes We Can overcome gaps in data and find ways to build reasonable approximations which even the CFO will embrace.
Yes We Can align the entire company on a single set of marketing metrics and all use the same yardsticks to measure success.
Yes We Can forecast the impact of changes in spending amount or allocation in ways that will inspire confidence instead of criticism.
Yes We Can anticipate the challenges ahead with reasonable certainty and act now to prepare ourselves to meet them head-on. And most importantly,
Yes We Can restore credibility and confidence in marketing as a means of driving profitable growth in our companies, regardless of industry, sector, corporate politics, culture, structure, or market dynamics.
The present economic environment offers a unique opportunity to re-invent the role of marketing in the organization, and to re-establish the critical links between our marketing efforts and the bottom-line shareholder value they create.
Believe it. If you're not doing it, your competitors likely are. There are no more good excuses. There is only "Yes We Can".
Monday, January 12, 2009
- Lack of clarity - not having a specific definition of what they're trying to measure, and getting lost in the ambiguity of the process. HINT: define and prioritize the key questions you're trying to answer BEFORE you set out to measure them. Read this.
- Inability to measure the "brand" impact - having great difficulty getting funding for branding activities/initiatives due to absence of any hard financial evidence of how brand drives value. Here are a few ideas. NOTE: solve this one now, or what's left of your branding budget may well disappear in the tough year ahead.
- No or bad data - this is not a reason, it's an excuse. There are dozens of ways to overcome short-term data gaps IF you realize that doing so is a people/politics challenge and not a technical one.
- Low credibility in the board room - the chickens have come home to roost. In the good times, we should have been working on building our knowledgebase of how marketing drives shareholder value. Now, all we can do is move funds from the more intangible activities to the more quantifiable. That's not a strategy. That's an outcome. How to NOT lose the battle next time around.
If you're still struggling to get an insightful and credible measurement program off the ground (or to see it reach a higher level of value), look here to see what your symptoms are, and then find the prescribed cure.
On the bright side, out of this economic crisis marketers are sure to gain some valuable experience ("experience" is what you get when you don't get what you want). As a community, we will learn from it and do better next time. At least, those of us who are actively working hard to get better will.
Tuesday, January 06, 2009
- Increasing difficulty of reaching consumers via cell phones
- Consumer fears of behavior tracking
- State and federal government interest in shady incentive practices used to entice medical professionals
- Unpopularity of "robocalls" and automated dialing
- Public backlash to "push-polls"
- Data security and breach protocols
While each of these dynamics is a threat to the future of the research industry, the bigger threat is the increasing irrelevance of research to senior management. More and more companies have outsourced their strategic marketing research functions to suppliers. The suppliers have been consolidating, often being acquired by bigger agency or marketing services holding companies. Not surprisingly, there is a serious degradation of objectivity that occurs in the process. And the more junior marketers now left client-side to direct the research program within their companies are not generally as politically senior/influential as one needs to be to push through the right research agenda - especially in times of immense cost-cutting pressure. (see Rebuilding Trust in Research as a Measurement Tool)
Sure, there are many executional threats facing the research industry today. But unless research is conceived in an appropriate strategic/financial context and prioritized for the value it potentially holds, the methodological threats will be but ice cubes floating in an ocean of icebergs.
It's time the research profession rose to the occasion once again. I hope it does.