Tuesday, January 19, 2010

The Cost of NOT Branding

It’s a simple formula: recession requires more tactical spending. This year’s budget = online spend + social activity + lead generation campaigns – brand investment.

When the dollars get tight, spend shifts to more tangible, less expensive marketing programs with the promise of shorter-term returns (or at least lower costs). Not that there’s anything wrong with saving a few bucks wherever you can get the job done more efficiently. But when saving money becomes the goal instead of a guideline, something big always suffers… usually the brand.

While this is an important problem within the B2C community, it is absolutely URGENT within the B2B community. B2B marketers in large numbers have seen their marketing resources cut back dramatically for anything that isn’t expected to generate significant near-term flows of qualified sales leads. Why? Because absent good metrics to connect brand or longer-term asset development to actual financial value, these things were seen as strategic luxuries that could be postponed.

If I were a CFO looking for strategies to free up cash, I might have reached the same conclusion. Unless my marketing team could explain to me the cost of NOT investing in brand.

Here’s an example… A B2B enterprise technology player (Company X) dropped all marketing programs except those that A) specifically promoted product advantages or B) generated suitable numbers of qualified leads to offset the cost. After a few months, leads were on target, but the sales closing cycle was creeping up. What was originally a 6 to 9 month cycle was becoming 9 to 12 months. Further analysis and research amongst prospects and customers showed that indeed some of this delay was being caused by the general economic uncertainty and the need for buyers to rationalize their purchases internally with more people. But fully 45 days of this extended cycle (estimated by sales managers) was happening because the ultimate decision-makers weren’t sufficiently familiar with the strength of Company X’s product/service offering. (They thought Company X made small consumer electronics, and was not a serious player in enterprise tech.) So the sales team had to make repeated visits and presentations just to work their way into the game to compete on feature/function/price/value.

In this case, the question of the cost of NOT branding could be measured by the increased cost of direct sales associated with NOT branding. Specifically, if Company X measures the sales cost/dollar of contribution margin amongst accounts with strong brand consideration versus those with little-to-no brand perceptions, they should expect to see at least a 50% difference (nine months of effort vs. six), half of which would be attributable to low levels of brand consideration. Multiply that by the percentage of prospects in the addressable market with low levels of brand perception, and you can quickly derive a rough approximation of the cost of NOT branding, expressed either in terms of additional sales headcount required to compensate for lack of branding, or in terms of sales opportunity cost to compensate for an underdeveloped brand. Either way, it’s an eminently measurable problem that would better illuminate the business case for investing in brand development.
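That back-of-the-envelope arithmetic can be sketched as a rough model. The 9-vs-6-month cycle and the 50% brand attribution come from the example above; everything else (rep cost, deal volume, prospect mix) is a hypothetical assumption for illustration only:

```python
# Rough sketch of the "cost of NOT branding" arithmetic described above.
# Only the cycle lengths and 50% attribution come from the example;
# all other figures are invented placeholders.

def cost_of_not_branding(
    cycle_branded_months=6.0,      # sales cycle where brand consideration is strong
    cycle_unbranded_months=9.0,    # sales cycle where brand perception is weak
    share_of_gap_from_brand=0.5,   # half the extra time attributed to weak brand
    cost_per_rep_month=15_000.0,   # fully loaded cost of one rep-month (assumed)
    unbranded_prospect_share=0.6,  # share of prospects with low brand perception (assumed)
    deals_per_year=200,            # deals pursued per year (assumed)
):
    """Approximate annual extra direct-sales cost attributable to weak brand."""
    extra_months = (cycle_unbranded_months - cycle_branded_months) * share_of_gap_from_brand
    extra_cost_per_deal = extra_months * cost_per_rep_month
    affected_deals = deals_per_year * unbranded_prospect_share
    return extra_cost_per_deal * affected_deals

# 1.5 extra rep-months * $15k per rep-month * 120 affected deals
print(f"${cost_of_not_branding():,.0f} per year")
```

Swap in your own cycle lengths, rep costs, and deal volumes and the same three multiplications give you a defensible first-order estimate to put in front of a CFO.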

There are many other ways to measure the cost of NOT branding, including relative margin realized and strategic segment penetration, amongst others. The right approach for you will depend upon your organization’s key business goals.

Now I’m NOT advocating branding as a solution in every circumstance. Nor am I a proponent of the idea that marketing should generally be spending more money versus less. But as a tireless advocate for marketing effectiveness and efficiency, I think we too often fail to examine the business case for NOT doing something as a means of pushing past cultural and political obstacles in our management teams. Remember, there are always two options… DO something, and NOT do something. Each is a definitive choice requiring its own payback analysis.

_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, January 05, 2010

Forecast for 2010: Better Forecasting

Yogi Berra said, “It’s tough to make predictions, especially about the future.” Yet the turn of the year (and the decade) makes forecasting an irresistible temptation. But what if forecasting is part of your job, not just a hobby? How do you make sure your forecasts are smart, relevant, and even (dare I say) accurate?

While advanced mathematics and enormous computational power have improved forecasting potential significantly, few would argue that forecasting is an exact science. That’s because at its core, forecasting is still mostly a human dynamic where accuracy is dependent upon…

  • asking the right people the right questions;
  • their willingness to answer truthfully and completely;
  • the ability to separate the meaningful elements from the noise; and,
  • the openness of the forecaster to suggestions of process improvement.

That last point is key: process improvement.

Consistently good forecasting isn’t a mathematical exercise performed at regular intervals (e.g., quarterly) as much as it’s an on-going process of gathering and evaluating dozens or hundreds of points of information into a decision framework. Then, when called upon, this decision framework can output the best forward-looking view grounded in the insights of the contributors. While software can facilitate process structure by prompting for specific fields of information to be included, it cannot make judgments on the quality of the information being input. Garbage in; garbage out.

As marketers, our job is to consistently prepare forecasts that help our companies conceive, plan, test, build and ultimately sell successful products and services. Sound forecasting processes form the foundation of an “early warning” system to alert the rest of the organization to the need to rethink its market orientation. In essence, forecasting becomes the rudder that can help your company stay the course, change directions, or navigate uncharted waters with confidence. As such, marketing migrates from being a tactical player to a strategic resource for the CEO when forecasts become more accurate, timely, and reliable.

Five Keys to Better Forecasting

Here are a few things I’ve learned over the years which result in much better forecasts:

  1. Be Specific. Define exactly what you are trying to forecast. If you say “sales”, what do you really mean? Revenue? Unit volume? Gross Margins? Net profit? The differences are substantial and might cause you to take very different approaches to forecasting. Likewise, having some sense of how far out you need to forecast (e.g. 3 months, 12, 36, etc.) and how accurate you need to be will guide you to use forecasting methods and processes better suited to your objectives.
  2. Be Structured. Being methodical in defining all of the dimensions, variables, facts, and assumptions will pay huge dividends in several ways, including explaining your forecast to skeptics and inspiring confidence that you’ve been comprehensive and credible in your approach.
  3. Be Quantitative – With or Without Data. Regardless of how little data you have, there are scientifically developed and proven ways of making better decisions. You may not have the raw materials for statistical regression forecasting, but you surely can use Delphi techniques or other judgmental calculus tools to transform perceptions and intuition of managers into data sets which can be more fully examined and questioned. Often, the process of quantifying the fuzzier logic uncovers great insights that were previously overlooked.
  4. Triangulate – Use multiple forecasting methods and see how the results differ. Chances are that the “reality” is somewhere within the triangle of results. That level of accuracy may be sufficient. But even if it isn’t, the multiple-method approach highlights weaknesses in any single method which might otherwise be overlooked – and that in itself leads to more accurate forecasts.
  5. KISS – Keep it simple, stupid. As with most things in life, simplicity is a virtue in forecasting. Einstein said that “things should be made as simple as possible, but no simpler.” In forecasting, we interpret that to mean that an accurate and reliable forecasting process should be comprehensive enough to identify the truly causal factors, but simple enough to explain to those who will need to make decisions upon it. There is no power in a forecast if those who need to trust it cannot understand or explain the logic and process behind it.
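The triangulation idea in point 4 can be sketched with three deliberately simple methods run against the same sales history; the spread between their answers bounds where reality plausibly sits. The history below is invented for illustration:

```python
# A minimal sketch of "triangulate": run several simple forecasting
# methods on the same history and compare the spread of results.
# The monthly sales history below is invented for illustration.

history = [100, 104, 110, 113, 121, 125]

def naive(h):
    # Tomorrow looks like today: repeat the last observed value.
    return h[-1]

def drift(h):
    # Last value plus the average historical period-over-period change.
    avg_change = (h[-1] - h[0]) / (len(h) - 1)
    return h[-1] + avg_change

def moving_average(h, k=3):
    # Mean of the last k observations smooths out recent noise.
    return sum(h[-k:]) / k

forecasts = {f.__name__: f(history) for f in (naive, drift, moving_average)}
low, high = min(forecasts.values()), max(forecasts.values())
print(forecasts)
print(f"reality is plausibly somewhere in [{low:.1f}, {high:.1f}]")
```

When the three methods disagree sharply, that disagreement is itself the insight: it tells you which method’s assumptions deserve scrutiny before anyone bets a budget on the number.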

Recognizing forecasting to be a complex human decision process is the first step towards dramatically improving your batting average: the accuracy and reliability of the forecasts coming out of your department.

If you’re interested in learning more, here is some expanded forecasting insight, and some great sources of forecasting methods and tools.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – specialist advisors on marketing measurement and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, December 22, 2009

Measuring What Matters Most

What is the most important thing in your marketing plan to measure?
A) Campaign response.
B) Customer satisfaction.
C) Brand value.
D) Media mix efficiency.
E) All of the above.

The fact is that there are so many things to measure, more and more marketers are getting wrapped around the axle of measurement and wasting time, energy, and money chasing insight into the wrong things. Occasionally this is the result of prioritizing metrics based on what is easy to measure in a well-intentioned but misguided attempt to just “start somewhere”. Sometimes, it comes from an ambitious attempt to apply rocket science mathematics to questionable data in the search for definitive answers where none exist. But most often it is the challenge of being able to even identify what the most important metrics are. So here’s a way to isolate the things that really matter, and thereby the most critical metrics.

Let’s say your company has identified a set of 5 year goals including targets for revenue, gross profit margin, new channel development, customer retention, and employee productivity. The logical first step is to make sure the goals are articulated in a form that facilitates measurement. For example, “opening new channels” isn’t a goal. It’s a wish. “Obtaining 30% market share in the wholesale distributor channel within five years” is a clear, measurable objective.

From those objective statements, you can quantitatively measure the size of the gap between where you are today and where you need to be in year X (the exercise of quantifying the objectives will see to that). But just measuring your progress on those specific measures might only serve to leave you well informed on your trip to nowheresville. To ensure success, you need to break each objective down into its component steps or stages. Working backwards, for example, achieving a 30% share goal in a new channel by year 5 might require that we have at least a 10% share by year 4. Getting to 10% might require that we have contracts signed with key distributors by year 3, which would mean having identified the right distributors and begun building relationships by year 2. And of course you would need all your market research, pricing, packaging, and supply chain plans completed by the end of year 1 so you could discuss the market potential intelligently with your prospective distributors.

When you reverse-engineer the success trajectory on each of your goals, you will find the critical building block components. These are the critical metrics. Monitor your progress towards each of these sub-goals and you have a much greater likelihood of hitting your longer-range objectives.

Kaplan and Norton, the pair who brought you the Balanced Scorecard and Strategy Mapping, have a simple tool they call Success Mapping to help diagram this process of selecting key measures. Each goal is broken down into critical sub-goals. Each sub-goal has metrics that test your on-track performance. A sample diagram follows.

By focusing on your sub-goals, you can direct all measurement efforts to those things that really matter, and invest in measurement systems (read: people and processes, not just software) in a way that’s linked to your overall business plan, not as an afterthought.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – objective advisors on marketing measurement and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, December 08, 2009

Making the "Best" Business Case for Marketing Investments

More than ever before, the approval of any significant marketing initiative is dependent upon a compelling business case. A business case is meant to function as a logical framework for the organization of all of the benefit and cost information about a given project or investment.

Working with this definition, one might conclude that a “good” marketing business case is one that increases the quality of decision making. Yet many of us in marketing have come to believe that a good business case is one that predicts a significantly positive ROI, IRR, and/or NPV for a given investment. Strangely, we tend to water-down any assumptions that actually seem to make the case “too good,” lest someone in finance really begin questioning our assumptions. Have you ever found yourself…

  • Using aggressive, moderate, and conservative labels on business case scenarios to show how even the most conservative view provides a strong potential return, and anything beyond that is gravy?
  • Identifying the always-low break-even point at which expenses are recaptured fully, and showing how this point occurs even below the conservative outcome scenario?
  • Taking a “haircut” in assumptions to show how, “even if you cut the number in half, the result is still positive”?

Every time you use one of these approaches in an effort to build credibility with finance or other operating executives, you paradoxically wind up undermining it instead. These tactics all have been shown to communicate subtle messages of inherent bias and manipulative persuasion which, intended or not, are noticeable to non-marketing reviewers – even if only on an instinctive level versus a conscious one.

In my experience, business cases get rejected most often for one of the following reasons:

  • Bias – senior management perceives that marketing is trying to “sell” something rather than truly understand the risk/reward of the proposed spending recommendations.

  • Jumping to the Numbers – showing final forecasts which contradict executive intuition before they have had a chance to reconsider the validity of those instincts.

  • Credibility of Assumptions – forecasts seem to ignore the effect of key variables or predict unprecedented outcomes.

Successful business-case developers recognize that there is more at stake than just getting funding approved. In reality, there are several objectives which must all be achieved with every business case:

  1. Protecting personal credibility. Any one program or initiative may be killed for many possible reasons. But you will still need to come to work tomorrow and be effective with your executives, peers, and team members. Preserving (and strengthening) your personal credibility is therefore the paramount objective.

  2. Enhancing the role of marketing. If you have personal credibility, you will want to use it to take smart risks to help the company achieve its objectives, and to influence matters relating to strategy, products, markets, etc. In the process, you need to be thinking about the role of the marketing function; how it can best serve the firm; and how you need to evolve it from what it is today.

  3. Bringing attractive options to the CEO – the kind that forces him/her to make hard decisions choosing between financially appealing alternatives.

There are always two dimensions to business case quality – financial attractiveness, and credibility of assumptions. In the end, it takes more than just financial attractiveness for a successful business case. It takes:

  • Thoughtfulness: demonstrating keen understanding of the role marketing plays in driving business outcomes and reflecting the input of the most critical stakeholders throughout the organization.
  • Comprehensiveness: including all credible impacts of spending recommendations, and calculating benefits and costs at an appropriate level of granularity.
  • Transparency: Clearly labeling all assumptions as such and presenting them in a way that encourages healthy discussion and challenge.

There are many ways to build a successful business case. But the most important learning is to understand the context in which your proposal will be evaluated BEFORE you put the numbers on the table.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – objective advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Thursday, November 12, 2009

Let's Bury John Wanamaker

There were so many interesting aspects of this year’s Masters of Marketing conference. One in particular caught my attention right up-front on Friday morning…

In this, the 100th year of the ANA, there are still lots of questions surrounding which 50% of advertising is “wasted”. I find that astounding. That 100 years later we’re still having this debate.

Maybe it’s because the very nature of advertising defies certainty.

Or maybe the definition of “wasted” is too broad.

Or maybe the reality is that the actual waste factor has been reduced to significantly less than 50%, but no one famous ever said that “15% of my advertising is wasted, I just don’t know which 15%.” And it wouldn’t make for a provocative PowerPoint slide even if they did.

It’s difficult to ignore the many signs of great progress we’ve made as an industry towards better understanding the financial payback of marketing and advertising. For example:

  • Research techniques have improved and the frequency of application has increased to provide better perspective on how actions affect brands and demand.

  • We’ve not only embraced analytical models in many categories, but have moved to 2nd and even 3rd generation tools that provide great insight.

  • We’ve adopted multivariate testing and experimental design to test and iterate towards effective communication solutions.

  • We’re learning to link brand investments to cash flow and asset value creation, so CFOs and CEOs can adopt more realistic expectations for payback timeframes.

All of this is very encouraging. Most of the presenters at this year’s conference included in their remarks evidence that they have been systematically improving the return they generate on their marketing spend by use of these and other techniques. So where is the remaining gap (if indeed one exists)?

First off, it seems that we’re often still applying the techniques in more of an ad-hoc than integrated manner. In other words, we appear to be analyzing this and researching that, but not actually connecting this to that in any structured way.

Second, while some of the leading companies with resources to invest in measurement are leading the charge, the majority of firms are under-resourced (not just by lack of funds, but people and management time too) to realistically push themselves further down the insight curve. In other words, the tools and techniques have been proven, but still require a substantial effort to implement and adopt.

Third, not everyone agrees with Eric Schmidt’s proclamation that “everything is measurable”. Some reject the basic premise, while others dismiss its applicability to their own very non-Google-like environments.

So what will it take to put John Wanamaker out of our misery before the 200th anniversary of the ANA?

  1. Training – exposing more marketing managers to more measurement techniques so they can apply their creative skills to the measurement challenge with greater confidence.
  2. A community-wide effort to push down the cost of more advanced measurement techniques, thereby putting them within reach of more marketing departments.
  3. An emphasis on “integrated measurement”. We’ve finally embraced the concept of “integrated marketing”. Now we have to apply the same philosophy to measurement. We need to do a better job of defining the questions we’re trying to answer up-front, and then architecting our measurement tools to answer the questions, instead of buying the tools and accepting whatever answers they offer while pleading poverty with respect to the remaining unanswered ones.
  4. We should eat a bit of our own dog food and develop external benchmarks of progress (much like we do with consumer research today). Let’s stop asking CMOs how they think their marketing teams are doing at measuring and improving payback, and work with members of the finance and academic communities to define a more objective yardstick with which we can measure real progress.

As we embark on the next 100 years, we have the wisdom, technology, and many of the tools to finally put John Wanamaker to rest. With a little concerted effort, we can close the remaining gaps to within a practical tolerance and dramatically boost marketing’s credibility in the process.


From Time’s a Wasting – for more information visit anamagazine.net.
--------------------

Pat LaPointe is managing partner at MarketingNPV – objective advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

"That is the Big Challenge" says Eric Schmidt

I had a chance to talk briefly with Eric Schmidt, CEO of Google, at last week’s ANA conference. He’d just finished sharing his take on marketing and advertising with 1200 of us representing marketers, agencies, and supporting service providers. He said:

  • Google backed away from managing radio and print advertising networks due to lack of “closed loop feedback”. In other words, they couldn’t tell an advertiser IF the consumer actually saw the ad or if they acted afterward. Efforts to embed unique commercial identifiers into radio ads exist, but are still immature. And in print, it’s still not possible to tell who (specifically) is seeing which ads – at least not until someone places sensors between every two pages of my morning newspaper.
  • Despite this limitation, Schmidt feels that Google will soon crack the code of massive multi-variate modeling of both online and offline marketing mix influences by incorporating “management judgment” into the models where data is lacking, thereby enabling advertisers to parse out the relative contribution of every element of the marketing mix to optimize both the spend level and allocation – even taking into account countless competitive and macro-environmental variables.
  • That “everything is measurable” and Google has the mathematicians who can solve even the most thorny marketing measurement challenges.
  • That the winning marketers will be those who can rapidly iterate and learn quickly to reallocate resources and attention to what is working at a hyper-local level, taking both personalization and geographic location into account.
On all these fronts, I agree with him. I’ve actually said these very things in this blog over the past few years.

So when I caught up with him in the hallway afterward, I asked two questions:

  1. How credible are these uber-models likely to be if they fail to account for “non-marketing” variables like operational changes affecting customer experience and/or the impact of ex-category activities on customers within a category (e.g., how purchase activity in one category may affect purchase interest in another)?

  2. At what point do these models become so complex that they exceed the ability of most humans to understand them, leading to skepticism and doubt fueled by a deep psychological need for self-preservation?
His answers:

  1. “If you can track it, we can incorporate it into the model and determine its relative importance under a variety of circumstances. If you can’t, we can proxy for it with managerial judgment.”

  2. “That is the big challenge, isn’t it.”
So my takeaway from this interaction is:

  • Google will likely develop a “universal platform” for market mix modeling, which in many respects will be more robust than most of the other tools on the market – particularly in terms of seamless integration of online and offline elements, and web-enabled simulation tools. While it may lack some of the subtle flexibility of a custom-designed model, it will likely be “close enough” in overall accuracy given that it could be a fraction of the cost of custom, if not free. And it will likely evolve faster to incorporate emerging dynamics and variables as their scale will enable them to spot and include such things faster than any other analytics shop.

  • If they have a vulnerability, it may be under-estimating the human variables of the underlying questions (e.g., how much should we spend and where/how should we spend it?) and of the potential solution.

Reflecting over a glass of Cabernet several hours later, I realized that this is generally a good thing for the marketing discipline as Google will once again push us all to accelerate our adoption of mathematical pattern recognition as inputs into managerial decisions. Besides, the new human dynamics this acceleration creates will also spur new business opportunities. So everyone wins.

Friday, October 30, 2009

The 5 Questions That Kill Marketing Careers

As the planning cycle renews itself, you should be aware of 5 key questions which have been known to pop up in discussion with CEOs/CFOs and short-circuit otherwise brilliant marketing careers.

  1. What are the specific goals for our marketing spending and how should we expect to connect that spending to incremental revenue and/or margins?

  2. What would be the short and long-term impacts on revenue and margins if we spent 20% more/less on marketing overall in the next 12 months?

  3. Compared to relevant benchmarks (historical, competitive, and marketplace), how effective are we at transforming marketing investments into profit growth?

  4. What are appropriate targets for improving our marketing leverage ($’s of profit per $ of marketing spend) in the next 1/3/5 year horizons, and what key initiatives are we counting on to get us there?

  5. What are the priority questions we need to answer with respect to informing our knowledge of the payback on marketing investments and what are we doing to close those knowledge gaps?
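Question 4’s leverage ratio, at least, is straightforward arithmetic. A minimal sketch, with entirely invented figures standing in for a real P&L:

```python
# Marketing leverage as defined in question 4: dollars of profit
# generated per dollar of marketing spend. All figures are hypothetical.

def marketing_leverage(incremental_profit, marketing_spend):
    return incremental_profit / marketing_spend

current = marketing_leverage(incremental_profit=6_000_000, marketing_spend=4_000_000)
target = marketing_leverage(incremental_profit=7_500_000, marketing_spend=4_000_000)

print(f"current leverage: ${current:.2f} of profit per marketing dollar")
print(f"1-year target:    ${target:.2f} of profit per marketing dollar")
```

The hard part is not the division; it is agreeing with finance on what counts as *incremental* profit, which is exactly the assumption-setting work question 5 points at.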

How you answer these five questions will get you promoted, fired, or, worse, marginalized.

If you tend to answer in a dizzying array of highly conceptual (e.g., brand strength rankings compared to 100 other companies) and/or excruciatingly tactical (e.g., cost per conversion on website leads) “evidence”, stop. Preponderance of evidence doesn’t win in the court of business. Credible, structured attempts to answer the underlying financial questions do.

The five questions are all answerable, even in the most data-challenged environments. Provided, of course, that you build acceptance of the need for some substantial assumptions to be made in deriving the answers – much the same way as you would in building a business case for a new plant, or deploying a new IT infrastructure project. The key is to make the assumptions explicit and clearly define the boundaries of facts, anecdotes, opinions, and guesses. Think in terms of:

  • Clarifying links between the company’s strategic plan and the role marketing plays in realizing it;
  • Connecting every tactical initiative back to one or more of the strategic thrusts in a way that makes every expenditure transparent in its intended outcome, thereby promoting accountability for results at even the most junior levels of management;
  • Defining the right metrics to gauge success, diagnose progress, and better forecast outcomes;
  • Developing a more methodical (not “robotic”) learning process in which experiments, research, and analytics are used to triangulate on the very types of elusive insights which create competitive advantage; and
  • Establishing a culture of continuous improvement that seeks to achieve quantifiably higher goals year after year.

As you push boldly into 2010, remember that 2011 is just around the corner. You may not have been asked these hard questions this year, but who knows who your CFO might talk to next year (me, perhaps??) that will ratchet up his/her expectations. You can prepare by using these five questions as a framework to benchmark where your marketing organization is starting from; as a guide to ensure that sufficient resources are being allocated to promote continuous learning and improvement; and as a means of monitoring the performance of your marketing organization.

_________________________

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Wednesday, September 30, 2009

Memorandum
To: John Buleader
From: Barbara Researcher
Subject: How to improve our Net Promoter Scores

In response to your question of last week, I have considered several options for how we can improve our recently flagging Net Promoter scores and thereby increase that portion of our year-end bonus linked to that specific metric.

  1. We could restrict our sample of surveyed respondents to only those who have recently purchased from us, and ignore those who either didn’t like us enough to buy from us as well as those who bought from us a while ago but may be having second thoughts due to our poor reliability and service.
  2. We could change our sampling approach to only solicit surveys from those who buy online since our website is so slick and efficient. This has the added benefit of reducing our research expenses so we can still afford those front-row football tickets.
  3. We could change the way we calculate Net Promoter to take the percentage of customers who score us as 7s through 10s and subtract those who score us as 1s or 2s, since we know that 3s to 6s are really the “marginal” middle group, and thereby take some credit for producing partial satisfaction.
  4. We can offer customers a $10 bonus coupon to allow our sales associates to “help” them complete the survey before they leave the store, thus providing both convenience and value to our customers.
  5. We can reduce the frequency of surveying from monthly to annually so as to make it virtually impossible to link our marketing or sales actions back to increases or decreases in the scores. This will create so much confusion over interpretation and causality that bonuses will have long been paid by the time anyone actually agrees on what to do next.
  6. We can have our sales reps do the surveying themselves. This will allow us to capture notations about body language of the respondents too (side benefit: see football reference above).

Any or all of these strategies could essentially ensure success. Provided our market share doesn’t fall too fast, we’re unlikely to draw any undue attention.

Please let me know how you would like to proceed. We can also adjust any of our other metrics in similar ways.
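For reference, the standard Net Promoter calculation that option 3 proposes to distort works like this: promoters score 9–10, detractors score 0–6, and the score is the percentage-point gap between them. The survey scores below are invented:

```python
# Standard Net Promoter Score on a 0-10 scale: % promoters (9-10)
# minus % detractors (0-6). Option 3 above quietly shifts those bands.
# The sample scores are invented for illustration.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

scores = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(f"NPS: {nps(scores):+.0f}")
```

Four promoters minus three detractors over ten responses gives +10; redraw the bands as the memo suggests and the same data produces a very different, and very meaningless, number.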
_________________________


Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Friday, September 18, 2009

Eat Pizza; Stay Thin - But Sacrifice Credibility

There it is again.

I’m sitting in a presentation at a meeting of a group of mid-level marketers representing some of America’s biggest ad budgets. The speaker, who is representing a media source (monopoly, government owned), is telling us that spending more on marketing in a recession is a good way to improve ROI. His argument: most marketers pull back in a recession, so if you maintain or increase your spending, your message will get through to more prospective customers, more often, with less clutter. He’s citing some examples of where this was the case.

If you believe this, I have a few lots in Florida I’d like to discuss with you. Waterfront. Tons of wildlife.

I know a few people who are big fans of pizza. Given the choice, they would eat pizza for breakfast, lunch, and dinner – and probably do on occasion. Interestingly, these people tend to be thin, with very low body fat and excellent muscle tone. So, by extrapolation, I can conclude that eating pizza makes them thin. Let’s all go eat more pizza.

Anyone who has studied the question of increasing spend in a recession (and done so objectively) will tell you that the evidence supporting higher spending in recessions is weak at best. There are many anecdotes and some success stories, but not enough clear evidence to convince even a moderately smart CFO that the reward outweighs the risks.

The unfortunate fact is that the definitive study on this question has never been done. No one has ever surveyed a broad sampling of marketers in different industries and different competitive scenarios, and had some randomly increase spend while holding others in control at lower levels. There are no legitimate studies that tell us how X% of marketers that increased spend had successful outcomes. And even those that have attempted to do a meta-study of all the many narrowly-focused probes on the topic conclude that there are no general rules of thumb that hold up across industries and companies.

If you think I’m saying that spending more is NOT a good strategy, you’re missing the point. Spending more MIGHT be the perfect strategy. Or it might be the last bad choice of your career. Success during recessionary (or slow recovery) times has less to do with level of spending than it does with three simple factors:

  1. How strong is your product/service value proposition relative to your competitors or the alternatives your prospect may have? If it’s VERY strong, you might gain ground by spending more. If not, you might waste money just trying to buy share of voice in support of a solution which isn’t all that compelling.
  2. How responsive are your prospects to marketing spend? If they are very likely to respond to marketing stimulus, maybe more spend is good. But if marketing is just one of many things that cause them to buy, you may find that it would take a disproportionate increase in spend to achieve any noticeable shift in outcomes.
  3. How strong is your company balance sheet? Chances are, if you spend more, your competitors will try to match you to prevent losing share. It would be naïve to think you could get away with anything that would steal share without seeing some sort of blunting response. If you have the cash to withstand an escalation of a competitive war on marketing spend, go for it. But check with your CFO before you propose a strategy that might create far more risk than the company can undertake at this time.
There are a few other considerations, but these are the really important ones.

If you worry about the consequences of eating too much pizza, you’re now better equipped to challenge broad assertions about spending more. And you’re more likely to preserve your credibility for the really important issues in the future.



Friday, July 24, 2009

Twittering Away Time and Money

One of the most common questions I’m getting these days is “how should I measure the value of all the social marketing things we’re doing, like Twitter, LinkedIn, Facebook, etc.?”

My answer: WHY are you doing them in the first place? If you can’t answer that, you’re wasting your time and the company’s money.

Sounds simple I know, but I’m stunned at how unclear many marketers are about their intentions/expectations/hypotheses for how social media initiatives might actually help their business. In short, if you can’t describe in two sentences or less (no semi-colons) WHAT you hope to gain through use of social media, then WHY are you doing it? Measurement isn’t the problem. If you don’t know where you’re going, any measurement approach will work.

Here’s a framework for thinking about social measurement:

  1. Fill in the blanks: “Adding or swapping-in social media initiatives will impact ____________ by __________ extent over _____________ timeframe. And when that happens, the added value for the business will be $_____________, which will give me an ROI of ______________.” This forms your hypotheses about what you might achieve, and why the rest of the business should care.
  2. Identify all the assumptions implicit in your hypotheses and “flex” each assumption up/down by 50% to 100% to see under which circumstances your assumptions become unprofitable.
  3. Identify the most sensitive assumption variables - those that tend to dramatically change the hypothesized payback by the greatest degree based on small changes in the assumption. These are your key uncertainties.
  4. Enhance your understanding of the sensitive assumptions through small-scale experiments constructed across broad ranges of the sensitive variables. Plan your experiments in ways you can safely FAIL, but mostly in ways to help you understand clearly what it would take to SUCCEED – even if that turns out to be unprofitable upon further analysis. That way, you will at least know what won’t work, and change your hypotheses in #1 above accordingly.
  5. Repeat steps 1 through 4 until you have a model that seems to work.
  6. In the process, the drivers of program success will become very obvious. Those become your key metrics to monitor.
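
As a purely illustrative sketch of steps 2 and 3, here is what “flexing” assumptions in a toy payback model might look like. Every name and number below is invented; note that in a simple multiplicative model like this one the revenue drivers all flex identically, so in a real business case it is the saturation effects and fixed costs that separate the truly sensitive assumptions from the rest:

```python
# Toy payback model with hypothetical assumptions. We "flex" each
# assumption down 50% and up 100% to see how much ROI moves.
def roi(assumptions):
    a = assumptions
    customers = a["reach"] * a["engagement_rate"] * a["conversion_rate"]
    value = customers * a["value_per_customer"]
    return (value - a["cost"]) / a["cost"]

base = {
    "reach": 50_000,            # people exposed to the program
    "engagement_rate": 0.04,    # share who interact
    "conversion_rate": 0.05,    # share of engagers who buy
    "value_per_customer": 120,  # margin per acquired customer ($)
    "cost": 10_000,             # program cost ($)
}

# Hold everything else at base while flexing one assumption at a time.
for key in ["reach", "engagement_rate", "conversion_rate", "value_per_customer"]:
    for factor in (0.5, 2.0):
        flexed = dict(base, **{key: base[key] * factor})
        print(f"{key} x{factor}: ROI = {roi(flexed):.0%}")
```

The assumptions whose flexed ROI swings widest are the key uncertainties worth testing in step 4.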


In short, measuring the payback on social media requires a sound initial business case that lays out all the assumptions and uncertainties, then methodically iterates through tests to find the model(s) that work best. Plan to fail in small scale, but most importantly plan to LEARN quickly.

Measure social media like you should any other marketing investment: how did it perform versus your expectations of how it should have? If those expectations are rooted in principles of profit-generation, your measurement will be relevant and insightful.

Tuesday, July 14, 2009

Walking Naked Through Times Square

I was sitting in a 2010 planning meeting recently listening to the marketing team describe their objectives, strategies, and thoughts on tactics they were planning to deploy. Their question to me was “how should we measure the payback on this strategy?”

My response was: “compared to what? Walking naked through Times Square?” I was being asked to evaluate a proposed strategy without any sense of what the alternatives were.

Sure, I can come up with a means of estimating and tracking the ROI on almost anything. But if that ROI comes to 142%, so what? Is there a plan that might get us to 1000% (without just cutting cost and manipulating the formula)?

As I thought back on the hundreds of planning meetings I’ve been in over the last 10 years, it occurred to me that we marketers are not so good at identifying alternative ways of achieving objectives and systematically weighing the options to ensure we’re selecting the paths that best meet the organization’s needs strategically, financially, and otherwise.

On a relative basis, we spend far too much of our time measuring the tactical/executional performance of the things we have decided to do, and far too little measuring the comparative value of things we might decide to do. Scenario planning; options analysis; decision frameworks. You get the idea.

The importance of this up-front effort isn’t just in getting to better strategies, but in building further credibility throughout the organization. Finance, sales, and operations all see marketing investments as inherently risky due to A) the size of the expenditures; and B) the uncertain nature of the returns as compared to many of the things those other functions tend to spend money on. Impressing them with our thorough exploration of the landscape of options goes a long way to demonstrating that we’ve considered risk (albeit implicitly) in our recommendations, and have done all the necessary homework to arrive at a reasonable conclusion. NOT necessarily by producing a 50-page deck, but rather by simply stating which alternatives were considered, what the decision framework was, and how the ultimate selection was made. (This also builds trust through transparency.)

From a measurement perspective, we can then consider the relative potential value of doing A versus B versus C, and in the process raise the level of confidence that we are spending the company’s money wisely. We can then turn our attention to measuring the quality of the execution of the chosen path with confidence that we’re not just randomly measuring the trees while wandering in the forest.

I’m not sure how many businesses might get a high ROI on walking naked through Times Square, but imagining that option certainly helps fuel creativity and underscores the importance of measuring strategic relevance, not just tactical performance.

Got any good stories about wandering naked?


Thursday, July 09, 2009

10 New Resolutions for the 2010 Planning Process

As we approach the 2010 planning season, I always like to take a few moments and reflect on the horrors of last year's planning cycle, making some commitments on how I can do it better this year:

1. I will lead this process and not get dragged behind it.

2. I recognize that many of our business fundamentals may have recently changed, so I commit to anticipating the key questions likely to define our strategy and, using research, analytics, and experiments, gather as much insight into them as I can in advance of making recommendations.

3. I will approach my budget proposal from the ground up, with every element having a business case that estimates the payback and makes my assumptions clear for all to see.

4. I will not be goaded into squabbling over petty issues by pin-headed, myopic, fraidy-pants types in other departments, regardless of how ignorant or personally offensive I find them to be.

5. The person who wrote number 4 above has just been sacked.

6. I will proactively seek input from others in finance, sales, and business units as I assemble my plan, to ensure I understand their questions and concerns and incorporate the appropriate adjustments.

7. I will clearly and specifically define what "success" looks like before I propose spending money, and plan to implement the necessary measurement process with all attendant pre/post, test/control, and with/without analytics required to isolate (within reason) the expected relative contribution of each element of my plan.

8. I will analyze the alternatives to my recommendations, so I am prepared to answer the inevitable CEO question: "Compared to what?"

9. I will be more conscious of my inherent biases relative to the power of marketing, and try not to let my passion get in the way of my judgment when constructing my plan.

10. If all else above fails, I promise to be at least 10% more foresighted and open-minded than I was last year, as measured by my boss, my peers in finance, and my administrative assistant. My spouse, however, will not be asked for an opinion.

How are you preparing for planning season? I'd like to hear what your resolutions are.

Thursday, May 07, 2009

Survey: 27% of Marketers Suck

A survey conducted by the Committee to Determine the Intelligence of Marketers (CDIM), an independent think-tank in Princeton, NJ, recently found that:

· 4 out of 5 respondents feel that marketing is a “dead” profession.
· 60% reported having little if any respect for the quality of marketing programs today.
· Fully 75% of those responding would rather be poked with a sharp stick straight into the eye than be forced to work in a marketing department.

In total, the survey panel reported a mean response of 27% when asked, “on a scale of 0% to 100%, how many marketers suck?”

This has been a test of the emergency BS system. Had this been a real, scientifically-based survey, you would have been instructed where to find the nearest bridge to jump off.

Actually, it was a “real” “survey”. I found 5 teenagers in a local shopping mall loitering around the local casual restaurant chain and asked them a few questions. Seem valid?

Of course not. But this one was OBVIOUS. Every day we marketers are bamboozled by far more subtle “surveys” and “research projects” which purport to uncover significant insights into what CEOs, CFOs, CMOs, and consumers think, believe, and do. Their headlines are written to grab attention:

- 34% of marketers see budgets cut.
- 71% of consumers prefer leading brands when shopping for .
And my personal favorite:
- 38% of marketers report significant progress in measuring marketing ROI, up 4% from last year.

Who are these “marketers”? Are they representative of any specific group? Do they have anything in common except the word “marketing” on their business cards?

Inevitably such surveys blend convenience samples (e.g. those willing to respond) of people from the very biggest billion dollar plus marketers to the smallest $100k annual budgeteers. They mix those with advanced degrees and 20 years of experience in with those who were transferred into a field marketing job last week because they weren’t cutting it in sales. They commingle packaged goods marketers with those selling industrial coatings and others providing mobile dog grooming.

If you look closely, the questions are often constructed in somewhat leading ways, and the inferences drawn from the results conveniently ignore the statistical error factors which frequently wash away any actual findings whatsoever. There is also a strong tendency to draw conclusions year-over-year when the only thing in common from one year to the next was the survey sponsor.

As marketers, we do ourselves a great disservice whenever we grab one of these survey nuggets and embed it into a PowerPoint presentation to “prove” something to management. If we’re not trustworthy when it comes to vetting the quality of research we cite, how can we reasonably expect others to accept our judgment on subjective matters?

So the next time you’re tempted to grab some headlines from a “survey” – even one done by a reputable organization – stop for a minute and read the fine print. Check to see if the conclusions being drawn are reasonable given the sample, the questions, and the margins of error. When in doubt, throw it out.

If we want marketing to be taken seriously as a discipline within the company, we can’t afford to let the “marketers” play on our need for convenience and simplicity when reporting “research” findings. Our credibility is at stake.

And by the way, please feel free to comment and post your examples of recent “research” you’ve found curious.

Friday, May 01, 2009

The Elasticity of Sales Enablement

Trendy consulting advice suggests that, since marketing ROI improvement is ultimately limited by the effectiveness of the marketing/sales handoff, “sales enablement” is crucial to success. Consequently, in the present results-NOW environment, marketers (particularly B2B and high-tech) are re-examining their processes and their dedication to providing sales with the right materials, tools, and training to unleash the power of the embedded marketing ideas.

I think I agree with this. If there’s any hesitation to offer full-throated support it’s just that it seems to me to be a bit of a penetrating glance into the obvious. OF COURSE marketing programs cannot succeed without reliably strong sales execution. But let’s not put too many eggs in that basket just yet.

The hard reality is that, in the short term, there is only so much sales enablement one can achieve. Magic sell sheets do not turn mediocre sales reps into revenue superheroes. Online demo tools don’t revolutionize categories or spark unprecedented demand. Those sorts of changes come about through:

1. Sound sales management process.
2. Comp structures aligned to incent the “right” behaviors.
3. Methodical, continuous training based on observed effective practices.
4. Regular “pruning” of the bottom 25% of performers (and a few nearer the top who neglect the means for the end).

Those in marketing measurement who are familiar with the concept of “elasticity” understand that there are limits to just how much improvement we can achieve with sales enablement in any 3, 6, or even possibly 12 month horizon. So pressed for better results NOW, sales enablement may be more of a feel-good response than an effective one.

Don’t mistake the message here. Sales enablement is always important. But unless the foundational elements above are being properly managed, marketers’ efforts to “enable” sales may result in some great meetings and a few positive anecdotes, but may fail to achieve improvement at any scale or on a sustainable basis. So allocating more resources to “sales enablement” may not provide the expected returns.

If you want to get a better handle on the “elasticity” of your sales force, try performing a deeper analysis of recent buyers versus non-buyers, being sure to sample prospect leads from both high-performing and average-performing sales reps. Get a clearer understanding of why rejecters are not buying (or stalling longer) and analyze the data until you can clearly determine the factors which make the higher-performers more effective. If you conclude that those factors are mostly innate skill, personality or motivation characteristics, then the elasticity of your sales force is likely very low. This suggests that sales management has work to do in the fundamentals arena before marketing can help much.

If however you observe that better needs discovery, stronger communications capabilities, or enhanced preparation explain most of the difference, then your sales elasticity is likely higher and you should focus more on sales enablement in the near term.

When times are tough and horizons are shorter, we all want to help as much as we can. But let’s not mistake hope for judgment when reallocating resources from marketing programs to sales enablement.

Tuesday, March 24, 2009

It's All Relative

Brilliant strategy? Check.

Sophisticated analytics? Check.

Compelling business case? Check.

Closing that one big hole that could torpedo your career? Uhhhhhhh.......

Most new marketing initiatives fail to achieve anything close to their business-case potential. Why? Unilateral analysis, or looking at the world only through your own company's eyes, as if there were no competition.

It sounds stupid, I know, yet most of us perform our analysis of the expected payback on marketing investments without even imagining how competitors might respond and what that response would likely do to our forecast results. Obviously, if we do something that gets traction in the market, they will respond to prevent a loss of share in volume or margin. But how do you factor that into a business case?

Scenario planning helps. Always "flex" your business case under at least three possible scenarios: A) competitors don't react; B) competitors react, but not immediately; C) competitors react immediately. Then work with a group of informed people from your sales, marketing, and finance groups to assess the probability of each of the three possibilities, and weight your business case outcomes accordingly.

If you want to be even more thorough, try adding other dimensions of "magnitude" of competitive response (low/proportionate/high) and "effectiveness" of the response (low/parity/high) relative to your own efforts. You then evaluate eight to 12 possible scenarios and see more clearly the exact circumstances under which your proposed program or initiative has the best and worst probable paybacks. Then if you decide to proceed, you can set in place listening posts to get early warnings of your competitor's reactions and hopefully stay one step ahead.
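
The weighting itself is simple arithmetic. A minimal sketch, with invented probabilities and paybacks:

```python
# Hypothetical probability-weighted business case: three competitive-
# response scenarios, each with an assessed probability and a projected
# payback (NPV, in $000s). All figures are invented for illustration.
scenarios = {
    "no reaction":        {"prob": 0.2, "npv": 900},
    "delayed reaction":   {"prob": 0.5, "npv": 550},
    "immediate reaction": {"prob": 0.3, "npv": 150},
}

# The assessed probabilities should cover all the possibilities.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

expected_npv = sum(s["prob"] * s["npv"] for s in scenarios.values())
print(f"Probability-weighted NPV: ${expected_npv:.0f}k")  # -> $500k
```

Extending to the eight to 12 scenarios above just means more rows in the table; the weighted sum works the same way.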

In the meantime, your CFO will be highly impressed with your comprehensive business case acumen. Check.

Thursday, March 12, 2009

Adam & Eve Beware: Another Apple in the Garden of Accountability

I saw an article in Ad Age today with the headline: "Banks That Spend the Most on TV Ads Performed the Best." It referred to "a new report from financial-services research firm Aite Group, which examined ad-spending trends and return on advertising performance of 32 of the largest 50 U.S. retail banks from 2006 through 2008," which found that the top 25% highest-performing banks are those with TV-heavy buys. The data was provided by TNS.

Great news for TV sales reps. Likewise for TV production companies. But a sucker's bet for bank marketing executives who would rip out that page of Ad Age and run into their CFO's office to defend a recommendation to spend more on TV.

First, this "study" didn't isolate the non-media marketing variables which may have affected the outcome. Little things like customer service quality, direct marketing spending programs, message effectiveness, word-of-mouth, etc. Bet they didn't count the number of toasters given away either.

Nor does it appear to have accounted for other characteristics of the banks themselves which may have driven performance higher. Price perhaps. Or interest rates charged/paid. Or branch location demographics. Or in-branch cross-sell incentives. Or other things which would never show up in syndicated spend data.

So what can a bank marketing executive take away from this study? Nothing. They just measured what was easy to measure and didn't answer ANY of the open questions surrounding the payback on marketing investments beyond a reasonable doubt. Worse, it is the apple and marketers are Eve. It beckons with faint promises to fulfill the desire to believe that it may offer "evidence" of the beneficial impact of marketing.

If you value your credibility, don't circulate stuff like this within your marketing organization, and don't EVER use it in discussions with a savvy financial executive. When you see a headline like this, just pass it along to your trustworthy, naturally-skeptical research professional and ask them to find the flaws.

Stuff like this isn't research. It's PR. One bite of this apple will cost you your reputation - perhaps permanently.

We'll keep working on raising the standards in the media on what passes for good content.

Wednesday, March 11, 2009

DMA Jumps the Shark

In true Hollywood fashion, the DMA announced their headliners for this year's DMA Days conference. The keynote speaker will be... drumroll please.... Ivanka Trump - that skilled expert in direct marketing and ROI.

Is it possible that the DMA is losing sight of its role and mission? In this current environment where attracting paying registrants to conferences is so difficult, they seem to have erred on the side of flash and glitz rather than meaningful substance and advancement of the trade.

It's hard to deny that Ivanka will put more butts in seats than the most compelling direct response case study. And perhaps more butts in seats for Ivanka actually will lead to more exposure of members to substantive content. But it's a sad day when such a long-valued industry organization can't find enough compelling content and has to resort to the equivalent of Fonzie jumping the shark to draw "spectators". Maybe next year they can get Criss Angel to come make their relevance re-appear.

Tuesday, March 10, 2009

How do You Know if it's Time to Spend MORE?

In times like these, budgeting and resource allocation decisions tend to get made fast and furious, with little time for clear thinking. Unfortunately, it’s exactly these times when some real discipline is required to both make smart decisions and build credibility with the rest of the senior management team. So if you’re thinking about recommending that your firm should be spending MORE on marketing right now, STOP.

If you’re thinking “let’s spend more now to gain share”…

Good luck. Headline-grabbing stories of marketing heroes who have taken this approach tend to emphasize the few who have succeeded and gloss over the vast majority who have simply squandered more by throwing money into an economic hurricane. The fact is that there’s not much empirical data to prove the merits of this strategy beyond a reasonable doubt. Many “studies” have been done, but none have derived their conclusions from projectable samples which account for the primary risk factors, nor have any led to any high-probability “formula” for succeeding with this strategy. The margin of error between success and failure tends to be very narrow. It’s a roll of the dice against pretty long odds.

If you’re thinking “we’ve got to keep up our spend to maintain our share of voice”…

Be careful. Matching competitive levels of spend (or making decisions on the basis of “share of voice”) is most often seen by CEOs and CFOs as foolish logic. How do you know the competitor isn’t making an irrational decision? What do you know about the effectiveness of your spending versus theirs? How much ground would you lose if they outspent you by a substantial amount? If you don’t have specific answers to these questions, relying on anecdotal evidence won’t help. It may get you the spend levels you’re requesting in the near term, but if it doesn’t work out, the memory of your recommendations will undermine your credibility for years to come.

When times get tough, buyers re-evaluate the value propositions of what they buy. They make tradeoffs on the basis of what is or isn’t “necessary” any more. Shouting louder (or in more places) is unlikely to break through newly-erected austerity walls.

To make a sound case for spending more, tune into what the CEO is looking for… leverage. They want to find places to squeeze more profitability out of the business. To help, focus your thinking around:

  • the relative strength of your value proposition, channel power, and response efficiencies versus your competitors.
  • your assumptions about customer profitability and prospect switchability as buyers cut back.
  • your price elasticity to find out where the traditional patterns may collapse or where opportunities may emerge.
  • the relevance, clarity, and distinctiveness of your message strategy, and your ability to defend it from copycat claims.

And make sure to check with finance to see if the company’s balance sheet is strong enough to handle higher levels of risk exposure during revenue-stressed periods. If it’s not, the whole question of spending more is moot.

If your comparative strengths seem to offer an opportunity, then increasing spend may just be a smart idea. But even so you have to anticipate that competitors aren’t just going to let you walk away with their customers or their revenues. And that may just leave you both with higher costs in times of lower sales. In technical parlance, this is known as a “career-limiting outcome”.

Friday, March 06, 2009

Another VP Marketing lost his job today...

... for the wrong reasons.

He was (is) strategically brilliant and a fountain of ideas. His insight into his customers was formidable. His managerial capabilities were strong. His relationships with sales management were excellent. And the CEO really liked him. In fact, the CEO really supported the launch of the new ad campaign a few months back that substantially increased the company's spend.

But now, several months past the campaign's initial wave, there is no credible evidence or consensus that it had any positive effect on sales, profits, customer value, or any other meaningful dimension. And the fact that unaided brand awareness was up X% was little comfort.

So this particular VP of Marketing was let go, and marketing is now reporting to the EVP of Sales.

I spoke to the CEO who told me "I relied on (the VP Marketing) to make smart decisions about where and how we spent our marketing resources. In the end, it wasn't the lack of positive results that bothered me about that campaign as much as it was our inability to really learn anything important about why it didn't work and what we should do next. So I felt I needed to place my bets somewhere I get better transparency and feedback... sales. It may be short-sighted, but we need to demonstrate learning and improvement every day, or we're just spinning our wheels."

In better times, the VP may have gotten another chance to iterate to the magic marketing formula. But in the current environment, you only get one chance. To learn, that is, and to build confidence and credibility even when your efforts fail to achieve the desired outcome. Which, in marketing, happens quite often.

Wednesday, February 25, 2009

Better Ways To Do More With Less

Crawling around inside a few dozen large marketing and finance organizations these past months, I’ve seen some evidence of five patterns of “do more with less” which seem to work best.

First, the “best” clearly define what “doing more with less” really means. The most common metric appears to be “marketing contribution efficiency” – an increase in the ratio of net marketing contribution per marketing dollar spent. That seems appropriate when budgets are falling (recognizing the need to monitor it over time, as it can be manipulated in the near term).
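
As a sketch (figures invented for illustration), the metric is just a ratio tracked period over period:

```python
# "Marketing contribution efficiency": net marketing contribution
# generated per marketing dollar spent. Quarterly figures ($000s) are
# hypothetical; a rising ratio as spend falls is the "more with less" signal.
def contribution_efficiency(net_marketing_contribution, marketing_spend):
    """Net marketing contribution per marketing dollar spent."""
    return net_marketing_contribution / marketing_spend

# (quarter, net contribution, marketing spend), all in $000s
quarters = [("Q1", 4_200, 2_000), ("Q2", 4_000, 1_700), ("Q3", 3_900, 1_500)]
for name, contribution, spend in quarters:
    print(f"{name}: {contribution_efficiency(contribution, spend):.2f} per $ spent")
```

In this invented series, spend falls faster than contribution, so the ratio climbs each quarter – exactly the near-term manipulation risk the parenthetical above warns about, which is why the "best" watch it over time.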

Second, when they cut, they do it strategically. Face it, most of us didn’t take Budget Cutting 101 in B-school. After eliminating travel and consultants and other easy stuff, bad decisions creep in under mounting political pressure. More about this in last week’s post.

Third, they watch the risk factors. CFOs want to cut marketing spend to increase the likelihood of (aka decrease risks against) making short-term profit goals. Yet when marketers try to do more with less, risk exposure rises in ways never imagined – especially if it wasn’t clear which elements of the marketing mix were working before the cuts. It’s the “risk paradox”. If you want to make sure your “less” really has a chance of doing “more”, manage the new risks that have silently crept into the plans.

Fourth, they avoid the ostrich effect. Just because there’s enormous pressure on today, the best don’t ignore the fact that tomorrow is right around the corner in the form of the 2010 plan. And when looking ahead, the only thing certain is that historical norms are no longer a reasonable guide. So the best are anticipating the key questions for the 2010 plan, and working on getting some answers now. They’re committed to leading the process, not getting dragged behind it.

Finally, the best push their marketing business case competency further, faster. The marketing skeptics and cynics have more political clout now. Untested assumptions, like ostriches, will not fly. Better business case discipline is the new currency of credibility.

We all have basically the same tools at our disposal to do more with less. The “best” seem to be able to apply their imagination most effectively in the use of those tools. I’m the world’s biggest proponent of the importance of creative inspiration and instinct, but the lesson here I think is to start the conversation these days with “what do we mean by ‘effective’?”