Tuesday, April 27, 2010

Measurement's Exponential Curve

I attended the Global Retail Marketing Association’s annual leadership conference recently and had the opportunity to interact with a few terrific speakers, including Ray Kurzweil – renowned futurist on all matters technological. Ray bombarded us with scientifically and econometrically based forecasts for where health, computing, and social technologies were headed.

Kurzweil has a pretty good track record in predicting these things (as evidenced by his success as an angel investor) based on a simple concept – that evolution in technology is not linear, but exponential. He cited examples of how this has been true across the technology spectrum time and time again, but also across the spectrum of life in general. By plotting the advent of major advances in the life sciences, one can quickly see how the pace of innovation is accelerating. In fact, the average lifespan of a child born today will extend well into the upper 80s, and that figure is still climbing. (Do the math, and if those of us in middle age can just wait long enough, we may live forever.)

As I listened, it occurred to me how appropriate this concept was in the arena of marketing measurement too. While marketers have been seeking to improve the effectiveness and efficiency of their efforts almost since marketing was born as a functional discipline in the early 20th century, the pace of discontinuous innovation is accelerating. And when I speak about discontinuous innovation, I’m not referring to the introduction of internet-based research tools, but fundamental changes in the way we are learning to understand marketing’s impact on the consumer/customer.

For example, traditional survey-based research techniques are proving to be far less predictive of human behavior than biometric scanning methods that monitor brainwaves, heart rate, respiration changes, or skin temperature. Today, advertisers can expose their creative messages to prospective customers and read the immediate response in involuntary biometric systems, which overcome the social and cultural biases that tend to filter logical survey responses.

Another example: many companies are dis-adopting regression-based marketing mix models in favor of artificial intelligence techniques and agent-based models, which focus on replicating thought processes in the full context of competitive dynamics instead of just looking for statistical relationships. True, these higher-intelligence techniques have been around for 30+ years and have not yet found significant penetration in marketing applications, but the PACE of adoption is now increasing noticeably, and innovation is driving relevance and applicability more quickly than ever.

All of this has me thinking these days that market research may be operating towards the end of its current lifecycle. The tools, methods, and techniques we use today will not persist more than another 10 to 20 years. We will learn to move past recording and clustering rational thought, and past our voyeuristic tendencies to predict future behavior based on past actions. And in the process, we will learn to reconcile what people say, think, and do with the powerfully innate drivers buried deep in our biological wiring – like in this example.

Inevitably, we will see many instances of borderline unethical manipulation which will slow this adoption curve a bit, but the amount of money and time being invested in these new areas of learning is far too great to be stalled by commercial missteps along the way.

So when you drag your finger across your touch-screen interface a few years from now, the underlying systems will capture not just what and where you touched, but your heart rate and body temperature at the time (along with possibly the size of your pupils). This information will feed logic-driven systems which will immediately adapt the type of images, sounds, and smells to your innate preferences to stimulate your interest far beyond anything we’ve achieved as marketers so far.

And just think about the impact this will all have on which metrics we adopt to measure performance…
_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on marketing metrics, ROI, and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.


Thursday, April 15, 2010

Memo from the CFO: The Best Response

Two weeks ago I posted a disguised version of an actual email from the CFO of a Global 1000 company to the CMO of that company, congratulating the CMO on doing a good job of improving marketing efficiency, but then raising questions about the effectiveness of that marketing. I invited readers of Metrics Insider to comment on how they would respond if they were the CMO.

Some of the responses were, not surprisingly, promotional messages for proprietary marketing measurement research or analytical methodologies being positioned as silver bullets to solve the problem. While I’m sure some of these solutions would be helpful to some degree, it’s naïve for ANY executive to pin their hopes of understanding marketing payback on any single tool, method, or metric. Many have tried this approach, but very few have succeeded as the “tool in lieu of disciplined evolution” tends to answer the immediate questions, but loses luster as dynamics (both external and internal) evolve. Besides, CFOs tend to be immediately suspicious of any tool packaged in hyperbole like “all you really need is…”

A few responses were pretty hostile. They came in the form of marketers berating the CFO (aka the “bean-counting techno-wonk”) for asking such questions in a way that implied the CMO should have had a much better handle on marketing effectiveness. (For a more humorous approach along these lines, see what Chris Koch wrote.) In truth, there was no malice in the questions posed – just a bit of naiveté on the part of the CFO with respect to the subtleties of marketing. Sometimes, that lack of understanding manifests itself in poorly chosen words. But rarely does that mean the issuer of the words is “anti-marketing”. They are just playing one of the many roles they are paid to play… the role of risk management. If the CFO is to adequately assess the corporate risks (including the risk of wasteful spending), they must have the confidence to challenge the evidence put forward in support of EVERY material investment. If you ask the head of IT, you’ll likely find that they too have felt the heat of the CFO’s microscope from time to time.

So if your first reaction was anger, get past it. Don’t let your own insecurities about marketing measurement negatively taint your assessment of logical questions that inquisitive but uninformed executives may ask.

I think, all things being equal, the best response would be:


To: Amy Ivers – CFO

From: Susan James – CMO

RE: Congratulations on your recent recognition for marketing efficiency


Amy –

Thanks for your note on measuring marketing effectiveness. You raise many good points that I too have been thinking about for some time. There are a number of ways we can approach answering these questions, but I’d need your help since some would inevitably require us to get comfortable with partial data sets, while others may necessitate a temporary step backwards in efficiency to enable some further testing. Together, we might be able to come up with an approach that John and the others on the executive committee find credible. But it might be a bit more involved than emails can adequately address.

I share your passion for better insight into marketing effectiveness. If you’d like to suggest a few possible dates/times, I’d enjoy getting together to bring you up to speed on what we’ve been able to do so far, where our current knowledge gaps are, and what we’re doing to try to close those gaps. I’d appreciate your critical assessment of what we’re doing, and any suggestions you may have for making us better.

Thanks for your input.

Susan.


In a nutshell, the best strategy in this type of situation is:
  1. Disarm and defuse. Take the emotion out of it, even if the historical frustration runs deep.
  2. Focus on defining the questions to be answered. Don’t jump into evidence-presentment mode until you have agreed on what the reasonable questions are; otherwise, you’ll be shooting at a moving target.
  3. Prioritize the questions. Don’t assume they’re all equally important, or you’ll fracture your answering resources into ineffectively small parts.
  4. Decompose the questions into small pieces. Define the sub-components and assess what the company knows and what it doesn’t know with respect to each of the small pieces. Trying to boil the ocean is another sure way to accomplish nothing.
  5. Admit your knowledge limits. Take care to conservatively label your assertions as facts, observations, or opinions derived from experience.
  6. Have a continuous improvement plan. Show your plan to improve the company’s marketing effectiveness in stages and manage the expectation that it might take time to tackle all the pieces to the roadmap absent a significant boost in resources.

I realize that many of you are caught in situations where the CFO’s questions may in fact be emanating from some apparent malice. In those cases, use honest questions to understand how much they actually know (as distinct from what they think they know). Their path to self-realization is only as fast as your skillful approach to engaging them to be part of the solution instead of just the identifier of the problem.

Using this approach, even the thorniest marketing/finance relationships can be improved by at least 50% (and I have the statistics to back it up).

Thanks for your comments.

Tuesday, March 30, 2010

Memo from CFO: Ad Metrics Not Good Enough


Following is a sanitized version of an actual email from a CFO to a CMO in a global 1000 company…



TO: Susan James – Chief Marketing Officer


FROM: Amy Ivers – Chief Financial Officer

RE: Congratulations on your recent recognition for marketing efficiency

Susan –

Congratulations on being ranked in the top ten of Media Week’s most recent list of “most efficient media buying organizations.” It is a reflection of your ongoing commitment to getting the most out of every expense dollar, and well-earned recognition.

But I can’t help but wonder, what are we getting for all that efficiency?

Sure, we seem to be purchasing GRPs and click-thrus at a lower cost than most other companies, but what value is a GRP to us? How do we know that GRPs have any value at all for us, separate from what others are willing to pay for them? How much more/less would we sell if we purchased several hundred more/less GRPs?

And why can’t we connect GRPs to click-thrus? Don’t get me wrong, I love the direct relationship we can see of how click-thrus translate into sales dollars. And I understand that when we advertise broadly, our click-thrus increase. But what exactly is the relationship between these? Would our click-thru rate double if we purchased twice as much advertising offline?

Also, I’m confused about online advertising and all the money we spend on both “online display” and “paid search”. I realize that we are generally able to get exposure for less by spending online versus offline, but I really don’t understand how much more and what value we get for that piece either.

In short, I think we need to look beyond these efficiency metrics and find a way to compare all these options on the basis of effectiveness. We need a way to reasonably relate our expenses to the actual impact they have on the business, not just on the reach and frequency we create amongst prospective customers. Until we can do this, I’m not comfortable supporting further purchases of advertising exposure either online or offline.

It seems to me that, if we put some of our best minds on the challenge, we could create a series of test markets using different levels of advertising exposure (including none) in different markets which might actually give us some better sense of the payback on our marketing expenditures. Certainly I understand that just measuring the short-term impact may be a bit short-sighted, but it seems to me that we should be able (at the very least) to determine where we get NO lift in sales in the short term, and safely conclude that we are unlikely to get the payback we seek in the longer term either.

Clearly I’m not an expert on this topic. But my experience tells me that we are not approaching our marketing programs with enough emphasis on learning how to increase the payback, and are at best just getting better at spending less to achieve the same results. While this benefit is helpful, it isn’t enough to propel us to our growth goals and, I believe, presents an increasing risk to our continued profitability over time as markets get more competitive.

I’d be delighted to spend some time discussing this with you further, but we need a new way of looking at this problem to find solutions. It’s time we stop spending money without a clear idea of what result we’re getting. We owe it to each other as shareholders to make the best use of our available resources.

I’ll look forward to your reply.

Thank you.


So how would you respond? I’ll post the most creative/effective responses in two weeks.

Wednesday, March 17, 2010

What 300 Years of Science Teaches Us About WOM Metrics

In the early 18th century, scientists were fascinated with questions about the age of the earth. Geometry and experimentation had already provided clues to the size of the planet, and to its mass. But no one had yet figured out how old it was.

A few enterprising geologists began experimenting with ocean salinity. They measured the level of salt in the oceans as a benchmark, and then measured it every few months thereafter, believing that it might then be possible to work backwards to figure out how long it took to get to the present salinity level. Unfortunately, what they found was that the ocean salinity level fluctuates. So that approach didn’t work.

In the 19th century, another group of geologists working the problem hypothesized that if the earth was once in fact a ball of molten lava, then it must have cooled to its current temperature over time. So they designed experiments to heat various spheres to proportionate temperatures and then measure the rate of cooling. From this, they imagined, they could tell how long it took the earth to cool to its present temperature. Again, an interesting approach, but it led to estimates in the range of 75,000 years. Skeptics argued that a quick look at the countryside around them provided evidence that those estimates couldn’t possibly be correct. But the theory persisted for nearly 100 years!

Then in the early part of the 20th century, astronomers devised a new approach to estimating the age of the earth through spectroscopy – they studied the speed with which the stars were moving away from earth (by measuring shifts in the light spectrum) and found a fairly uniform rate of recession. This allowed them to estimate that the earth was somewhere between 700 million and 1.2 billion years old. This seemed more plausible.

Not until 1956, after the discovery of radioactive half-lives, did physicists actually come up with the answer we have today. Studying various metals found in nature, they could measure the level of radiation in lead that had decayed from uranium, and then calculate backwards how long it had taken to reach its present level. They estimated, therefore, that the earth was 4 to 5 billion years old.

Finally, in 1959, geologists discovered the Canyon Diablo meteorite and the physicists realized that the earth must be older than the meteorite that hit it (seems logical). So they took radiological readings from the meteorite and dated it at 4.53 – 4.58 billion years old.

Thus we presently believe our planet’s age is somewhere in this range. It took the collective learnings of geologists, astronomers, and physicists (and a few chemists along the way), and over 250 years, to crack the code. Thousands of man-years of experimentation chased some smart and some not-so-smart theories, but we got to an answer that seems like a sound estimate based on all available data.

Why torture you with the science lecture? Because there are so many parallels to where we are today with marketing measurement. We’ve only really been studying it for about 50 years now, and only intensely so for the past 30 years. We have many theories of how it works, and a great many people collecting evidence to test those theories. Researchers, statisticians, ethnographers, and academics of all types are developing and testing theories.

Still, at best, we are somewhere between cooling spheres and radio spectroscopy in our understanding of things. We’re making best guesses based on our available science, and working hard to close the gaps and not get blinded by the easy answers.

I was reminded of this recently when I reviewed some of the excellent research done by Keller Fay Group in their TalkTrack® research which interviews thousands of people each week to find out what they’re talking about, and how that word-of-mouth (WOM) impacts brands. They have pretty clearly shown that only about 10% of total WOM activity occurs online. Further, they have established that in MOST categories (not all, but most), the online chatter is NOT representative of what is happening offline, at kitchen tables and office water coolers.

Yet many of the “marketing scientists” are still confusing measurability and large data sets of online chatter for accurate information. It’s a conclusion of convenience for many marketers. And one that is likely to be misleading and potentially career-threatening.

History is full of examples of how scientists were seduced by lots of data and wound up wandering down the wrong path for decades. Let’s be cautious that we’re not just playing with cooling spheres here. Scientific progress has always been built on triangulation of multiple methods. And while accelerating discoveries happen all the time through hard work, silver bullets are best left to the dreamers.

For the rest of us, it’s back to the grindstone, testing our best hypotheses every day.

Thursday, March 04, 2010

Prescription for Premature Delegation

When responsibility for selecting critical marketing metrics gets delegated by the CMO to one of his or her direct reports (or even an indirect report once- or twice-removed), it sets off a series of unfortunate events reminiscent of Lemony Snicket in the boardroom.

First of all, the fundamental orientation for the process starts off on an "inside/out" track. Middle managers tend (emphasize tend) to view the role of marketing with a bias favoring their personal domain of expertise or responsibility. It's just natural. Sure, you can counterbalance by naming a team of managers who will supposedly neutralize each other's biases, but the result is often a recommendation derived primarily through compromise amongst peers whose first consideration is often a need to maintain good working relationships. Worse yet, it may exacerbate the extent to which the measurement challenge is viewed as an internal marketing project, and not a cross-organizational one.

Second, delegating elevates the chances that the proposed metrics will be heavily weighted towards things that can more likely be accomplished (and measured) within the autonomy scope of the “delegatee”. Intermediary metrics like awareness or leads generated are accorded greater weight because of the degree of control the recommender perceives they (or the marketing department) have over the outcome. The danger here is of course that these may be the very same "marketing-babble" concepts that frustrate the other members of the executive committee today and undermine the perception that marketing really is adding value.

Third, when marketing measurement is delegated, reality is often a casualty. The more people who review the metrics before they are presented to the CMO, the greater the likelihood they will arrive "polished" in some more-or-less altruistic manner to slightly favor all of the good things that are going on, even if the underlying message is a disturbing one. Again, human nature.

The right role for the CMO in the process is to champion the need for an insightful, objective measurement framework, and then to engage their executive committee peers in framing and reviewing the evolution of it. Measurement of marketing begins with an understanding of the specific role of marketing within the organization. That's a big task for most CMOs to clarify, never mind hard-working folks who might not have the benefit of the broader perspective. And only someone with that vision can ruthlessly screen the proposed metrics to ensure they are focused on the key questions facing the business and not just reflecting the present perspectives or operating capabilities.

Finally, the CMO needs to be the lead agent of change, visibly and consistently reinforcing the need for rapid iteration towards the most insightful measures of effectiveness and efficiency, and promoting continuous improvement. In other words, they need to take a personal stake in the measurement framework and tie themselves visibly to it so others will more willingly accept the challenge.

There are some very competent, productive people working for the CMO who would love to take this kind of a project on and uncover all the areas for improvement. People who can do a terrific job of building insightful, objective measurement capabilities. But the CMO who delegates too much responsibility for directing measurement risks undermining both the insight and organizational value of the outcome -- not to mention the credibility of the approach in the eyes of the key stakeholders.

Tuesday, February 16, 2010

Lose the Marketing Love Handles Without Lifting a Finger

I got an email the other day from a marketing technology company trumpeting their software’s ability to help me “improve marketing ROI without lifting a finger.” Wow. Incredible. Can’t be true, can it?

I asked for a demonstration copy to see if I could realize the incredible benefit, but no luck. They wouldn’t send me one. So to test the validity of the claim, I went to the center of all things factual – the internet – to see what else I could do without lifting a finger. The options are amazing. I can:
  • Lose 20 pounds
  • Find a high-paying new job
  • Earn a college degree
  • Write a book (someone else will do it for me)
  • Drive more traffic to my website
  • Be healthier
  • Look better
  • Attract more members of the opposite sex
  • And, my personal favorite, grow more hair.
I feel stupid. I’ve been spending so much time at the gym, writing my own books, taking my own college exams, choosing my own food carefully, and fretting over my hair. I could have spent all that time goofing off and gotten better results.

And I’m really pissed off about the effort I’ve wasted on measuring and improving marketing ROI. For seven years now, I’ve been working on improving marketing ROI all day every day; working with hundreds of marketing, finance, and sales managers in dozens of companies; overcoming obstacles of technical, structural, cultural, and political dimensions; making slow and steady progress.

NOW I discover that, had I just purchased the right software, I could have achieved much more with virtually NO effort. If my clients ever find out, I’m screwed.

On the whole, I think this magic ROI elixir software is really a good thing. It will:
  • Reinforce CMOs’ desire to believe that they can and should delegate ROI efforts even further down the org chart. After all, they have many more important things to worry about.
  • Give marketing managers something more tangible to point to when asked “what are you doing to improve the return on our marketing investment?” Of course, we’ve bought some software to fix that.
  • Postpone the question another year while the software winds its way through the procurement process and then gets passed around the IT department – all the while allowing the marketers to keep doing things the way they have been doing them.
  • Befuddle the finance department and get them off marketing’s back. You know how those finance guys love data. They’ll gladly wait awhile if they think there’s some data coming.
So forget all that phooey about aligning on metrics, implementing smart experiments, and methodically improving analytics. Don’t waste time on smarter marketing research. Just cut that shrink wrap on the software box, hit “install”, and off you go.

Then wait for the Easter Bunny to deliver your bonus check.

Hyperbole is a dangerous tool in the hands of marketers – particularly when it comes to measuring marketing ROI. It undermines our credibility with the more serious financial types who often are key influencers on how many resources we get and what we can do with them. It reinforces their perceptions of marketers as wild-eyed optimists willing to try anything new to deflect the gravity of the questions being asked. Besides, if there WERE a magic marketing ROI software, do you really think your progressively-minded organization would be amongst the very first to find it?

Bad news. There is still no substitute for diligent, disciplined work when it comes to measuring the payback on marketing. Technology enables, but vision and persistence win every time. Show me a company with the will to work at it, and I’ll show you the company that will get clear insights into their ROI long before the software buyers ever realize they’ve been misled.

Measuring and improving ROI is much more like going to the gym every day; watching what you eat; taking classes to earn a degree; and (take it from one who’s done it) writing a book yourself. Persistent, methodical effort is rewarded with great benefits.

So let’s get after those spending love handles and the marketing muffin top.


Tuesday, February 02, 2010

My friend and business partner, Dave Reibstein of Wharton, writes an advice column called Ask Dave where former students write in with questions on marketing metrics and he offers some sage advice. This recent correspondence caught my attention for its relevance and timeliness for all of you with new fiscal years beginning soon…

Dear Dave –

I’m a finance manager at a large industrial manufacturer. I often sit in the meetings where marketing comes in and shows us how low our marketing spend is compared to our competitors, and then asks for more money.

Maybe I’m wrong, but I don’t see that matching competitor spend is a path to success, is it? How should I coach our marketing team about what a better analysis would look like and what information they should bring to the table?

Sincerely –
Geoff D. in Chicago


Dear Geoff —

The right amount of spend is indeed a relative thing. Our product appeal, our pricing, our packaging, and just about every aspect of our value proposition are only important RELATIVE to the alternatives the customer has. Likewise, our spending is also, in part, only effective RELATIVE to what others are spending. But there are huge risks tied to making that relative spend the center of any strategy, as we are often following moves that deserved no response.

For example:
  • A while back there was a “holey war” between laundry iron manufacturers. One manufacturer first put holes in the bottom of its iron, allowing for steaming of the linens as they were being ironed. The competitor responded by adding more holes to its own model. Then the race was on to see who could add more holes, until the number of holes was well beyond what the customer cared about.
  • Pepsi chased Coke into the low-carb soft drink market (half the calories and half the carbs of normal colas). Hundreds of millions were spent before anyone realized that consumers who cared about their carb intake wanted ZERO carbs and calories, not half.
  • P&G chased after Kimberly Clark in the introduction of moistened toilet tissue. Both were greeted with failure in this market.
In each case, it was easier to approve the budgets for these programs once it was learned that competition was going to be moving there too. Fear of falling behind is a powerful motivator. The aversion to loss is much greater than the attraction of gain (see Prospect Theory). Marketing managers who use competitive spending as a primary justification for their own spending are (most often unknowingly) playing to this loss-aversion instinct.

But when we measure success on a relative basis (e.g. “share of voice” or “market share”), our behavior is only intended to keep up with competition. Also, the notion that we shouldn’t spend on something until we witness competition doing so implies that competition knows more about the right action than we do, which is very often not true. So it results in the unwise being led by the equally unwise. Even worse, it results in never taking the steps to actually get ahead of the competition.

The proven path to success is careful analysis of what makes a difference. Rigorous testing of historical data, or experimentation with spending levels to see what really does work, will allow you to take progressive action without having to wait for competition to move first. Smarter companies seem to gauge success on the absolute impact marketing has on the financial goals of the firm.

It all comes down to what your mother always told you… “have a mind of your own”.
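Dave’s advice about experimenting with spending levels can be made concrete with a little arithmetic. Here is a minimal sketch (in Python) of the kind of matched-market comparison he describes; every figure in it – market sales, spend levels – is an invented assumption for illustration, not data from the column:

```python
# Minimal sketch: estimate the incremental sales impact of ad spend by
# comparing test markets (extra spend) against matched control markets.
# All figures are invented for illustration.

test_sales = [120_000, 98_000, 143_000]      # sales in markets given extra spend
control_sales = [110_000, 95_000, 130_000]   # sales in matched markets left alone

# Total incremental sales across the matched pairs
incremental = sum(t - c for t, c in zip(test_sales, control_sales))

extra_spend = 25_000  # assumed total incremental ad spend in the test markets
lift_per_dollar = incremental / extra_spend

print(f"Incremental sales: ${incremental:,} "
      f"(${lift_per_dollar:.2f} per incremental ad dollar)")
```

Crude as it is, a calculation like this ties spend to an absolute outcome – exactly the “mind of your own” posture Dave recommends over matching competitor budgets.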


Tuesday, January 19, 2010

The Cost of NOT Branding

It’s a simple formula: recession requires more tactical spending. This year’s budget = online spend + social activity + lead generation campaigns – brand investment.

When the dollars get tight, spend shifts to more tangible, less expensive marketing programs with the promise of shorter-term returns (or at least lower costs). Not that there’s anything wrong with saving a few bucks wherever you can get the job done more efficiently. But when saving money becomes the goal instead of a guideline, something big always suffers… usually the brand.

While this is an important problem within the B2C community, it is absolutely URGENT within the B2B community. B2B marketers in large numbers have seen their marketing resources cut back dramatically for anything that isn’t expected to generate significant near-term flows of qualified sales leads. Why? Because absent good metrics to connect brand or longer-term asset development to actual financial value, these things were seen as strategic luxuries that could be postponed.

If I were a CFO looking for strategies to free up cash, I might have reached the same conclusion. Unless my marketing team could explain to me the cost of NOT investing in brand.

Here’s an example… A B2B enterprise technology player (Company X) dropped all marketing programs except those that A) specifically promoted product advantages or B) generated suitable numbers of qualified leads to offset the cost. After a few months, leads were on target, but the sales closing cycle was creeping up. What was originally a 6 to 9 month cycle was becoming 9 to 12 months. Further analysis and research amongst prospects and customers showed that indeed some of this delay was being caused by the general economic uncertainty and the need for buyers to rationalize their purchases internally with more people. But fully 45 days of this extended cycle (estimated by sales managers) was happening because the ultimate decision-makers weren’t sufficiently familiar with the strength of Company X’s product/service offering. (They thought Company X made small consumer electronics, and was not a serious player in enterprise tech.) So the sales team had to make repeated visits and presentations just to work their way into the game to compete on feature/function/price/value.

In this case, the question of the cost of NOT branding could be measured by the increased cost of direct sales associated with NOT branding. Specifically, if Company X measures the sales cost/dollar of contribution margin amongst accounts with strong brand consideration versus those with little-to-no brand perceptions, they should expect to see at least a 50% difference (nine months of effort vs. six), half of which would be attributable to low levels of brand consideration. Multiply that by the percentage of prospects in the addressable market with low levels of brand perception, and you can quickly derive a rough approximation of the cost of NOT branding, expressed either in terms of additional sales headcount required to compensate for lack of branding, or in terms of sales opportunity cost to compensate for an underdeveloped brand. Either way, it’s an eminently measurable problem that would better illuminate the business case for investing in brand development.
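That back-of-the-envelope math can be sketched in a few lines. Every input below is an illustrative assumption (not Company X’s actual figures), wired together the way the paragraph above describes: a longer sales cycle inflates selling cost per margin dollar, half of the inflation is blamed on weak brand perception, and the result is scaled by how much of the market barely knows the brand.

```python
# Rough sketch of the "cost of NOT branding" arithmetic.
# All inputs are hypothetical, for illustration only.

def cost_of_not_branding(
    annual_contribution_margin,   # $ margin pool from the addressable market
    sales_cost_ratio_strong,      # sales cost per $ of margin, strong-brand accounts
    cycle_months_strong,          # sales cycle with strong brand consideration (e.g. 6)
    cycle_months_weak,            # sales cycle with weak brand perception (e.g. 9)
    share_attributable_to_brand,  # fraction of the extra cycle blamed on low awareness
    pct_prospects_weak_brand,     # share of addressable prospects with weak perception
):
    # A longer cycle means proportionally higher selling cost per margin dollar.
    uplift = (cycle_months_weak / cycle_months_strong) - 1.0   # 0.5 for 9 vs. 6
    extra_cost_ratio = sales_cost_ratio_strong * uplift * share_attributable_to_brand
    return annual_contribution_margin * extra_cost_ratio * pct_prospects_weak_brand

# Made-up numbers: $50M margin pool, $0.20 of sales cost per margin dollar,
# 9- vs. 6-month cycles, half the delay due to brand, 60% weak-perception prospects.
estimate = cost_of_not_branding(50_000_000, 0.20, 6, 9, 0.5, 0.60)
print(f"Approximate annual cost of NOT branding: ${estimate:,.0f}")
```

Swap in your own margin pool, cycle lengths, and perception mix; the point is only that each term in the estimate is something sales and research teams can actually measure.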

There are many other ways to measure the cost of NOT branding, including relative margin realized and strategic segment penetration, amongst others. The right approach for you will depend upon your organization’s key business goals.

Now I’m NOT advocating branding as a solution in every circumstance. Nor am I a proponent of the idea that marketing should generally be spending more money versus less. But as a tireless advocate for marketing effectiveness and efficiency, I think we too often fail to examine the business case for NOT doing something as a means of pushing past cultural and political obstacles in our management teams. Remember, there are always two options… DO something, and NOT do something. Each is a definitive choice requiring its own payback analysis.

_________________________
Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, January 05, 2010

Forecast for 2010: Better Forecasting

Yogi Berra said, “It’s tough to make predictions, especially about the future.” Yet the turn of the year (and the decade) makes forecasting an irresistible temptation. But what if forecasting is part of your job, not just a hobby? How do you make sure your forecasts are smart, relevant, and even (dare I say) accurate?

While advanced mathematics and enormous computational power have improved forecasting potential significantly, few would argue that forecasting is an exact science. That’s because at its core, forecasting is still mostly a human dynamic where accuracy is dependent upon…

  • asking the right people the right questions;
  • their willingness to answer truthfully and completely;
  • the ability to separate the meaningful elements from the noise; and,
  • the openness of the forecaster to suggestions of process improvement.

That last point is key: process improvement.

Consistently good forecasting isn’t a mathematical exercise performed at regular intervals (e.g., quarterly) as much as it’s an on-going process of gathering and evaluating dozens or hundreds of points of information into a decision framework. Then, when called upon, this decision framework can output the best forward-looking view grounded in the insights of the contributors. While software can facilitate process structure by prompting for specific fields of information to be included, it cannot make judgments on the quality of the information being input. Garbage in; garbage out.

As marketers, our job is to consistently prepare forecasts that help our companies conceive, plan, test, build and ultimately sell successful products and services. Sound forecasting processes form the foundation of an “early warning” system to alert the rest of the organization to the need to rethink its market orientation. In essence, forecasting becomes the rudder that can help your company stay the course, change directions, or navigate uncharted waters with confidence. As such, marketing migrates from being a tactical player to a strategic resource for the CEO when forecasts become more accurate, timely, and reliable.

Five Keys to Better Forecasting

Here are a few things I’ve learned over the years which result in much better forecasts:

  1. Be Specific. Define exactly what you are trying to forecast. If you say “sales”, what do you really mean? Revenue? Unit volume? Gross Margins? Net profit? The differences are substantial and might cause you to take very different approaches to forecasting. Likewise, having some sense of how far out you need to forecast (e.g. 3 months, 12, 36, etc.) and how accurate you need to be will guide you to use forecasting methods and processes better suited to your objectives.
  2. Be Structured. Being methodical in defining all of the dimensions, variables, facts, and assumptions will pay huge dividends in several ways, including explaining your forecast to skeptics and inspiring confidence that you’ve been comprehensive and credible in your approach.
  3. Be Quantitative – With or Without Data. Regardless of how little data you have, there are scientifically developed and proven ways of making better decisions. You may not have the raw materials for statistical regression forecasting, but you surely can use Delphi techniques or other judgmental calculus tools to transform perceptions and intuition of managers into data sets which can be more fully examined and questioned. Often, the process of quantifying the fuzzier logic uncovers great insights that were previously overlooked.
  4. Triangulate – Use multiple forecasting methods and see how the results differ. Chances are that the “reality” is somewhere within the triangle of results. That level of accuracy may be sufficient. But even if it isn’t, the multiple-method approach highlights weaknesses in any single method which might otherwise be overlooked – and that in itself leads to more accurate forecasts.
  5. KISS – Keep it simple, stupid. As with most things in life, simplicity is a virtue in forecasting. Einstein said that “things should be made as simple as possible, but no simpler.” In forecasting, we interpret that to mean that an accurate and reliable forecasting process should be comprehensive enough to identify the truly causal factors, but simple enough to explain to those who will need to make decisions upon it. There is no power in a forecast if those who need to trust it cannot understand or explain the logic and process behind it.
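The triangulation idea in point 4 can be shown with three deliberately simple methods run against the same history. The sales series and the methods (naive, moving average, least-squares trend) are illustrative stand-ins; in practice you would pit whatever models your team actually trusts against one another and see where the answers cluster.

```python
# Triangulating a one-step-ahead forecast with three simple methods.
# Data is made up; the spread between methods is the useful output.

sales = [100, 104, 103, 110, 112, 118, 121, 125]  # e.g. monthly unit sales

# Method 1: naive -- next period equals the last observed period.
naive = sales[-1]

# Method 2: 3-period moving average.
moving_avg = sum(sales[-3:]) / 3

# Method 3: linear trend fit by least squares over the whole history.
n = len(sales)
xs = range(n)
x_bar = sum(xs) / n
y_bar = sum(sales) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, sales))
         / sum((x - x_bar) ** 2 for x in xs))
trend = y_bar + slope * (n - x_bar)  # extrapolate one step past the data

forecasts = {"naive": naive, "moving average": moving_avg, "trend": trend}
for name, value in forecasts.items():
    print(f"{name:>14}: {value:.1f}")
print(f"triangulated range: {min(forecasts.values()):.1f}"
      f" to {max(forecasts.values()):.1f}")
```

Here the naive method anchors low, the trend fit anchors high, and “reality” is likely somewhere in that band; a wide band is itself a signal that one of the methods is missing something.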

Recognizing forecasting to be a complex human decision process is the first step towards dramatically improving your batting average: the accuracy and reliability of the forecasts coming out of your department.

If you’re interested in learning more, here is some expanded forecasting insight, and some great sources of forecasting methods and tools.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – specialist advisors on marketing measurement and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, December 22, 2009

Measuring What Matters Most

What is the most important thing in your marketing plan to measure?
A) Campaign response.
B) Customer satisfaction.
C) Brand value.
D) Media mix efficiency.
E) All of the above.

The fact is that there are so many things to measure that more and more marketers are getting wrapped around the axle of measurement, wasting time, energy, and money chasing insight into the wrong things. Occasionally this is the result of prioritizing metrics based on what is easy to measure, in a well-intentioned but misguided attempt to just “start somewhere”. Sometimes it comes from an ambitious attempt to apply rocket-science mathematics to questionable data in search of definitive answers where none exist. But most often it is the challenge of even identifying what the most important metrics are. So here’s a way to isolate the things that are really critical, and thereby the most critical metrics.

Let’s say your company has identified a set of 5 year goals including targets for revenue, gross profit margin, new channel development, customer retention, and employee productivity. The logical first step is to make sure the goals are articulated in a form that facilitates measurement. For example, “opening new channels” isn’t a goal. It’s a wish. “Obtaining 30% market share in the wholesale distributor channel within five years” is a clear, measurable objective.

From those objective statements, you can quantitatively measure the size of the gap between where you are today and where you need to be in year X (the exercise of quantifying the objectives will see to that). But just measuring your progress on those specific measures might only serve to leave you well informed on your trip to nowheresville. To ensure success, you need to break each objective down into its component steps or stages. Working backwards, for example, achieving a 30% share goal in a new channel by year 5 might require that we have at least a 10% share by year 4. Getting to 10% might require that we have contracts signed with key distributors by year 3, which would mean having identified the right distributors and begun building relationships by year 2. And of course you would need all your market research, pricing, packaging, and supply chain plans completed by the end of year 1 so you could discuss the market potential intelligently with your prospective distributors.
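The back-casting just described amounts to a milestone checklist walked in reverse. A minimal sketch, using the hypothetical distributor-channel milestones from the example above (the `off_track` helper is illustrative, not a standard tool):

```python
# Reverse-engineered milestones for the hypothetical "30% share in the
# wholesale distributor channel by year 5" goal described above.

milestones = {
    1: "research, pricing, packaging, supply-chain plans complete",
    2: "right distributors identified, relationships started",
    3: "contracts signed with key distributors",
    4: "10% share in the wholesale distributor channel",
    5: "30% share in the wholesale distributor channel",
}

def off_track(year, achieved):
    """Return the first milestone year at or before `year` not yet achieved,
    or None if everything due so far is done."""
    for y in sorted(milestones):
        if y > year:
            break
        if y not in achieved:
            return y
    return None

# e.g. in year 3, plans and distributor relationships are done,
# but contracts are not yet signed:
print(off_track(3, achieved={1, 2}))  # flags year 3
```

The metrics that matter are exactly the ones that tell you whether each entry in that dictionary is true on schedule.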

When you reverse-engineer the success trajectory on each of your goals, you will find the critical building block components. These are the critical metrics. Monitor your progress towards each of these sub-goals and you have a much greater likelihood of hitting your longer-range objectives.

Kaplan and Norton, the pair who brought you the Balanced Scorecard and Strategy Mapping, have a simple tool they call Success Mapping to help diagram this process of selecting key measures. Each goal is broken down into critical sub-goals. Each sub-goal has metrics that test your on-track performance. A sample diagram follows.

By focusing on your sub-goals, you can direct all measurement efforts to those things that really matter, and invest in measurement systems (read: people and processes, not just software) in a way that’s linked to your overall business plan, not as an afterthought.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – objective advisors on marketing measurement and resource allocation, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Tuesday, December 08, 2009

Making the "Best" Business Case for Marketing Investments

More than ever before, the approval of any significant marketing initiative is dependent upon a compelling business case. A business case is meant to function as a logical framework for the organization of all of the benefit and cost information about a given project or investment.

Working with this definition, one might conclude that a “good” marketing business case is one that increases the quality of decision making. Yet many of us in marketing have come to believe that a good business case is one that predicts a significantly positive ROI, IRR, and/or NPV for a given investment. Strangely, we tend to water down any assumptions that actually seem to make the case “too good,” lest someone in finance really begin questioning our assumptions. Have you ever found yourself…

  • Using aggressive, moderate, and conservative labels on business case scenarios to show how even the most conservative view provides a strong potential return, and anything beyond that is gravy?
  • Identifying the always-low break-even point at which expenses are recaptured fully, and showing how this point occurs even below the conservative outcome scenario?
  • Taking a “haircut” in assumptions to show how, “even if you cut the number in half, the result is still positive.”

Every time you use one of these approaches in an effort to build credibility with finance or other operating executives, you paradoxically wind up undermining it instead. These tactics all have been shown to communicate subtle messages of inherent bias and manipulative persuasion which, intended or not, are noticeable to non-marketing reviewers – even if only on an instinctive level versus a conscious one.

In my experience, business cases get rejected most often for one of the following reasons:

  • Bias – senior management perceives that marketing is trying to “sell” something rather than truly understand the risk/reward of the proposed spending recommendations.

  • Jumping to the Numbers – showing final forecasts which contradict executive intuition before they have had a chance to reconsider the validity of those instincts.

  • Credibility of Assumptions – forecasts seem to ignore the effect of key variables or predict unprecedented outcomes.

Successful business-case developers recognize that there is more at stake than just getting funding approved. In reality, there are several objectives which must all be achieved with every business case:

  1. Protecting personal credibility. Any one program or initiative may be killed for many possible reasons. But you will still need to come to work tomorrow and be effective with your executives, peers, and team members. Preserving (and strengthening) your personal credibility is therefore the paramount objective.

  2. Enhancing the role of marketing. If you have personal credibility, you will want to use it to take smart risks to help the company achieve its objectives, and to influence matters relating to strategy, products, markets, etc. In the process, you need to be thinking about the role of the marketing function; how it can best serve the firm; and how you need to evolve it from what it is today.

  3. Bringing attractive options to the CEO – the kind that forces him/her to make hard decisions choosing between financially appealing alternatives.

There are always two dimensions to business case quality – financial attractiveness, and credibility of assumptions. In the end, it takes more than just financial attractiveness for a successful business case. It takes:

  • Thoughtfulness: demonstrating keen understanding of the role marketing plays in driving business outcomes and reflecting the input of the most critical stakeholders throughout the organization.
  • Comprehensiveness: including all credible impacts of spending recommendations, and calculating benefits and costs at an appropriate level of granularity.
  • Transparency: Clearly labeling all assumptions as such and presenting them in a way that encourages healthy discussion and challenge.

There are many ways to build a successful business case. But the most important learning is to understand the context in which your proposal will be evaluated BEFORE you put the numbers on the table.

-------------------------
Pat LaPointe is managing partner at MarketingNPV – objective advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Thursday, November 12, 2009

Let's Bury John Wanamaker

There were so many interesting aspects of this year’s Masters of Marketing conference. One in particular caught my attention right up-front on Friday morning…

In this, the 100th year of the ANA, there are still lots of questions surrounding which 50% of advertising is “wasted”. I find that astounding. That 100 years later we’re still having this debate.

Maybe it’s because the very nature of advertising defies certainty.

Or maybe the definition of “wasted” is too broad.

Or maybe the reality is that the actual waste factor has been reduced to significantly less than 50%, but no one famous ever said that “15% of my advertising is wasted, I just don’t know which 15%.” And it wouldn’t make for a provocative PowerPoint slide even if they did.

It’s difficult to ignore the many signs of great progress we’ve made as an industry towards better understanding the financial payback of marketing and advertising. For example:

  • Research techniques have improved and the frequency of application has increased to provide better perspective on how actions affect brands and demand.

  • We’ve not only embraced analytical models in many categories, but have moved to 2nd and even 3rd generation tools that provide great insight.

  • We’ve adopted multivariate testing and experimental design to test and iterate towards effective communication solutions.

  • We’re learning to link brand investments to cash flow and asset value creation, so CFOs and CEOs can adopt more realistic expectations for payback timeframes.

All of this is very encouraging. Most of the presenters at this year’s conference included in their remarks evidence that they have been systematically improving the return they generate on their marketing spend by use of these and other techniques. So where is the remaining gap (if indeed one exists)?

First off, it seems that we’re often still applying the techniques in more of an ad-hoc than integrated manner. In other words, we appear to be analyzing this and researching that, but not actually connecting this to that in any structured way.

Second, while some of the leading companies with resources to invest in measurement are leading the charge, the majority of firms are under-resourced (not just by lack of funds, but people and management time too) to realistically push themselves further down the insight curve. In other words, the tools and techniques have been proven, but still require a substantial effort to implement and adopt.

Third, not everyone agrees with Eric Schmidt’s proclamation that “everything is measurable”. Some reject the basic premise, while others dismiss its applicability to their own very non-Google-like environments.

So what will it take to put John Wanamaker out of our misery before the 200th anniversary of the ANA?

  1. Training – exposing more marketing managers to more measurement techniques so they can apply their creative skills to the measurement challenge with greater confidence.
  2. A community-wide effort to push down the cost of more advanced measurement techniques, thereby putting them within reach of more marketing departments.
  3. An emphasis on “integrated measurement”. We’ve finally embraced the concept of “integrated marketing”. Now we have to apply the same philosophy to measurement. We need to do a better job of defining the questions we’re trying to answer up-front, and then architecting our measurement tools to answer the questions, instead of buying the tools and accepting whatever answers they offer while pleading poverty with respect to the remaining unanswered ones.
  4. We should eat a bit of our own dog food and develop external benchmarks of progress (much like we do with consumer research today). Let’s stop asking CMOs how they think their marketing teams are doing at measuring and improving payback, and work with members of the finance and academic communities to define a more objective yardstick with which we can measure real progress.

As we embark on the next 100 years, we have the wisdom, technology, and many of the tools to finally put John Wanamaker to rest. With a little concerted effort, we can close the remaining gaps to within a practical tolerance and dramatically boost marketing’s credibility in the process.


From Time’s a Wasting – for more information visit anamagazine.net.
--------------------

Pat LaPointe is managing partner at MarketingNPV – objective advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

"That is the Big Challenge" says Eric Schmidt

I had a chance to talk briefly with Eric Schmidt, CEO of Google, at last week’s ANA conference. He’d just finished sharing his take on marketing and advertising with 1200 of us representing marketers, agencies, and supporting service providers. He said:

  • Google backed away from managing radio and print advertising networks due to lack of “closed loop feedback”. In other words, they couldn’t tell an advertiser IF the consumer actually saw the ad or if they acted afterward. Efforts to embed unique commercial identifiers into radio ads exist, but are still immature. And in print, it’s still not possible to tell who (specifically) is seeing which ads – at least not until someone places sensors between every two pages of my morning newspaper.
  • Despite this limitation, Schmidt feels that Google will soon crack the code of massive multi-variate modeling of both online and offline marketing mix influences by incorporating “management judgment” into the models where data is lacking, thereby enabling advertisers to parse out the relative contribution of every element of the marketing mix to optimize both the spend level and allocation – even taking into account countless competitive and macro-environmental variables.
  • That “everything is measurable” and Google has the mathematicians who can solve even the most thorny marketing measurement challenges.
  • That the winning marketers will be those who can rapidly iterate and learn quickly to reallocate resources and attention to what is working at a hyper-local level, taking both personalization and geographic location into account.
On all these fronts, I agree with him. I’ve actually said these very things in this blog over the past few years.

So when I caught up with him in the hallway afterward, I asked two questions:

  1. How credible are these uber-models likely to be if they fail to account for “non-marketing” variables like operational changes affecting customer experience and/or the impact of ex-category activities on customers within a category (e.g., how purchase activity in one category may affect purchase interest in another)?

  2. At what point do these models become so complex that they exceed the ability of most humans to understand them, leading to skepticism and doubt fueled by a deep psychological need for self-preservation?
His answers:

  1. “If you can track it, we can incorporate it into the model and determine its relative importance under a variety of circumstances. If you can’t, we can proxy for it with managerial judgment.”

  2. “That is the big challenge, isn’t it.”
So my takeaway from this interaction is:

  • Google will likely develop a “universal platform” for market mix modeling, which in many respects will be more robust than most of the other tools on the market – particularly in terms of seamless integration of online and offline elements, and web-enabled simulation tools. While it may lack some of the subtle flexibility of a custom-designed model, it will likely be “close enough” in overall accuracy given that it could be a fraction of the cost of custom, if not free. And it will likely evolve faster to incorporate emerging dynamics and variables as their scale will enable them to spot and include such things faster than any other analytics shop.

  • If they have a vulnerability, it may be under-estimating the human variables of the underlying questions (e.g., how much should we spend and where/how should we spend it?) and of the potential solution.

Reflecting over a glass of Cabernet several hours later, I realized that this is generally a good thing for the marketing discipline as Google will once again push us all to accelerate our adoption of mathematical pattern recognition as inputs into managerial decisions. Besides, the new human dynamics this acceleration creates will also spur new business opportunities. So everyone wins.

Friday, October 30, 2009

The 5 Questions That Kill Marketing Careers

As the planning cycle renews itself, you should be aware of 5 key questions which have been known to pop up in discussion with CEOs/CFOs and short-circuit otherwise brilliant marketing careers.

  1. What are the specific goals for our marketing spending and how should we expect to connect that spending to incremental revenue and/or margins?

  2. What would be the short and long-term impacts on revenue and margins if we spent 20% more/less on marketing overall in the next 12 months?

  3. Compared to relevant benchmarks (historical, competitive, and marketplace), how effective are we at transforming marketing investments into profit growth?

  4. What are appropriate targets for improving our marketing leverage ($’s of profit per $ of marketing spend) in the next 1/3/5 year horizons, and what key initiatives are we counting on to get us there?

  5. What are the priority questions we need to answer with respect to informing our knowledge of the payback on marketing investments and what are we doing to close those knowledge gaps?

How you answer these five questions will get you promoted, fired, or worse: marginalized.

If you tend to answer in a dizzying array of highly conceptual (e.g., brand strength rankings compared to 100 other companies) and/or excruciatingly tactical (e.g., cost per conversion on website leads) “evidence”, stop. Preponderance of evidence doesn’t win in the court of business. Credible, structured attempts to answer the underlying financial questions do.

The five questions are all answerable, even in the most data-challenged environments. Provided, of course, that you build acceptance of the need for some substantial assumptions to be made in deriving the answers – much the same way as you would in building a business case for a new plant, or deploying a new IT infrastructure project. The key is to make the assumptions explicit and clearly define the boundaries of facts, anecdotes, opinions, and guesses. Think in terms of:

  • Clarifying links between the company’s strategic plan and the role marketing plays in realizing it;
  • Connecting every tactical initiative back to one or more of the strategic thrusts in a way that makes every expenditure transparent in its intended outcome, thereby promoting accountability for results at even the most junior levels of management;
  • Defining the right metrics to gauge success, diagnose progress, and better forecast outcomes;
  • Developing a more methodical (not “robotic”) learning process in which experiments, research, and analytics are used to triangulate on the very types of elusive insights which create competitive advantage; and
  • Establishing a culture of continuous improvement that seeks to achieve quantifiably higher goals year after year.

As you push boldly into 2010, remember that 2011 is just around the corner. You may not have been asked these hard questions this year, but who knows who your CFO might talk to next year (me, perhaps??) that will ratchet up his/her expectations. You can prepare by using these five questions as a framework to benchmark where your marketing organization is starting from; as a guide to ensure that sufficient resources are being allocated to promote continuous learning and improvement; and as a means of monitoring the performance of your marketing organization.

_________________________

Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Wednesday, September 30, 2009

Memorandum
To: John Buleader
From: Barbara Researcher
Subject: How to improve our Net Promoter Scores

In response to your question of last week, I have considered several options for how we can improve our recently flagging Net Promoter scores and thereby increase that portion of our year-end bonus linked to that specific metric.

  1. We could restrict our sample of surveyed respondents to only those who have recently purchased from us, and ignore both those who didn’t like us enough to buy from us and those who bought from us a while ago but may be having second thoughts due to our poor reliability and service.
  2. We could change our sampling approach to only solicit surveys from those who buy online since our website is so slick and efficient. This has the added benefit of reducing our research expenses so we can still afford those front-row football tickets.
  3. We could change the way we calculate Net Promoter to take the percentage of customers who score us as 7 through 10 and subtract those who score us as 1 or 2, since we know that 3s to 6s are really the “marginal” middle group, thereby taking some credit for producing partial satisfaction.
  4. We can offer customers a $10 bonus coupon to allow our sales associates to “help” them complete the survey before they leave the store, thus providing both convenience and value to our customers.
  5. We can reduce the frequency of surveying from monthly to annually so as to make it virtually impossible to link our marketing or sales actions back to increases or decreases in the scores. This will create so much confusion over interpretation and causality that bonuses will have long been paid by the time anyone actually agrees on what to do next.
  6. We can have our sales reps do the surveying themselves. This will allow us to capture notations about body language of the respondents too (side benefit: see football reference above).

Any or all of these strategies could essentially ensure success. Provided our market share doesn’t fall too fast, we’re unlikely to draw any undue attention.

Please let me know how you would like to proceed. We can also adjust any of our other metrics in similar ways.
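For readers keeping score on point 3 of the memo: the standard Net Promoter calculation counts promoters as those scoring 9–10 and detractors as those scoring 0–6 on the 0–10 “how likely are you to recommend us?” scale. A quick sketch contrasting the honest formula with the memo’s gamed version (the survey scores are made up):

```python
# Standard Net Promoter Score vs. the memo's creative accounting.

def nps(scores):
    # Standard definition: % promoters (9-10) minus % detractors (0-6).
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / n

def gamed_nps(scores):
    # Point 3's version: count 7-10 as promoters, only 1-2 as detractors.
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 7)
    detractors = sum(1 for s in scores if s <= 2)
    return 100 * (promoters - detractors) / n

scores = [10, 9, 8, 7, 7, 6, 5, 4, 2, 1]
print(nps(scores))        # 2 promoters - 5 detractors out of 10 -> -30.0
print(gamed_nps(scores))  # 5 "promoters" - 2 "detractors" -> 30.0
```

Same ten customers, a 60-point swing: which is precisely why redefining the metric is so tempting, and so corrosive.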
_________________________


Pat LaPointe is Managing Partner at MarketingNPV – specialty advisors on measuring payback on marketing investments, and publishers of MarketingNPV Journal available online free at www.MarketingNPV.com.

Friday, September 18, 2009

Eat Pizza; Stay Thin - But Sacrifice Credibility

There it is again.

I’m sitting in a presentation at a meeting of a group of mid-level marketers representing some of America’s biggest ad budgets. The speaker, who is representing a media source (monopoly, government owned), is telling us that spending more on marketing in a recession is a good way to improve ROI. His argument: most marketers pull back in a recession, so if you maintain or increase your spending, your message will get through to more prospective customers, more often, with less clutter. He’s citing some examples of where this was the case.

If you believe this, I have a few lots in Florida I’d like to discuss with you. Waterfront. Tons of wildlife.

I know a few people who are big fans of pizza. Given the choice, they would eat pizza for breakfast, lunch, and dinner – and probably do on occasion. Interestingly, these people tend to be thin, with very low body fat and excellent muscle tone. So, by extrapolation, I can conclude that eating pizza makes them thin. Let’s all go eat more pizza.

Anyone who has studied the question of increasing spend in a recession (and done so objectively) will tell you that the evidence supporting higher spending in recessions is weak at best. There are many anecdotes and some success stories, but not enough clear evidence to convince even a moderately smart CFO that the reward outweighs the risks.

The unfortunate fact is that the definitive study on this question has never been done. No one has ever taken a broad sample of marketers across different industries and competitive scenarios, randomly assigned some to increase spend, and held the rest in control at lower levels. There are no legitimate studies that tell us how X% of marketers that increased spend had successful outcomes. And even those that have attempted a meta-study of all the many narrowly-focused probes on the topic conclude that there are no general rules of thumb that hold up across industries and companies.

If you think I’m saying that spending more is NOT a good strategy, you’re missing the point. Spending more MIGHT be the perfect strategy. Or it might be the last bad choice of your career. Success during recessionary (or slow recovery) times has less to do with level of spending than it does with three simple factors:

  1. How strong is your product/service value proposition relative to your competitors or the alternatives your prospect may have? If it’s VERY strong, you might gain ground by spending more. If not, you might waste money just trying to buy share of voice in support of a solution which isn’t all that compelling.
  2. How responsive are your prospects to marketing spend? If they are very likely to respond to marketing stimulus, maybe more spend is good. But if marketing is just one of many things that cause them to buy, you may find that it would take a disproportionate increase in spend to achieve any noticeable shift in outcomes.
  3. How strong is your company balance sheet? Chances are, if you spend more, your competitors will try to match you to prevent losing share. It would be naïve to think you could steal share without seeing some sort of blunting response. If you have the cash to withstand an escalation of a competitive spending war, go for it. But check with your CFO before you propose a strategy that might create far more risk than the company can undertake at this time.
There are a few other considerations, but these are the really important ones.
If you worry about the consequences of eating too much pizza, you’re now better equipped to challenge broad assertions about spending more. And you’re more likely to preserve your credibility for the really important issues in the future.



Friday, July 24, 2009

Twittering Away Time and Money

One of the most common questions I’m getting these days is “how should I measure the value of all the social marketing things we’re doing, like Twitter, LinkedIn, Facebook, etc.?”

My answer: WHY are you doing them in the first place? If you can’t answer that, you’re wasting your time and the company’s money.

Sounds simple, I know, but I’m stunned at how unclear many marketers are about their intentions/expectations/hypotheses for how social media initiatives might actually help their business. In short, if you can’t describe in two sentences or less (no semi-colons) WHAT you hope to gain through use of social media, then WHY are you doing it? Measurement isn’t the problem. If you don’t know where you’re going, any measurement approach will work.

Here’s a framework for thinking about social measurement:

  1. Fill in the blanks: “Adding or swapping-in social media initiatives will impact ____________ by __________ extent over _____________ timeframe. And when that happens, the added value for the business will be $_____________, which will give me an ROI of ______________.” This forms your hypotheses about what you might achieve, and why the rest of the business should care.
  2. Identify all the assumptions implicit in your hypotheses and “flex” each assumption up/down by 50% to 100% to see under which circumstances the initiative becomes unprofitable.
  3. Identify the most sensitive assumption variables - those that tend to dramatically change the hypothesized payback by the greatest degree based on small changes in the assumption. These are your key uncertainties.
  4. Enhance your understanding of the sensitive assumptions through small-scale experiments constructed across broad ranges of the sensitive variables. Plan your experiments in ways you can safely FAIL, but mostly in ways to help you understand clearly what it would take to SUCCEED – even if that turns out to be unprofitable upon further analysis. That way, you will at least know what won’t work, and change your hypotheses in #1 above accordingly.
  5. Repeat steps 1 through 4 until you have a model that seems to work.
  6. In the process, the drivers of program success will become very obvious. Those become your key metrics to monitor.
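The flex-and-rank loop in steps 2 and 3 can be sketched in a few lines of Python. Every number below is invented purely for illustration (reach, rates, costs are assumptions, not benchmarks), and the payback model is a deliberately simple stand-in for whatever business case you actually build:

```python
# A minimal sketch of steps 1-3: state the assumptions behind a hypothetical
# social-media business case, then "flex" each one to find which most moves
# the payback. All figures are illustrative, not benchmarks.

def payback(assumptions):
    """Hypothetical payback model: incremental profit minus program cost."""
    a = assumptions
    incremental_customers = a["reach"] * a["engagement_rate"] * a["conversion_rate"]
    return incremental_customers * a["profit_per_customer"] - a["program_cost"]

base = {
    "reach": 500_000,             # people exposed (assumed)
    "engagement_rate": 0.02,      # share who engage (assumed)
    "conversion_rate": 0.05,      # share of engaged who buy (assumed)
    "profit_per_customer": 40.0,  # dollars (assumed)
    "program_cost": 15_000.0,     # dollars (assumed)
}

# Flex each assumption -50% / +50% and record the swing in payback.
swings = {}
for key in base:
    lo, hi = dict(base), dict(base)
    lo[key] *= 0.5
    hi[key] *= 1.5
    swings[key] = payback(hi) - payback(lo)

for key, swing in sorted(swings.items(), key=lambda kv: -abs(kv[1])):
    print(f"{key:20s} swing in payback: ${swing:,.0f}")
```

Whichever assumptions produce the biggest swings are your key uncertainties, and the ones worth probing first with the small-scale experiments in step 4.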


In short, measuring the payback on social media requires a sound initial business case that lays out all the assumptions and uncertainties, then methodically iterates through tests to find the model(s) that work best. Plan to fail in small scale, but most importantly plan to LEARN quickly.

Measure social media like you should any other marketing investment: how did it perform versus your expectations of how it should have? If those expectations are rooted in principles of profit-generation, your measurement will be relevant and insightful.

Tuesday, July 14, 2009

Walking Naked Through Times Square

I was sitting in a 2010 planning meeting recently listening to the marketing team describe their objectives, strategies, and thoughts on tactics they were planning to deploy. Their question to me was “how should we measure the payback on this strategy”?

My response was: “Compared to what? Walking naked through Times Square?” I was being asked to evaluate a proposed strategy without any sense of what the alternatives were.

Sure, I can come up with a means of estimating and tracking the ROI on almost anything. But if that ROI comes to 142%, so what? Is there a plan that might get us to 1000% (without just cutting cost and manipulating the formula)?

As I thought back on the hundreds of planning meetings I’ve been in over the last 10 years, it occurred to me that we marketers are not so good at identifying alternative ways of achieving objectives, or at systematically weighing those options to ensure we’re selecting the paths that best meet the organization’s needs strategically, financially, and otherwise.

On a relative basis, we spend far too much of our time measuring the tactical/executional performance of the things we have decided to do, and far too little measuring the comparative value of things we might decide to do. Scenario planning; options analysis; decision frameworks. You get the idea.

The importance of this up-front effort isn’t just in getting to better strategies, but in building further credibility throughout the organization. Finance, sales, and operations all see marketing investments as inherently risky due to A) the size of the expenditures; and B) the uncertain nature of the returns compared to many of the things those other functions tend to spend money on. Impressing them with our thorough exploration of the landscape of options goes a long way toward demonstrating that we’ve considered risk (albeit implicitly) in our recommendations, and have done all the necessary homework to arrive at a reasonable conclusion. That doesn’t mean producing a 50-page deck, but rather simply stating which alternatives were considered, what the decision framework was, and how the ultimate selection was made. (This also builds trust through transparency.)

From a measurement perspective, we can then consider the relative potential value of doing A versus B versus C, and in the process raise the level of confidence that we are spending the company’s money wisely. We can then turn our attention to measuring the quality of the execution of the chosen path with confidence that we’re not just randomly measuring the trees while wandering in the forest.
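The “compared to what?” discipline can be as simple as scoring a handful of alternatives on expected payback before committing measurement effort to any one of them. The options, probabilities, and payoffs below are invented for illustration only:

```python
# A toy illustration of "compared to what?": rank a few alternative
# strategies by expected payback before measuring any single one in depth.
# Each option lists (probability, payback) scenarios -- all invented.

options = {
    "A: more paid media": [(0.6, 120_000), (0.4, -30_000)],
    "B: loyalty program": [(0.5, 220_000), (0.5, -80_000)],
    "C: do nothing":      [(1.0, 0)],
}

def expected_value(outcomes):
    """Probability-weighted payback across the listed scenarios."""
    return sum(p * v for p, v in outcomes)

for name, outcomes in sorted(options.items(),
                             key=lambda kv: -expected_value(kv[1])):
    print(f"{name:20s} expected payback: ${expected_value(outcomes):,.0f}")
```

Even a crude table like this makes the decision framework explicit: which alternatives were considered, what was assumed about each, and why one was chosen. Real scenario planning would add downside risk and strategic fit, not just expected value.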

I’m not sure how many businesses might get a high ROI on walking naked through Times Square, but imagining that option certainly helps fuel creativity and underscores the importance of measuring strategic relevance, not just tactical performance.
Got any good stories about wandering naked?


Thursday, July 09, 2009

10 New Resolutions for the 2010 Planning Process

As we approach the 2010 planning season, I always like to take a few moments and reflect on the horrors of last year's planning cycle, making some commitments on how I can do it better this year:

1. I will lead this process and not get dragged behind it.

2. I recognize that many of our business fundamentals may have recently changed, so I commit to anticipating the key questions likely to define our strategy and, using research, analytics, and experiments, gather as much insight into them as I can in advance of making recommendations.

3. I will approach my budget proposal from the ground up, with every element having a business case that estimates the payback and makes my assumptions clear for all to see.

4. I will not be goaded into squabbling over petty issues by pin-headed, myopic, fraidy-pants types in other departments, regardless of how ignorant or personally offensive I find them to be.

5. The person who wrote number 4 above has just been sacked.

6. I will proactively seek input from others in finance, sales, and business units as I assemble my plan, to ensure I understand their questions and concerns and incorporate the appropriate adjustments.

7. I will clearly and specifically define what "success" looks like before I propose spending money, and plan to implement the necessary measurement process with all attendant pre/post, test/control, and with/without analytics required to isolate (within reason) the expected relative contribution of each element of my plan.

8. I will analyze the alternatives to my recommendations, so I am prepared to answer the inevitable CEO question: "Compared to what?"

9. I will be more conscious of my inherent biases relative to the power of marketing, and try not to let my passion get in the way of my judgment when constructing my plan.

10. If all else above fails, I promise to be at least 10% more foresighted and open-minded than I was last year, as measured by my boss, my peers in finance, and my administrative assistant. My spouse, however, will not be asked for an opinion.
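The test/control analytics promised in resolution 7 boil down to some very simple arithmetic: compare outcomes in an exposed group against a matched holdout. A minimal sketch, with all figures invented for illustration:

```python
# Bare-bones test/control lift math behind resolution 7: compare the exposed
# group to a matched holdout to isolate the campaign's relative contribution.
# All counts are invented for illustration.

def lift(test_rate, control_rate):
    """Relative lift of the test group over the control group."""
    return (test_rate - control_rate) / control_rate

test_conversions, test_n = 460, 10_000        # exposed to the campaign
control_conversions, control_n = 400, 10_000  # matched holdout

test_rate = test_conversions / test_n            # 4.6%
control_rate = control_conversions / control_n   # 4.0%
print(f"Incremental lift: {lift(test_rate, control_rate):.1%}")  # 15.0%
```

The hard part, of course, is not the division but keeping the holdout genuinely comparable and untouched, which is exactly what should be designed in before a dollar is spent.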

How are you preparing for planning season? I'd like to hear what your resolutions are.

Thursday, May 07, 2009

Survey: 27% of Marketers Suck

A survey conducted by the Committee to Determine the Intelligence of Marketers (CDIM), an independent think-tank in Princeton, NJ, recently found that:

· 4 out of 5 respondents feel that marketing is a “dead” profession.
· 60% reported having little if any respect for the quality of marketing programs today.
· Fully 75% of those responding would rather be poked with a sharp stick straight into the eye than be forced to work in a marketing department.

In total, the survey panel reported a mean response of 27% when asked, “on a scale of 0% to 100%, how many marketers suck?”

This has been a test of the emergency BS system. Had this been a real, scientifically-based survey, you would have been instructed where to find the nearest bridge to jump off.

Actually, it was a “real” “survey”. I found 5 teenagers in a local shopping mall loitering around the local casual restaurant chain and asked them a few questions. Seem valid?

Of course not. But this one was OBVIOUS. Every day we marketers are bamboozled by far more subtle “surveys” and “research projects” which purport to uncover significant insights into what CEOs, CFOs, CMOs, and consumers think, believe, and do. Their headlines are written to grab attention:

- 34% of marketers see budgets cut.
- 71% of consumers prefer leading brands when shopping for .
And my personal favorite:
- 38% of marketers report significant progress in measuring marketing ROI, up 4% from last year.

Who are these “marketers”? Are they representative of any specific group? Do they have anything in common except the word “marketing” on their business cards?

Inevitably such surveys blend convenience samples (i.e. those willing to respond) of people from the very biggest billion-dollar-plus marketers to the smallest $100k annual budgeteers. They mix those with advanced degrees and 20 years of experience in with those who were transferred into a field marketing job last week because they weren’t cutting it in sales. They commingle packaged goods marketers with those selling industrial coatings and others providing mobile dog grooming.

If you look closely, the questions are often constructed in somewhat leading ways, and the inferences drawn from the results conveniently ignore the statistical error factors which frequently wash away any actual findings whatsoever. There is also a strong tendency to draw conclusions year-over-year when the only thing in common from one year to the next was the survey sponsor.

As marketers, we do ourselves a great disservice whenever we grab one of these survey nuggets and embed it into a PowerPoint presentation to “prove” something to management. If we’re not trustworthy when it comes to vetting the quality of research we cite, how can we reasonably expect others to accept our judgment on subjective matters?

So the next time you’re tempted to grab some headlines from a “survey” – even one done by a reputable organization – stop for a minute and read the fine print. Check to see if the conclusions being drawn are reasonable given the sample, the questions, and the margins of error. When in doubt, throw it out.
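Reading the fine print can include a quick back-of-the-envelope margin-of-error check. The sketch below uses the textbook formula for a proportion from a simple random sample (which convenience panels rarely are, so treat it as a best case); the 150-respondent sample size is an assumption for illustration:

```python
# Rough sanity check on a survey headline: the approximate 95% margin of
# error for a proportion, assuming a simple random sample (a generous
# assumption for most convenience panels).
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A "38% of marketers" headline from, say, 150 respondents:
moe = margin_of_error(0.38, 150)
print(f"+/- {moe:.1%}")  # roughly +/- 7.8 percentage points
```

At that sample size, a 4-point year-over-year “gain” sits comfortably inside the noise, which is exactly the kind of inference the fine print should make you question.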

If we want marketing to be taken seriously as a discipline within the company, we can’t afford to let the “marketers” play on our need for convenience and simplicity when reporting “research” findings. Our credibility is at stake.

And by the way, please feel free to comment and post your examples of recent “research” you’ve found curious.