Showing posts with label advertising payback. Show all posts

Thursday, May 07, 2009

Survey: 27% of Marketers Suck

A survey conducted by the Committee to Determine the Intelligence of Marketers (CDIM), an independent think tank in Princeton, NJ, recently found that:

· 4 out of 5 respondents feel that marketing is a “dead” profession.
· 60% reported having little if any respect for the quality of marketing programs today.
· Fully 75% of those responding would rather be poked with a sharp stick straight into the eye than be forced to work in a marketing department.

In total, the survey panel reported a mean response of 27% when asked, “on a scale of 0% to 100%, how many marketers suck?”

This has been a test of the emergency BS system. Had this been a real, scientifically-based survey, you would have been instructed where to find the nearest bridge to jump off.

Actually, it was a “real” “survey”. I found 5 teenagers loitering around the casual restaurant chain at a local shopping mall and asked them a few questions. Seem valid?

Of course not. But this one was OBVIOUS. Every day we marketers are bamboozled by far more subtle “surveys” and “research projects” which purport to uncover significant insights into what CEOs, CFOs, CMOs, and consumers think, believe, and do. Their headlines are written to grab attention:

- 34% of marketers see budgets cut.
- 71% of consumers prefer leading brands when shopping for .
And my personal favorite:
- 38% of marketers report significant progress in measuring marketing ROI, up 4% from last year.

Who are these “marketers”? Are they representative of any specific group? Do they have anything in common except the word “marketing” on their business cards?

Inevitably such surveys blend convenience samples (e.g. those willing to respond) of people from the very biggest billion dollar plus marketers to the smallest $100k annual budgeteers. They mix those with advanced degrees and 20 years of experience in with those who were transferred into a field marketing job last week because they weren’t cutting it in sales. They commingle packaged goods marketers with those selling industrial coatings and others providing mobile dog grooming.

If you look closely, the questions are often constructed in somewhat leading ways, and the inferences drawn from the results conveniently ignore the statistical error factors which frequently wash away any actual findings whatsoever. There is also a strong tendency to draw year-over-year conclusions when the only thing in common from one year to the next was the survey sponsor.
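To see how quickly those statistical error factors swamp a headline number, consider a rough sketch of the standard margin-of-error formula for a proportion. The 38% figure and the sample sizes here are illustrative, not drawn from any actual survey:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The same "38% of marketers" headline at different sample sizes:
for n in (5, 50, 1000):
    moe = margin_of_error(0.38, n)
    print(f"n={n:4d}: 38% +/- {moe:.1%}")
# roughly +/- 43 points at n=5, +/- 13 at n=50, +/- 3 at n=1000
```

With a handful of mall teenagers, the confidence interval is wider than the finding itself; and note this formula assumes a simple random sample, which a convenience panel is not, so the real uncertainty is even larger.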

As marketers, we do ourselves a great disservice whenever we grab one of these survey nuggets and embed it into a PowerPoint presentation to “prove” something to management. If we’re not trustworthy when it comes to vetting the quality of the research we cite, how can we reasonably expect others to accept our judgment on subjective matters?

So the next time you’re tempted to grab some headlines from a “survey” – even one done by a reputable organization – stop for a minute and read the fine print. Check to see if the conclusions being drawn are reasonable given the sample, the questions, and the margins of error. When in doubt, throw it out.

If we want marketing to be taken seriously as a discipline within the company, we can’t afford to let the “marketers” play on our need for convenience and simplicity when reporting “research” findings. Our credibility is at stake.

And by the way, please feel free to comment and post your examples of recent “research” you’ve found curious.

Saturday, January 17, 2009

Yes We Can - The Marketing Renaissance Moment

It strikes me that the spirit of "Yes We Can" is very applicable to marketing at this particular point in time, when many marketers have recently suffered significant budget cuts owing to their inability to demonstrate the financial value derived from those investments.

Yes We Can apply more discipline to how we measure the payback on marketing investments without increasing the workload proportionately.

Yes We Can embrace this discipline without harming the creative energy so critical to marketing success.

Yes We Can measure those "softer" elements like branding, customer experience, innovation, and word-of-mouth, and link them to impacts on company cashflows.

Yes We Can overcome gaps in data and find ways to build reasonable approximations which even the CFO will embrace.

Yes We Can align the entire company on a single set of marketing metrics and all use the same yardsticks to measure success.

Yes We Can forecast the impact of changes in spending amount or allocation in ways that will inspire confidence instead of criticism.

Yes We Can anticipate the challenges ahead with reasonable certainty and act now to prepare ourselves to meet them head-on. And most importantly,

Yes We Can restore credibility and confidence in marketing as a means of driving profitable growth in our companies, regardless of industry, sector, corporate politics, culture, structure, or market dynamics.

The present economic environment offers a unique opportunity to re-invent the role of marketing in the organization, and to re-establish the critical links between our marketing efforts and the bottom-line shareholder value they create.

Believe it. If you're not doing it, your competitors likely are. There are no more good excuses. There is only "Yes We Can".

Monday, January 12, 2009

Gaining More Than "Experience" from Measurement

I recently did some in-depth interviews with CMOs from 6 multi-billion dollar companies which revealed these key measurement challenges and obstacles still looming large in 2009:

  1. Lack of clarity - not having a specific definition of what they're trying to measure, and getting lost in the ambiguity of the process. HINT: define and prioritize the key questions you're trying to answer BEFORE you set out to measure them. Read this.
  2. Inability to measure the "brand" impact - having great difficulty getting funding for branding activities/initiatives due to absence of any hard financial evidence of how brand drives value. Here are a few ideas. NOTE: solve this one now, or what's left of your branding budget may well disappear in the tough year ahead.
  3. No or bad data - this is not a reason, it's an excuse. There are dozens of ways to overcome short-term data gaps IF you realize that doing so is a people/politics challenge and not a technical one.
  4. Low credibility in the board room - the chickens have come home to roost. In the good times, we should have been building our knowledge base of how marketing drives shareholder value. Now, all we can do is move funds from the more intangible activities to the more quantifiable. That's not a strategy. That's an outcome. How to NOT lose the battle next time around.

If you're still struggling to get an insightful and credible measurement program off the ground (or to see it reach a higher level of value), look here to see what your symptoms are, and then find the prescribed cure.

On the bright side, out of this economic crisis marketers are sure to gain some valuable experience ("experience" is what you get when you don't get what you want). As a community, we will learn from it and do better next time. At least, those of us who are actively working hard to get better will.

Tuesday, January 06, 2009

Research Priorities Are All Wrong

I got an email today from the Marketing Research Association spelling out the "top 6 issues for protecting the profession". Included were:

  1. Increasing difficulty of reaching consumers via cell phones
  2. Consumer fears of behavior tracking
  3. State and federal government interest in shady incentive practices used to entice medical professionals
  4. Unpopularity of "robocalls" and automated dialing
  5. Public backlash to "push-polls"
  6. Data security and breach protocols
Wrong.

While each of these dynamics is a threat to the future of the research industry, the bigger threat is the increasing irrelevance of research to senior management. More and more companies have outsourced their strategic marketing research functions to suppliers. The suppliers have been consolidating, often being acquired by bigger agency or marketing services holding companies. Not surprisingly, there is a serious degradation of objectivity that occurs in the process. And the more junior marketers now left client-side to direct the research program within their companies are not generally as politically senior/influential as one needs to be to push through the right research agenda - especially in times of immense cost-cutting pressure. (see Rebuilding Trust in Research as a Measurement Tool)

Sure, there are many executional threats facing the research industry today. But unless research is conceived in an appropriate strategic/financial context and prioritized for the value it potentially holds, the methodological threats will be but ice cubes floating in an ocean of icebergs.

It's time the research profession rose to the occasion again. I hope it does.

Friday, December 26, 2008

Fools Rush In - Searching for Magic ROI

If the current economy is encouraging you to think about shifting resources from traditional media to digital alternatives in search of cost effectiveness and overall efficiency, beware: nearly EVERYONE ELSE HAS THE SAME IDEA.

Implication: you will be moving into an increasingly cluttered marketplace, where broad reach options will continue to lose effectiveness and highly-targeted delivery will come at a higher price as demand outstrips the supply of good inventory and good people to execute. Consumers too will become increasingly savvy with respect to their digital media usage patterns, and harder to “impress” with incrementally new ideas or executions.

I know I’ll get lots of letters about this post “educating” me on the infinite scalability of digital media, and reminding me that true creativity is likewise boundless. I’m sure many of you have research that shows how the returns to digital marketing programs just keep growing as the audience of users grows across more and more platforms. Fair enough. But the laws of marketing physics suggest that more marketers and marketing dollars will rush into the arena than proven executional avenues can accommodate in the short term. And most of them will NOT bring breakthrough new creativity with them. That will create lots of failure and un-delivered expectations, which in turn may slow adoption of otherwise valuable marketing options.

Here’s a simple suggestion as you contemplate the great digital shift towards the promise of better ROI… set your expectations based on poorer results than you may have experienced in the past, and/or ratchet-down vendor claims of look-alike results presented in “case studies”. Before committing to the “me too” plan of going digital, ask yourself if your planned online campaigns would be a good investment if they were 10% less effective than originally anticipated? Would your new social networking programs still provide good payback if they had a 20% less impact on potential customers? These may very well be the new reality when everyone rushes in.
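That haircut discipline amounts to a one-line sensitivity check: does the campaign still clear breakeven after discounting the claimed effectiveness? The dollar figures below are hypothetical, chosen only to show a case that survives a 10% haircut but fails a 20% one:

```python
def still_worth_it(expected_return, cost, haircut):
    """True if the campaign still clears breakeven after
    discounting its claimed return by `haircut` (e.g. 0.20 = 20%)."""
    return expected_return * (1 - haircut) > cost

# Hypothetical vendor case study: $120k of claimed incremental
# margin on a $100k spend.
print(still_worth_it(120_000, 100_000, 0.10))  # survives a 10% haircut
print(still_worth_it(120_000, 100_000, 0.20))  # fails a 20% haircut
```

If a plan only works when the vendor's look-alike case study holds exactly, it isn't a plan; it's a hope.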

In stark contrast, a friend who’s CMO of a packaged goods company tells me that while he is continuing to shift the balance of his total spend towards digital media, he’s doing so in a measured way built on careful experimentation. He’s working on a cycle of plan>execute>learn>expand>plan again. So he’s spending 20% more on digital media in 2009 than in 2008, but not moving huge chunks of his total budget all in one big push for magic returns. Nope. His philosophy is “hit ‘em where they ain’t.” He’s buying more radio and magazines – media he’s developed clear success cases with in the past and places he can more accurately predict the impact on his business. He may find himself all alone there. But I suspect that’s part of the appeal.

Tuesday, December 23, 2008

Trying to "Justify" Superbowl Spending?

"...as a responsible employer of more than 290,000 employees and contractors world-wide, there is a time to justify such an ad spend and a time to step back."

This quote was provided by the director of advertising at FedEx, in response to a question about why they would not be advertising on this year's Superbowl - the first time in 12 years they would be absent from the annual ad-fest.

The implication from his statement seems to be that, up until now, the Superbowl ads were "justified" by something other than sound economics. Sure, there was the fabulous reach into an attractive target demo, but the price is high. So maybe the premium was being "justified" by some "softer" benefits like employee morale, channel partner collaboration, or even that most elusive of all... "brand preference". And in these days of extreme bottom-line focus, these non-economic "justifications" just weren't going to cut it. It would send the wrong message to people losing jobs and benefits.

The sad truth here is that each and every one of the "softer" benefits can, in fact, be economically measured to a reasonable degree. There are practical, credible ways to calculate the ROI of employee morale, partner collaboration, and brand preference. But they require some techniques that few marketers have yet investigated, let alone perfected.

I don't have any idea if Superbowl advertising is a sound economic decision for FedEx, and I'm not questioning their judgment. It might have been a superb use of shareholder funds, or it may have been a terrible waste. I just cringe when I hear how such important marketing decisions are still, in this age of measurement enlightenment, being made on the basis of "justifications" that suggest something less than a robust economic framework was applied.

We, the marketing industry, can do better. We can measure each and every one of those softer elements in ways that our finance partners will embrace. Those 290,000 employees and contractors need us to do better. For their sake, let's try to ramp up our measurement game in 2009, shall we?

Sunday, December 21, 2008

Trading GRPs for Clicks?

Television networks are making their prime-time programming available in full-form via their websites. And not just the latest episodes of “Desperate Housewives”. CBS and ABC have both announced that they are now streaming from deep inside their programming vaults, bringing back favorites like “The Love Boat” and “Twin Peaks”.

Hulu (joint venture between NBC and Fox) attracts more than 2.5 million unique viewers (distinct cookies) monthly, who stream content an average of more than 20 times each! That’s a bigger, more engaged audience than many cable stations draw in a month’s time. And anyone who knows their way around a Bass diffusion curve will tell you that adoption of online viewing is on a trajectory to achieve substantial penetration very rapidly.
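For readers who don't know their way around a Bass diffusion curve, here is a minimal sketch of the model's cumulative-adoption recurrence. The parameter values (p, q, market potential) are made up purely for illustration, not estimates for online viewing:

```python
def bass_adoption(p, q, m, periods):
    """Cumulative adopters under the Bass diffusion model.
    p: coefficient of innovation (external influence),
    q: coefficient of imitation (word-of-mouth),
    m: total market potential."""
    cumulative = [0.0]
    for _ in range(periods):
        N = cumulative[-1]
        new_adopters = (p + q * N / m) * (m - N)
        cumulative.append(N + new_adopters)
    return cumulative

# Illustrative parameters; m in millions of viewers.
curve = bass_adoption(p=0.03, q=0.4, m=100, periods=10)
```

The characteristic S-shape comes from the imitation term q*N/m: once early adopters seed the market, word-of-mouth accelerates adoption rapidly before saturating, which is why early uptake numbers like Hulu's can imply substantial penetration sooner than linear extrapolation suggests.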

All this is causing pre-revolution heartburn in the media departments of major ad agencies today. They’re trying to figure out which metrics best equate clicks (or streams) to GRPs (gross rating points), so they can compare the costs of advertising online to advertising on TV. Apples-to-apples.

Wrong mission.

Online content streaming is, by its very nature, an active participation medium, while television is passive. As such, the metrics should reflect the degree to which advertisers actively engage the consumer: streams launched; ads clicked; games played; surveys completed; dialogue offered; etc. Selecting passive metrics encourages the content owners to use the computer to stream like they broadcast, thereby replacing one screen with another. In time, that will teach consumers to use it as a passive medium like TV.

If we (the marketers) want to capture the true potential of an active medium, we have to demand performance against active metrics. We have to design ads that give the multi-tasking consumer of today something else to do while they’re watching the show – enter contests on what will happen next; decide who’s telling the truth; test their show knowledge against other fans; shop for that cute skirt – you get the idea.

Effectiveness in this new realm is a function of the actual (active) behavior generated versus the expected amount. And the expected amount is that degree of behavior shift necessary to make the business case for spending the money show a clear and attractive return. Efficiency is then how much more positive behavior we’re generating per dollar spent than we did last month/quarter/year.
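Those two definitions reduce to simple ratios. A rough sketch, with hypothetical function names and numbers of my own choosing:

```python
def effectiveness(actual_actions, expected_actions):
    """Actual active behavior generated, relative to the amount
    needed to make the business case for the spend pay off."""
    return actual_actions / expected_actions

def efficiency(actions, spend):
    """Positive behavior generated per dollar spent;
    compare period over period, not in the absolute."""
    return actions / spend

# Hypothetical month-over-month comparison:
eff_now  = efficiency(12_000, 50_000)   # this month: actions per dollar
eff_prev = efficiency(9_000, 45_000)    # last month
improved = eff_now > eff_prev
```

The point of the split is that effectiveness is judged against the business case (did we hit the threshold that justified the money?), while efficiency is judged against our own track record (are we getting more behavior per dollar than last period?).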

Sure, we need to have some sense of which content is attracting people who “look” like customers or prospects, but that’s just the basis upon which we decide where to test and experiment. The real decisions on where to place our big bets will come once we learn what execution tactics are most impactful.

Until then, be careful what you measure, or you will surely achieve it.

Friday, December 19, 2008

Blogging On (or is it "Blogging In"?)

OK. I'm back.

I actually got quite a few requests to resume this blog, even though there were very few comments posted during the year I ran it originally. Plus, it seems to do REALLY well on Google organic search results.

So what have I learned?

1. Blogging on a subject matter like marketing measurement is less about the number of engaged readers than it is the quality of engagement of a few.

2. Blogging is far more about building a well-rounded web marketing presence. No single piece of the puzzle puts one over the top on search results. It's constant experimentation. Having dropped the blog for a while, I can tell you we saw a clear drop in performance of our organic search traffic.

3. Social media is so immature at this point that we're experimenting with many platform components from Twitter (follow me as "measureman") to feedster, to several dozen other elements. The cost of experimentation is high, and I used to think we weren't making sufficient progress towards any real insight. Then I had a bit of an epiphany... the experimentation process really IS the marketing process. Experimentation isn't just what we do to get to a marketing plan. The marketing plan is a summary of how we're experimenting with various methods, tools, and messages to get the desired results.

If you're interested in how we're measuring our own results here at MarketingNPV, shoot me an email and we can talk about the specific metrics.


Tuesday, April 29, 2008

Blogging off...

As someone who prides themselves on allocating precious time to the highest return activities, I'll be ending this blog now.

It's not that there isn't a great deal left to be said on the topic, but rather that there don't seem to be many people seeking information on this topic in blog form. Having tested several tactical approaches to getting the message into the market, this particular one does not seem to justify the time and energy required to feed the insatiable appetite for practical advice in the area of marketing metrics.

So I'm thinking of installing a web cam in my car and allowing you all to watch me as I drive...

Seriously, if you're looking for great content on marketing measurement, marketing metrics, marketing dashboards, brand scorecards, marketing resource allocation, marketing budgeting, or any of the other key search terms, catch us at our home page, www.MarketingNPV.com, where you'll find what you're looking for.

Thank you.

Monday, November 12, 2007

TiVo to the Rescue?

Hot on the heels of the Google/Nielsen partnership, TiVo has entered the measurement fray with its announcement that it will begin providing advertisers with data on the viewing (and skipping) habits of consumers using TiVo’s panel of 20,000 set-top boxes. This has the potential to be far more illuminating than the Nielsen data as TiVo can provide insight into who is skipping ads (both on the demographic/lifestyle segment level and on the individual-addressable level) and which ones they are skipping. The result could be a wealth of information on ad performance, sliced and diced on many dimensions.

Even more interesting, TiVo will offer advertisers the ability to learn (on a blind basis) the viewing habits of their actual customers. By providing TiVo with a customer file, marketers can get insight into exactly how many (and which types) of customers are skipping their ads, which should help both fine-tune message execution and enhance negotiations with networks.

TiVo still can’t tell us who is watching the ads – only who isn’t. But with its jump on the interactive feature options, TiVo may be faster to offer advertisers the back-end direct response element of the engagement chain.

This is a promising frontier for advertisers seeking to understand the actual payback of their advertising investments. It’s not in itself a magic bullet, but another step forward in getting the objective insight we need to draw credible conclusions.



Monday, November 05, 2007

Google to Dominate Dashboards?

Having conquered the worlds of web search and analytics, is Google about to corner the market on marketing dashboards?

Hardly.

What Google is doing is coordinating online ad display data with offline (TV) ad exposures. Google is partnering with Nielsen to take data directly from Nielsen’s set-top-box panel of 3,000 households nationwide and mash it up with Google analytics data to find correlations between on- and off-line exposure. The premise is, I’m sure, to help marketers integrate this data with their own sales information and find statistical correlation between the two as a means of assessing the impact of the advertising at a high level. By using data only from the set-top box, Google is able to present offline ad exposure data with the same certainty as it does online – e.g., we know that this ad was actually shown. Unfortunately, we don’t know if the ad (online or off) was actually seen, never mind absorbed.

However, with the evolution of interactive features in set-top boxes, it won’t be long before we begin to get sample data of people “clicking” on TV ads, much like we do online ads. So we’ll get the front end of the engagement spectrum (shown) and the back end (responded). But we won’t get anything from the middle to give us any diagnostic or predictive insights to enhance the performance of our marketing campaigns.

A full marketing dashboard integrates far more than just enhanced ratings data and looks deeper than just summary correlations between ads shown and sales to dissect the actual cause of sales. Presuming that sales were driven by advertising in the Google dashboard model would potentially ignore the influence of a great many other variables like trade promotions, channel incentives, and sales force initiatives.

Drawing conclusions about advertising’s effect solely on the basis of looking at sales and ratings would quickly undermine the credibility of the marketing organization. So while the Google dashboard may be a welcome enhancement, it’s not by any stretch a panacea for measuring marketing effectiveness.

It seems to me that Google has created better tools. But through their lens of selling advertising, they’re perpetuating a few big mistakes.

Monday, October 15, 2007

Masters of Marketing Minimize Measurement Mentions



This year’s ANA “Masters of Marketing” conference in Phoenix was, as usual, the place to see and be seen. There were plenty of very interesting brand strategy stories from the likes of McDonald’s, Fidelity, Liberty Mutual, Anheuser-Busch, and AT&T. Steve Ballmer, CEO of Microsoft, set some bold predictions for the digital content consumption world of the future, and Al Gore introduced the fascinating new business model of Current TV (which, btw, stands to redefine the dialogue on “engagement” far beyond the current amorphous context).

To his great credit, Bob Liodice, ANA president, asked every presenter to comment on how they were measuring the impact of their work. Unfortunately, most of the speakers successfully ducked the question through a series of politically correct, almost Greenspanian deflections:

“Well, Bob, we’re making a major investment in improving customer satisfaction and continuing to drive brand preference relative to competitors to new heights while keeping our eye on price sensitivity and working to ensure that our associates understand the essence of the brand at every touchpoint.”


Leaving me (and a few of the other financially oriented attendees) to wonder why – as in why are the Masters so reluctant to share their true insights into measurement?

OK, so I get that measurement isn’t anywhere nearly as sexy to talk about as the great new commercials you just launched or your insightful brand positioning. I also get that many aspects of measurement are proprietary, and giving away financial details of publicly held companies in such a forum might give the IR folks the cold sweats.

But by avoiding the question, these “Masters of Marketing” – the very CMOs to whom the marketing community looks for direction and inspiration – are sending a clear message to their staffs and the next generation that the ol’ “brand magic” is still much more important than the specific understanding of how it produces shareholder value.

The message seems to be that, when pressed for insight into ROI, it is acceptable to point to the simultaneous increase of “brand preference” scores and sales and imply, with a sly shrug of the shoulders, that there must be some correlation there. (If you find yourself asking, “So what’s wrong with that?” please read the entire archive of this blog before continuing to the next paragraph.)

Having met and spoken with many Masters of Marketing about this topic, I can tell you that each and every one of them is doing things with measurement that can advance the discipline for all of us. Wouldn’t sharing these experiences be just as important to the community as how you came to the insight for that latest campaign strategy?

Only the Masters can take up the challenge for pushing measurement to the same new heights as they’ve taken the art of integrated communication, the quality of production, and the efficiency of media. It seems to me that people so skilled in communication should be able to find a framework for sharing their learnings and best practices in measurement in ways that are interesting and informative while also protective of competitive disclosure.

Living up to the title of Masters of Marketing means going beyond message strategy, humor, and clever copy lines. We owe that to those we serve today and those who will follow in our footsteps, who will need a far better grounding in the explicit links between marketing investment and financial return to answer the increasingly sophisticated questions they’ll get from the CEO, CFO, and the Board.

So the next time Bob asks, “How do you measure the impact of that on your bottom line?” think about seizing the opportunity to send a really important message.

And Bob, thanks for asking. Keep the faith.

Monday, October 08, 2007

Use It or Lose It

Do you have any leftover 2007 budget dollars burning a hole in your pocket, dollars you have to spend by the end of the year or lose? Here’s an idea: Consider investing in the future.

By that, I mean consider investing in ways to identify some of your key knowledge gaps and prioritize some strategies to fill them. Or investing in development of a road map toward better marketing measurement: What would the key steps look like? In what order would you want to progress? What would the road map require in terms of new skills, tools or processes?

It seems kind of odd, but while the pain of the 2008 planning process is still fresh in your mind, start thinking about what you can do better for 2009. By orienting some of those leftover available 2007 dollars toward future improvements, you might make next year’s planning process just a bit less painful.

Wednesday, October 03, 2007

Lessons Learned – The Wrong Metrics

In the course of developing dashboards for a number of our Global 1000 clients over the past few years, we’ve learned many lessons about what really works vs. what we thought would work. One of them is a recognition that, try as you might, only about 50% of the metrics you initially choose for your dashboard will actually be the right ones. That means half of your metrics are, in all likelihood, going to be proven to be wrong over the first 90 to 180 days.

Are they wrong because they were poorly chosen? No. They’re wrong because some of the metrics you selected won’t be nearly as enlightening as you imagined they would be. Perhaps they don’t really tell the story you thought they were going to tell. Others may be wrong because they will require several on-the-fly iterations before the story really begins to emerge. You might need to filter them differently (by business unit or geography, for example) or you might need to recalculate the way they’re being portrayed. Regardless, some minor noodling is not uncommon when trying to get a metric to fulfill its potential for insight.

Still other metrics will become lightning rods for criticism, which will require some pacification or compromise. In the process, you may have to sacrifice some metrics in order to move past political obstacles and engender further support for the overall effect of the dashboard. If one of your organization’s key opinion leaders is undermining the entire credibility of the dashboard by criticizing a single metric, you may find it more effective to cut the metric in question (for the time being, at least) and do more alignment work on it.

Finally, many of your initial metrics simply may not offer any real diagnostic or predictive insight over time. You may pretty quickly come to realize that a metric you thought was going to be insightful doesn’t have sufficient variability to it, or it may not offer much more than a penetrating glance into the obvious.

So the fact that half of the initial metrics will be proven to be wrong over the course of several months after you roll out your dashboard is bad news, right? No – it’s actually a good sign. It shows that the organization has embraced the dashboard as a learning tool, and that the flexibility to modify it is inherent in the process of improving and managing the dashboard as you go.

Here’s my advice: When implementing a new dashboard, be prepared to iterate rapidly over the first 90 days in response to a flood of feedback. After that initial flurry, develop a release schedule for updates and stick to it, so you can make improvements on a more systematized basis. But above all, make sure you’re responsive to the key constituents and that those constituents have a clear understanding of how their input is being reflected in the dashboard. Or if it’s not being reflected there, be prepared to explain why.

Wednesday, September 19, 2007

Knowing Is Believing

Now that 2008 budget season is upon us, it’s time to identify knowledge gaps in the assumptions underlying your marketing plan – and to lay out (and fund) a strategy for filling them.

We recently published a piece in MarketingNPV Journal which tackles this issue. In “Searching for Better Planning Assumptions? Start with the Unknowns” we suggested:

A marketing team’s ability to plan effectively is a function of the knowns and the unknowns of the expected impact of each element of the marketing mix. Too often, unfortunately, the unknowns outweigh the hard facts. Codified knowledge is frequently limited to how much money lies in the budget and how marketing has allocated those dollars in the past. Far less is known (or shared) about the return received for every dollar invested. As a result, marketers are left to fill the gaps with a mix of assumptions, conventional wisdom, and the occasional wild guess – not exactly a combination that fills a CMO with confidence when asked to recommend and defend next year’s proposed budget to the executive team.


Based on our experience and that of some of our CMO clients, we offer a framework to help CMOs get their arms around what they know, what they think they know, and what they need to know about their marketing investments. The three steps are:

1. Audit your knowledge. The starting point for a budget plan comes in the form of a question: What do we need to know? The key is to identify the knowledge gaps that, once filled, can lessen the uncertainty around the unknown elements, which will give you more confidence to make game-changing decisions.

2. Prioritize the gaps. For each gap or unanswered question, it’s important to ask how a particular piece of information would change the decision process. It might cause you, for example, to completely rethink the scope of a new program, which could have a material impact on marketing performance.

3. Get creative with your testing methods. Marketers have many methods for filling the gaps at their disposal; some are commonly used, others are underutilized. The key is determining the most cost-effective methods – from secondary research to experimental design techniques – to gather the most relevant information.
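To make the prioritization step concrete, one simple approach is to score each gap on expected decision impact and on the cost of closing it, then rank by impact per unit of cost. The sketch below is purely illustrative – the gaps, impact scores, and cost scores are all hypothetical, not drawn from any real plan:

```python
# Hypothetical knowledge gaps, each scored 1-10 for how much filling it
# would change a marketing decision (impact) and how expensive it would
# be to close (cost). All values here are invented for illustration.
gaps = [
    {"gap": "email lift on retention",     "impact": 8, "cost": 2},
    {"gap": "true baseline for TV",        "impact": 9, "cost": 7},
    {"gap": "sponsorship brand effect",    "impact": 3, "cost": 6},
    {"gap": "price-promo cannibalization", "impact": 7, "cost": 3},
]

# Simple value-per-dollar ratio: high-impact, cheap-to-close gaps rise
# to the top of the testing agenda.
for g in gaps:
    g["priority"] = g["impact"] / g["cost"]

for g in sorted(gaps, key=lambda g: g["priority"], reverse=True):
    print(f'{g["gap"]:<30} priority={g["priority"]:.2f}')
```

A ratio like this is deliberately crude; its value is in forcing the conversation about which unknowns actually change decisions, not in the second decimal place.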

Don’t let the unknowns persist another year. Find ways to identify them, prioritize them, and fund some exploratory work so you’re legitimately smarter when the next planning season rolls around.

Tuesday, September 11, 2007

Where Have All the CMOs Gone?

If I had to sum up in a phrase what I heard Monday at the ANA Accountability Forum in Palm Beach, Fla., it would be “hard work.”

We heard several stories of marketers (IBM, VF Corp., Johnson & Johnson, Kimberly Clark, Siemens, and Discovery Networks) who are at various stages of understanding the payback on their marketing investments. Some have established impressive abilities to ascertain directional returns via marketing mix models supplemented with online research panels. Others have defined their vision, built the requisite foundation (aligning roles, goals, and expectations), and begun to see some results. Yet many of the attendees still appear to be circling around the need for better measurement, looking for a point of entry.

Depending upon how one interprets the survey of measurement and accountability practices released at the ANA event, somewhere between 20% and 40% of large marketing organizations are doing measurement reasonably well. Another 20% to 40% are making some progress, but still addressing large gaps of a cultural, organizational, or technical nature. And the remaining 20% to 40% are, apparently, selling far more product than they can produce, ergo they’re not interested in measurement.

Sadly, these numbers do not seem to have improved from surveys done in the past.

I have two theories on why we may have stalled.

First, the truth is getting out: Measurement is hard work. And now that all the low-hanging fruit of defining metrics and building models using available data has been picked, the real insights have proven to be hiding higher up the tree. It’s one thing to know you have a data gap in an all-important metric, but quite another to secure the resources to close it while developing a credible proxy in the interim. If you’ve bought all the relevant syndicated data and are still left with gaping holes in your spend-to-return equation, you face the possibility of having to build your own data, from scratch, to travel that all-important last mile. And if you’ve been standing on the periphery looking for a cheaper, faster, less organizationally intrusive approach to give you great insight at little cost (financially or politically), it’s not going to happen. Get out your ladder and start picking fruit.

Second, the actions of CMOs indicate that they may be losing interest in the topic. Events like the one I’m attending this week used to attract a strong following of CMO types. This year, very few. Have they lost interest? Do they know something the rest of us don’t? Is measurement fading as an issue with CEOs and CFOs? Do CMOs not like Florida in September?

The CMOs’ absence from the dialogue suggests they’re delegating really difficult organizational problems to smart, hard-working people who unfortunately lack the political clout necessary to solve them. Maybe the CMO has sufficiently “checked the box” by assigning people to work the issue. Or maybe it’s just not a very fun project relative to the excitement of strategic planning or shuffling the organization chart. If the CMO is losing interest, progress will be measured in minor increments, and marketing is unlikely ever to achieve a critical mass of enlightenment.

Whatever the reason, the result is the same: Marketing loses credibility and influence with each passing quarter in which the CMO can’t answer the difficult questions about the relationship between spend and returns. The opportunity cost, both to the company and to the marketer’s career, is staggering.

Bottom line: This measurement stuff is hard work. As I see it, the CMO has two choices: Roll up your sleeves and get into the thick of it, or start calling the recruiters and let the next person worry about it.

Monday, June 18, 2007

Marketing and Finance on the Same Team: Building the Dashboard Together

I meet with dozens of marketing executives every year, and in the vast majority of these meetings, I hear frustration over an inability to effectively communicate marketing program value to the CFO. The two departments often disagree on which metrics — and which programs — are the most significant to the bottom line, as well as on the interpretation of results.

The problem is that when the chemistry isn’t just right, the personal relationship bridges aren’t strong, and the awareness of the range of solutions is limited, the collaborative spirit surrounding marketing measurement devolves quickly into a power struggle. And in that environment, everything stalls and nobody wins.

We recently had a chance to talk with a few marketers, including KeyCorp, Bank of America, Yahoo!, and Home Depot. Common to all was an effort to build joint ownership of marketing measurement between marketing and finance. The results seem to include goals that are better aligned up front with the P&L, enhanced interdepartmental communication, and an improved ability to interpret and act on results.

Here’s a snapshot of some of the things Marketing is doing. They:
• Build shared goals up front;
• Get up-front buy-in from Finance and corporate executives;
• Align marketing metrics with the P&L;
• Involve Finance in dashboard design;
• Provide Finance with full transparency;
• Give Finance partial ownership of the dashboard;
• Have the corporate scorecard mirror the LOB scorecards;
• Go beyond deciding which metrics to track to deciding how to distribute the results and to whom;
• Focus on the things that are really going to make a difference in company performance;
• Reach out to all stakeholders and LOBs and ask what’s important to them; then build in different levels for different stakeholders;
• Are realistic about trying to solve things they don’t have control over, or where there may be gaps in information; and
• Don’t leave the numbers open to interpretation; they use a narrative to explain each metric, and they publicize an action plan for the next quarter.

You can see more of this discussion online by clicking here.

Thursday, May 31, 2007

The Marketing Mix Model Grows Up

I’ll be honest — a couple of years ago when marketing mix models started to catch on, I wasn’t entirely enthused. Like any new measurement technique or tool, I felt MMMs just skimmed the surface of tactical optimization when, to offer real value, they really needed to be used as a strategy support tool. But MMMs have gotten better at doing that. Specifically, today’s marketing mix models:

- Provide more operational guidance, aligning increases or decreases in marketing campaign spending with channel management and supply chain considerations;
- Link to trade-off analyses on a market segment or brand-equity level;
- Help companies monitor the impact of marketing programs on incremental revenue while further explaining that amorphous “baseline” number.

Improved automation is also allowing marketers to react more quickly to results by refreshing the models monthly, allowing for more frequent changes in marketing support planning.
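For readers who haven’t looked under the hood: at its core, a mix model is a regression that decomposes sales into a baseline plus per-channel contributions, typically after an “adstock” transform to capture carry-over effects from prior periods. Here’s a minimal sketch; all the data is simulated, and the 0.5 decay rate is an assumed parameter, not a recommendation:

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Carry-over effect: each week's effective pressure includes a
    decaying share of prior weeks' spend (decay rate is an assumption)."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

# Simulated weekly data: a flat baseline plus two media channels.
rng = np.random.default_rng(0)
weeks = 104
tv = rng.uniform(0, 100, weeks)
digital = rng.uniform(0, 50, weeks)
sales = 500 + 2.0 * adstock(tv) + 3.5 * adstock(digital) \
        + rng.normal(0, 20, weeks)

# Fit: sales = baseline + beta_tv * adstocked TV + beta_dig * adstocked digital
X = np.column_stack([np.ones(weeks), adstock(tv), adstock(digital)])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
baseline, beta_tv, beta_dig = coefs
print(f"baseline={baseline:.0f}, tv={beta_tv:.2f}, digital={beta_dig:.2f}")
```

Real models layer on saturation curves, seasonality, pricing, and competitive terms, but the baseline-versus-incremental decomposition above is the amorphous “baseline” number the post refers to.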

Ever the devil’s advocate though, I still see some need for improvement. Specifically, modelers need to:

- Ensure that the organization as a whole understands the assumptions and limitations of the marketing mix model;
- Realize that laying the acceptance groundwork around those assumptions is as important and challenging as building the algorithms or collecting the data;
- Be aware of changes in the competitive environment and how they affect model results; this is an area where marketing mix models often break down;
- Understand that the model will, on occasion, fail; expect it and plan for it.

Finally, don’t stop at marketing mix models. Risk is magnified by over-reliance on a single tool. Today’s marketing measurement toolkit needs to be much broader. A deep understanding of brand drivers, customer behavior, and customer value requires input from tools and techniques outside the mix model as well as within it.

If you’re interested in more about marketing mix models, as well as how to evolve them, click here for the article on our website.

Thursday, May 17, 2007

Net Promoter Score — Beware the Ceiling

The popularity of the Net Promoter Score as a means to link customer experience execution to financial value creation has been astounding. In just the past two years, American businesses of all sizes, types, and structures have begun asking customers about their proclivity to recommend them, and the reasons why or why not. Whether you’re a fan of NPS or prefer other methodologies, it would be difficult to dispute that, in the aggregate, this has been a very positive trend, one that has elevated executives’ consciousness of the link between investing in customer experience improvement and creating shareholder value.
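For readers newer to the metric: NPS is computed from a single 0-to-10 “How likely are you to recommend us?” question, with 9s and 10s counted as promoters and 0 through 6 as detractors. A quick illustration (the sample responses are invented):

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 likelihood-to-recommend scale."""
    if not ratings:
        raise ValueError("no responses")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 4 promoters, 1 passive (7-8), 3 detractors out of 8 -> (4-3)/8 = 12.5
print(net_promoter_score([10, 9, 9, 10, 8, 6, 3, 0]))  # → 12.5
```

The simplicity is the whole appeal, and also the heart of the ceiling problem below: once respondents learn that a rote “10” ends the call, the score inflates without the underlying experience changing.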

But what happens when we, as consumers, begin getting so many surveys asking about our likelihood to recommend that we become numb? What happens when we realize en masse that we can end the call more quickly by just answering “10” and “no”?

It’s not hard to envision how the simplicity of NPS surveys will eventually lose effectiveness. Familiarity breeds contempt. Respondents will lie with greater frequency. NPS scores will begin to rise — artificially — while the relative competitive gaps begin to disappear. By that point, it will be too late. We’ll have unknowingly made some bad decisions on increasingly flawed data and be left without a transition strategy.

This scenario may not play out for some time yet. But I think it’s helpful to be aware of the inevitability of it; to build early indicators into our current NPS review processes; and to begin imagining what the next solution may look like.

If you’ve had any particular experience with this dynamic, I’d love to hear from you.

Wednesday, April 18, 2007

When Segmentation Loses its Meaning

Segmentation seems to be on its way to becoming one of those vendor co-opted words that is fast losing its meaning.

As I’ve painfully learned with the word “dashboard”, once a concept catches on it quickly becomes perverted beyond recognition until, on some level, everyone is doing it and all consultants and technology providers are experts in it (think CRM). I suppose this is just another benefit of living in the digital age.

Lately, I’ve seen “segmentation” used to describe approaches for:

- finding the most likely prospects for an existing product/service set;

- developing the most effective ad copy; and

- explaining the similarities and differences between competitors in a given market space.

Those of you who know me know that I’m not too hung up on definitions. But I am pretty hung up on meaning. To me, “segmentation” is the process of defining naturally occurring groups of homogeneous prospects or customers who share a common need-set, determining the relative size of each group (from the perspective of profit potential), prioritizing their attractiveness, and then developing go-to-market plans best suited to appeal to each.

First off, this can’t be done without quantitative research of some sort. There are many techniques with different circumstantial strengths. But if someone is talking about segmentation based on “some interviews”, run. Fast.

Second, segmentation based on attitudes, beliefs, or perceptions is fine for writing copy or creating positioning statements, but what does it really tell you about the not-so-subtle trade-offs in constructing the real value proposition of the product/service?

A focus on feelings may cause you to miss the importance of making your product easier to spot on the shelf, or modifying your distribution-channel structure, or even extending credit to achieve competitive advantage. Incorporating attitudes into segmentation is smart. Basing the entire segmentation on them can be tragically flawed.

Finally, segmentation without sizing is irrelevant. Whether you size on revenue or (preferably) contribution-margin opportunity, you need segmentation to help you understand the relative opportunity of allocating your resources one way versus another. Good segmentation forces you to make hard decisions because it shows you multiple viable pathways. Great segmentation helps you quantify the risk/reward propositions and leads you to the best choice.
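To make the “quantitative, then sized” point concrete, here’s a toy sketch: cluster respondents on need-importance ratings, then size each resulting segment by the revenue it represents. Everything here is simulated, and plain k-means is just one of many clustering techniques with different circumstantial strengths:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each respondent to the nearest centroid,
    recompute centroids, and repeat until stable."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

# Simulated survey: two need-importance ratings (0-10) per respondent,
# plus each respondent's annual revenue ($k) for sizing the segments.
rng = np.random.default_rng(1)
needs = np.vstack([rng.normal([2, 8], 0.5, (50, 2)),   # price-driven
                   rng.normal([8, 2], 0.5, (50, 2))])  # service-driven
revenue = np.concatenate([rng.uniform(1, 5, 50), rng.uniform(5, 20, 50)])

labels = kmeans(needs, k=2)
for seg in range(2):
    print(f"segment {seg}: n={np.sum(labels == seg)}, "
          f"revenue=${revenue[labels == seg].sum():,.0f}k")
```

Note that the clustering alone tells you nothing about where to allocate resources; it’s the revenue rollup at the end that turns “naturally occurring groups” into a prioritization decision.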

How, you may be asking, does this relate to marketing measurement? Segmentation is the basis of all resource allocation. Segment your market properly, and your most meaningful metrics will emerge from your understanding of how to create customer value.

Let the marketer beware, though. The sad paradox of segmentation is that declining standards of imagination and process discipline increasingly mean that all segments are created equal.