Monday, October 15, 2007

Masters of Marketing Minimize Measurement Mentions

This year’s ANA “Masters of Marketing” conference in Phoenix was, as usual, the place to see and be seen. There were plenty of very interesting brand strategy stories from the likes of McDonald’s, Fidelity, Liberty Mutual, Anheuser-Busch, and AT&T. Steve Ballmer, CEO of Microsoft, made some bold predictions about the future of digital content consumption, and Al Gore introduced the fascinating new business model of Current TV (which, by the way, stands to redefine the dialogue on “engagement” far beyond its current amorphous context).

To his great credit, Bob Liodice, ANA president, asked every presenter to comment on how they were measuring the impact of their work. Unfortunately, most of the speakers successfully ducked the question through a series of politically correct, almost Greenspanian deflections:

“Well, Bob, we’re making a major investment in improving customer satisfaction and continuing to drive brand preference relative to competitors to new heights while keeping our eye on price sensitivity and working to ensure that our associates understand the essence of the brand at every touchpoint.”

Leaving me (and a few of the other financially oriented attendees) to wonder: why are the Masters so reluctant to share their true insights into measurement?

OK, so I get that measurement isn’t anywhere near as sexy to talk about as the great new commercials you just launched or your insightful brand positioning. I also get that many aspects of measurement are proprietary, and that giving away financial details of publicly held companies in such a forum might give the IR folks the cold sweats.

But by avoiding the question, these “Masters of Marketing” – the very CMOs to whom the marketing community looks for direction and inspiration – are sending a clear message to their staffs and the next generation: the ol’ “brand magic” is still much more important than a specific understanding of how it produces shareholder value.

The message seems to be that, when pressed for insight into ROI, it is acceptable to point to the simultaneous increase of “brand preference” scores and sales and imply, with a sly shrug of the shoulders, that there must be some correlation there. (If you find yourself asking, “So what’s wrong with that?” please read the entire archive of this blog before continuing to the next paragraph.)

Having met and spoken with many Masters of Marketing about this topic, I can tell you that each and every one of them is doing things with measurement that can advance the discipline for all of us. Wouldn’t sharing those experiences be just as valuable to the community as the story of how you arrived at the insight behind that latest campaign strategy?

Only the Masters can take up the challenge of pushing measurement to the same new heights to which they’ve taken the art of integrated communication, the quality of production, and the efficiency of media. It seems to me that people so skilled in communication should be able to find a framework for sharing their learnings and best practices in measurement in ways that are interesting and informative while still guarding against competitive disclosure.

Living up to the title of Masters of Marketing means going beyond message strategy, humor, and clever copy lines. We owe that to those we serve today and to those who will follow in our footsteps; they will need a far better grounding in the explicit links between marketing investment and financial return to answer the increasingly sophisticated questions they’ll get from the CEO, the CFO, and the Board.

So the next time Bob asks, “How do you measure the impact of that on your bottom line?” think about seizing the opportunity to send a really important message.

And Bob, thanks for asking. Keep the faith.

Monday, October 08, 2007

Use It or Lose It

Do you have any leftover 2007 budget dollars burning a hole in your pocket, dollars you have to spend by the end of the year or lose? Here’s an idea: Consider investing in the future.

By that, I mean consider investing in ways to identify some of your key knowledge gaps and to prioritize strategies for filling them. Or investing in the development of a road map toward better marketing measurement: What would the key steps look like? In what order would you want to progress? What would the road map require in terms of new skills, tools, or processes?

It seems kind of odd, but while the pain of the 2008 planning process is still fresh in your mind, start thinking about what you can do better for 2009. By orienting some of those leftover 2007 dollars toward future improvements, you might make next year’s planning process just a bit less painful.

Wednesday, October 03, 2007

Lessons Learned – The Wrong Metrics

In the course of developing dashboards for a number of our Global 1000 clients over the past few years, we’ve learned many lessons about what really works vs. what we thought would work. One of them is the recognition that, try as you might, only about 50% of the metrics you initially choose for your dashboard will actually be the right ones. That means half of your metrics are, in all likelihood, going to be proven wrong over the first 90 to 180 days.

Are they wrong because they were poorly chosen? No. They’re wrong because some of the metrics you selected won’t be nearly as enlightening as you imagined they would be. Perhaps they don’t really tell the story you thought they were going to tell. Others may be wrong because they will require several on-the-fly iterations before the story really begins to emerge. You might need to filter them differently (by business unit or geography, for example), or you might need to rework the way they’re being calculated or portrayed. Regardless, some minor noodling is not uncommon when trying to get a metric to fulfill its potential for insight.
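For the analytically inclined, here’s roughly what that kind of re-cut looks like in practice. To be clear, this is just an illustrative sketch, not anything lifted from a client engagement: the file name and the columns (“period,” “business_unit,” “brand_preference”) are hypothetical stand-ins for whatever your dashboard actually tracks.

```python
import pandas as pd

# Hypothetical flat table of metric observations; all names here are
# illustrative stand-ins, not a real client data source.
df = pd.read_csv("metric_observations.csv")

# Rolled up across the whole company, the metric may look flat...
overall = df.groupby("period")["brand_preference"].mean()
print(overall)

# ...but re-cut by business unit, a real story often emerges.
by_unit = (
    df.groupby(["business_unit", "period"])["brand_preference"]
    .mean()
    .unstack("business_unit")
)
print(by_unit)
```

The point isn’t the code; it’s that the same underlying numbers, cut a different way, can turn an apparent dud into a genuinely useful metric.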

Still other metrics will become lightning rods for criticism, which will require some pacification or compromise. In the process, you may have to sacrifice some metrics in order to move past political obstacles and engender further support for the dashboard overall. If one of your organization’s key opinion leaders is undermining the credibility of the entire dashboard by criticizing a single metric, you may find it more effective to cut the metric in question (for the time being, at least) and do more alignment work on it.

Finally, many of your initial metrics simply may not offer any real diagnostic or predictive insight over time. You may pretty quickly come to realize that a metric you thought was going to be insightful doesn’t have sufficient variability, or that it offers little more than a penetrating glance into the obvious.
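A quick screen for that kind of flatline might look something like the sketch below. Again, purely illustrative: the metric names are hypothetical and the threshold is arbitrary, but the idea is simple. A metric whose period-to-period values barely move is a candidate for the chopping block.

```python
import pandas as pd

# Same hypothetical table and illustrative metric names as in the
# earlier sketch; nothing here comes from an actual engagement.
df = pd.read_csv("metric_observations.csv")
metric_cols = ["brand_preference", "awareness", "satisfaction"]

for col in metric_cols:
    trend = df.groupby("period")[col].mean()
    cv = trend.std() / trend.mean()  # coefficient of variation over time
    # 0.05 is an arbitrary cutoff; tune it against your own history.
    if cv < 0.05:
        print(f"{col}: barely moves (CV = {cv:.3f}), candidate to cut or rethink")
```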

So the fact that half of your initial metrics will be proven wrong over the several months after you roll out your dashboard is bad news, right? No – it’s actually a good sign. It shows that the organization has embraced the dashboard as a learning tool, and that the flexibility to modify it is inherent in the process of improving and managing the dashboard as you go.

Here’s my advice: When implementing a new dashboard, be prepared to iterate rapidly over the first 90 days in response to a flood of feedback. After that initial flurry, develop a release schedule for updates and stick to it, so you can make improvements on a more systematic basis. But above all, make sure you’re responsive to the key constituents, and that those constituents have a clear understanding of how their input is being reflected in the dashboard. Or, if it’s not being reflected there, be prepared to explain why.