Monday, November 12, 2007

TiVo to the Rescue?

Hot on the heels of the Google/Nielsen partnership, TiVo has entered the measurement fray with its announcement that it will begin providing advertisers with data on the viewing (and skipping) habits of consumers using TiVo's panel of 20,000 set-top boxes. This has the potential to be far more illuminating than the Nielsen data, as TiVo can provide insight into who is skipping ads (both at the demographic/lifestyle segment level and at the individually addressable level) and which ones they are skipping. The result could be a wealth of information on ad performance, sliced and diced on many dimensions.
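To make "sliced and diced" concrete, here's a minimal sketch of the kind of tabulation this data could support, assuming a hypothetical export with one row per ad exposure. The column names and segment labels are my own illustrations, not TiVo's actual schema:

    import pandas as pd

    # Hypothetical export: one row per ad exposure from the set-top panel.
    # Column names and segment labels are illustrative, not TiVo's schema.
    exposures = pd.DataFrame({
        "segment": ["young_family", "young_family", "empty_nester",
                    "empty_nester", "single_urban", "single_urban"],
        "ad_id":   ["A1", "A2", "A1", "A2", "A1", "A2"],
        "skipped": [1, 0, 0, 0, 1, 1],   # 1 = fast-forwarded past the ad
    })

    # Skip rates by audience segment and by creative -- two of the many
    # dimensions the data could be cut on.
    print(exposures.groupby("segment")["skipped"].mean())
    print(exposures.groupby("ad_id")["skipped"].mean())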

Even more interesting, TiVo will offer advertisers the ability to learn (on a blind basis) the viewing habits of their actual customers. By providing TiVo with a customer file, marketers can get insight into exactly how many (and which types) of customers are skipping their ads, which should help both fine-tune message execution and enhance negotiations with networks.

TiVo still can’t tell us who is watching the ads – only who isn’t. But with its jump on the interactive feature options, TiVo may be faster to offer advertisers the back-end direct response element of the engagement chain.

This is a promising frontier for advertisers seeking to understand the actual payback of their advertising investments. It’s not in itself a magic bullet, but another step forward in getting the objective insight we need to draw credible conclusions.



Monday, November 05, 2007

Google to Dominate Dashboards?

Having conquered the worlds of web search and analytics, is Google about to corner the market on marketing dashboards?

Hardly.

What Google is doing is coordinating online ad display data with offline (TV) ad exposures. Google is partnering with Nielsen to take data directly from Nielsen’s set-top-box panel of 3,000 households nationwide and mash it up with Google analytics data to find correlations between on- and off-line exposure. The premise is, I’m sure, to help marketers integrate this data with their own sales information and find statistical correlation between the two as a means of assessing the impact of the advertising at a high level. By using data only from the set-top box, Google is able to present offline ad exposure data with the same certainty as it does online – e.g., we know that this ad was actually shown. Unfortunately, we don’t know if the ad (online or off) was actually seen, never mind absorbed.
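As a rough illustration of what that high-level correlation amounts to (with invented weekly figures, not real Nielsen or Google data), the calculation itself is no more than this:

    import numpy as np

    # Illustrative weekly figures -- not real Nielsen or Google data.
    tv_exposures = np.array([120, 135, 150, 110, 160, 170, 140, 155])  # panel exposures (000s)
    weekly_sales = np.array([510, 530, 545, 500, 560, 580, 540, 555])  # units sold (000s)

    # Pearson correlation between offline exposure and sales.
    r = np.corrcoef(tv_exposures, weekly_sales)[0, 1]
    print(f"correlation: {r:.2f}")

    # A high r says the two series move together; it does not, by itself, separate
    # advertising's effect from promotions, seasonality, or sales force activity.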

However, with the evolution of interactive features in set-top boxes, it won’t be long before we begin to get sample data of people “clicking” on TV ads, much like we do online ads. So we’ll get the front end of the engagement spectrum (shown) and the back end (responded). But we won’t get anything from the middle to give us any diagnostic or predictive insights to enhance the performance of our marketing campaigns.

A full marketing dashboard integrates far more than just enhanced ratings data and looks deeper than just summary correlations between ads shown and sales to dissect the actual cause of sales. Presuming that sales were driven by advertising in the Google dashboard model would potentially ignore the influence of a great many other variables like trade promotions, channel incentives, and sales force initiatives.

Drawing conclusions about advertising’s effect solely on the basis of looking at sales and ratings would quickly undermine the credibility of the marketing organization. So while the Google dashboard may be a welcome enhancement, it’s not by any stretch a panacea for measuring marketing effectiveness.

It seems to me that Google has created better tools. But through their lens of selling advertising, they’re perpetuating a few big mistakes.

Monday, October 15, 2007

Masters of Marketing Minimize Measurement Mentions



This year’s ANA “Masters of Marketing” conference in Phoenix was, as usual, the place to see and be seen. There were plenty of very interesting brand strategy stories from the likes of McDonald’s, Fidelity, Liberty Mutual, Anheuser-Busch, and AT&T. Steve Ballmer, CEO of Microsoft, offered some bold predictions for the digital content consumption world of the future, and Al Gore introduced the fascinating new business model of Current TV (which, btw, stands to redefine the dialogue on “engagement” far beyond the current amorphous context).

To his great credit, Bob Liodice, ANA president, asked every presenter to comment on how they were measuring the impact of their work. Unfortunately, most of the speakers successfully ducked the question through a series of politically correct, almost Greenspanian deflections:

“Well, Bob, we’re making a major investment in improving customer satisfaction and continuing to drive brand preference relative to competitors to new heights while keeping our eye on price sensitivity and working to ensure that our associates understand the essence of the brand at every touchpoint.”


Leaving me (and a few of the other financially oriented attendees) to wonder why – as in why are the Masters so reluctant to share their true insights into measurement?

OK, so I get that measurement isn’t anywhere nearly as sexy to talk about as the great new commercials you just launched or your insightful brand positioning. I also get that many aspects of measurement are proprietary, and giving away financial details of publicly held companies in such a forum might give the IR folks the cold sweats.

But by avoiding the question, these “Masters of Marketing” – the very CMOs to whom the marketing community looks for direction and inspiration – are sending a clear message to their staffs and the next generation that the ol’ “brand magic” is still much more important than the specific understanding of how it produces shareholder value.

The message seems to be that, when pressed for insight into ROI, it is acceptable to point to the simultaneous increase of “brand preference” scores and sales and imply, with a sly shrug of the shoulders, that there must be some correlation there. (If you find yourself asking, “So what’s wrong with that?” please read the entire archive of this blog before continuing to the next paragraph.)

Having met and spoken with many Masters of Marketing about this topic, I can tell you that each and every one of them is doing things with measurement that can advance the discipline for all of us. Wouldn’t sharing these experiences be just as important to the community as how you came to the insight for that latest campaign strategy?

Only the Masters can take up the challenge of pushing measurement to the same new heights as they’ve taken the art of integrated communication, the quality of production, and the efficiency of media. It seems to me that people so skilled in communication should be able to find a framework for sharing their learnings and best practices in measurement in ways that are interesting and informative while still protecting competitively sensitive information.

Living up to the title of Masters of Marketing means going beyond message strategy, humor, and clever copy lines. We owe that to those we serve today and those who will follow in our footsteps, who will need a far better grounding in the explicit links between marketing investment and financial return to answer the increasingly sophisticated questions they’ll get from the CEO, CFO, and the Board.

So the next time Bob asks, “How do you measure the impact of that on your bottom line?” think about seizing the opportunity to send a really important message.

And Bob, thanks for asking. Keep the faith.

Monday, October 08, 2007

Use It or Lose It

Do you have any leftover 2007 budget dollars that are burning a hole in your pocket that you have to spend by the end of the year or lose? Here’s an idea: Consider investing in the future.

By that, I mean consider investing in ways to identify some of your key knowledge gaps and prioritize some strategies to fill them. Or investing in development of a road map toward better marketing measurement: What would the key steps look like? In what order would you want to progress? What would the road map require in terms of new skills, tools or processes?

It seems kind of odd, but while the pain of the 2008 planning process is still fresh in your mind, start thinking about what you can do better for 2009. By orienting some of those leftover available 2007 dollars toward future improvements, you might make next year’s planning process just a bit less painful.

Wednesday, October 03, 2007

Lessons Learned – The Wrong Metrics

In the course of developing dashboards for a number of our Global 1000 clients over the past few years, we’ve learned many lessons about what really works vs. what we thought would work. One of them is a recognition that, try as you might, only about 50% of the metrics you initially choose for your dashboard will actually be the right ones. That means half of your metrics are, in all likelihood, going to be proven to be wrong over the first 90 to 180 days.

Are they wrong because they were poorly chosen? No. They’re wrong because some of the metrics you selected won’t be nearly as enlightening as you imagined they would be. Perhaps they don’t really tell the story you thought they were going to tell. Others may be wrong because they will require several on-the-fly iterations before the story really begins to emerge. You might need to filter them differently (by business unit or geography, for example) or you might need to recalculate the way they’re being portrayed. Regardless, some minor noodling is not uncommon when trying to get a metric to fulfill its potential for insight.

Still other metrics will become lightning rods for criticism, which will require some pacification or compromise. In the process, you may have to sacrifice some metrics in order to move past political obstacles and engender further support for the overall effect of the dashboard. If one of your organization’s key opinion leaders is undermining the entire credibility of the dashboard by criticizing a single metric, you may find it more effective to cut the metric in question (for the time being, at least) and do more alignment work on it.

Finally, many of your initial metrics simply may not offer any real diagnostic or predictive insight over time. You may pretty quickly come to realize that a metric you thought was going to be insightful doesn’t have sufficient variability to it, or it may not offer much more than a penetrating glance into the obvious.

So the fact that half of the initial metrics will be proven wrong over the course of several months after you roll out your dashboard is bad news, right? No – it’s actually a good sign. It shows that the organization has embraced the dashboard as a learning tool, and that the flexibility to modify it is inherent in the process of improving and managing the dashboard as you go.

Here’s my advice: When implementing a new dashboard, be prepared to iterate rapidly over the first 90 days in response to a flood of feedback. After that initial flurry, develop a release schedule for updates and stick to it, so you can make improvements on a more systematized basis. But above all, make sure you’re responsive to the key constituents and that those constituents have a clear understanding of how their input is being reflected in the dashboard. Or if it’s not being reflected there, be prepared to explain why.

Wednesday, September 19, 2007

Knowing Is Believing

Now that 2008 budget season is upon us, it’s time to identify knowledge gaps in the assumptions underlying your marketing plan – and to lay out (and fund) a strategy for filling them.

We recently published a piece in MarketingNPV Journal which tackles this issue. In “Searching for Better Planning Assumptions? Start with the Unknowns” we suggested:

A marketing team’s ability to plan effectively is a function of the knowns and the unknowns of the expected impact of each element of the marketing mix. Too often, unfortunately, the unknowns outweigh the hard facts. Codified knowledge is frequently limited to how much money lies in the budget and how marketing has allocated those dollars in the past. Far less is known (or shared) about the return received for every dollar invested. As a result, marketers are left to fill the gaps with a mix of assumptions, conventional wisdom, and the occasional wild guess – not exactly a combination that fills a CMO with confidence when asked to recommend and defend next year’s proposed budget to the executive team.


Based on our experience and that of some of our CMO clients, we offer a framework to help CMOs get their arms around what they know, what they think they know, and what they need to know about their marketing investments. The three steps are:

1. Audit your knowledge. The starting point for a budget plan comes in the form of a question: What do we need to know? The key is to identify the knowledge gaps that, once filled, can lessen the uncertainty around the unknown elements, which will give you more confidence to make game-changing decisions.

2. Prioritize the gaps. For each gap or unanswered question, it’s important to ask how a particular piece of information would change the decision process. It might cause you, for example, to completely rethink the scope of a new program, which could have a material impact on marketing performance.

3. Get creative with your testing methods. Marketers have many methods for filling the gaps at their disposal; some are commonly used, others are underutilized. The key is determining the most cost-effective methods – from secondary research to experimental design techniques – to gather the most relevant information.

Don’t let the unknowns persist another year. Find ways to identify them, prioritize them, and fund some exploratory work so you’re legitimately smarter when the next planning season rolls around.

Tuesday, September 11, 2007

Where Have All the CMOs Gone?

If I had to sum up in a phrase what I heard Monday at the ANA Accountability Forum in Palm Beach, Fla., it would be “hard work.”

We heard several stories of marketers (IBM, VF Corp., Johnson & Johnson, Kimberly Clark, Siemens, and Discovery Networks) who are at various stages of understanding the payback on their marketing investments. Some have established impressive abilities to ascertain directional returns via marketing mix models supplemented with online research panels. Others have defined their vision, built the requisite foundation (aligning roles, goals, and expectations), and are beginning to see some results. Yet many of the attendees still appear to be circling around the need for better measurement, looking for a point of entry.

Depending upon how one interprets the survey of measurement and accountability practices released at the ANA event, somewhere between 20% and 40% of large marketing organizations are doing measurement reasonably well. Another 20% to 40% are making some progress, but still addressing large gaps of a cultural, organizational, or technical nature. And the remaining 20% to 40% are, apparently, selling far more product than they can produce, ergo they’re not interested in measurement.

Sadly, these numbers do not seem to have improved from surveys done in the past.

I have two theories on why we may have stalled.

First, the truth is getting out: Measurement is hard work. And now that all the low-hanging fruit of defining metrics and building models using available data has been picked, the real insights have proven to be hiding higher up the tree. It’s one thing to know you have a data gap in an all-important metric, but quite another to secure the resources to close it while developing a credible proxy in the interim. If you’ve bought all the relevant syndicated data and are still left with gaping holes in your spend-to-return equation, you face the possibility of having to build your own data, from scratch, to travel that all-important last mile. And if you’ve been standing on the periphery looking for a cheaper, faster, less organizationally intrusive approach to give you great insight at little cost (financially or politically), it’s not going to happen. Get out your ladder and start picking fruit.

Second, the actions of CMOs indicate that they may be losing interest in the topic. Events like the one I’m attending this week used to attract a strong following of CMO types. This year, very few. Have they lost interest? Do they know something the rest of us don’t? Is measurement fading as an issue with CEOs and CFOs? Do CMOs not like Florida in September?

The CMOs’ absence from the dialogue suggests they’re delegating really difficult organizational problems to smart, hard-working people who unfortunately lack the political clout necessary to solve them. Maybe the CMO has sufficiently “checked the box” by assigning people to work the issue. Or maybe it’s just not a very fun project to work on relative to the excitement of strategic planning or shuffling the organizational chart. If the CMO is losing interest, progress will be measured in minor increments, and marketing is unlikely ever to achieve a critical mass of enlightenment.

Whatever the reason, the result is the same: Marketing loses credibility and influence with each passing quarter in which the CMO can’t answer the difficult questions about the relationship between spend and returns. The opportunity cost, both to the company and to the career of the marketer, is staggering.

Bottom line: This measurement stuff is hard work. As I see it, the CMO has two choices: Roll up your sleeves and get into the thick of it, or start calling the recruiters and let the next person worry about it.

Monday, June 18, 2007

Marketing and Finance on the Same Team: Building the Dashboard Together

I meet with dozens of marketing executives every year, and in the vast majority of these meetings, I hear frustration over an inability to effectively communicate marketing program value to the CFO. The two departments often disagree on which metrics — and which programs — are the most significant to the bottom line, as well as on the interpretation of results.

The problem is that when the chemistry isn’t just right, the personal relationship bridges aren’t strong, and the awareness of the range of solutions is limited, the collaborative spirit surrounding marketing measurement devolves quickly into a power struggle. And in that environment, everything stalls and nobody wins.

We recently had a chance to talk with several marketers, including KeyCorp, Bank of America, Yahoo!, and Home Depot. Common to all was an effort to build joint ownership between marketing and finance over marketing measurement responsibility. The result seems to include setting goals that are better aligned up front with the P&L, as well as enhanced interdepartmental communication and an improved ability to interpret and act on results.

Here’s a snapshot of some of the things Marketing is doing. They:
• Build shared goals up front;
• Get up-front buy-in from Finance and corporate executives;
• Align marketing metrics with the P&L;
• Involve Finance in dashboard design;
• Provide Finance with full transparency;
• Give Finance partial ownership of the dashboard;
• Have the corporate scorecard mirror the LOB scorecards;
• Go beyond deciding which metrics to track to deciding how to distribute the results and to whom;
• Focus on the things that are really going to make a difference in company performance;
• Reach out to all stakeholders and LOBs and ask what’s important to them; then build in different levels for different stakeholders;
• Are realistic about trying to solve things they don’t have control over, or where there may be gaps in information; and
• Don’t leave the numbers open to interpretation; they use a narrative to explain each metric, and they publicize an action plan for the next quarter.

You can see more of this discussion online by clicking here.

Thursday, May 31, 2007

The Marketing Mix Model Grows Up

I’ll be honest — a couple of years ago when marketing mix models started to catch on, I wasn’t entirely enthused. Like any new measurement technique or tool, I felt MMMs just skimmed the surface of tactical optimization when, to offer real value, they really needed to be used as a strategy support tool. But MMMs have gotten better at doing that. Specifically, today’s marketing mix models:

- Provide more operational guidance, aligning increases or decreases in marketing campaign spending with channel management and supply chain considerations;
- Link to trade-off analyses on a market segment or brand-equity level;
- Help companies monitor the impact of marketing programs on incremental revenue while further explaining that amorphous “baseline” number.
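For readers who haven't looked under the hood, here's a minimal sketch of the regression that sits at the core of most mix models, with a simple adstock transformation to capture carryover effects. The data and decay parameter are illustrative assumptions, not any vendor's implementation:

    import numpy as np

    def adstock(spend, decay=0.5):
        """Carry part of each period's ad pressure into the next (geometric decay)."""
        out = np.zeros_like(spend, dtype=float)
        for t, x in enumerate(spend):
            out[t] = x + (decay * out[t - 1] if t > 0 else 0.0)
        return out

    # Illustrative weekly inputs (spend in $000s) and sales (units in 000s).
    tv    = np.array([100, 120,  90, 150, 130, 110, 160, 140], dtype=float)
    promo = np.array([  0,  20,   0,  30,   0,  10,  25,   0], dtype=float)
    sales = np.array([500, 540, 505, 575, 545, 520, 585, 550], dtype=float)

    # Design matrix: intercept (the "baseline"), adstocked TV, trade promotion.
    X = np.column_stack([np.ones(len(sales)), adstock(tv), promo])
    coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
    baseline, tv_beta, promo_beta = coef

    print(f"baseline={baseline:.1f}, tv effect={tv_beta:.2f}, promo effect={promo_beta:.2f}")

The separation of a baseline from incremental effects is exactly where the "amorphous baseline number" discussion above comes from: everything the model can't attribute to a marketing input lands in that intercept.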

Improved automated functionality is also allowing marketers to react more quickly to results, for example by refreshing the models monthly to allow for more frequent changes in marketing support planning.

Ever the devil’s advocate though, I still see some need for improvement. Specifically, modelers need to:

- Ensure that the organization as a whole understands the assumptions and limitations of the marketing mix model;
- Realize that laying the acceptance groundwork around those assumptions is as important and challenging as building the algorithms or collecting the data;
- Be aware of changes in the competitive environment and how they affect your results; this is an area where marketing mix models often break down;
- Understand that the model will, on occasion, fail; expect it and plan for it.

Finally, don’t stop at marketing mix models. Risk is magnified by over-reliance on a single tool. Today’s marketing measurement toolkit needs to be much broader. Deep understanding of brand drivers, customer behavior, and value requires input from tools and techniques outside the mix model as well as inside it.

If you’re interested in more about marketing mix models, as well as how to evolve them, click here for the article on our website.

Thursday, May 17, 2007

Net Promoter Score — Beware the Ceiling

The popularity of Net Promoter Scores as a means to link customer experience execution to financial value creation has been astounding. In just the past two years, American businesses of all sizes, types, and structures have begun asking customers about their proclivity to recommend them and the reasons why or why not. Whether you’re a fan of NPS or prefer other methodologies, it would be difficult to dispute that, in the aggregate, this has been a very positive trend that has elevated the consciousness of executives to the link between investing in customer experience improvement and creating shareholder value.
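For reference, the score itself is simple arithmetic: the percentage of promoters (9s and 10s on the 0-10 "likelihood to recommend" scale) minus the percentage of detractors (0 through 6). A minimal sketch, with made-up responses:

    def net_promoter_score(ratings):
        """ratings: 0-10 answers to 'How likely are you to recommend us?'"""
        n = len(ratings)
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / n

    # Illustrative responses, not real survey data.
    print(net_promoter_score([10, 9, 8, 7, 10, 6, 3, 9, 10, 5]))  # 20.0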

But what happens when we, as consumers, begin getting so many surveys asking about our likelihood to recommend that we become numb? What happens when we realize en masse that we can end the call quicker by just answering “10” and “no”?

It’s not hard to envision how the simplicity of NPS surveys will eventually lose effectiveness. Familiarity breeds contempt. Respondents will lie with greater frequency. NPS scores will begin to rise — artificially — while the relative competitive gaps begin to disappear. By that point, it will be too late. We’ll have unknowingly made some bad decisions on increasingly flawed data and be left without a transition strategy.

This scenario may not play out for some time yet. But I think it’s helpful to be aware of the inevitability of it; to build early indicators into our current NPS review processes; and to begin imagining what the next solution may look like.

If you’ve had any particular experience with this dynamic, I’d love to hear from you.

Wednesday, April 18, 2007

When Segmentation Loses its Meaning

Segmentation seems to be on its way to becoming one of those vendor co-opted words that is fast losing its meaning.

As I’ve painfully learned with the word “dashboard”, once a concept catches on it quickly becomes perverted beyond recognition until, on some level, everyone is doing it and all consultants and technology providers are experts in it (think CRM). I suppose this is just another benefit of living in the digital age.

Lately, I’ve seen “segmentation” used to describe approaches for:

- finding the most likely prospects for an existing product/service set;

- developing the most effective ad copy; and

- explaining the similarities and differences between competitors in a given market space.

Those of you who know me know that I’m not too hung up on definitions. But I am pretty hung up on meaning. To me, “segmentation” is the process of defining naturally occurring groups of homogenous prospects or customers who share a common need-set, determining the relative size of each group (from the perspective of profit potential), prioritizing their attractiveness, and then developing go-to-market plans best suited to appeal to each.

First off, this can’t be done without quantitative research of some sort. There are many techniques with different circumstantial strengths. But if someone is talking about segmentation based on “some interviews”, run. Fast.
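To illustrate what "quantitative research of some sort" can look like in its simplest form, here's a sketch that clusters survey respondents on their stated need importances and then sizes the resulting groups. The data is invented, and the technique shown (k-means) is just one of the many techniques with different circumstantial strengths:

    import numpy as np
    from sklearn.cluster import KMeans

    # Invented survey data: each row is a respondent's 1-7 importance ratings
    # for three needs (price, convenience, service).
    rng = np.random.default_rng(0)
    X = np.vstack([
        rng.normal([6.5, 3.0, 3.0], 0.5, size=(40, 3)),  # price-driven group
        rng.normal([3.0, 6.5, 4.0], 0.5, size=(35, 3)),  # convenience-driven group
        rng.normal([3.5, 4.0, 6.5], 0.5, size=(25, 3)),  # service-driven group
    ])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Sizing: share of respondents per segment (a stand-in for profit-potential sizing).
    for seg in range(3):
        members = X[labels == seg]
        print(f"segment {seg}: {len(members) / len(X):.0%} of sample, "
              f"mean need ratings = {members.mean(axis=0).round(1)}")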

Second, segmentation based on attitudes, beliefs, or perceptions is fine for writing copy or creating positioning statements, but what does it really tell you about the not-so-subtle trade-offs in constructing the real value proposition of the product/service?

A focus on feelings may cause you to miss the importance of making your product easier to spot on the shelf, or modifying your distribution-channel structure, or even extending credit to achieve competitive advantage. Incorporating attitudes into segmentation is smart. Basing the entire segmentation on them can be tragically flawed.

Finally, segmentation without sizing is irrelevant. Whether you size the opportunity on revenue or (preferably) contribution margin, you need segmentation to help you understand the relative opportunity of allocating your resources one way versus another. Good segmentation forces you to make hard decisions because it shows you multiple viable pathways. Great segmentation helps you quantify the risk/reward propositions and leads you to the best choice.

How, you may be asking, does this relate to marketing measurement? Segmentation is the basis of all resource allocation. Segment your market properly, and your most meaningful metrics will emerge from your understanding of how to create customer value.

Let the marketer beware, though. The sad paradox of segmentation is that declining standards of imagination and process discipline increasingly mean that all segments are created equal.

Thursday, March 29, 2007

Understanding the Accounting and Marketing Benefits of Customer Franchise Value

It can often be difficult — sometimes downright impossible — for marketing and finance to coexist when finance needs short-term results to satisfy generally accepted accounting principles (GAAP) and marketing is trying to build overall brand equity, which leads to long-term customer relationship value.

There is much involved in arriving at ROI from the marketing point of view. The accounting department, however, sees only that the shortest distance to any destination is a straight line — i.e., 2007 marketing activity should lead directly to 2007 sales.


Customer Franchise Value (CFV) can help bridge the gap between the two departments and help marketing give finance what they need. CFV is a metric that gives the CFO a tangible number to get his hands around that explains payback on marketing efforts today — key emphasis on the word today. In short, it’s a “net present value” snapshot of your current customer base.

At the same time, it serves as a more disciplined way of helping marketers understand the tangible, financial value being created over time — not just the strategic value. Basically, it gives marketers the breathing room they need to invest in longer-term sales growth.
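The arithmetic behind that "net present value snapshot" is straightforward. Here's a minimal sketch that discounts each customer's expected future margin, weighted by the probability they're still a customer; the inputs are illustrative, and the exact formulation discussed in the article may differ:

    def customer_franchise_value(annual_margin, retention_rate, discount_rate, horizon_years=10):
        """NPV of one customer's expected future margins, survival-weighted by retention."""
        return sum(
            annual_margin * retention_rate ** t / (1 + discount_rate) ** t
            for t in range(1, horizon_years + 1)
        )

    # Illustrative inputs: $200/year margin, 80% retention, 10% discount rate.
    per_customer = customer_franchise_value(200, 0.80, 0.10)
    print(f"CFV per customer: ${per_customer:,.0f}")
    print(f"CFV of a 50,000-customer base: ${50_000 * per_customer:,.0f}")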


In our latest issue of MarketingNPV, you’ll find a robust discussion on this subject that will help you create your own customer franchise value metric system.


Click here for the article on our website.

Thursday, March 15, 2007

Big News in the World of Marketing Measurement

I don’t spend a lot of time talking about our firm and what we do – but I need to share some big news….

Dave Reibstein, William S. Woodside Professor of Marketing at Wharton, past Executive Director of the Marketing Science Institute, and co-author of the recent book Marketing Metrics: 50+ Metrics Every Executive Should Master, is joining our firm.

I’m delighted to be working with Dave and his colleague from CMO Partners, Peter McNally. They are world-class marketing strategists with a strong financial orientation and great expertise at selecting the right marketing metrics to diagnose and predict performance. Working together (aside from having some fun), we’ll be looking to conquer the challenges of effective and efficient marketing resource allocation.

If you’d like to get a sense of the things that will be driving our work, check out “10 Immutable Laws of Marketing Measurement”, a new piece co-authored by Dave and yours truly.

Monday, March 05, 2007

Beware Bogus Surveys That Kill Credibility

The popularity of the marketing measurement movement seems to have every PR-hungry consultant jumping on the “survey says” bandwagon to create some “content”. You know the type. “We asked 1,000 people what they thought about….”

The answers are supposed to provide you, the marketing executive, with a benchmark of what your “peers” are doing, so you can gauge the relative performance of your own company or department. Only they don’t. They just manipulate your desire to know and play off of your lack of technical knowledge in reading research results.

It’s ironic, isn’t it, that people who supposedly specialize in credible marketing measurement resort to scientifically flawed methods in their own marketing efforts:

- The survey samples are convenience samples, representative of no larger group (except the group of people who happened to respond to the survey).

- The motivations of the respondents to be truthful seem to pass without question.

- There is no attention paid to the non-respondents (whom one might presume are protecting some real, non-public insights).

- And the summaries exclaim how 40% of respondents said this while another 52% said that, all the while ignoring the fact that the error rates for the study may be +/- 20% or more.

If you presented such garbage information to your executive committee, chances are you’d be out on your ass quicker than your resume could get updated.

I’m not diminishing the importance of qualitative research by any means. I’m simply calling on the emerging industry of measurement consultants to adhere to the same standards they advise their clients on. If you seek publicity for qualitative work, be sure to clearly label it as such and use as many words to explain the limitations of the conclusions as you employ in proposing them.

On the client side, you should have higher standards. Ask a few key questions about anything labeled “research”:

1. Is this qualitative or quantitative? Qualitative research summaries shouldn’t be rooted in numerical comparisons across sub-samples. Their findings are only valid at the level of broad observations and hypotheses.

2. What universe is this sample representative of? Understanding the number of respondents in the context of the non-respondents, and of the group originally invited to respond, will tell you whether those who did respond really reflect your “peer” group and whether the differences reported are meaningful or manufactured.

3. What is the error factor of the findings? If they can’t say for sure, then it’s not a quantitative study, which means you should pay no attention to the actual numbers and percentages reported.
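Behind question 3 is a simple calculation worth keeping handy: the margin of error on a reported percentage, which shrinks with sample size. A sketch of the standard approximation (95% confidence, and it assumes a proper random sample, which the convenience samples criticized above don't even provide):

    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate 95% margin of error for a reported proportion p from n respondents."""
        return z * math.sqrt(p * (1 - p) / n)

    # "40% said this" -- from 24 self-selected respondents vs. a proper 1,000-person sample.
    print(f"n=24:   +/- {margin_of_error(0.40, 24):.0%}")    # roughly +/- 20%
    print(f"n=1000: +/- {margin_of_error(0.40, 1000):.1%}")  # roughly +/- 3%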

If we all apply somewhat higher standards of credibility in our own work, we will collectively advance the marketing discipline’s ability to measure itself. Failing that, we’ll continue to be accused of being more interested in PR than real results.

Thursday, February 22, 2007

Marketing and IT: Hyatt Proves That A Team Approach Is Doable

For years, marketing has been getting the short end of the stick from IT in terms of support and prioritization. IT, on the other hand, has been mopping up after marketing “experiments” with outsourced, on-demand solutions that didn’t work exactly as hoped. So how do you get the CMO and the CIO to work more closely to integrate their efforts to achieve their (presumably) common goals?

Hyatt seems to have solved the problem. They named Tom O’Toole, formerly “just” the CMO, to be CIO, too.

In an interview I did with Tom recently, he offered a few suggestions for ways to solve expectation and delivery gaps that typically form in the Marketing-IT relationship. Now before you read these, keep in mind that they were coming from the mouth of someone who spent the bulk of their career in the brand marketing world…

Tom’s suggestions for CMOs are:

1. Don’t develop and staff your own applications without at least discussing it with IT. If you do, we don’t have the expertise or the staff to support them. Most often, these systems aren’t well-documented.

2. Don’t mess around with the network. There are security concerns, bandwidth concerns, and reliability concerns. You really have no idea how problematic it can be for a network manager whose job depends upon network performance and uptime to all of a sudden have major delays or outages caused by a rogue Web server he didn’t even know was connecting. It can literally bring the entire company to a standstill.

3. Try to stick with packaged solutions. If you can recommend a solution from a vendor who has already built all the interfaces with the software we run our enterprise on and has tested them with dozens of other clients, it takes a tremendous amount of work (and time) out of the assessment process.

For the entire Q&A with Tom O’Toole, go to:
http://www.marketingnpv.com/interview.asp?ix=1176

Thursday, February 01, 2007

Predicting The Path of Predictive Analytics

Analytics are increasingly the lifeblood of a CMO’s accountability process. And we’ve seen marked advancements in these tools, as marketers turn up the pressure for more usable insight.

In the aggregate, I see four key trends shaping the analytics space:

1. C-level involvement. The corner office will go from interested to involved to participating in marketing decision making. The analytics underlying resource allocation recommendations will need to more clearly articulate and justify what you need, why you need it, and yes, the payback. They will have to speak for themselves, sans the geek interface.

2. Continuous marketing measurement. The near future of analytics will go beyond one-time, “what’s going on today” metrics to present real-time continuous results. This constant flow is critical to overcoming the challenges of today’s fractured media environment. A new ‘test and learn’ framework is also helping marketers capture feedback and adjust to it more quickly.

3. Cheaper, faster models. Similar to Moore’s Law, the speed of analytics models will continue to increase and the capabilities will improve, while the price will gradually decline. Specifically, we anticipate deeper support for data integration and “what if” scenarios.

4. Software tailored to your needs. You’ve been made to walk the walk. Soon, the analytics vendors will be doing it too. While this may be the trend furthest down the pike, we feel the survival of today’s analytics tools is dependent on their ability to be “componentized” to create relevance and meet the unique needs of individual marketers.

None of these trends will cause a definitive paradigm shift next week, or even next month. Rather, the change will be subtle and incremental. But a look back 12 months from now should show considerable advancements beyond today.

For a deeper analysis of these four predictions, go to:
http://www.marketingnpv.com/article.asp?ix=1180

Tuesday, January 16, 2007

WOM Measurement – The Wild, Wild West

Word of mouth is one of the newest media (and one that is still very much evolving), and there’s quite a bit of measurement snake oil surrounding the links between word-of-mouth marketing and financial value creation.

I don’t think we’re far off from bringing respectability to it, because all the necessary tools are there. But we won’t progress unless marketers stop being satisfied with simple “stroke counting” measures — like message delivery and open and pass along rates — and start building a roadmap that more clearly links WOM to revenue and profit.

Here’s a 6-step prescription for WOM measurement progress:

1. Define Objectives. Clearly and succinctly state the intended outcome of the campaign expenditure in economic or behavioral terms.

2. Test the effectiveness of your message strategy to determine the recipients’ behavioral outcome.

3. Develop test-and-control constructs to determine the true predictive value of the awareness or attitude change, and its effect on behavior.

4. Conduct post-campaign interviews with current and new customers, and those who still resist your value proposition to find out what did or didn’t influence their decision to act or not act.

5. Review your proposed measurement methodology with key constituents of the outcome (i.e., the CFO and CEO) in advance to get their feedback and to tighten any loopholes and gaps.

6. Be clear on your expectations. State them in the most tangible financial terms you can. Then ask yourself the tough questions: Did you succeed in achieving your goals and expectations? Continue to adjust as you move forward.
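The test-and-control construct in step 3 ultimately boils down to comparing response rates between a group exposed to the WOM program and a matched holdout, then checking whether the lift is real. A minimal sketch with illustrative numbers, using a standard two-proportion z-test:

    from statistics import NormalDist

    # Illustrative outcomes: conversions among customers reached by the WOM
    # program (test) vs. a matched holdout group (control).
    test_n, test_conv = 5_000, 240   # 4.8% converted
    ctrl_n, ctrl_conv = 5_000, 180   # 3.6% converted

    p_test, p_ctrl = test_conv / test_n, ctrl_conv / ctrl_n
    lift = (p_test - p_ctrl) / p_ctrl

    # Two-proportion z-test on the difference in conversion rates.
    p_pool = (test_conv + ctrl_conv) / (test_n + ctrl_n)
    se = (p_pool * (1 - p_pool) * (1 / test_n + 1 / ctrl_n)) ** 0.5
    z = (p_test - p_ctrl) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))

    print(f"lift: {lift:.0%}, z = {z:.2f}, p = {p_value:.4f}")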

As word of mouth grows into a recognizable line item on the budget, the measurement practice must improve along with it. Otherwise, it’s the wild, wild west all over again.

If you want to see more on measuring word of mouth marketing, read:
Is There a Reliable Way to Measure Word of Mouth Marketing?
http://www.marketingnpv.com/article.asp?ix=1175

Tuesday, January 02, 2007

Prediction for 2007… Pain

With all the hype surrounding the resurgence of legendary on-screen boxer Rocky Balboa, I couldn’t help but borrow a line from the old Clubber Lang (Mr. T) in anticipation of what 2007 will bring for marketing measurement. He said, “My prediction… pain.”

In the case of marketers, that pain is likely to be felt most by some of the late adopters to measurement discipline. In fact, marketers who haven’t yet made a concerted effort to get a suitably comprehensive and properly stakeholdered measurement process in place are likely to feel the pain more than ever in 2007. Why?

First, CEOs and CFOs are hearing more and more about how measurable marketing is these days. They’re seeing it at conferences, reading about it in their trade journals and hearing it firsthand from their peers. These seeds, once planted, can’t help but grow up through the most hardened sidewalks of resistance. And when they crack through the foundation of credibility, the crumbling is impossible to stop.

Second, unless you’re lucky enough to be in a high-growth business spinning out exceptional shareholder returns, the die is likely already cast for another year of cuts to the marketing budget. The best you can hope for is that the slashes will be swift and sharp. But chances are they will resemble death by a thousand small incisions. And you can forget about defending your turf. If you had the insights the CEO needed to be more confident, you wouldn’t be the one whose budget they look to cut in the first place.

Third, if you’re entering the “opportunity zone” of your tenure with the company (somewhere between months 20 and 30), you may have but one more chance to put a sound foundation behind your next budget recommendation. But you’ll need to start now. It takes a minimum of nine months, and more often 18, before you can really get a good historical handle on marketing performance drivers and be able to correlate them to spending with any predictive validity.

The good news is that, if you start in January while the year is fresh and new, you’ll have a fair chance of making a big difference for 2008. You can build a foundation that will serve you immediately and for many years to come. But by April, your window will close. So the question for many marketers isn’t whether or not there will be pain in 2007, but whether it will be the pain of progress or the pain of avoidance. Either way, the choice is deliberate.

Wishing you the very best (and a full bottle of Advil or Tylenol) in 2007.