
Thursday, May 07, 2009

Survey: 27% of Marketers Suck

A survey conducted by the Committee to Determine the Intelligence of Marketers (CDIM), an independent think tank in Princeton, NJ, recently found that:

· 4 out of 5 respondents feel that marketing is a “dead” profession.
· 60% reported having little if any respect for the quality of marketing programs today.
· Fully 75% of those responding would rather be poked with a sharp stick straight into the eye than be forced to work in a marketing department.

In total, the survey panel reported a mean response of 27% when asked, “on a scale of 0% to 100%, how many marketers suck?”

This has been a test of the emergency BS system. Had this been a real, scientifically-based survey, you would have been instructed where to find the nearest bridge to jump off.

Actually, it was a “real” “survey”. I found 5 teenagers loitering around the casual-dining chain at a local shopping mall and asked them a few questions. Seem valid?

Of course not. But this one was OBVIOUS. Every day we marketers are bamboozled by far more subtle “surveys” and “research projects” which purport to uncover significant insights into what CEOs, CFOs, CMOs, and consumers think, believe, and do. Their headlines are written to grab attention:

- 34% of marketers see budgets cut.
- 71% of consumers prefer leading brands when shopping for .
And my personal favorite:
- 38% of marketers report significant progress in measuring marketing ROI, up 4% from last year.

Who are these “marketers”? Are they representative of any specific group? Do they have anything in common except the word “marketing” on their business cards?

Inevitably such surveys blend convenience samples (i.e., those willing to respond) of people from the very biggest billion-dollar-plus marketers to the smallest $100k-a-year budgeteers. They mix those with advanced degrees and 20 years of experience in with those who were transferred into a field marketing job last week because they weren’t cutting it in sales. They commingle packaged goods marketers with those selling industrial coatings and others providing mobile dog grooming.

If you look closely, the questions are often constructed in somewhat leading ways, and the inferences drawn from the results conveniently ignore the statistical margins of error, which frequently wash away any actual findings whatsoever. There is also a strong tendency to draw year-over-year conclusions when the only thing in common from one year to the next was the survey sponsor.
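
To put a number on how easily a finding washes away, here is a back-of-the-envelope check in Python, assuming a hypothetical sample of 200 respondents (the figures are made up for illustration), of the margin of error behind a headline like “38%, up 4 points from last year”:

from math import sqrt

def margin_of_error(p, n, z=1.96):
    # 95% margin of error for a proportion from a simple random sample
    return z * sqrt(p * (1 - p) / n)

# Hypothetical: 38% of 200 respondents report "significant progress"
moe = margin_of_error(0.38, 200)
print(f"38% +/- {moe:.1%}")  # roughly +/- 6.7 points

# A 4-point "gain" over last year sits comfortably inside that margin,
# so the year-over-year headline may be nothing but noise.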

As marketers, we do ourselves a great disservice whenever we grab one of these survey nuggets and embed it into a PowerPoint presentation to “prove” something to management. If we’re not trustworthy when it comes to vetting the quality of the research we cite, how can we reasonably expect others to accept our judgment on subjective matters?

So the next time you’re tempted to grab some headlines from a “survey” – even one done by a reputable organization – stop for a minute and read the fine print. Check to see if the conclusions being drawn are reasonable given the sample, the questions, and the margins of error. When in doubt, throw it out.

If we want marketing to be taken seriously as a discipline within the company, we can’t afford to let the “marketers” play on our need for convenience and simplicity when reporting “research” findings. Our credibility is at stake.

And by the way, please feel free to comment and post your examples of recent “research” you’ve found curious.

Tuesday, March 24, 2009

It's All Relative

Brilliant strategy? Check.

Sophisticated analytics? Check.

Compelling business case? Check.

Closing that one big hole that could torpedo your career? Uhhhhhhh.......

Most new marketing initiatives fail to achieve anything close to their business-case potential. Why? Unilateral analysis, or looking at the world only through your own company's eyes, as if there were no competition.

It sounds stupid, I know, yet most of us perform our analysis of the expected payback on marketing investments without even imagining how competitors might respond and what that response would likely do to our forecast results. Obviously, if we do something that gets traction in the market, they will respond to prevent a loss of share in volume or margin. But how do you factor that into a business case?

Scenario planning helps. Always "flex" your business case under at least three possible scenarios: A) competitors don't react; B) competitors react, but not immediately; C) competitors react immediately. Then work with a group of informed people from your sales, marketing, and finance groups to assess the probability of each of the three possibilities, and weight your business case outcomes accordingly.
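
The weighting itself is nothing more than an expected-value calculation. Here's a minimal sketch with made-up payback multiples and probabilities, just to show the mechanics:

# Hypothetical three-scenario flex of a business case.
# Payback multiples and probabilities are illustrative, not real data.
scenarios = {
    "A: no competitive reaction":        {"payback": 1.8, "prob": 0.20},
    "B: delayed competitive reaction":   {"payback": 1.3, "prob": 0.50},
    "C: immediate competitive reaction": {"payback": 0.7, "prob": 0.30},
}

expected_payback = sum(s["payback"] * s["prob"] for s in scenarios.values())
print(f"Probability-weighted payback: {expected_payback:.2f}x")
# 1.8*0.2 + 1.3*0.5 + 0.7*0.3 = 1.22x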

If you want to be even more thorough, try adding other dimensions of "magnitude" of competitive response (low/proportionate/high) and "effectiveness" of the response (low/parity/high) relative to your own efforts. You then evaluate eight to 12 possible scenarios and see more clearly the exact circumstances under which your proposed program or initiative has the best and worst probable paybacks, as in the sketch below. Then if you decide to proceed, you can set up listening posts to get early warnings of your competitors' reactions and, hopefully, stay one step ahead.
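
Sketching that expanded grid (again, the multipliers are illustrative assumptions, not benchmarks) shows how the scenario count grows and where the best and worst cases fall:

from itertools import product

# Hypothetical multipliers applied to the base-case payback; illustrative only.
magnitude_impact     = {"low": 0.95, "proportionate": 0.85, "high": 0.70}
effectiveness_impact = {"low": 1.00, "parity": 0.90, "high": 0.75}

base_payback = 1.8  # the "competitors don't react" case
grid = {}
for mag, eff in product(magnitude_impact, effectiveness_impact):
    grid[(mag, eff)] = base_payback * magnitude_impact[mag] * effectiveness_impact[eff]

best = max(grid, key=grid.get)
worst = min(grid, key=grid.get)
print(f"{len(grid) + 1} scenarios, including the no-reaction case")
print(f"Best case  {best}:  {grid[best]:.2f}x")
print(f"Worst case {worst}: {grid[worst]:.2f}x")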

In the meantime, your CFO will be highly impressed with your comprehensive business-case acumen. Check.

Monday, November 05, 2007

Google to Dominate Dashboards?

Having conquered the worlds of web search and analytics, is Google about to corner the market on marketing dashboards?

Hardly.

What Google is doing is coordinating online ad display data with offline (TV) ad exposures. Google is partnering with Nielsen to take data directly from Nielsen’s set-top-box panel of 3,000 households nationwide and mash it up with Google analytics data to find correlations between on- and offline exposure. The premise is, I’m sure, to help marketers integrate this data with their own sales information and find statistical correlation between the two as a means of assessing the impact of the advertising at a high level. By using data only from the set-top box, Google is able to present offline ad exposure data with the same certainty as it does online – i.e., we know that this ad was actually shown. Unfortunately, we don’t know if the ad (online or off) was actually seen, never mind absorbed.

However, with the evolution of interactive features in set-top boxes, it won’t be long before we begin to get sample data of people “clicking” on TV ads, much like we do with online ads. So we’ll get the front end of the engagement spectrum (shown) and the back end (responded). But we won’t get anything from the middle to give us the diagnostic or predictive insights needed to enhance the performance of our marketing campaigns.

A full marketing dashboard integrates far more than just enhanced ratings data and looks deeper than just summary correlations between ads shown and sales to dissect the actual cause of sales. Presuming that sales were driven by advertising in the Google dashboard model would potentially ignore the influence of a great many other variables like trade promotions, channel incentives, and sales force initiatives.
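
Here's a contrived illustration with synthetic data: a trade promotion runs in the same weeks as the TV heavy-up, so a raw exposure-to-sales correlation looks impressive, while a regression that also accounts for the promotion tells a much more modest story about the advertising.

import numpy as np

rng = np.random.default_rng(0)
weeks = 52

# Synthetic data: TV weight (GRPs) and a trade promotion that tends to run
# in the same weeks -- the classic confound.
promo = (rng.random(weeks) < 0.3).astype(float)
grps = 100 + 80 * promo + rng.normal(0, 20, weeks)

# "True" drivers of sales in this made-up world: a small ad effect,
# a large promotion effect, plus noise.
sales = 1000 + 0.5 * grps + 400 * promo + rng.normal(0, 50, weeks)

# Naive view: correlate exposure with sales and call it advertising impact.
print("GRPs-to-sales correlation:", round(np.corrcoef(grps, sales)[0, 1], 2))

# Slightly less naive: regress sales on both drivers.
X = np.column_stack([np.ones(weeks), grps, promo])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print("Estimated sales per GRP once promotion is in the model:", round(coef[1], 2))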

Drawing conclusions about advertising’s effect solely on the basis of looking at sales and ratings would quickly undermine the credibility of the marketing organization. So while the Google dashboard may be a welcome enhancement, it’s not by any stretch a panacea for measuring marketing effectiveness.

It seems to me that Google has created better tools. But through its lens of selling advertising, it’s perpetuating a few big mistakes.