Thursday, November 12, 2009

"That is the Big Challenge" says Eric Schmidt

I had a chance to talk briefly with Eric Schmidt, CEO of Google, at last week’s ANA conference. He’d just finished sharing his take on marketing and advertising with 1200 of us representing marketers, agencies, and supporting service providers. He said:

  • Google backed away from managing radio and print advertising networks due to a lack of “closed-loop feedback”. In other words, they couldn’t tell an advertiser whether the consumer actually saw the ad or acted afterward. Efforts to embed unique commercial identifiers into radio ads exist, but they are still immature. And in print, it’s still not possible to tell who (specifically) is seeing which ads – at least not until someone places sensors between every two pages of my morning newspaper.
  • Despite this limitation, Schmidt feels that Google will soon crack the code on massive multivariate modeling of both online and offline marketing-mix influences by incorporating “management judgment” into the models where data is lacking. That would let advertisers parse out the relative contribution of every element of the marketing mix and optimize both the spend level and its allocation – even taking into account countless competitive and macro-environmental variables. (A toy sketch of what such a model might look like follows this list.)
  • That “everything is measurable,” and that Google has the mathematicians who can solve even the thorniest marketing measurement challenges.
  • That the winning marketers will be those who can rapidly iterate and learn quickly to reallocate resources and attention to what is working at a hyper-local level, taking both personalization and geographic location into account.
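
To make the second point above concrete, here is a minimal sketch (my own illustration, not anything Schmidt described) of how “management judgment” could stand in for missing closed-loop data in a marketing-mix model. The channel names, spend figures, and prior values are made-up assumptions; the judgment enters as a prior that pulls the coefficient of a poorly measured channel (radio) toward management’s estimate, while well-tracked channels are left to the data.

```python
# A toy marketing-mix model where a "management judgment" prior stands in
# for missing closed-loop data on an offline channel (radio).
# All channel names, spends, and prior values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
weeks = 104

# Synthetic weekly spend (in $k) per channel, plus a "true" lift per unit spend.
spend = {
    "search": rng.uniform(50, 150, weeks),   # well measured (online)
    "tv":     rng.uniform(100, 300, weeks),  # partially measured
    "radio":  rng.uniform(20, 80, weeks),    # no closed-loop feedback
}
true_beta = {"search": 4.0, "tv": 1.5, "radio": 2.0}
sales = 500.0 + sum(true_beta[c] * spend[c] for c in spend)
sales = sales + rng.normal(0, 150, weeks)    # demand noise the model can't see

X = np.column_stack([np.ones(weeks)] + [spend[c] for c in spend])
names = ["base"] + list(spend)

# Managerial priors: a strong pull only where tracking is weak (radio),
# essentially no prior where closed-loop data is good (search).
prior_mean     = np.array([0.0, 0.0, 0.0, 2.5])   # "radio lifts ~2.5 per unit spend"
prior_strength = np.array([0.0, 0.0, 1.0, 50.0])  # how hard to pull toward the prior

# Ridge-style MAP estimate:
#   minimize ||sales - X b||^2 + (b - prior)' Lambda (b - prior)
Lam = np.diag(prior_strength)
beta = np.linalg.solve(X.T @ X + Lam, X.T @ sales + Lam @ prior_mean)

for name, b in zip(names, beta):
    print(f"{name:>6}: fitted coefficient {b:8.2f}")
```

Dial the prior strength up and the radio estimate converges on the managerial guess; dial it down and the data speaks for itself – which is essentially the trade-off Schmidt described in his answers.
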
On all these fronts, I agree with him. I’ve actually said these very things in this blog over the past few years.

So when I caught up with him in the hallway afterward, I asked two questions:

  1. How credible are these uber-models likely to be if they fail to account for “non-marketing” variables like operational changes affecting customer experience and/or the impact of ex-category activities on customers within a category (e.g., how purchase activity in one category may affect purchase interest in another)?

  2. At what point do these models become so complex that they exceed the ability of most humans to understand them, leading to skepticism and doubt fueled by a deep psychological need for self-preservation?
His answers:

  1. “If you can track it, we can incorporate it into the model and determine its relative importance under a variety of circumstances. If you can’t, we can proxy for it with managerial judgment.”

  2. “That is the big challenge, isn’t it.”
So my takeaways from this interaction are:

  • Google will likely develop a “universal platform” for marketing-mix modeling, which in many respects will be more robust than most of the other tools on the market – particularly in terms of seamless integration of online and offline elements, and web-enabled simulation tools (a toy allocation simulation follows this list). While it may lack some of the subtle flexibility of a custom-designed model, it will likely be “close enough” in overall accuracy given that it could be a fraction of the cost of custom, if not free. And it will likely evolve faster to incorporate emerging dynamics and variables, as Google’s scale will enable it to spot and include such things faster than any other analytics shop.

  • If Google has a vulnerability, it may be underestimating the human variables of the underlying questions (e.g., how much should we spend, and where and how should we spend it?) and of the potential solution.
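
On the simulation side, here is an equally toy example of the kind of “what if” spend-allocation tool such a platform might expose, as mentioned in the first takeaway above: each channel gets an assumed diminishing-returns curve, and a fixed budget is allocated greedily to whichever channel offers the highest marginal lift at each step. The curves and numbers are invented purely for illustration.

```python
# A toy spend-allocation simulation: assumed diminishing-returns curves per
# channel, with a fixed budget allocated greedily to the highest marginal lift.
# The curves and figures below are made-up assumptions, not real benchmarks.
import math

# Assumed response curves: incremental sales ($k) as a function of spend ($k).
curves = {
    "search": lambda s: 40 * math.log1p(s / 10),
    "tv":     lambda s: 60 * math.log1p(s / 50),
    "radio":  lambda s: 25 * math.log1p(s / 20),
}

def allocate(budget_k, step_k=1.0):
    """Greedily spend each increment where the marginal lift is highest."""
    spend = {c: 0.0 for c in curves}
    remaining = budget_k
    while remaining >= step_k:
        best = max(
            curves,
            key=lambda c: curves[c](spend[c] + step_k) - curves[c](spend[c]),
        )
        spend[best] += step_k
        remaining -= step_k
    return spend

if __name__ == "__main__":
    plan = allocate(budget_k=300)
    total = sum(curves[c](s) for c, s in plan.items())
    for channel, s in plan.items():
        print(f"{channel:>6}: spend ${s:,.0f}k -> lift {curves[channel](s):6.1f}k")
    print(f" total: {total:6.1f}k incremental sales from a $300k budget")
```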

Reflecting over a glass of Cabernet several hours later, I realized that this is generally a good thing for the marketing discipline, as Google will once again push us all to accelerate our adoption of mathematical pattern recognition as an input into managerial decisions. Besides, the new human dynamics this acceleration creates will also spur new business opportunities. So everyone wins.
