In the course of developing dashboards for a number of our Global 1000 clients over the past few years, we’ve learned many lessons about what really works vs. what we thought would work. One of them is that, try as you might, only about 50% of the metrics you initially choose for your dashboard will actually be the right ones. That means half of your metrics are, in all likelihood, going to be proven wrong over the first 90 to 180 days.
Are they wrong because they were poorly chosen? No. They’re wrong because some of the metrics you selected won’t be nearly as enlightening as you imagined they would be. Perhaps they don’t really tell the story you thought they were going to tell. Others may be wrong because they will require several on-the-fly iterations before the story really begins to emerge. You might need to filter them differently (by business unit or geography, for example), or you might need to change the way they’re calculated or portrayed. Regardless, some minor noodling is not uncommon when trying to get a metric to fulfill its potential for insight.
Still other metrics will become lightning rods for criticism, which will require some pacification or compromise. In the process, you may have to sacrifice some metrics in order to move past political obstacles and engender further support for the dashboard as a whole. If one of your organization’s key opinion leaders is undermining the credibility of the entire dashboard by criticizing a single metric, you may find it more effective to cut the metric in question (for the time being, at least) and do more alignment work on it.
Finally, many of your initial metrics simply may not offer any real diagnostic or predictive insight over time. You may realize fairly quickly that a metric you thought was going to be insightful doesn’t have sufficient variability, or that it offers little more than a penetrating glance into the obvious.
So the fact that half of the initial metrics will be proven wrong over the course of several months after you roll out your dashboard is bad news, right? No – it’s actually a good sign. It shows that the organization has embraced the dashboard as a learning tool, and that the flexibility to modify it is built into the process of improving and managing the dashboard as you go.
Here’s my advice: When implementing a new dashboard, be prepared to iterate rapidly over the first 90 days in response to a flood of feedback. After that initial flurry, develop a release schedule for updates and stick to it, so you can make improvements on a more systematic basis. But above all, make sure you’re responsive to the key constituents and that those constituents have a clear understanding of how their input is being reflected in the dashboard. And if it’s not being reflected there, be prepared to explain why.
1 comment:
Insightful. Question is, how will the new dashboard affect your credibility when the first dashboard was already flooded with feedback and criticism?