
Chris is the cofounder of Successful Workplace, where he works on reimagining the way work is done through big data, analytics, and event processing. He believes there's no end to what we can change and improve. A marketing executive, Chris flew for the US Navy before finding a home in technology 17 years ago. An avid outdoorsman, he is also passionate about technology and innovation and speaks frequently at industry events about creating great business outcomes. In addition to writing for The TIBCO Blog, Chris contributes to the Harvard Business Review, VentureBeat, Forbes, and the PEX Network. Christopher is a DZone MVB (not a DZone employee) and has posted 276 articles at DZone.

In benchmarking, you are the output

04.02.2013

In this post I want to tackle another benchmarking myth. I've touched on it in previous posts in this series, but it deserves to be addressed directly: the myth of getting EXACTLY comparable data within peer groups.

Why is it a myth?

You can never achieve the elusive "apples-to-apples" comparison when humans are involved. Unless you are comparing purely transactional data created by machines, you will always have issues with gauge R&R, and even machine-generated data will probably have them too. Gauge R&R (repeatability and reproducibility) describes the impact of the measurement process, including the humans in it, on how repeatable and reliable your data is. In other words, the exact same person can report data using the exact same survey on the exact same process and still get different results.
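To make gauge R&R concrete, here is a minimal sketch (mine, not the author's; written in Python with NumPy, with all figures invented for illustration). It simulates several people repeatedly "measuring" the same processes and splits the resulting variation into repeatability (the same person re-measuring the same thing) and reproducibility (different people measuring the same thing):

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical study: 3 respondents each report on 5 processes, 3 times each.
n_parts, n_operators, n_trials = 5, 3, 3
true_values = rng.normal(100, 10, n_parts)     # real process performance
operator_bias = rng.normal(0, 2, n_operators)  # each person's systematic skew
noise_sd = 1.5                                 # scatter within one person

# measurements[p, o, t]: person o reporting on process p, trial t
measurements = (true_values[:, None, None]
                + operator_bias[None, :, None]
                + rng.normal(0, noise_sd, (n_parts, n_operators, n_trials)))

# Repeatability: variation when the same person re-measures the same process.
repeatability_var = measurements.var(axis=2, ddof=1).mean()

# Reproducibility: variation between people averaged over the same processes.
operator_means = measurements.mean(axis=(0, 2))
reproducibility_var = operator_means.var(ddof=1)

print(f"repeatability variance   ~ {repeatability_var:.2f} (true {noise_sd**2:.2f})")
print(f"reproducibility variance ~ {reproducibility_var:.2f}")

Run it and the same person's repeated readings scatter around the true value while each person's average drifts by their own bias, which is exactly why "apples-to-apples" never quite happens.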

You shouldn’t try to overcome it

Overcoming this issue is probably possible in theory, but you can't afford it. The cost and time it would take would grind your business to a halt.

Your time is too valuable, and you are too smart, to spend too much of it on this part of the benchmarking exercise. Don't get me wrong: I'm not saying you should ignore issues of validity, comparability, and reliability (see my post on getting valid benchmarking data). I'm just saying spend your time more wisely.

What is the solution?

With benchmarking, learning is the outcome, not the number, and the same goes for you. Your time is best spent learning about the data, understanding the differences between your data and that of other peer groups, and making assumptions based on that understanding. Your company doesn't pay you to find comparable data streams, read reports, and click the appropriate button. Or, if it does, a machine will probably be doing that job pretty soon.

You are valuable to your company because you understand your business, the marketplace, and how to achieve the organization's performance goals in better, faster, and cheaper ways. As with everything, the more you engage in the benchmarking process, the more you will get out of it. Don't make it a part of your "night job"; make it a key piece of your "day job".

You make the benchmark the benchmark

At the end of the day, the output of benchmarking isn't what's valuable; what you learn from the process, and what you are able to do for your organization with that output, is. You are the benchmark, not the report. There are ways to be more efficient and effective in benchmarking (like using third parties that specialize in it), but don't try to commoditize the benchmarking process or its output too much, or you'll get too far away from the real value of benchmarking.

Published at DZone with permission of Christopher Taylor, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)