
Reimagining the way work is done through big data, analytics, and event processing, Chris is the cofounder of Successful Workplace. He believes there’s no end to what we can change and improve. Chris is a marketing executive and flew for the US Navy before finding a home in technology 17 years ago. An avid outdoorsman, Chris is also passionate about technology and innovation and speaks frequently about creating great business outcomes at industry events. As well as being a contributor for The TIBCO Blog, Chris contributes to the Harvard Business Review, Venture Beat, Forbes, and the PEX Network.

Getting valid benchmarking data

03.13.2013

In my other benchmarking posts I’ve explained that you’ll never reach a level of precision where you truly have an exact comparative benchmark value. It has been repeatedly observed that the same person, asked the same question twice, will give you different answers.

But even if it were possible, you could not afford the cost of validating every aspect of every number needed to reach 100% comparability. It just won’t happen, and that’s OK. There are still things you can do to make the data as valid as decision-making requires. I’ll share some of the tools and tricks we use at APQC in our metric benchmarking projects.

Develop strong benchmarking definitions

You should spend the vast majority of your time developing strong definitions for your benchmarking project. These definitions include the scope (or focus) for the project, as well as the tactical definition of all the measurements and terms used in your survey instrument.

Well-developed benchmarking scope

Whenever we benchmark, we use the APQC Process Classification Framework (PCF) (www.apqc.org/pcf) to define the scope of the project. There are numerous names, terms, labels, and acronyms used for various processes and functions inside organizations. We remove as much misunderstanding as we can by using descriptions of the process groups, processes, and activities from the PCF to describe the scope of the benchmarking project.

For example, we find many names used for the process that procures raw materials (or services) for an organization. Some call this area “procurement”, but others may call it “purchasing”, “buying”, “requisitioning”, or “that stuff Bob down the hallway does”. As shown in the attached graphic, we call it “4.2 Procure materials and services” and describe it using an outline of the processes and activities that normally occur within “4.2”. This scope is then customized to how a specific organization structures its processes. To continue the procurement example, some organizations include the financial payment activities in procurement, while others don’t.

Strong benchmarking definitions

There is no easy way around this. You just have to put in the work to make sure you define everything that might need clarity. The best advice I can give you is to first rely on subject matter experts. These are usually the folks that work in the area being benchmarked. But, many times they have a certain bias (and language) that can skew the survey. So, we always conduct a survey pilot phase for our benchmarking projects.

We have individuals who are not involved in the details of the project, but who are familiar with the processes being benchmarked, actually take the survey. They will usually raise any vague wording, or the data they submit will show odd variations. You will never get every respondent thinking about the data in exactly the same way, but this gets you very close and lets data validation do the rest.

Use strong benchmarking validation activities

We use two different types of validation activities during our metric benchmarking projects. These activities help account for any remaining inconsistencies present in the data.

Start with logical validation

When you design the survey, think about the logical things you’d expect from a submission and the person submitting the data. Did you get a response to a marketing survey from someone with a payroll title? That may or may not be OK, but you would want to question it. There may also be certain ratios or logical calculations that can flag a response as suspect. For example, you might flag a response where the reported “total cost” metric is twice the “total revenue” reported for the same submission. That could be OK, but you’d want to question it (and understand how they are still in business).
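The two checks above can be sketched as simple rules in code. This is a minimal illustration, not APQC’s actual tooling; the field names (job_title, total_cost, total_revenue) and the list of expected title keywords are assumptions for the example.

```python
# A sketch of logical validation: each rule returns a reason to question a
# submission, not a reason to reject it outright. Field names are hypothetical.

def logical_flags(submission):
    """Return a list of reasons this submission should be questioned."""
    flags = []

    # Does the respondent's role plausibly match a marketing survey?
    expected_title_words = {"marketing", "brand", "communications"}
    title = submission.get("job_title", "").lower()
    if not any(word in title for word in expected_title_words):
        flags.append(f"unexpected title for a marketing survey: {title!r}")

    # Ratio check: total cost at double total revenue is suspect.
    cost, revenue = submission["total_cost"], submission["total_revenue"]
    if revenue > 0 and cost >= 2 * revenue:
        flags.append(f"total cost ({cost}) is at least twice revenue ({revenue})")

    return flags

# A payroll respondent reporting cost well above revenue trips both rules.
print(logical_flags({"job_title": "Payroll Manager",
                     "total_cost": 10_000_000,
                     "total_revenue": 4_000_000}))
```

Keeping each rule as a reason string rather than a hard reject matches the point above: a flagged response gets a follow-up question, not automatic exclusion.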

Apply strong statistical validation methods

Once you have noted all the logical validation issues that might be present in the data set, you’ll want to apply statistical methods to test the data. We calculate normalized metrics for every survey submission and note any metric whose value is more than two standard deviations from the median value for that same metric. Again, it may be perfectly valid, but we flag the response and verify the information with the submitter before allowing the data into our final data set.
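The screen described above can be expressed in a few lines. This is a sketch of the general technique, not APQC’s implementation; the sample cost-per-invoice figures are invented for illustration.

```python
# Flag any value more than two standard deviations from the MEDIAN of its
# metric, per the rule described above (median, not mean, as the center).
import statistics

def outlier_flags(values, threshold=2.0):
    """Return indices of values more than `threshold` std devs from the median."""
    med = statistics.median(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return [i for i, v in enumerate(values) if abs(v - med) > threshold * sd]

# Example: a normalized metric (cost per invoice) from eight participants.
cost_per_invoice = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 12.7, 4.3]
print(outlier_flags(cost_per_invoice))  # flags index 6, the 12.7 submission
```

One design note: a large outlier inflates the standard deviation itself, so this screen only catches fairly extreme values; a more robust variant would use the median absolute deviation as the spread measure instead.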

These are just a few of the key steps I would suggest, along with making sure you communicate with all the participants in your benchmarking project. You will get comparisons that are well beyond “in the same ballpark”. You’ll get comparisons that are in the same section and row of seating in the ballpark.

Pretty close to the same seat.

I’d love your comments or questions.

Republished with permission

Published at DZone with permission of Christopher Taylor, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)