

5 Tips for Using and Interpreting Talent Benchmarks

Organizations use talent benchmarks to help guide development and selection initiatives. To be effective, these benchmarks must be well-constructed, accurately interpreted, and used strategically.

Publish Date: November 13, 2019

Read Time: 6 min

Authors: Ann Li, Ph.D., and Bre Wexler, Ph.D., J.D.


It’s human nature to compare ourselves to others. The same holds true at the organizational level. For example, how can you know how your organization’s talent compares to your competitors’ on key metrics? The answer is simple: talent benchmarks.

A growing number of organizations are adopting formal assessments that measure their employees on important traits and competencies, and they are using these assessments to select and develop talent. While the reports from these assessments capture individual and aggregate results on the measured attributes (personality traits, competencies, etc.), the vendors that deliver them, such as DDI, often also include benchmark information showing how the organization compares to other, similar organizations.

Not All Talent Benchmarks Are Created Equal

The value of these benchmarks depends on several factors, including the criteria used to define the benchmark sample, the quality of the data used to construct it, and how the benchmarks are applied to guide talent development and selection initiatives.

Well-constructed talent benchmarks enable you to make meaningful comparisons. They make it easier for you to pinpoint areas where development and selection efforts may be underperforming when compared to similarly situated organizations. Using poorly constructed benchmarks, on the other hand, can:

  • Create false perceptions—either too positive or negative—about your organization’s talent.
  • Lead to incorrect conclusions (if the comparison is not meaningful).
  • Steer your development and selection efforts in the wrong direction. This can result in wasted time and resources that would be better allocated elsewhere.

How to Become “Benchmark-Savvy”

Fortunately, you can avoid these negative outcomes. Here are five practical tips that can help you think more critically about the interpretation and application of talent benchmark data. The good news: You do not need to be a statistician to become “benchmark-savvy.”

1. Check the sample size. 

Benchmarks created from small samples lack the stability and reliability of benchmarks created from larger samples. A high-quality assessment report should always disclose benchmark sample sizes. DDI’s research recommends a minimum sample size of 100 participants for a stable and reliable benchmark. Once this threshold is met, the gains in reliability level off; in other words, ever-larger samples yield diminishing returns.
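
To see why those gains level off, consider a minimal sketch (an illustration of the general statistical point, not DDI’s methodology): the standard error of a benchmark mean shrinks with the square root of the sample size, so growing from 25 to 100 participants buys far more stability than growing from 100 to 400. The score standard deviation below is a made-up value.

```python
import math

SCORE_SD = 15.0  # hypothetical standard deviation of an assessment score

# The standard error of a mean shrinks with sqrt(n), so each doubling of
# the sample buys a smaller absolute improvement than the one before it.
for n in (25, 50, 100, 200, 400, 1000):
    standard_error = SCORE_SD / math.sqrt(n)
    print(f"n={n:5d}  standard error of the benchmark mean = {standard_error:.2f}")
```

Running this shows the standard error dropping from 3.00 at n=25 to 1.50 at n=100, but only to 0.75 at n=400: the same doubling-and-redoubling of data collection effort for a much smaller payoff.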

2. Don’t compare apples to oranges. 

It’s important to check the comparison group. Benchmarks are less meaningful if the target organization is compared against a group that does not accurately reflect its employee population. The population used to create the benchmark sample should be as similar as possible to the target organization (e.g., job level, industry, geographic region) without sacrificing the minimum recommended sample size discussed above.

For example, benchmarking mid-level leaders in the U.S. against other mid-level leaders in the U.S. allows an organization to derive meaningful insights about how its mid-level population compares to those in similarly situated competitor organizations. The benchmark comparison would not be as meaningful, of course, if the same mid-level U.S. leaders were compared to senior leaders in Southeast Asia. This is why understanding the data that makes up a benchmark is so important. It’s also why ensuring your partner organization has a breadth of data from which to create specific benchmarks (while maintaining minimum reliable sample size thresholds) is critical.
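
As a rough illustration of that trade-off between specificity and sample size, here is a hypothetical Python sketch. The record fields, the threshold of 100, and the fallback logic are invented for the example and are not DDI’s actual implementation; the idea is simply to pick the most specific comparison group that still meets the minimum from tip #1.

```python
from dataclasses import dataclass

MIN_BENCHMARK_N = 100  # minimum reliable sample size from tip #1

@dataclass
class AssessmentRecord:   # hypothetical record structure
    job_level: str        # e.g., "mid-level leader"
    region: str           # e.g., "United States"
    score: float

def build_benchmark(pool, job_level, region):
    """Return the most specific comparison group that stays reliable."""
    # Prefer an exact match on both job level and region...
    exact = [r for r in pool
             if r.job_level == job_level and r.region == region]
    if len(exact) >= MIN_BENCHMARK_N:
        return exact, "job level + region"
    # ...but widen to job level only rather than fall below the threshold.
    same_level = [r for r in pool if r.job_level == job_level]
    if len(same_level) >= MIN_BENCHMARK_N:
        return same_level, "job level only (regions pooled)"
    raise ValueError("no comparison group meets the minimum sample size")
```

The design choice worth noticing: the function widens the group rather than silently returning an under-sized, unstable benchmark, and it reports which comparison group was actually used so the end user can interpret the results accordingly.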

3. Know how the key talent metrics were assessed. 

The quality of the assessment instruments sets a baseline for the quality of the benchmarks. Organizations should thoroughly validate assessments to ensure the instruments successfully measure the intended attributes and predict the intended outcomes, not just across all participants but also within relevant subpopulations. Regional benchmarks will be more meaningful if the assessment has been locally validated within each region used to create benchmarks. Likewise, global benchmarks will be more meaningful if the assessment has been cross-culturally validated.
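
As a toy illustration of local validation, the sketch below uses invented scores and a simple Pearson correlation (a real validation study involves far more) to check that assessment scores track a performance outcome within each region, rather than only in the pooled sample.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Invented data: for each region, paired lists of assessment scores and
# a performance outcome for the same participants.
regions = {
    "United States": ([3.2, 4.1, 2.8, 4.5, 3.9], [3.0, 4.3, 2.9, 4.4, 3.7]),
    "Southeast Asia": ([3.5, 2.9, 4.2, 3.1, 3.8], [3.4, 3.0, 4.0, 3.2, 3.6]),
}

# Compute the assessment-performance correlation within each region,
# instead of trusting a single correlation computed across all regions.
for region, (assessment, performance) in regions.items():
    r = correlation(assessment, performance)
    print(f"{region}: assessment-performance correlation r = {r:.2f}")
```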

4. Don’t over-interpret benchmarks. 

Benchmarks are not the be-all and end-all of data comparisons and should not be the only data point used to inform an organization’s talent decisions. Instead, organizations should interpret benchmarks alongside other relevant data and consider them within the context of the organization. Perhaps your organization is going through a big change or a culture shift. If so, the benchmarks should be interpreted accordingly.

5. Set action plans. 

Once you have considered the quality and interpretation boundaries of the talent benchmarks you have been provided, it’s time to determine how to put this data into action for your organization. Is your organization outperforming or underperforming against others in certain areas? If so, by how much?

If your organization is outperforming similarly situated organizations, this may indicate a competitive advantage. Moving forward, think about what your organization has been doing well and how to maintain this level of performance. It’s also important to look at organizational data by demographic variables relevant to the organization (e.g., location, job level, business unit) to determine whether the results are consistent across organizational populations. And remember: A small number of high-scoring participants or departments can inflate the organization’s overall results.

When competency scores fall below the benchmark, future actions should depend on whether the underperforming areas are developable. If the areas are difficult to develop, you would be wise to focus on adjusting current hiring practices. This can be as simple as adding competencies to the selection process, or it may demand a more comprehensive change, such as adopting behavioral interviewing. Alternatively, if the underperforming areas are trainable, efforts should instead focus on adding development options or offering coaching opportunities.
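
Here is a minimal sketch of that breakdown, using made-up scores and a made-up benchmark value: comparing each business unit to the benchmark separately reveals when a single high-scoring unit is inflating the organization-wide mean.

```python
from statistics import mean

BENCHMARK_MEAN = 3.5  # hypothetical benchmark value for one competency

scores_by_unit = {     # hypothetical participant scores per business unit
    "Sales":      [4.6, 4.8, 4.7, 4.5],
    "Operations": [3.1, 3.0, 3.2, 2.9],
    "Finance":    [3.3, 3.2, 3.4, 3.1],
}

# The pooled mean looks healthy (3.65, above the 3.5 benchmark)...
all_scores = [s for unit in scores_by_unit.values() for s in unit]
print(f"Overall mean: {mean(all_scores):.2f} (benchmark {BENCHMARK_MEAN})")

# ...but the per-unit breakdown shows one strong unit masking two weak ones.
for unit, scores in scores_by_unit.items():
    gap = mean(scores) - BENCHMARK_MEAN
    status = "above" if gap >= 0 else "below"
    print(f"{unit}: {mean(scores):.2f} ({status} benchmark by {abs(gap):.2f})")
```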

It Takes Diligence

Talent benchmarks can provide organizations with extremely valuable insights. But benchmarks must also be constructed properly, interpreted accurately, and used strategically to effectively guide talent selection and development efforts.

These actions require diligence, both from the end users who interpret benchmarks and build action plans around them, and from the technical analytics experts who create and implement the benchmarks.

Read more about DDI’s leadership assessments for all levels, from frontline to executive. Each report we create includes benchmarks that meet our strict sample size and comparison group criteria to ensure their quality and relevance.

Ann Li, Ph.D., is an intern in DDI's Survey, Testing, Assessment, and Team Services (STATS) group. Her role focuses on building the next generation of DDI's impact surveys and enhancing data visualization through dynamic reporting. In her free time, she enjoys playing piano, hiking, and traveling around the world.

Bre Wexler, Ph.D., J.D., is a consultant at DDI’s world headquarters in Pittsburgh. She works on reporting and analytics projects for DDI’s clients using leadership testing and assessment data and leads several internal reporting initiatives. Bre also manages DDI’s Impact Evaluations, where she is responsible for creating and maintaining the surveys and reports that demonstrate the impact of DDI’s assessment and development solutions. Outside of her work at DDI, most of Bre’s time is spent working on projects for her new house and upcoming wedding. She also enjoys spending time with her fiancé and their dogs, Rocco and Koda.
