
Selection is, in my view, one of the most important tasks – if not the most important task – you will be involved in as an HR practitioner. Good selection is foundational to the success of your organization. Unfortunately, it's also one of the most misunderstood areas of HR, and probably one of the most undervalued – in part because people think it's easy to do. So what I'm going to cover today is the baseline knowledge you, as an HR practitioner, must have before you help a manager build selection questions, lead or participate in a selection panel, or go anywhere near writing a selection policy or handbook for your organization.

Two names in selection research you should know are Frank L. Schmidt and John E. Hunter (I've linked their Wikipedia profiles here; Professor Hunter, unfortunately, passed away in 2002). They are simply giants in the field – or rock stars, to use a more popular term. Focusing on Professor Schmidt: he had been cited more than 8,000 times (as of 2006), presented at over 300 academic conferences, and published over 200 research articles as well as numerous books. When Professor Schmidt talks about selection, we need to listen.

In 1998 Schmidt & Hunter published an article in the Psychological Bulletin (an American Psychological Association publication) with the title ‘The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings’. This has been a seminal paper in selection research; frankly, if you don't know this article, you don't know selection.

For practitioners, the key part of the article is Table 1 on page 265, which provides a summary of different selection methods and their predictive validity. If you don't know what predictive validity is, keep reading – I'll cover it shortly.

Personnel measure                        Validity
-------------------------------------------------
GMA tests                                  0.51
Work sample tests                          0.54
Integrity tests                            0.41
Conscientiousness tests                    0.31
Employment interviews (structured)         0.51
Employment interviews (unstructured)       0.38
Job knowledge tests                        0.48
Job tryout procedure                       0.44
Peer ratings                               0.49
Reference checks                           0.26
Job experience (years)                     0.18
Biographical data measures                 0.35
Assessment centers                         0.37
Years of education                         0.10

So predictive validity (shown in the table above simply as validity) is the degree to which a score on a test or assessment predicts performance on a criterion measure. In plain English: in selection we are trying to gain an understanding of a person's future job performance through selection tools such as interviews, and predictive validity is simply a measure of how well a tool predicts that future performance – so it's pretty important when we're deciding which selection tools to use. Validity is a correlation coefficient, so it ranges from -1.0 to 1.0. A value of 1.0 means a perfect one-to-one relationship, 0 means there is no relationship at all between the two items being assessed, and -1.0 means a perfect inverse relationship (as one goes up, the other goes down). For myself, I find it easiest to think of validity loosely like a percentage, where 1.0 equates to 100% – the relationship holds every time – and 0 means the two factors are simply not connected. Looking quickly at age as a predictor, Schmidt & Hunter's full table reports a validity of -0.01, which is essentially zero – age has no bearing on job performance. Let me know in the comments if something here doesn't make sense, and I'll rework or add more to this section.
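To make that concrete in code, here is a minimal sketch of computing a predictive validity coefficient: it is just the Pearson correlation between a selection score and a later criterion measure. The numbers below are made up purely for illustration, not drawn from any study.

```python
import numpy as np

# Hypothetical data: selection test scores for 8 hires, and their
# job-performance ratings collected a year later (both invented).
test_scores = np.array([62, 71, 55, 80, 67, 74, 59, 85])
performance = np.array([3.1, 3.8, 2.9, 4.2, 3.3, 3.6, 3.0, 4.5])

# Predictive validity = Pearson correlation between test and criterion.
validity = np.corrcoef(test_scores, performance)[0, 1]
print(round(validity, 2))
```

A value near 1.0 here would mean the test ordered these hires almost exactly as their later performance did; a value near 0 would mean the test told us nothing.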

So looking back at the predictive validity scores from Schmidt & Hunter's article, we can interpret them like this: all other things being equal, using years of education as a selection measure, we will select the best applicant from our applicant pool 10% of the time – or, put another way, 90% of the time we won't get the best applicant. Hopefully, as you look at that table, things are starting to make sense – and most likely some alarm bells are going off as well. You can see, for instance, that using structured employment interviews rather than unstructured employment interviews (or what I call simply having a chat) will on average increase your chances of selecting the best candidate by 13 percentage points (0.51 vs 0.38). If you've read my article on SDy you'll understand that getting the best candidate has a dramatic impact on your bottom line.
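If you'd like to get a feel for why a higher validity coefficient matters, here is a small simulation sketch. The pool size, trial count and mechanics are my own assumptions for illustration, not something from the paper: it generates applicant pools in which the selection score correlates with true performance at a given validity, then counts how often the top-scoring applicant is also the genuinely best performer.

```python
import numpy as np

rng = np.random.default_rng(42)

def top_pick_rate(validity, pool_size=10, trials=20000):
    """Fraction of hiring rounds in which the applicant with the best
    selection score is also the one with the best true performance."""
    # Latent true job performance for every applicant in every round.
    performance = rng.standard_normal((trials, pool_size))
    # Selection scores constructed to correlate with performance
    # at exactly the given validity coefficient.
    noise = rng.standard_normal((trials, pool_size))
    scores = validity * performance + np.sqrt(1 - validity**2) * noise
    hits = scores.argmax(axis=1) == performance.argmax(axis=1)
    return hits.mean()

for r in (0.10, 0.38, 0.51):
    print(f"validity {r:.2f}: top scorer is the best performer "
          f"{top_pick_rate(r):.0%} of the time")
```

The exact percentages depend on the pool size and won't match my percentage rule of thumb above, but the direction is the point: the higher the validity, the more often the tool surfaces the genuinely best applicant.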

Now you may be thinking that by combining the different measures you can get pretty close to being 100% right 100% of the time. If only it worked this way; unfortunately, it doesn't. Combining multiple selection tools will give you a boost in predictive validity, but even given the very positive financial impact of getting the best person you can, the reality is that time, money and face validity are important considerations within the selection process (I'll be writing a future article on face validity, to promote further understanding of that term and its application to selection). Schmidt and Hunter do cover combining selection methods in their article: specifically, they provide the predictive validity achieved when GMA (General Mental Ability) tests are combined with each of the other selection methods. For example, combining GMA tests with integrity tests yields a predictive validity of 0.65 – or, all other things being equal, you will select the best candidate 65% of the time using GMA tests in combination with integrity tests.

Personnel measure (combined with GMA tests)   Validity
------------------------------------------------------
Work sample tests                               0.63
Integrity tests                                 0.65
Conscientiousness tests                         0.60
Employment interviews (structured)              0.63
Employment interviews (unstructured)            0.55
Job knowledge tests                             0.58
Job tryout procedure                            0.58
Peer ratings                                    0.58
Reference checks                                0.57
Job experience (years)                          0.54
Biographical data measures                      0.52
Assessment centers                              0.53
Years of education                              0.52

You can see, comparing the two tables I've provided above, that combining selection methods isn't simply a matter of adding two or more predictive validity scores together to get the overall predictive validity of your selection process. And many of us (I certainly include myself here) don't have the statistical knowledge or the data that Schmidt and Hunter did when they ran these calculations. So after all the numbers, what can you take from this?
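For the statistically curious, the standard two-predictor multiple correlation formula shows why the numbers don't simply add. Here is a sketch, assuming (as Schmidt and Hunter report) that integrity test scores are essentially uncorrelated with GMA:

```python
import math

def combined_validity(r1, r2, r12):
    """Multiple correlation R of two predictors with validities r1 and r2
    against the same criterion, given predictor intercorrelation r12."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_squared)

# GMA (0.51) combined with integrity tests (0.41), intercorrelation ~0:
print(round(combined_validity(0.51, 0.41, 0.0), 2))  # 0.65, not 0.92
```

Notice that if two predictors were highly intercorrelated, the second would add far less – which is exactly the pattern of diminishing gains you see in the combined table above.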

The key message I hope you take from this is that not all selection methods are equal. Some are better than others, and some (such as age) statistically and morally have no place within a selection process. In a perfect world we would all use selection methods with high predictive validity, and use a number of them to assess candidates. However, we don't live in a perfect world: we are constrained by budgets, selection processes that take too long risk top applicants accepting offers from the other live applications they have in the job market, and our managers and our organizations need that talent now.

So when advising managers, writing policy, or indeed leading a selection panel, remember the work of Schmidt and Hunter, and make an informed and objective choice of which selection methods or tests to use. Leveraging their work can create enormous opportunities for innovative and cost-effective selection processes that deliver results. For example, using a job knowledge test rather than a job tryout procedure will be cheaper for the organization and will actually provide a better result, as job knowledge tests have a predictive validity of 0.48 versus 0.44 for job tryout procedures – how surprising will that be to your management team, a better selection test which is also cheaper? Alternatively, you can demonstrate through Schmidt and Hunter's work that a structured employment interview, rather than simply having a chat (aka an unstructured employment interview), will cost more in preparation time but will predict the best applicant from your applicant pool with far greater consistency.

So that's basically it. I appreciate it's a lot to take in, and I've potentially covered some new concepts here; I'd encourage you to keep reading through it until it all falls into place. Selection is a core activity of HR and one in which we can add tremendous value to our organization – when done well.

I look forward to reading some comments and questions on this one.

By Brendan Lys

Operating at the intersection of Human Resources and Data Science, I combine extensive specialist experience within Human Resources with the methodologies and approaches of Data Science. This focus on the discovery of actionable insights from data has been applied to areas such as remuneration & benefits, workforce planning, recruitment, health & safety, diversity, and training.

But what does the application of data science to HR challenges and opportunities actually look like? Within an HR framework the data we work with typically comes directly from our HRMIS; an advantage of using data science methodologies is that we can bring in additional data, either held within the organization or from external sources – data which is out of reach of a pure HR analytics approach. Consider, for example, position descriptions: these contain a wealth of data that we typically ignore as it's not in an analysis-ready format. A side project I'm working on currently (April 2019) is using text mining on job descriptions to provide insights into which job family a position may fit into. The insights of my work have been enjoyed by organizations across a diversity of sectors, including government (Australia and New Zealand), ASX- and NZX-listed companies, utilities, not-for-profits, and higher education.