Chairman & Founder
Quoting from Psychological Bulletin, 1998, Vol. 124, No. 2, pp. 262-274, concerning a meta-analytic study in which two eminent researchers reviewed 85 years of research findings:
"...for hiring employees without previous experience in the job, the most valid predictor of future performance and learning is general mental ability ([GMA], intelligence or general cognitive ability…)."
Of the 19 predictors reviewed in their study, only work-sample tests (useful only when applicants already have knowledge of the job) were superior.
OK, work-sample tests fit our minimum-inference standard and make sense. They consist of samples of what applicants will actually do in the future. If designed properly, little inference is required. But whoa… intelligence tests predicting job performance? And the more complex the job, the better they predict! Sounds logical. You wouldn't expect people who are not intelligent to perform complex jobs effectively. But what about the inference stuff?
Actually, it's more than that… note the word "valid" in the quotation above. In the context of testing, psychometrics, and employment law, this word has a very specific meaning.
va·lid An observed correlation is considered valid when the probability that it occurred by chance alone is 5 (or fewer) times out of 100 (i.e., p ≤ .05).
Remember the concept of content validity introduced in the second blog? Now we are dealing with a totally different type of validity, known as criterion-related validity. It depends on mathematics (statistical correlation) rather than on the logical overlap between predictor content and future job content. To determine this form of validity, predictor values (test scores, for example) are mathematically correlated with actual measures of job performance. If a statistically significant, or valid (as defined above), relationship exists, it can be used to determine the job-success probabilities of applicants with different test (predictor) scores, and it can be legally and professionally used in selection decisions. The higher the correlation, the more accurate the prediction.
This is obviously a different type of inference. Think of it as quantified inference that approaches explicit knowledge. Potentially powerful stuff.
If the validity of a predictor is established and quantified, decision makers can know the probability of promoting/hiring people who will be successful on the job (true positives). They can also know the probability of rejecting people who would be successful (false negatives) and accepting those who will not be successful (false positives).
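Those three outcomes fall out of a simple cross-tabulation of the hiring decision against later job success. A minimal sketch, with a cutoff score and success threshold chosen purely for illustration (all data invented):

```python
# Hypothetical sketch: applying a hiring cutoff and counting decision
# outcomes against eventual job success (all numbers are made up).

CUTOFF = 75    # hire applicants scoring at or above this test score
SUCCESS = 3.0  # performance rating counted as "successful on the job"

applicants = [  # (test score, eventual performance rating)
    (52, 2.1), (61, 2.8), (70, 3.4), (74, 2.5), (80, 3.1),
    (83, 2.9), (88, 3.6), (90, 4.2), (94, 4.0), (97, 4.5),
]

true_pos = sum(1 for s, p in applicants if s >= CUTOFF and p >= SUCCESS)
false_pos = sum(1 for s, p in applicants if s >= CUTOFF and p < SUCCESS)
false_neg = sum(1 for s, p in applicants if s < CUTOFF and p >= SUCCESS)

print(f"hired and successful (true positives): {true_pos}")
print(f"hired but unsuccessful (false positives): {false_pos}")
print(f"rejected but would have succeeded (false negatives): {false_neg}")
```

The stronger the validity (correlation), the more the true positives dominate the two kinds of errors; moving the cutoff trades false positives against false negatives.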
I should also mention that this is the legal definition of validity for predictor measures whose validity cannot be established using content validity.
For companies trying to increase their performance and efficiency by having the right people in critical jobs, this would seem to be some sort of nirvana. Accurate and cheap to administer. Why is it, then, that very few companies use this methodology when hiring and promoting?
I do not have the time or space to deal with this issue and will not challenge the reader’s patience by attempting to do so. Suffice it to say that the answer is partially scientific, partially social and partially legal. Things beyond the limits of this blog.
So why mention it? I thought it important to describe this method of establishing validity. Even an overview of selection/promotion techniques would be incomplete without it. And it will be referenced to describe the usefulness of the other techniques that will be addressed in future blogs.
Statistics and statistical methods will be minimized. Statistics 101 not required.
By the way… Psychological Bulletin is a peer-reviewed academic journal published by the American Psychological Association. First published in 1904, it is generally considered a, if not the, "gold standard" of our science. The study cited above will be used again in future blogs.
We are here to help!
Leave your name or contact our office:
Cornerstone Management Resource Systems
212 Mary Street
Carnegie, PA 15106
Phone: (412) 429-6400
Fax: (412) 429-6450