Chairman & Founder
The first blog of this series addressed the use of inference when promoting people across job families—when critical components of the new, higher level job have little content overlap with the content of previous jobs. I identified the use of inference in this situation as problematic. However, since we cannot look into the future to develop explicit knowledge about performance in a job that exists in a different job family, we must depend on some inference-based decision-making process. Our challenge is to use one that results in inferential “hops” instead of “leaps.” Think about this as the “hop—leap” continuum. To demonstrate, we will start on the “leap” side.
The greatest leap that comes to mind is phrenology—the assessment of intelligence and personality by measuring the dimensions of, and lumps on, the human skull. During the early 1800s, employers sometimes required a “character reference” from the local phrenologist before making a job offer. Much inference/faith required here. A more modern leap is graphology—the attempt to determine personality, abilities, character, etc. through handwriting analysis. I once followed a graphologist at a seminar for managers where different promotion techniques were described. The graphologist promised 100% accuracy when making promotion decisions. Some act to follow! I wondered what I was doing there and desperately wanted to be somewhere else.
However, before you reject graphology as something from another era, I have been told that it is quite popular in France and Israel. I have likewise been told that differences in writing are primarily a function of fine motor coordination and the strength of different finger muscles—and, in my case, of having Miss Gilbert in the third grade to teach me cursive. Some readers may remember the perfectly drawn letters at the top of grammar school blackboards. I’m certain Miss Gilbert created those.
Moving toward the “hop” side of our continuum, there are two commonly used selection methodologies I need to address. They can be applied as either interviews or questionnaires, but we normally think of them as the basis for structuring the content of interviews. The first requires the least amount of inference.
Behavioral consistency reviews. Based on the reasonable principle that past behavior can predict future behavior, this model attempts to predict future performance from performance on an earlier job. It requires that the previous job tasks are reasonably similar (consistent) to tasks that will be performed in the future. It is primarily appropriate when jobs are in the same family. These overlapping (or near-overlapping) tasks are said to have high “content validity.” There is a logical relationship between the content of what is being used to predict performance and the performance to be predicted. Little inference required.
When the new job exists in a different job family (for example, sales rep to sales manager), it is reasonable to assume that some tasks will, in fact, be similar. However, many new and important tasks will be entirely different. In this case, performance on the previous job will have less content validity and utility as a predictor of future performance.
The next type of interview is generally known as “behavioral.” This type of interview attempts to compensate for the voids left by behavioral consistency interviews when the previous job is not meaningfully similar to the new one. To bridge this gap, constructs known as dimensions or competencies, supposedly relevant to performance in the new job, are created. These constructs (themselves created through inference) become the focus of questioning and attempted measurement. Behavior from past life experiences (including jobs) is interpreted through these constructs, which are then evaluated and used to create inferences about future performance in the new job. Whereas behavioral consistency instruments use past behavior to predict future behavior, “behavioral” interviews use evaluations of constructs (dimensions, competencies, etc.) to predict future performance. “Behavior” (relevant or irrelevant) is used to evaluate the constructs. Inference abounds. A case can be made that these types of interviews would be better named construct interviews.
In addition, there is the issue of the accuracy of dimension or competency measurement. That is a big deal and will be addressed in a later blog.
I have only briefly described two types of interviews. There are many others (situational, psychological, job-related, etc.) that have been researched. Some types seem to be more accurate than others. Structured interviews consistently outperform unstructured ones, and all types suffer from differences in interviewer skill. Despite being riddled with problems that should detract from their accuracy, the interviewing process seems remarkably robust. What interviews measure may not be entirely clear (probably not what they are supposed to measure), but overall interview scores or evaluations can be expected to be moderately accurate as predictors of future job performance. They are almost universally used in hiring and promotion decisions, and it is my bet that, in the twilight of selection procedures, they will be the last one standing—if for no other reason than that managers do not like to hire people they have not talked to.
I have no way of knowing the backgrounds of people reading this series of blogs. However, I bet the next set of tools to be described for making promotion or selection decisions will come as a complete surprise to many readers.
We are here to help!
Leave your name or contact our office:
Cornerstone Management Resource Systems
212 Mary Street
Carnegie, PA 15106
Phone: (412) 429-6400
Fax: (412) 429-6450