
Why Job Interviews Don't Work and How to Fix Them

5/6/2023


Despite hiring being one of the most important decisions school leaders have to make, research tells us that traditional job interviews don’t work. Fortunately, research also points to several ways the selection process can be significantly improved.

The importance for organizations of hiring great employees cannot be overstated. Although the performance of an organization depends on a multitude of factors, all of them ultimately rely on the quality of its hiring process. This is true because, over time, an organization is its hiring process. Yet, notably in schools, employee selection still involves archaic and highly ineffective methods. Innovative in many other respects, most schools can’t seem to think beyond the “resume, cover letter, interview and list of references” box.

Job Interviews Don't Work


Scientific studies suggest that none of these are reliable indicators of professional ability. Summarizing 85 years of research on how well pre-employment assessments predict post-employment performance, Schmidt and Hunter (1998) found that the correlation between employees’ performance during their interview and on the job was extremely low. Of whatever makes an employee better or worse at their job than someone else, 86% was not captured, or not captured accurately, by the interview. Such a high percentage would be understandable if we were trying to correlate future performance with specific criteria, such as years of experience or other naturally occurring variables. After all, professional performance depends on such a variety of factors that no single one of them can be a reliable proxy. But the correlation is alarmingly weak for instruments that have been designed to select the best available candidates. Indeed, the same study found that general intelligence tests were actually better at identifying skilled candidates than the usual recruitment tools.
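To see where the 86% figure comes from, it helps to translate a validity coefficient into variance explained. Here is a minimal worked example, assuming the validity of roughly r ≈ .38 that Schmidt and Hunter (1998) report for the unstructured interview:

```latex
% Share of job-performance variance explained by a predictor with validity r:
\[
  r \approx .38
  \quad\Rightarrow\quad
  r^{2} \approx .14
  \quad\Rightarrow\quad
  1 - r^{2} \approx .86 \;\;\text{(86\% unexplained)}
\]
```

The same arithmetic lies behind the 97% figure quoted later for years of experience: a validity of roughly r ≈ .18, as reported in the same review, leaves about 1 − .18² ≈ 97% of the variance in performance unexplained.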

One could argue that these findings are now more than 20 years old and did not target the specific field of education. Beyond going online, however, schools’ selection processes have not changed much since then. Compared to the corporations included in the review, schools are also often at a disadvantage because of their relatively small size and finances. Unsurprisingly, studies focused on teacher interviews, such as Young and Delli (2002), found them to be even less reliable than reported above.


On reflection, it is no surprise that job interviews don’t work. First of all, job interviews are supposed to assess a candidate’s ability to perform a variety of functions, but this is not what they actually do. Asking a candidate “how do you differentiate instruction” or “tell me about a trans-disciplinary project you initiated” is obviously not the same thing as collecting reliable evidence, over a period of time, of a teacher’s differentiation or collaborative skills. As a matter of fact, job interviews emerged as the go-to selection method precisely because of the practical impossibility of assessing candidates’ actual performance on the job. True enough, many schools now invite potential hires to give a “demo lesson”, a practice that has been made easier by technological progress and that could seem to yield more accurate information: demonstrated rather than reported abilities. Ironically, however, this trend started around the same time that many schools moved their evaluation protocols away from the infamous “announced visit”, which has the same limitation as a job interview: the data collected isn’t representative of teachers’ actual, day-to-day practice. The only difference is that in one instance the person being evaluated says what they think the interviewer wants to hear, while in the other they do what they think their supervisor wants to see. The problem is not only that candidates might be less than authentic. Even in a best-case scenario, candidates’ answers and demos only reveal their best intentions, which might not reflect their habitual behavior. A reliable assessment of abilities must unearth pervasive patterns and enduring propensities, but this requires multiple data points collected over a long period of time.

As social psychologist Richard Nisbett wrote, “Consider the job interview: it’s not only a tiny sample, it’s not even a sample of job behavior but of something else entirely. Extroverts in general do better in interviews than introverts, but for many if not most jobs, extroversion is not what we’re looking for” (Nisbett, 2015). This last observation points to a second problem: not only is the information collected through job interviews unreliable, but so is its interpretation by interviewers.

Don't Trust Your Gut

The issue here is that human thinking is fundamentally biased. As evolutionary psychology explains, cognitive biases emerged precisely to help us reach conclusions in situations where large amounts of complex data are very hard to summarize in a rational way. Because interviews provide nominal data on unrelated dimensions, it is extremely difficult to synthesize them and determine objectively how well a candidate did overall, and even more so in comparison to other candidates. How do you average a great example of student-centeredness with a disappointing answer to the question “tell me about a failed attempt in the classroom and what you learned from this experience”? And how does this compare to another candidate whose responses were merely satisfactory in both instances? Yet recruiters do not necessarily struggle all that much to choose between the two. Subjectively, the right decision might even seem obvious. This is where biases come into play.

A review of all the different ways in which our mind tricks us in this context goes far beyond the scope of this article, but a couple of examples will still be useful. In one of many studies on the subject, Prickett et al. (2000) found that judgments made in the first 20 seconds of an interview were reliable predictors of its final outcome. This means two things. First, interviewers tend to rely excessively on the first pieces of information available (a bias known as the “anchoring effect”). Second, there is a high risk that most of the interview will be spent, not assessing the candidate, but trying to corroborate the immediate impression they made (the well-known “confirmation bias”). What makes this especially problematic, of course, is that the first couple of “get-to-know-you” questions asked during an interview generally have very little to do with the candidate’s ability to do the job: “Tell me a little bit about yourself...”

True enough, assessing professional skills is not the only goal of job interviews. Often-cited additional purposes include “getting a feel” for the person and estimating whether they would be a “good colleague” and a “culture fit”. This last expression is revealing. While interviewers should certainly estimate whether a candidate would match the school’s mission and vision, the risk is high that a “click” (or lack thereof) might be little more than an unconscious signal, with evolutionary origins, that a person belongs to the same tribe as we do. One of the first findings in the history of psychology was that we tend to rely on isolated and largely irrelevant characteristics when forming personality impressions, and then proceed to paint the rest of the portrait using the same “warm” or “cold” color. More recently, social psychology has shown that such attraction (general liking or disliking) depends heavily on our familiarity and similarity with the person. Thus, DeGroot and Motowidlo (1999) found that candidates’ vocal cues (pitch, speech rate, pauses, amplitude variability) and visual cues (smiling, gaze, hand movement, posture) correlated with interviewers’ evaluations, through an effect on liking, trust, and perceived credibility. Such body language, however, says very little about a potential employee, and much more about the personal traits, social stratifications, cultural differences, and other power dynamics at play.

Don't Count On Your Experience

As Richard Nisbett concluded, “countless studies show that the unstructured 30-minute interview is virtually worthless as a predictor of long-term performance by any criteria that have been examined. You have only slightly more chance of choosing the better of two employees after a half-hour interview as you would by flipping a coin”. Yet, the social psychologist added, “we are incapable of treating the interview data as little more than unreliable gossip. It’s just too compelling that we’ve learned a lot from those 30 minutes” (Nisbett, 2015). This overconfidence in the ability to predict long-term job performance from an interview, which Nisbett (1987) dubbed “interviewer illusion”, derives in part from a second-order bias: our natural tendency to discount the influence of biases on our own thinking. As John Wilkinson wrote in his 1836 book Quakerism Examined, “one of the artifices of Satan is to induce men to believe that he does not exist”.

Another serious issue, however, is what I like to call the “experience-expertise fallacy”, i.e., the erroneous assumption that experience automatically leads to expertise. Although the quote commonly attributed to Dewey is nowhere to be found in his writings (Lagueux, 2014), it remains true that “we do not learn from experience, but from reflecting on experience”. Without a proper feedback loop giving us a reality check when we make mistakes and a systematic analysis identifying necessary adjustments, there is simply no reason why experience, no matter how extensive, should increase our mastery in any domain. As a matter of fact, experience can just as well lead someone to develop bad habits and become set in ineffective and outdated ways. Worse, it can do so all while giving a false sense of confidence. In the case of recruitment decisions, properly assessing the quality of a selection process would require a number of controls to ensure objectivity. For instance, subsequent job performance would need to be assessed independently. In practice, however, recruiters and supervisors are usually the same person, which effectively means that hiring managers evaluate their own hiring abilities, with a high risk of choice-supportive bias.

Even when a bad hire is identified, a recruiter will only really learn from this mistake if they can pinpoint what exactly went wrong in their decision-making process, and how they can fix it next time around. Likewise, few if any schools follow the careers of the candidates they have rejected, although this is really the only way they could know whether their recruitment procedures allow them to select the best possible candidates, or make them miss out on fantastic hiring opportunities.

All in all, the belief in the reliability of job interviews has little to no support beyond the outdated “Great Man” theory, which assumed that leaders were somehow endowed with innate superior abilities. Even the more mundane reliance on “extensive experience” rests on a similar superstition: the trope of the “old and wise sage”. In reality, leadership is quite the opposite of infallibility. In the field of education especially, it is synonymous with a dedication to human perfectibility.

Structure Your Interviews

With regard to hiring, the great leader is thus not one who has a magical ability to read minds and foresee the future. On the contrary, it is one who is all too aware that this is not the case, and that effective employee selection requires improving on traditional recruitment practices. Schools have an advantage over other types of organization in this respect, because part of their core activity is to evaluate hard-to-assess student abilities. Strangely enough, not many seem to have leveraged known best practices, which include that assessment be backward-designed, skills-based, rubrics-bound, and differentiated. Could the same be true of effective teacher interviews?

As we have explained, the usual job interview is a highly unreliable selection tool. The general format is unlikely to go away anytime soon, however, simply for practical reasons. Fortunately, research shows that it can be significantly improved. The goal being to hire great future employees, the first logical step is to identify great current employees and to work backward to design a selection process that would identify similar talents. In fact, since the goal is also to avoid bad hires, the same should be done with unsatisfying employees (or former employees...). This is in line with the findings of I/O psychology experts such as Mary Tenopyr (1997) or Marcus Buckingham and Don Clifton (2001). A backward design obviously assumes that schools have effective evaluation systems in place. If that isn’t the case, reviewing the selection process could become an opportunity to define what makes a teacher or administrator “great”. This should not only include alignment with the school’s mission, vision, and definition of learning, but also involve a community-wide effort to capture what colleagues, parents, and of course students value in an educator. The creation of this profile should focus on identifying the skills needed in a school’s particular context: both the skills needed to be successful at the school in its current state and the skills the school will need in the future. This breakdown of what a school is looking for needs to be quite granular, which might require asking expert teachers what matters in their particular subject.

The real difficulty, however, does not lie in identifying the kind of skills that a school needs its future employees to possess. It lies in identifying who, among a pool of candidates, possesses them, and to what degree. To achieve this, these skills must be defined in a way that specifies how they can be recognized during the selection process. In the corporate world, “competency-based interviews” thus rely on rubrics that enable recruiters to evaluate candidates fairly and compare them objectively. Effectiveness also requires that job interviews be “structured”, i.e., that questions be formulated ahead of time and asked exactly the same way, in the same order, to all candidates. To quote Richard Nisbett again, “my recommendation is not to interview at all unless you’re going to develop an interview protocol, with the help of a professional, which is based on careful analysis of what you are looking for in a job candidate. And then ask exactly the same questions of every candidate” (Nisbett, 2015). As awkward as this may sound, the meta-analysis mentioned earlier indicates that the mere fact of structuring an interview could double its predictive power (Schmidt and Hunter, 1998), a finding replicated in other studies, which also suggest that it takes 3 to 4 unstructured interviews to reach the same predictive accuracy as a single structured one (Wiesner and Cronshaw, 1988; Schmidt and Zimmerman, 2004). What can make things even more awkward, though, is that the interviewer should also take notes to reduce memory distortions and biases (such as the contrast effect). If this seems too awkward, there is always the option of discreetly rating the candidate as the interview proceeds, or simply recording the interview if it is conducted online.
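To make this concrete, here is a minimal sketch of what a rubric-bound, structured score sheet could look like. The dimensions, anchors, and 1-4 scale below are hypothetical examples, not a validated instrument:

```python
from dataclasses import dataclass, field

# Hypothetical rubric: every candidate is asked the same questions, in the
# same order, and every answer is rated against the same anchored scale.
RUBRIC = {
    "differentiation": "1 = generic, 2 = some strategies, 3 = systematic, 4 = data-driven",
    "collaboration": "1 = works alone, 2 = cooperative, 3 = proactive, 4 = leads teams",
    "reflective_practice": "1 = defensive, 2 = acknowledges, 3 = analyzes, 4 = adjusts",
}

@dataclass
class ScoreSheet:
    """One sheet per candidate; the notes double as a written evidence trail."""
    candidate: str
    ratings: dict = field(default_factory=dict)   # dimension -> score (1-4)
    evidence: dict = field(default_factory=dict)  # dimension -> quoted answer

    def rate(self, dimension: str, score: int, note: str) -> None:
        if dimension not in RUBRIC:
            raise ValueError(f"unknown rubric dimension: {dimension}")
        if not 1 <= score <= 4:
            raise ValueError("scores must use the anchored 1-4 scale")
        self.ratings[dimension] = score
        self.evidence[dimension] = note  # note-taking reduces memory distortions

    def total(self) -> int:
        return sum(self.ratings.values())

# Two candidates, identical questions, independently scored sheets:
a = ScoreSheet("Candidate A")
a.rate("differentiation", 4, "tiered tasks driven by formative data")
a.rate("collaboration", 2, "mentions department meetings only")
a.rate("reflective_practice", 3, "analyzed a failed unit in detail")

b = ScoreSheet("Candidate B")
b.rate("differentiation", 3, "uses choice boards consistently")
b.rate("collaboration", 3, "co-planned a trans-disciplinary unit")
b.rate("reflective_practice", 3, "keeps a reflective teaching journal")

print(a.total(), b.total())  # 9 9: comparable totals, different profiles
```

Because scores are anchored and evidence is written down, two candidates with very different profiles can be compared dimension by dimension instead of through an overall impression.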

Differentiate the Process

A more traditional unstructured or semi-structured interview can seem to have other advantages. Because the interviewer does not follow a strict script but uses different questions and/or formulations for different candidates depending on the course of their interaction, such interviews are by nature more conversational and thus more natural. However, the rigidity of a structured interview can be explained ahead of time to candidates as a way to ensure fairness. The best candidates will likely value the professionalism it reflects, and a more laid-back discussion can always be organized at a later stage with selected candidates. It might also seem that the ability to ask follow-up questions is too important to stick to the constraints of any script. In reality, guiding and probing questions can and should be built into an interview script. For instance, situational questions (“What would you do if…?”) can follow the famous S.T.A.R. framework to dig into the candidate’s thought process. Likewise, behavioral questions (“What did you do when…?”) should include veracity checks.

As mentioned above, neither method is a sufficient predictor of future performance, simply because each provides only a single data point. Much more promising are job auditions and work samples, during which candidates are asked to demonstrate their skills in different domains with the help of a portfolio: the most reliable selection methods, according to Schmidt and Hunter (1998). A particular advantage of this last selection tool is that it creates a space for differentiation, and provides rich data summarizing a much larger and more representative sample of behavior.

Importantly, the methods described above are not mutually exclusive and can be combined during a job interview. Indeed, research indicates that no single recruitment tool is as effective as a combination of several. Likewise, having more than one “rater” increases reliability dramatically. The most innovative schools might consider instituting 360-degree panels, which can include non-teaching staff members, peers, as well as students. Offering more than one interview is also ideal, when time and other constraints permit.
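The article does not quantify the gain from adding raters, but a standard psychometric result, the Spearman-Brown formula, gives a sense of it: if a single rater’s scores have reliability r, the average of k independent raters has reliability

```latex
\[
  r_{k} \;=\; \frac{k\,r}{1 + (k - 1)\,r}
  \qquad\text{e.g., } r = .50,\; k = 3
  \;\Rightarrow\;
  r_{3} = \frac{3 \times .50}{1 + 2 \times .50} = .75
\]
```

so a three-person panel can raise a mediocre single-rater reliability of .50 to a much more trustworthy .75.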

Know Your No

A frequent issue is the need to screen candidates before the interview stage, and thus on even less reliable information. The most easily accessible piece of data is often “years of relevant experience”, which Schmidt and Hunter (1998) found to leave 97% of future performance unaccounted for. Rather than relying on criteria such as resume length that have close to zero predictive power, one option is to use automated online tests as a first screening step. Schools with a strong identity will appeal to a distinctive kind of educator, and thus should not find it too hard to identify early, with a limited number of critical questions, those who are not in line with their expectations and philosophy. Knowing precisely what a school does not want in a future employee is indeed just as important as knowing exactly what it is looking for. “Knowing your no” can be very helpful at the early stages of the process, when candidates are abundant. It notably makes it possible to use filters that are truly meaningful rather than merely practical.
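As a purely illustrative sketch, a “knowing your no” pre-screen can be a handful of knock-out questions encoding a school’s non-negotiables, applied before anyone is interviewed. The questions and required answers below are invented examples, not recommendations:

```python
# Hypothetical knock-out pre-screen: each question maps to the answer a school
# requires. A real school would derive its own questions from its mission,
# vision, and definition of learning.
KNOCKOUT_QUESTIONS = {
    "comfortable_with_standards_based_grading": True,
    "willing_to_co_teach_and_co_plan": True,
    "sees_differentiation_as_optional_extra": False,  # answering True disqualifies
}

def passes_screen(answers: dict) -> bool:
    """Return True only if every answer matches the school's non-negotiables."""
    return all(answers.get(question) == required
               for question, required in KNOCKOUT_QUESTIONS.items())

# Example: this candidate advances to the structured interview stage.
print(passes_screen({
    "comfortable_with_standards_based_grading": True,
    "willing_to_co_teach_and_co_plan": True,
    "sees_differentiation_as_optional_extra": False,
}))  # True
```

Because such filters are explicit, they can be audited and revised over time, unlike a gut-level first pass over a stack of resumes.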

References

Buckingham and Clifton (2001) - Buckingham, M., & Clifton, D. O. (2001). Now, discover your strengths. New York: Free Press.

DeGroot and Motowidlo (1999) - DeGroot, Timothy, and Stephan J. Motowidlo. “Why Visual and Vocal Interview Cues Can Affect Interviewers' Judgments and Predict Job Performance.” Journal of Applied Psychology, vol. 84, no. 6, 1999, pp. 986–993, doi:10.1037/0021-9010.84.6.986.

Lagueux (2014) - Lagueux, Robert. “A Spurious John Dewey Quotation on Reflection.” Academia.edu, 2014, www.academia.edu/17358587/A_Spurious_John_Dewey_Quotation_on_Reflection. 

Nisbett (1987) - Nisbett, R. E. (1987). Lay personality theory: Its nature, origin, and utility. In N. E. Grunberg, R. E. Nisbett, & others, A distinctive approach to psychological research: The influence of Stanley Schachter. Hillsdale, NJ: Erlbaum.

Nisbett (2015) - Nisbett, Richard. “Why Job Interviews Are Pointless.” The Guardian, 22 Nov. 2015, www.theguardian.com/lifeandstyle/2015/nov/22/why-job-interviews-are-pointless. 

Prickett et al. (2000) - Prickett, Tricia J., et al. “First Impression Formation in a Job Interview: The Importance of the First 20 Seconds.” PsycEXTRA Dataset, 2000, doi:10.1037/e413792005-571.

Schmidt and Hunter (1998) - Schmidt, Frank L., and John E. Hunter. “The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 85 Years of Research Findings.” Psychological Bulletin, vol. 124, no. 2, 1998, pp. 262–274, doi:10.1037/0033-2909.124.2.262.

Schmidt and Zimmerman (2004) - Schmidt, Frank L., and Ryan D. Zimmerman. “A Counterintuitive Hypothesis About Employment Interview Validity and Some Supporting Evidence.” Journal of Applied Psychology, vol. 89, no. 3, 2004, pp. 553–561, doi:10.1037/0021-9010.89.3.553.

Tenopyr (1997) - Tenopyr, M. L. (1997). Improving the workplace: Industrial/organizational psychology as a career. In R. J. Sternberg (Ed.), Career paths in psychology: Where your degree can take you. Washington, DC: American Psychological Association.

Wiesner and Cronshaw (1988) - Wiesner, Willi H., and Steven F. Cronshaw. “A Meta-Analytic Investigation of the Impact of Interview Format and Degree of Structure on the Validity of the Employment Interview.” Journal of Occupational Psychology, vol. 61, no. 4, 1988, pp. 275–290, doi:10.1111/j.2044-8325.1988.tb00467.x.

Young and Delli (2002) - Young, I. Phillip, and Dane A. Delli. “The Validity of the Teacher Perceiver Interview for Predicting Performance of Classroom Teachers.” Educational Administration Quarterly, vol. 38, no. 5, 2002, pp. 586–612, doi:10.1177/0013161x02239640.