“You never get a second chance to make a first impression” was the tagline for a Head & Shoulders shampoo ad campaign in the 1980s. It unfortunately encapsulates how most interviews work. There have been volumes written about how “the first five minutes” of an interview are what really matter, describing how interviewers make initial assessments and spend the rest of the interview working to confirm those assessments. If they like you, they look for reasons to like you more. If they don’t like your handshake or the awkward introduction, then the interview is essentially over because they spend the rest of the meeting looking for reasons to reject you. These small moments of observation that are then used to make bigger decisions are called “thin slices.”
Tricia Prickett and Neha Gada-Jain, two psychology students at the University of Toledo, collaborated with their professor Frank Bernieri and reported in a 2000 study that judgments made in the first 10 seconds of an interview could predict the outcome of the interview.
The problem is, these predictions from the first 10 seconds are useless.
They create a situation where an interview is spent trying to confirm what we think of someone, rather than truly assessing them. Psychologists call this confirmation bias, “the tendency to search for, interpret, or prioritize information in a way that confirms one’s beliefs or hypotheses.” Based on the slightest interaction, we make a snap, unconscious judgment heavily influenced by our existing biases and beliefs. Without realizing it, we then shift from assessing a candidate to hunting for evidence that confirms our initial impression.
In other words, most interviews are a waste of time because 99.4 percent of the time is spent trying to confirm whatever impression the interviewer formed in the first ten seconds. “Tell me about yourself.” “What is your greatest weakness?” “What is your greatest strength?” Worthless.
Equally worthless are the case interviews and brainteasers used by many firms. These include problems such as: “Your client is a paper manufacturer that is considering building a second plant. Should they?” or “Estimate how many gas stations there are in Manhattan.” Or, most annoyingly, “How many golf balls would fit inside a 747?”
Performance on these kinds of questions is at best a discrete skill that can be improved through practice, eliminating their utility for assessing candidates. At worst, they rely on some trivial bit of information or insight that is withheld from the candidate, and serve primarily to make the interviewer feel clever and self-satisfied. They have little if any ability to predict how candidates will perform in a job.
Full disclosure: I’m the Senior Vice President of People Operations at Google, and some of these interview questions have been and I’m sure continue to be used at the company. Sorry about that. We do everything we can to discourage this, and when our senior leaders—myself included—review applicants each week, we ignore the answers to these questions.
The Unsung Genius of the Structured Interview
In 1998, Frank Schmidt and John Hunter published a meta-analysis of 85 years of research on how well assessments predict performance. They looked at 19 different assessment techniques and found that typical, unstructured job interviews were pretty bad at predicting how someone would perform once hired.
Unstructured interviews have an r² of 0.14, meaning that they can explain only 14 percent of an employee’s performance. That puts them somewhat ahead of reference checks (which explain 7 percent of performance) and well ahead of the number of years of work experience (3 percent).
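For readers who want the arithmetic behind that statistic: the meta-analysis reports an assessment’s validity as a correlation coefficient r, and squaring it gives the share of variance in job performance the assessment explains. A sketch, using the validity figure Schmidt and Hunter report for unstructured interviews:

```latex
% Validity (correlation) for unstructured interviews: r = 0.38
% Variance in performance explained: r^2
r^2 = (0.38)^2 \approx 0.14 \quad\Rightarrow\quad \text{14 percent of performance explained}
```

The same arithmetic connects the other figures here: a work sample test’s validity of roughly r = 0.54 squares to about 0.29, the 29 percent quoted for the best predictor.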
The best predictor of how someone will perform in a job is a work sample test (29 percent). This entails giving candidates a sample piece of work, similar to that which they would do in the job, and assessing their performance at it. Even this can’t predict performance perfectly, since actual performance also depends on other skills, such as how well you collaborate with others, adapt to uncertainty, and learn.
And worse, many jobs don’t have nice, neat pieces of work that you can hand to a candidate. You can (and should) offer a work sample test to someone applying to work in a call center or to do very task-oriented work, but for many jobs there are too many variables involved day-to-day to allow the construction of a representative work sample. All our technical hires, whether in engineering or product management, go through a work sample test of sorts, where they are asked to solve engineering problems during the interview.
The second-best predictors of performance are tests of general cognitive ability (26 percent). In contrast to case interviews and brainteasers, these are actual tests with defined right and wrong answers, similar to what you might find on an IQ test. They are predictive because general cognitive ability includes the capacity to learn, and the combination of raw intelligence and learning ability will make most people successful in most jobs. The problem, however, is that most standardized tests of this type discriminate against non-white, non-male test takers (at least in the United States). The SAT consistently underpredicts how women and non-whites will perform in college. Reasons why include the test format (there is no gender gap on Advanced Placement tests, which use short answers and essays instead of multiple choice); test scoring (boys are more likely to guess after eliminating one possible answer, which improves their scores); and even the content of questions.
Tied with tests of general cognitive ability are structured interviews (26 percent), where candidates are asked a consistent set of questions with clear criteria to assess the quality of responses. There are two kinds of structured interviews: behavioral and situational. Behavioral interviews ask candidates to describe prior achievements and match those to what is required in the current job (e.g., “Tell me about a time . . .”). Situational interviews present a job-related hypothetical situation (e.g., “What would you do if . . . ?”). A diligent interviewer will probe deeply to assess the veracity and thought process behind the stories told by the candidate.
Structured interviews are predictive even for jobs that are themselves unstructured. We’ve also found that they cause both candidates and interviewers to have a better experience and are perceived to be most fair. So why don’t more companies use them? Well, they are hard to develop: You have to write them, test them, and make sure interviewers stick to them. And then you have to continuously refresh them so candidates don’t compare notes and come prepared with all the answers. It’s a lot of work, but the alternative is to waste everyone’s time with a typical interview that is either highly subjective, or discriminatory, or both.
There is a better way. Research shows that combinations of assessment techniques are better than any single technique. For example, a test of general cognitive ability when combined with an assessment of conscientiousness is better able to predict who will be successful in a job. My experience is that people who score high on conscientiousness “work to completion”—meaning they don’t stop until a job is done rather than quitting at good enough—and are more likely to feel responsibility for their teams and the environment around them.
The goal of our interview process is to predict how candidates will perform once they join the team. We achieve that goal by doing what the science says: combining behavioral and situational structured interviews with assessments of cognitive ability, conscientiousness, and leadership. To help interviewers, we’ve developed an internal tool called qDroid, where an interviewer picks the job they are screening for, checks the attributes they want to test, and is emailed an interview guide with questions designed to predict performance for that job. This makes it easy for interviewers to find and ask great interview questions. Interviewers can also share the document with others on the interview panel so everyone can collaborate to assess the candidate from all perspectives.
The neat trick here is that, while interviewers can certainly make up their own questions if they wish, by making it easier to rely on the prevalidated ones, we’re giving a little nudge toward better, more reliable interviewing.
Examples of interview questions include:
- Tell me about a time your behavior had a positive impact on your team. (Follow-ups: What was your primary goal and why? How did your teammates respond? Moving forward, what’s your plan?)
- Tell me about a time when you effectively managed your team to achieve a goal. What did your approach look like? (Follow-ups: What were your targets and how did you meet them as an individual and as a team? How did you adapt your leadership approach to different individuals? What was the key takeaway from this specific situation?)
- Tell me about a time you had difficulty working with someone (can be a coworker, classmate, client). What made this person difficult to work with for you? (Follow-ups: What steps did you take to resolve the problem? What was the outcome? What could you have done differently?)
Generic Questions, Brilliant Answers
One early reader of this book, when it was still a rough draft, told me, “These questions are so generic it’s a little disappointing.” He was right, and wrong. Yes, these questions are bland; it’s the answers that are compelling. But the questions give you a consistent, reliable basis for sifting the superb candidates from the merely great, because superb candidates will have much, much better examples and reasons for making the choices they did. You’ll see a clear line between the great and the average.
Sure, it can be fun to ask “What song best describes your work ethic?” or “What do you think about when you’re alone in your car?”—both real interview questions from other companies—but the point is to identify the best person for the job, not to indulge yourself by asking questions that trigger your biases (“OMG! I think about the same things in the car!”).
We then score the interview with a consistent rubric. Our own version of the scoring for general cognitive ability has five constituent components, starting with how well the candidate understands the problem.
For each component, the interviewer has to indicate how the candidate did, and each performance level is clearly defined. The interviewer then has to write exactly how the candidate demonstrated their general cognitive ability, so later reviewers can make their own assessment.
Upon hearing about our interview questions and scoring sheets, the same skeptical friend blurted, “Bah! Just more platitudes and corporate speak.” But think about the last five people you interviewed for a similar job. Did you give them similar questions or did each person get different questions? Did you cover everything you needed to with each of them, or did you run out of time? Did you hold them to exactly the same standard, or were you tougher on one because you were tired, cranky, and having a bad day? Did you write up detailed notes so that other interviewers could benefit from your insights?
A concise hiring rubric addresses all these issues because it distills messy, vague, and complicated work situations down to measurable, comparable results. For example, imagine you’re interviewing someone for a tech-support job. A solid answer for “identifies solutions” would be, “I fixed the laptop battery like my customer asked.” An outstanding answer would be, “I figured that since he had complained about battery life in the past and was about to go on a trip, I’d also get a spare battery in case he needed it.” Applying a boring-seeming rubric is the key to quantifying and taming the mess.
Remember too that you don’t just want to assess the candidate. You want them to fall in love with you. Really. You want them to have a great experience, have their concerns addressed, and come away feeling like they just had the best day of their lives. Interviews are awkward because you’re having an intimate conversation with someone you just met, and the candidate is in a very vulnerable position. It’s always worth investing time to make sure they feel good at the end of it, because they will tell other people about their experience—and because it’s the right way to treat people.
In contrast to the days when everyone in Silicon Valley seemed to have a story about their miserable Google experience, today 80 percent of people who have been interviewed and rejected report that they would recommend that a friend apply to Google. This is pretty remarkable considering that they themselves didn’t get hired.
Don’t Leave the Interviewing to the Bosses!
In every interview I’ve ever had with another company, I’ve met my potential boss and several peers. But rarely have I met anyone who would be working for me. Google turns this approach upside down. You’ll probably meet your prospective manager (where possible—for some large job groups like “software engineer” or “account strategist” there is no single hiring manager) and a peer, but more important is meeting one or two of the people who will work for you. In a way, their assessments are more important than anyone else’s—after all, they’re going to have to live with you. This sends a strong signal to candidates about Google being nonhierarchical, and it also helps prevent cronyism, where managers hire their old buddies for their new teams. We find that the best candidates leave subordinates feeling inspired or excited to learn from them.
We also add a “cross-functional interviewer,” someone with little or no connection at all to the group for which the candidate is interviewing. For example, we might ask someone from the legal or the Ads team (the latter design the technology behind our advertising products) to interview a prospective sales hire. This is to provide a disinterested assessment: A Googler from a different function is unlikely to have any interest in a particular job being filled but has a strong interest in keeping the quality of hiring high. They are also less susceptible to the thin-slices error, since they have less in common with the candidate than the other interviewers.
So how do you create your own self-replicating staffing machine?
- Set a high bar for quality. Before you start recruiting, decide what attributes you want and define as a group what great looks like. A good rule of thumb is to hire only people who are better than you. Do not compromise. Ever.
- Find your own candidates. LinkedIn, Google+, alumni databases, and professional associations make it easy.
- Assess candidates objectively. Include subordinates and peers in the interviews, make sure interviewers write good notes, and have an unbiased group of people make the actual hiring decision. Periodically return to those notes and compare them to how the new employee is doing, to refine your assessment capability.
- Give candidates a reason to join. Make clear why the work you are doing matters, and let the candidate experience the astounding people they will get to work with.
This is easy to write, but I can tell you from experience that it’s very hard to do. Managers hate the idea that they can’t hire their own people. Interviewers can’t stand being told that they have to follow a certain format for the interview or for their feedback. People will disagree with data if it runs counter to their intuition and argue that the quality bar doesn’t need to be so high for every job.
Do not give in to the pressure.
Fight for quality.
Excerpted from Work Rules!, published in April 2015 by Twelve, an imprint of Hachette Book Group. Copyright 2015 by Laszlo Bock. Previously published at Wired.com