Assessing Software Engineering Candidates

Fri, Jan 5, 2018 4-minute read

All engineering teams, including almost certainly your own, are constantly on the lookout for talented and passionate engineers to join them in building the next great product.

Unfortunately, these engineers are usually recruited via unproven methods: doing what has “always been done” without reference to its effectiveness, or scouring Google for advice and recommendations. Given such an absence of guidance, the best way forward is probably to draw on decades of academic research on the subject. We’re engineers, we like data, right? And what better source of data and evidence to inform your recruitment process than a century of research on selection and assessment in occupational psychology?

Research into selection methods has a long history: in Modena, Italy, the psychologist Ugo Pizzoli began using tests to select apprentices as early as 1901. The motivation for this research has always been to predict people’s future performance in their jobs. This is a good place to introduce a new concept: predictive validity, which measures how well an assessment method predicts future performance on the job. It contrasts with, for example, face validity, which measures how valid a procedure is perceived to be by candidates.
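To make the concept concrete: in the selection literature, predictive validity is typically reported as a correlation coefficient between candidates’ assessment scores at hiring time and their job-performance ratings collected later. The sketch below computes such a coefficient; all the names and numbers are invented for illustration.

```python
# Minimal sketch of a predictive validity calculation: the Pearson
# correlation between assessment scores taken at hiring time and
# job-performance ratings gathered later. All data below is made up.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: cognitive test scores at hiring vs. manager
# performance ratings (1-5 scale) one year into the job.
test_scores = [62, 75, 48, 88, 70, 55, 91, 66]
performance = [3.1, 3.8, 2.5, 4.4, 3.6, 2.9, 4.6, 3.2]

validity = pearson_r(test_scores, performance)
print(f"predictive validity r = {validity:.2f}")
```

A coefficient near 1 would mean the assessment ranks candidates almost exactly as their later performance does; a coefficient near 0 would mean it tells you essentially nothing. Published validity figures for real selection methods sit well between those extremes.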

The predictive validity of various selection methods has been extensively studied. The findings are mixed but overall interviews and work samples seem, on average, to have the most predictive validity.

The most intriguing, and for many the most controversial, finding is that cognitive ability tests such as Raven’s Progressive Matrices are the best predictors of on-the-job maximal performance, i.e. what a candidate ‘can do’, as opposed to typical performance, which is what the candidate ‘will do’.

This last result holds irrespective of job type and job complexity. It makes intuitive sense, given that a key component of cognitive ability is the capacity to learn quickly and process new information at a rapid pace. A definition often used in applied psychology is that intelligence “is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings: ‘catching on’, ‘making sense’ of things, or ‘figuring out’ what to do.”

The most common reply to the finding above is to point out the prevalence of socially awkward but highly intelligent engineers, who are now a stereotype in popular culture. This topic would need much more space to explore fully, but two points are worth making. The first is that, as with everything in psychology, exceptions abound. The second is that an excessive focus on logical reasoning skills does a disservice to the definition used above, which spans all kinds of intelligence: social, emotional and verbal along with logical and mathematical. So focusing on the core of the definition, the ability to ‘make sense’ of things, ‘figure out’ what to do and ‘catch on’, makes it easier to understand what skills and abilities teams should probably be looking for in candidates, on both the interpersonal and technical fronts.

So we’ve covered the three assessment and selection methods generally accepted to have the highest predictive validity: cognitive ability tests, interviews and work samples. There are many others, including personality tests, references, word of mouth, biodata, graphology(!) and assessment centers, but their predictive validity is generally lower or even non-existent, though they may excel in other aspects of selection and assessment.

So, if your sole aim is to select engineers with the most potential to be outstanding contributors in your team, should you only give them cognitive ability tests, conduct interviews and request samples of their work? Not so fast! There is a multitude of other factors to consider, such as the ability to work well with others, “culture”, aspirations and engagement with your company’s values. And even when considering only the three selection methods discussed above, there are plenty of caveats and issues to weigh: is a take-home programming test as useful as a pair programming session? Is an interview with the hiring manager as informative as an interview with a future team-mate? Are the interviews structured or unstructured? How are they scored? Are they panel interviews or one-on-one? Furthermore, these methods all assess different but overlapping aspects of performance, so you should probably review which attributes and skills you’re looking for before deciding on the optimal mix of selection tests.

Selection and assessment is a broad topic to which some researchers have devoted their entire careers. Hopefully the above has given you a taste of the fascinating complexities and, if done well, the potential rewards of a carefully thought-out selection process. At the end of the day, your team is your greatest asset, so make sure it’s the best it can possibly be!