The year is 2057. The Recruiting Effectiveness Act had been in effect for about five years now. The Polymorphous Party had passed the act after the Great Productivity Revolt of 2050. The act was short and there were few amendments. It contained just one provision: every recruiter had to be licensed and was ultimately responsible for the quality of the people he or she hired. The act was all-encompassing. A recruiter's economic livelihood depended on acquiring talented people who would be consistently high producers. If hiring quality dropped once, recruiters were required to attend a month-long re-education camp. If it dropped twice, they were sent for six months. If it dropped three times, they were disbarred from recruiting and sentenced to 12 months hand-lettering "Recruit the Best" posters.

We now see Hume Resource, Jr., Recruiter 1st Class, entering his office on the 53rd floor of the Great Big Company. His new assistant, Maggie, was studying to become a recruiter herself.

"Maggie, do we have that last candidate's data yet?"

"I have it right here, Hume. Take a look."

"Hmmm. Let's see. Yep. See here, Maggie. We have all our data organized according to the Big Three: cognitive ability, interpersonal skills, and motivations. We learned some time ago that all successes and failures can be classified into one of these three 'buckets.'"

"Just three?"

"Yes, just three. Let's look at each area separately. First, people need to have the right mental stuff for the job. This includes things like technical knowledge, analytical ability, learning ability, and planning skills: everything you need to learn, apply, and solve problems. Most early recruiters measured only technical skills, but they found out that was only a small part of predicting performance. You needed more to be effective."

"Like getting things done through people?"

"Exactly! Getting things done through people is critical in all jobs. The best technical skills are useless if the person can't communicate effectively. Of course, communication changes slightly depending on whether you work as a team member, manage people, sell, or work in a customer service role. But the need is the same. No matter what you have inside your head, you need to work through other people to get things done. It sounds simple, but you would be surprised at how many people fail in this area."

"What about the motivation thing?"

"Well, Maggie, think of it this way. You can be the smartest person alive. You can even be the most effective communicator living, but if you have a poor attitude, are uninterested, or are unmotivated, all those skills go to waste."

"That makes sense, Hume. But how do you measure these things? I guess technical skills are pretty straightforward, but what about the other things?"

"The technical skills aren't as simple as you would think, Maggie. Just having a degree or certification is no assurance that you know how to use that knowledge. But I'm getting ahead of myself. Take a look at this IT applicant, for example. She has a C-32 certification, but we need to know if she is able to use that knowledge in our job. First, I did an extensive job analysis and learned that we need an expert in C-32, not just a casual programmer. This meant we had to bypass our basic C-32 test battery and use the expert case studies. Each case closely mirrored our job requirements and was scored against a standardized key, then independently evaluated by two of our C-32 experts. I see here that she passed with flying colors."

"So we offer her the job?"
"Not yet. Techie knowledge is only part of the job, remember? These other exercises measured general analytical ability and learning ability. She did well. And the planning exercise scores also look good."

"Are those tests based on questions and multiple-choice answers?"

"No, we use different tests depending on what we want to measure. In-basket exercises are good indicators of broad analytical thinking because they have no clear-cut course of action; numerical cases are good for measuring specialized analytical ability because they require number crunching; and planning cases require the applicant to put a specified set of data in the right sequence. Cognitive ability is like a wheel. Everything shares a common hub, but there are many different spokes to measure depending on what is important for the job."

"I never realized it was that complex!"

"Complexity and thoroughness are how we hire the best people."

"What about interpersonal skills? What kind of tests do we use for that?"

"That's a good question, Maggie. Before the Effectiveness Act, recruiters used pencil and paper to 'measure' interpersonal skills. Imagine that! Using paper to predict behavior! Ha! Not only was that foolish, it was ineffective. A lot of people went to camp for that one! Until we find some way to go virtual, Maggie, the only way to accurately measure interpersonal skills is to use structured role-plays: not the kind you see in training programs, but ones using trained role-players, tightly controlled simulations, and standard scoring guides."

"You mentioned 'standard scoring guides' twice, Hume. What's that?"

"Standard scoring guides are lists of the possible responses that make up the 'right' answers. We can't just have anyone doing the scoring. For one thing, it's not fair to the candidate; for another, it won't do for everyone to have his or her own idea of the right answer. Scoring guides make sure we are all singing the same music. There are different exercises and scoring guides for managers, salespeople, customer service, and teamwork. It depends on what our job analysis uncovers."

"Wow! I never guessed. What about this motivation thing?"

"Motivation is the most difficult of all to measure. People tend to distort the truth on tests and try to look good in the interview. In fact, they'll say and do almost anything to get a job. We get around that by doing our homework. We use a special motivational test designed for selection; none of that training stuff here. Our test developers examined hundreds of studies, then designed a test that used only the factors that predicted performance. There are only ten. Imagine that!"

"So the test gives us ten factor scores?"

"Not quite. The factors are only an intermediate step. We study our current employees and use artificial intelligence algorithms to convert the ten factors into a prediction of actual on-the-job behaviors. The factors are only important to the test developers; recruiters only care about predicting performance appraisals. It's one of the reasons why we are so effective. So, Maggie, you see how we get such good people. We start by learning what is needed for the job. Then we use behavioral interviewing and biographical data to examine a candidate's past behavior, we use tests and exercises to learn as much as we can about present ability, and we use special tests to predict future intentions. When we have all our data, we combine it into the three performance areas we discussed earlier. With all that information, our decision becomes obvious."
"That's pretty comprehensive!"

"That's the way it is done, Maggie. Anything less and you'll find yourself singing Kumbaya around a campfire!"
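Hume never explains what his "artificial intelligence algorithms" actually do, but the underlying step he describes (fit a model to current employees' scores and appraisal ratings, then use it to convert a new candidate's factor scores into a predicted performance rating) can be sketched in a few lines. The Python below is a minimal illustration on synthetic data; the ten motivation factors, the two composite scores, the score scales, and the choice of ordinary least squares are all assumptions made for the example, not details from the story.

import numpy as np

# Toy data: each row is a current employee's scores on ten (hypothetical)
# motivation factors plus cognitive and interpersonal composites.
rng = np.random.default_rng(seed=42)
n_employees = 200
X = rng.uniform(1, 5, size=(n_employees, 12))  # 10 factors + 2 composites

# Pretend performance appraisals are a weighted sum of the predictors plus
# noise; in practice the weights are unknown and estimated from real data.
true_weights = rng.uniform(0.1, 1.0, size=12)
y = X @ true_weights + rng.normal(0, 0.5, size=n_employees)

# Fit a simple linear model (ordinary least squares) on current employees.
X_design = np.column_stack([np.ones(n_employees), X])  # add an intercept
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)

def predict_performance(candidate_scores: np.ndarray) -> float:
    """Convert a candidate's raw scores into a predicted appraisal rating."""
    return float(np.concatenate(([1.0], candidate_scores)) @ coef)

# Example: score a new applicant (purely illustrative numbers).
applicant = rng.uniform(1, 5, size=12)
print(f"Predicted performance rating: {predict_performance(applicant):.2f}")

In practice, of course, such a model would be fit to real appraisal data and validated before any hiring decision relied on it.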