
The Long and Short Story of Using Work Samples to Hire Salespeople

Sep 3, 2003

A few weeks ago I wrote about how work samples are one of the most accurate skill-measurement tools, but among the least used. In response, I received several requests for advice on how to “get” work samples. Work samples for an artist, dancer, or actor are pretty straightforward: you just ask the person to perform and observe the result. It is not quite as easy for occupations like sales, management, or other business-related professions, where performance is much more complex. These professions require expert knowledge to develop work samples. Let me explain by listing a few factors that professionals consider.

Content Validity

“Validity” means performance in the exercise will be highly predictive of performance on the job. So, if the job requires presenting to a warm audience, a cold audience, a prospect, a technical audience, or a non-technical audience, the exercise (or exercises) should resemble the kind of behavior required for the job. In sales, a high-dollar, complicated, repeat sale will require a significantly different type of exercise than a small-dollar, one-time sale. Think about how the exercises would differ for the following activities:

  • Fact finding followed by a presentation
  • Large-dollar technical sales with group buyers
  • Small-dollar technical sales
  • Small-dollar non-technical sales
  • One-time sales
  • Repeat sales
  • And so forth…

Inter-rater Reliability

This refers to whether two or more raters, observing the same person perform the same exercise, will evaluate the same behavior the same way. Raters have to be trained to evaluate candidate behavior uniformly; otherwise, Rater 1 might be very tough while Rater 2 is very lenient. This also means the exercise should be written so that the same behavior is elicited regardless of who participates. Unlike a training exercise, where the objective is to practice new skills, the objective of a work sample is to dispassionately evaluate job performance. Same person + same exercise = different results = bad science.

Reliability

This refers to whether the exercise produces the same results from one time to another. A good exercise will produce roughly the same results every time it is conducted. This requires writing a situation that minimizes the effect of prior job knowledge and concentrates on assessing critical behavioral abilities (i.e., it evaluates behavior instead of knowledge). It may seem hard to believe, but people tend to perform the same in a well-designed simulation, regardless of how often they participate.

Evaluate the Right Things

Evaluation is deeper than it looks. Critical behaviors should be identified ahead of time. There should be minimal overlap between one behavior and another. Behaviors should be unbiased and not subject to rater interpretation. Evaluation should be behaviorally anchored, not subjective. Exercises should be realistic, observable, and time-bound. Asking a sales candidate to sell you the pencil, ashtray, trashcan, or whatever object is at hand trivializes the intricate nature of fact finding, qualification, presentation, discovery, cold calling, and so forth. Salespeople don’t fail because they have a poor sales “pitch”; they fail because they cannot develop trust, get in front of customers, mutually discover problems, and provide viable solutions that make it easy to buy their products.
Evaluate the Entire Job, But Don’t Mix and Match

Rating accuracy drops when too many things are evaluated in a single exercise. For example, a presentation exercise that includes presentation skills, questioning ability, problem solving, and persuasion will deliver a mish-mash of evaluations: does the candidate’s response to a question indicate good meeting-management skills, problem-solving ability, or persuasiveness? Work samples are decent measures of behaviors, but other measures are necessary to assess motivations, problem-solving ability, planning skills, and so forth.

The Way the Pros Develop Work Samples

We begin by interviewing a reasonably large sample of salespeople, managers, and visionary managers to discover the critical competencies associated with success and failure. We also collect data from the company, such as reports, training materials, and product information. When everything is assembled, we draft one or more work-sample exercises that parallel, not imitate, the job (this eliminates the job-knowledge effect and allows us to concentrate on job behavior). When the draft is completed, we list all the possible courses of action for each exercise and assemble them into an evaluation sheet. The entire package is reduced to a draft exercise that is reviewed and edited by job-content experts. Once their changes are incorporated, we give the exercise to a sample of high and low producers to see whether the instructions are clear and consistent and can identify differences. More edits and drafts. Finally, we train a few people in the organization to administer the exercise. A few of these folks usually cannot “separate” their own opinions from the exercise, so we have to eliminate them from the evaluation team.

So there you have it. No quick answers. No magic test questions. No one-size-fits-all pencil-and-paper sales test. Just a lot of hard work.
Conclusion

A hiring manager has only two choices: 1) separate the wheat from the chaff in the pre-hire stage using rigorous, well-designed exercises, or 2) take a chance and expect to coach and train about half your employees on the job. It makes “sell me a pencil” seem like a silly children’s game, doesn’t it?
