“Hello. Thank you for calling the Pretty Good Test Company. This is Les. How may I help you?”

“I was interested in purchasing one of your selection tests. You sent me a technical manual a few days ago and I have some questions.”

“Certainly. I can help!”

“I can’t seem to find the criterion validity tables.”

“All our tests have been validated.”

“I’m sure. But where are the performance criterion tables?”

“Let me see… Um. Here it is. Turn to page seven.”

“OK, I’m on page seven. All I see here is a table with the title ‘Average Scores for Mid-Managers.’”

“Right! That’s our validity data.”

“But these are only averages.”

“Right! Validity!”

“Perhaps. But what I am looking for is the chart that shows higher scores on your test lead to higher performance.”

“Oh, we didn’t do that. People don’t like to complete rating forms.”

“Then how do you know your test is valid for selecting managers?”

“Well, for one thing, we asked a lot of managers to take our test. For another, they were all employed at the time.”

“Just what kind of managers did you survey?”

“I think it was an insurance office somewhere.”

“What about technical managers?”

“Oh, I’m sure they would score the same.”

“The scores for technical managers would be the same as the scores for managers of insurance agents?”

“Probably.”

“OK. Let’s go with that for a moment. Do you have any data that shows whether your test might have adverse impact?”

“No. It’s really hard to find a large minority population who will take our test. We plan to do an adverse impact study soon, though.”

“Let me see if I have this straight. You claim your test is ‘valid’ because you have published a technical manual that contains average scores. The managers in your ‘validation’ sample came from an insurance office. You think insurance managers’ scores would probably be equal to scores for technical managers. You did not gather any data on the quality of performance, except for the fact that your management sample was employed at the time. You have no data about adverse impact. And you are also suggesting we use your test for selecting our managers.”

“Yes. All our tests are valid.”

“How do you account for the fact that users are responsible for validity, even if they purchase materials from a vendor? … And that descriptive labels, promotional literature, frequency of usage, testimonials, and seller credentials are not evidence of validity?”

“Well, we assume our buyers know that already.”

“But you state your tests are valid even though you know validity has to be established by the user?”

“Uh-huh. All our tests are validated.”

“Arggh! Let’s assume for a minute that IT managers at my organization require the same average scores as your sample. How do we set our cut-off points?”

“Our motto at the Pretty Good Test Company is ‘more is better.’”

“You mean higher scores indicate better performance?”

“Yes.”

“But you said earlier that you did not measure performance, just average scores.”

“That’s right.”

“But now you are suggesting that higher scores lead to better performance. How do you know this?”

“Well, it just seems reasonable.”

“What if lower-performing managers actually had the high scores and higher-performing managers had low scores?”

“Well, I guess that would be some kind of surprise, wouldn’t it? How many tests would you like?”

“Tell you what. Let me think about it some more. OK?”

“Sure. Call us anytime. The people at Pretty Good Tests are here to service you.”

“You give the word ‘service’ a whole new meaning.”

“Thank you! We try hard to be the best.”

“Let me know when you stop trying and start achieving.”

“Yes, sir! We’ll put your name on our mailing list! Goodbye.”

“Bye.”
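The two pieces of evidence the caller keeps asking for are not exotic. Criterion validity is typically reported as a correlation between test scores and a measured performance criterion, and adverse impact is commonly screened with the four-fifths (80%) rule comparing group selection rates. A minimal sketch in Python, using made-up illustrative numbers (the scores, ratings, and pass counts below are hypothetical, not data from any real test):

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation between test scores and performance ratings.

    A positive r is the evidence the caller wanted: higher scores
    actually going with higher measured performance.
    """
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

def four_fifths_ok(group_pass, group_total, reference_pass, reference_total):
    """Four-fifths rule screen: a group's selection rate should be at
    least 80% of the rate for the group with the highest rate."""
    group_rate = group_pass / group_total
    reference_rate = reference_pass / reference_total
    return group_rate >= 0.8 * reference_rate

# Hypothetical data: applicants' test scores and later performance ratings.
scores = [72, 85, 90, 60, 78, 88, 95, 65]
ratings = [3.1, 4.0, 4.2, 2.8, 3.5, 4.1, 4.6, 3.0]
print(f"criterion validity r = {pearson_r(scores, ratings):.2f}")

# Hypothetical selection rates: 10 of 25 applicants pass in one group
# versus 30 of 50 in the reference group (0.40 vs 0.60).
print("passes four-fifths screen:", four_fifths_ok(10, 25, 30, 50))
```

Neither calculation is possible with the Pretty Good Test Company's manual: the correlation requires performance ratings that were never collected, and the four-fifths screen requires the minority selection-rate data the vendor admits it does not have.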
Anatomy of a Technical Manual or How to Snooker People into Thinking You Have A Good Test