Over the past few years I have had the pleasure, and the frustration, of puzzling through when and how to use off-the-shelf measures for evaluation. I’m a huge fan of off-the-shelf measures that have validation evidence to support their use. Here’s the logic behind my thinking: Why should I spend my time (and a client’s money) developing a survey from scratch when others have already put their own time, resources, and data collection effort into developing and validating a scale that measures the same construct?
So, when we start a new project at KPC, we often begin with off-the-shelf measures that might be a good fit for the project’s needs. Notice the word “might.” That’s no accident. Just because a scale has evidence to support its use with one audience in one context doesn’t mean it will work equally well for other projects and other groups. Each time we use one of these instruments, we re-run the reliability statistics and/or collect our own validity data to support the use of the instrument in our context and with our evaluation audience. We recently published a study exploring two existing instruments we hoped to use for an evaluation, both to demonstrate the process we go through to determine whether a scale is a good fit and to think through what happens when the numbers don’t come out exactly as you would have hoped. In that case, we could still use the scales, but not all of the sub-scales. Read here to learn more.
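If you’re curious what “re-running the reliability statistics” can look like in practice, one common piece is recomputing an internal-consistency coefficient such as Cronbach’s alpha on your own sample. Here is a minimal Python sketch, assuming item responses sit in a pandas DataFrame with one row per respondent and one column per item; the function name, item names, and response values are all hypothetical, for illustration only.

```python
import pandas as pd

def cronbachs_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale (rows = respondents, columns = items).

    Drops respondents with any missing item responses (listwise deletion).
    """
    items = items.dropna()
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: five respondents answering a three-item sub-scale
responses = pd.DataFrame({
    "item1": [4, 5, 3, 4, 2],
    "item2": [4, 4, 3, 5, 2],
    "item3": [5, 5, 2, 4, 3],
})
print(f"alpha = {cronbachs_alpha(responses):.2f}")
```

A common rule of thumb is to hope for an alpha of about .70 or higher, and to compare the value from your sample against the one reported in the original validation study; a noticeably lower number is one signal that a scale, or one of its sub-scales, may not be a good fit for your audience.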