Now that you have determined what outcomes or other aspects of your program to evaluate, it is time to identify what type of data to collect and how to collect those data. Keep in mind that there is no single best evaluation design or way to collect data. The most appropriate approach is the one that will answer your evaluation questions within the limits of the resources available to you.
What type of data should I collect?
One of the first aspects you need to consider is what type of data will best meet your needs. The type of data you choose to collect - quantitative or qualitative - is in part dependent on what you want to know about your program. Because there are advantages and disadvantages to both quantitative and qualitative data, many evaluations rely on a mix of the two.
| Type of data | Description and examples | Considerations |
| --- | --- | --- |
| Quantitative | Uses numerical data to make sense of information. Examples: scores on a test or survey answers on a five-point scale. | Allows collection and analysis of large amounts of data relatively quickly. Analysis is perceived to be less open to interpretation and thus typically considered more objective. |
| Qualitative | Uses narrative forms, such as thoughts or feelings, to describe what is being evaluated. Examples: observations, interview transcripts, focus groups, photographs, or videotapes. | Can provide rich context for examining participants’ experiences and how a program operates. Allows for questions to be investigated in depth. |
| Mixed methods | Uses a combination of both quantitative and qualitative data. Example: a combination of surveys and interviews. | Allows quantitative data to be collected from a large number of participants (increasing the likelihood that results can be applied to all program participants), while also allowing in-depth qualitative investigation of evaluation questions with a smaller number of participants. Requires an evaluator who is able to collect data using a variety of methods and analyses. |
Which tool should I use to collect data?
There are many different tools for collecting quantitative and qualitative data. Questionnaires, observations, focus groups, and interviews are among the most commonly used techniques.
The resources below offer guidance on selecting tools appropriate for your needs as well as on developing these data collection instruments.
- Designing Evaluation for Education Projects (.pdf).
NOAA Office of Education and Sustainable Development (2004).
Section 6, Data Collection, provides a table that lays out the purpose, advantages, and limitations of many popular data collection methods. Additional details for each method are provided including the number of respondents they are appropriate for, time and cost considerations, and when to use them. Last, there is a table that identifies which methods are appropriate for measuring knowledge, skills, attitudes, and behavior outcomes.
- User-Friendly Handbook for Mixed Method Evaluations (.pdf)
Frechtling, J. and Sharp, L. (1997). National Science Foundation.
Chapter 3, Common Qualitative Methods, provides in-depth descriptions of common qualitative data collection methods. For each method, the chapter identifies advantages and disadvantages, discusses the role of the evaluator, and highlights tips and common pitfalls. The Chapter 3 Appendices provide a sample observation instrument, a sample in-depth interview guide, and a sample focus group topic guide. Additionally, Step 2 of Chapter 6 (page 6-3), should help you choose the best sources of information for your evaluation questions and the most appropriate methods for collecting that information.
- Evaluating Environmental Education (.pdf)
Stokking et al. IUCN (1996). Commission on Education and Communication.
Step 5, Selecting One or More Data Collection Methods (page 48), provides brief descriptions of various data collection methods as well as advice on when to use a particular method. Subsequent sections of this handbook provide detailed guidance on how to construct relevant instruments.
- Tools, Instruments, and Questionnaires
Neill, J. (2006). Wilderdom Outdoor Research Center
This resource provides summaries, reviews and links to over 30 scales (i.e., tested measures) for evaluating “the psycho-social impacts of intervention programs,” focusing mainly on outdoor education programs. Many of the scales are relevant for EE programs as well. This resource also includes a discussion of factors to consider when choosing an instrument and links to sites that help you create online questionnaires.
More Resources on Specific Types of Tools:
- Collecting Evaluation Data: Surveys (.pdf), University of Wisconsin Extension - Program Development and Evaluation Unit
- Questionnaire Design: Asking questions with a purpose (.pdf), University of Wisconsin Extension - Program Development and Evaluation Unit
- Collecting Evaluation Data: Direct Observation (.pdf), University of Wisconsin Extension - Program Development and Evaluation Unit
- Quick Tips: Focus Group Interviews (.pdf), University of Wisconsin Extension - Program Development and Evaluation Unit
- Conducting Key Informant Interviews (.pdf), USAID Center for Development Information and Evaluation
Use existing data collection instruments when possible!
Don’t put all of your eggs in one basket!
When and from whom should I collect data?
The conclusions you are able to draw about your program’s effects are influenced not only by the type of data collected, but also by when and from whom data are collected. Imagine you are evaluating a three-day EE unit that seeks to increase students’ awareness of how pollutants enter the water cycle. To evaluate this outcome, you plan to administer a questionnaire to students at the end of the unit. While this strategy will give you some idea of what the students know after participating in the unit, it also has some drawbacks. How will you assess whether the students’ awareness improved, compared to what they knew before? Likewise, how will you know that gains in students’ awareness were a result of the unit alone and did not occur based on something that students learned elsewhere, for example through television or another class?
There are several strategies for addressing these concerns. You could, for example, administer the questionnaire twice – once before the program begins, and again after it ends. By comparing the two data sets, you will learn how much students’ awareness increased over the course of the unit. Another strategy is to use a comparison or control group, that is, a group of students who are similar to your participants but who do not experience the unit. If, at the end of the unit, you administer the questionnaire to both the participants (also known as the treatment group) and the control group, you can compare their results. Using a control group eliminates some of the uncertainty about whether your program is the sole cause of changes in participants, because students in the control group are just as likely as the treatment group to have been exposed to outside information on water pollution.
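The pre/post and control-group logic described above amounts to a simple "difference in gains" calculation. The sketch below illustrates it with entirely made-up questionnaire scores (all numbers are hypothetical and only for illustration); real evaluations would typically also test whether the difference is statistically meaningful:

```python
# Hypothetical pre/post awareness scores (0-10 scale) for a treatment
# group (students who took the EE unit) and a control group (students
# who did not). All values are invented for illustration.
treatment_pre = [4, 5, 3, 6, 4]
treatment_post = [7, 8, 6, 9, 7]
control_pre = [4, 6, 3, 5, 4]
control_post = [5, 6, 4, 5, 5]

def mean_gain(pre, post):
    """Average change in score from pre-test to post-test."""
    return sum(after - before for before, after in zip(pre, post)) / len(pre)

treatment_gain = mean_gain(treatment_pre, treatment_post)
control_gain = mean_gain(control_pre, control_post)

# Subtracting the control group's gain removes change that happened to
# both groups (e.g., something learned from television or another class),
# leaving an estimate of the unit's own effect.
effect = treatment_gain - control_gain
print(f"Treatment gain: {treatment_gain:.1f}")
print(f"Control gain:   {control_gain:.1f}")
print(f"Estimated effect of the unit: {effect:.1f}")
```

The key design choice is that outside influences are assumed to affect both groups equally, which is why the control group's gain can be subtracted out.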
There are a number of different ways to use control groups and/or to vary the timing of data collection. The most appropriate design for your evaluation will depend on what you are trying to measure, the structure of your program, and the resources you have available for conducting the evaluation. To help you choose, the advantages and disadvantages of different types of evaluation designs are reviewed here.
Check out the following resources.
- Evaluating Environmental Education (.pdf)
Stokking et al. IUCN (1996). Commission on Education and Communication.
Step 4, Deciding When and From Which Groups Data Should Be Collected, offers a straightforward discussion of several major issues to consider when designing an evaluation (e.g., use of pre/post tests and follow-up measures).