Step 5: Collect Data

You have settled on what type of data to collect and how to collect it. Now it is time to proceed with actually collecting data for your evaluation.


How can I best manage the data collection process?

Create a work plan and schedule deadlines

Before proceeding with data collection, it is important to determine who will be responsible for different tasks and by when these tasks should be completed. Devising a work plan with deadlines for all key aspects of the evaluation keeps the process manageable and identifies expectations for staff and evaluation team members. Developing a work plan will also help you determine if what you envision for the evaluation is feasible given your resources! Your work plan could include deadlines for:

  • Determining samples and selecting participants
  • Developing and pilot-testing instruments
  • Requesting IRB Approval for your evaluation
  • Completing training on data collection or analysis techniques
  • Scheduling observations, interviews or focus groups
  • Obtaining Informed Consent
  • Disseminating instruments or conducting observations, interviews, etc.
  • Following up with participants who have not completed their questionnaires
  • Receiving and organizing data
  • Scoring, coding, and analyzing data
  • Meeting with program staff and other stakeholders to discuss findings
  • Writing your report
  • Sharing the results
  • Using the results for program improvement or other purposes

Do not forget the easy data!


Develop guidelines for how data should be collected

The procedure used to collect data will vary depending on your design, instruments, and sampling of participants. Regardless of the approach, there are precautions you can take to ensure a smooth data collection process. These include:

  • Pilot-testing instruments to ensure that directions are clear and that measures are appropriate for the target audience
  • Rehearsing the process of administering the evaluation instrument or approach, as appropriate or feasible
  • Scoping out in advance the location where data collection will take place
  • Checking that you have the necessary supplies, such as the right number of printed instruments and consent forms (plus a few extras), pens or pencils, stamped return-addressed envelopes, or other means for collecting completed questionnaires



Others may not understand your data collection procedures.


How do I select evaluation participants?

Initially you may be tempted to get feedback from everyone who participates in your program. This may be an appropriate strategy if the program serves a very small audience. With a large group of participants, however, this approach can waste time and money. Instead of collecting information from everyone in your population (i.e., all of your participants), you can learn just as much about your program by collecting information from a sample of participants.

Two factors determine who should be included in your sample:

  • whether you are using quantitative or qualitative methods
  • the purpose of your evaluation

Sampling applies to more than individual participants!


Quantitative Data

If the purpose of your evaluation is to...

  • ... determine what the typical outcomes of your program are, such as the impacts the program has on participants’ attitudes and behaviors or generally how satisfied individuals are with the way the program was implemented, then you will want a sample that is representative of the entire population. The most accurate way to achieve this is through random selection, a process where every participant has an equal chance of being chosen. With a small population of 80, for example, you might put each participant’s name in a hat and draw 30 names to determine your sample. There are numerous ways to select a random sample, and several of the resources below provide in-depth guidance for choosing the best method to meet your evaluation needs.
  • ... examine the impact of your program on a specific set of individuals (e.g., 6th grade girls who are in the free lunch program), then a non-random approach called purposeful sampling may be a better strategy. In this type of sampling, the evaluator uses his or her expertise to select the participants whom he or she thinks are most likely to represent the population and/or who will be helpful in answering the evaluation questions that were defined in Step 3. Purposeful sampling is also typically used for qualitative research.
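The "names in a hat" draw described above can be sketched in a few lines of code. The participant names and sample size here are hypothetical placeholders, not part of any particular program:

```python
import random

# Hypothetical population of 80 program participants
population = [f"Participant {i}" for i in range(1, 81)]

# Draw 30 names "from the hat": every participant has an
# equal chance of being chosen, and no one is drawn twice
random.seed(42)  # fixed seed only so the draw is reproducible
sample = random.sample(population, 30)

print(len(sample))       # 30 names drawn
print(len(set(sample)))  # 30 -- all distinct, no repeats
```

Because `random.sample` selects without replacement, it mirrors the hat draw exactly; drawing with replacement would require a different function.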

The following resources will help you determine what type of sample is most appropriate for your evaluation.

  • Evaluating Environmental Education (.pdf)
    Stokking et al. (1996). IUCN Commission on Education and Communication.
    Beginner Intermediate
    Step 8, "Deciding Who to Include in the Investigation," provides a brief description of several basic sampling methods and explains errors and biases to be aware of.
  • Sampling (.pdf)
    University of Wisconsin-Extension, Program Development and Evaluation Unit
    Beginner Intermediate
    This resource describes different types of random and non-random sampling methods, explains how to carry them out, and identifies the advantages and disadvantages of each. It also provides guidance on how to choose a sampling method based on your population size, what you want to know, and available resources.

Beware of Sampling Biases!


Qualitative Data

The intent of qualitative research is to gain an in-depth understanding of a situation. As such, most sampling strategies involve purposefully selecting individuals according to criteria that the evaluator considers most valuable for answering the evaluation questions generated in Step 3. Below are examples of purposeful sampling strategies based on those suggested by Patton (1990):

  • Extreme or deviant case sampling: Learn what works or doesn't work for your program by studying the extremely atypical cases -- e.g., either those individuals for whom the program is unusually successful or those for whom the program seems to have no impact.
  • Maximum variation sampling: This strategy captures common outcomes that cut across a variety of participants or programs. For example, if you provide outdoor EE to schools of varying socio-economic status, you might try to collect data from schools at each socio-economic level.
  • Homogeneous sampling: As the name implies, this sampling method seeks out individuals who are similar with respect to certain variables (e.g., teachers with less than 5 years' experience, 3rd grade girls, minority students in the free-lunch program, etc.).
  • Typical case sampling: Choosing a sample of "typical" cases allows you to describe the average experience of your participants to someone not familiar with your program. If your program is implemented in multiple schools, for instance, you would select the schools where the program seems to have an average impact and avoid the ones that exhibit unusual or extreme results.
  • Stratified purposeful sampling: This strategy involves sampling from below average, average, and above average cases in order to capture the main variations in your program's outcomes.
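As an illustration of the last strategy, a stratified purposeful sample can be drawn by ranking cases on an outcome score and picking from the bottom, middle, and top of the distribution. The school names and scores below are entirely hypothetical:

```python
# Hypothetical outcome scores for nine program schools
scores = {
    "School A": 2.1, "School B": 3.0, "School C": 4.8,
    "School D": 5.5, "School E": 3.9, "School F": 6.2,
    "School G": 2.8, "School H": 4.1, "School I": 5.0,
}

# Rank schools from lowest to highest outcome score
ranked = sorted(scores, key=scores.get)

# Split the ranking into below-average, average, and above-average thirds
third = len(ranked) // 3
strata = {
    "below average": ranked[:third],
    "average": ranked[third:2 * third],
    "above average": ranked[2 * third:],
}

# Studying cases from each stratum captures the main variations
for label, schools in strata.items():
    print(label, "->", schools)
```

A real evaluation would then select one or more cases from each stratum for in-depth study, rather than analyzing only the average performers.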

How big should my sample be?

Quantitative Data

The ideal sample size depends on how you plan to analyze the data you collect and the size of the overall population (e.g., the total number of participants, visitors, etc.). Many evaluators use descriptive statistics to analyze and report data (See Step 6). For example, if you ask teachers to rate how likely they are to implement a new EE curriculum on a scale of 1 to 7 ("extremely unlikely" to "extremely likely"), you might report the mean response and the percent of respondents who are "likely" to "extremely likely" to implement it. If this is similar to how you plan to report your evaluation results, then your primary concern is to choose a sample that is large enough to accurately represent your population. The table below provides rules of thumb for choosing an appropriate sample size.

Population    Sample size    Percent of population required
10            10             100%
20            19             95%
50            44             88%
100           80             80%
250           152            61%
500           217            43%
1,000         278            28%
2,500         333            13%
5,000         350            7%
10,000        370            4%

From Krejcie, R. V., and Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30(3): 607-610.
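The table is based on Krejcie and Morgan's formula, which can also be computed directly for population sizes not shown. The sketch below assumes a 95% confidence level (chi-square = 3.841), a 5% margin of error, and maximum variability (p = 0.5); because of rounding, its results may differ from published tables by a case or two:

```python
def krejcie_morgan(N, chi2=3.841, p=0.5, d=0.05):
    """Recommended sample size s for a population of size N
    (Krejcie & Morgan, 1970): s = X^2*N*p*(1-p) / (d^2*(N-1) + X^2*p*(1-p))."""
    s = (chi2 * N * p * (1 - p)) / (d**2 * (N - 1) + chi2 * p * (1 - p))
    return round(s)

for N in (100, 1000, 10000):
    print(N, "->", krejcie_morgan(N))
```

Note how the required sample grows much more slowly than the population: past roughly 5,000 participants, the recommended sample size levels off near 370.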

Choosing a big enough sample is more critical if you want to use inferential statistics. These statistics allow you to test whether there are significant differences either between groups (e.g., does your program have a different impact on girls versus boys?) or within individuals (e.g., do participants have more pro-environment attitudes after completing your program than before they started?). If you choose a sample that is too small, you increase the likelihood that the differences identified by the analysis are due to chance -- and did not occur as a result of your program! For additional guidance on determining a sample size, see the extended discussion on power analysis.


Qualitative Data

Because qualitative efforts are typically not about generalizing results to a larger population, there are no hard and fast rules for selecting an appropriate sample size. Instead, try to set a minimum sample size based on what you think is necessary to adequately capture your outcomes of interest -- and anticipate that this number may change once you start to collect data. Below are some strategies to guide your decision in choosing a sample for a qualitative evaluation:

  • What's the quality and depth of information that you want from each individual or case? If you want detailed, in-depth information, a lot can be learned from a small number of information-rich cases. If, however, you are trying to explore and understand variations in your program, you might seek less detail-rich information from a greater number of individuals.
  • What type of sampling strategy are you using? Have you included enough cases to identify variations between subgroups of participants, or conversely, to identify common trends among a homogeneous group?
  • Are you continuing to gain new information from each case? It is a good idea to analyze data for themes as you collect information. If you find that new data fit within those themes, further data collection is probably unnecessary. If the information does not fit these themes, you might want to increase your sample size until you reach that saturation point.

The following resources provide additional guidance for determining sample sizes for qualitative evaluations.

  • Sample size for qualitative research
    DePaulo, P. (2000).
    Beginner Intermediate
    This thoughtful article discusses ways to determine if a sample is the right size to discover the insights that are likely to be most important to you. It explains the risks of the wrong sample size, how to figure out the ideal sample size based on calculating probabilities of risk and based on qualitative findings, and how to determine sample size when you only want to know about the typical, or majority view.
  • Sample size in qualitative research
    Sandelowski, M. (1995).
    Intermediate Advanced
    This article, published in the academic journal Research in Nursing and Health, provides a thorough explanation of the importance of sample size in qualitative research, a review of issues to consider when determining sample size, and a discussion of sample size in different kinds of purposeful sampling.

How do I prepare my data for analysis?

Record and clean it!

Data are precious and, in a sense, irreplaceable. As you collect information from evaluation participants, make sure you take the time to record it properly. This means not only double-checking recorded information but also ensuring that multiple data collectors record information in the same way.

Develop a coding sheet to help keep track of how data were recorded. A coding sheet lists each question on the evaluation instrument along with how responses to it are recorded. If you use statistical software, record the abbreviations you used to name the corresponding variable. For example, the question "Overall, how satisfied were you with this professional development workshop?" might be coded as "satis_wkshp" in your data file. Coding sheets are also helpful if your instrument has yes/no, true/false, or other non-numerical response options. For quantitative analyses of such data, you will have to assign a value to each response, e.g., "1" for no and false responses, "2" for yes and true responses. A coding sheet provides a useful place to track this information.
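In a spreadsheet or statistical package, a coding sheet often amounts to a simple lookup table. The second question, its variable name, and the value codes below are hypothetical examples added for illustration; only "satis_wkshp" comes from the text above:

```python
# Hypothetical coding sheet: maps each questionnaire item to a short
# variable name and numeric codes for its response options
coding_sheet = {
    "Overall, how satisfied were you with this professional "
    "development workshop?": {
        "variable": "satis_wkshp",
        "codes": {"very dissatisfied": 1, "dissatisfied": 2,
                  "satisfied": 3, "very satisfied": 4},
    },
    # Hypothetical yes/no item, coded 1 = no, 2 = yes
    "Do you plan to use the curriculum next year?": {
        "variable": "use_curric",
        "codes": {"no": 1, "yes": 2},
    },
}

# Recode a raw response using the sheet
entry = coding_sheet["Do you plan to use the curriculum next year?"]
print(entry["variable"], "=", entry["codes"]["yes"])  # use_curric = 2
```

Keeping the codes in one place like this makes it easy for multiple data collectors to enter responses the same way.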

The last step before analyzing your data involves "cleaning" or proofing your data file for typographical errors. It is not uncommon to accidentally hit the wrong number or to type two responses in one cell of a spreadsheet (e.g., resulting in "23" instead of "2" and "3"). A quick way to resolve most of these errors is to have your spreadsheet software analyze each variable for the minimum and maximum values. If your question has a scale from 1 to 7, but the software reports a maximum value of 45, you know you have an error to find!
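The min/max check described above is easy to automate. The column names, valid ranges, and responses below are assumptions for illustration, with one deliberate typo planted in the data:

```python
# Hypothetical responses; "45" is a typo where "4" and "5"
# were entered into one cell
data = {
    "satis_wkshp": [5, 7, 3, 45, 6, 2],  # 1-to-7 scale
    "use_curric":  [1, 2, 2, 1, 2, 1],   # 1 = no, 2 = yes
}

valid_range = {"satis_wkshp": (1, 7), "use_curric": (1, 2)}

# Flag any column whose observed minimum or maximum falls outside
# the scale defined for that question
for column, values in data.items():
    lo, hi = valid_range[column]
    if min(values) < lo or max(values) > hi:
        print(f"{column}: value outside {lo}-{hi}, check for typos")
```

Running a check like this on every variable before analysis catches most data-entry slips in seconds.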

The following resources provide more information on developing coding sheets and ensuring that data are entered properly.

  • Make Certain Your Electronic Data Are Accurate (.pdf)
    University of Wisconsin-Extension, Program Development and Evaluation Unit
    This document provides brief tips for ensuring that data are ready to be analyzed.
  • User's Guide to Evaluation for National Service Programs (.pdf)
    Corporation for National & Community Service
    The "Data Analysis" chapter in this guide provides detailed insights into how to clean and analyze open-ended qualitative responses as well as how to code quantitative data from a questionnaire. Practical and easy to follow examples and tips are incorporated throughout.

Record how you handle unusual data



Patton, M.Q. (1990). Qualitative Evaluation and Research Methods (2nd Ed.). Newbury Park, CA: Sage Publications.
