How To Do It

CDC's Framework also describes different evaluation designs that can guide you as you develop your own:

Design types are commonly classified as experimental, quasi-experimental, or observational. No single design is better than the others under all circumstances.

Evaluation methods should be selected to provide the appropriate information to address stakeholders' questions (i.e., methods should be matched to the primary users, uses, and questions).

Experimental designs use random assignment to compare the effect of an activity across otherwise equivalent groups. Quasi-experimental methods compare nonequivalent groups (e.g., program participants versus those on a waiting list) or use multiple waves of data to set up a comparison (e.g., an interrupted time series). Observational methods use comparisons within a group to explain unique features of its members (e.g., comparative case studies or cross-sectional surveys).

The choice of design has implications for what will count as evidence, how that evidence will be gathered, and what kinds of claims can be made (including the internal and external validity of conclusions). Methodological decisions also clarify how the evaluation will operate: to what extent program participants will be involved; how information sources will be selected; what data collection instruments will be used; who will collect the data; what data management systems will be needed; and what methods of analysis, synthesis, interpretation, and presentation are appropriate.
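
As a rough illustration of how two of these design types differ in practice, the sketch below contrasts an experimental comparison (random assignment to otherwise equivalent groups) with a quasi-experimental interrupted time series (multiple waves of data before and after a program begins). All data, group sizes, and effect sizes in the sketch are hypothetical and exist only to make the example self-contained; a real evaluation would use measured program data and analysis methods matched to stakeholders' questions.

```python
import random
import statistics

random.seed(1)

# Hypothetical participant pool; a real evaluation would use actual program data.
participants = list(range(200))
random.shuffle(participants)

# Experimental design: random assignment creates otherwise equivalent groups.
treatment_group = participants[:100]
comparison_group = participants[100:]

def simulated_outcome(treated):
    """Return a hypothetical outcome score; the 5-point program effect is assumed."""
    return random.gauss(50, 10) + (5 if treated else 0)

treatment_scores = [simulated_outcome(True) for _ in treatment_group]
comparison_scores = [simulated_outcome(False) for _ in comparison_group]
print("Experimental contrast (treatment - comparison):",
      round(statistics.mean(treatment_scores) - statistics.mean(comparison_scores), 2))

# Quasi-experimental design: an interrupted time series compares multiple
# waves of data collected before and after the program starts, with no
# random assignment.
pre_program_waves = [random.gauss(48, 3) for _ in range(6)]   # e.g., monthly means before
post_program_waves = [random.gauss(53, 3) for _ in range(6)]  # e.g., monthly means after
print("Interrupted time series shift (post - pre):",
      round(statistics.mean(post_program_waves) - statistics.mean(pre_program_waves), 2))
```

Note that a simple before-and-after contrast like the one sketched here does not rule out an underlying trend unrelated to the program; stronger interrupted time series analyses model the pre-program trend and test for a change in level or slope at the point of interruption.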

Because each method option has its own biases and limitations, evaluations that mix methods are generally more effective. Over the course of an evaluation, methods might need to be revised or modified, and the circumstances that make a particular approach credible and useful can change. For example, the evaluation's intended use can shift from improving a program's current activities to deciding whether to expand program services to a new population group. Changing conditions might therefore require alteration or iterative redesign of methods to keep the evaluation on track.

See the Reference Lists for more information on evaluation design.