Innovations Defined

The lack of adequately defined programs is an impediment to implementation with good outcomes (e.g., Michie et al., 2005, 2009). To begin to address this issue, we propose the following criteria for defining a usable intervention (Fixsen, Blase, Metz, & Van Dyke, 2013):

  1. Clear description of the program

    a. Clear philosophy, values, and principles
      i. The philosophy, values, and principles that underlie the program provide guidance for all treatment decisions, program decisions, and evaluations, and are used to promote consistency, integrity, and sustainable effort across all provider organization units.
    b. Clear inclusion and exclusion criteria that define the population for which the program is intended
      i. The criteria define who is most likely to benefit when the program is used as intended.
  2. Clear description of the essential functions that define the program

    a. Clear description of the features that must be present to say that a program exists in a given location (essential functions sometimes are called core intervention components, active ingredients, or practice elements)
  3. Operational definitions of the essential functions

    a. Practice profiles describe the core activities that allow a program to be teachable, learnable, and doable in practice, and they promote consistency across practitioners at the level of actual service delivery.
  4. A practical assessment of the performance of practitioners who are using the program

    a. The performance assessment relates to the program philosophy, values, and principles; essential functions; and core activities specified in the practice profiles. It is practical and can be done repeatedly in the context of typical human service systems.
    b. Evidence that the program is effective when used as intended
      i. The performance assessment (often referred to as “fidelity”) is highly correlated (e.g., 0.50 or better) with intended outcomes for children, families, individuals, and society (see the illustrative sketch following this list).
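
Criterion 4.b ties program effectiveness to the correlation between fidelity scores and intended outcomes. As a purely illustrative aid, and not part of the cited sources, the sketch below shows how such a fidelity-outcome correlation might be computed for a set of cases; the scores, variable names, and the 0.50 benchmark check are assumptions made for the example.

```python
# Hypothetical sketch of the correlational check in criterion 4.b.
# The fidelity and outcome scores below are invented for illustration
# and do not come from any study cited in this section.
import numpy as np

# One fidelity (performance assessment) score and one outcome score per case.
fidelity_scores = np.array([62, 75, 81, 68, 90, 55, 77, 84, 71, 66], dtype=float)
outcome_scores = np.array([48, 60, 72, 52, 80, 40, 65, 70, 58, 50], dtype=float)

# Pearson correlation between fidelity and outcomes.
r = np.corrcoef(fidelity_scores, outcome_scores)[0, 1]

# Criterion 4.b treats a correlation of roughly 0.50 or better as evidence
# that the fidelity assessment is tied to intended outcomes.
print(f"fidelity-outcome correlation r = {r:.2f}")
print("meets the 0.50 benchmark" if r >= 0.50 else "below the 0.50 benchmark")
```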

Typical definitions of “evidence-based programs” focus on standards for scientific rigor and statistically significant outcomes (e.g., http://www.colorado.edu/cspv/blueprints). The effectiveness of programs is addressed in criterion 4.b, which ties effectiveness to a measure of the presence and strength of the program in practice.

Dane and Schneider (1998) and Durlak and DuPre (2008) summarized reviews of over 1,200 outcome studies and found that investigators assessed the presence or strength (fidelity) of the independent variable (the intervention) in about 20% of the studies, and that only about 5% of the studies used those assessments in analyses of the outcome data (that is, had data related to criterion 4.b). Without information on the presence and strength of the independent variable, it is difficult to know what the intervention was and what produced the outcomes in a study (Dobson & Cook, 1980). Based on these reviews, one might expect that only about 5% of currently named “evidence-based programs” meet the criteria for defining a program.

The current standards for “evidence” are useful for choosing innovations (better evidence for effectiveness is a good basis for choosing) but are not especially helpful for implementing an innovation in typical practice settings. Service delivery practitioners and managers use programs (as defined above) in practice; they do not use standards for scientific rigor in their interactions with children, families, individuals, and others.