Usable Innovations and Performance (Fidelity) Assessment

Fidelity assessments are not yet a standard part of the education system. In addition, many programs developed by researchers and experts for use in classrooms do not include fidelity assessments that schools and districts can use.  From an implementation point of view, any innovation (evidence-based or otherwise) is incomplete without a good measure of fidelity to detect the presence and strength of the innovation in practice (see 4.b. in the Usable Innovation criteria). 

The Usable Innovation components are the basis for items included in a fidelity assessment.  In particular, the essential functions and the Practice Profiles that operationalize those functions provide information to guide the development of fidelity assessment items.  Usable innovations are doable and assessable in practice. 

To maximize benefits to students, fidelity data collection is:

  1. Frequent:  More frequent fidelity assessments mean more opportunities for improvement.  Instruction, innovation and implementation supports, and school, district, and state supports for the program or practice all benefit from frequent feedback.  The mantra for fidelity assessments in education is, “Every teacher, every month.”
  2. Relevant:  Fidelity data are most informative when each item on the assessment is relevant to important supports for student learning.  That is, the fidelity assessment items are tied directly to the Practice Profile.
  3. Actionable:  Fidelity data are most useful when each item on the assessment can be included in a coaching service delivery plan and can be improved in the education setting.  After each assessment, the teacher and coach develop goals for improving instruction.  In addition, Implementation Teams work with leadership to ensure that teachers have access to the intensity of coaching supports needed for educators to be successful.

An important lesson of attending to implementation is that accountability moves from the individual practitioner to the organization and leadership.  Accountability is predicated on fidelity assessment. The focus of fidelity assessment is on teacher instruction since that is “where education happens.”  However, the accountability for teacher instruction remains with the Implementation Team and district and school leadership.

  • If student outcomes are improving, and the teachers are using the programs with fidelity, the teachers should be congratulated for their impact on students.
  • If teacher instruction is improving rapidly, the Implementation Team should be congratulated for assuring effective supports for teachers.
  • If teacher instruction is poor, the Implementation Team is accountable for providing more effective supports for teachers.
  • If the Implementation Team is struggling, state and district leadership are accountable for improving the functions, supports, and effectiveness of the Team.

For leaders in education, fidelity is not just of academic importance.  The use of a fidelity measure helps leaders and others discriminate implementation problems from intervention problems and helps guide problem solving to improve outcomes.  As shown below, information about fidelity and outcomes can be linked to possible solutions to improve intended outcomes (Blase, Fixsen, and Phillips, 1984; Fixsen, Blase, Metz, & Naoom, 2014).



|               | High Fidelity            | Low Fidelity                                              |
|---------------|--------------------------|-----------------------------------------------------------|
| Good Outcomes | Celebrate and duplicate! | Re-examine the innovation; modify the fidelity assessment |
| Poor Outcomes | Modify the innovation    | Start over                                                |


As shown in the table, the desired combination is high fidelity use of an innovation that produces good outcomes.

  • When high fidelity is linked consistently with good outcomes it is time to celebrate and continue to use the innovation strategies and implementation support strategies with confidence. 
  • The second-best quadrant is where high fidelity is achieved but outcomes are poor.  This clearly points to an innovation that is being used as intended but is ineffective.  In this case, the innovation needs to be modified or discarded. 
  • The least desirable quadrants are those in the low-fidelity column, where corrective actions are less clear.  Low fidelity in combination with good outcomes points to a poorly described innovation, a poor measure of fidelity, or both.  In either case, it is not clear what is producing the good outcomes. 
  • Low fidelity associated with poor outcomes leaves users in a quandary.  It may be a good time to start again — to develop or find an effective innovation and develop effective implementation supports.
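The decision logic described above can be sketched as a simple lookup from a (fidelity, outcomes) combination to the suggested actions.  This is a hypothetical illustration for clarity; the function name and labels are not part of the framework itself.

```python
# Hypothetical sketch: map each fidelity/outcomes combination from the
# table to its suggested corrective actions.

def next_steps(fidelity: str, outcomes: str) -> list[str]:
    """Return suggested actions for a (fidelity, outcomes) combination.

    fidelity: "high" or "low"; outcomes: "good" or "poor".
    """
    actions = {
        ("high", "good"): ["Celebrate and duplicate!"],
        ("low", "good"): ["Re-examine the innovation",
                          "Modify the fidelity assessment"],
        ("high", "poor"): ["Modify the innovation"],
        ("low", "poor"): ["Start over"],
    }
    return actions[(fidelity, outcomes)]
```

For example, `next_steps("high", "poor")` returns `["Modify the innovation"]`, reflecting that an innovation done as intended but producing poor outcomes is itself the likely problem.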