Implementation Measures

Linked implementation teams define an infrastructure for assuring effective implementation supports for every practitioner (teacher) in every service-providing organization (school) in a human service system (education).  The linked implementation teams align, integrate, and leverage existing structures, roles, and functions in a system to focus purposefully and precisely on achieving intended outcomes at the practice level.  In this section, practical assessments of key factors at each level of the infrastructure are described and links are provided to technical descriptions and examples of data.

“Implementation is defined as a specified set of activities designed to put into practice an activity or program of known dimensions.  According to this definition, implementation processes are purposeful and are described in sufficient detail such that independent observers can detect the presence and strength of the ‘specific set of activities’ related to implementation” (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005, p. 5).  The evidence-based Active Implementation Frameworks (AIF) summarize and operationalize the “specific set of activities” that can be used on purpose to support the full and effective use of effective innovations in human service organizations and systems (http://nirn.fpg.unc.edu/resources/brief-1-active-implementation-practice-and-science).

“Educators typically use the term capacity in reference to the perceived abilities, skills, and expertise of school leaders, teachers, faculties, and staffs—most commonly when describing the ‘capacity’ of an individual or school to execute or accomplish something specific, such as leading a school-improvement effort or teaching more effectively. The term may also encompass the quality of adaptation—the ability of a school or educator to grow, progress, or improve” (http://edglossary.org/capacity/).  In education and other fields, expertise is embedded in the members of implementation teams, and implementation capacity for scaling in a system is developed in the form of linked implementation teams at the state, region, district, and local levels.

To be useful, the core components of the AIF must be teachable, learnable, doable, and assessable in practice.  The AI Hub, coupled with just-enough, just-in-time professional development on site, provides opportunities for teaching and learning at every level of a system.  Active modeling and coaching from AIF experts (I do, we do, you do) expand the learning as new implementation team members learn to do the work of implementation to improve delivery of effective innovations and improve human service outcomes.  Assessment in practice has been a challenge because of the complexities in human service environments, the novelties encountered in different domains (e.g., education, child welfare, global public health, pharmacy), and the ongoing development of the AIF as new research and examined experiences are incorporated into the frameworks.

The measures related to the AIF have been developed as the AIF themselves have been developed, used, revised, and reused in a variety of human service systems.  “Action evaluation” measures have evolved to meet the following criteria:

  1. They are relevant and include items that are indicators of key leverage points for improving practices, organizational routines, and system functioning.
  2. They are sensitive to changes in capacity to perform with scores that increase as capacity is developed and decrease when setbacks occur.
  3. They are consequential in that the items are important to the respondents and users, and scores inform prompt action planning that impacts implementation capacity development; repeated assessments each year monitor progress as capacity develops (see the scoring sketch after this list).
  4. They are practical, requiring modest time to learn how to administer the assessments with fidelity to the protocol and modest time for staff to rate the items or to prepare for a classroom observation visit.
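To make criteria 2 and 3 concrete, the sketch below shows one plausible way to summarize repeated administrations of a capacity assessment.  It assumes, hypothetically, that items are rated on a 0–2 rubric and that the summary score is the percent of possible points; the item names and ratings are illustrative only and are not drawn from any NIRN instrument.

```python
# Minimal sketch of "action evaluation" scoring, assuming (hypothetically)
# a 0-2 rubric per item (0 = not in place, 1 = partially in place,
# 2 = fully in place). Item names and ratings are illustrative only.
from typing import Dict, List

def percent_of_capacity(ratings: Dict[str, int], max_per_item: int = 2) -> float:
    """Summary score: points earned as a percent of points possible."""
    earned = sum(ratings.values())
    possible = max_per_item * len(ratings)
    return 100.0 * earned / possible

# Repeated administrations within a year monitor progress as capacity develops.
administrations: List[Dict[str, int]] = [
    {"selection": 1, "training": 0, "coaching": 0, "data_use": 1},  # baseline
    {"selection": 2, "training": 1, "coaching": 1, "data_use": 1},  # mid-year
    {"selection": 2, "training": 2, "coaching": 1, "data_use": 2},  # year-end
]

for i, ratings in enumerate(administrations, start=1):
    score = percent_of_capacity(ratings)
    # Low-rated items point to where prompt action planning is needed.
    gaps = [item for item, rating in ratings.items() if rating < 2]
    print(f"Administration {i}: {score:.0f}% of capacity; items needing action: {gaps}")
```

A rising score across administrations reflects capacity being developed (criterion 2), and the list of low-rated items feeds directly into action planning (criterion 3).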

The action evaluation measures listed here are in various stages of development, and all are in use by the National Implementation Research Network and by others in human services.  Links are provided to brief descriptions of the measures that include rationales for and descriptions of the measure, advice and cautions related to administration, examples of uses of the measure, and a summary of the psychometric and other testing done regarding the measure itself.

The links among the measures in an education context are shown in the table.  A vexing problem in implementation practice and science is the interaction among these factors.  If the implementation drivers are essential for consistent high-fidelity use of an intervention and reliable and sustained outcomes, then where do the implementation drivers come from?  This vexing problem is repeated at each level of the cascade shown in the table.  An input at one level (implementation drivers in a school) is an output of the next level up (district implementation capacity).  In research terms, each independent variable at one level is a dependent variable at the next level.  For example, research shows that coaching is a key factor related to high-fidelity use of an effective innovation; in these studies coaching is an input (independent variable) and fidelity is an output (dependent variable).  Where do competent coaches come from?  Coaches are selected, developed, and supported by District Implementation Teams.  Coaching is thus an output (dependent variable) of a District Implementation Team (independent variable).
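One way to picture this cascade is as a linked chain in which each level's team produces the supports consumed by the level below it.  The sketch below is a minimal illustration of that idea; the class, level, and support names are hypothetical and are not part of any NIRN specification.

```python
# Minimal sketch of the cascade: each team's output (dependent variable)
# is the input (independent variable) for the level below it.
# Names are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ImplementationTeam:
    level: str                                           # e.g., "State", "District"
    supports_produced: List[str]                         # outputs of this team
    supported_by: Optional["ImplementationTeam"] = None  # the team one level up

def trace_supports(team: ImplementationTeam) -> None:
    """Walk up the cascade: where do this level's supports come from?"""
    current: Optional[ImplementationTeam] = team
    while current is not None:
        print(f"{current.level} team produces: {'; '.join(current.supports_produced)}")
        current = current.supported_by

state = ImplementationTeam("State", ["selection, training, and coaching for regional teams"])
region = ImplementationTeam("Region", ["selection, training, and coaching for district teams"], state)
district = ImplementationTeam("District", ["selection, training, and support for coaches"], region)
building = ImplementationTeam("Building", ["coaching for teachers"], district)

trace_supports(building)  # coaching in a school traces back through the linked teams
```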

A summary of the NIRN assessments is outlined in the table below, with links to more detailed information and examples of data provided for quick reference.

Summary of NIRN Capacity Assessments

 

Classroom
Assessment: Observation Tool for Instructional Supports and Systems (OTISS)
Concepts: An assessment of high-impact instructional practices in classrooms.
Uses: OTISS is used as a fidelity assessment for general instruction to complement a fidelity assessment for any particular effective innovation(s) in use in a school. It is a direct observation of teacher instruction and an assessment of the supports provided to teachers.

Building
Assessment: Drivers Best Practices Assessment (DBPA)
Concepts: An assessment of the competency, organization, and leadership drivers operationalized in the Active Implementation Frameworks.
Uses: The DBPA is a facilitated assessment of implementation team supports for practitioners, managers, and leaders in a human service organization.

District
Assessment: District Capacity Assessment (DCA)
Concepts: An assessment of implementation capacity in a school district with responsibility for a number of schools.
Uses: The DCA is a facilitated assessment of implementation team supports for schools and teachers.

Region or Intermediate Agency
Assessment: Regional Capacity Assessment (RCA)
Concepts: An assessment of implementation capacity in a Regional Education Agency with responsibility for a number of districts.
Uses: The RCA is a facilitated assessment of implementation team supports for developing and sustaining District Implementation Teams.

State
Assessment: State Capacity Assessment (SCA)
Concepts: An assessment of state supports for systemic change.
Uses: The SCA is a facilitated assessment of system leadership for developing a statewide infrastructure for implementation in the form of linked implementation teams, and for using effective innovations to improve system functions and impact on society.

With regard to the Formula for Success, the OTISS and other fidelity measures relate to the use of Effective Innovations, the DBPA and DCA relate to Effective Implementation, and the RCA and SCA relate to the Enabling Context.  All of these at sufficient strength work together to produce Educationally Significant Outcomes.
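NIRN elsewhere writes the Formula for Success as a product (Effective Innovations × Effective Implementation × Enabling Context = Significant Outcomes), so a weak factor limits outcomes no matter how strong the others are.  The sketch below illustrates that multiplicative logic under the hypothetical assumption that each factor is expressed as a strength between 0.0 and 1.0; the numbers are illustrative only.

```python
# Minimal sketch of the multiplicative Formula for Success, assuming
# (hypothetically) each factor is rated as a strength from 0.0 to 1.0.
def formula_for_success(effective_innovations: float,
                        effective_implementation: float,
                        enabling_context: float) -> float:
    """Effective Innovations x Effective Implementation x Enabling Context."""
    return effective_innovations * effective_implementation * enabling_context

# A strong innovation with weak implementation still yields weak outcomes:
print(round(formula_for_success(0.9, 0.2, 0.8), 3))  # 0.144
# All three factors at sufficient strength:
print(round(formula_for_success(0.9, 0.9, 0.8), 3))  # 0.648
```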


References

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, National Implementation Research Network.

 


Related Resources

Brief 5: Developing State Capacity for Change