Brief 4: Context Matters - Recommendations for Funders & Program Developers Supporting Implementation in Rural Communities
This white paper explores how funders and program developers can partner with rural communities to achieve improved outcomes for individuals and families. Research on implementation of health and human services programming over the past decade demonstrates that achieving outcomes requires not just effective practices or strong communities, but three aligned and interdependent implementation components.
Dean Fixsen (NIRN Founder) gives a quick introduction to Implementation Drivers. Implementation Drivers are the key components of capacity that enable the success of innovations in practice. (1 min. 54 sec.)
Dean Fixsen (NIRN Founder) briefly discusses the convergence of implementation science, innovation science, improvement science, and complexity theory. Part of Association of Positive Behavior Supports (APBS) 2016 Keynote.
Dean Fixsen (NIRN Founder) discusses research to practice and the role of implementation science. Part of Association of Positive Behavior Supports (APBS) 2016 Keynote. (2 min. 29 sec.)
Dean Fixsen (NIRN Founder) discusses the notion of churning around the mean and Spencer Darling's notion that 'organizations are designed to achieve the results they get.' Part of Association of Positive Behavior Supports (APBS) 2016 Keynote. (7 min. 41 sec.)
In this video, Karen Blase opens with a metaphor about a country called "Status Quo" and the Olympics to frame a discussion of implementation and scaling up. The presentation focuses on the effectiveness and utility of an intervention, effective implementation methods, and enabling contexts as critical ingredients to producing outcomes. Part of the 2013 Investing in What Works Forum (IWW). (running time 16 min. 58 sec.)
How can we take these good ideas that work in some places, and get them to work in all places… so all children in all schools would have access? Listen to Dean Fixsen and Karen Blase discuss active implementation and scaling up. (4 min. 50 sec.)
The U.S. Department of Education Office of Special Education Programs (OSEP) is a model for other government agencies seeking to support the development of implementation capacity in human service systems. In 2006 OSEP was the first federal agency to recognize the potential benefits of implementation science for improving student outcomes. Since 2006, OSEP has included implementation science in various approaches intended to improve services to and outcomes for students with disabilities. Through a RFP process, OSEP invested in the State Implementation and Scaling up of Evidence-based Programs Center (SISEP) that began in October 2007.
Northern European states, regarded as world leaders in social welfare, have for a long time viewed implementation as enactment of legislation that is communicated top-down to the public and stakeholders. This study reports on interviews with 30 public sector executives in Northern Europe about how to achieve successful implementation. They confirm the necessity of the “Making it Happen” strategy that corresponds with implementation science.
The presentation highlights OSEP's shift from a system focused primarily on compliance to one that puts more emphasis on results.
As the evidence-based movement has advanced in public health, changes in public health practice have lagged far behind, creating a science-to-service gap. For example, science has produced effective falls prevention interventions for older adults. It is now clearer WHAT needs to be done to reduce injury and death related to falls. However, issues have arisen regarding HOW to assure the full and effective use of evidence-based programs in practice.

Summary: Lessons learned from the science and practice of implementation provide guidance for how to change practices by developing new competencies, how to change organizations to support evidence-based practices, and how to change public health systems to align system functions with desired practices. The combination of practice, organization, and system change likely will produce the public health benefits that are the promise of evidence-based falls prevention interventions.

Impact on Public Health: For the past several decades, the emphasis has been solely on evidence-based interventions. Public health will benefit from giving equal emphasis to evidence-based implementation.

Impact on Industry: We now have over two decades of research on the effectiveness of fall prevention interventions. The quality of this research has been judged by a number of credible international organizations, including the Cochrane Collaboration (http://www.cochrane.org/), the American and British Geriatrics Societies, and the Campbell Collaboration (http://www.campbellcollaboration.org/). These international bodies were formed to ponder and answer questions related to the quality and relevance of research. These developments are a good first step. However, while knowing WHAT to do (an evidence-based intervention) is critical, we also need to know HOW to effectively implement the evidence.
Implementation, organization change, and system change methods produce the conditions that allow and support the full and effective use of evidence-based interventions. It is time to focus on the utilization of implementation knowledge in public health. Without this focus, the vast amount of new evidence being generated on the prevention of falls and related injuries among older adults will have little impact on their health and safety.
What Does It Take? How Federal Initiatives Can Support the Implementation of Evidence-Based Programs to Improve Outcomes for Adolescents
Over the last 20 years, there has been a growing emphasis on developing and identifying evidence-based programs and practices for children and families, and within the last decade an increasing number of federally funded initiatives have been dedicated to replicating and scaling evidence-based programs with the hope of achieving socially meaningful impact. However, only recently have efforts to promote high-fidelity implementation been given the attention needed to ensure evidence-based practices are used as intended and generate the outcomes they were designed to produce. In this article, we propose that the wide-scale implementation of evidence-based practices requires: (1) careful assessment and selection of the “what”; (2) a stage-based approach that provides adequate time and resources for planning and installation activities; (3) the co-creation of a visible infrastructure by a triad of key stakeholders including funders and policymakers, program developers, and implementing sites; and (4) the use of data to guide decision-making and foster continuous improvement among grantees. Each of these strategies is explored in greater detail through the lens of the Teen Pregnancy Prevention (TPP) Program, a $100 million initiative overseen by the Office of Adolescent Health (OAH) in the U.S. Department of Health and Human Services.
Validation of the Score of the Instructional Pedagogical and Instructional Student Engagement Components of Fidelity of Implementation
Students cannot benefit from what they do not experience. There are multiple reasons why an intervention may not be delivered as it was designed. In this era of educational accountability and limited dollars, understanding how an intervention is delivered in the classroom is key to understanding program outcomes. To determine whether a program has been implemented as intended, an assessment of fidelity is needed. However, assessing fidelity is complex given varying conceptual interpretations, which fosters inconsistent application of methods to measure the construct. Additionally, the methods for validating fidelity measures remain unclear. The current study evaluated the reliability and validity of the student Instructional Pedagogical (10 items) and Instructional Student Engagement (15 items) scores for use in assessing teachers' fidelity of implementation on the participant responsiveness component of fidelity. The sample consisted of over 5,000 student responses to an online fidelity of implementation questionnaire, drawn from 242 Mathematics and Science teachers across three school districts and 41 schools. Because students were nested within teachers, the data structure was multilevel, which warranted that the psychometric analyses be conducted using a multilevel framework. Instructional Pedagogy is represented by 10 items measuring three factors; multilevel confirmatory factor analysis was used to test a two-level model with three factors at the student level and three factors at the teacher level. Instructional Student Engagement is represented by 15 items measuring four factors; the corresponding two-level model had four factors at the student level and four at the teacher level. The psychometric results of the student questionnaire assessing the student engagement components of fidelity were mixed.
Support for the factorial validity of the multilevel student models was mixed, with model fit indicating that some of the measured variables did not load strongly on their respective factors and some of the factors lacked discriminant validity. Lastly, the correlations between students' and teachers' scores for both the observed and latent variables (ranging from -.15 to .72 in math; -.07 to .41 in science) displayed limited convergent validity.
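The rationale for the multilevel framework above (students nested within teachers) can be illustrated with a minimal variance-decomposition sketch. This example uses entirely synthetic data, not the study's questionnaire: it estimates the intraclass correlation (ICC), the share of response variance attributable to teacher-level clustering, which is the quantity that signals whether a two-level model is warranted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (hypothetical numbers, not the study's data):
# 40 teachers, 25 student responses each, on a single fidelity item.
n_teachers, n_students = 40, 25
teacher_effect = rng.normal(0.0, 0.6, size=n_teachers)  # between-teacher variation
scores = teacher_effect[:, None] + rng.normal(0.0, 1.0, size=(n_teachers, n_students))

# Within-teacher (student-level) variance: average of per-teacher variances.
within_var = scores.var(axis=1, ddof=1).mean()

# Between-teacher variance: variance of teacher means, corrected for the
# sampling noise that student-level variance adds to each mean.
teacher_means = scores.mean(axis=1)
between_var = teacher_means.var(ddof=1) - within_var / n_students

# Intraclass correlation: proportion of total variance due to teachers.
icc = between_var / (between_var + within_var)
print(f"ICC = {icc:.2f}")
```

A nontrivial ICC means single-level analyses would understate standard errors and conflate teacher-level and student-level structure, which is why the study fits separate factor models at each level.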
Using Implementation Science to Support and Align Practice and System Change: Lessons Learned from the Catawba County Child Wellbeing Project
This is the second brief in a series, Building a Post-Care Service System in Child Welfare: Lessons Learned from the Frontlines of Implementation Science in Catawba County. This brief describes how implementation science principles informed technical assistance strategies used in Catawba County to support the full and effective use of evidence-based and evidence-informed practices (EBPs/EIPs). Topics include building the capacity of local implementation teams, conducting stage-appropriate activities, and creating an implementation infrastructure to sustain new interventions.
Using Implementation Science to Support and Align Service and System Change: A Study of the Catawba County Child Wellbeing Project
Usability Testing, Initial Implementation and Formative Evaluation of an Evidence-Based Intervention: Lessons from a Demonstration Project to Reduce Long-Term Foster Care
The field of child welfare faces an undersupply of evidence-based interventions to address long-term foster care. The Permanency Innovations Initiative is a five-year federal demonstration project intended to generate evidence to reduce long stays in foster care for those youth who encounter the most substantial barriers to permanency. This article describes a systematic and staged approach to implementation and evaluation of a PII project that included usability testing as one of its key activities. Usability testing is an industry-derived practice which analyzes early implementation processes and evaluation procedures before they are finalized. This article describes the iterative selection, testing, and analysis of nine usability metrics that were designed to assess three important constructs of the project's initial implementation and evaluation: intervening early, obtaining consent, and engaging parents. Results showed that seven of nine metrics met a predetermined target. This study demonstrates how findings from usability testing influenced the initial implementation and formative evaluation of an evidence-supported intervention. Implications are discussed for usability testing as a quality improvement cycle that may contribute to better operationalized interventions and more reliable, valid, and replicable evidence.
Technical Assistance to Promote Service and System Change. Roadmap to Effective Intervention Practices #4
This document is part of the Roadmap to Effective Intervention Practices series of syntheses, intended to provide summaries of existing evidence related to assessment and intervention for social-emotional challenges of young children. The purpose of the syntheses is to offer consumers (professionals, other practitioners, administrators, families, etc.) practical information in a useful, concise format and to provide references to more complete descriptions of validated assessment and intervention practices. The syntheses are produced and disseminated by the Office of Special Education Programs (OSEP) Technical Assistance Center on Social Emotional Intervention for Young Children (TACSEI).
This article describes the brief history of attention to children with special needs and provides a summary of the future of the field, based in large part on the articles published in the current special issue of this journal (Vol. 29, No. 1).
The Critical Role of State Agencies in the Age of Evidence-Based Approaches: The Challenge of New Expectations
Since evidence-based and evidence-informed programs and practices began to emerge in the early childhood field, the Down East Partnership for Children (DEPC) of Nash and Edgecombe Counties, a non-profit organization located in Rocky Mount, North Carolina, has maintained that such programs must be implemented as part of the continuum of services that makes up the early childhood system. System supports, such as program coordination, evaluation, community leadership development, and community outreach, form the underlying foundation of that system and enable a community to evaluate and implement effective, evidence-based strategies. As more and more funders move toward funding evidence-based programs, it is critical that the role these system supports play in implementing evidence-informed and evidence-based programs be integrated into funding priorities and decisions.
Selecting an EBP to Reduce Long Term Foster Care: Lessons from a University-Child Welfare Agency Partnership
A growing implementation literature outlines broad evidence-based practice implementation principles and pitfalls. Less robust is knowledge about the real-world process by which a state or agency chooses an evidence-based practice to implement and evaluate. Using a major U.S. initiative to reduce long-term foster care as the case, this article describes three major aspects of the evidence-based practice selection process: defining a target population, selecting an evidence-based practice model and purveyor, and tailoring the model to the practice context. Use of implementation science guidelines and lessons learned from a unique private-public-university partnership are discussed.