Resource Search: Articles, Books and Reports
As the evidence-based movement has advanced in public health, changes in public health practices have lagged far behind, creating a science-to-service gap. For example, science has produced effective falls prevention interventions for older adults. It is now clearer WHAT needs to be done to reduce injury and death related to falls. However, issues have arisen regarding HOW to assure the full and effective use of evidence-based programs in practice. Summary: Lessons learned from the science and practice of implementation provide guidance for how to change practices by developing new competencies, how to change organizations to support evidence-based practices, and how to change public health systems to align system functions with desired practices. The combination of practice, organization, and system change likely will produce the public health benefits that are the promise of evidence-based falls prevention interventions. Impact on Public Health: For the past several decades, the emphasis has been solely on evidence-based interventions. Public health will benefit from giving equal emphasis to evidence-based implementation. Impact on Industry: We now have over two decades of research on the effectiveness of fall prevention interventions. The quality of this research has been judged by a number of credible international organizations, including the Cochrane Collaboration (http://www.cochrane.org/), the American and British Geriatrics Societies, and the Campbell Collaboration (http://www.campbellcollaboration.org/). These international bodies were formed to ponder and answer questions related to the quality and relevance of research. These developments are a good first step. However, while knowing WHAT to do (an evidence-based intervention) is critical, we also need to know HOW to effectively implement the evidence. Implementation, organization change, and system change methods produce the conditions that allow and support the full and effective use of evidence-based interventions. It is time to focus on the utilization of implementation knowledge in public health. Without this focus, the vast amount of new evidence being generated on the prevention of falls and related injuries among older adults will have little impact on their health and safety.
What Does It Take? How Federal Initiatives Can Support the Implementation of Evidence-Based Programs to Improve Outcomes for Adolescents
Over the last 20 years, there has been a growing emphasis on developing and identifying evidence-based programs and practices for children and families, and within the last decade an increasing number of federally funded initiatives have been dedicated to replicating and scaling evidence-based programs with the hope of achieving socially meaningful impact. However, only recently have efforts to promote high-fidelity implementation been given the attention needed to ensure evidence-based practices are used as intended and generate the outcomes they were designed to produce. In this article, we propose that the wide-scale implementation of evidence-based practices requires: (1) careful assessment and selection of the “what”; (2) a stage-based approach that provides adequate time and resources for planning and installation activities; (3) the co-creation of a visible infrastructure by a triad of key stakeholders including funders and policymakers, program developers, and implementing sites; and (4) the use of data to guide decision-making and foster continuous improvement among grantees. Each of these strategies is explored in greater detail through the lens of the Teen Pregnancy Prevention (TPP) Program, a $100 million initiative overseen by the Office of Adolescent Health (OAH) in the U.S. Department of Health and Human Services.
Validation of the Score of the Instructional Pedagogical and Instructional Student Engagement Components of Fidelity of Implementation
Students cannot benefit from what they do not experience. Multiple reasons exist for why an intervention may not be delivered as it was designed. In this era of educational accountability and limited dollars, understanding how an intervention is delivered in the classroom is key to understanding program outcomes. To assess whether a program has been implemented as intended, an assessment of fidelity is needed. However, assessing fidelity is complex given varying conceptual interpretations, which fosters inconsistent application of methods to measure the construct. Additionally, the methods for validating fidelity measures are still unclear. The current study evaluated the reliability and validity of the student Instructional Pedagogical (10 items) and Instructional Student Engagement (15 items) scores for use in assessing teachers' fidelity of implementation on the participant-responsiveness component of fidelity. The sample consisted of more than 5,000 student responses to an online fidelity-of-implementation questionnaire, linked to 242 Mathematics and Science teachers across three school districts and 41 schools. Because students were nested within teachers, the data structure was multilevel, which warranted that the psychometric analyses be conducted within a multilevel framework. Instructional Pedagogy is represented by 10 items that measure three factors; multilevel confirmatory factor analysis was used to test a two-level model with three factors at the student level and three factors at the teacher level. Instructional Student Engagement is represented by 15 items that measure four factors; multilevel confirmatory factor analysis was used to test a two-level model with four factors at the student level and four factors at the teacher level. The psychometric results of the student questionnaire were mixed: support for the factorial validity of the multilevel student models was limited, with model fit indicating that some of the measured variables did not load strongly on their respective factors and that some of the factors lacked discriminant validity. Lastly, the correlations between students' and teachers' scores for both the observed and latent variables (ranging from -.15 to .72 in math; -.07 to .41 in science) displayed limited convergent validity.
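For readers unfamiliar with the multilevel setup described in this abstract, the sketch below illustrates the within/between covariance decomposition that underlies multilevel CFA (following Muthén's 1994 approach): each student response is split into a deviation from its teacher's mean (student level) and the teacher mean's deviation from the grand mean (teacher level). This is a minimal sketch on simulated stand-in data; the item names (ip1-ip10), teacher counts, and cluster sizes are invented, not the study's, and it stops at the decomposition and item-level intraclass correlations rather than fitting the full two-level factor models.

```python
import numpy as np
import pandas as pd

# Simulated stand-in data: one row per student response, clustered by
# teacher. Item names (ip1-ip10) are illustrative, not the instrument's.
rng = np.random.default_rng(0)
items = [f"ip{i}" for i in range(1, 11)]
frames = []
for t in range(40):  # 40 hypothetical teachers
    n_students = int(rng.integers(15, 30))
    teacher_effect = rng.normal(0, 0.5, size=len(items))  # level-2 variation
    raw = 3 + teacher_effect + rng.normal(0, 1, size=(n_students, len(items)))
    df = pd.DataFrame(np.clip(np.round(raw), 1, 5), columns=items)
    df["teacher"] = t
    frames.append(df)
data = pd.concat(frames, ignore_index=True)

N, G = len(data), data["teacher"].nunique()

# Pooled within-teacher covariance S_PW: deviations of each response
# from its teacher's item means, divided by N - G.
within_dev = data[items] - data.groupby("teacher")[items].transform("mean")
S_pw = within_dev.T @ within_dev / (N - G)

# Scaled between-teacher covariance S_B: teacher means around the grand
# mean, weighted by cluster size, divided by G - 1.
means = data.groupby("teacher")[items].mean()
sizes = data.groupby("teacher").size()
centered = means - data[items].mean()
S_b = (centered.T * sizes.values) @ centered / (G - 1)

# Item-level intraclass correlations from the two matrices, using the
# usual average-cluster-size correction c.
c = (N - (sizes**2).sum() / N) / (G - 1)
sigma2_b = np.maximum(np.diag(S_b) - np.diag(S_pw), 0) / c
icc = sigma2_b / (sigma2_b + np.diag(S_pw))
print(pd.Series(icc, index=items).round(3))
```

In a study like the one described, the two-level factor models would then be fit to these level-specific matrices (or directly to the raw data) with multilevel-SEM software; the decomposition above is only the entry point.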
Using Implementation Science to Support and Align Practice and System Change: Lessons Learned from the Catawba County Child Wellbeing Project
This is the second brief in a series, Building a Post-Care Service System in Child Welfare: Lessons Learned from the Frontlines of Implementation Science in Catawba County. This brief describes how implementation science principles informed technical assistance strategies used in Catawba County to support the full and effective use of evidence-based and evidence-informed practices (EBPs/EIPs). Topics include building the capacity of local implementation teams, conducting stage-appropriate activities, and creating an implementation infrastructure to sustain new interventions.
Usability Testing, Initial Implementation and Formative Evaluation of an Evidence-Based Intervention: Lessons from a Demonstration Project to Reduce Long-Term Foster Care
The field of child welfare faces an undersupply of evidence-based interventions to address long-term foster care. The Permanency Innovations Initiative (PII) is a five-year federal demonstration project intended to generate evidence to reduce long stays in foster care for those youth who encounter the most substantial barriers to permanency. This article describes a systematic and staged approach to the implementation and evaluation of a PII project that included usability testing as one of its key activities. Usability testing is an industry-derived practice that analyzes early implementation processes and evaluation procedures before they are finalized. This article describes the iterative selection, testing, and analysis of nine usability metrics that were designed to assess three important constructs of the project's initial implementation and evaluation: intervening early, obtaining consent, and engaging parents. Results showed that seven of the nine metrics met a predetermined target. This study demonstrates how findings from usability testing influenced the initial implementation and formative evaluation of an evidence-supported intervention. Implications are discussed for usability testing as a quality-improvement cycle that may contribute to better operationalized interventions and more reliable, valid, and replicable evidence.
Technical Assistance to Promote Service and System Change. Roadmap to Effective Intervention Practices #4
This document is part of the Roadmap to Effective Intervention Practices series of syntheses, intended to provide summaries of existing evidence related to assessment and intervention for the social-emotional challenges of young children. The purpose of the syntheses is to offer consumers (professionals, other practitioners, administrators, families, etc.) practical information in a useful, concise format and to provide references to more complete descriptions of validated assessment and intervention practices. The syntheses are produced and disseminated by the Office of Special Education Programs (OSEP) Technical Assistance Center on Social Emotional Intervention for Young Children (TACSEI).
This article provides a brief history of attention to children with special needs and a summary of likely future directions for the field, based in large part on the articles published in the current special issue of this journal (Vol. 29, No. 1).
The Critical Role of State Agencies in the Age of Evidence-Based Approaches: The Challenge of New Expectations
Since evidence-based and evidence-informed programs and practices began to emerge in the early childhood field, the Down East Partnership for Children (DEPC) of Nash and Edgecombe Counties, a non-profit organization located in Rocky Mount, North Carolina, has maintained that they must be implemented as part of the continuum of services that makes up the early childhood system. System supports, such as program coordination, evaluation, community leadership development, and community outreach, are part of the underlying foundation that enables a community to effectively evaluate and implement evidence-based strategies. As more and more funders move toward funding evidence-based programs, it is critical that the role these system supports play in implementing evidence-informed and evidence-based programs be integrated into funding priorities and decisions.
Selecting an EBP to Reduce Long Term Foster Care: Lessons from a University-Child Welfare Agency Partnership
A growing implementation literature outlines broad evidence-based practice implementation principles and pitfalls. Less robust is knowledge about the real-world process by which a state or agency chooses an evidence-based practice to implement and evaluate. Using a major U.S. initiative to reduce long-term foster care as the case, this article describes three major aspects of the evidence-based practice selection process: defining a target population, selecting an evidence-based practice model and purveyor, and tailoring the model to the practice context. Use of implementation science guidelines and lessons learned from a unique private-public-university partnership are discussed.
Evidence-based programs will be useful to the extent they produce benefits to individuals on a socially significant scale. It appears that the combination of effective programs and effective implementation methods is required to assure consistent use of programs and reliable benefits to children and families. To date, focus has been placed primarily on generating evidence and determining the degrees of rigor required to qualify practices and programs as “evidence-based.” To be useful to society, the focus needs to shift to defining “programs” and to developing state-level infrastructures for statewide implementation of evidence-based programs and other innovations in human services. In this article, the authors explicate a framework for accomplishing these goals and discuss examples of the framework in use.
Complex behaviour change interventions are not well described; when they are described, the terminology used is inconsistent. This constrains scientific replication and limits the subsequent introduction of successful interventions. Implementation Science is introducing a policy of initially encouraging, and subsequently requiring, the scientific reporting of complex behaviour change interventions.
Students cannot benefit from education practices they do not experience. While this seems obvious (and it is), education systems have yet to develop the capacity to help all teachers learn to make good use of evidence-based practices that enhance the quality of education for all students. The purpose of this brief is to provide a framework that state leadership teams and others can use to develop the capacity to make effective, statewide, and sustained use of evidence-based practices and other innovations.
Children with emotional and behavioural disorders should be able to count on receiving care that meets their needs and is based on the best scientific evidence available; however, many do not receive these services. Implementation of evidence-based practice (EBP) relies, in part, on the research utilization practices of mental health care providers. This study reports on a survey of research utilization practices among 80 children's mental health (CMH) service provider organizations in Ontario, Canada. Methods: A web-based survey was distributed to 80 CMH service provider organizations, to which 51 executive directors and 483 children's mental health practitioners responded. Research utilization was assessed using questions with Likert-type responses based on the Canadian Health Services Research Foundation's Four-A's approach: access, assess, adapt, apply. Results: There was general agreement among executive directors and practitioners regarding the capacity of their organizations to use – access, assess, adapt, and apply – research evidence. Overall, both groups rated their organizations as using research information 'somewhat well.' The low response rate to the practitioner survey should be noted. Conclusion: These findings provide a useful benchmark from which changes in reported research utilization in the Ontario CMH sector can be tracked over time, for instance as a function of EBP training and implementation initiatives. The need to improve access to research evidence should be addressed because it relates to the eventual implementation and uptake of evidence-based practices. Communities of practice are recommended as a strategy to enable practitioners to build capacity in their adaptation and application of research evidence.
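To make the Four-A's scoring concrete, the sketch below shows one common way such Likert-type survey data are summarized into subscale scores. It is a hypothetical illustration only: the item names, subscale assignments, and response data are invented, since the abstract does not reproduce the actual instrument.

```python
import pandas as pd

# Invented item-to-subscale mapping for the Four-A's (access, assess,
# adapt, apply); the real instrument's items are not shown in the abstract.
subscales = {
    "access": ["q1", "q2"],
    "assess": ["q3", "q4"],
    "adapt":  ["q5", "q6"],
    "apply":  ["q7", "q8"],
}

# One row per respondent; 1-5 Likert codes (5 = most favourable).
responses = pd.DataFrame(
    [[4, 3, 4, 4, 2, 3, 3, 4],
     [3, 3, 2, 3, 3, 2, 4, 3],
     [5, 4, 4, 3, 3, 3, 4, 4]],
    columns=[f"q{i}" for i in range(1, 9)],
)

# Average the items within each subscale per respondent, then summarize
# across respondents; means near the scale midpoint would correspond to
# a "somewhat well" rating.
scores = pd.DataFrame({name: responses[cols].mean(axis=1)
                       for name, cols in subscales.items()})
print(scores.mean().round(2))
```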
The purpose of this piece is to provide the research and rationales behind Practice Profiles. To achieve outcomes and develop effective implementation supports, innovations need to be “teachable, learnable, doable, and assessable.” Practice Profile methodology facilitates the development of innovations and their necessary infrastructure. Specific training on NIRN’s Practice Profile methodology can be found on the Active Implementation Hub.
Despite a robust body of evidence of the effectiveness of social programs, few evidence-based programs have been scaled for population-level improvement in social problems. Since 2010 the federal government has invested in evidence-based social policy by supporting a number of new evidence-based programs and grant initiatives. These initiatives prioritize federal funding for intervention or prevention programs that have evidence of effectiveness in impact research. The increased attention to evidence in funding decision making is promising; however, to maximize the potential for positive outcomes for children and families, communities need to select programs that fit their needs and resources, the programs need to be implemented with quality, and communities need ongoing support. Drawing on experiences scaling evidence-based programs nationally, the authors raise a number of challenges faced by the field in ensuring high-quality implementation and discuss specific proposals, particularly for the research and university communities, for moving the field forward. Recommendations include designing and testing intervention and prevention programs with an eye toward scaling from the start, increasing documentation related to implementation of the programs, and working toward an infrastructure to support high-quality, effective dissemination of evidence-based prevention and intervention programs.
Mobilizing Communities for Implementing Evidence-Based Youth Violence Prevention Programming: A Commentary
Evidence-based programs have struggled for acceptance in human service settings. Information gleaned from these experiences indicates that implementation is the missing link in the science to service chain. The science and practice of implementation is progressing and can inform approaches to full and effective uses of youth violence prevention programs nationally. Implementation Teams that know (a) innovations, (b) implementation best practices, and (c) improvement cycles are essential to mobilizing support for successful uses of evidence-based programs on a socially significant scale. The next wave of development in implementation science and practice is underway: establishing infrastructures for implementation to make implementation expertise available to communities nationally. Good science, implemented well in practice, can benefit all human services, including youth violence prevention.
The uniqueness of education units, from states to classrooms, presents a challenge for implementation-informed approaches to using effective innovations to produce marked improvements in student outcomes. Education is an interaction-based profession. Education systems produce important outcomes that are the product of teachers interacting with students in education settings. If the adults don't teach, the children don't learn at an acceptable rate.