Topic 2: Usability Testing


Usability testing is used to test the feasibility and impact of a new way of work prior to rolling it out more broadly.

Usability testing consists of a planned series of tests of an innovation, components of an innovation, or implementation processes.  Usability testing makes use of a series of PDSA cycles to refine and improve the innovation elements or the implementation processes.  It is used proactively to test the feasibility and impact of a new way of work prior to rolling out the innovation or implementation processes more broadly and/or prior to conducting an evaluation of the innovation.


Programs and practices are more likely to be adopted and sustained when they can be implemented as intended in real-world settings – our classrooms and schools. Educators deserve supports to implement programs and practices that are “classroom ready”.  But how do we know if programs, practices, and educators are “ready”?

Usability testing is a method Implementation Teams use to test an innovation or the implementation methods with a larger sample under more typical conditions, as opposed to the special resources and attention that characterize research or ‘pilot’ conditions.

Usability testing was originally developed by computer scientists as a very efficient way to develop, test, and refine new software programs or websites, both very complex endeavors. The idea is to use the PDSA process with small groups of 4 or 5 typical users. Computer scientists found that the first group would identify most of the errors in the first version of the program. Once the errors were corrected, the next group would find different and deeper errors. By repeating this process 4 or 5 times (involving about 20 typical users in total), the program would be nearly error free and ready for general use. Researchers have found that the end-user experience is improved much more by 4 tests with 5 users each than by a single test with 20 users.
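The arithmetic behind iterated small-group testing can be sketched in a short toy model. The model below is an illustrative assumption, not from the source: it supposes each tester independently encounters a given visible problem with some probability (0.31 is a commonly cited estimate from usability research), and that deeper errors stay masked until the layer of errors above them is fixed.

```python
# Toy model of why several small test rounds beat one large test.
# ASSUMPTIONS (not from the source text): each tester independently
# hits a given visible problem with probability lam, and deeper
# errors are masked until the layer above them is fixed. The layer
# counts below are hypothetical.

def share_found(n_users, lam=0.31):
    """Expected share of currently visible problems found by n_users."""
    return 1 - (1 - lam) ** n_users

# A single large test only reaches the first layer of errors, while
# each small round fixes one layer and exposes the next.
layers = [10, 8, 6, 4]  # hypothetical error counts per layer

one_big_test = share_found(20) * layers[0]
four_small_rounds = sum(share_found(5) * n for n in layers)

print(f"Single 20-user test: ~{one_big_test:.1f} errors found")
print(f"Four 5-user rounds:  ~{four_small_rounds:.1f} errors found")
```

Under these assumptions the four small rounds uncover far more errors in total, because each round of fixes reveals problems a single test could never see.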

Key Functions and Processes

“How do I fit this into the weekly schedule?”

“Where are the data reports for monitoring progress? I entered my data on time.”

“Just fake it for a while, this too shall pass.”

“This coaching process is a great idea, but is it practical?”

It takes time and expertise to conduct a series of valid usability tests related to either the core components of an innovation or key implementation processes.  Below are the steps to consider.  Many of these should be familiar to you because they are built on the PDSA process detailed previously in Topic 1: Rapid-Cycle Problem-Solving.  

Keep in mind that you are creating improved processes, not perfect processes. There will always be some variation around the ‘ideal’ – we can’t let the perfect be the enemy of the good.


1. Choose ‘worthy’ elements to test. 
Worthy elements are instructional practices, innovations, settings, or implementation processes that increase the likelihood of getting good outcomes. They might be any of the following that the Implementation Team thinks will be challenging to do well:

- Core components of instruction or the innovation that are deemed or demonstrated to be necessary for getting good outcomes (e.g., 90 minutes of instruction; use of evidence-based literacy practices)

- Core contextual components necessary to get good outcomes (e.g., amount of time spent in academic instruction)

- Implementation processes that are necessary for getting good fidelity, such as processes that help educators and other staff change their instructional practices to support the innovation (e.g., coaching with observation and feedback occurs at least twice a month; fidelity assessments are reported monthly)

2. Decide on the dimensions of the “test” by considering the following questions:

What criteria will be used to identify the first group of ‘testers’ and subsequent groups? “The four elementary schools that have at least three Grade 3 classrooms.  Four Grade 3 teachers, each with at least one year of teaching experience, in each of the four schools will participate.  This will give us enough teachers to run the test 3 times.”
What is the scope of the test? “The test of the feasibility of weekly progress monitoring will occur over the course of two weeks of instruction using the new math curriculum.”
What data will be reported to whom, on what schedule? “At the end of each week the teachers involved will complete the six questions related to progress monitoring and will email the form to the District Curriculum Specialist. The data summary across the two testing weeks will be produced by the District Curriculum Specialist and emailed to the Implementation Team usability subgroup.”

What are the criteria for a successful test?

“If, on average, 75% or more of the possible ‘yes’ items are marked and there are no significant negative comments, the process is ready to roll out to the next cohort.  If the second cohort has similar results, the process will be used in all the Grade 3 classrooms.  Scores below the criteria result in redeveloping and retesting the process.”


3. Engage in just enough preparation so that the participants can get started.
The goal is to build successful processes, not to develop an all-encompassing, perfect process. “Each participating teacher will receive a two-page handout and will have access to a designated coach who can answer questions or visit the classroom once.”
4. Keep testing improved processes until the data indicate that most of the “bugs” have been found and fixed and the success criteria have been met.
5. Install the improved processes by considering which of the Implementation Drivers will be used to successfully scale up the innovation, instructional practice, or implementation processes that were tested.
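The five steps above amount to a loop: run small cohorts through PDSA cycles, revise after each round, and stop when the success criteria hold. A minimal sketch, with hypothetical function and parameter names (none of these come from the source):

```python
# Minimal sketch of the usability-testing loop in steps 1-5.
# All names here are hypothetical; the two-consecutive-passes rule
# mirrors the "second cohort has similar results" criterion above.

def usability_test(process, cohorts, meets_criteria, revise):
    consecutive_passes = 0
    for cohort in cohorts:
        data = cohort.run(process)           # Do: cohort tries the process
        if meets_criteria(data):             # Study: compare data to criteria
            consecutive_passes += 1
            if consecutive_passes == 2:      # two cohorts in a row succeed
                return process               # ready for broader rollout
        else:
            consecutive_passes = 0
            process = revise(process, data)  # Act: fix the 'bugs' found
    return None  # criteria never met; keep redeveloping
```

The loop returns the improved process once two consecutive cohorts meet the criteria, matching the roll-out rule described in step 2.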


[Figure: Implementation Drivers Triangle. Left side: selection, training, coaching; right side: systems intervention, facilitative administration, data systems; base: leadership; top: fidelity leads to consistent use of innovations, which leads to reliable benefits.]


In summary, usability testing is a variation of the PDSA improvement cycle process.  It requires tests of ‘worthy’ processes using repeated PDSA cycles by small cohorts of participants. The goal is to work out the challenges and improve the processes before more widely implementing the innovation, instructional practice, or implementation process.