Null, but Not Void

What can we learn from education initiatives that yield few results?
Imagine you're George Bailey in It’s a Wonderful Life. An angel descends to earth to reveal what the community would look like without you. Except in this version, Bedford Falls looks no different from Pottersville. It’s neither better nor worse. That’s not so wonderful, is it?

This is how the education community feels when an initiative has no impact on students.

Educators are always seeking to invest in interventions that will improve outcomes, but identifying a program that does so is rare. According to a recent review of randomized controlled trials sponsored by the Institute of Education Sciences (IES), only 1 out of 10 interventions produced a significant effect. Ten percent might even be a high estimate, considering that researchers often don’t publish studies with null results.

Does this mean that we are unable to determine whether our best efforts to improve student outcomes actually work? Don’t jump off that bridge yet, George.

Last month, researchers convened in Washington, D.C., to find out what we can learn from large-scale education studies that show no impacts. Harvard Graduate School of Education faculty members Heather Hill, James Kim, and Stephanie Jones, alongside the University of Michigan’s Robin Jacob, led the charge to discuss why “null” doesn’t necessarily mean “nothing.”

According to these scholars, there are several reasons why researchers may not find effects:

  • An intervention may not have a sound theory of action. For example, some early childhood interventions assumed that when kids participated in buddy-play, they would develop self-regulation skills. There was no research to back that up. The components of an effective intervention should be based on good bench science.
  • Interventions may provide resources or activities that schools don’t actually need. “Doctors don’t go around giving kids medication without a diagnosis, yet we spend little time diagnosing the problem before assigning the treatment,” Hill explains.
  • Context is king. Over the course of a multiyear study, a district's shifting priorities may reduce alignment with the intervention being tested. Or a control group may participate in a new district initiative midway through the study, making comparison between groups difficult. "Business as usual is never business as usual," Kim reminds us.
  • Study design plays a role as well. Many early IES studies were underpowered, meaning they didn't include enough teachers or students to detect a contrast between the treatment and control groups. Attrition compounds the problem: approximately half a million U.S. teachers leave their positions each year, yet many researchers don't account for teachers leaving the study, much less the profession (a rough illustration of the arithmetic follows this list).

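To make "underpowered" concrete, here is a minimal sketch of a standard two-group power calculation, written in Python with SciPy under a normal approximation. The effect size, sample sizes, and attrition rate are purely illustrative assumptions, not figures from the studies discussed above.

```python
from scipy.stats import norm

def two_group_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-group comparison of means.
    effect_size is a standardized mean difference (Cohen's d)."""
    z_crit = norm.ppf(1 - alpha / 2)                 # two-sided critical value
    ncp = effect_size * (n_per_group / 2) ** 0.5     # noncentrality under the alternative
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

# Illustrative numbers only: a modest 0.2 standard-deviation effect with
# 50 teachers per arm is detected less than a fifth of the time ...
print(round(two_group_power(0.2, 50), 2))        # ≈ 0.17

# ... and if 20 percent of those teachers leave the study before it ends,
# the effective sample shrinks and power drops further.
print(round(two_group_power(0.2, 50 * 0.8), 2))  # ≈ 0.15
```

Under assumptions like these, a real but modest benefit would show up as a null result most of the time, which is exactly the design problem the researchers describe.
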
“People put their lives into developing what could potentially be a very strong intervention, but the effects are ‘null’ at a particular point in time,” says Hill. “It is never fun to deliver that news.” She and the other participants intend to collect the papers presented at the convening and publish a special journal issue that highlights lessons learned from no-effects studies.

In the meantime, what can we do as researchers to enhance our ability to identify what works?

Design for null. “Expect that you will get null results and collect the data that allows you to explain it,” Hill urges. At the end of a recent study, her team interviewed teachers about what made implementation challenging. This helps educators understand why the program did not succeed.

Separate out the components. Complex interventions typically bundle many components into a single package. Researchers should design multiple study conditions to tease out which elements actually drive any effects.

Provide support. If you want to see a big change in classrooms and schools, be prepared to design much stronger supports for teachers and students. It is unlikely that a groundbreaking innovation will work without strong investment in quality implementation.

Exercise patience. You might not see results in the early years. Before making quick and dirty cuts to programs, “stretch the time horizons in which you view your results,” Kim advises.

Ultimately, the failure to find impacts should not promote cynicism about the role of research in education. It should improve the way we design, implement, and test our efforts. Each time we do so, an angel gets its wings.