Research. Practice. And the Gap Between the Two.

In education (as in other fields) we hear leaders proclaim that they are “data-driven” and that they “use evidence.” Despite this, there tends to be agreement within both the education research community and the practitioner community that research does not affect decisions in the manner we would all hope. (In New England, the regional educational research organization—the one to which I belong—has convened discussion groups about closing this gap between research and practice.)

Connecting research to practice and practice to research is worthy, but we must recognize the factors preventing it. Consider the following:

  • Research depends on very clear and explicit definitions. In classrooms and schools, such definitions are not useful. Practitioners do not have the time to determine exactly which students, lessons, tests, or other elements fit the definition. Educators may seek to streamline the research; “just tell me what works” is their response. “Well,” the researcher must reply, “it depends…” and then goes into such detail that the educator stops paying attention. Those details matter in research, but they can be an obstacle to practice.
  • Researchers seek to isolate variables. While randomized trials are the “gold standard” of research, they are very difficult to conduct in many educational settings. Even when we merely control for variables (rather than isolating them), we impose a degree of control on the research setting that is not always possible in the real world of practice. Yet this isolation or control of variables is essential to the findings: if the variables are not controlled, the findings may not be reliable.
  • Educational practices are driven by many factors other than research findings. It is entirely possible that the “next big thing” in math education (for example) is not based on research—or, more likely, is supported by research under certain specific circumstances but is not generalizable to others—yet it is adopted by a curriculum coordinator or committee, and all are then expected to use it. Similarly, there may be ample evidence that state-mandated tests are neither valid nor reliable, yet public schools are compelled to administer them.

It may seem that I am suggesting that research cannot inform practice and practice cannot inform research. That is not my claim. There are real difficulties in bridging the gap between research and practice, but they are inherent in the different purposes of the two activities—not reasons to give up on the connection.

Our research gets better when we look to practitioners. We learn more about relevant questions, the changing nature of students and the social milieu in which they live, the curriculum challenges they face, and the changing nature of teachers. Practitioners can help us understand these aspects of teaching, while researchers contribute the methodological frameworks and strategies that can lead to meaningful findings.

Our practice gets better when we look to researchers. We learn more about just which factors matter in which settings, we learn how to frame the problems we encounter, and we learn how best to make sense of what we observe. We learn how to ask the question “how do you know?”