A Closer Look at Educational Data

Educational data has been a recurring theme on this blog. In this post, I continue to consider the nature of data in education and the nature of data in science, comparing and contrasting the two.

Constructs and Instruments

Scientists are always specific about what they are measuring, and there are accepted methods for measuring these quantities. As a botany student, I measured the diameter at breast height of trees—the standard measure for the size of a tree trunk. Some measurements, however, are difficult to define; botany students may be interested in the health of a plant, but that must be measured indirectly. Such indirect measurements are called constructs and must be defined with specific terms. During peer review, the definition of those constructs is evaluated by the reviewers as is the instrument used to measure the construct.

In education, data appears to be drawn from standardized test scores. A scientist reviewing one's use of the scores would ask, "What are you trying to measure?" The scientist would then ask how the scores measure whatever the response was.

Given the debate over standardized tests among educators (and others), it seems there is only a weak connection between the instrument (the standardized test) and the construct. Indeed, the construct and the instrument seem to have converged in education; "performance on the test" is the goal, but there is no agreement that the goal is worthy or that the test measures what it is designed to measure. This makes "data-driven decision-making" inherently unscientific.

Uncontrolled Variables

One of the common uses of educational data is to compare schools, and even teachers in some jurisdictions. These comparisons are made under the assumption that better teachers (the construct) can be measured with better performance (the instrument). If a scientist were to undertake such a comparison, he or she would begin by identifying all of the variables that account for student performance on the test. Certainly the "quality of the school" is a relevant variable, but so is "the socioeconomic status of the community," "the race of the student," and many other factors. Unless all of those other factors are accounted for, the quality of the data and the validity of the conclusions would be viewed as suspect by a reviewer; data gathered without accounting for them cannot be considered scientific.
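To see how a single uncontrolled variable can distort such a comparison, here is a minimal simulation sketch. The score model, its coefficients, and the two hypothetical schools are all assumptions chosen for illustration: both schools teach equally well, but their communities differ in socioeconomic status.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def score(teaching_quality, ses):
    """Hypothetical score model: the same formula applies to every student."""
    return 50 + 5 * teaching_quality + 10 * ses + random.gauss(0, 5)

# Identical teaching quality; only the assumed confounder (SES) differs.
school_a = [score(teaching_quality=1.0, ses=1.0) for _ in range(200)]
school_b = [score(teaching_quality=1.0, ses=0.0) for _ in range(200)]

gap = statistics.mean(school_a) - statistics.mean(school_b)
print(f"raw score gap: {gap:.1f} points")  # the gap reflects SES, not teaching
```

A reviewer comparing only the raw averages would credit school A's teachers with a gap that, in this toy model, is produced entirely by the community variable.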

The Nature of Measurement

Quantitative data in science is gathered to measure differences between control groups and treatment groups. Measurements, however, are subject to error. If a student answers 9 out of 10 questions on a quiz correctly, I may be measuring how much of the material she knows, but that number might also include the questions on which she guessed correctly. The nine may also include questions that she misunderstood and thought she was answering a different question, but she chose the incorrect answer that just happened to be the correct answer. I will not list all of the variations that can lead to correct (or incorrect) answers, but it should be clear with slight reflection that there are many circumstances that can lead to one having inaccurate counts of correct answers on the test (because of actions of the student and actions of the scorer).
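To make the guessing point concrete, here is a small sketch. The student, the quiz, and the guessing model are all hypothetical: she truly knows 8 of 10 four-choice questions and guesses blindly on the other two.

```python
import random

random.seed(42)  # reproducible illustration

def observed_score(known=8, guessed=2, choices=4):
    """Observed quiz score for a hypothetical student who truly knows
    `known` of 10 questions and guesses blindly on the rest."""
    correct = known
    for _ in range(guessed):
        if random.random() < 1 / choices:  # a lucky guess counts as correct
            correct += 1
    return correct

# Re-run the same quiz many times: the observed score varies, and its
# average sits above the 8 questions the student actually knows.
scores = [observed_score() for _ in range(10000)]
print(min(scores), max(scores))
print(sum(scores) / len(scores))
```

The single number recorded in the gradebook is one draw from this distribution, not the quantity we actually care about.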

To minimize the damage done to data by errors in measurement, scientists make measurements in large numbers. Taken together, the measurements will tend to group around the "real" value; this is the phenomenon we see in the bell curve. Using the mathematics associated with variation and the bell curve, scientists can support conclusions about cause and effect, but the measurements are recognized as approximations. If you ask a scientist "how big is it?" the response will be a dissatisfying "probably about 10 units, give or take 0.2 units."
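The "about 10 units, give or take" answer can be sketched the same way. The true value, the noise level, and the number of measurements below are assumptions chosen for illustration; the point is that many noisy measurements cluster around the real value and yield an estimate with an uncertainty attached.

```python
import random
import statistics

random.seed(0)  # reproducible illustration

TRUE_VALUE = 10.0  # the "real" quantity (assumed for this sketch)
NOISE_SD = 1.0     # assumed spread of measurement error

# 100 noisy measurements cluster around the true value (the bell curve).
measurements = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(100)]

mean = statistics.mean(measurements)
sem = statistics.stdev(measurements) / len(measurements) ** 0.5  # standard error

print(f"probably about {mean:.1f} units, give or take {2 * sem:.1f} units")
```

Note that the uncertainty shrinks as more measurements are taken, which is exactly why a single administration of a test is so weak as evidence.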

A single measurement alone is very unreliable because of the error that makes any measurement uncertain. When my children were in school, I would receive a report of their performance on the standardized tests that were administered. I regarded the scores themselves with only passing interest; I was most interested in the error bars that were included in the report. Unfortunately, the data and measurements that are reported in education rarely give sufficient attention to the true nature of measurements.

Uncertain Cause and Effect

As presented to educators, data-driven decision-making is summarized as a step-by-step process: a) administer a test, b) identify deficiencies as demonstrated in the data, c) take remediation actions to address those deficiencies, and d) find the deficiencies reduced when the test is administered again.

This model is built upon the assumption that specific and known instructional actions in the classroom are associated with higher test scores. The reasoning is that if I measure low math scores in a child, I can take specific and known actions that will cause those scores to be higher later. Instruction causes learning, according to this model. Because education is a wicked problem (see essay #x), cause-and-effect relationships are very difficult to ascertain. Further, all conclusions when dealing with wicked problems are tentative, so students who are judged to be performing well in mathematics may be judged otherwise when a different measure of mathematics skill or knowledge is applied.

Conclusion

When educators adopt data-driven decision making and planning, the assumption is that they are becoming more objective in their approach, and that they are being held accountable to produce results that are better than results obtained using other methods. My experiences, however, support two conclusions. First, education is a social invention that cannot be studied through science. Second, the manner in which data is gathered and analyzed in education is similar to science in only superficial ways.

I do believe that educators who gather and analyze data in scientific ways are better informed than their colleagues who do not. Educators who want to adopt scientific data analysis are advised to find a scientist to be a mentor; educators and educational leaders who claim to be data-driven may be looking at numbers, but their methods are not scientific.

Together, an educator and a scientist can understand the differences between science and what we do in the classroom. Enjoy the differences. Cherish the differences. Know the differences and defend the differences.