The Hechinger Report just published an article on how having teachers study student data doesn’t actually result in better student learning outcomes.
Think about that for a minute. That finding is pretty counterintuitive, right? For at least two decades now, we have been asking teachers to take summative and formative assessment data and analyze the heck out of them. We create data teams and data walls. We implement benchmarking assessments and professional learning communities (PLCs). We make graphs and charts and tables. We sort and rank students, and we flag and color-code their data… And yet, research study after research study confirms that all of it has no positive impact on student learning:
[Heather Hill, professor at the Harvard Graduate School of Education] “reviewed 23 student outcomes from 10 different data programs used in schools and found that the majority showed no benefits for students” . . . . Similarly, “another pair of researchers also reviewed studies on the use of data analysis in schools, much of which is produced by assessments throughout the school year, and reached the same conclusion. ‘Research does not show that using interim assessments improves student learning,’ said Susan Brookhart, professor emerita at Duquesne University and associate editor of the journal Applied Measurement in Education.”
All of that time. All of that energy. All of that effort. Most of it for nothing. NOTHING.
No wonder long-term reviews of standards-, testing-, and data-oriented education policy and reform efforts have concluded that they are largely a waste. We’re not closing gaps with other countries on international assessments; instead, our own achievement gaps are widening. The same patterns show up on our national assessments here in the United States. Similarly, efforts to ‘toughen’ teacher evaluations show no positive impact on students. It’s all pointless. POINTLESS.
The past two decades have been incredibly maddening and demoralizing for millions of educators and students. And for what? NOTHING.
Are school administrators even paying attention? Or are they still leaning into outdated, unproductive paradigms of school reform?
This was the line in the article that really stood out for me:
Most commonly, teachers review or re-teach the topic the way they did the first time or they give a student a worksheet for more practice drills.
In other words, in school after school, across all of these different studies, our response to students who are struggling is to… do the same thing again. Good grief.
Make school different.
—
MARCH 8 ADDENDUM
Here are some additional paragraphs from the Hill article:
Goertz and colleagues also observed that rather than dig into student misunderstandings, teachers often proposed non-mathematical reasons for students’ failure, then moved on. In other words, the teachers mostly didn’t seem to use student test-score data to deepen their understanding of how students learn, to think about what drives student misconceptions, or to modify instructional techniques.
Field notes from teacher data-team meetings suggest a heavy focus on “watch list” students—those predicted to barely pass or to fail the annual state reading assessment. Teachers reported on each student, celebrating learning gains or giving reasons for poor performance—a bad week at home, students’ failure to study, or poor test-taking skills. Occasionally, other teachers chimed in with advice about how to help a student over a reading trouble spot—for instance, helping students develop reading fluency by breaking down words or sorting words by long or short vowel sounds. But this focus on instruction proved fleeting, more about suggesting short-term tasks or activities than improving instruction as a whole.
Common goals for improving reading instruction, such as how to ask more complex questions or encourage students to use more evidence in their explanations, did not surface in these meetings. Rather, teachers focused on students’ progress or lack of it. That could result in extra attention for a watch-list student, to the individual student’s benefit, but it was unlikely to improve instruction or boost learning for the class as a whole.
I think my takeaways from all of this are:
- As would be expected, analyzing student data alone doesn’t do much for us. We also need effective interventions.
- Despite our best intentions and rhetoric, the research indicates that most schools don’t actually engage in effective interventions.
So all of our data-driven, PLC, RTI (Response to Intervention), etc. work isn’t actually doing much for us, at least in terms of student learning outcomes. Learning gaps persist. Teacher instruction isn’t changing. And so on…
Image credit: Wincing, Frédéric Poirot
—
Pointless? That depends on what you think the objective is.
Fair enough! But usually these practices are done in the name of ‘improving student learning.’ So… pointless (according to the research).