[cross-posted at eduwonkette; see also her response]
When eduwonkette asked me to guest blog about data-driven decision-making in schools, I eagerly agreed. Why? Because in my work with numerous school organizations in multiple states, I have seen the power of data firsthand. When done right, data-driven education can have a powerful impact on students’ learning outcomes.
Unfortunately, most school districts are still struggling with their data-driven practice. Much of this is because they continue to approach data from a compliance mindset rather than as a tool for meaningful school improvement. An uninformed model of data-driven decision-making looks something like this:
This is the NCLB model. Schools are expected to collect data once a year, slice and dice them in various ways, set some goals based on the analyses, do some things differently, and then wait another whole year to see if their efforts were successful. Somehow, this model is supposed to get schools to 100% proficiency on key learning outcomes. This is dumb. It’s like trying to lose weight but only weighing yourself once a year to see if you’re making progress. Compounding the problem is the fact that student learning data often are collected near the end of the year and given back to educators months later, which of course is helpful to no one.
A better model looks something like this:
The key difference in this model is an emphasis on ongoing progress monitoring and continuous, useful data flow to teachers. Under this approach, schools have good baseline data available to them, which means that the data are useful for diagnostic purposes in the classroom and thus relevant to instruction. The data also are timely, meaning that teachers rarely have to wait more than a few days to get results. In an effective data-driven school, educators also are very clear about what essential instructional outcomes they are trying to achieve (this is actually much rarer than one would suppose) and set both short- and long-term measurable instructional goals from their data.
Armed with clarity of purpose and clarity of goals, effective data-driven educators then monitor student progress during the year on those essential outcomes by checking in periodically with short, strategic formative assessments. They get together with role-alike peers on a regular basis to go over the data from those formative assessments, and they work as a team, not as isolated individuals, to formulate instructional interventions for the students who are still struggling to achieve mastery on those essential outcomes. After a short period of time, typically three to six weeks, they check in again with new assessments to see if their interventions have worked and to see which students still need help. The more this part of the model occurs during the year, the more chances teachers have to make changes for the benefit of students.
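To make that cadence concrete, here is a minimal sketch of what such a check-in might look like as data. The mastery cutoff, student names, and scores are all hypothetical; nothing in the model prescribes these particulars:

```python
# A minimal, hypothetical sketch of the check-in described above:
# given scores on a short formative assessment of one essential
# outcome, flag students who have not yet reached mastery so the
# team can plan interventions and re-assess in a few weeks.

MASTERY_CUTOFF = 0.80  # assumed threshold; a real team would set its own

def needs_intervention(scores, cutoff=MASTERY_CUTOFF):
    """Return the students scoring below the mastery cutoff."""
    return {name: s for name, s in scores.items() if s < cutoff}

# Week 0: baseline formative assessment (all data hypothetical)
week0 = {"Ana": 0.55, "Ben": 0.90, "Cam": 0.70, "Dee": 0.85}
print("Intervene with:", needs_intervention(week0))    # Ana, Cam

# A few weeks later: re-assess to see whether interventions worked
week4 = {"Ana": 0.82, "Ben": 0.93, "Cam": 0.74, "Dee": 0.88}
print("Still struggling:", needs_intervention(week4))  # Cam
```

The point is not the code but the cadence: assess, flag, intervene, re-assess, repeat.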
It is this middle part of the model that often is missing in school organizations. When it is in place and functioning well, schools are much more likely to achieve their short- and long-term instructional goals and students are much more likely to achieve proficiency on accountability-oriented standardized tests. Teachers in schools that have this part of the model mastered rarely, if ever, complain about assessment because the data they are getting are helpful to their classroom practice.
NCLB did us no favors. It could’ve stressed powerful formative assessment, which is the driving engine for student learning and growth on whatever outcomes one chooses. Instead, it went another direction and we lost an opportunity to truly understand the power of data-driven practice. There are hundreds, and probably thousands, of schools across the country that have figured out the middle part of the model despite NCLB. It is these schools that are profiled in books such as Whatever It Takes and It’s Being Done (both recommended reads) and by organizations such as The Education Trust.
When done right, data-driven decision-making is about helping educators make informed decisions to benefit students. It is about helping schools know whether what they are doing is working. I have seen effective data-driven practice take root, and it is empowering for both teachers and students. We shouldn’t reject the idea of data-driven education out of hand just because we hate NCLB. If we do, we lose out on the potential of informed practice.
Thanks for the guest spot, eduwonkette!
Yes, yes, yes and more yes.
My district implemented “benchmarks” as supposed “formative” assessments to try to do the middle part.
The problem? They’re large multiple-choice tests given at the end of each quarter, which teachers then have to grade and pull the data from themselves. Supposedly we collaborate with our same-subject teachers in reviewing the data and coming up with “remediation” strategies for things we taught three months ago.
The other problem? They’ve started to tie lower-than-average scores (for the building or district) to things like being put on a plan of improvement. They believe that forced competition is going to encourage collaboration and change.
Words cannot encompass my frustration with this system, but you’ve gotten a good start on it.
There’s nothing in NCLB that prohibits the model you just outlined. You can have formative systems as well as accountability systems. A number of groups have described this model for years, all within the context of NCLB: NSBA http://www.nsba.org/site/docs/9200/9153.pdf, DQC http://www.dataqualitycampaign.org/, and COSN http://www.3d2know.org/. Even the Department of Education has talked about this: http://orange.cc.ny.us/its/pdf/ed_tech_plan_2004_pres.pdf. Reading First also has formative assessments built into the program.
What I see is not so much bad practice as a result of NCLB as bad data practice due to self-imposed limitations – a feeling that “I can’t do something because of a state/federal law” – which more often than not is not the case. It’s great to see you contributing to the data discussion that has been occurring over these past several years.
llary52, I couldn’t agree with you more. I say this all the time to the schools with which I work: ‘No one made you do what you’re doing. You have to own your response to this.’ That said, I can still wish that NCLB had placed more emphasis on formative assessment!
Thanks for the comment and best wishes to you in your own work with schools on this issue.
I am new to this and not quite sure this will even be read by anyone else, since I am coming to it about two months after it was originally posted. I retrieved my Des Moines Register from the front porch this morning only to be disgusted once again by an inappropriate use of data. The date is March 30, 2008, and the article is on the front page of the Sunday Register.
I consider myself a data-driven decision maker, but I am cautious about the types of data I use to inform my decisions. Our high school is a participant in a formative assessment grant study, and we have spent a good deal of our time this year discussing just how data should be used and for what purpose.
The article in the Des Moines Register is an example of how data can mislead. It rated schools based on the average G.P.A. of their college freshmen attending the three Iowa Regents universities, and it implies that this statistical manipulation of the data is sound. The authors fail to recognize the shortcomings of their study and thus perpetuate some very dangerous misconceptions in their readership. Many of the schools had very small numbers of students included, because their graduates attend universities not covered by the study, and the article fails to acknowledge how much just one student can influence the mean G.P.A. when the sample size is small. The data do not really support what the article claims. I agree these are good data to have, but they are being used inappropriately.
The part that upsets me the most is that the ten “worst” schools are probably engaged in better practices than several of the “best” schools; their top students are simply not included in the study because they went out of state or to a private university here in Iowa. It is a shame that the top-ranked schools will be praised by their public regardless of how well they are really doing, while the administrators at the lower-ranking schools will now spend a great deal of their already limited time explaining the flaws in the Des Moines Register’s study. They may even be pressured to change programs and practices that are quite effective.
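To see just how fragile a small-sample average is, here is a quick sketch; every G.P.A. figure is made up, not drawn from the Register’s study:

```python
# Illustrative only: how one student moves a small-sample mean G.P.A.
# All numbers are hypothetical, not from the Register's rankings.

def mean(gpas):
    return sum(gpas) / len(gpas)

# A small school: five graduates at the Regents universities,
# four solid students plus one who struggled in the transition.
small_school = [3.4, 3.2, 3.5, 3.3, 1.2]
small_without_outlier = small_school[:-1]  # same cohort minus the outlier

print(f"Small school, n=5:   mean = {mean(small_school):.2f}")           # 2.92
print(f"Same school, n=4:    mean = {mean(small_without_outlier):.2f}")  # 3.35

# A large school absorbs the identical outlier with barely a ripple.
large_school = [3.35] * 99 + [1.2]
print(f"Large school, n=100: mean = {mean(large_school):.2f}")           # 3.33
```

One student swings the small school’s “rating” by almost half a grade point, while the large school barely moves: exactly the problem with ranking schools on tiny samples.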
I wish that our educational leaders at the state level, and those at the university level who actually understand how flawed and thus misleading these types of studies are, would speak up.