Archive | Assessment

Responsible educational journalism

Leslie and David Rutkowski say:

simply reporting results, in daring headline fashion, without caution, without caveat, is a dangerous practice. Although cautious reporting isn’t nearly as sensational as crying “Sputnik!” every time the next cycle of PISA results are reported, it is the responsible thing to do.

via http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/03/20/so-how-overblown-were-no-1-shanghais-pisa-results

This holds true, of course, for all other assessment results as well. I am continually amazed at how many press releases become ‘news stories,’ sometimes nearly verbatim. Too many educational journalists have abdicated their responsibility to ask questions, to investigate claims and evidence, to cast a skeptical eye on puffery, and to try to get to the truth…

Picking right answers from a set of prescribed alternatives that trivialize complexity and ambiguity

Leon Botstein says:

The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated twentieth century social scientific assumptions and strategies. As every adult recognizes, knowing something or how to do something in real life is never defined by being able to choose a “right” answer from a set of possible answers (some of them intentionally misleading) put forward by faceless test designers who are rarely eminent experts. No scientist, engineer, writer, psychologist, artist, or physician – and certainly no scholar, and therefore no serious university faculty member – pursues his or her vocation by getting right answers from a set of prescribed alternatives that trivialize complexity and ambiguity.

via http://time.com/15199/college-president-sat-is-part-hoax-and-part-fraud

Data resisters aren’t Chicken Littles


John Kuhn says:

The vocal opposition we see to data collection efforts like inBloom, to curriculum standards (which define the data to be collected) like the Common Core, and to tests (the data source) like the MAP can all be traced back, largely, to two things: (1) dismay over how much class time is sacrificed for the all-encompassing data hunt, and (2) a foundational mistrust regarding the aims of those who gather and control the data. If your dad brings home a new baseball bat, it’s a pretty happy time in the family – unless your dad has been in the habit of beating the family with blunt objects. Data is that baseball bat. A better analogy might be a doctor who causes his patients pain unnecessarily with his medical equipment. Patients are naturally going to resist going in for procedures that the doctor says are “good for them” if they know it will come with excessive pain. There is a vigorous campaign online and in the papers and political buildings to discredit opponents of school reform as just so many Chicken Littles “defending the status quo” and sticking their heads in the sand. A salient question, though, is this: has the sector-controlling school reform movement, going back to the dawn of No Child Left Behind, wielded data honestly, ethically, and constructively? If not, then yeah, there will be resistance. These people aren’t Chicken Littles. They’re Chickens Who Won’t Get in the Pot.

via http://atthechalkface.com/2014/01/03/johnkuhntx-the-tyranny-of-the-datum

Educators don’t trust the powers that be, and the powers that be don’t trust educators. And thus our dysfunctional systems and dialogues…

Image credit: 11.20.11 Every Sunday, Peas

Adaptive learning


Teacher 1:

In the past I have mapped out my school year ahead of time. I’ve planned how long each unit is going to take; identified the resources, activities, and assessments that I’ll use for each unit; and then marched students through the content. But this year, I’ve got an amazing idea! Before school starts I’m going to print off all of the worksheets, quizzes, and tests that the publisher sends with the textbook. I’ll also add in a few of my own supplemental activities, and put everything into numbered folders. Since kids like videos, for some units I’ve even got some VHS tapes on which I’ll place Post-It notes with time-marked segments for them to watch. Students will have access to a printed checklist for each unit that shows what they need to read, watch, and do, and they’ll also get an overview checklist of all of the units for the entire year. This way, instead of students marching to my pace, they can go as fast or as slow as they need to. They can even bounce around different units as desired, focusing on whatever they want to work on that day, and can skip stuff if they can prove mastery! I’ll also put some stickers into each folder. As students complete each reading, worksheet, quiz, test, activity, or video, they can put a sticker on their checklist showing that they’ve completed it. It will be just like getting points and leveling up in a video game! We’ll also have tracking posters stapled to the bulletin board so that I can monitor overall task and unit completion for each student, and intervene as necessary if students are moving too slow, need extra help, or are ready for enrichment activities. The system will be entirely student-driven, freeing me up to be a facilitator of learning instead of a ‘sage on the stage.’ I’m so excited to set up this system of personalized learning!

Teacher 2:

In the past I have mapped out my school year ahead of time. I’ve planned how long each unit is going to take; identified the resources, activities, and assessments that I’ll use for each unit; and then marched students through the content. But this year, my school has an amazing idea! Before school starts I’m going to have access to an online adaptive learning system that includes all of the worksheet, quiz, and test items that the publisher sends with the digital textbook. There also are some supplemental activities, and everything is organized into numbered units. Since kids like videos, for some units the system even has some digital tutorials for them to watch. Students will have access to an online checklist for each unit that shows what they need to read, watch, and do, and they’ll also get an overview checklist of all of the units for the entire year. This way, instead of students marching to my pace, they can go as fast or as slow as they need to. They can even bounce around different units as desired, focusing on whatever they want to work on that day, and can skip stuff if they can prove mastery! The system also has digital badges for each unit. As students complete each reading, worksheet, quiz, test, activity, or video item, they get a digital badge for their checklist showing that they’ve completed it. It will be just like getting points and leveling up in a video game! We’ll also have access to an online data analytics system so that I can monitor overall task and unit completion for each student, and intervene as necessary if students are moving too slow, need extra help, or are ready for enrichment activities. The system will be entirely student-driven, freeing me up to be a facilitator of learning instead of a ‘sage on the stage.’ I’m so excited we have this system of personalized learning!
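Stripped of the stickers-versus-badges difference, both teachers are describing the same underlying record keeping. Here is a minimal sketch in Python (my own illustration, with made-up names and data, not any particular vendor’s system) of what that completion tracking amounts to:

    # A minimal sketch (hypothetical names, not any particular vendor's system)
    # of the record keeping both teachers describe: units made of items,
    # per-student completion marks (stickers or badges), and a roll-up view.
    from collections import defaultdict

    class CompletionTracker:
        def __init__(self, units):
            # units: {"Unit 1": ["reading", "worksheet", "quiz"], ...}
            self.units = units
            self.done = defaultdict(set)  # (student, unit) -> completed items

        def mark_complete(self, student, unit, item):
            """Record a completed item -- the sticker or the digital badge."""
            if item in self.units[unit]:
                self.done[(student, unit)].add(item)

        def report(self, student):
            """The 'analytics' view: completion rate per unit for one student."""
            return {unit: len(self.done[(student, unit)]) / len(items)
                    for unit, items in self.units.items()}

    tracker = CompletionTracker({"Unit 1": ["reading", "worksheet", "quiz"],
                                 "Unit 2": ["video", "activity", "test"]})
    tracker.mark_complete("Jamie", "Unit 1", "reading")
    tracker.mark_complete("Jamie", "Unit 1", "quiz")
    print(tracker.report("Jamie"))  # Unit 1: 2 of 3 items done; Unit 2: 0 of 3

Whether the marks live on a bulletin-board poster or in a dashboard, the record being kept is the same.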

12 education guidelines from Alfie Kohn

Alfie Kohn says:

  1. Learning should be organized around problems, projects, and (students’) questions – not around lists of facts or skills, or separate disciplines.
  2. Thinking is messy; deep thinking is really messy. Therefore beware prescriptive standards and outcomes that are too specific and orderly.
  3. The primary criterion for what we do in schools: How will this affect kids’ interest in the topic (and their excitement about learning more generally)?
  4. If students are “off task,” the problem may be with the task, not with the kids.
  5. In outstanding classrooms, teachers do more listening than talking, and students do more talking than listening. Terrific teachers often have teeth marks on their tongues.
  6. Children learn how to make good decisions by making decisions, not by following directions.
  7. When we aren’t sure how to solve a problem relating to curriculum, pedagogy, or classroom conflict, the best response is often to ask the kids.
  8. The more focused we are on kids’ “behaviors,” the more we end up missing the kids themselves – along with the needs, motives, and reasons that underlie their actions.
  9. If students are rewarded or praised for doing something (e.g., reading, solving problems, being kind), they’ll likely lose interest in whatever they had to do to get the reward.
  10. The more that students are led to focus on how well they’re doing in school, the less engaged they’ll tend to be with what they’re doing in school.
  11. All learning can be assessed, but the most important kinds of learning are very difficult to measure – and the quality of that learning may diminish if we try to reduce it to numbers.
  12. Standardized tests assess the proficiencies that matter least. Such tests serve mostly to make unimpressive forms of instruction appear successful.

via http://www.washingtonpost.com/blogs/answer-sheet/wp/2013/10/30/a-dozen-basic-guidelines-for-educators

We’ve got extra funds! Let’s buy test-taking devices!

Amy Prime says:

I know of another administrator who sent out a happy email to her staff because the school had some extra funds and was able to purchase a classroom set of tablets for the teachers to use for classroom testing. How many of you parents out there, upon discovering your child’s school had extra funds available, would want that money used to buy test-taking devices?

via http://www.desmoinesregister.com/article/20131020/OPINION01/310200071/Iowa-View-Opt-Out-Movement-growing-schools-test-test-test

Want students to look worse? Change the cut scores.

In a must-read article explaining the politics behind cut scores on state-level tests, principal Carol Burris notes:

In 2011, the College Board created a College Readiness index. It was a combined index of 1550, which only 43 percent of all SAT test takers achieved. You can find it here. Now add up New York’s chosen index. It is 1630, significantly higher than the 2011 College Board’s index associated with a B- in college.

The above illustrates how one can manipulate the percentage of college readiness by … changing the definition of “college ready” to suit oneself. . . . In the end, [the New York State Education Department] chose values that are extraordinarily high, producing an index that exceeds the College Board’s index for achieving a B- average.
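The arithmetic in Burris’s comparison is easy to reproduce. A quick sketch in Python (the section-level cut scores below are placeholders I made up to sum to the published total; the excerpt only gives the combined indices):

    # Reproducing the comparison in the excerpt. The per-section values are
    # placeholders that sum to the published total; the excerpt only gives
    # the combined indices.
    college_board_index = 1550  # 2011 College Board combined college-readiness index

    ny_section_cut_scores = {   # hypothetical section-level cut scores
        "critical_reading": 540,
        "mathematics": 545,
        "writing": 545,
    }
    ny_index = sum(ny_section_cut_scores.values())  # 1630, per the excerpt

    print(ny_index - college_board_index)  # 80 points above the B- benchmark

Raise the cut scores and the share of students deemed ‘college ready’ falls; lower them and it rises – with no change at all in how students actually performed.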

Next time you read or hear about someone saying that more difficult tests and/or higher cut scores will ensure college readiness, remember this and ask some tough questions.

Performance assessments may not be ‘reliable’ or ‘valid.’ So what?


In a comment on Dan Willingham’s recent post, I said

we have plenty of alternatives that have been offered, over and over again, to counteract our current over-reliance on – and unfounded belief in – the ‘magic’ of bubble sheet test scores. Such alternatives include portfolios, embedded assessments, essays, performance assessments, public exhibitions, greater use of formative assessments (in the sense of Black & Wiliam, not benchmark testing) instead of summative assessments, and so on. . . . We know how to do assessment better than low-level, fixed-response items. We just don’t want to pay for it…

Dan replied

I don’t think money is the problem. These alternatives are not, to my knowledge, reliable or valid, with the exception of essays.

And therein lies the problem… (with this issue in general, not with Dan in particular)

Most of us recognize that more of our students need to be doing deeper, more complex thinking work more often. But if we want students to be critical thinkers and problem solvers and effective communicators and collaborators, that cognitively complex work is usually divergent rather than convergent. It is more amorphous and fuzzy and personal. It is often multi-stage and multimodal. It is not easily reduced to a number or rating or score. However, this does NOT mean that kind of work is incapable of being assessed. When a student creates something – digital or physical (or both) – we have ways of determining the quality and contribution of that product or project. When a student gives a presentation that compels others to laugh, cry, and/or take action, we have ways of identifying what made that an excellent talk. When a student makes and exhibits a work of art – or sings, plays, or composes a musical selection – or displays athletic skill – or writes a computer program – we have ways of telling whether it was done well. When a student engages in a service learning project that benefits the community, we have ways of knowing whether that work is meaningful and worthwhile. When a student presents a portfolio of work over time, we have ways of judging that. And so on…

If there is anything that we’ve learned (often to our great dismay) over the last decade, it’s that assessment is the tail that wags the instructional, curricular, and educational dogs. If we continue to insist on judging performance assessments with the ‘validity’ and ‘reliability’ criteria traditionally used by statisticians and psychometricians, we never – NEVER – will move much beyond factual recall and procedural regurgitation to achieve the kinds of higher-level student work that we need more of.

The upper ends of Bloom’s taxonomy and/or Webb’s Depth of Knowledge levels probably cannot – and likely SHOULD not – be reduced to a scaled score, effect size, or regression model without sucking the very soul out of that work. As I said in another comment on Dan’s post, “What score should we give the Mona Lisa? And what would the ‘objective’ rating criteria be?” I’m willing to confess that I am unconcerned about the lack of statistical ‘validity’ and ‘reliability’ of authentic performance assessments if we are thoughtful assessors of those activities.
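For readers wondering what statisticians actually mean by ‘reliability’ in this context, here is a small illustration (my own, not from Dan’s post): one common check is inter-rater agreement, such as Cohen’s kappa, computed on the rubric scores that two raters assign to the same student work.

    # An illustration of the statistical sense of 'reliability': inter-rater
    # agreement (Cohen's kappa) for two raters scoring the same ten
    # performance tasks on a 1-4 rubric. Scores are made up.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        # Agreement expected by chance, given each rater's score distribution.
        expected = sum(counts_a[c] * counts_b[c]
                       for c in set(rater_a) | set(rater_b)) / n ** 2
        return (observed - expected) / (1 - expected)

    rater_a = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
    rater_b = [3, 3, 2, 4, 1, 4, 3, 2, 3, 3]
    print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.57 -- 'moderate' agreement

A low kappa is exactly the kind of result that gets performance assessments labeled ‘unreliable’ – which is the criterion being questioned above.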

How about you? Dan (or others), what are your thoughts on this?

Image credit: Meh, Ken Murphy

We can no longer claim to believe in the individual student

Ryan Bretag says:

We can no longer claim to believe in the individual student if we continue supporting a system of assessment driven by seeing them as a collective unit meant to be mass produced in the same way at the same time instead of seeing them as a separate learner meant to be uniquely fostered.

via http://www.ryanbretag.com/blog/?p=4141

Teacher ‘accountability’ [VIDEO]

I don’t get to attend the meetings between educators and policymakers when they talk about teacher ‘accountability,’ but this is how I imagine the conversation often plays out…

Happy viewing! (with captions!)
