Tag Archives: assessment

The exam sham


Mike Crowley said:

Teachers are being judged and schools rated based on test and exam results. How many kids are getting into Yale and Oxford, Harvard and Cambridge, we are perpetually asked. I have yet to be asked, how many of your students go to the college that is right for them? … how many are pursuing their passions? … how many are leading happy, fulfilling lives and believe that the curriculum was relevant to their daily, real-world challenges? No, we rarely ask the right questions.

via http://crowleym.com/2015/06/21/the-exam-sham-onwards-we-blindly-go

Image credit: Harvard, Anne Helmond

Let’s be honest about annual testing


Let’s be honest: students and parents obtain no tangible benefit from large-scale annual testing. Kids and families give up numerous days of learning time – for the tests themselves, for the prep sessions devoted solely to getting ready for them, and maybe for the testing pep rally too – and for what? The data come back too late to be actionable. The questions are shrouded in secrecy, so no one has any idea what students actually missed. As Diane Ravitch has noted, given the immense amounts of time, energy, money, and personnel that we expend on our summative assessments, “there’s no instructional gain … [there’s] no diagnostic value.” The tests fail the fundamental rule of good assessment – to provide feedback that fuels future improvement – and come at a tremendous opportunity cost.

All of this might be fine – students and families might dutifully and kindly take a few hours or even days out of the school year to support their local school’s desire to get some institutional-level benchmarks (like when I was a kid) – if the stakes weren’t currently so high and the problems weren’t so prevalent (unlike when I was a kid). The use of extremely volatile, statistically unreliable data to punish teachers and schools… the misuse of assessment results to fuel anti-public-school political agendas… the billions of public dollars that go into the pockets of testing companies instead of under-resourced classrooms… the narrowing of curricula and the neglect of non-tested subjects… the appropriation of computers for weeks on end for testing instead of learning… the recharacterization of schools as test score factories, not life success enablers… no wonder parents are starting to scream. It’s a miracle that more families aren’t opting out of these tests, and it’s awfully hard to blame them if they do.

Our assessment systems are a complete mess right now. As parents experience empty-threat tantrums from policymakers, vindictive ‘sit and stare’ policies from school districts, and testing horror story after horror story, they are rightfully pushing back against testing schemes that offer no learning feedback or other concrete benefits to their children. There are looming battles with governors and the federal government around opt-out policies. Put your money on the parents.

Many educators are still running scared on this front. Most schools remain fearful and compliant, and our inactivity makes us complicit. When do we say ‘enough is enough’? How bad does it have to get before we stand with our parents and our communities? When do we fight for what’s educationally sound instead of caving in (yet again)?

Image credit: perfect, romana klee

Whipping people into line


Sir Ken Robinson said:

It’s not the need for standards. It’s the way they play out. . . . testing is not some benign educational process. It is a multibillion-dollar industry that is absorbing massive time, resources and cash that could be used for other things. It’s a massive profit-making machine. . . . You can look at the value of there being some sort of commonly-agreed standards and some core content that could be helpful to schools. That’s one conversation. You can look at some value of some form of diagnostic testing. But when you look at it cumulatively and lay the politics on top of it, it’s just a mess. . . . People are just exhausted by this whole enterprise. . . . If you don’t implement reforms, then you don’t get the cash. It’s just trying to whip people into line. And it doesn’t have to be that way, as other countries are showing, looking for more creative approaches to education. . . .

via http://www.washingtonpost.com/blogs/answer-sheet/wp/2015/04/21/sir-ken-robinson-has-a-lot-to-say-about-u-s-school-reform-it-isnt-good

Image credit: 10’Morgan Blacksnake, AldoZL


There’s no diagnostic value in locked-down summative assessments

Diane Ravitch said:

It’s totally inappropriate to compare opting out of testing to opting out of immunization. One has a scientific basis, the other has none. The tests that kids take today have nothing to do with the tests that we took when we were kids. When we were kids, we took an hour test to see how we did in reading, an hour test to see how we did in math. Children today in third grade are taking eight hours of testing. They’re spending more time taking tests than people taking the bar exam.

Now, when we talk about the results of the test, they come back four to six months later. The kids already have a different teacher. And all they get is a score and a ranking. The teachers can’t see the item analysis. They can’t see what the kids got wrong. They’re getting no instructional gain, no possibility of improvement for the kids, because there’s no value to the test. They have no diagnostic value.

[It’s as] if you go to a doctor and you say, ‘I have a pain,’ and the doctor says, ‘I’ll get back to you in six months,’ and he gets back to you and tells you how you compare to everyone else in the state, but he doesn’t have any medicine for you.

via http://www.washingtonpost.com/blogs/answer-sheet/wp/2015/04/16/why-the-debate-between-diane-ravitch-and-merryl-tisch-was-remarkable

The magical power of PARCC


Peter Greene said:

[When advocates] come to explain how crucial PARCC testing is for your child’s future, you might try asking some questions:

  • Exactly what is the correspondence between PARCC results and college readiness? Given the precise data, can you tell me what score my eight year old needs to get on the test to be guaranteed at least a 3.75 GPA at college?
  • Does it matter which college he attends, or will test results guarantee he is ready for all colleges?
  • Can you show me the research and data that led you to conclude that Test Result A = College Result X? How exactly do you know that meeting the state’s politically chosen cut score means that my child is prepared to be a college success?
  • Since the PARCC tests math and language, will it still tell me if my child is ready to be a history or music major? How about geology or women’s studies?
  • My daughter plans to be a stay-at-home mom. Can she skip the test? Since that’s her chosen career, is there a portion of the PARCC that tests her lady parts and their ability to make babies?
  • Which section of the PARCC tests a student’s readiness to start a career as a welder? Is it the same part that tests readiness to become a ski instructor, pro football player, or dental assistant?
  • I see that the PARCC will be used to “customize instruction.” Does that mean you’re giving the test tomorrow (because it’s almost November already)? How soon will the teacher get the detailed customizing information – one week? Ten days? How will the PARCC results help my child’s choir director and phys ed teacher customize instruction?

… The PARCC may look like just one more poorly-constructed standardized math and language test, but it is apparently super-duper magical, with the ability to measure every aspect of a child’s education and tell whether the child is ready for college and career, regardless of which college, which major, which career, and which child we are talking about. By looking at your eight year old’s standardized math and language test, we can tell whether she’s on track to be a philosophy major at Harvard or an airline pilot! It’s absolutely magical!

Never has a single standardized test claimed so much magical power with so little actual data to back up its assertions.

via http://curmudgucation.blogspot.com/2014/10/parcc-is-magical.html

Image credit: Caution: Magician Ahead!, Kevin Trotman

What’s good about standardized tests?

Susan Berfield said:

Most standardized tests aren’t objective, don’t measure a student’s ability to think, and don’t reliably predict how well a kid will do in the workplace. So what’s good about them? They’re relatively cheap to create, easy to administer, and they yield data.

via http://www.businessweek.com/articles/2014-12-11/book-review-parents-can-band-together-to-end-standardized-testing

From data to wisdom

Image credit: From data to wisdom, Nick Webb

Grading and assessment as an opportunity

Greg Jouriles said:

We have the grade problem at my high school. In the same course or department, a B in one classroom might be an A, or even a C, in another. It’s a problem for us, and, likely, a problem in most schools.

But it has also been an opportunity. First, recognizing our grading differences, we created a common conception of achievement (our graduate profile) and department learning outcomes with rubrics; our standards now align closely with the Common Core State Standards. Second, we created common performance tasks that measure these standards and formative assessments that scaffold to them. Third, we look together at student work. Fourth, we have begun to grade each other’s students on these common tasks.

We could publish the results of these performance tasks, and the public would have a good idea of what we’re good at and what we’re not. For example, our students effectively employ reading strategies to comprehend a text, but are often stymied by a lack of vocabulary or complex syntax. We’ve also learned most of our students can coherently develop a claim, citing the appropriate evidence to support it when choosing from a restricted universe of data. They aren’t as good when the universe of data is broadened. They are mediocre at analysis, counter-arguments, rebuttals, and evaluation of sources, though they have recently gotten better at evaluating sources as we have improved our instruction and formative assessments. A small percentage of our students do not show even basic competency in reading and writing.

That’s better information than we’ve ever received from standardized testing. What’s also started to happen is that teachers who use the same standards and rubrics, assign the same performance tasks, and grade each other’s work are finding their letter grades starting to align.

And this approach has led to a lot of frank discussions. For example, why are grades different? Where we have looked, differing conceptions of achievement and rigor seem to matter most. So we have to talk about it. The more we do, the more aligned we will become, and the more honest a picture of achievement we can create. It has been fantastic professional development – done without external mandates. We have a long way to go, but we can understand the value of our efforts and see improvement in student work.

via http://www.edweek.org/ew/articles/2014/07/09/36jouriles.h33.html

The declining economic value of routine cognitive work

Workforce data show that U.S. employees continue to do more non-routine cognitive and interpersonal work. [Note: these data tend to be fairly similar for most developed countries, not just the U.S.]

Fewer and fewer employment opportunities exist in America for both routine cognitive work and manual labor, and the gap is widening over the decades. Unless they’re location-dependent, manual labor jobs often are outsourced to cheaper locations overseas. Unless they’re location-dependent, routine cognitive jobs are increasingly being replaced both by cheaper workers overseas and by software algorithms.

What kind of schoolwork do most American students do most of the time? Routine cognitive work. What kind of work is emphasized in nearly all of our national and state assessment schemes? Routine cognitive work. For what kind of work do traditionalist parents and politicians continue to advocate? Routine cognitive work.

[Figure: Autor & Price (2013) – trends in routine vs. non-routine task input in the U.S. workforce]

Some information from Autor & Price (2013) that may be helpful…

  • Routine manual tasks – activities like production and monitoring jobs performed on an assembly line; easily automated and often replaced by machines; picking, sorting, repetitive assembly (p. 2)
  • Non-routine manual tasks – activities that demand situational adaptability, visual and language recognition, and perhaps in-person interaction; require modest amounts of training; activities like driving a truck, cleaning a hotel room, or preparing a meal (pp. 2-3)
  • Routine mental tasks – activities that are sufficiently well-defined that they can be carried out by a less-educated worker in a developing country with minimal discretion; also increasingly replaced by computer software algorithms; activities like bookkeeping, clerical work, information processing and record-keeping (e.g., data entry), and repetitive customer service (pp. 1-2)
  • Non-routine mental tasks – activities that require problem-solving, intuition, persuasion, and creativity; facilitated and complemented by computers, not replaced by them; hypothesis testing, diagnosing, analyzing, writing, persuading, managing people; typical of professional, managerial, technical, and creative professions such as science, engineering, law, medicine, design, and marketing (p. 2)

Ames High band: Modeling innovation, risk-taking, and feedback

I’m pretty impressed with the Ames High School band directors. Not only are Chris Ewan and Andrew Buttermore facilitating a great band program musically (250+ students who give amazing performances), they also are modeling instructional innovation and risk-taking with technology. When our district provided laptops for students, for example, they immediately jumped on the opportunity for band students to record themselves and then submit their digital files for review. Many students are using SmartMusic to help them practice and – even cooler – marching band participants now can see what they’re trying to accomplish on the field because they’ve been sent a Pyware video that shows them what it looks like from the perspective of those of us in the stands. [Next up, Ohio State!]

But I think the most enthralling thing they’ve done to date was a video that they showed us during Parent Night last week (feel free to pause at any time to get the full effect):

How do you help a group of incoming 9th graders realize what it looks like when they’re out of step? Put a video camera on the track at foot level, of course!

BRILLIANT.

Imagine you’re a brand new band student… You’ve only been marching for a few days. You’re juggling learning new music with learning how to step in time. It’s difficult to see what everyone else is doing. Your opportunities for feedback are relatively limited in the large group. And so on. It’s easy to feel like maybe you’re doing better than you really are. Heck, you didn’t hit the student next to you today with your tuba, right? But the video doesn’t lie… “Wait, those are MY feet! And I’m not there yet.” And that other video from up in the stands that shows that our lines need work too? Also useful for helping me see where I fit into the overall picture…

Why do I like this video so much? Because it models creative ways to give kids feedback and because it uses technology to help students learn how to get better. As Chris Anderson noted in his TED talk, video often allows us to innovate more rapidly. Want your 9th graders to ramp up their marching band footwork as fast as possible? Show them – don’t just tell them – what it looks like…

How is your school using technology to help kids SEE how they can get better? (and, no, I’m not talking about ‘adaptive’ multiple choice software)

Test makers should not be driving instruction

In a post about the difficulty of New York’s Common Core assessments, Robert Pondiscio said:

Test makers have an obligation to signal to the field the kind of instructional choices they want teachers to make

via http://edexcellence.net/articles/new-york%E2%80%99s-common-core-tests-tough-questions-curious-choices

I’m going to disagree with Robert on this one. I’m fairly certain that test makers should NOT be the ones driving instruction…
