There’s no diagnostic value in locked-down summative assessments

Diane Ravitch said:

It’s totally inappropriate to compare opting out of testing to opting out of immunization. One has a scientific basis, the other has none. The tests that kids take today have nothing to do with the tests that we took when we were kids. When we were kids, we took an hour test to see how we did in reading, an hour test to see how we did in math. Children today in third grade are taking eight hours of testing. They’re spending more time taking tests than people taking the bar exam.

Now, when we talk about the results of the test, they come back four to six months later. The kids already have a different teacher. And all they get is a score and a ranking. The teachers can’t see the item analysis. They can’t see what the kids got wrong. They’re getting no instructional gain, no possibility of improvement for the kids, because there’s no value to the test. They have no diagnostic value.

[It’s as] if you go to a doctor and you say, ‘I have a pain,’ and the doctor says, ‘I’ll get back to you in six months,’ and he gets back to you and tells you how you compare to everyone else in the state, but he doesn’t have any medicine for you.

via http://www.washingtonpost.com/blogs/answer-sheet/wp/2015/04/16/why-the-debate-between-diane-ravitch-and-merryl-tisch-was-remarkable

Day 3 and 4: The End.

As Worlds came to an end, I realized something: experience is everything. In your life you will feel an endless amount of emotion, and all of it will have been caused by experience. We ended up winning only two matches and losing the rest. The floor mats were squishy because they were new, so the wheels on our robot would sink into the ground. A team (The Pandas) and a sponsor who was there let us borrow their wheels so that we could drive a little bit better. The rest was just being paired against teams who were better than us, and that’s completely okay. Robotics and the FIRST program aren’t about winning. Yes, it is nice to get an award for being the best, but there is so much more to it.

Aside from the arena, there is also the pit area. Think of it like NASCAR for a minute and you will understand. In between matches, if something really bad happens to the robot (Linda), she will come to the pit to get fixed… and quickly. The pit area is also a place for judges to come talk to us and a place for us to present ourselves to the general public and other teams. We decorate our pit area pretty heavily, like many other teams there. It attracts many little kids and a lot of adults too… our theme is pretty much “any-age-friendly.” Gillian and I decided to mix things up this time, so we danced and sang for teams along with statue standing. We had stamps, buttons, key chains, stickers, and pamphlets to give out. The team was interviewed twice while down there: once by the people of FIRST and another time by Student News Net! The FTC played our interview on the live stream, and Student News Net will publish our story tomorrow (Monday!).

At closing ceremonies Dean Kamen, Woodie Flowers, and many others gave speeches, handed out awards, and introduced new technology to us. They gave a senior recognition and a small speech to all of us… we got to stand up. In a stadium of thousands it was intimidating. It was exciting and got me pumped up for the next couple of years. After that, we had the “after party.” We got to hear Christina Grimmie perform along with Boys Like Girls, a pop punk band. We didn’t end up getting back to the hotel until around 11 PM-ish, and I got home about 5 minutes ago (6:00 PM). It’s really nice to be back in Iowa around familiar things… like my bed. It has been a long but extremely successful week for the Sock Monkeys. We hope to do this all over again next year, even though Logan, Caleb, Giovanni, and I won’t be there.

A HUGE thank you to Scott McLeod for letting me share the experiences of FIRST again and a HUGE thank you to my community/school for helping us get to where we are now!

The magical power of PARCC

Magician ahead sign

Peter Greene said:

[When advocates] come to explain how crucial PARCC testing is for your child’s future, you might try asking some questions:

  • Exactly what is the correspondence between PARCC results and college readiness? Given the precise data, can you tell me what score my eight year old needs to get on the test to be guaranteed at least a 3.75 GPA at college?
  • Does it matter which college he attends, or will test results guarantee he is ready for all colleges?
  • Can you show me the research and data that led you to conclude that Test Result A = College Result X? How exactly do you know that meeting the state’s politically chosen cut score means that my child is prepared to be a college success?
  • Since the PARCC tests math and language, will it still tell me if my child is ready to be a history or music major? How about geology or women’s studies?
  • My daughter plans to be a stay-at-home mom. Can she skip the test? Since that’s her chosen career, is there a portion of the PARCC that tests her lady parts and their ability to make babies?
  • Which section of the PARCC tests a student’s readiness to start a career as a welder? Is it the same part that tests readiness to become a ski instructor, pro football player, or dental assistant?
  • I see that the PARCC will be used to “customize instruction.” Does that mean you’re giving the test tomorrow (because it’s almost November already)? How soon will the teacher get the detailed customizing information – one week? Ten days? How will the PARCC results help my child’s choir director and phys ed teacher customize instruction?

… The PARCC may look like just one more poorly-constructed standardized math and language test, but it is apparently super-duper magical, with the ability to measure every aspect of a child’s education and tell whether the child is ready for college and career, regardless of which college, which major, which career, and which child we are talking about. By looking at your eight year old’s standardized math and language test, we can tell whether she’s on track to be a philosophy major at Harvard or an airline pilot! It’s absolutely magical!

Never has a single standardized test claimed so much magical power with so little actual data to back up its assertions.

via http://curmudgucation.blogspot.com/2014/10/parcc-is-magical.html

Image credit: Caution: Magician Ahead!, Kevin Trotman

What’s good about standardized tests?

Susan Berfield said:

Most standardized tests aren’t objective, don’t measure a student’s ability to think, and don’t reliably predict how well a kid will do in the workplace. So what’s good about them? They’re relatively cheap to create, easy to administer, and they yield data.

via http://www.businessweek.com/articles/2014-12-11/book-review-parents-can-band-together-to-end-standardized-testing

From data to wisdom

Image credit: From data to wisdom, Nick Webb

Ames High band: Modeling innovation, risk-taking, and feedback

I’m pretty impressed with the Ames High School band directors. Not only are Chris Ewan and Andrew Buttermore facilitating a great band program musically (250+ students who give amazing performances), they also are modeling instructional innovation and risk-taking with technology. When our district provided laptops for students, for example, they immediately jumped on the opportunity for band students to record themselves and then submit their digital files for review. Many students are using SmartMusic to help them practice and – even cooler – marching band participants now can see what they’re trying to accomplish on the field because they’ve been sent a Pyware video that shows them what it looks like from the perspective of those of us in the stands. [Next up, Ohio State!]

But I think the most enthralling thing they’ve done to date was a video that they showed us during Parent Night last week (feel free to pause at any time to get the full effect):

How do you help a group of incoming 9th graders realize what it looks like when they’re out of step? Put a video camera on the track at foot level, of course!

BRILLIANT.

Imagine you’re a brand new band student… You’ve only been marching for a few days. You’re juggling learning new music with learning how to step in time. It’s difficult to see what everyone else is doing. Your opportunities for feedback are relatively limited in the large group. And so on. It’s easy to feel like maybe you’re doing better than you really are. Heck, you didn’t hit the student next to you today with your tuba, right? But the video doesn’t lie… “Wait, those are MY feet! And I’m not there yet.” And that other video from up in the stands that shows that our lines need work too? Also useful for helping me see where I fit into the overall picture…

Why do I like this video so much? Because it models creative ways to give kids feedback and because it uses technology to help students learn how to get better. As Chris Anderson noted in his TED talk, video often allows us to innovate more rapidly. Want your 9th graders to ramp up their marching band footwork as fast as possible? Show them – don’t just tell them – what it looks like…

How is your school using technology to help kids SEE how they can get better? (and, no, I’m not talking about ‘adaptive’ multiple choice software)

Test makers should not be driving instruction

In a post about the difficulty of New York’s Common Core assessments, Robert Pondiscio said:

Test makers have an obligation to signal to the field the kind of instructional choices they want teachers to make

via http://edexcellence.net/articles/new-york%E2%80%99s-common-core-tests-tough-questions-curious-choices

I’m going to disagree with Robert on this one. I’m fairly certain that test makers should NOT be the ones driving instruction…

What testing should do for us

Multiple choice test

John Robinson said:

‘We would like to dethrone measurement from its godly position, to reveal the false god it has been. We want instead to offer measurement a new job – that of helpful servant. We want to use measurement to give us the kind and quality of feedback that supports and welcomes people to step forward with their desire to contribute, to learn, and to achieve.’ – Margaret Wheatley, Finding Our Way: Leadership for an Uncertain Time

Want to know what’s wrong with testing and accountability today? It’s more about a ‘gotcha game’ than really trying to help teachers improve their craft. Over and over ad nauseam, those pushing these tests talk about using test data to improve teaching and thereby student learning, but that’s not what is happening at all.

via http://the21stcenturyprincipal.blogspot.com/2014/08/time-to-dethrone-testing-from-its-godly.html

Image credit: Exams Start… Now, Ryan M.

Why meaningful math problems are defined out of online assessments

Dan Meyer said:

at this moment in history, computers are not a natural working medium for mathematics.

For instance: think of a fraction in your head.

Say it out loud. That’s simple.

Write it on paper. Still simple.

Now communicate that fraction so a computer can understand and grade it. Click open the tools palette. Click the fraction button. Click in the numerator. Press the “4” key. Click in the denominator. Press the “9” key.

That’s bad, but if you aren’t convinced the difference is important, try to communicate the square root of that fraction. If it were this hard to post a tweet or update your status, Twitter and Facebook would be empty office space on Folsom Street and Page Mill Road.

It gets worse when you ask students to do anything meaningful with fractions. Like: “Explain whether 4/3 or 3/4 is closer to 1, and how you know.”

It’s simple enough to write down an explanation. It’s also simple to speak that explanation out loud so that somebody can assess its meaning. In 2012, it is impossible for a computer to assess that argument at anywhere near the same level of meaning. Those meaningful problems are then defined out of “mathematics.”

via http://blog.mrmeyer.com/2012/what-silicon-valley-gets-wrong-about-math-education-again-and-again
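Meyer’s point is about assessing the *explanation*, which computers can’t do; the arithmetic itself is trivial. As a minimal sketch of that arithmetic (using Python’s standard `fractions` module, not any assessment software):

```python
from fractions import Fraction

# The example problem from the quote: is 4/3 or 3/4 closer to 1?
a = Fraction(4, 3)
b = Fraction(3, 4)

dist_a = abs(a - 1)  # |4/3 - 1| = 1/3
dist_b = abs(b - 1)  # |3/4 - 1| = 1/4

closer = a if dist_a < dist_b else b
print(closer)  # 3/4, since 1/4 < 1/3
```

A computer can verify the answer in one line; grading a student’s written reasoning about *why* 3/4 is closer is the part that remains out of reach.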

Responsible educational journalism

Leslie and David Rutkowski say:

simply reporting results, in daring headline fashion, without caution, without caveat, is a dangerous practice. Although cautious reporting isn’t nearly as sensational as crying “Sputnik!” every time the next cycle of PISA results are reported, it is the responsible thing to do.

via http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/03/20/so-how-overblown-were-no-1-shanghais-pisa-results

This holds true, of course, for all other assessment results as well. I am continually amazed at how many press releases become ‘news stories,’ sometimes nearly verbatim. Too many educational journalists have abdicated their responsibility to ask questions, to investigate claims and evidence, to cast a skeptical eye on puffery, and to try and get to the truth…

Picking right answers from a set of prescribed alternatives that trivialize complexity and ambiguity

Leon Botstein says:

The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated twentieth century social scientific assumptions and strategies. As every adult recognizes, knowing something or how to do something in real life is never defined by being able to choose a “right” answer from a set of possible answers (some of them intentionally misleading) put forward by faceless test designers who are rarely eminent experts. No scientist, engineer, writer, psychologist, artist, or physician – and certainly no scholar, and therefore no serious university faculty member – pursues his or her vocation by getting right answers from a set of prescribed alternatives that trivialize complexity and ambiguity.

via http://time.com/15199/college-president-sat-is-part-hoax-and-part-fraud