Tag Archives: assessment

Grading and assessment as an opportunity

Greg Jouriles said:

We have the grade problem at my high school. In the same course or department, a B in one classroom might be an A, or even a C, in another. It’s a problem for us, and, likely, a problem in most schools.

But it has also been an opportunity. First, recognizing our grading differences, we created a common conception of achievement (our graduate profile) and department learning outcomes with rubrics. Our standards now align closely with the Common Core State Standards. Second, we created common performance tasks that measure these standards and formative assessments that scaffold to them. Third, we look together at student work. Fourth, we have begun to grade each other’s students on these common tasks.

We could publish the results of these performance tasks, and the public would have a good idea of what we’re good at and what we’re not. For example, our students effectively employ reading strategies to comprehend a text, but are often stymied by a lack of vocabulary or complex syntax. We’ve also learned most of our students can coherently develop a claim, citing the appropriate evidence to support it when choosing from a restricted universe of data. They aren’t as good when the universe of data is broadened. They are mediocre at analysis, counter-arguments, rebuttals, and evaluation of sources, though they have recently gotten better at evaluating sources as we have improved our instruction and formative assessments. A small percentage of our students do not show even basic competency in reading and writing.

That’s better information than we’ve ever received from standardized testing. What’s also started to happen is that teachers who use the same standards and rubrics, assign the same performance tasks, and grade each other’s work are finding that their letter grades are beginning to align.

And this approach has led to a lot of frank discussions. For example, why are grades different? Where we have looked, different conceptions of achievement and rigor seem most important. So we have to talk about it. The more we do, the more aligned we will become, and the more honest a picture of achievement we can create. It has been fantastic professional development – done without external mandates. We have a long way to go, but we can understand the value of our efforts and see improvement in student work.

via http://www.edweek.org/ew/articles/2014/07/09/36jouriles.h33.html

The declining economic value of routine cognitive work

Workforce data show that U.S. employees continue to do more non-routine cognitive and interpersonal work. [Note: these data tend to be fairly similar for most developed countries, not just the U.S.]

Fewer and fewer employment opportunities exist in America for both routine cognitive work and manual labor, and the gap has widened over the decades. Unless they’re location-dependent, manual labor jobs often are outsourced to cheaper locations overseas. Unless they’re location-dependent, routine cognitive jobs are increasingly being replaced both by cheaper workers overseas and by software algorithms.

What kind of schoolwork do most American students do most of the time? Routine cognitive work. What kind of work is emphasized in nearly all of our national and state assessment schemes? Routine cognitive work. For what kind of work do traditionalist parents and politicians continue to advocate? Routine cognitive work.

[Figure: Autor & Price (2013), trends in routine vs. non-routine task input in the U.S. workforce; open in new tab to view larger image]

Some information from Autor & Price (2013) that may be helpful…

  • Routine manual tasks – activities like production and monitoring jobs performed on an assembly line; easily automated and often replaced by machines; picking, sorting, repetitive assembly (p. 2)
  • Non-routine manual tasks – activities that demand situational adaptability, visual and language recognition, and perhaps in-person interaction; require modest amounts of training; activities like driving a truck, cleaning a hotel room, or preparing a meal (pp. 2-3)
  • Routine mental tasks – activities that are sufficiently well-defined that they can be carried out by a less-educated worker in a developing country with minimal discretion; also increasingly replaced by computer software algorithms; activities like bookkeeping, clerical work, information processing and record-keeping (e.g., data entry), and repetitive customer service (pp. 1-2)
  • Non-routine mental tasks – activities that require problem-solving, intuition, persuasion, and creativity; facilitated and complemented by computers, not replaced by them; hypothesis testing, diagnosing, analyzing, writing, persuading, managing people; typical of professional, managerial, technical, and creative professions such as science, engineering, law, medicine, design, and marketing (p. 2)

Ames High band: Modeling innovation, risk-taking, and feedback

I’m pretty impressed with the Ames High School band directors. Not only are Chris Ewan and Andrew Buttermore facilitating a great band program musically (250+ students who give amazing performances), they also are modeling instructional innovation and risk-taking with technology. When our district provided laptops for students, for example, they immediately jumped on the opportunity for band students to record themselves and then submit their digital files for review. Many students are using SmartMusic to help them practice and – even cooler – marching band participants now can see what they’re trying to accomplish on the field because they’ve been sent a Pyware video that shows them what it looks like from the perspective of those of us in the stands. [Next up, Ohio State!]

But I think the most enthralling thing they’ve done to date was a video that they showed us during Parent Night last week (feel free to pause at any time to get the full effect):

How do you help a group of incoming 9th graders realize what it looks like when they’re out of step? Put a video camera on the track at foot level, of course!

BRILLIANT.

Imagine you’re a brand new band student… You’ve only been marching for a few days. You’re juggling learning new music with learning how to step in time. It’s difficult to see what everyone else is doing. Your opportunities for feedback are relatively limited in the large group. And so on. It’s easy to feel like maybe you’re doing better than you really are. Heck, you didn’t hit the student next to you today with your tuba, right? But the video doesn’t lie… “Wait, those are MY feet! And I’m not there yet.” And that other video from up in the stands that shows that our lines need work too? Also useful for helping me see where I fit into the overall picture…

Why do I like this video so much? Because it models creative ways to give kids feedback and because it uses technology to help students learn how to get better. As Chris Anderson noted in his TED talk, video often allows us to innovate more rapidly. Want your 9th graders to ramp up their marching band footwork as fast as possible? Show them – don’t just tell them – what it looks like…

How is your school using technology to help kids SEE how they can get better? (and, no, I’m not talking about ‘adaptive’ multiple choice software)

Test makers should not be driving instruction

In a post about the difficulty of New York’s Common Core assessments, Robert Pondiscio said:

Test makers have an obligation to signal to the field the kind of instructional choices they want teachers to make.

via http://edexcellence.net/articles/new-york%E2%80%99s-common-core-tests-tough-questions-curious-choices

I’m going to disagree with Robert on this one. I’m fairly certain that test makers should NOT be the ones driving instruction…

What testing should do for us

[Image: multiple choice test]

John Robinson said:

‘We would like to dethrone measurement from its godly position, to reveal the false god it has been. We want instead to offer measurement a new job – that of helpful servant. We want to use measurement to give us the kind and quality of feedback that supports and welcomes people to step forward with their desire to contribute, to learn, and to achieve.’ – Margaret Wheatley, Finding Our Way: Leadership for an Uncertain Time

Want to know what’s wrong with testing and accountability today? It’s more about a ‘gotcha’ game than really trying to help teachers improve their craft. Over and over, ad nauseam, those pushing these tests talk about using test data to improve teaching and thereby student learning, but that’s not what is happening at all.

via http://the21stcenturyprincipal.blogspot.com/2014/08/time-to-dethrone-testing-from-its-godly.html

Image credit: Exams Start… Now, Ryan M.

Why meaningful math problems are defined out of online assessments

Dan Meyer said:

at this moment in history, computers are not a natural working medium for mathematics.

For instance: think of a fraction in your head.

Say it out loud. That’s simple.

Write it on paper. Still simple.

Now communicate that fraction so a computer can understand and grade it. Click open the tools palette. Click the fraction button. Click in the numerator. Press the “4” key. Click in the denominator. Press the “9” key.

That’s bad, but if you aren’t convinced the difference is important, try to communicate the square root of that fraction. If it were this hard to post a tweet or update your status, Twitter and Facebook would be empty office space on Folsom Street and Page Mill Road.

It gets worse when you ask students to do anything meaningful with fractions. Like: “Explain whether 4/3 or 3/4 is closer to 1, and how you know.”

It’s simple enough to write down an explanation. It’s also simple to speak that explanation out loud so that somebody can assess its meaning. In 2012, it is impossible for a computer to assess that argument at anywhere near the same level of meaning. Those meaningful problems are then defined out of “mathematics.”

via http://blog.mrmeyer.com/2012/what-silicon-valley-gets-wrong-about-math-education-again-and-again

The REAL international story of American education

Linda Darling-Hammond said:

Federal policy under No Child Left Behind (NCLB) and the Department of Education’s ‘flexibility’ waivers has sought to address [the problem of international competitiveness] by beefing up testing policies — requiring more tests and upping the consequences for poor results: including denying diplomas to students, firing teachers, and closing schools. Unfortunately, this strategy hasn’t worked. In fact, U.S. performance on the Program for International Student Assessment (PISA) declined in every subject area between 2000 and 2012 — the years in which these policies have been in effect.

Now we have international evidence about something that has a greater effect on learning than testing: Teaching. The results of the Teaching and Learning International Survey (TALIS), released last week by the Organization for Economic Cooperation and Development (OECD), offer a stunning picture of the challenges experienced by American teachers, while providing provocative insights into what we might do to foster better teaching — and learning — in the United States.

In short, the survey shows that American teachers today work harder under much more challenging conditions than teachers elsewhere in the industrialized world. They also receive less useful feedback, less helpful professional development, and have less time to collaborate to improve their work. Not surprisingly, two-thirds feel their profession is not valued by society — an indicator that OECD finds is ultimately related to student achievement.

Nearly two-thirds of U.S. middle-school teachers work in schools where more than 30 percent of students are economically disadvantaged. This is by far the highest rate in the world, and more than triple the average TALIS rate. The next countries in line after the United States are Malaysia and Chile.

Along with these challenges, U.S. teachers must cope with larger class sizes (27 versus the TALIS average of 24). They also spend many more hours than teachers in any other country directly instructing children each week (27 versus the TALIS average of 19). And they work more hours in total each week than their global counterparts (45 versus the TALIS average of 38), with much less time in their schedules for planning, collaboration, and professional development.

via http://www.huffingtonpost.com/linda-darlinghammond/to-close-the-achievement_b_5542614.html

The dangers of a single story

Nadia Behizadeh said:

If a child does not perform well on [one timed large-scale assessment essay], there will be a single story told about this student: he/she has below basic skills in writing, or maybe even far below basic skills. Yet this same student may be a brilliant poet or have a hundred pages of a first novel carefully stowed in his/her backpack. However, when a single story of deficiency is repeated again and again to a student, that student develops low writing self-efficacy and a poor self-concept of himself/herself as a writer. . . . [T]he danger of the single story is the negative effect on students when one piece of writing on a decontextualized prompt is used to represent writing ability. (pp. 125-126)

via http://edr.sagepub.com/content/43/3/125

Responsible educational journalism

Leslie and David Rutkowski say:

simply reporting results, in daring headline fashion, without caution, without caveat, is a dangerous practice. Although cautious reporting isn’t nearly as sensational as crying “Sputnik!” every time the next cycle of PISA results is reported, it is the responsible thing to do.

via http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/03/20/so-how-overblown-were-no-1-shanghais-pisa-results

This holds true, of course, for all other assessment results as well. I am continually amazed at how many press releases become ‘news stories,’ sometimes nearly verbatim. Too many educational journalists have abdicated their responsibility to ask questions, to investigate claims and evidence, to cast a skeptical eye on puffery, and to try to get to the truth…

Picking right answers from a set of prescribed alternatives that trivialize complexity and ambiguity

Leon Botstein says:

The essential mechanism of the SAT, the multiple choice test question, is a bizarre relic of long outdated twentieth century social scientific assumptions and strategies. As every adult recognizes, knowing something or how to do something in real life is never defined by being able to choose a “right” answer from a set of possible answers (some of them intentionally misleading) put forward by faceless test designers who are rarely eminent experts. No scientist, engineer, writer, psychologist, artist, or physician – and certainly no scholar, and therefore no serious university faculty member – pursues his or her vocation by getting right answers from a set of prescribed alternatives that trivialize complexity and ambiguity.

via http://time.com/15199/college-president-sat-is-part-hoax-and-part-fraud
