by Richard Kassissieh
A student gazes at a mystery solution. Its contents are unknown. The student reaches into her toolkit, a set of known solutions, and one by one, combines them with a small portion of the mystery solution. One test changes the color to bright yellow. Another produces a milky, solid substance. Gradually, the student pieces together the clues that allow her to identify the unknown solution.
This qualitative analysis laboratory required the student to recall properties of different solutions, understand reaction processes, and synthesize the results of different experimental tests while under pressure. To practice, the student had worked with classmates to identify a series of mystery solutions and shared the findings with the class.
Did this performance assessment play out in the classroom of an innovative, 21st-century educator? No, the qualitative analysis laboratory has been part of British national chemistry examinations for decades.
It is tempting to rail against high-stakes state tests in the United States. They emphasize basic skills and fact recall, pressuring school districts to strip programs to the basics and eliminate or reduce “non-core” subjects such as art and history. For example, many school districts “double dose” language arts and mathematics classes in order to improve students’ performance on state tests (see this EdWeek article).
Are large-scale, standardized assessments bound to forever measure a narrow range of skills and knowledge?
Summative work has to insist on standards of uniformity and reliability in collection and recording of data, which are not needed in formative work, and which inhibit the freedom and attention to individual needs that formative work requires.
External tests which are economical are bound to only take a short time. Therefore, their reliability and validity are bound to be severely constrained: they can only use a limited range of methods and must be limited in respect to their sampling of relevant domains.
Paul Black, Testing: Friend or Foe?
On a brighter note, some national assessments do include hands-on performance assessments. Beyond the IGCSE examination described above, the U.S. Advanced Placement examinations test higher-order thinking skills through essays in many subjects and through the submission of a portfolio of work in art.
The International Baccalaureate program, originally from Switzerland and growing in popularity in the U.S., includes summative assessments that measure creative problem solving, analysis and presentation of information, and argumentation (IB Diploma Programme Assessment Philosophy).
The Partnership for 21st Century Skills reports on work in progress in several states to develop “demonstrations of 21st century skills graded based on a common rubric” (p21.org).
The U.S. college admission process represents another effort to assess higher-order thinking skills fairly across a broad pool of candidates. Most selective colleges use multiple measures to evaluate students: essays, standardized test scores, lists of accomplishments, interviews, the reputation of one’s high school, and letters of recommendation.
It may well be possible to develop standardized assessments that measure 21st century skills. Cost is the main obstacle. How much would it cost to systematically test problem-solving, collaboration, and presentation skills across all U.S. schools?
Though we may see inconsistencies between standardized assessments and 21st century skills, we can advocate for an increased role of performance assessment and the assessment of higher-order thinking skills in these high-stakes tests.
Richard Kassissieh is director of information technology at Catlin Gabel School (Portland, Oregon, U.S.A.) You may find him at http://kassblog.com and @kassissieh.
Photo credit: judybaxter on Flickr
Using the premise that “what gets tested gets taught,” we must begin to assess creativity and critical thinking skills.
I see the growth of IB, the development of P21.org, and even the growth of New Tech High programs as evidence that, in pockets, educators are moving toward project-based learning environments that promote this type of learning.
Don’t be fooled into thinking that you can’t “teach to the test” with rubrics. Giving complete details of what is scored (and how) often means that you are presented with work that hits the minimum qualifications, as opposed to a more authentic “show what you know” assessment.
Scott, thanks for sharing this. I’ve been thinking a lot about “what is growth?” and obviously how we assess (and what we assess) is closely tied to the answer to that question. I wrote down my thoughts on the growth question about 15 minutes before I read this article. Funny. http://www.edulicious.com/blog/2010/10/3/what-is-growth.html