Contrary to my earlier worry, there are slides and even videos for some of the talks from the CIME 2013 conference (“Assessment of Mathematical Proficiencies in the Age of the Common Core”); just scroll down the main page to the schedule and find links in the right column. If you have time to watch, I would recommend these:
- “Dissolving the Boundaries,” with Bill McCallum and Jason Zimba. There’s an introduction by Deborah Ball first, valuable in its own right. It’s particularly interesting, though, to see two of the three lead CCSSM writers, who have been up to their necks in this stuff for three years, speaking more or less off the cuff about the challenges of measuring student thinking.
- “Assessment in practice: Use, needs, and examples,” especially David Baiz and then the team of Eyal Wallenberg and Melanie Smith. As I’ve said about some other teachers I’ve known, if we could clone these three, we’d go a long way toward improving mathematics education.
- “Broadening the conversation: Issues and concerns,” with Diane Briars. Her slides say a lot, but it’s still worth watching her presentation.
Residents of Vermont and the other states in the Smarter Balanced Assessment Consortium may also want to view the SBAC presentation from the conference. While it is true (as was pointed out several times over the two and a half days) that assessment people and math people have different areas of expertise, Shelbi Cole clearly knows math and knows the Common Core. The SBAC website includes sample items; notice that when you're looking at a particular item, you can click on a link at the top right to give feedback on it.
One idea sticking in my head from Jason Zimba’s remarks is the “mania for discrimination” that he attributes to organizations that produce high-stakes tests. In their design of tasks, he says,
…the idea that everyone might be able to do a problem and it still be a good problem doesn’t fit. The methods that they’re using make it, it would appear, almost mathematically impossible for students to do what we’ve all signed on to help them do, which is to meet the Standards.
This reminds me of the debate on my campus, and many others, about grade inflation. In a conversation with a colleague in the English department years ago, it emerged that we had different ideas about what information a final course grade should convey. At the time, I was firmly in the “discrimination” camp; that is, that grades are meant to distinguish the highest achievers from the rest of the pack. He, if I remember and interpret correctly, was in the “meet the standards” faction. His goal as an instructor was to bring as many students as possible to mastery, and if a lot of them got A’s, then he’d been successful.
Now, of course, I'm not sure where I sit, which may be why I dislike grading so much. It's not the looking through reams of student work and giving responses, though that can be tiring; rather, it's the choices about partial credit and the background worry that the test was too hard (might discourage some students) or too easy (might not discriminate enough, and then I might have to give out too many A's — and is that a problem or just a measure of my students' — and my — success?). I still like to think that one can be in both camps at once, so that a B means "met the standards" and an A means "met the standards with distinction." An interesting, and I think compelling, take can be found in David Bressoud's "The Red Herring of Grade Inflation" (the January 2013 column in his Launchings series, which is sponsored by the Mathematical Association of America). I won't spoil things by quoting his last paragraph; it's better to read the whole column first.