Thursday, February 1, 2007

Blogging Bean

_Slate_ has a daily blog called "Blogging the Bible" that I enjoy despite my agnosticism--and I like the alliteration--so I'm co-opting it.

I'm interested in reading the works of the "numerous researchers [who] have refined or refocused Diederich's criteria and have developed successful strategies for training readers as evaluators" (256). I'm primarily interested in the implication that follows--seeing how "Many of these strategies have classroom applications also, for training students as evaluators of writing greatly improves their ability to give high-quality advice in peer review workshops" (ibid). Since I plan on giving peer review workshops a prominent role in my classes, I'm intrigued to follow up on this point. I like how Bean concludes the chapter by asserting that "When students know an instructor's criteria for assigning grades--and when they have the opportunity to help one another apply these criteria to works in progress--the quality of their final products will improve gratifyingly. It is gratifying indeed to see how well many undergraduates can write when they are engaged in their projects and follow the stages of the writing process through multiple drafts and peer reviews" (264). That's the way I feel.

Back to page 256, Bean's point that while readers can be trained to use "uniform criteria" to evaluate student papers, "these criteria often vary from discipline to discipline (and from teacher to teacher)" reminds me of Gerald Graff's essay "Two Ships in the Night," which is all about this very topic. Graff points out that in a humanities class, foundationalism is often verboten, whereas in a political science class foundationalism is often, well, foundational. It seems like a problem that will come up when thinking about teaching writing across the curriculum.

I found myself siding with the analytic model as I read Bean because (as I wrote to myself in the margin) it seems that not only does this approach convey "detailed information about the teacher's judgment of the essay," as Bean notes, but that information can also tell students what their strengths and weaknesses are. But then I read Bri's post on the strength of the holistic approach, and now I'm undecided. I think I'll shoot for some hybrid of the two.

As Katie pointed out, both Bean and Curzan & Damour stress that one should "read through a set of papers quickly before marking them and assigning grades, trying to get a feel for the range of responses and sizing up what the best papers are like" (Bean 263), and that the correlation between readers increases if readers read quickly, "trusting the reliability of their first impressions," as Bean puts it (259). Bean also cites E.M. White here, as he does in a couple of other places where I found myself noting things I thought were key. So I'm checking out both of the White works that Bean includes in his bibliography.

I found quite a few ideas from both Bean and Curzan & Damour to incorporate into my syllabus, like providing "samples of successful student papers from previous classes" (Bean 257)--which I think Andrew does in his syllabus--and also just writing out the specific grading criteria for each writing assignment. The analytic scale (primary trait method) Bean includes on page 260 is a good example--but I *really* like the explanation he includes from Harry Shaw at Cornell (page 264). I think I want to do something more like that to actually hand to students with my syllabus.

2 comments:

Claire Schmidt said...

I agree with you about the high level of useful suggestions in Bean. I particularly appreciated the inclusion of student responses to teacher comments. I so vividly remember thinking (and muttering) the same things in response to the same comments, but I'd forgotten how frustrated I would get at incomprehensible instructor comments. I think it'll be a big challenge to give useful feedback without being too squishy or too mean.

Katharine said...

Court,

I'm also kind of torn on which grading rubric I would want to use. I sort of take issue with the holistic approach — I could see a student looking at his/her paper and the assignment sheet and saying, "Yeah, I have an original and interesting argument. I should be getting an A on this." On the other hand, as we saw on Thursday in class, how can an instructor decide which criteria deserve the most weight on an analytical rubric?

I think I'm going to be relying on the Writing Lab's Guide to Revision to look for some ideas on what I want to include.