Monday, July 13, 2015

Using the mastery quiz workflow to eliminate project grading

**WARNING: UNTESTED IDEA**

When my colleague Rob and I discussed Stats projects and reflections, we were weighing the value of each against the time we need to invest in them.  On a parallel track, we have both been using mastery quizzes in our other classes and are now moving them into Stats.

The projects in Stats class (Infographic, Prove-it video, Minute to Win It paper, Ultimate Frisbee spreadsheet) are fun to do, and students seem to learn a lot, but the grading process is tedious and doesn't seem to correlate with individual learning (it reflects only the effort of the hardest worker on the team).  The reflections, which are individual, are a great opportunity to see what each person really learned, but students rarely take them seriously.  Since they always come at the end, after I have already gone through multiple revision cycles of the project with students, they get either no direct response from me or only something small.

My thought was to stop grading projects altogether.  I would keep the rubrics up as the ideal to work toward, but I would remove the points.  Instead, I would create mastery "quizzes", let's call them "check-ins", that ask students to explain (in writing) specifically how the project taught them the skill in question.  When finished writing, they would bring that check-in sheet and their project (on computer/iPad) and show it to me.  If I think they understand it AND their project is an artifact of that understanding, that individual is checked off with that skill.  Each person in the group needs to do this.  They can all share the project itself, even if they divided and conquered some of the work, but everyone needs to defend their understanding of the core components individually.  That is the part I would track.

Since this check requires that the work gets completed (minus any busywork that doesn't lead to one of my objectives), I don't have to grade the project directly.  Since the check-in involves self-reflection on the desired outcome, I can also skip a separate reflection assignment.  Most importantly, the time I spend assessing would be mostly in direct dialogue with students.  This is the change we saw in our department when we transitioned to mastery quizzes elsewhere, and class became very productive.

I think the hardest part will be properly sizing the assessments so there are not too many to handle each class period, while still keeping them comprehensive enough to ensure that the key learning is taking place.

If you think this makes sense, please let me know.  Also, please tell me if this is a terrible idea so I don't invest too many hours in it, or can at least tweak it as soon as possible.  Thanks!

5 comments:

  1. I love this idea -- especially how students would explain how the project work is an artifact of that concept/skill/standard. I've struggled with how to grade group projects as well; specifically how to assess individual mastery when others in the group might have been doing the work that leads to that "mastery". Seems like this would fit well into the formal scales we've worked on in Byron, too.

    1. I think there would be a lot of overlap between stats and science -- if you find yourself trying something like this, keep me updated on how it is going. I think the concept is good, but proper implementation will be an art so that class time isn't wasted checking boxes and stays focused on learning conversations. I totally agree that the scales apply well here too.

  2. "If I think they understand it AND their project is an artifact of that understanding, that individual is checked off with that skill."

    How do you differentiate the "checks" between students -- or do you at all?

    1. I would check one person at a time, each would individually explain their understanding in writing, and each would individually meet with me. The rubric for the section and my teacher judgment would determine if they individually reached understanding. Do you think that answers your concern or is there more?

  3. An additional thought from John McCarthy (Buck Institute) -- add a "gatekeeper" requirement to these individual rubrics. As an example, I would not individually assess your skills until your team completes the project checklist (i.e., actually does the project).
