By ensuring that their grading methods accurately report content knowledge, teachers can promote and reward student growth.
At the end of a semester, a student has an 83 percent in my class. What does that actually mean? Did they understand 83 percent of the material, do 83 percent of the work, or collect 83 percent of the points available? Do they know the material, or did they “do school” to earn that grade?
I’ve been thinking about my grading practices a lot lately and how they’re informed (or not) by my overall teaching philosophy. I believe my role as a classroom teacher is to do the following:
- Teach students about my content area (mathematics)
- Encourage their growth in that discipline
- Accurately report their level of content understanding
With that in mind, I’ve critically examined three pervasive traditional grading practices.
Moving Away From Traditional Practices
1. Averaging scores over time: Most grade books average scores over time. We teach for a semester and evaluate students at different intervals. For a student who comes in with strong skills, this feels like a fine practice.
But think about this scenario: Marisa came to class with super-strong skills and aced the whole semester. Jacob’s performance was fine to start out, it improved over time, and he ended up at the same place as Marisa. Elias’s and Taylor’s journeys were more of a struggle, but they worked hard, and you did a fantastic job teaching them! If at the end of the semester all four students have the same level of understanding, shouldn’t their grade reflect their current level of knowledge?
Everyone learns at a different pace. Should we penalize students who had a bad foundational experience, who experienced some trauma that caused a temporary dip, or who simply take more time to learn something new? No. I think all four students deserve the same grade if they demonstrate the same level of understanding. Grade books communicate teacher values.
Over the past few years, talk of teaching students a growth mindset has pervaded teacher blogs and ed speak. However, I think that the way teachers typically grade completely undermines that talk. If our grading practices don’t promote, encourage, and reward growth, then we don’t value it. How do we show kids that their growth matters? We eliminate averaging scores over time and do something else instead.
This is how my practice has evolved: I consistently update old performance scores with new and more accurate ones. Students get unlimited retakes on assessments. I require them to continually practice essential content and demonstrate retention of that content. I tell students that learning doesn’t stop after an assessment. An assessment isn’t a final judgment; it’s a progress marker. I believe that every student can succeed in my class.
I tell them, “I will not give up on you just because you don’t know how to do something yet.” I stick to that statement, and it’s reflected in my grade book. If a student scores below proficient on a standard, mandatory retakes are assigned and completed during class time. Students are given individualized opportunities to practice and are reassessed after they’ve learned more. I only keep the most recent reports of demonstrated proficiency. This also requires retention of content knowledge, which has always been a problem in math classes.
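That replacement policy is easy to picture as data: a minimal sketch of a grade book that keeps only the most recent score per learning target. The `record_score` helper and the student and target names here are illustrative, not from any real grade book.

```python
# Hypothetical sketch: a grade book that keeps only the most recent
# proficiency score per learning target. Names are illustrative.
def record_score(gradebook, student, target, score):
    """Overwrite any earlier score for this target with the newest one."""
    gradebook.setdefault(student, {})[target] = score
    return gradebook

book = {}
record_score(book, "Jacob", "solving linear equations", 1)  # early attempt
record_score(book, "Jacob", "solving linear equations", 3)  # after a retake

# Only the newest score survives; the early attempt no longer drags
# the grade down.
print(book["Jacob"]["solving linear equations"])  # 3
```

Averaging would have reported a 2 for this student; keeping only the latest score reports what they know now.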
2. Adding elements other than content understanding into grading: I’ve had students who passed my class, but I knew they had little content understanding. I’ve also had students who I knew understood the material pretty well but had a really low grade. How? Students with inflated grades might have tutors who do their homework, so they collect the “busywork” points, and they might copy work or use Photomath to complete assignments.
In reality, with a 60/40 split of assessments and classwork/homework, a student could fail every test (averaging 33 percent) but do all of the “work” and still pass: 0.6 × 33 + 0.4 × 100 ≈ 60 percent overall. The student doesn’t know the content, but they’ll pass.
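The 60/40 scenario can be checked with a few lines of arithmetic. The weights and scores below are the hypothetical ones from the scenario above, not from any particular gradebook system.

```python
# Checking the 60/40 claim: a student who averages 33% on assessments
# but completes 100% of classwork/homework still lands near passing.
ASSESSMENT_WEIGHT = 0.6  # hypothetical weights from the scenario
WORK_WEIGHT = 0.4

def overall_grade(assessment_avg, work_avg):
    """Weighted average of assessment and classwork/homework percentages."""
    return ASSESSMENT_WEIGHT * assessment_avg + WORK_WEIGHT * work_avg

grade = overall_grade(33, 100)
print(round(grade, 1))  # 59.8 -- rounds to 60, a passing grade on many scales
```

Failing every test still yields roughly 60 percent overall, which is exactly the mismatch between grade and knowledge described above.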
It’s worrisome when grades don’t accurately reflect that a student needs extra attention. Conversely, a student with strong content knowledge and a poor grade is likely in that predicament because they don’t demonstrate good “student” behaviors. Grades are more accurate when they focus on content knowledge rather than reflecting these behaviors.
I now base 100 percent of a student’s grade on demonstrated proficiency of learning targets.
I can hear the questions already: “They don’t get points for their work?”
“Do they still do work?”
“Do some kids have missing assignments?”
I report those work habits in the online grade book to parents so they can support their kids, but those points don’t affect the numerical grade (positively or negatively). There’s no more point grabbing, and students do the work because we’ve built a culture of work that yields results.
I constantly talk about the correlation between practice and performance. Kids understand this connection thanks to sports, dance, video games, and other hobbies. Why wouldn’t we lean into that connection with something as important as education?
3. Reporting opaque scores like “Quiz 4B: 71%”: I’m extremely transparent with students and parents about what a grade means. I report on students’ current level of understanding of specific learning targets that are based upon learning progressions designed with my colleagues. Using my professional judgment of student understanding, I score assessments on a proficiency scale:
4—Mastery (A)
3—Nearing Mastery (B)
2—Developing (C)
1—Not Yet (D)
0—No Evidence of Learning (F)
If an assessment covers four learning targets, the assessment results in four separate scores in my grade book:
Target 1: 4 (A)
Target 2: 2 (C)
Target 3: 2 (C)
Target 4: 1 (D)
I have current data on each target for each student. Everyone can see transparently where this student is excelling and where they are struggling. If this assessment were the only thing in the grade book, the student’s overall grade would be a C (9 ÷ 4 = 2.25). That feels accurate to me.
Alternatively, if I graded this test based solely on percent correct (and let’s assume each target had four problems), the student would have 9 / 16 (56%), which is traditionally an F. That feels inaccurate to me. Based on my professional judgment, this student has passing knowledge on three out of four topics and a really good understanding of one of them.
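Side by side, the two roll-ups of this same assessment look like this. The sketch assumes, as in the example above, that each 0–4 target score also equals the number of problems answered correctly on that target.

```python
# Comparing the two roll-ups of the same assessment from the example:
# target-based averaging on the 0-4 scale vs. raw percent correct.
target_scores = {"Target 1": 4, "Target 2": 2, "Target 3": 2, "Target 4": 1}

# Roll-up 1: average proficiency across learning targets.
proficiency_avg = sum(target_scores.values()) / len(target_scores)
print(proficiency_avg)  # 2.25 -> a C on the 0-4 scale

# Roll-up 2: percent correct, assuming four problems per target and that
# each target score equals the number of problems answered correctly.
points_earned = sum(target_scores.values())  # 9 problems correct
points_possible = 4 * len(target_scores)     # 16 problems total
percent_correct = 100 * points_earned / points_possible
print(round(percent_correct))  # 56 -> traditionally an F
```

Same student, same answers: one roll-up reports a C, the other an F. The only difference is the arithmetic the grade book runs.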
Do they really deserve to fail with that level of demonstrated knowledge? This way of reporting grades to students allows them to take more ownership of their learning path. They can track their progress over time, set goals, and see how they advance.