Our online gradebook displays grades to one decimal place. And according to the company's online forums, many teachers and college professors around the country are quite upset with this lack of precision. How can a teacher possibly rate a student's performance without knowing whether their grade is a 92.44% or a 92.45%?
For me, grading is one of the hardest parts of teaching. It's not easy to assign numerical grades to students' work, especially knowing how many students conflate their grade in a course with their value as a person. (Not that we teachers should expect any sympathy from students for this; their job is hard too.) Anything we can do to make grading more objective removes a bit of that psychic burden.
But we have to admit that there's always a subjective quality to grading, from the creation of the assessments to the grading of them. We can try to limit the subjective portion, but my fear is that using our fancy digital gradebooks to give more precise numbers only gives the illusion of objectivity. "I'm sorry, Bertha, I know you're a great student, but you only earned 89.99% of the available points this semester, so that's just not an A."
Some teachers are fine with that philosophy, and that's OK. Of course, the cutoff between an A and a B has to fall somewhere. I just don't have the confidence to say that I didn't "cost" a student a hundredth of a point somewhere over the course of the semester. As much as I try to ensure that each assessment is precisely written and then graded consistently for every student, I'm not perfect. What if I was a little hungry when I read Bertha's essay and marked off a few more points than I did on somebody else's essay of similar quality? Or what if a single multiple-choice question on a test was unclear, with two defensible answers?
There's nothing wrong with teachers who don't share these doubts and are completely comfortable with highly precise grading scales. But I can't help wondering whether this is an example of the False Precision Fallacy:
False precision (also called overprecision, fake precision, misplaced precision and spurious accuracy) occurs when numerical data are presented in a manner that implies better precision than is actually the case; since precision is a limit to accuracy, this often leads to overconfidence in the accuracy as well.
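To see how those extra digits appear out of thin air, here's a toy sketch (the scores and the 90% cutoff are hypothetical, not from any real gradebook): averaging a handful of whole-number scores produces decimal digits that imply precision the underlying grading never had, and a single subjective point can flip the letter grade at the cutoff.

```python
# Hypothetical scores: six assignments, each graded to the nearest whole point.
scores = [92, 88, 91, 87, 91, 90]

# The average carries digits (89.8333...) beyond anything actually measured.
average = sum(scores) / len(scores)
print(f"{average:.2f}%")                 # "89.83%" -- hundredths the grader never assessed
print("A" if average >= 90 else "B")     # "B"

# One subjective point on a single essay flips the letter grade.
scores[0] += 1
average = sum(scores) / len(scores)      # 90.0
print("A" if average >= 90 else "B")     # "A"
```

The hundredths place looks authoritative on a report card, but it's an artifact of division, not a finer measurement.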
It's entirely possible (likely, even) that I'm wrong about all this, but acknowledging the inherent subjectivity of grading rather than fighting it has given me peace: I don't lose sleep when I give students the benefit of the doubt.