Posted: December 15, 2008 Filed under: Digital Humanities, Essays, Teaching | Tags: Computers, digital humanities, digital pedagogy, grading, plagiarism, student engagement, students, Teaching, testing
I have recently started using plagiarism detection software — not so much for its ability to detect plagiarism as for the essay-submission and grading-management capabilities it offers. But the system was, of course, originally designed to detect plagiarism, which means that I too can use it to check my students’ originality.
To the extent that one semester’s data is a sufficient sample, my preliminary conclusions are that the problem of plagiarism, at least in my classes, is more or less as insignificant as I thought it was when I graded by hand, and that my old method of discovering plagiarism (looking into things when a paper didn’t seem quite “right”) still works. This past semester, I caught two people plagiarising. Neither of them had a particularly high unoriginality score: in both cases, I discovered the plagiarism because something in the essay seemed strange to me, which prompted me to go through the originality report Turnitin provides on each essay more carefully. I then went through the reports for every essay submitted by that class (a total of almost 200) to see whether I had missed any essays that Turnitin’s reports suggested might be plagiarised. None of the others showed the kind of suspicious content that had led me to the two I caught. So for me, at least, the “sniff test” remains apparently reliable.