The use of text-matching software's similarity scores

Document Type

Article

Publication Date

5-1-2023

Abstract

Popular text-matching software generates a percentage of similarity - called a "similarity score" or "Similarity Index" - that quantifies the matching text between a particular manuscript and content in the software's archives, on the Internet and in electronic databases. Many evaluators rely on these simple figures as a proxy for plagiarism and thus avoid the burdensome task of inspecting the longer, detailed Similarity Reports. Yet similarity scores, however alluringly straightforward, are never enough to judge the presence (or absence) of plagiarism. Ideally, evaluators should always examine the Similarity Reports. Given the persistent use of simplistic similarity-score thresholds at some academic journals and educational institutions, however, and the time that can be saved by relying on the scores, a method is arguably needed that encourages examination of the Similarity Reports while still allowing evaluators to rely on the scores in some instances. This article proposes a four-band method to accomplish this. Used together, the bands oblige evaluators to acknowledge the risk of relying on similarity scores yet still allow them to decide whether to accept that risk. The bands - for most rigor, high rigor, moderate rigor and less rigor - should be tailored to an evaluator's particular needs.
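
To make the banding idea concrete, the sketch below (not part of the published article) shows one way a similarity score could be sorted into the four named bands. The function name rigor_band, the boundary values, and the assumption that higher scores call for stricter review are all illustrative placeholders; the article itself stresses that the bands must be tailored to each evaluator's needs and that the Similarity Report remains the better basis for judgment.

```python
# Hypothetical sketch: map a 0-100 similarity score to one of the four
# rigor bands named in the abstract. Boundary values (10, 25, 40) and the
# direction of the mapping (higher score -> more rigor) are assumptions
# made for illustration only.

def rigor_band(similarity_score: float) -> str:
    """Return an illustrative rigor band for a similarity score in [0, 100]."""
    if not 0 <= similarity_score <= 100:
        raise ValueError("similarity score must be between 0 and 100")
    if similarity_score >= 40:   # placeholder boundary
        return "most rigor"
    if similarity_score >= 25:   # placeholder boundary
        return "high rigor"
    if similarity_score >= 10:   # placeholder boundary
        return "moderate rigor"
    return "less rigor"


if __name__ == "__main__":
    for score in (5, 18, 30, 65):
        print(f"score {score}% -> {rigor_band(score)}")
```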

Keywords

Plagiarism, text-matching software, accountability in research, similarity score, iThenticate, Turnitin, publication ethics

Divisions

fac_law

Publication Title

Accountability in Research: Policies and Quality Assurance

Volume

30

Issue

4

Publisher

Taylor & Francis

Publisher Location

530 Walnut Street, Ste 850, Philadelphia, PA 19106, USA
