Reliable Information Retrieval Systems Performance Evaluation: A Review

Document Type

Article

Publication Date

1-1-2024

Abstract

With the proliferation and availability of various search tools, researcher interest in evaluating information retrieval from the user's perspective has grown tremendously. Information retrieval system evaluation typically follows the Cranfield paradigm, in which test collections provide the foundation of the evaluation process. A test collection consists of a document corpus, a set of topics, and a set of relevance judgments. The relevance judgments identify the documents in the corpus that are relevant to each topic. The accuracy of the evaluation process depends on the number of relevant documents in the relevance judgment set, called the qrels. This paper presents a comprehensive study of the various ways to increase the number of relevant documents in the qrels, thereby improving the quality of the qrels and, in turn, the accuracy of the evaluation process. The ways in which each methodology retrieves additional relevant documents are categorized, described, and analyzed, resulting in an inclusive overview of these methodologies.
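The qrels-building process described above is commonly implemented with depth-k pooling, which the keywords also mention. The following is a minimal sketch (an illustration, not code from the paper): the top-k documents from each system's ranked run are pooled for human assessment, and the resulting qrels are then used to score individual systems. All document identifiers and runs here are hypothetical.

```python
def pool(ranked_runs, k):
    """Union of the top-k documents from each system's ranked run.

    Only pooled documents are sent to human assessors; everything
    outside the pool is assumed non-relevant under the Cranfield setup.
    """
    pooled = set()
    for run in ranked_runs:
        pooled.update(run[:k])
    return pooled


def precision_at_k(run, qrels, k):
    """Fraction of a run's top-k documents judged relevant in the qrels."""
    return sum(1 for doc in run[:k] if doc in qrels) / k


# Hypothetical ranked runs from three retrieval systems for one topic.
runs = [
    ["d1", "d2", "d3", "d4"],
    ["d2", "d5", "d1", "d6"],
    ["d7", "d2", "d8", "d1"],
]

pooled = pool(runs, k=2)        # documents submitted for judgment
qrels = {"d1", "d2", "d7"}      # assumed outcome of the assessments

print(sorted(pooled))                        # ['d1', 'd2', 'd5', 'd7']
print(precision_at_k(runs[0], qrels, k=2))   # 1.0
```

A deeper pool (larger k) or more contributing systems enlarges the judged set, which is exactly the lever the surveyed methodologies manipulate to improve qrels quality.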

Keywords

Information retrieval, Costs, System performance, Reliability, XML, Web sites, Testing, Document handling, Performance evaluation, Document similarity, human assessors, information systems evaluation, pooling, topics

Divisions

fsktm

Funders

Ministry of Higher Education Malaysia via Fundamental Research Grant Scheme

Publication Title

IEEE Access

Volume

12

Publisher

Institute of Electrical and Electronics Engineers

Publisher Location

445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
