Test case information extraction from requirements specifications using NLP-based unified boilerplate approach
Document Type
Article
Publication Date
5-1-2024
Abstract
Automated testing that extracts essential information from software requirements written in natural language offers a cost-effective and efficient path to error-free software that meets stakeholders' requirements in the software industry. However, natural language can introduce ambiguity into requirements and increase the difficulty of automated testing tasks such as test case generation. Negative requirements also cause inconsistency and are often neglected. This research aims to extract test case information (actors, conditions, steps, system response) from positive and negative requirements written in natural language (i.e., English) using natural language processing (NLP). We present a unified boilerplate that combines Rupp's and EARS boilerplates and serves as the grammar guideline for requirements analysis. Extracted information is populated into a test case template, forming the building blocks for automated test case generation. An experiment was conducted with three public requirements specifications from the PURE dataset to investigate the correctness of the information extracted using the proposed approach. The results show correctness of 50 % (Mdot), 61.7 % (Pointis) and 10 % (Npac) for the extracted information. Lower correctness was observed for negative requirements than for positive requirements. Correctness is also analysed by category, revealing insights into the actors, steps, conditions, and system responses extracted from positive and negative requirements.
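The abstract does not include implementation details; purely as an illustration of the kind of extraction it describes, the sketch below shows how test case fields (actor, condition, system response) might be pulled from a requirement phrased in the EARS event-driven pattern ("When <condition>, the <system> shall <response>"). The regex, the function name extract_test_case_info, and the sample requirement are hypothetical assumptions, not the authors' implementation.

    import re

    # Minimal sketch (not the paper's method): match an EARS-style
    # event-driven requirement and capture test case fields.
    EARS_EVENT = re.compile(
        r"^When (?P<condition>.+?), the (?P<actor>.+?) shall (?P<response>.+?)\.?$",
        re.IGNORECASE,
    )

    def extract_test_case_info(requirement: str) -> dict:
        """Return actor, condition and expected system response,
        or an empty dict if the sentence does not fit the pattern."""
        match = EARS_EVENT.match(requirement.strip())
        if not match:
            return {}
        return {
            "actor": match.group("actor"),
            "condition": match.group("condition"),
            "system_response": match.group("response"),
        }

    # Hypothetical requirement used only to illustrate the extraction step.
    print(extract_test_case_info(
        "When the user submits an invalid password, "
        "the login service shall display an error message."
    ))

A fuller treatment along the lines described in the abstract would also cover Rupp's boilerplate, negative (unwanted-behaviour) requirements, and the population of the extracted fields into a test case template.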
Keywords
Natural language processing, Test case generation, Automation, Software requirements, Software testing, Test case
Divisions
Software
Funders
Universiti Malaya, Malaysia (GPF097B-2020-A)
Publication Title
Journal of Systems and Software
Volume
211
Publisher
Elsevier
Publisher Location
Ste 800, 230 Park Ave, New York, NY 10169, USA