Deep reinforcement and transfer learning for abstractive text summarization: A review

Document Type

Article

Publication Date

1-1-2022

Abstract

Automatic Text Summarization (ATS) is an important area of Natural Language Processing (NLP) whose goal is to shorten a long text into a more compact version that conveys its most important points in a readable form. ATS applications continue to evolve, and researchers keep evaluating and implementing increasingly effective approaches. State-of-the-Art (SotA) approaches that demonstrate cutting-edge performance and accuracy in abstractive ATS include deep neural sequence-to-sequence models, Reinforcement Learning (RL) approaches, and Transfer Learning (TL) approaches, including Pre-Trained Language Models (PTLMs). The Transformer architecture and PTLMs have driven tremendous advances in NLP applications, and the incorporation of recent mechanisms, such as knowledge enhancement, has further improved results. This study provides a comprehensive review of research advances in abstractive text summarization over the past six years. Past and present problems are described, along with their proposed solutions, and abstractive ATS datasets and evaluation measures are highlighted. The paper concludes by comparing the best-performing models and discussing future research directions.

Keywords

Abstractive summarization, Sequence-to-sequence, Reinforcement learning, Pre-trained models

Divisions

fsktm

Funders

None

Publication Title

Computer Speech & Language

Volume

71

Publisher

Academic Press Ltd - Elsevier Science Ltd

Publisher Location

24-28 Oval Rd, London NW1 7DX, England
