Classical versus reinforcement learning algorithms for unmanned aerial vehicle network communication and coverage path planning: A systematic literature review

Document Type

Article

Publication Date

3-1-2023

Abstract

In unmanned aerial vehicle networks, coverage path planning must cover all points of interest while maintaining network communication. Coverage path planning in such networks is crucial for many applications, such as surveying, monitoring, and disaster management. Since coverage path planning is an NP-hard problem, researchers in this domain are constantly looking for optimal solutions to this task. Speed, direction, altitude, environmental variations, and obstacles make coverage path planning more difficult. Researchers have proposed numerous coverage path planning algorithms. In this study, we examine and discuss existing state-of-the-art coverage path planning algorithms. We divide the existing techniques into two core categories: classical and reinforcement learning. The classical algorithms are further divided into subcategories owing to the considerable variation within this category. For each algorithm in both categories, we examine the issues of mobility and altitude and the characteristics of known and unknown environments. We also discuss the optimality of the different algorithms. At the end of each section, we identify existing research gaps and provide future insights for overcoming them.

Keywords

Air-to-ground, Coverage path planning, Network communication, Reinforcement learning, Unmanned aerial vehicles

Divisions

fsktm

Publication Title

International Journal of Communication Systems

Volume

36

Issue

5

Publisher

Wiley

Publisher Location

111 River St, Hoboken, NJ 07030-5774, USA
