Deep reinforcement learning for addressing disruptions in traffic light control

Document Type

Article

Publication Date

1-1-2022

Abstract

This paper investigates the use of a multi-agent deep Q-network (MADQN) to address the curse-of-dimensionality issue that arises in the traditional multi-agent reinforcement learning (MARL) approach. The proposed MADQN is applied to traffic light controllers at multiple intersections with busy traffic and traffic disruptions, particularly rainfall. MADQN is based on the deep Q-network (DQN), which integrates traditional reinforcement learning (RL) with the newly emerging deep learning (DL) approach. MADQN enables traffic light controllers to learn, exchange knowledge with neighboring agents, and select optimal joint actions in a collaborative manner. A case study based on a real traffic network is conducted as part of a sustainable urban city project in Sunway City, Kuala Lumpur, Malaysia. An investigation is also performed using a grid traffic network (GTN) to verify that the proposed scheme is effective in a traditional traffic network. The proposed scheme is evaluated using two simulation tools, namely MATLAB and Simulation of Urban Mobility (SUMO). Simulation results show that the proposed scheme reduces the cumulative delay of vehicles by up to 30%.
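The abstract describes DQN-based traffic light agents that learn per intersection, exchange knowledge with neighboring agents, and select actions collaboratively. The sketch below is a minimal, illustrative Python/PyTorch rendering of that idea, not the authors' implementation: the state encoding (e.g., queue lengths per approach), action set (green-phase choices), network sizes, hyperparameters, and the parameter-averaging form of "knowledge exchange" are all assumptions made for illustration.

```python
# Minimal sketch of a DQN-based traffic-light agent with a simple neighbour
# knowledge-exchange step. All encodings and hyperparameters are assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class TrafficLightAgent:
    """One intersection controller (hypothetical state/action design)."""

    def __init__(self, state_dim=8, n_actions=4, gamma=0.95, lr=1e-3):
        self.q = QNetwork(state_dim, n_actions)
        self.target = QNetwork(state_dim, n_actions)
        self.target.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.memory = deque(maxlen=10_000)   # experience replay buffer
        self.gamma, self.n_actions, self.eps = gamma, n_actions, 0.1

    def act(self, state):
        # Epsilon-greedy choice of the next green phase.
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.tensor(state, dtype=torch.float32)).argmax())

    def remember(self, s, a, r, s2, done):
        self.memory.append((s, a, r, s2, done))

    def learn(self, batch_size=32):
        # Standard DQN update against a target network.
        if len(self.memory) < batch_size:
            return
        batch = random.sample(self.memory, batch_size)
        s, a, r, s2, d = map(
            lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch))
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target(s2).max(1).values * (1 - d)
        loss = F.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def exchange_knowledge(self, neighbour, mix=0.1):
        # One possible "knowledge exchange" between neighbouring controllers:
        # blend a fraction of the neighbour's Q-network weights into this
        # agent's network. This is an assumption, not the paper's mechanism.
        with torch.no_grad():
            for p, q in zip(self.q.parameters(), neighbour.q.parameters()):
                p.mul_(1 - mix).add_(mix * q)
```

In a SUMO-based setup such an agent would typically be driven through the TraCI interface, with queue lengths read each control interval forming the state and the selected phase applied to the junction; those integration details are omitted here.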

Keywords

Artificial intelligence, Traffic light control, Traffic disruptions, Multi-agent deep Q-network, Deep reinforcement learning

Divisions

fsktm

Funders

Universiti Teknologi MARA, Fundamental Research Grant Scheme (FRGS) [Grant No: 600-IRMI/FRGS 5/3 (342/2019)], Ministry of Higher Education (MOHE)

Publication Title

CMC-Computers Materials & Continua

Volume

71

Issue

2

Publisher

Tech Science Press

Publisher Location

871 Coronado Center Dr, Suite 200, Henderson, NV 89052, USA
