Training multilayer neural network based on optimal control theory for limited computational resources
Document Type
Article
Publication Date
2-1-2023
Abstract
Backpropagation (BP)-based gradient descent is the standard approach to training a multilayer perceptron neural network. However, BP is inherently slow to learn and sometimes becomes trapped in local minima, mainly because of its constant learning rate: the pre-fixed rate regularly drives the BP network towards an unsuccessful stochastic steepest descent. To overcome this limitation of BP, this work presents an improved method for training the neural network based on optimal control (OC) theory. The weights and biases of the BP neural network are represented as the state equations of an optimal control problem, while the learning rate is treated as the control input, which adapts during the training process. The effectiveness of the proposed algorithm is evaluated on several logic-gate models (XOR, AND, and OR) as well as a full-adder model. Simulation results demonstrate that the proposed algorithm outperforms the conventional method, achieving higher output accuracy with a shorter training time, and it also reduces the incidence of local-minima traps. The proposed algorithm is almost 40% faster than the steepest descent method, with an accuracy improvement of approximately 60%. Consequently, the proposed algorithm is well suited to devices with limited computational resources, since it is less complex and therefore lowers the circuit's power consumption.
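The abstract describes treating the network's weights and biases as states of an optimal control problem and the learning rate as the control input that adapts at each step. The paper's exact Pontryagin-based update rule is not reproduced in this record, so the sketch below is only an illustrative NumPy approximation: each training step selects the learning rate from a small candidate set by trial evaluation of the cost along the backprop direction, a crude proxy for minimising the Hamiltonian with respect to the control. The 2-4-1 architecture, sigmoid activations, MSE loss, and candidate set `U` are assumptions chosen for the XOR task mentioned in the abstract, not details taken from the paper.

```python
import numpy as np

# XOR truth table: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (2, 4))   # "state" variables: hidden-layer weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))   # output-layer weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return h, out

def loss(out):
    return 0.5 * np.mean((out - y) ** 2)

# Candidate learning rates: an assumed discrete "control set" U.
# Each step picks the control u that most reduces the trial cost,
# standing in for minimising the Hamiltonian in u.
U = (0.05, 0.1, 0.5, 1.0, 2.0)

for step in range(5000):
    h, out = forward(W1, b1, W2, b2)
    # Backprop deltas play the role of the costate (adjoint) variables.
    d_out = (out - y) * out * (1 - out) / len(X)
    gW2, gb2 = h.T @ d_out, d_out.sum(0, keepdims=True)
    d_h = d_out @ W2.T * h * (1 - h)
    gW1, gb1 = X.T @ d_h, d_h.sum(0, keepdims=True)

    # Choose the control (learning rate) giving the lowest trial cost.
    best_u, best_cost = None, np.inf
    for u in U:
        _, trial = forward(W1 - u * gW1, b1 - u * gb1,
                           W2 - u * gW2, b2 - u * gb2)
        c = loss(trial)
        if c < best_cost:
            best_u, best_cost = u, c

    W1 -= best_u * gW1
    b1 -= best_u * gb1
    W2 -= best_u * gW2
    b2 -= best_u * gb2

print(np.round(forward(W1, b1, W2, b2)[1], 3))  # typically close to [[0], [1], [1], [0]]
```

Because the step size is re-selected at every iteration instead of being pre-fixed, a poor constant rate cannot stall the descent, which is the intuition behind the abstract's claim of faster training and fewer local-minima traps.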
Keywords
Multilayer neural network, Optimal control, Pontryagin minimum principle, Backpropagation, Logic gates
Divisions
sch_ecs
Funders
Faculty Research Grant (FRG) of Universiti Malaya [Grant No: GPF055A-2020]; King Khalid University [Grant No: RGP.1/74/43]
Publication Title
Mathematics
Volume
11
Issue
3
Publisher
MDPI
Publisher Location
St Alban-Anlage 66, CH-4052 Basel, Switzerland