
Task scheduling for control system based on deep reinforcement learning

Liu, Yuhao; Ni, Yuqing; Dong, Chang; Chen, Jun; Liu, Fei

Authors

Yuhao Liu

Yuqing Ni

Chang Dong

Jun Chen

Fei Liu



Abstract

We investigate the computational task scheduling problem of control systems under limited time and a limited number of CPU cores on a cloud server. We employ a neural network model to estimate the runtime consumption of linear quadratic regulators (LQRs) under varying numbers of CPU cores. Building on this, we model the task scheduling problem as a two-dimensional bin packing problem (2D BPP) and formulate the BPP as a Markov Decision Process (MDP). By exploiting the structure of the MDP, we simplify the action space, design an efficient reward function, and propose a Double DQN-based scheduling algorithm over the simplified action space. Experimental results demonstrate that the proposed approach improves training efficiency and learning performance compared with other packing algorithms, effectively addressing the challenges of task scheduling for control systems.
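To make the Double DQN formulation concrete, the sketch below shows the standard Double DQN target computation (online network selects the next action, target network evaluates it) for a scheduler of this kind. This is not the authors' code: the state encoding (a flattened CPU-core x time occupancy grid plus the next task's core/runtime demand), the problem sizes, and the fixed set of candidate placements that forms the simplified action space are all illustrative assumptions.

```python
# Minimal Double DQN sketch for a 2D-BPP-style scheduler (assumptions noted above).
import torch
import torch.nn as nn

N_CORES, HORIZON, N_ACTIONS = 8, 32, 16     # hypothetical problem sizes
STATE_DIM = N_CORES * HORIZON + 2           # occupancy grid + (cores, runtime) of next task
GAMMA = 0.99

class QNet(nn.Module):
    """Simple MLP mapping a flattened scheduling state to Q-values over placements."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, N_ACTIONS),
        )
    def forward(self, s):
        return self.net(s)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-4)

def double_dqn_loss(s, a, r, s_next, done):
    """Double DQN: the online net picks argmax_a' Q(s', a'), the target net scores it."""
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_next = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_next).squeeze(1)
        y = r + GAMMA * (1.0 - done) * q_next
    return nn.functional.mse_loss(q_sa, y)

# One illustrative gradient step on a dummy mini-batch (a real replay buffer would go here).
B = 64
s      = torch.rand(B, STATE_DIM)
a      = torch.randint(0, N_ACTIONS, (B,))
r      = torch.rand(B)                      # e.g. reward favoring tight, feasible placements
s_next = torch.rand(B, STATE_DIM)
done   = torch.zeros(B)
loss = double_dqn_loss(s, a, r, s_next, done)
opt.zero_grad(); loss.backward(); opt.step()
```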

Citation

Liu, Y., Ni, Y., Dong, C., Chen, J., & Liu, F. (2024). Task scheduling for control system based on deep reinforcement learning. Neurocomputing, 610, 128609. https://doi.org/10.1016/j.neucom.2024.128609

Journal Article Type: Article
Acceptance Date: Sep 11, 2024
Online Publication Date: Sep 18, 2024
Publication Date: Dec 2024
Deposit Date: Nov 7, 2024
Publicly Available Date: Nov 7, 2024
Journal: Neurocomputing
Print ISSN: 0925-2312
Electronic ISSN: 1872-8286
Publisher: Elsevier
Peer Reviewed: Peer Reviewed
Volume: 610
Pages: 128609
DOI: https://doi.org/10.1016/j.neucom.2024.128609
Public URL: https://durham-repository.worktribe.com/output/3084273
