Open Access

Combined heat and power system intelligent economic dispatch: A deep reinforcement learning approach

Suyang Zhou 1
Zijian Hu 1
Wei Gu 1
Meng Jiang 2
Meng Chen 3
Qiteng Hong 4
Campbell D. Booth 4
Publication type: Journal Article
Publication date: 2020-09-01
Scimago quartile: Q1
WoS quartile: Q1
SJR: 1.714
CiteScore: 13.0
Impact factor: 5.0
ISSN: 0142-0615, 1879-3517
Electrical and Electronic Engineering
Energy Engineering and Power Technology
Abstract
This paper proposes a Deep Reinforcement Learning (DRL) approach for Combined Heat and Power (CHP) system economic dispatch that adapts to different operating scenarios and significantly reduces computational complexity without sacrificing accuracy. In terms of problem description, many CHP economic dispatch problems are modeled as a high-dimensional, non-smooth objective function with a large number of non-linear constraints, so powerful optimization algorithms and considerable time are required to solve them. To reduce solution time, most engineering applications choose to linearize the optimization target and device models. To avoid this complicated linearization process, this paper models the CHP economic dispatch problem as a Markov Decision Process (MDP), making the model highly encapsulated while preserving the input and output characteristics of the various devices. Furthermore, we adapt an advanced deep reinforcement learning algorithm, Distributed Proximal Policy Optimization (DPPO), to make it applicable to the CHP economic dispatch problem. Based on this algorithm, the agent is trained to explore optimal dispatch strategies for different operating scenarios and to respond to system emergencies efficiently. In the utilization phase, the trained agent generates the optimal control strategy in real time based on the current system state. Compared with existing optimization methods, the advantages of DRL methods are mainly reflected in the following three aspects: 1) Adaptability: given the same network topology, the trained agent can handle the economic dispatch problem in various operating scenarios without recalculation. 2) High encapsulation: the user only needs to input the operating state to obtain the control strategy, whereas an optimization algorithm requires the constraints and other formulas to be rewritten for different situations. 3) Time-scale flexibility: the approach can be applied to both day-ahead optimal scheduling and real-time control. The proposed method is applied to two test systems with different characteristics. The results demonstrate that the DRL method can handle a variety of operating situations while achieving better optimization performance than most other algorithms.
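To make the MDP formulation described in the abstract concrete, the sketch below shows one way a CHP economic dispatch problem might be cast as a sequential decision problem: the state carries the current electric and heat loads, the action sets the CHP and auxiliary-boiler outputs, and the reward is the negative operating cost. This is a minimal illustration only; the environment name (CHPDispatchEnv), device parameters, synthetic load profiles, and cost coefficients are assumptions for demonstration and are not taken from the paper. In practice, a policy-gradient agent such as PPO/DPPO would be trained against an environment of this shape rather than the fixed policy used at the bottom.

import numpy as np


class CHPDispatchEnv:
    """Toy MDP for CHP economic dispatch (illustration only, not the paper's model).

    State  : [electric load (MW), heat load (MW), hour of day]
    Action : [CHP power setpoint in [0, 1], boiler heat setpoint in [0, 1]]
    Reward : negative operating cost, with a penalty for unserved heat demand.
    """

    # All device parameters and prices below are assumed placeholder values.
    P_CHP_MAX = 10.0      # MW, CHP electric capacity
    H_BOILER_MAX = 8.0    # MW, auxiliary boiler heat capacity
    HEAT_TO_POWER = 1.2   # heat-to-power ratio of the CHP unit
    C_FUEL_CHP = 40.0     # $/MWh of CHP electric output
    C_FUEL_BOILER = 25.0  # $/MWh of boiler heat output
    C_GRID = 60.0         # $/MWh of imported grid electricity
    PENALTY = 500.0       # $/MWh of unserved heat

    def __init__(self, horizon=24, seed=0):
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        return self._observe()

    def _observe(self):
        # Synthetic daily load profiles; a real study would use measured data.
        e_load = 6.0 + 3.0 * np.sin(2 * np.pi * self.t / 24) + self.rng.normal(0, 0.3)
        h_load = 5.0 + 2.0 * np.cos(2 * np.pi * self.t / 24) + self.rng.normal(0, 0.3)
        self.e_load, self.h_load = max(e_load, 0.0), max(h_load, 0.0)
        return np.array([self.e_load, self.h_load, float(self.t)], dtype=np.float32)

    def step(self, action):
        chp_frac, boiler_frac = np.clip(action, 0.0, 1.0)
        p_chp = chp_frac * self.P_CHP_MAX            # CHP electric output
        h_chp = p_chp * self.HEAT_TO_POWER           # heat co-produced by the CHP
        h_boiler = boiler_frac * self.H_BOILER_MAX   # auxiliary boiler heat

        grid_import = max(self.e_load - p_chp, 0.0)               # shortfall bought from the grid
        unserved_heat = max(self.h_load - h_chp - h_boiler, 0.0)  # unmet heat demand

        cost = (self.C_FUEL_CHP * p_chp
                + self.C_FUEL_BOILER * h_boiler
                + self.C_GRID * grid_import
                + self.PENALTY * unserved_heat)
        reward = -cost  # maximizing reward means minimizing operating cost

        self.t += 1
        done = self.t >= self.horizon
        obs = self._observe() if not done else np.zeros(3, dtype=np.float32)
        return obs, reward, done, {}


if __name__ == "__main__":
    # Roll out one episode with a naive constant policy, just to exercise the MDP.
    env = CHPDispatchEnv()
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, r, done, _ = env.step(np.array([0.7, 0.5]))  # fixed setpoints
        total += r
    print(f"episode return (negative cost): {total:.1f}")

The design point this sketch tries to convey matches the abstract's "high encapsulation" claim: once dispatch is expressed as state-action-reward transitions, different operating scenarios only change the observed loads, not the model formulation handed to the solver.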

Top-30

Journals

International Journal of Electrical Power and Energy Systems
14 publications, 12.28%
Energies
11 publications, 9.65%
Applied Energy
5 publications, 4.39%
IET Renewable Power Generation
3 publications, 2.63%
Energy Conversion and Management
3 publications, 2.63%
Applied Soft Computing Journal
2 publications, 1.75%
Journal of Cleaner Production
2 publications, 1.75%
Energy Reports
2 publications, 1.75%
IEEE Access
2 publications, 1.75%
Frontiers in Energy Research
2 publications, 1.75%
Electric Power Systems Research
2 publications, 1.75%
Sustainable Energy, Grids and Networks
2 publications, 1.75%
Energy Informatics
2 publications, 1.75%
IET Generation, Transmission and Distribution
1 publication, 0.88%
International Journal of Production Research
1 publication, 0.88%
Processes
1 publication, 0.88%
Mathematics
1 publication, 0.88%
Fluid Dynamics and Materials Processing
1 publication, 0.88%
Applied Sciences (Switzerland)
1 publication, 0.88%
Soft Computing
1 publication, 0.88%
Electrical Engineering
1 publication, 0.88%
Engineering Applications of Artificial Intelligence
1 publication, 0.88%
Expert Systems with Applications
1 publication, 0.88%
Sustainable Energy Technologies and Assessments
1 publication, 0.88%
Journal of Energy Storage
1 publication, 0.88%
Computers and Chemical Engineering
1 publication, 0.88%
Renewable Energy Focus
1 publication, 0.88%
Renewable and Sustainable Energy Reviews
1 publication, 0.88%
International Journal of Energy Research
1 publication, 0.88%

Publishers

Elsevier
47 publications, 41.23%
Institute of Electrical and Electronics Engineers (IEEE)
22 publications, 19.3%
MDPI
16 publications, 14.04%
Springer Nature
10 publications, 8.77%
Institution of Engineering and Technology (IET)
5 publications, 4.39%
Wiley
3 publications, 2.63%
Taylor & Francis
2 publications, 1.75%
American Institute of Mathematical Sciences (AIMS)
2 publications, 1.75%
Frontiers Media S.A.
2 publications, 1.75%
Tech Science Press
1 publication, 0.88%
Hindawi Limited
1 publication, 0.88%
IntechOpen
1 publication, 0.88%
The Japan Institute of Marine Engineering
1 publication, 0.88%
American Society of Civil Engineers (ASCE)
1 publication, 0.88%
  • We do not take into account publications without a DOI.
  • Statistics recalculated weekly.

Metrics
114 citing publications
Cite this

GOST
Zhou S. et al. Combined heat and power system intelligent economic dispatch: A deep reinforcement learning approach // International Journal of Electrical Power and Energy Systems. 2020. Vol. 120. p. 106016.
GOST (all authors, up to 50)
Zhou S., Hu Z., Gu W., Jiang M., Meng Chen, Hong Q., Booth C. D. Combined heat and power system intelligent economic dispatch: A deep reinforcement learning approach // International Journal of Electrical Power and Energy Systems. 2020. Vol. 120. p. 106016.
RIS
TY - JOUR
DO - 10.1016/j.ijepes.2020.106016
UR - https://doi.org/10.1016/j.ijepes.2020.106016
TI - Combined heat and power system intelligent economic dispatch: A deep reinforcement learning approach
T2 - International Journal of Electrical Power and Energy Systems
AU - Zhou, Suyang
AU - Hu, Zijian
AU - Gu, Wei
AU - Jiang, Meng
AU - Meng Chen
AU - Hong, Qiteng
AU - Booth, Campbell D.
PY - 2020
DA - 2020/09/01
PB - Elsevier
SP - 106016
VL - 120
SN - 0142-0615
SN - 1879-3517
ER -
BibTeX
@article{2020_Zhou,
author = {Suyang Zhou and Zijian Hu and Wei Gu and Meng Jiang and Meng Chen and Qiteng Hong and Campbell D. Booth},
title = {Combined heat and power system intelligent economic dispatch: A deep reinforcement learning approach},
journal = {International Journal of Electrical Power and Energy Systems},
year = {2020},
volume = {120},
publisher = {Elsevier},
month = {sep},
url = {https://doi.org/10.1016/j.ijepes.2020.106016},
pages = {106016},
doi = {10.1016/j.ijepes.2020.106016}
}