
Reinforcement Learning For Adaptive Distribution Network Reconfiguration

  • Author / Creator
    Gholizadeh, Nastaran
  • The increasing demand for electricity driven by the widespread adoption of electric vehicles necessitates effective distribution network reconfiguration methods. However, existing distribution network reconfiguration approaches often rely on precise network parameters, leading to scalability and optimality challenges. To overcome these issues, this thesis proposes a data-driven reinforcement learning-based algorithm for distribution network reconfiguration, developed in three parts.

    In the first part, five reinforcement learning algorithms, namely deep Q-learning, dueling deep Q-learning, deep Q-learning with prioritized experience replay, soft actor-critic, and proximal policy optimization, are compared on the distribution network reconfiguration problem using the 33- and 136-node test systems. Additionally, a new deep Q-learning-based action sampling method is introduced to reduce the size of the action space and improve system loss reduction.
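At its core, this reinforcement learning formulation treats each candidate switch configuration as an action whose reward is the negative of the resulting system loss. A minimal tabular sketch of that idea, not the thesis's deep Q-learning implementation, with hypothetical configuration names and loss values:

```python
import random

def q_learning(actions, reward_fn, episodes=500, alpha=0.5, eps=0.2, seed=0):
    """Tabular Q-learning for a one-step reconfiguration task: each action is
    a candidate switch configuration, the reward is the negative system loss."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(actions)   # explore a random configuration
        else:
            a = max(q, key=q.get)     # exploit the current best estimate
        q[a] += alpha * (reward_fn(a) - q[a])   # one-step TD update
    return max(q, key=q.get)

# Hypothetical losses (kW) for three candidate radial configurations.
losses = {"cfg_a": 120.0, "cfg_b": 95.0, "cfg_c": 140.0}
best = q_learning(list(losses), lambda a: -losses[a])
```

The deep variants compared in this part replace the table with a neural network conditioned on the grid state, but the update rule keeps the same shape.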

    In the second part of this research, an innovative action-space sampling method is developed that uses the Yamada-Kataoka-Watanabe graph algorithm to enumerate all minimum spanning trees of the network, modeled as an undirected graph. Power flow analysis is then conducted for each spanning tree structure to rank the configurations from lowest to highest system loss. Unlike the earlier deep Q-learning-based approach, this sampling method is more versatile and can be applied to any test system without modification. It is evaluated on the 33-, 119-, and 136-node test systems, where comparative analysis against conventional methods demonstrates its effectiveness, scalability, and efficiency in reducing system losses and managing electricity demand.
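The enumerate-then-rank pipeline can be illustrated on a toy network: list every radial (spanning tree) configuration, score each one, and sort. The sketch below uses brute-force enumeration rather than the Yamada-Kataoka-Watanabe algorithm, and total branch resistance as a stand-in for a full power flow loss calculation; the 4-bus network and resistance values are invented for illustration:

```python
from itertools import combinations

def is_spanning_tree(nodes, edges):
    # A spanning tree on n nodes has exactly n-1 edges and no cycles.
    if len(edges) != len(nodes) - 1:
        return False
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v, _ in edges:
        ru, rv = find(u), find(v)
        if ru == rv:            # adding this edge would close a cycle
            return False
        parent[ru] = rv
    return True

def rank_radial_configs(nodes, edges):
    """Enumerate all spanning trees (radial configurations) of the graph and
    rank them by total branch resistance, a proxy for power-flow loss."""
    trees = [c for c in combinations(edges, len(nodes) - 1)
             if is_spanning_tree(nodes, c)]
    return sorted(trees, key=lambda t: sum(r for _, _, r in t))

# Toy 4-bus network: each edge is (bus_u, bus_v, resistance).
nodes = [0, 1, 2, 3]
edges = [(0, 1, 0.1), (1, 2, 0.2), (2, 3, 0.1), (3, 0, 0.3), (1, 3, 0.15)]
ranked = rank_radial_configs(nodes, edges)
best = ranked[0]   # lowest-loss radial configuration under the proxy
```

In the actual method, a power flow solver replaces the resistance-sum proxy, and the spanning trees are produced by the Yamada-Kataoka-Watanabe enumeration rather than by testing all edge subsets.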

    While reinforcement learning methods offer fast decision-making, the lack of transparency in their decision processes hinders their application in critical scenarios. In particular, distribution network reconfiguration involves altering switch states, which can significantly affect switch lifespan and therefore requires careful consideration. To address this transparency issue, the third part of this study introduces a novel approach that employs an explainer neural network to analyze and interpret reinforcement learning-based reconfiguration decisions. The explainer network is trained on the reinforcement learning agent's decisions, taking the active and reactive power of the buses as inputs and generating line states as outputs. Attribution methods are then applied to reveal the relationship between inputs and outputs, offering valuable insight into the agent's decision-making process. Overall, this thesis presents a comprehensive approach to distribution network reconfiguration that combines data-driven reinforcement learning for decision making, graph theory-based action sampling for improving decision optimality, and an explainer neural network for decision interpretation.
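The attribution step can be sketched with a simple perturbation-based method: nudge each input (bus active or reactive power) and measure how the explainer's output score for a line state responds. The policy function below is a hypothetical linear stand-in for the trained explainer network, used only to make the mechanics concrete:

```python
def attribution(policy, x, eps=1e-4):
    """Finite-difference 'gradient x input' attribution: estimate the
    sensitivity of the policy's scalar score to each input feature,
    then weight it by the feature value."""
    base = policy(x)
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps                      # perturb one input at a time
        grads.append((policy(xp) - base) / eps)
    return [g * xi for g, xi in zip(grads, x)]

# Hypothetical stand-in for the explainer's score for one line state:
# the score rises with active power at bus 1, falls with reactive power at bus 2.
def toy_policy(x):
    p1, q2 = x
    return 2.0 * p1 - 0.5 * q2

attr = attribution(toy_policy, [1.0, 0.4])
```

A large positive entry in `attr` flags an input that pushed the agent toward that line state; gradient-based attribution methods compute the same quantity analytically through the trained network.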

  • Graduation date
    Spring 2024
  • Type of Item
    Thesis
  • Degree
    Doctor of Philosophy
  • DOI
    https://doi.org/10.7939/r3-nf7c-gw66
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.