@article{h13112024ijsea13111010,
  title    = {Enhancing Power Grid Resilience through Deep Neural Networks and Reinforcement Learning: A Simulated Approach to Disaster Management},
  author   = {Hossein Rahimighazvini and Zeyad Khashroum and Maryam Bahrami and Sahand Saeidi},
  journal  = {International Journal of Science and Engineering Applications (IJSEA)},
  volume   = {13},
  number   = {11},
  pages    = {43--54},
  year     = {2024},
  abstract = {This paper investigates the application of deep neural networks (DNNs) and reinforcement learning (RL) to improve power grid resilience during disaster scenarios within a simulated environment. The DNN model is employed to extract critical features related to grid performance, including weather conditions, transformer loads, and infrastructure vulnerabilities, while the RL agent optimizes grid recovery strategies. Multiple disaster scenarios, such as hurricanes, floods, and cyberattacks, were simulated to test the models' effectiveness in reducing grid downtime, minimizing cascading failures, and managing resource allocation. The RL agent leveraged real-time feedback loops to dynamically adjust its decisions, enhancing adaptability to evolving grid conditions. Results demonstrated that the combined DNN-RL system maintained grid stability, prioritized critical infrastructure recovery, and optimized the deployment of repair crews and backup resources. The study highlights the potential of machine learning models to effectively manage complex grid operations under stress, providing a framework for further research into adaptive disaster management strategies in power systems.}
}