
Optimizing Energy Management for New & Renewable Energy through Deep Reinforcement Learning

Authors
강도은
Issue Date
2023
Department/Major
Department of Chemical Engineering and Materials Science, Graduate School
Publisher
The Graduate School, Ewha Womans University
Degree
Master
Advisors
나종걸
Abstract
As eco-friendly issues and the obligation to reduce greenhouse gases come to the fore, the importance of energy management is also steadily increasing [1, 2]. Saving energy reduces the amount of power that must be generated, which cuts the greenhouse gas emissions of the process at the source and is also economically attractive while oil prices continue to rise worldwide [3]. However, as energy resources have diversified with the recent use of new & renewable energy, it has become difficult to manage energy in an integrated manner [4]. Each energy resource has its own generation characteristics [5]. Renewable energy in particular is hard to manage because it cannot be generated on demand; its output is determined by natural conditions such as the time of day or the weather [2, 6, 7].

Reinforcement learning (RL) is an artificial intelligence (AI) technique for solving Markov decision processes (MDPs) [8-10]. Because RL discounts and accounts for future rewards, it is better suited to sequential decision-making than to one-off decisions compared with other techniques; in short, RL is strong at time-series problems [11, 12]. With the development of deep neural networks, learning RL policies with neural networks has advanced greatly, and the performance and usability of deep reinforcement learning (DRL) are on the rise [13]. Unlike other AI techniques, RL interacts with its environment, which makes it robust to extrapolation and to uncertainty. In addition, multi-agent deep reinforcement learning (MADRL) offers better scalability than traditional methods. Although training consumes considerable memory and CPU, the learned policy itself is very light, so an RL policy can be deployed on small IoT devices; moreover, the greater the uncertainty, the less computation RL requires compared with mathematical optimization, which is an advantage in terms of computational power [14, 15].

These features are well suited to overcoming the difficulties of current energy management, so in this thesis energy management planning systems were built for future energy sources in two types of problems. Part I covers the background knowledge needed to solve the optimization problems: the traditional mathematical methods for finding an optimal solution and the basics of reinforcement learning as a newer approach. Part II deals with an energy storage system for utilizing curtailed renewable energy. In California, where renewable power generation is increasing exponentially, the amount of energy left over at specific times is also increasing exponentially. Curtailed renewable energy is harder to predict than ordinary renewable generation because it is the energy remaining after demand is served, and reinforcement learning was applied to overcome this uncertainty. The system was designed to maximize profit by storing curtailed solar and wind energy in a storage system and selling it when electricity demand is high.
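For reference, the discounting of future rewards mentioned above corresponds to the standard RL objective of maximizing the expected discounted return; the formula below is the usual textbook form, with reward r_t and discount factor \gamma, not a quantity specific to this thesis.

G_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}, \qquad 0 \le \gamma < 1

A discount factor close to 1 makes the agent weigh long-horizon consequences heavily, which is why RL suits sequential decisions such as hourly storage scheduling.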
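As an illustration of how the Part II problem can be cast as an RL environment, the sketch below treats the storage level, the current electricity price, and the curtailed energy on offer as the state, the charge/discharge decision as the action, and the revenue from selling stored energy as the reward. The class name, capacities, and randomly generated price and curtailment series are hypothetical placeholders, not the model or data used in the thesis.

# Minimal gym-style sketch of the curtailed-energy storage arbitrage problem.
# All names and numbers are illustrative assumptions.
import numpy as np

class CurtailmentStorageEnv:
    """Store curtailed solar/wind energy and sell it when prices are high."""

    def __init__(self, prices, curtailed, capacity=100.0, max_rate=20.0):
        self.prices = np.asarray(prices, dtype=float)        # electricity price each hour ($/MWh)
        self.curtailed = np.asarray(curtailed, dtype=float)  # curtailed energy offered each hour (MWh)
        self.capacity = capacity                              # storage capacity (MWh)
        self.max_rate = max_rate                              # max charge/discharge per step (MWh)
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = 0.0                                        # state of charge (MWh)
        return self._obs()

    def _obs(self):
        return np.array([self.soc, self.prices[self.t], self.curtailed[self.t]])

    def step(self, action):
        # action in [-1, 1]: negative = charge from curtailment, positive = discharge and sell
        a = float(np.clip(action, -1.0, 1.0)) * self.max_rate
        if a < 0:
            # charging is limited by the curtailment on offer and the free capacity
            charge = min(-a, self.curtailed[self.t], self.capacity - self.soc)
            self.soc += charge
            reward = 0.0                                      # curtailed energy assumed free here
        else:
            # discharging is limited by the stored energy
            discharge = min(a, self.soc)
            self.soc -= discharge
            reward = discharge * self.prices[self.t]          # revenue from selling
        self.t += 1
        done = self.t >= len(self.prices)
        return (None if done else self._obs()), reward, done, {}

# Toy rollout with a naive rule: charge when the price is low, sell when it is high.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    env = CurtailmentStorageEnv(prices=rng.uniform(20, 80, 24),
                                curtailed=rng.uniform(0, 30, 24))
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = -1.0 if obs[1] < 50 else 1.0
        obs, r, done, _ = env.step(action)
        total += r
    print(f"toy episode profit: {total:.1f}")

In the thesis itself a DRL agent would replace the hand-written price-threshold rule, learning when to charge and discharge from the uncertain curtailment and price signals.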
Part III aims to build a system that optimizes the energy management of hydrogen charging stations in preparation for the coming hydrogen energy society. A competitive structure is implemented as a MADRL environment between a distribution system, which distributes and sells hydrogen at an appropriate price to make a profit, and hydrogen charging stations, which want to operate at the minimum cost that still meets demand. Each charging station has a different capacity, and because the electrical energy it needs comes from diverse resources including renewables, a decision-making system chooses in real time which energy source to use and whether to purchase or produce.

As eco-friendly issues have recently come to the fore, the development of new & renewable energy and energy saving have grown in importance under the obligation to reduce greenhouse gases. Saving energy means that greenhouse gases emitted by the process can be reduced at the source by lowering the amount of power generation, and it can also bring economic benefits while high oil prices persist worldwide. However, as the types of new & renewable energy resources have diversified, it is becoming harder to manage the energy from these various resources in an integrated way. Renewable resources each have different generation characteristics and are affected by natural phenomena such as the time of day and the weather, so they cannot generate as much as desired at the desired time and are therefore highly uncertain energy sources. Reinforcement learning is an artificial intelligence technique for solving Markov decision processes (MDPs); because it interacts with the environment and considers discounted future rewards, it excels at solving time-series problems. With advances in deep learning, deep reinforcement learning, which uses neural networks to learn the policy, has become increasingly promising, and attempts are being made to apply it to MDPs. Thanks to its interaction with the environment, and unlike conventional deep learning, it also performs well under extrapolation and can produce results that are robust to uncertainty. In addition, with the implementation of multi-agent deep reinforcement learning, the model's scalability is superior to that of traditional methods, and the greater the uncertainty, the relatively smaller the required computation, so it offers a way to address the curse of dimensionality. Since these features are well suited to overcoming the difficulties of current energy management, this thesis uses deep reinforcement learning to predict and optimize a variety of energy management problems.
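To make the competitive structure of Part III easier to picture, the sketch below frames one market step: a distributor agent posts a hydrogen price, and each charging-station agent, with its own capacity, splits its demand between purchasing and on-site production. The class name, capacities, and cost figures are illustrative assumptions, not the MADRL environment actually built in the thesis.

# Minimal sketch of the Part III multi-agent setting: one distributor versus
# several charging stations with different capacities. Numbers are placeholders.
import numpy as np

class HydrogenMarketEnv:
    def __init__(self, station_capacities, production_cost=4.0, wholesale_cost=2.0):
        self.capacities = np.asarray(station_capacities, dtype=float)  # kg per step
        self.production_cost = production_cost   # assumed on-site production cost ($/kg)
        self.wholesale_cost = wholesale_cost     # assumed distributor purchase cost ($/kg)
        self.rng = np.random.default_rng(0)

    def reset(self):
        # each station faces its own random hydrogen demand for the step
        self.demand = self.rng.uniform(0.2, 1.0, self.capacities.size) * self.capacities
        return self.demand

    def step(self, price, buy_fractions):
        """price: distributor's selling price ($/kg);
        buy_fractions: per-station fraction of demand purchased (rest produced on-site)."""
        buy = np.clip(buy_fractions, 0.0, 1.0) * self.demand
        produce = self.demand - buy
        station_rewards = -(buy * price + produce * self.production_cost)  # stations minimize cost
        distributor_reward = np.sum(buy) * (price - self.wholesale_cost)   # distributor maximizes margin
        return station_rewards, distributor_reward

# Toy interaction: stations buy only when the posted price beats on-site production.
if __name__ == "__main__":
    env = HydrogenMarketEnv(station_capacities=[50.0, 120.0, 200.0])
    demand = env.reset()
    price = 3.5
    buy_fractions = np.where(price < env.production_cost, 1.0, 0.0) * np.ones_like(demand)
    s_r, d_r = env.step(price, buy_fractions)
    print("station costs:", -s_r, "distributor profit:", d_r)

In the full MADRL setting described in the abstract, both the price and the buy/produce decisions would be learned policies interacting repeatedly, with the stations' electricity sources (including renewables) entering the production cost in real time.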