Show simple item record

dc.contributor.advisor: Armstrong, Robert C.
dc.contributor.advisor: Sakti, Apurba
dc.contributor.author: Kumar, Dheekshita
dc.date.accessioned: 2022-01-14T15:01:57Z
dc.date.available: 2022-01-14T15:01:57Z
dc.date.issued: 2021-06
dc.date.submitted: 2021-06-17T20:13:33.514Z
dc.identifier.uri: https://hdl.handle.net/1721.1/139292
dc.description.abstract: Decarbonizing power systems will require introducing renewable sources into the energy supply mix. Intermittent sources in the supply mix, however, make balancing energy supply and demand more challenging. Energy storage systems can balance supply and demand by storing energy when renewable sources generate more energy than is needed and providing energy when generation is insufficient. Failing to account for degradation when operating a battery, however, can dramatically reduce the battery’s life span and increase degradation-related costs. Existing optimization techniques that account for degradation when determining optimal battery operation policies are computationally intensive and time-consuming. Machine learning techniques such as reinforcement learning can produce models that compute action policies in milliseconds and account for complicated system dynamics. In this thesis, we consider the problem of battery operation for energy arbitrage. We explore the use of reinforcement learning to determine arbitrage policies that account for degradation. We compare policies learned by reinforcement learning to the optimal policy, as determined by an advanced mixed-integer linear programming (MILP) model, on NYISO 2013 day-ahead electricity price data. We show that accounting for degradation in reinforcement learning yields learned policies comparable in behavior to degradation-aware MILP-determined policies. We then present a case study that uses reinforcement learning to determine arbitrage policies on PJM 2019 real-time electricity price data, and we find that reinforcement learning shows promise for real-time battery operation in energy arbitrage.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright MIT
dc.rights.uri: http://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Reinforcement Learning for Energy Storage Arbitrage in the Day-Ahead and Real-Time Markets with Accurate Li-Ion Battery Dynamics Model
dc.type: Thesis
dc.description.degree: M.Eng.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Master
thesis.degree.name: Master of Engineering in Electrical Engineering and Computer Science
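The abstract describes learning degradation-aware arbitrage policies with reinforcement learning. As an illustration only, the following is a minimal toy sketch of tabular Q-learning for battery arbitrage; it is not the thesis code, and the synthetic sinusoidal prices, discretization, and per-level degradation penalty are invented assumptions rather than the NYISO/PJM data or battery dynamics model used in the thesis.

```python
import numpy as np

# Toy sketch (hypothetical parameters): tabular Q-learning for battery arbitrage.
# State: (hour of day, discretized state of charge); actions: discharge/idle/charge.
rng = np.random.default_rng(0)
HOURS = 24
SOC_LEVELS = 5            # discretized state-of-charge levels
ACTIONS = (-1, 0, 1)      # move one SoC level down, hold, or up per hour
DEG_COST = 2.0            # assumed degradation penalty per SoC level moved ($)

# Synthetic daily price curve (stand-in for real market prices).
prices = 30 + 20 * np.sin(2 * np.pi * (np.arange(HOURS) - 6) / HOURS)

def step(hour, soc, action):
    """Apply one action; reward is arbitrage revenue minus a degradation penalty."""
    new_soc = min(max(soc + action, 0), SOC_LEVELS - 1)
    moved = new_soc - soc
    # Discharging (moved < 0) sells at the current price; charging (moved > 0) buys.
    reward = -moved * prices[hour] - DEG_COST * abs(moved)
    return (hour + 1) % HOURS, new_soc, reward

Q = np.zeros((HOURS, SOC_LEVELS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

# Epsilon-greedy Q-learning over repeated simulated days.
for episode in range(2000):
    hour, soc = 0, 0
    for _ in range(HOURS):
        if rng.random() < eps:
            a_idx = int(rng.integers(len(ACTIONS)))
        else:
            a_idx = int(np.argmax(Q[hour, soc]))
        nh, ns, r = step(hour, soc, ACTIONS[a_idx])
        Q[hour, soc, a_idx] += alpha * (r + gamma * Q[nh, ns].max() - Q[hour, soc, a_idx])
        hour, soc = nh, ns

# Greedy rollout of the learned policy over one day.
hour, soc, profit = 0, 0, 0.0
for _ in range(HOURS):
    a_idx = int(np.argmax(Q[hour, soc]))
    hour, soc, r = step(hour, soc, ACTIONS[a_idx])
    profit += r
```

The learned table maps each (hour, state-of-charge) pair to an action; a degradation-aware policy emerges because every SoC movement is charged the assumed penalty, so the agent only cycles when the price spread exceeds the cycling cost. The thesis's MILP baseline and real price data would replace the synthetic pieces here.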

