A reinforcement learning algorithm for efficient dynamic trading execution in the presence of signals
Author(s)
Elkind, Daniel (Daniel Harris)
Other Contributors
Sloan School of Management.
Advisor
Adrien Verdelhan
Abstract
This paper focuses on the optimal trading execution problem, in which a trader seeks to maximize the proceeds from trading a given quantity of shares of a financial asset over a fixed-duration trading period, taking into account that trading impacts the future trajectory of prices. I propose a reinforcement learning (RL) algorithm to solve this maximization problem. I prove that the algorithm converges to the optimal solution in a large class of settings and point out a useful duality between the learning contraction and the dynamic programming PDE. Using simulations calibrated to historical exchange trading data, I show that the algorithm (i) reproduces the analytical solution for the case of random walk prices with a linear absolute price impact function and (ii) matches the output of classical dynamic programming methods for the case of geometric Brownian motion prices with linear relative price impact. In the most relevant case, when a signal containing information about prices is introduced to the environment, traditional computational methods become intractable. My algorithm still finds the optimal execution policy, leading to a statistically and economically meaningful reduction in trading costs.
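To make the setting concrete, the following is a minimal sketch (an illustration, not the thesis's implementation) of tabular Q-learning on a discretized version of case (i): a seller liquidates X shares over T periods, the price follows an arithmetic random walk, and a trade of a shares executes at p - theta*a due to linear temporary price impact. All parameter names and values (T, X, theta, sigma, alpha, eps, episodes) are assumptions chosen for illustration. Under these assumptions the analytical optimum is to split the order evenly across periods, so the learned greedy policy should approximate equal slices.

    import numpy as np

    # Illustrative parameters (assumptions, not taken from the thesis):
    T, X = 4, 8              # trading periods, total shares to liquidate
    theta = 0.1              # linear temporary price impact per share
    sigma = 0.5              # std of random-walk price innovations
    p0 = 100.0               # initial price
    alpha, eps, episodes = 0.1, 0.2, 200_000

    rng = np.random.default_rng(0)
    # Q[t, inventory, action]: estimated proceeds-to-go from selling
    # `action` shares at period t while still holding `inventory` shares.
    Q = np.zeros((T, X + 1, X + 1))

    for _ in range(episodes):
        p, inv = p0, X
        for t in range(T):
            if t == T - 1:
                a = inv                                   # must finish the order
            elif rng.random() < eps:
                a = int(rng.integers(0, inv + 1))         # explore
            else:
                a = int(np.argmax(Q[t, inv, : inv + 1]))  # exploit over feasible actions
            reward = a * (p - theta * a)                  # proceeds net of temporary impact
            p += sigma * rng.standard_normal()            # random-walk price update
            nxt = 0.0 if t == T - 1 else np.max(Q[t + 1, inv - a, : inv - a + 1])
            Q[t, inv, a] += alpha * (reward + nxt - Q[t, inv, a])
            inv -= a

    # Greedy policy: with a martingale price and linear impact, the known
    # analytical optimum is an even split (here, 2 shares per period),
    # which the learned policy should approximate.
    inv = X
    for t in range(T):
        a = inv if t == T - 1 else int(np.argmax(Q[t, inv, : inv + 1]))
        print(f"t={t}: sell {a}")
        inv -= a

In the thesis's main case, a price-predictive signal enters the state alongside time and inventory, enlarging the state space; that is the regime in which the abstract notes that traditional computational methods become intractable.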
Description
Thesis: S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 27-29).
Date issued
2019
Department
Sloan School of Management
Publisher
Massachusetts Institute of Technology
Keywords
Sloan School of Management.