dc.contributor.advisor | Adrien Verdelhan | en_US |
dc.contributor.author | Elkind, Daniel (Daniel Harris) | en_US |
dc.contributor.other | Sloan School of Management. | en_US |
dc.date.accessioned | 2020-04-13T18:28:51Z | |
dc.date.available | 2020-04-13T18:28:51Z | |
dc.date.copyright | 2019 | en_US |
dc.date.issued | 2019 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/124585 | |
dc.description | Thesis: S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management, 2019 | en_US |
dc.description | Cataloged from PDF version of thesis. | en_US |
dc.description | Includes bibliographical references (pages 27-29). | en_US |
dc.description.abstract | This paper focuses on the optimal trading execution problem, in which a trader seeks to maximize the proceeds from trading a given quantity of shares of a financial asset over a fixed-duration trading period, taking into account that trading impacts the future trajectory of prices. I propose a reinforcement learning (RL) algorithm to solve this maximization problem. I prove that the algorithm converges to the optimal solution in a large class of settings and point out a useful duality between the learning contraction and the dynamic programming PDE. Using simulations calibrated to historical exchange trading data, I show that the algorithm (i) reproduces the analytical solution for the case of random walk prices with a linear absolute price impact function and (ii) matches the output of classical dynamic programming methods for the case of geometric Brownian motion prices with linear relative price impact. In the most relevant case, when a signal containing information about prices is introduced to the environment, traditional computational methods become intractable. My algorithm still finds the optimal execution policy, leading to a statistically and economically meaningful reduction in trading costs. | en_US |
dc.description.statementofresponsibility | by Daniel Elkind. | en_US |
dc.format.extent | 29 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Sloan School of Management. | en_US |
dc.title | A reinforcement learning algorithm for efficient dynamic trading execution in the presence of signals | en_US |
dc.type | Thesis | en_US |
dc.description.degree | S.M. in Management Research | en_US |
dc.contributor.department | Sloan School of Management | en_US |
dc.identifier.oclc | 1149013871 | en_US |
dc.description.collection | S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management | en_US |
dspace.imported | 2020-04-13T18:28:21Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | Sloan | en_US |