Show simple item record

dc.contributor.advisor: Adrien Verdelhan
dc.contributor.author: Elkind, Daniel (Daniel Harris)
dc.contributor.other: Sloan School of Management.
dc.date.accessioned: 2020-04-13T18:28:51Z
dc.date.available: 2020-04-13T18:28:51Z
dc.date.copyright: 2019
dc.date.issued: 2019
dc.identifier.uri: https://hdl.handle.net/1721.1/124585
dc.description: Thesis: S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management, 2019
dc.description: Cataloged from PDF version of thesis.
dc.description: Includes bibliographical references (pages 27-29).
dc.description.abstract: This paper focuses on the optimal trading execution problem, in which a trader seeks to maximize the proceeds from trading a given quantity of shares of a financial asset over a fixed-duration trading period, taking into account that trading impacts the future trajectory of prices. I propose a reinforcement learning (RL) algorithm to solve this maximization problem. I prove that the algorithm converges to the optimal solution in a large class of settings and point out a useful duality between the learning contraction and the dynamic programming PDE. Using simulations calibrated to historical exchange trading data, I show that the algorithm (i) reproduces the analytical solution for the case of random walk prices with a linear absolute price impact function and (ii) matches the output of classical dynamic programming methods for the case of geometric Brownian motion prices with linear relative price impact. In the most relevant case, when a signal containing information about prices is introduced to the environment, traditional computational methods become intractable. My algorithm still finds the optimal execution policy, leading to a statistically and economically meaningful reduction in trading costs. (An illustrative sketch of the benchmark setting in (i) appears below the record.)
dc.description.statementofresponsibility: by Daniel Elkind.
dc.format.extent: 29 pages
dc.language.iso: eng
dc.publisher: Massachusetts Institute of Technology
dc.rights: MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source but further reproduction or distribution in any format is prohibited without written permission.
dc.rights.uri: http://dspace.mit.edu/handle/1721.1/7582
dc.subject: Sloan School of Management.
dc.title: A reinforcement learning algorithm for efficient dynamic trading execution in the presence of signals
dc.type: Thesis
dc.description.degree: S.M. in Management Research
dc.contributor.department: Sloan School of Management
dc.identifier.oclc: 1149013871
dc.description.collection: S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management
dspace.imported: 2020-04-13T18:28:21Z
mit.thesis.degree: Master
mit.thesis.department: Sloan
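
The benchmark case (i) in the abstract, random-walk prices with a linear absolute price impact function, is the setting studied by Bertsimas and Lo (1998), where the analytical optimum is to sell in equal slices. The sketch below is a minimal, hypothetical illustration of how a tabular Q-learning agent can recover that schedule in a toy version of the execution problem. It is not the thesis's algorithm; every parameter (T, Q0, theta, sigma, the episode count) is an assumed value chosen only for the example.

    import numpy as np

    # Toy execution MDP (illustrative assumptions, not the thesis's setup):
    # liquidate Q0 = 10 lots over T = 5 periods. State = (period, inventory);
    # the price level is omitted from the state because, with random-walk
    # prices and linear permanent impact, the optimal schedule does not
    # depend on the current price.
    T, Q0 = 5, 10
    theta, sigma, p0 = 0.1, 1.0, 100.0     # impact per lot, vol, start price
    gamma = 1.0                            # undiscounted finite horizon
    rng = np.random.default_rng(0)

    # Q-table over (period, inventory, lots sold); invalid actions stay -inf.
    Q = np.full((T, Q0 + 1, Q0 + 1), -np.inf)
    N = np.zeros_like(Q)                   # visit counts for 1/n step sizes
    for t in range(T):
        for q in range(Q0 + 1):
            if t == T - 1:
                Q[t, q, q] = 0.0           # final period: forced to sell all
            else:
                Q[t, q, : q + 1] = 0.0     # may sell any 0..q lots

    def step(q, x, p):
        """Sell x lots: the price is permanently dented by theta * x."""
        p_new = p - theta * x + sigma * rng.standard_normal()
        return q - x, p_new, x * p_new     # proceeds booked at impacted price

    for episode in range(200_000):
        q, p = Q0, p0
        for t in range(T):
            valid = np.where(np.isfinite(Q[t, q]))[0]
            if rng.random() < 0.1:         # epsilon-greedy exploration
                x = int(rng.choice(valid))
            else:
                x = int(valid[np.argmax(Q[t, q, valid])])
            q_next, p, r = step(q, x, p)
            target = r if t == T - 1 else r + gamma * np.max(Q[t + 1, q_next])
            N[t, q, x] += 1
            Q[t, q, x] += (target - Q[t, q, x]) / N[t, q, x]
            q = q_next

    # Greedy schedule from the learned table; the analytical solution for
    # this setting is equal slices, i.e. roughly [2, 2, 2, 2, 2].
    q, schedule = Q0, []
    for t in range(T):
        x = int(np.argmax(Q[t, q]))        # -inf entries never win the argmax
        schedule.append(x)
        q -= x
    print("learned schedule:", schedule)

With enough episodes the greedy schedule settles near the equal-slice solution. The appeal of the RL approach described in the abstract is that a predictive signal can be accommodated by simply extending the state (t, q) with a signal coordinate, whereas classical dynamic programming over the enlarged state space becomes intractable.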

