dc.contributor.advisor | Leonid Kogan. | en_US |
dc.contributor.author | Xu, Zihao, S.M., Massachusetts Institute of Technology. | en_US |
dc.contributor.other | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science. | en_US |
dc.date.accessioned | 2020-09-15T21:57:58Z | |
dc.date.available | 2020-09-15T21:57:58Z | |
dc.date.copyright | 2020 | en_US |
dc.date.issued | 2020 | en_US |
dc.identifier.uri | https://hdl.handle.net/1721.1/127437 | |
dc.description | Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, May, 2020 | en_US |
dc.description | Cataloged from the official PDF of thesis. | en_US |
dc.description | Includes bibliographical references (pages 53-54). | en_US |
dc.description.abstract | The study of markets with adverse selection risk is an appealing topic in the field of market microstructure. Multiple theoretical models have been proposed to address this issue, such as the Kyle model (1985) and the Glosten-Milgrom model (1985). The main goal of these models is to provide an optimal pricing strategy under the market conditions they assume. However, the market is a competitive but not always efficient environment, and an optimal pricing strategy alone cannot provide enough insight into markets with multiple interacting agents. Moreover, these theoretical models cannot easily be extended to other, more complex markets. In our work, we apply deep reinforcement learning techniques to train neural agents in a multi-agent environment we designed. The results show that the neural agents can learn the best strategy conditioned on the pricing behaviors of their competitors. This suggests a new approach to studying the price formation process in complex markets. | en_US |
dc.description.statementofresponsibility | by Zihao Xu. | en_US |
dc.format.extent | 54 pages | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Massachusetts Institute of Technology | en_US |
dc.rights | MIT theses may be protected by copyright. Please reuse MIT thesis content according to the MIT Libraries Permissions Policy, which is available through the URL provided. | en_US |
dc.rights.uri | http://dspace.mit.edu/handle/1721.1/7582 | en_US |
dc.subject | Electrical Engineering and Computer Science. | en_US |
dc.title | Reinforcement learning in the market with adverse selection | en_US |
dc.type | Thesis | en_US |
dc.description.degree | S.M. | en_US |
dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
dc.identifier.oclc | 1192966235 | en_US |
dc.description.collection | S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science | en_US |
dspace.imported | 2020-09-15T21:57:58Z | en_US |
mit.thesis.degree | Master | en_US |
mit.thesis.department | EECS | en_US |