Show simple item record

dc.contributor.author: Wang, Shenhao
dc.contributor.author: Mo, Baichuan
dc.contributor.author: Zhao, Jinhua
dc.date.accessioned: 2021-10-27T20:34:24Z
dc.date.available: 2021-10-27T20:34:24Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/136235
dc.description.abstract: © 2020 Elsevier Ltd. While deep neural networks (DNNs) are increasingly applied to choice analysis, it is challenging to reconcile domain-specific behavioral knowledge with general-purpose DNNs, to improve DNNs' interpretability and predictive power, and to identify effective regularization methods for specific tasks. To address these challenges, this study demonstrates the use of behavioral knowledge for designing a particular DNN architecture with alternative-specific utility functions (ASU-DNN), thereby improving both predictive power and interpretability. Unlike a fully connected DNN (F-DNN), which computes the utility value of an alternative k by using the attributes of all the alternatives, ASU-DNN computes it by using only k's own attributes. Theoretically, ASU-DNN can substantially reduce the estimation error of F-DNN because of its lighter architecture and sparser connectivity, although the constraint of alternative-specific utility can cause ASU-DNN to exhibit a larger approximation error. Empirically, ASU-DNN has 2–3% higher prediction accuracy than F-DNN over the whole hyperparameter space in a private dataset collected in Singapore and a public dataset available in the R mlogit package. The alternative-specific connectivity is associated with the independence of irrelevant alternatives (IIA) constraint, which, as a domain-knowledge-based regularization method, is more effective than the most popular general-purpose explicit and implicit regularization methods and architectural hyperparameters. ASU-DNN provides a more regular substitution pattern of travel mode choices than F-DNN does, rendering ASU-DNN more interpretable. The comparison between ASU-DNN and F-DNN also aids in testing behavioral knowledge. Our results reveal that individuals are more likely to compute utility by using an alternative's own attributes, supporting the long-standing practice in choice modeling. Overall, this study demonstrates that behavioral knowledge can guide the architecture design of DNNs, function as an effective domain-knowledge-based regularization method, and improve both the interpretability and predictive power of DNNs in choice analysis. Future studies can explore the generalizability of ASU-DNN and other possibilities of using utility theory to design DNN architectures.
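The connectivity difference the abstract describes can be illustrated with a minimal numpy sketch (not the authors' implementation; layer sizes, weight initialization, and variable names are illustrative). F-DNN maps the concatenated attributes of all K alternatives to K utilities, while ASU-DNN routes each alternative's attributes through its own sub-network, so alternative k's utility depends only on its own inputs:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(u):
    e = np.exp(u - u.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n, K, d = 4, 3, 5                  # samples, alternatives, attributes per alternative
X = rng.normal(size=(n, K, d))     # X[:, k, :] are alternative k's own attributes

# F-DNN: one fully connected network maps ALL K*d attributes to K utilities,
# so every alternative's utility sees every attribute.
W1_f = rng.normal(size=(K * d, 16))
W2_f = rng.normal(size=(16, K))
U_fdnn = relu(X.reshape(n, -1) @ W1_f) @ W2_f

# ASU-DNN: alternative k's utility is computed by its own sub-network
# from X[:, k, :] alone (sparser, alternative-specific connectivity).
W1 = [rng.normal(size=(d, 16)) for _ in range(K)]
W2 = [rng.normal(size=(16, 1)) for _ in range(K)]
U_asu = np.concatenate(
    [relu(X[:, k, :] @ W1[k]) @ W2[k] for k in range(K)], axis=1)

# Choice probabilities via softmax over utilities, as in a logit model.
P_fdnn, P_asu = softmax(U_fdnn), softmax(U_asu)
```

Because each ASU-DNN utility ignores the other alternatives' attributes, perturbing alternative j's inputs leaves every other utility unchanged, which is the IIA-style regularity the paper associates with this architecture.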
dc.language.iso: en
dc.publisher: Elsevier BV
dc.relation.isversionof: 10.1016/j.trc.2020.01.012
dc.rights: Creative Commons Attribution-NonCommercial-NoDerivs License
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source: Other repository
dc.title: Deep neural networks for choice analysis: Architecture design with alternative-specific utility functions
dc.type: Article
dc.contributor.department: Massachusetts Institute of Technology. Media Laboratory
dc.contributor.department: Massachusetts Institute of Technology. Department of Urban Studies and Planning
dc.relation.journal: Transportation Research Part C: Emerging Technologies
dc.eprint.version: Author's final manuscript
dc.type.uri: http://purl.org/eprint/type/JournalArticle
eprint.status: http://purl.org/eprint/status/PeerReviewed
dc.date.updated: 2020-08-31T12:31:47Z
dspace.orderedauthors: Wang, S; Mo, B; Zhao, J
dspace.date.submission: 2020-08-31T12:31:50Z
mit.journal.volume: 112
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed
