An Information Theoretic Interpretation to Deep Neural Networks
Author(s)
Huang, Shao-Lun; Xu, Xiangxiang; Zheng, Lizhong; Wornell, Gregory W.
Terms of use
Open Access Policy; Creative Commons Attribution-Noncommercial-Share Alike
Abstract
© 2019 IEEE. It is commonly believed that the hidden layers of deep neural networks (DNNs) attempt to extract informative features for learning tasks. In this paper, we formalize this intuition by showing that, in a local analysis regime, the features extracted by DNNs coincide with the solution of an optimization problem that we call the "universal feature selection" problem. We interpret the training of the weights in a DNN as the projection of feature functions between feature spaces specified by the network structure. Our formulation has a direct operational meaning in terms of performance on inference tasks, and it provides interpretations of the internal computations of DNNs. Results of numerical experiments are provided to support the analysis.
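As a concrete illustration of the optimization problem named in the abstract: in the discrete, local-analysis setting, the universal feature selection problem can be solved by a singular value decomposition of the canonical dependence matrix built from the joint distribution of the data X and the label Y. The NumPy sketch below is illustrative only, not the authors' code; the function name universal_features and the random example distribution are our assumptions, and it assumes the optimal k-dimensional features are the top singular vectors of this matrix, rescaled by the marginals.

import numpy as np

# A minimal sketch of universal feature selection in the local regime,
# assuming the optimal features come from the SVD of the canonical
# dependence matrix
#   B(y, x) = (P_XY(x, y) - P_X(x) P_Y(y)) / sqrt(P_X(x) P_Y(y)).
# All names here are illustrative, not the authors' implementation.

def universal_features(p_xy, k):
    """Return top-k feature functions f(x), g(y) from a joint pmf p_xy."""
    p_x = p_xy.sum(axis=1)          # marginal P_X, shape (|X|,)
    p_y = p_xy.sum(axis=0)          # marginal P_Y, shape (|Y|,)

    # Canonical dependence matrix, shape (|Y|, |X|); it is the zero
    # matrix exactly when X and Y are independent.
    b = (p_xy.T - np.outer(p_y, p_x)) / np.sqrt(np.outer(p_y, p_x))

    # Because the product-of-marginals term is subtracted, the trivial
    # singular pair is removed and the top singular value of b is the
    # HGR maximal correlation between X and Y.
    u, s, vt = np.linalg.svd(b)

    # Rescale singular vectors into feature functions that are zero-mean
    # and unit-variance under the marginals: f_i(x) = v_i(x) / sqrt(P_X(x)).
    f = vt[:k] / np.sqrt(p_x)       # k feature functions of x
    g = u[:, :k].T / np.sqrt(p_y)   # k feature functions of y
    return f, g, s[:k]

# Example usage on a random joint pmf over a 4 x 3 alphabet.
rng = np.random.default_rng(0)
p_xy = rng.random((4, 3))
p_xy /= p_xy.sum()
f, g, sigma = universal_features(p_xy, k=2)
print(sigma)  # top-2 singular values; sigma[0] is the maximal correlation

Under this reading, the paper's claim is that a DNN trained for classification in the local regime extracts (approximately) the same f and g as this decomposition, with the network weights performing the projection between the two feature spaces.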
Date issued
2019-07
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Journal
IEEE International Symposium on Information Theory - Proceedings
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Huang, Shao-Lun, Xu, Xiangxiang, Zheng, Lizhong and Wornell, Gregory W. 2019. "An Information Theoretic Interpretation to Deep Neural Networks." IEEE International Symposium on Information Theory - Proceedings, 2019-July.
Version: Original manuscript