An Information Theoretic Interpretation to Deep Neural Networks
Author(s): Huang, Shao-Lun; Xu, Xiangxiang; Zheng, Lizhong; Wornell, Gregory W.
© 2019 IEEE. It is commonly believed that the hidden layers of deep neural networks (DNNs) attempt to extract informative features for learning tasks. In this paper, we formalize this intuition by showing that the features extracted by DNNs coincide with the solution of an optimization problem, which we call the "universal feature selection" problem, in a local analysis regime. We interpret the weight training in DNNs as the projection of feature functions between feature spaces, specified by the network structure. Our formulation has a direct operational meaning in terms of performance on inference tasks, and gives interpretations to the internal computation results of DNNs. Results of numerical experiments are provided to support the analysis.
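The abstract's "universal feature selection" problem can be illustrated with a minimal sketch. In this line of work, the informative features of a discrete joint distribution are obtained from the singular vectors of the matrix with entries P(x, y) / sqrt(P(x) P(y)); the top singular value is always 1 with trivial (constant) feature functions, and the informative features come from the subsequent singular vectors. The toy joint distribution below is an assumed example, not from the paper:

```python
import numpy as np

# Toy joint distribution P(X, Y) over |X| = 4, |Y| = 3 (assumed example).
P = np.array([[0.10, 0.05, 0.05],
              [0.05, 0.15, 0.05],
              [0.05, 0.05, 0.15],
              [0.10, 0.10, 0.10]])
Px = P.sum(axis=1)   # marginal distribution of X
Py = P.sum(axis=0)   # marginal distribution of Y

# Matrix B[x, y] = P(x, y) / sqrt(P(x) P(y)); its singular structure
# characterizes the most informative features in the local regime.
B = P / np.sqrt(np.outer(Px, Py))

U, s, Vt = np.linalg.svd(B)

# Top singular value is 1, with singular vectors sqrt(Px), sqrt(Py)
# (constant feature functions). The second singular vectors, rescaled
# by the marginals, give zero-mean feature functions f(x) and g(y).
f = U[:, 1] / np.sqrt(Px)
g = Vt[1, :] / np.sqrt(Py)

print(s)  # leading entry is 1.0; the rest measure X-Y dependence
```

Here `f` and `g` are the optimal one-dimensional features of X and Y in this framework; a DNN's hidden-layer features are shown in the paper to align with such solutions in the local analysis regime.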
IEEE International Symposium on Information Theory - Proceedings
Institute of Electrical and Electronics Engineers (IEEE)
Huang, Shao-Lun, Xu, Xiangxiang, Zheng, Lizhong and Wornell, Gregory W. 2019. "An Information Theoretic Interpretation to Deep Neural Networks." IEEE International Symposium on Information Theory - Proceedings, 2019-July.