Document Type
Conference Proceeding
Date of Original Version
1-1-2012
Abstract
Dimensionality reduction has been a long-standing research topic in academia and industry for two major reasons. First, in almost every domain, from biology, social science, and economics to military data processing applications, the increasingly large volume of data is challenging existing computing capability and raising computing costs. Second, the notion of an "intrinsic structure" allows us to remove redundant dimensions from high-dimensional observations and reduce them to low-dimensional features without significant information loss. The autoencoder, a powerful tool for dimensionality reduction, has been intensively applied to image reconstruction, missing-data recovery, and classification. In this paper, we propose a new structure, the folded autoencoder, based on the symmetric structure of the conventional autoencoder, for dimensionality reduction. The new structure reduces the number of weights to be tuned and thus lowers the computational cost. Simulation results on the MNIST benchmark validate the effectiveness of this structure. © 2012 Published by Elsevier B.V.
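The idea summarized in the abstract, exploiting the symmetry of the autoencoder so that fewer weights need to be tuned, is commonly realized by tying the decoder weights to the transpose of the encoder weights. The sketch below is a minimal, hypothetical PyTorch illustration of such a weight-tied autoencoder on flattened 28x28 inputs (e.g., MNIST); it is not the authors' exact folded architecture, and the layer sizes and training settings are assumptions for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoencoder(nn.Module):
    """Weight-tied ("folded") autoencoder sketch: the decoder reuses the
    transposed encoder weight matrix, roughly halving the trainable weights."""

    def __init__(self, n_input=784, n_hidden=64):
        super().__init__()
        # Single weight matrix shared by encoder and decoder.
        self.W = nn.Parameter(torch.randn(n_hidden, n_input) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(n_hidden))
        self.b_dec = nn.Parameter(torch.zeros(n_input))

    def forward(self, x):
        h = torch.sigmoid(F.linear(x, self.W, self.b_enc))          # encode: x -> h
        x_hat = torch.sigmoid(F.linear(h, self.W.t(), self.b_dec))  # decode with W^T
        return x_hat

# Minimal training loop on random stand-in data (substitute flattened MNIST images).
model = TiedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(256, 784)  # placeholder batch of 28x28 images, flattened
for step in range(100):
    opt.zero_grad()
    loss = F.mse_loss(model(x), x)  # reconstruction error
    loss.backward()
    opt.step()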
Publication Title, e.g., Journal
Procedia Computer Science
Volume
13
Citation/Publisher Attribution
Wang, Jing, Haibo He, and Danil V. Prokhorov. "A folded neural network autoencoder for dimensionality reduction." Procedia Computer Science 13 (2012): 120-127. doi: 10.1016/j.procs.2012.09.120.
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.