Deep associative neural network for associative memory based on unsupervised representation learning
Document Type
Article
Date of Original Version
5-1-2019
Abstract
This paper presents a deep associative neural network (DANN) based on unsupervised representation learning for associative memory. In the brain, knowledge is learned by associating different types of sensory data, such as images and voices. Associative memory models that imitate this learning process have been studied for decades, but their simple architectures prevent them from handling large-scale complex data as well as deep neural networks do. We therefore define a deep architecture consisting of a perception layer and hierarchical propagation layers. To learn the network parameters, we define a probabilistic model for the whole network, inspired by unsupervised representation learning models. The model is optimized by a modified contrastive divergence algorithm with a novel iterated sampling process. After training, given new or corrupted data, the network associates the correct label or recovers the corrupted part. The DANN can address a range of machine learning problems, including not only classification but also depicting the data given a label and recovering corrupted images. Experiments on the MNIST digits and CIFAR-10 datasets demonstrate the learning capability of the proposed DANN.
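For orientation, the sketch below shows a standard contrastive divergence (CD-1) update for a single RBM-style layer with sigmoid units, the family of training rules the abstract's "modified contrastive divergence algorithm" builds on. It is a minimal illustration only: the paper's DANN architecture, its probabilistic model, and its iterated sampling process are not reproduced here, and the layer sizes, learning rate, and class/function names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBMLayer:
    """Single restricted Boltzmann machine layer with binary units (illustrative, not the paper's model)."""
    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.rng = rng

    def sample_h(self, v):
        """Hidden activation probabilities and a binary sample given visible states."""
        p = sigmoid(v @ self.W + self.b_h)
        return p, (self.rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        """Visible activation probabilities and a binary sample given hidden states."""
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (self.rng.random(p.shape) < p).astype(float)

    def cd1_update(self, v0, lr=0.05):
        """One CD-1 step: positive phase on data, negative phase on a one-step reconstruction."""
        ph0, h0 = self.sample_h(v0)
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        batch = v0.shape[0]
        # Gradient approximation: <v h>_data - <v h>_reconstruction
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
        self.b_v += lr * (v0 - pv1).mean(axis=0)
        self.b_h += lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error as a rough progress signal

# Usage: layer-wise training on 784-dimensional binarized inputs (e.g., MNIST-sized vectors);
# random data stands in for a real dataset here.
if __name__ == "__main__":
    X = (np.random.default_rng(1).random((128, 784)) > 0.5).astype(float)
    rbm = RBMLayer(n_visible=784, n_hidden=256)
    for epoch in range(5):
        err = rbm.cd1_update(X)
        print(f"epoch {epoch}: reconstruction error {err:.4f}")
```

In a deep associative setting, layers of this kind are typically stacked and trained greedily, with higher layers learning representations of the layer below; the paper's contribution lies in how the whole stack is treated as one probabilistic model and sampled during training.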
Publication Title, e.g., Journal
Neural Networks
Volume
113
Citation/Publisher Attribution
Liu, Jia, Maoguo Gong, and Haibo He. "Deep associative neural network for associative memory based on unsupervised representation learning." Neural Networks 113 (2019): 41-53. doi: 10.1016/j.neunet.2019.01.004.