A fast deep learning system using GPU

Document Type

Conference Proceeding

Date of Original Version

1-1-2014

Abstract

The invention of the deep belief network (DBN) provides a powerful tool for data modeling. The key advantage of a DBN is that it is driven by training data only, which relieves researchers of the routine of devising explicit models or features for data with complicated distributions. However, as the dimensionality and quantity of data increase, the computing load of training a DBN grows rapidly. The remarkable computing power provided by modern GPU devices can reduce the training time of a DBN significantly, and the growing availability of highly efficient computational libraries lends further support to GPU-based parallel computing. Moreover, a GPU server is more affordable and accessible than a computer cluster or supercomputer. In this paper, we implement a variant of the DBN, called the folded-DBN, on NVIDIA's Tesla K20 GPU. In our simulations, two databases are used to train the folded-DBNs on both CPU and GPU platforms. Comparing the execution time of the fine-tuning process, the GPU implementation achieves a 7 to 11 times speedup over the CPU platform. © 2014 IEEE.
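The CPU-versus-GPU fine-tuning comparison described above can be illustrated with a small benchmark. The following is a minimal sketch only: it assumes a PyTorch-style multilayer network as a stand-in for the unrolled folded-DBN and uses synthetic data in place of the two databases; the library, layer sizes, batch size, and learning rate are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical benchmark sketch: time one fine-tuning (backpropagation) epoch
# of a DBN-style multilayer network on CPU vs. GPU. PyTorch and all sizes are
# assumptions for illustration, not the implementation used in the paper.
import time
import torch
import torch.nn as nn

def fine_tune_epoch(model, data, labels, device):
    """Run one epoch of supervised fine-tuning and return wall-clock seconds."""
    model = model.to(device)
    data, labels = data.to(device), labels.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    start = time.time()
    for i in range(0, data.size(0), 128):          # mini-batches of 128
        x, y = data[i:i + 128], labels[i:i + 128]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()                   # wait for queued GPU kernels
    return time.time() - start

# Placeholder architecture and data, not the paper's configuration.
model = nn.Sequential(nn.Linear(784, 500), nn.Sigmoid(),
                      nn.Linear(500, 500), nn.Sigmoid(),
                      nn.Linear(500, 10))
data = torch.randn(10000, 784)
labels = torch.randint(0, 10, (10000,))

t_cpu = fine_tune_epoch(model, data, labels, torch.device("cpu"))
if torch.cuda.is_available():
    t_gpu = fine_tune_epoch(model, data, labels, torch.device("cuda"))
    print(f"CPU: {t_cpu:.2f}s  GPU: {t_gpu:.2f}s  speedup: {t_cpu / t_gpu:.1f}x")
```

The reported 7 to 11 times speedup refers to the paper's own measurements on the Tesla K20; any numbers produced by a sketch like this depend entirely on the hardware and network size used.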

Publication Title

Proceedings - IEEE International Symposium on Circuits and Systems
