A visual long-short-term memory based integrated CNN model for fabric defect image classification
Fabric defect classification is traditionally performed by human visual inspection, which is inefficient and labor-intensive. Intelligent, automated methods for this task have therefore become an active research topic. With the increasing diversity of fabric defects, effective methods that classify defects with higher accuracy are urgently needed to help ensure the quality of fabric products. Because fabric defects are often not obvious against the textured background, and many defect types are easily confused with one another, a visual long-short-term memory (VLSTM) based integrated CNN model is proposed in this paper. Inspired by human visual perception and visual memory mechanisms, three categories of features are extracted: visual perception (VP) information extracted by stacked convolutional auto-encoders (SCAE), visual short-term memory (VSTM) information characterized by a shallow convolutional neural network (CNN), and visual long-term memory (VLTM) information characterized by non-local neural networks. Experimental results on three fabric defect datasets show that the proposed model achieves results competitive with current state-of-the-art methods for fabric defect classification.
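The abstract describes three feature branches (SCAE-based VP, shallow-CNN VSTM, and non-local VLTM) that are combined into one integrated model. The sketch below is a minimal, hypothetical NumPy illustration of that three-branch idea, not the paper's implementation: the stand-in operations (a linear-ReLU encoder, a moving-average "convolution", a simplified dot-product non-local block) and the fusion by concatenation are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x):
    """Simplified non-local operation (VLTM stand-in): every position
    aggregates features from all positions, weighted by similarity."""
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))  # pairwise similarity
    return x + attn @ x                            # residual aggregation

def shallow_cnn(x):
    """Shallow-CNN stand-in (VSTM branch): one 'convolution' as a
    local moving average along the spatial axis, per channel."""
    kernel = np.ones(3) / 3.0
    return np.stack([np.convolve(x[:, c], kernel, mode="same")
                     for c in range(x.shape[1])], axis=1)

def scae_encoder(x, w):
    """SCAE-encoder stand-in (VP branch): a single linear
    projection followed by a ReLU."""
    return np.maximum(x @ w, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))   # 16 spatial positions, 8 channels
w = rng.standard_normal((8, 8))    # hypothetical encoder weights

vp = scae_encoder(x, w)            # visual perception features
vstm = shallow_cnn(x)              # visual short-term memory features
vltm = non_local_block(x)          # visual long-term memory features

# Integrate the three branches; here simply by channel concatenation.
fused = np.concatenate([vp, vstm, vltm], axis=1)
print(fused.shape)                 # (16, 24)
```

In the actual model, each branch would be a trained deep network and the fused representation would feed a classifier; this sketch only shows how three complementary feature maps over the same spatial positions can be combined.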
Zhao, Yudi, Kuangrong Hao, Haibo He, Xuesong Tang, and Bing Wei. "A visual long-short-term memory based integrated CNN model for fabric defect image classification." Neurocomputing 380, (2020): 259-270. doi:10.1016/j.neucom.2019.10.067.