One-Shot Unsupervised Domain Adaptation for Object Detection
Document Type
Conference Proceeding
Date of Original Version
7-1-2020
Abstract
Existing unsupervised domain adaptation (UDA) methods require not only labeled source samples but also a large number of unlabeled target samples for domain adaptation. Collecting these target samples is generally time-consuming, which hinders the rapid deployment of these UDA methods in new domains. Moreover, most of these UDA methods are developed for image classification. In this paper, we address a new problem called one-shot unsupervised domain adaptation for object detection, where only one unlabeled target sample is available. To the best of our knowledge, this is the first time this problem has been investigated. To solve this problem, a one-shot feature alignment (OSFA) algorithm is proposed to align the low-level features of the source domain and the target domain. Specifically, the domain shift is reduced by aligning the average activation of the feature maps in the lower layers of the CNN. The proposed OSFA is evaluated under two scenarios: adapting from clear weather to foggy weather, and adapting from synthetic images to real-world images. Experimental results show that the proposed OSFA significantly improves object detection performance in the target domain compared to the baseline model without domain adaptation.
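The core idea described in the abstract, aligning the average activation of low-level feature maps between domains, can be sketched as follows. This is an illustrative reconstruction only, not the authors' implementation: the use of per-channel means and an L1 distance as the alignment loss are assumptions made for the sketch.

```python
import numpy as np

def osfa_alignment_loss(feat_src, feat_tgt):
    """Illustrative one-shot feature alignment loss.

    Assumption (not from the paper's released code): the "average
    activation" is the per-channel mean of a low-level feature map,
    and alignment is the L1 distance between source and target means.
    feat_src, feat_tgt: arrays of shape (C, H, W).
    """
    mu_src = feat_src.mean(axis=(1, 2))  # average activation per channel
    mu_tgt = feat_tgt.mean(axis=(1, 2))
    return float(np.abs(mu_src - mu_tgt).mean())

# Toy example: target features shifted by a constant, mimicking a
# global appearance change such as fog over a clear-weather image.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (64, 32, 32))  # hypothetical source feature maps
tgt = rng.normal(0.5, 1.0, (64, 32, 32))  # hypothetical shifted target maps
loss = osfa_alignment_loss(src, tgt)
```

In training, such a loss would be minimized jointly with the detection loss on the labeled source data, so the lower layers learn features whose statistics match the single target sample.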
Publication Title, e.g., Journal
Proceedings of the International Joint Conference on Neural Networks
Citation/Publisher Attribution
Wan, Zhiqiang, Lusi Li, Hepeng Li, Haibo He, and Zhen Ni. "One-Shot Unsupervised Domain Adaptation for Object Detection." Proceedings of the International Joint Conference on Neural Networks (2020). doi: 10.1109/IJCNN48605.2020.9207244.