Abstract:
For the classification of unlabeled high-dimensional images, commonly used deep neural networks have difficulty producing good results on unlabeled datasets. This paper proposes an unsupervised feature-level domain adaptation method based on generative adversarial networks (Feature-GAN), which learns a feature-level transformation from one domain to another in an unsupervised manner. It maps source-domain image features to target-domain image features while preserving label information, and the generated labeled features can then be used to train a classifier adapted to the target domain. The model avoids generating the images themselves in complex image domain adaptation problems and instead focuses on feature generation, which makes it easy to train and highly stable. Experiments show that the proposed method applies broadly to complex image classification scenarios and outperforms traditional sample-generation-based unsupervised domain adaptation algorithms in accuracy, convergence speed, and stability.
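
The following is a minimal sketch of the feature-level adaptation idea summarized above, written in PyTorch. The module names (G, D, C), dimensions, and the single adversarial-plus-classification loss are illustrative assumptions for exposition, not the paper's exact architecture or training procedure.

```python
# Hedged sketch: feature-level GAN for domain adaptation (assumed setup).
import torch
import torch.nn as nn

FEAT_DIM, NUM_CLASSES = 256, 10  # assumed feature and label dimensions

# Generator: maps source-domain features to target-style features.
G = nn.Sequential(nn.Linear(FEAT_DIM, FEAT_DIM), nn.ReLU(), nn.Linear(FEAT_DIM, FEAT_DIM))
# Discriminator: distinguishes real target features from generated ones.
D = nn.Sequential(nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
# Classifier: trained on generated features with the source labels kept.
C = nn.Linear(FEAT_DIM, NUM_CLASSES)

opt_g = torch.optim.Adam(list(G.parameters()) + list(C.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()

def train_step(src_feat, src_label, tgt_feat):
    # 1) Update discriminator: real target features vs. generated features.
    fake = G(src_feat).detach()
    d_loss = bce(D(tgt_feat), torch.ones(tgt_feat.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update generator and classifier: fool D while keeping label
    #    information, so the classifier adapts to target-style features.
    fake = G(src_feat)
    g_loss = bce(D(fake), torch.ones(fake.size(0), 1)) + ce(C(fake), src_label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage: random tensors stand in for features from a pretrained extractor.
src_f, src_y = torch.randn(32, FEAT_DIM), torch.randint(0, NUM_CLASSES, (32,))
tgt_f = torch.randn(32, FEAT_DIM)
print(train_step(src_f, src_y, tgt_f))
```

Because the adversarial game is played in the (lower-dimensional) feature space rather than in pixel space, the generator and discriminator here can be small MLPs, which is consistent with the stated ease of training and stability.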