ICCV19-Paper-Review

Summaries of ICCV 2019 papers.

Drop to Adapt: Learning Discriminative Features for Unsupervised Domain Adaptation

Purbayan Chowdhury

With the increased demand for synthetic datasets in training deep neural networks, domain adaptation has become a prominent area of research. A model trained on a source domain often performs poorly on a different target domain; this degradation is known as domain shift.

Unsupervised domain adaptation is commonly handled with domain adversarial training, where an auxiliary domain discriminator encourages a domain-invariant, but not necessarily discriminative, feature representation. This paper instead enforces the cluster assumption (decision boundaries should be placed in low-density regions of the feature space) through Drop to Adapt (DTA), which relies on adversarial dropout: element-wise adversarial dropout for fully-connected layers and channel-wise adversarial dropout for convolutional layers.
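As a rough illustration of channel-wise adversarial dropout, the sketch below zeroes the convolutional channels with the highest saliency scores. This is a simplification: in the actual method the mask is chosen adversarially, to maximize the divergence between the clean and masked predictions, and the saliency scores here (hypothetical inputs, e.g. gradient-based importance estimates) merely stand in for that selection criterion.

```python
import numpy as np

def channel_wise_adversarial_dropout(activations, saliency, drop_rate):
    """Zero out the most influential channels of a feature map.

    activations: array of shape (C, H, W) from a convolutional layer.
    saliency:    array of shape (C,), a per-channel importance score
                 (a stand-in for the adversarial selection in DTA).
    drop_rate:   fraction of channels to drop.
    """
    c = activations.shape[0]
    k = max(1, int(round(drop_rate * c)))
    # Channels with the largest saliency are dropped, approximating the
    # mask that would perturb the prediction the most.
    drop = np.argsort(saliency)[-k:]
    mask = np.ones(c, dtype=activations.dtype)
    mask[drop] = 0.0
    # Broadcasting applies the same on/off decision across each channel,
    # which is what distinguishes channel-wise from element-wise dropout.
    return activations * mask[:, None, None], mask
```

Training would then penalize the divergence between predictions on the clean and masked activations, pushing decision boundaries away from dense feature regions.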

Proposed Method

Experimental Results

On Small Datasets

SVHN ⟶ MNIST - MNIST consists of grayscale handwritten digit images, while SVHN consists of coloured images of street house numbers.

MNIST ⟷ USPS - MNIST and USPS contain grayscale images.

CIFAR ⟷ STL - CIFAR and STL are 10-class object recognition datasets with coloured images.

Results of experiment on small image datasets.

A substantial margin of improvement is achieved over the source-only model across all domain configurations. In four of the five configurations, the proposed method outperforms recent state-of-the-art results.

On Large Datasets

The proposed method clearly improves upon the mIoU (mean intersection-over-union) of not only the source-only model but also competing methods. Even with the same training procedure and settings as in the classification experiments, DTA is extremely effective at adapting the most common classes in the dataset.
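For reference, mIoU is the per-class intersection-over-union averaged over classes. A minimal sketch of the metric (standard definition, not code from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for segmentation label maps.

    pred, target: integer class-label arrays of the same shape.
    Classes absent from both prediction and target are skipped,
    so they do not drag the average down.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

Because the average weights every class equally, improving rare classes matters as much as improving the common ones, which is why adapting only the most frequent classes still leaves headroom in mIoU.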

For code, visit this link.