Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

Konstantinos Bousmalis¹, Nathan Silberman², David Dohan³, Dumitru Erhan⁴, Dilip Krishnan⁵

¹ Google Brain, London, UK
² Google Research, New York, NY
³ Google Brain, Mountain View, CA
⁴ Google Brain, San Francisco, CA
⁵ Google Research, Cambridge, MA
Publication type: Proceedings Article
Publication date: 2017-07-01
Abstract
Collecting well-annotated image datasets to train modern machine learning algorithms is prohibitively expensive for many tasks. One appealing alternative is rendering synthetic data where ground-truth annotations are generated automatically. Unfortunately, models trained purely on rendered images fail to generalize to real images. To address this shortcoming, prior work introduced unsupervised domain adaptation algorithms that have tried to either map representations between the two domains or learn to extract features that are domain-invariant. In this work, we approach the problem in a new light by learning in an unsupervised manner a transformation in the pixel space from one domain to the other. Our generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain. Our approach not only produces plausible samples, but also outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins. Finally, we demonstrate that the adaptation process generalizes to object classes unseen during training.
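To make the approach concrete, the sketch below illustrates the kind of training loop the abstract describes: a generator maps a source image plus a noise vector to a "target-like" image, a discriminator tries to tell adapted images from real target images, and a task classifier is trained on the adapted images with the original source labels. This is a minimal illustration assuming PyTorch; the architectures, hyperparameters, and names (Generator, Discriminator, train_step) are placeholders for exposition, not the authors' released implementation.

```python
# Minimal sketch of GAN-based pixel-level domain adaptation (illustrative,
# not the paper's exact model). Assumes PyTorch; all architectures and
# hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Maps (source image, noise vector) to a 'target-like' image."""
    def __init__(self, noise_dim=10, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch + noise_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, ch, 3, padding=1), nn.Tanh())

    def forward(self, x, z):
        # Broadcast the noise vector over the spatial grid, then concatenate
        # it with the image channels so the generator can vary its output.
        z_map = z[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, z_map], dim=1))

class Discriminator(nn.Module):
    """Outputs a real/fake logit: real target image vs. adapted source image."""
    def __init__(self, ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1))

    def forward(self, x):
        return self.net(x)

def train_step(G, D, T, opt_g, opt_d, x_src, y_src, x_tgt, noise_dim=10):
    """One adversarial update. T is any classifier for the downstream task;
    opt_g is assumed to optimize the parameters of both G and T."""
    z = torch.randn(x_src.size(0), noise_dim, device=x_src.device)
    x_fake = G(x_src, z)  # adapted ("target-like") source images

    # Discriminator step: real target images vs. adapted source images.
    d_real, d_fake = D(x_tgt), D(x_fake.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator + task step: fool D while keeping the source labels
    # predictable, since the pixel-space mapping leaves annotations unchanged.
    d_fake = D(x_fake)
    loss_adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_task = F.cross_entropy(T(x_fake), y_src)
    opt_g.zero_grad(); (loss_adv + loss_task).backward(); opt_g.step()
    return loss_d.item(), loss_adv.item(), loss_task.item()
```

Because the mapping happens in pixel space and leaves the annotations untouched, a task classifier trained on adapted images can be applied directly to real target-domain images at test time.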