Conditional Generative Adversarial Networks


Conditional Generative Adversarial Networks
Jia-Bin Huang, Virginia Tech
ECE 6554 Advanced Computer Vision

Today’s class
- Discussions
- Review basic ideas of GAN
- Examples of conditional GAN
- Experiment presentation by Sanket

Why Generative Models?
- Excellent test of our ability to use high-dimensional, complicated probability distributions
- Simulate possible futures for planning or simulated RL
- Missing data
- Semi-supervised learning
- Multi-modal outputs
- Realistic generation tasks
(Goodfellow 2016)

Generative Modeling
- Density estimation
- Sample generation

Adversarial Nets Framework

Training Procedure
- Use an SGD-like algorithm of choice (e.g., Adam) on two minibatches simultaneously:
  - A minibatch of training examples
  - A minibatch of generated samples
- Optional: run k steps of one player for every step of the other player.
(Goodfellow 2016)
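The alternating-minibatch procedure above can be sketched on a 1-D toy problem. This is a minimal, hypothetical NumPy illustration (not the original implementation): real data is drawn from N(4, 0.5), the generator is an affine map of noise z ~ N(0, 1), and the discriminator is a logistic model D(x) = sigmoid(w*x + b); gradients are written out by hand.

```python
# Minimal sketch of alternating GAN training on 1-D toy data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def sample_real(n):
    # Real data distribution: N(4, 0.5)
    return rng.normal(4.0, 0.5, size=n)

theta = np.array([1.0, 0.0])  # generator params (a, c): G(z) = a*z + c
phi = np.array([0.0, 0.0])    # discriminator params (w, b)
lr, k = 0.05, 1               # learning rate; k discriminator steps per generator step

for step in range(2000):
    for _ in range(k):
        # Discriminator step: ascend E[log D(x)] + E[log(1 - D(G(z)))]
        x = sample_real(64)
        g = theta[0] * rng.normal(size=64) + theta[1]
        d_real = sigmoid(phi[0] * x + phi[1])
        d_fake = sigmoid(phi[0] * g + phi[1])
        grad_w = np.mean((1 - d_real) * x) - np.mean(d_fake * g)
        grad_b = np.mean(1 - d_real) - np.mean(d_fake)
        phi = phi + lr * np.array([grad_w, grad_b])

    # Generator step: ascend E[log D(G(z))] (the non-saturating loss)
    z = rng.normal(size=64)
    g = theta[0] * z + theta[1]
    d_fake = sigmoid(phi[0] * g + phi[1])
    grad_a = np.mean((1 - d_fake) * phi[0] * z)  # chain rule through g = a*z + c
    grad_c = np.mean((1 - d_fake) * phi[0])
    theta = theta + lr * np.array([grad_a, grad_c])

# After training, generated samples should drift toward the real mean of 4.
fake_mean = float(np.mean(theta[0] * rng.normal(size=1000) + theta[1]))
```

Each outer iteration makes one discriminator update on a mixed real/generated minibatch, then one generator update, exactly the two-minibatch alternation described on the slide.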

Minimax Game
- Equilibrium is a saddle point of the discriminator loss
- Resembles Jensen-Shannon divergence
- Generator minimizes the log-probability of the discriminator being correct
(Goodfellow 2016)
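For reference, the minimax value function from Goodfellow et al. (2014), which the slide summarizes:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```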

Discriminator Strategy
Optimal discriminator
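The optimal discriminator for a fixed generator, which the slide refers to, has the closed form

```latex
D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}
```

where p_g is the distribution of generated samples; plugging this into the value function yields the Jensen-Shannon divergence mentioned on the previous slide (up to constants).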

Non-Saturating Game
- Equilibrium no longer describable with a single loss
- Generator maximizes the log-probability of the discriminator being mistaken
- Heuristically motivated; generator can still learn even when discriminator successfully rejects all generator samples
(Goodfellow 2016)
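Written out, the non-saturating generator loss replaces "minimize log(1 - D(G(z)))" with

```latex
J^{(G)} = -\,\mathbb{E}_{z \sim p_z(z)}\big[\log D(G(z))\big]
```

whose gradient stays large precisely when the discriminator confidently rejects generated samples, which is why the generator can still learn in that regime.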

Review: GAN
- GANs are generative models that use supervised learning to approximate an intractable cost function
- GANs can simulate many cost functions, including the one used for maximum likelihood
- Finding Nash equilibria in high-dimensional, continuous, nonconvex games is an important open research problem

Conditional GAN
Learn P(Y|X)
[Ledig et al. CVPR 2017]
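Conditioning the standard minimax game on an input x (with y the desired output) gives the conditional GAN objective:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x, y \sim p_{\text{data}}}\big[\log D(x, y)\big]
  + \mathbb{E}_{x \sim p_{\text{data}},\, z \sim p_z}\big[\log\big(1 - D(x, G(x, z))\big)\big]
```

Both players see the conditioning input, so the discriminator judges (input, output) pairs rather than outputs alone.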

Image Super-Resolution Conditional on low-resolution input image [Ledig et al. CVPR 2017]

Image-to-Image Translation
- Conditioned on an image of a different modality
- No need to hand-specify a domain-specific loss function
[Isola et al. CVPR 2017]
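Concretely, Isola et al. combine the conditional adversarial loss with an L1 reconstruction term, so the adversarial game supplies the "loss function" while L1 keeps outputs anchored to the target:

```latex
G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G),
\qquad
\mathcal{L}_{L1}(G) = \mathbb{E}_{x, y, z}\big[\lVert y - G(x, z) \rVert_1\big]
```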

[Isola et al. CVPR 2017]

Label2Image [Isola et al. CVPR 2017]

Edges2Image [Isola et al. CVPR 2017]

Generative Visual Manipulation [Zhu et al. ECCV 2016]

[Zhu et al. ECCV 2016]

Text2Image [Reed et al. ICML 2016]

Text2Image
- Positive samples: real image + right text
- Negative samples: fake image + right text; real image + wrong text
[Reed et al. ICML 2016]
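The matching-aware discriminator of Reed et al. therefore sees three kinds of (image, text) pairs per minibatch. A small hypothetical sketch of how such a batch could be assembled (function and variable names are illustrative, not from the paper's code):

```python
def discriminator_batch(real_images, fake_images, right_texts):
    """Build (image, text, label) triples for a matching-aware discriminator:
    real image + matching text   -> 1 (positive)
    fake image + matching text   -> 0 (negative)
    real image + mismatched text -> 0 (negative)
    """
    batch = []
    n = len(real_images)
    for i in range(n):
        batch.append((real_images[i], right_texts[i], 1))
        batch.append((fake_images[i], right_texts[i], 0))
        wrong = right_texts[(i + 1) % n]  # cyclic shift yields a mismatched caption
        batch.append((real_images[i], wrong, 0))
    return batch
```

The mismatched pairs force the discriminator (and hence the generator) to care about image–text alignment, not just image realism.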

StackGAN [Zhang et al. 2016]

Plug & Play Generative Networks [Nguyen et al. 2016]

Video GAN
Videos: http://web.mit.edu/vondrick/tinyvideo/
[Vondrick et al. NIPS 2016]

Generative Modeling as Feature Learning [Vondrick et al. NIPS 2016]

Shape modeling using 3D Generative Adversarial Network [Wu et al. NIPS 2016]

Things to remember
- GANs can generate sharp samples from high-dimensional output spaces
- A conditional GAN can serve as a general mapping model X -> Y
  - No need to define domain-specific loss functions
  - Handles one-to-many mappings
  - Handles multiple modalities