Normalized Cut Loss for Weakly-supervised CNN Segmentation

Presentation transcript:

Meng Tang (1,2), Abdelaziz Djelouah (1), Federico Perazzi (1,3), Yuri Boykov (2), Christopher Schroers (1)
(1) Disney Research Zurich, Switzerland; (2) University of Waterloo, Canada; (3) Adobe Research

Outline: previous work (proposal generation); our methodology (regularized loss, e.g. a normalized cut loss); experiments.

Previous work: proposal generation
- Train the CNN from full but "fake" proposals: scribbles (partial masks) are completed into proposals (full masks) via "shallow" segmentation, e.g. graph cut with a data term and a regularization term [Lin et al. 2016].
- What's wrong with proposals? They are a heuristic to mimic full supervision, they mislead training to fit their errors, and they require expensive inference.
- [Figure: test image and ground truth, with results trained from full masks, from proposals, and with the NC loss.]

Our methodology: regularized loss
- Joint loss over labeled and unlabeled pixels: a (pointwise) empirical risk loss for the labeled pixels plus a (pairwise or high-order) regularization loss for the unlabeled pixels.
- Examples of regularization: clustering criteria, e.g. normalized cut (NC); pairwise CRF.
- Training needs gradient computation only, no inference.

Normalized cut: from "shallow" to "deep"
- "Shallow" normalized cut segmentation: pairwise affinity Wij on RGBXY features; balanced color clustering.
- "Deep" normalized cut loss: the gradient of normalized cut pushes the network output towards better color clustering.
- [Figure: shallow NC result vs. training with the pCE loss only vs. with the extra NC loss; training image, network output, and gradient.]

Partial cross entropy (pCE) as loss sampling
- The scribbles act as sampling in stochastic gradient descent: up = 1 for scribble pixels and 0 otherwise. Simple but overlooked.

Experiments
- mIoU on the val set with different networks:

  Network                     pCE    pCE + NC   Full Sup.
  DeepLab-MSc-largeFOV+CRF    62.0   65.1       68.7
  DeepLab-VGG16               60.4   62.4       68.8
  ResNet101                   69.5   72.8       75.6
  ResNet101+CRF               74.5   76.8

- Training with shorter scribbles. [Plot: accuracy at scribble lengths 1, 0.5, 0.3, and 0 (clicks), comparing this work, the follow-up, and the proposal method.]

Motivation: regularized loss for semi-supervised learning
- Given M labeled data and U unlabeled data, learn with an empirical risk loss on the labeled data plus a regularization loss on the unlabeled data, as in semi-supervised deep learning [Weston et al. 2012].

Follow-up work [Tang et al. arXiv 2018]
- Other regularization terms as losses: pairwise CRF regularization; normalized cut plus CRF.
- Besides weak supervision (scribbles), the same regularized losses extend to full supervision (all pixels labeled) and semi-supervision (with extra unlabeled images).

References
1. Lin et al., "ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation," CVPR 2016.
2. Weston et al., "Deep learning via semi-supervised embedding," Neural Networks: Tricks of the Trade, 2012.
3. Tang et al., "On Regularized Losses for Weakly-supervised CNN Segmentation," arXiv 2018.
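The slides mention a pairwise affinity Wij computed on RGBXY features for the "shallow" normalized cut. A minimal NumPy sketch of one common choice, a dense Gaussian kernel over concatenated colour and position features (the function name and the bandwidths sigma_rgb/sigma_xy are my own illustrative choices, not the authors' exact settings):

```python
import numpy as np

def rgbxy_affinity(image, sigma_rgb=15.0, sigma_xy=40.0):
    """Dense Gaussian affinity W_ij over concatenated colour (RGB) and
    position (XY) features. W is symmetric with a unit diagonal.
    Bandwidths are illustrative, not the paper's settings."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats_rgb = image.reshape(-1, 3).astype(np.float64)
    feats_xy = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(np.float64)
    # squared distances in colour space and in image coordinates
    d_rgb = ((feats_rgb[:, None, :] - feats_rgb[None, :, :]) ** 2).sum(-1)
    d_xy = ((feats_xy[:, None, :] - feats_xy[None, :, :]) ** 2).sum(-1)
    return np.exp(-d_rgb / (2 * sigma_rgb ** 2) - d_xy / (2 * sigma_xy ** 2))
```

A dense N-by-N affinity is only feasible for small crops; the paper's large-scale setting would need a sparse or filtered approximation, which this sketch does not attempt.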
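The partial cross entropy described on the slides (up = 1 for scribble pixels, 0 otherwise) can be sketched as a masked cross entropy in NumPy; the function name and the flat (N, K) layout are my own simplifications:

```python
import numpy as np

def partial_cross_entropy(probs, labels, scribble_mask, eps=1e-12):
    """Cross entropy restricted to scribbled pixels: a per-pixel indicator
    u_p (1 on scribbles, 0 otherwise) multiplies the log-likelihood, so
    unlabeled pixels contribute neither loss nor gradient.

    probs:         (N, K) per-pixel softmax outputs
    labels:        (N,) class index per pixel (arbitrary where unlabeled)
    scribble_mask: (N,) 1.0 for scribbled pixels, 0.0 otherwise
    """
    picked = probs[np.arange(len(labels)), labels]  # probability of the label
    return -(scribble_mask * np.log(picked + eps)).sum()
```

Interpreting the scribbles as a sampling of the full mask, this is exactly the usual cross entropy with the unlabeled terms dropped, which is why the slides call it "simple but overlooked".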
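The slides stress that the NC loss needs "gradient computation only, no inference". A NumPy sketch of one standard soft relaxation of normalized cut, NC(S) = sum_k S_k^T W (1 - S_k) / (d^T S_k) with degrees d = W 1, together with its analytic gradient (my own derivation via the quotient rule; the authors' exact relaxation and implementation may differ):

```python
import numpy as np

def nc_loss_and_grad(S, W):
    """Normalized cut on soft segmentations S (N pixels x K classes) with
    symmetric affinity W, plus the analytic gradient w.r.t. S:
        NC(S) = sum_k  S_k^T W (1 - S_k) / (d^T S_k),   d = W 1.
    Only this gradient is needed to backpropagate through the network."""
    d = W.sum(axis=1)
    loss = 0.0
    grad = np.zeros_like(S)
    for k in range(S.shape[1]):
        s = S[:, k]
        Ws = W @ s
        assoc = d @ s            # normalizer d^T S_k ("balanced" clustering)
        cut = d @ s - s @ Ws     # numerator S_k^T W (1 - S_k)
        loss += cut / assoc
        # quotient rule: d(cut)/ds = d - 2 W s,  d(assoc)/ds = d
        grad[:, k] = (d - 2.0 * Ws) / assoc - cut * d / assoc ** 2
    return loss, grad
```

In training, this gradient would simply be scaled by a weight and added to the partial cross entropy gradient on the network's softmax output, giving the joint pCE + NC loss reported in the experiments.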