Interpretability vs Performance: Deep Neural Networks perform well on most computer vision tasks, yet they are notoriously difficult to interpret. 
The desire to understand deep neural networks has triggered a flurry of research into Neural Network Visualization, but in practice we are often forced to treat deep learning systems as black boxes. (See my recent Deep Learning Trends @ ICLR 2016 post for an overview of recent neural network visualization techniques.) But just because we can't grok the inner workings of our favorite deep models, it doesn't mean we can't ask more out of our deep learning systems.
There exists a simple trick for upgrading blackbox neural network outputs into probability distributions.
The probabilistic approach provides confidences, or "uncertainty" measures, alongside predictions and can make almost any deep learning system into a smarter one. For robotic applications, or any kind of software that must make decisions based on the output of a deep learning system, being able to provide meaningful uncertainties is a true game changer.
Applying Dropout to your Deep Neural Network is like occasionally zapping your brain 
Without dropout, it is too easy for a moderately deep network to attain 100% accuracy on the training set. The accepted wisdom is that an unregularized network (one without dropout) is too good at memorizing the training set. For a great introductory machine learning video lecture on dropout, I highly recommend you watch Hugo Larochelle's lecture on Dropout for Deep Learning.
Geoff Hinton's dropout lecture, also a great introduction, focuses on interpreting dropout as an ensemble method. If you're looking for new research ideas in the dropout space, a thorough understanding of Hinton's interpretation is a must.
But while dropout is typically used at training time, today we'll highlight the keen observation that dropout used at test time is one of the simplest ways to turn raw neural network outputs into probability distributions. Not only does this probabilistic "free upgrade" often improve classification results, it provides a meaningful notion of uncertainty, something typically missing in Deep Learning systems.
The idea is quite simple: to estimate the predictive mean and predictive uncertainty, simply collect the results of stochastic forward passes through the model using dropout.
How to use dropout: 2016 edition
1. Start with a moderately sized network.
2. Increase your network size with dropout turned off until you perfectly fit your data.
3. Then, train with dropout turned on.
4. At test time, turn on dropout and run the network T times to get T samples.
5. The mean of the samples is your output and the variance is your measure of uncertainty.
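The recipe above can be sketched in a few lines. Here is a minimal Monte Carlo dropout example using PyTorch; the toy two-layer classifier, layer sizes, and dropout rate are my own illustrative choices, not anything prescribed by the technique itself:

```python
# Monte Carlo dropout sketch: keep dropout stochastic at test time,
# run T forward passes, and summarize them with a mean and variance.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical small classifier with a dropout layer near the output.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, T=50):
    """Collect T stochastic forward passes with dropout left on."""
    model.train()  # .train() keeps dropout active (normally you'd call .eval())
    with torch.no_grad():
        samples = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(T)]
        )
    mean = samples.mean(dim=0)  # predictive mean -> your prediction
    var = samples.var(dim=0)    # predictive variance -> your uncertainty
    return mean, var

x = torch.randn(1, 10)
mean, var = mc_dropout_predict(model, x, T=100)
```

Note the only unusual step is calling `model.train()` before inference so the dropout layers stay stochastic; everything else is ordinary forward-pass code.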
Remember that drawing more samples will increase computation time during testing unless you're clever about reusing partial computations in the network. Please note that if you're only using dropout near the end of your network, you can reuse most of the computations. If you're not happy with the uncertainty estimates, consider adding more layers of dropout at test time. Since you'll already have a pretrained network, experimenting with test-time dropout layers is easy.
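One way to reuse those partial computations, assuming dropout sits only near the end of the network: split the model into a deterministic trunk (run once) and a small stochastic head (run T times). The split, layer sizes, and names below are hypothetical, just a sketch of the idea:

```python
# Amortized MC dropout: the expensive dropout-free trunk runs once,
# only the cheap dropout head is sampled T times.
import torch
import torch.nn as nn

torch.manual_seed(0)

trunk = nn.Sequential(nn.Linear(10, 64), nn.ReLU())       # deterministic: run once
head = nn.Sequential(nn.Dropout(p=0.5), nn.Linear(64, 3)) # stochastic: run T times

def fast_mc_predict(x, T=50):
    head.train()  # keep the head's dropout active at test time
    with torch.no_grad():
        features = trunk(x)  # computed a single time and reused below
        samples = torch.stack(
            [torch.softmax(head(features), dim=-1) for _ in range(T)]
        )
    return samples.mean(dim=0), samples.var(dim=0)

mean, var = fast_mc_predict(torch.randn(1, 10), T=100)
```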
Bayesian Convolutional Neural Networks
To be truly Bayesian about a deep network's parameters, we wouldn't learn a single set of parameters w; we would infer a distribution over weights given the data, p(w|X,Y). Training is already quite expensive, requiring large datasets and expensive GPUs. Bayesian learning algorithms can in theory provide much better parameter estimates for ConvNets, and I'm sure some of our friends at Google are working on this already. But today we aren't going to talk about such fully Bayesian Deep Learning systems, only systems that "upgrade" the model prediction y to p(y|x,w). In other words, only the network outputs gain a probabilistic interpretation.
An excellent deep learning computer vision system which uses test-time dropout comes from a recent University of Cambridge technique called SegNet. The SegNet approach introduced an Encoder-Decoder framework for dense semantic segmentation. More recently, SegNet gained a Bayesian extension that uses dropout at test time to provide uncertainty estimates. Because the system provides a dense per-pixel labeling, the confidences can be visualized as per-pixel heatmaps. Segmentation system is not performing well? Just look at the confidence heatmaps!
Bayesian SegNet. A fully convolutional neural network architecture which provides per-pixel class uncertainty estimates using dropout.

Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding. Alex Kendall, Vijay Badrinarayanan, Roberto Cipolla, in arXiv:1511.02680, November 2015. [project page with videos]
Confidences are quite useful for evaluation purposes, because instead of providing a single average result across all pixels in all images, we can sort the pixels and/or images by the overall confidence in the prediction. When evaluating the top 10% most confident pixels, we should expect significantly higher performance. For example, the Bayesian SegNet approach achieves 75.4% global accuracy on the SUN RGB-D dataset, and an astonishing 97.6% on the most confident 10% of the test set [personal communication with Bayesian SegNet authors]. This kind of sort-by-confidence evaluation was popularized by the PASCAL VOC Object Detection Challenge, where precision/recall curves were the norm. Unfortunately, as the research community moved towards large-scale classification, the notion of confidence was pushed aside. Until now.
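A sort-by-confidence evaluation like the one above is easy to sketch. The NumPy snippet below uses synthetic data (the confidence scores and the correlation between confidence and correctness are fabricated purely for illustration); the point is the sorting-and-slicing pattern, not the numbers:

```python
# Sort-by-confidence evaluation sketch: compare overall accuracy with
# accuracy on the most confident 10% of predictions.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic per-pixel confidences (e.g. derived from predictive variance)
# and correctness flags that are correlated with confidence.
confidence = rng.random(n)
correct = rng.random(n) < 0.5 + 0.4 * confidence

overall_acc = correct.mean()

# Indices of the top 10% most confident predictions.
top10 = np.argsort(confidence)[-n // 10:]
top10_acc = correct[top10].mean()
```

If the uncertainty estimates are any good, `top10_acc` should come out well above `overall_acc`, mirroring the 75.4% vs 97.6% gap reported for Bayesian SegNet.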
Theoretical Bayesian Deep Learning
Deep networks that model uncertainty are truly meaningful machine learning systems. It turns out that we don't really have to understand how a deep network's neurons process image features in order to trust the system to make decisions. As long as the model provides uncertainty estimates, we'll know when the model is struggling. This is particularly important when your network is given inputs that are far from the training data.

The Gaussian Process: a machine learning approach with built-in uncertainty modeling
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning Yarin Gal, Zoubin Ghahramani, in ICML. June 2016. [Appendix with relationship to Gaussian Processes]
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks Yarin Gal, in arXiv:1512.05287. May 2016.
What My Deep Model Doesn't Know. Yarin Gal. Blog Post. July 2015
Homoscedastic and Heteroscedastic Regression with Dropout Uncertainty. Yarin Gal. Blog Post. February 2016.
Test-time dropout is used to provide uncertainty estimates for deep learning systems.
In conclusion, maybe we can never get both interpretability and performance when it comes to deep learning systems. But we can all agree that providing confidences, or uncertainty estimates, alongside predictions is always a good idea. Dropout, the very same regularization trick used to battle overfitting in deep models, shows up yet again. Sometimes all you need is to add some random variations to your input and average the results over many trials. Dropout lets you wiggle not only the network inputs but the entire architecture.
I do wonder what Yann LeCun thinks about Bayesian ConvNets... Last I heard, he was allergic to sampling.
Related Posts
Deep Learning vs Probabilistic Graphical Models vs Logic April 2015
Deep Learning Trends @ ICLR 2016 June 2016