Tuesday, December 08, 2015

ICCV 2015: Twenty one hottest research papers

"Geometry vs Recognition" becomes ConvNet-for-X

Computer Vision used to be cleanly separated into two schools: geometry and recognition. Geometric methods like structure from motion and optical flow focus on measuring objective real-world quantities, such as 3D distances, directly from images, while recognition techniques like support vector machines and probabilistic graphical models focus on perceiving high-level semantic information (e.g., is this a dog or a table) from those same images.

The world of computer vision has changed, and it has changed fast. We now have powerful convolutional neural networks that are able to extract just about anything directly from images. So if your input is an image (or a set of images), then there's probably a ConvNet for your problem.  While you do need a large labeled dataset, believe me when I say that collecting a large dataset is much easier than manually tweaking knobs inside your 100K-line codebase. As we're about to see, the separation between geometric methods and learning-based methods is no longer easily discernible.

By 2016 just about everybody in the computer vision community will have tasted the power of ConvNets, so let's take a look at some of the hottest new research directions in computer vision.

ICCV 2015's Twenty One Hottest Research Papers



This December in Santiago, Chile, the International Conference on Computer Vision (ICCV) 2015 is going to bring together the world's leading researchers in Computer Vision, Machine Learning, and Computer Graphics.

Not surprisingly, this year's ICCV is filled with lots of ConvNets, but this time these Deep Learning tools are being applied to much, much more creative tasks. Let's take a look at the following twenty-one ICCV 2015 research papers, which will hopefully give you a taste of where the field is going.


1. Ask Your Neurons: A Neural-Based Approach to Answering Questions About Images Mateusz Malinowski, Marcus Rohrbach, Mario Fritz


"We propose a novel approach based on recurrent neural networks for the challenging task of answering of questions about images. It combines a CNN with a LSTM into an end-to-end architecture that predict answers conditioning on a question and an image."




2. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler



"To align movies and books we exploit a neural sentence embedding that is trained in an unsupervised way from a large corpus of books, as well as a video-text neural embedding for computing similarities between movie clips and sentences in the book."







3. Learning to See by Moving Pulkit Agrawal, Joao Carreira, Jitendra Malik


"We show that using the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on the tasks of scene recognition, object recognition, visual odometry and keypoint matching."







4. Local Convolutional Features With Unsupervised Training for Image Retrieval Mattis Paulin, Matthijs Douze, Zaid Harchaoui, Julien Mairal, Florent Perronin, Cordelia Schmid



"We introduce a deep convolutional architecture that yields patch-level descriptors, as an alternative to the popular SIFT descriptor for image retrieval."






5. Deep Networks for Image Super-Resolution With Sparse Prior Zhaowen Wang, Ding Liu, Jianchao Yang, Wei Han, Thomas Huang



"We show that a sparse coding model particularly designed for super-resolution can be incarnated as a neural network, and trained in a cascaded structure from end to end."



6. High-for-Low and Low-for-High: Efficient Boundary Detection From Deep Object Features and its Applications to High-Level Vision Gedas Bertasius, Jianbo Shi, Lorenzo Torresani



"In this work we show how to predict boundaries by exploiting object level features from a pretrained object-classification network."

7. A Deep Visual Correspondence Embedding Model for Stereo Matching Costs Zhuoyuan Chen, Xun Sun, Liang Wang, Yinan Yu, Chang Huang



"A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities."





8. Im2Calories: Towards an Automated Mobile Vision Food Diary Austin Meyers, Nick Johnston, Vivek Rathod, Anoop Korattikara, Alex Gorban, Nathan Silberman, Sergio Guadarrama, George Papandreou, Jonathan Huang, Kevin P. Murphy



"We present a system which can recognize the contents of your meal from a single image, and then predict its nutritional contents, such as calories."

9. Unsupervised Visual Representation Learning by Context Prediction Carl Doersch, Abhinav Gupta, Alexei A. Efros



"How can one write an objective function to encourage a representation to capture, for example, objects, if none of the objects are labeled?"

10. Deep Neural Decision Forests Peter Kontschieder, Madalina Fiterau, Antonio Criminisi, Samuel Rota Bulò



"We introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network."






11. Conditional Random Fields as Recurrent Neural Networks Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr



"We formulate mean-field approximate inference for the Conditional Random Fields with Gaussian pairwise potentials as Recurrent Neural Networks."






12. Flowing ConvNets for Human Pose Estimation in Videos Tomas Pfister, James Charles, Andrew Zisserman



"We investigate a ConvNet architecture that is able to benefit from temporal context by combining information across the multiple frames using optical flow."





13. Dense Optical Flow Prediction From a Static Image Jacob Walker, Abhinav Gupta, Martial Hebert



"Given a static image, P-CNN predicts the future motion of each and every pixel in the image in terms of optical flow. Our P-CNN model leverages the data in tens of thousands of realistic videos to train our model. Our method relies on absolutely no human labeling and is able to predict motion based on the context of the scene."


14. DeepBox: Learning Objectness With Convolutional Networks Weicheng Kuo, Bharath Hariharan, Jitendra Malik



"Our framework, which we call DeepBox, uses convolutional neural networks (CNNs) to rerank proposals from a bottom-up method."

15. Active Object Localization With Deep Reinforcement Learning Juan C. Caicedo, Svetlana Lazebnik



"This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning."





16. Predicting Depth, Surface Normals and Semantic Labels With a Common Multi-Scale Convolutional Architecture David Eigen, Rob Fergus



"We address three different computer vision tasks using a single multiscale convolutional network architecture: depth prediction, surface normal estimation, and semantic labeling."

17. HD-CNN: Hierarchical Deep Convolutional Neural Networks for Large Scale Visual Recognition Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis DeCoste, Wei Di, Yizhou Yu



"We introduce hierarchical deep CNNs (HD-CNNs) by embedding deep CNNs into a category hierarchy. An HD-CNN separates easy classes using a coarse category classifier while distinguishing difficult classes using fine category classifiers."





18. FlowNet: Learning Optical Flow With Convolutional Networks Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox



"We construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task."







19. Understanding Deep Features With Computer-Generated Imagery Mathieu Aubry, Bryan C. Russell


"Rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors."







20. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization Alex Kendall, Matthew Grimes, Roberto Cipolla



"Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation."





21. Visual Tracking With Fully Convolutional Networks Lijun Wang, Wanli Ouyang, Xiaogang Wang, Huchuan Lu




"A new approach for general object tracking with fully convolutional neural network."



Conclusion

While some may argue that the great convergence upon ConvNets is making the field less diverse, it is actually making the techniques easier to comprehend. It is easier to "borrow breakthrough thinking" from one research direction when the core computations are cast in the language of ConvNets. Using ConvNets, properly trained (and motivated!) 21-year-old graduate students are actually able to compete on benchmarks where, previously, a non-trivial entry would take an entire 6-year PhD cycle.

See you next week in Chile!


Update (January 13th, 2016)

The following awards were given at ICCV 2015.

Achievement awards

  • PAMI Distinguished Researcher Award (1): Yann LeCun
  • PAMI Distinguished Researcher Award (2): David Lowe
  • PAMI Everingham Prize Winner (1): Andrea Vedaldi for VLFeat
  • PAMI Everingham Prize Winner (2): Daniel Scharstein and Rick Szeliski for the Middlebury Datasets

Paper awards

  • PAMI Helmholtz Prize (1): David Martin, Charles Fowlkes, Doron Tal, and Jitendra Malik for their ICCV 2001 paper "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics".
  • PAMI Helmholtz Prize (2): Serge Belongie, Jitendra Malik, and Jan Puzicha for their ICCV 2001 paper "Matching Shapes".
  • Marr Prize: Peter Kontschieder, Madalina Fiterau, Antonio Criminisi, and Samuel Rota Bulò for "Deep Neural Decision Forests".
  • Marr Prize honorable mention: Saining Xie and Zhuowen Tu for "Holistically-Nested Edge Detection".
For more information about awards, see Sebastian Nowozin's ICCV-day-2 blog post.

I also wrote another ICCV-related blog post (January 13, 2016) about the Future of Real-Time SLAM.

Saturday, November 07, 2015

The Deep Learning Gold Rush of 2015

In the last few decades, we have witnessed major technological innovations such as personal computers and the internet finally reach the mainstream. And with mobile devices and social networks on the rise, we're now more connected than ever. So what's next? When is it coming? And how will it change our lives? Today I'll tell you that the next big advance is well underway and it's being fueled by a recent technique in the field of Artificial Intelligence known as Deep Learning.


The California Gold Rush of 2015 is all about Deep Learning. 
It's everywhere, you just don't know how to look.


All of today's excitement in Artificial Intelligence and Machine Learning stems from ground-breaking results in speech and visual object recognition using Deep Learning[1]. These algorithms are being applied to all sorts of data, and the learned deep neural networks outperform traditional expert systems carefully designed by scientists and engineers. End-to-end learning of deep representations from raw data is now possible due to a handful of well-performing deep learning recipes (ConvNets, Dropout, ReLUs, LSTM, DQN, ImageNet). But if there's one final takeaway that we can extract from decades of machine learning research, it's that for many problems going deep isn't a choice -- it's a requirement.

Most of the apps and services you're already using (AirBnB, Snapchat, Twitch.tv, Uber, Yelp, LinkedIn, etc) are quite data-hungry and before you know it, they're all going to go mega-deep. So whether you need to revitalize your data science team with deep learning or you're starting an AI-from-day-one operation, it's pretty clear that everybody is rushing to get some of this Silicon Valley Gold.

From Titans to Gold Miners: Your atypical Gold Rush

Like all great gold rushes, this movement is led by new faces, who are pouring into Silicon Valley in droves. But these aren't your typical unskilled immigrants willing to pick up a hammer, nor your fresh computer science grads with some app-writing skills. The key deep learning players of today (known as the Titans of Deep Learning) are computer science professors and researchers (seldom born in the USA) leaving their academic posts and bringing their students and ideas straight into Silicon Valley.

"Turn on, Tune in, Dropout" -- Timothy Leary

Recently, Google and Facebook announced that their operations are now being powered by Deep Learning [2,3]. And with most Deep Learning Titans representing the tech giants (Yann LeCun at Facebook Research, Geoffrey Hinton at Google, Andrew Ng at Baidu), Deep Learning is likely to become one of the most sought-after tech skills. With Toyota's $1 billion investment in Robotics and Artificial Intelligence research (November 6, 2015), the announcement of YC Research (October 7, 2015), and the new Google Brain Residency Program's "pre-doc" AI jobs (October 26, 2015), Silicon Valley just got a whole lot more interesting.

Silicon Valley re-defines itself, yet again 

To understand why it took so long for Deep Learning to take off, let's take a brief look at the key technologies which defined Silicon Valley over the last 50 years.  The following timeline gives an overview of where Silicon Valley has been and where it's going.



1970s: Semiconductors 
The story of the digital-era starts with semiconductors. "Silicon" in "Silicon Valley" originally referred to the silicon chip or integrated circuit innovations as well as the location (close to Stanford) of much tech-related activity. The dominant firm from that time period was Fairchild Semiconductor International and it eventually gave rise to more recognizable companies like Intel. For a more detailed discussion of this birthing era, take a look at Steve Blank's Secret History of Silicon Valley[4].
Read more about Fairchild at TechCrunch's First Trillion-Dollar Startup 

1980s: Personal Computers
Initially computers were quite large and used solely by research labs, government, and big businesses. But it was the personal computer which turned computer programming from a hobby into a vital skill. You no longer needed to be an MIT student to program on one of these bad boys. Microsoft and Apple were founded in 1975 and 1976, respectively, and both persevered due to their pioneering work on graphical user interfaces. This was the birth of the modern user-friendly Operating System. IBM approached Microsoft in 1980 regarding its upcoming personal computer, and from then on Microsoft would be King for a very long time.

See Mac-history's article on Microsoft's relationship with Apple


1990s: Internet
While the nerds at universities were posting ASCII messages on newsgroups, service providers like AOL helped make the internet accessible to everyone in the 1990s. Remember getting all those AOL disks in the mail? Buying a chunk of digital real estate (your own domain name) became possible, and anybody with a dial-up connection and some primitive text/HTML skills could start posting online content. With a mission statement like "organize the world's information", it was eventually Google that got the most out of the late-90s dot-com bubble, and it remains a very strong player in all things tech.

2000s: Mobile and Social
While the dot-com bubble was about creating an online presence for startups and established companies, the way we use the internet has dramatically changed since 2001. A ton of new social communities have emerged, and due to Facebook we're now stars in our own reality show. Social and advertising have essentially turned the modern internet into a mainstream TV-like experience. The internet is no longer only for the nerds. The kings of this era (Google and Facebook) are also the biggest players in the Deep Learning space, because they have the largest user bases and in-house apps which can benefit most from machine learning.

2010-2015: Deep Learning comes to the party
Spend more than a day in Silicon Valley and you'll hear the popular expression, "Software is eating the world." Rampant spreading of software was only possible once the internet (1990s) AND mobile devices (2000s) became essential parts of our lives. No longer do we physically mail floppy disks, and social media fuels any app that goes viral. What traditional software is missing (or has been missing up until now) is the ability to improve over time from everyday use. If that same software is able to connect to a large Deep Learning system and start improving, then we have a game-changer on our hands. This is already happening with online advertising, digital assistants like Siri, and smart auto-responders like Google's new email auto-reply feature.



The hierarchical award-winning "AlexNet" Deep Learning architecture 
Visualized using MIT's Toolbox for Deep Learning Neuron Visualization


Massive hiring of deep learning experts by the leading tech companies has only begun, but we should also be on the lookout for new ventures built on top of Deep Learning, not just a revitalization of last decade's successes. On this front, keep a close eye on the following Deep Learning Cloud Service upstarts: Richard Socher from MetaMind, Matthew Zeiler from Clarifai, and Carlos Guestrin from Dato.

2015-2020: Deep Learning Revitalizes Robotics
Recently it has been shown that Deep Learning can be used to help robots learn tasks involving movement, object manipulation, and decision making[6,7,8,9]. Before Deep Learning, lots of different pieces of robotic software and hardware would have to be developed independently and then hacked together for demo day. Today, you can use one of a handful of "Deep Learning for Robotics recipes" and start watching your robot learn the task you care about.

Robots Learn to Grasp using Deep Learning at Carnegie Mellon University.

With its 2013 acquisition of Boston Dynamics (a hardware play), its 2014 acquisition of DeepMind (a software play), and a serious autonomous car play, Google is definitely early to the Robotics party. But the noteworthy bits are happening at the intersection of deep learning and robotics.  I suggest taking a closer look at the Robotics research of Pieter Abbeel of Berkeley, Abhinav Gupta of Carnegie Mellon, and Ashutosh Saxena of Stanford -- all likely stars in the next Deep Learning for Robotics race. As long as Rodney Brooks keeps creating innovative Robotics platforms like Baxter, my expectations for Robotics are off the charts.

Conclusion

Unlike in 1849, the Deep Learning Gold Rush of 2015 is not going to bring some 300,000 gold-seekers in boats to California's mainland. This isn't a bring-your-own-hammer kind of game -- the Titans have already descended from their Ivory Towers and handed us ample mining tools. But it won't hurt to gain some experience with traditional "shallow" machine learning techniques so you can appreciate the power of Deep Learning.

I hope you enjoyed today's read and have a better sense of how Silicon Valley is undergoing a transformation. And remember, today's wave of Deep Learning upstart CEOs have PhDs, but once Deep Learning software becomes more user-friendly (TensorFlow?), maybe you won't have to wait so long to dropout.


References

[1] Krizhevsky, A., Sutskever, I. and Hinton, G. E. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS 2012.
[2] D'Onfro, J. Google is 're-thinking' all of its products to include machine learning. Business Insider. October 22, 2015.
[3] D'Onfro, J. How Facebook will use artificial intelligence to organize insane amounts of data into the perfect News Feed and a personal assistant with superpowers. Business Insider. November 3, 2015.
[4] Blank, S. Secret History of Silicon Valley. 2008.
[5] Donglai Wei, Bolei Zhou, Antonio Torralba William T. Freeman. mNeuron: A Matlab Plugin to Visualize Neurons from Deep Models. 2015.
[6] Lerrel Pinto, Abhinav Gupta. Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700 Robot Hours. arXiv. 2015.
[7] Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel. End-to-End Training of Deep Visuomotor Policies. In RSS 2015.
[8] Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.
[9] Ian Lenz, Ross Knepper, and Ashutosh Saxena. DeepMPC: Learning Deep Latent Features for Model Predictive Control. In Robotics: Science and Systems (RSS), 2015.

Friday, June 26, 2015

Deep down the rabbit hole: CVPR 2015 and beyond

CVPR is the premier Computer Vision conference, and it's fair to think of it as the Olympics of Computer Vision research. This year it was held in my own backyard -- less than a mile away from lovely Cambridge, MA!  Plenty of my MIT colleagues attended, but I wouldn't be surprised if Google had the largest showing at CVPR 2015. I have been going to CVPR almost every year since 2004, so let's take a brief tour of what's new in the exciting world of computer vision research.




A lot has changed. Nothing has changed. Academics used to be on top, defending their Universities and the awesomeness happening inside their non-industrial research labs. Academics are still on top, but now defending their Google, Facebook, Amazon, and Company X affiliations. And with the hiring budget to acquire the best and a heavy publishing-oriented culture, don't be surprised if the massive academia exodus continues for years to come. It's only been two weeks since CVPR, and Google has since then been busy making ConvNet art, showing the world that if you want to do the best Deep Learning research, they are King.

An army of PhD students and Postdocs simply cannot defeat an army of Software Engineers and Research Scientists. Back in the day, students would typically leave the field after a Computer Vision PhD (there were few vision research jobs, and Wall Street jobs were tempting). Now those former PhD students run research labs at big companies which have been feverishly getting into vision. It seems there aren't enough deep experts to fill the deep demand.

Datasets used to be the big thing -- please download my data!  Datasets are still the big thing -- but we regret to inform you that your university’s computational resources won’t make the cut (but at Company X we’re always hiring, so come join us, and help push the frontier of research together).

Related Article: Under LeCun's Leadership, Facebook's AI Research Lab is beefing up their research presence

If you want to check out the individual papers, I recommend Andrej Karpathy's online navigation tool for CVPR 2015 papers, or take a look at the vanilla listing of CVPR 2015 papers on the CV Foundation website. Zoya Bylinskii, an MIT PhD Candidate, also put together a list of interesting CVPR 2015 papers.

The ConvNet Revolution: There's a pre-trained network for that

Machine Learning used to be the Queen. Machine Learning is now the King. Machine Learning used to be shallow, but today's learning approaches are so deep that the diagrams barely fit on a single slide. Grad students used to pass around jokes about Yann LeCun and his insistence that machine learning would one day do the work of the feature-engineering stage. Now the entire vision community simply ignores you if you insist that "manual feature engineering" is going to save the day. Yann LeCun gave a keynote presentation with the intriguing title "What's wrong with Deep Learning," and it seems that Convolutional Neural Networks (also called CNNs or ConvNets) are everywhere at CVPR.




It used to be hard to publish ConvNet research papers at CVPR; now it's hard to get a CVPR paper accepted if you didn't at least compare against a ConvNet baseline. Got a cool new problem? Oooh, you didn't try a ConvNet-based baseline? Well, that explains why nobody cares.

But it's not like the machines are taking over the job of the vision scientist. Today's vision scientist is much more of an applied machine learning hacker than anything else, and because of the strong CNN theme, it is much easier to understand and re-implement today's vision systems. What we're seeing at CVPR is essentially a revisiting of classic problems like segmentation and motion using this new machinery. As Samson Timoner phrased it at the local Boston Vision Meetup: when Mutual Information was popular, the community jumped on that bandwagon -- it's ConvNets this time around. But it's not just a trend: the non-CNN competition is getting crushed.


Figure from Bharath Hariharan's Hypercolumns CVPR 2015 paper on segmentation using CNNs


There's still plenty to be done by a vision scientist, and a solid formal education in mathematics is more important than ever. We used to train via gradient descent. We still train via gradient descent. We used to drink Coffee, now we all drink Caffe. But behind the scenes, it is still mathematics.

Related Page: Caffe Model Zoo where you can download lots of pretrained ConvNets

Deep down the rabbit hole


CVPR 2015 reminds me of the pre-Newtonian days of physics. A lot of smart scientists were able to predict the motions of objects using mathematics once the ingenious Descartes taught us how to embed our physical thinking into a coordinate system. And it's pretty clear that by casting your computer vision problem in the language of ConvNets, you are going to beat just about anybody doing computer vision by hand. I think of Yann LeCun (one of the fathers of Deep Learning) as a modern-day Descartes, only because I think the ground-breaking work is right around the corner. His mental framework of ConvNets is like a much-needed coordinate system -- we might not know what the destination looks like, but we now know how to build a map.

Deep Networks are performing better every month, but I’m still waiting for Isaac to come in and make our lives even easier. I want a simplification. But I'm not being pessimistic -- there is a flurry of activity in the ConvNet space for a very good reason (in case you didn't get to attend CVPR 2015), so I'll just be blunt: ConvNets fuckin' work! I just want the F=ma of deep learning.


Open Source Deep Learning for Computer Vision: Torch vs Caffe

CVPR 2015 started off with some excellent software tutorials on day one.  There is some great non-alpha deep learning software out there, and it has been making everybody's life easier.  At CVPR, we had both a Torch tutorial and a Caffe tutorial.  I attended the DIY Deep Learning Caffe tutorial and it was a full house -- standing room only for slackers like me who join the party only 5 minutes before it starts. Caffe is much more popular than Torch, but when talking to some power users of Deep Learning (like +Andrej Karpathy and other DeepMind scientists), a certain group of experts seems to be migrating from Caffe to Torch.



Caffe is developed at Berkeley, has a vibrant community, Python bindings, and seems to be quite popular among University students. Prof. Trevor Darrell at Berkeley is even looking for a Postdoc to help the Caffe effort. If I was a couple of years younger and a fresh PhD, I would definitely apply.

Instead of following the Python trend, Torch is Lua-based. There is no need for an interpreter like Matlab or Python -- Lua gives you the magic console. Torch is heavily used by Facebook AI Research Labs and Google's DeepMind Lab in London.  For those afraid of new languages like Lua, don't worry -- Lua is going to feel "easy" if you've dabbled in Python, Javascript, or Matlab. And if you don't like editing protocol buffer files by hand, definitely check out Torch.

It's starting to become clear that the future power of deep learning is going to come with its own self-contained software package like Caffe or Torch, and not from a dying breed of all-around tool-belts like OpenCV or Matlab. When you share creations made in OpenCV, you end up sharing source files, but with the Deep Learning toolkits, you end up sharing your pre-trained networks.  No longer do you have to think about a combination of 20 "little" algorithms for your computer vision pipeline -- you just think about which popular network architecture you want, and then the dataset.  If you have the GPUs and ample data, you can do full end-to-end training.  And if your dataset is small/medium, you can fine-tune the last few layers. You can even train a linear classifier on top of the final layer, if you're afraid of getting your hands dirty -- just doing that will beat the SIFTs, the HOGs, the GISTs, and all that was celebrated in the past two decades of computer vision.
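To make the "fine-tune or just put a linear classifier on top" recipe concrete, here is a minimal sketch (not from the original post) of the second, laziest option. It assumes you have already pushed your images through a pre-trained ConvNet (with Caffe, Torch, or any other toolkit) and saved the final-layer activations to disk; the file names and the 4096-dimensional feature size are hypothetical.

```python
# A minimal sketch: linear classifier on top of frozen deep features.
# Assumes deep_features.npy and labels.npy were produced beforehand by
# running images through a pre-trained ConvNet (file names are made up).
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.load("deep_features.npy")   # shape (num_images, 4096), e.g. fc7-style activations
y = np.load("labels.npy")          # shape (num_images,), integer class labels

# Simple train/validation split
split = int(0.8 * len(X))
X_train, X_val = X[:split], X[split:]
y_train, y_val = y[:split], y[split:]

# A linear classifier on frozen deep features is a surprisingly strong baseline
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Validation accuracy:", clf.score(X_val, y_val))
```

Even this hands-off approach tends to beat the hand-crafted descriptors celebrated over the past two decades on most recognition benchmarks.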

Related Article: Torch vs Theano on fastml.com
Related Code: Andrea Vedaldi's MatConvNet Deep Learning Library for MATLAB users

The way in which ConvNets are being used at CVPR 2015 makes me feel like we're close to something big.  But before we strike gold, ConvNets still feel like a Calculus of Shadows, merely "hoping" to get at something bigger, something deeper, and something more meaningful. I think the flurry of research which investigates visualization algorithms for ConvNets suggests that even the network architects aren't completely sure what is happening behind the scenes.

The Video Game Engine Inside Your Head: A different path towards Machine Intelligence


Josh Tenenbaum gave an invited talk titled The Video Game Engine Inside Your Head at the Scene Understanding Workshop on the last day of the CVPR 2015 conference. You can read a summary of his ideas in a short Scientific American article. While his talk might appear to be unconventional by CVPR standards, it is classic Tenenbaum. In his world, there is no benchmark to beat, no curves to fit to shadows, and if you allow my LeCun-Descartes analogy, then in some sense Prof. Tenenbaum might be the modern-day Aristotle. When Prof. Jianxiong Xiao introduced Josh with a grand intro, he was probably right -- this is one of the most intelligent speakers you can find.  He speaks 100 words a second, and you can't help but feel your brain enlarge as you listen.

One of Josh's main research themes is going beyond the shadows of image-based recognition.  Josh's work is all about building mental models of the world, and his work can really be thought of as analysis-by-synthesis. Inside his models is something like a video game engine, and he showed lots of compelling examples of inferences that are easy for people, but nearly impossible for the data-driven ConvNets of today.  It's not surprising that his student is working at Google's DeepMind this summer.

A couple of years ago, Probabilistic Graphical Models (the marriage of Graph Theory and Probabilistic Methods) were all the rage.  Josh gave us a taste of Probabilistic Programming, and while we're not yet seeing these new methods dominate the world of computer vision research, keep your eyes open. He mentioned a recent Nature paper (citation below) from another well-respected machine intelligence researcher, which should keep the trendsetters excited for quite some time. The bad-ass looking Julia code he showed speaks for itself.

Probabilistic machine learning and artificial intelligence. Zoubin Ghahramani. Nature 521, 452–459 (28 May 2015) doi:10.1038/nature14541




To see some of Prof. Tenenbaum's ideas in action, take a look at the following CVPR 2015 paper, titled Picture: A Probabilistic Programming Language for Scene Perception. Congrats to Tejas D. Kulkarni, the first author, an MIT student, who got the Best Paper Honorable Mention prize for this exciting new work. Google DeepMind, you're going to have one fun summer.




Object Detectors Emerge in Deep Scene CNNs

There were lots of great presentations at the Scene Understanding Workshop, and another talk that truly stood out was about a new large-scale dataset (MIT Places) and a thorough investigation of what happens when you train on scenes vs. objects.



Antonio Torralba from MIT gave the talk about the Places Database, with an in-depth analysis of what is learned when you train on object-centric databases like ImageNet vs. scene-centric databases like MIT Places. You can check out the "Object Detectors Emerge" slides or their ArXiv paper to learn more. Great work by an up-and-coming researcher, Bolei Zhou!

Overheard at CVPR: ArXiv Publishing Frenzy & Baidu Fiasco 


In the long run, the recent trend of rapidly pushing preprints to ArXiv.org is great for academic and industry research alike. When you have a large collection of experts exploring ideas at very fast rates, waiting 6 months until the next conference deadline just doesn't make sense.  The only downside is that it makes new CVPR papers feel old. It seems like everybody has already perused the good stuff the day it went up on ArXiv. But you get your "idea claim" without worrying that a naughty reviewer will be influenced by your submission. Double-blind reviewing, get ready for a serious revamp.  We now know who's doing what, significantly before publication time.  Students, publish-or-perish just got a new name. Whether the ArXiv frenzy is a good or a bad thing is up to you, and it's probably more a function of your seniority than anything else. But the CV buzz is definitely getting louder and will continue to do so.

The Baidu cheating scandal might appear to be big news for outsiders just reading the Artificial Intelligence headlines, but overfitting to the testing set is nothing new in Computer Vision. Papers get retracted, grad students often evaluate their algorithms on test sets too many times, and the truth is that nobody's perfect.  When it's important to be #1, don't be surprised that your competition is being naughty. But it's important to realize the difference between ground-breaking research and petty percentage chasing. We all make mistakes, and under heavy pressure, we're all likely to show our weaknesses.  So let's laugh about it.  Let's hire the best of the best, encourage truly great research, and stop chasing percentages.  The truth is that a lot of the top performing methods are more similar than different.


Conclusion
CVPR has been constantly growing in attendance. We now have PhD students, startups, professors, recruiters, big companies, and even undergraduates coming to the show. Will CVPR become the new SIGGRAPH?

CVPR attendance plot from Changbo Hu


ConvNets are here to stay, but if we want ConvNets to be more than a mere calculus of shadows, there's still ample work to be done. Geoff Hinton's capsules keep popping up during midnight discussions. "I want to replace unstructured layers with groups of neurons that I call 'capsules' that are a lot more like cortical columns" -- Geoff Hinton during his Reddit AMA. A lot of people (like Prof. Abhinav Gupta from CMU) are also talking about unsupervised CNN training, and my prediction is that learning large ConvNets from videos without annotations is going to be big at next year's CVPR.

Most importantly, when the titans of Deep Learning get to mention what's wrong with their favorite methods, I only expect the best research to follow. Happy computing and remember, never stop learning.


Wednesday, May 06, 2015

Dyson 360 Eye and Baidu Deep Learning at the Embedded Vision Summit in Santa Clara

Bringing Computer Vision to the Consumer

Mike Aldred
Electronics Lead, Dyson Ltd

While vision has been a research priority for decades, the results have often remained out of reach of the consumer. Huge strides have been made, but the final, and perhaps toughest, hurdle is how to integrate vision into real world products. It’s a long road from concept to finished machine, and to succeed, companies need clear objectives, a robust test plan, and the ability to adapt when those fail. 




The Dyson 360 Eye robot vacuum cleaner uses computer vision as its primary localization technology. Ten years in the making, it was taken from bleeding-edge academic research to a robust, reliable, and manufacturable solution by Mike Aldred and his team at Dyson.

Mike Aldred’s keynote at next week's Embedded Vision Summit (May 12th in Santa Clara) will chart some of the high and lows of the project, the challenges of bridging between academia and business, and how to use a diverse team to take an idea from the lab into real homes.

Enabling Ubiquitous Visual Intelligence Through Deep Learning

Ren Wu 
Distinguished Scientist, Baidu Institute of Deep Learning

Deep learning techniques have been making headlines lately in computer vision research. Using techniques inspired by the human brain, deep learning employs massive replication of simple algorithms which learn to distinguish objects through training on vast numbers of examples. Neural networks trained in this way are gaining the ability to recognize objects as accurately as humans. Some experts believe that deep learning will transform the field of vision, enabling the widespread deployment of visual intelligence in many types of systems and applications. But there are many practical problems to be solved before this goal can be reached. For example, how can we create the massive sets of real-world images required to train neural networks? And given their massive computational requirements, how can we deploy neural networks into applications like mobile and wearable devices with tight cost and power consumption constraints? 




Ren Wu’s morning keynote at next week's Embedded Vision Summit (May 12th in Santa Clara) will share an insider’s perspective on these and other critical questions related to the practical use of neural networks for vision, based on the pioneering work being conducted by his team at Baidu.

Vision-as-a-Service: Democratization of Vision for Consumers and Businesses

Herman Yau
Co-founder and CEO, Tend

Hundreds of millions of video cameras are installed around the world—in businesses, homes, and public spaces—but most of them provide limited insights. Installing new, more intelligent cameras requires massive deployments with long time-to-market cycles. Computer vision enables us to extract meaning from video streams generated by existing cameras, creating value for consumers, businesses, and communities in the form of improved safety, quality, security, and health. But how can we bring computer vision to millions of deployed cameras? The answer is through “Vision-as-a-Service” (VaaS), a new business model that leverages the cloud to apply state-of-the-art computer vision techniques to video streams captured by inexpensive cameras. Centralizing vision processing in the cloud offers some compelling advantages, such as the ability to quickly deploy sophisticated new features without requiring upgrades of installed camera hardware. It also brings some tough challenges, such as scaling to bring intelligence to millions of cameras. 





Herman Yau's talk at next week's Embedded Vision Summit (May 12th in Santa Clara) will explain the architecture and business model behind VaaS, show how it is being deployed in a wide range of real-world use cases, and highlight some of the key challenges and how they can be overcome.

Embedded Vision Summit on May 12th, 2015

There will be many more great presentations at the upcoming Embedded Vision Summit.  From the range of topics, it looks like any startup with an interest in computer vision will benefit from attending. The entire day is filled with talks by great presenters (Gary Bradski will talk about the latest developments in OpenCV). You can see the full list of speakers (Embedded Vision Summit 2015 List of Speakers) or the day's agenda (Embedded Vision Summit 2015 Agenda).

Embedded Vision Summit 2015 Registration ($249 for the one-day event, food included)

Demos during lunch: The Technology Showcase at the Embedded Vision Summit will highlight demonstrations of technology for computer vision-based applications and systems from a range of participating companies.



The vision topics covered will be: Deep Learning, CNNs, Business, Markets, Libraries, Standards, APIs, 3D Vision, and Processors. I will be there with my vision.ai team, together with some computer vision guys from KnitHealth, Inc, a new SF-based Health Vision Company. If you're interested in meeting with us, let's chat at the Vision Summit.

What kind of startups and companies should attend? Definitely robotics. Definitely vision sensors. Definitely those interested in deep learning hardware implementations. Seems like even half of the software engineers at Google could benefit from learning about their favorite deep learning algorithms being optimized for hardware. 


Tuesday, May 05, 2015

Deep Learning vs Big Data: Who owns what?

In order to learn anything useful, large-scale multi-layer deep neural networks (aka Deep Learning systems) require a large amount of labeled data. There is clearly a need for big data, but there are only a few places where big visual data is available. Today we'll take a look at one of the most popular sources of big visual data, peek inside a trained neural network, and ask ourselves some data/model ownership questions. The fundamental question to keep in mind is the following: "Are the learned weights of a neural network derivative works of the input images?" In other words, when deep learning touches your data, who owns what?



Background: The Deep Learning "Computer Vision Recipe"
One of today's most successful machine learning techniques is called Deep Learning. The broad interest in Deep Learning is backed by some remarkable results on real-world data interpretation tasks dealing with speech[1], text[2], and images[3]. Deep learning and object recognition techniques have been pioneered by academia (University of Toronto, NYU, Stanford, Berkeley, MIT, CMU, etc), picked up by industry (Google, Facebook, Snapchat, etc), and are now fueling a new generation of startups ready to bring visual intelligence to the masses (Clarifai.com, Metamind.io, Vision.ai, etc). And while it's still not clear where Artificial Intelligence is going, Deep Learning will be a key player.

Related blog post: Deep Learning vs Machine Learning vs Pattern Recognition
Related blog post: Deep Learning vs Probabilistic Graphical Models vs Logic

For visual object recognition tasks, the most popular models are Convolutional Neural Networks (also known as ConvNets or CNNs). They can be trained end-to-end without manual feature engineering, but this requires a large set of training images (sometimes called big data, or big visual data). These large neural networks start out as a Tabula Rasa (or "blank slate"), and the full system is trained in an end-to-end fashion using a heavily optimized implementation of Backpropagation (informally called "backprop"). Backprop is nothing but the chain rule you learned in Calculus 101, and today's deep neural networks are trained in almost the same way they were trained in the 1980s. But today's highly-optimized implementations of backprop are GPU-based and can process orders of magnitude more data than was approachable in the pre-internet, pre-cloud, pre-GPU golden years of Neural Networks. The output of the deep learning training procedure is a set of learned weights for the different layers defined in the model architecture -- millions of floating point numbers representing what was learned from the images. So what's so interesting about the weights? It's the relationship between the weights and the original big data that will be under scrutiny today.
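Since the paragraph above leans on the claim that backprop is "nothing but the chain rule," here is a minimal sketch (not from the original post) of a tiny two-layer network trained with plain numpy. The data, layer sizes, and learning rate are all made up for illustration; the point is that every gradient below is just the chain rule applied layer by layer, followed by a gradient descent update of the learned weights.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100, 3)                            # 100 toy examples, 3 features
y = (X.sum(axis=1) > 0).astype(float)[:, None]   # toy binary labels

W1, b1 = rng.randn(3, 8) * 0.1, np.zeros(8)      # layer-1 weights
W2, b2 = rng.randn(8, 1) * 0.1, np.zeros(1)      # layer-2 weights
lr = 0.1

for step in range(1000):
    # Forward pass
    h = np.maximum(0, X @ W1 + b1)               # ReLU hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))         # sigmoid output
    loss = np.mean((p - y) ** 2)                 # squared-error loss

    # Backward pass: the chain rule, layer by layer
    dp  = 2 * (p - y) / len(X)                   # dLoss/dp
    dz2 = dp * p * (1 - p)                       # through the sigmoid
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dh  = dz2 @ W2.T
    dz1 = dh * (h > 0)                           # through the ReLU
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

    # Gradient descent update on the learned weights
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
```

The output of training, exactly as described above, is nothing more than the final values of W1, b1, W2, and b2 -- a pile of floating point numbers summarizing what was learned from the data.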

"Are weights of a trained network based on ImageNet a derived work, a cesspool of millions of copyright claims? What about networks trained to approximate another ImageNet network?"
[This question was asked on HackerNews by kastnerkyle in the comments of A Revolutionary Technique That Changed Machine Vision.]

In the context of computer vision, this question truly piqued my interest, and as we start seeing robots and AI-powered devices enter our homes I expect much more serious versions of this question to arise in the upcoming decade. Let's see how some of these questions are being addressed in 2015.

1. ImageNet: Non-commercial Big Visual Data

Let's first take a look at the most common data source for Deep Learning systems designed to recognize a large number of different objects, namely ImageNet[4]. ImageNet is the de-facto source of big visual data for computer vision researchers working on large scale object recognition and detection. The dataset debuted in a 2009 CVPR paper by Fei-Fei Li's research group and was put in place to replace both PASCAL datasets (which lacked size and variety) and LabelMe datasets (which lacked standardization). ImageNet grew out of Caltech101 (a 2004 dataset focusing on image categorization, also pioneered by Fei-Fei Li) so personally I still think of ImageNet as something like "Stanford10^N". ImageNet has been a key player in organizing the scale of data that was required to push object recognition to its new frontier, the deep learning phase.

ImageNet has over 15 million images in its database as of May 1st, 2015.


Problem: Lots of extremely large datasets are mined from internet images, but these images often come with their own copyrights. This prevents one from freely collecting and selling such images, so from a commercial point of view, some care has to be taken when creating such a dataset. For research to keep pushing the state-of-the-art on real-world recognition problems, we have to use standard big datasets (representative of what is found on the real-world internet), foster a strong sense of community centered around sharing results, and respect the copyrights of the original sources.

Solution: ImageNet decided to publicly provide links to the dataset images so that they can be downloaded without having to be hosted on a University-owned server. The ImageNet website only serves the image thumbnails and provides a copyright infringement clause together with instructions on where to file a DMCA takedown notice. The dataset organizers provide the entire dataset only after the user agrees to terms of access prohibiting commercial use. See the ImageNet clause below (taken on May 5th, 2015).

"ImageNet does not own the copyright of the images. ImageNet only provides thumbnails and URLs of images, in a way similar to what image search engines do. In other words, ImageNet compiles an accurate list of web images for each synset of WordNet. For researchers and educators who wish to use the images for non-commercial research and/or educational purposes, we can provide access through our site under certain conditions and terms."

2. Caffe: Unrestricted Use Deep Learning Models

Now that we have a good idea of where to download big visual data and an understanding of the terms that apply, let's take a look at the other end of the spectrum: the output of the Deep Learning training procedure. We'll take a look at Caffe, one of the most popular Deep Learning libraries, which was engineered to handle ImageNet-like data.  Caffe provides an ecosystem for sharing models (the Model Zoo), and is becoming an indispensable tool for today's computer vision researcher. Caffe is developed at the Berkeley Vision and Learning Center (BVLC) and by community contributors -- it is open source.

Problem: As a project that started at a University, Caffe's goal is to be the de-facto standard for creating, training, and sharing Deep Learning models. The shared models were initially licensed for non-commercial use, but the problem is that a new wave of startups is using these techniques, so there must be a licensing agreement which allows Universities, large companies, and startups to explore the same set of pretrained models.

Solution: The current model licensing for Caffe is unrestricted use. This is really great for a broad range of hackers, scientists, and engineers.  The models used to be shared with a non-commercial clause. Below is the entire model licensing agreement from the Model License section of Caffe (taken on May 5th, 2015).

"The Caffe models bundled by the BVLC are released for unrestricted use. 

These models are trained on data from the ImageNet project and training data includes internet photos that may be subject to copyright. 

Our present understanding as researchers is that there is no restriction placed on the open release of these learned model weights, since none of the original images are distributed in whole or in part. To the extent that the interpretation arises that weights are derivative works of the original copyright holder and they assert such a copyright, UC Berkeley makes no representations as to what use is allowed other than to consider our present release in the spirit of fair use in the academic mission of the university to disseminate knowledge and tools as broadly as possible without restriction." 

3. Vision.ai: Dataset generation and training in your home 

Deep Learning learns a summary of the input data, but what happens if a different kind of model memorizes bits and pieces of the training data? And more importantly, what if there are things inside the memorized bits which you might not want shared with outsiders?  For this case study, we'll look at Vision.ai and their real-time computer vision server, which is designed to simultaneously create a dataset and learn about an object's appearance. Vision.ai software can be applied to real-time training from videos as well as live webcam streams.

Instead of starting with big visual data collected from internet images (like ImageNet), the vision.ai training procedure is based on a person waving an object of interest in front of the webcam. The user bootstraps the learning procedure with an initial bounding box, and the algorithm continues learning hands-free. As the algorithm learns, it stores a partial history of what it previously saw, effectively creating its own dataset on the fly. Because the vision.ai convolutional neural networks are designed for detection (where an object only occupies a small portion of the image), there is a large amount of background data present inside the collected dataset. At the end of the training procedure you get both the Caffe-esque bit (the learned weights) and the ImageNet bit (the collected images). So what happens when it's time to share the model?

A user training a cup detector using vision.ai's real-time detector training interface


Problem: Training in your home means that potentially private and sensitive information is contained inside the backgrounds of the collected images. If you train in your home and make the resulting object model public, think twice about what you're sharing. Sharing can also be problematic if you have trained an object detector from a copyrighted video/images and want to share/sell the resulting model.

Solution: When you save a vision.ai model to disk, you get both a compiled model and the full model. The compiled model is the full model sans the images (thus much smaller). This allows you to maintain fully editable models on your local computer, and share the compiled model (essentially only the learned weights), without the chance of anybody else peeking into your living room. Vision.ai's computer vision server called VMX can run both compiled and uncompiled models; however, only uncompiled models can be edited and extended. In addition, vision.ai provides their vision server as a standalone install, so that all of the training images and computations can reside on your local computer. In brief, vision.ai's solution is to allow you to choose whether you want to run the computations in the cloud or locally, and whether you want to distribute full models (with background images) or the compiled models (solely what is required for detection). When it comes to sharing the trained models and/or created datasets, you are free to choose your own licensing agreement.
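As a rough illustration of the compiled-versus-full distinction (this is not vision.ai's actual code or file format -- just a hypothetical sketch), compiling a model amounts to stripping the stored training images and keeping only the learned weights:

```python
# Hypothetical sketch of "full" vs "compiled" model files (not vision.ai's API).
import pickle

def save_full_model(path, weights, training_images):
    """Full model: editable and extendable later, but it contains the
    (possibly private) images collected during training."""
    with open(path, "wb") as f:
        pickle.dump({"weights": weights, "images": training_images}, f)

def compile_model(full_model_path, compiled_path):
    """Compiled model: drops the images, keeping only what detection needs."""
    with open(full_model_path, "rb") as f:
        model = pickle.load(f)
    with open(compiled_path, "wb") as f:
        pickle.dump({"weights": model["weights"]}, f)
```

The compiled file is much smaller and safe to share, while the full file stays on your machine if you want to keep editing the model later.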

4. Open Problems for Licensing Memory-based Machine Learning Models

Deep Learning methods aren't the only techniques applicable to object recognition. What if our model were a Nearest-Neighbor classifier using raw RGB pixels? A Nearest-Neighbor classifier is a memory-based classifier which memorizes all of the training data -- the model is the training data. It would be contradictory to license the same set of data differently if one day it is viewed as training data and another day as the output of a learning algorithm. I wonder if there is a way to reconcile the kind of restrictive non-commercial licensing behind ImageNet with the unrestricted-use licensing strategy of the Caffe Deep Learning models. Is it possible to have one hacker-friendly data/model license agreement to rule them all?
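To see why the training-data-versus-model distinction collapses here, consider this minimal sketch (not from the original post) of a nearest-neighbor classifier on raw RGB pixels; the image sizes and data are made up for illustration. The "model" below literally is the stored training set:

```python
import numpy as np

class NearestNeighborClassifier:
    def fit(self, images, labels):
        # "Training" is just memorization: flatten and store every image.
        self.X = images.reshape(len(images), -1).astype(float)
        self.y = labels
        return self

    def predict(self, image):
        # Return the label of the memorized image with the smallest L2 distance.
        d = np.linalg.norm(self.X - image.reshape(1, -1).astype(float), axis=1)
        return self.y[np.argmin(d)]

# Toy usage: ten random 32x32 RGB "images" with binary labels
train_images = np.random.randint(0, 256, size=(10, 32, 32, 3))
train_labels = np.arange(10) % 2
clf = NearestNeighborClassifier().fit(train_images, train_labels)
print(clf.predict(train_images[3]))   # a training image gets its own label back
```

Distribute this classifier and you have distributed the images themselves, so any license that treats the data and the trained model differently runs into exactly the contradiction described above.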

Conclusion

Don't be surprised if neural network upgrades come as part of your future operating system. As we transition from a data economy (sharing images) to a knowledge economy (sharing neural networks), legal/ownership issues will pop up. I hope that the three scenarios I covered today (big visual data, sharing deep learning models, and training in your home) will help you think about the future legal issues that might come up when sharing visual knowledge. When AI starts generating its own art (maybe by re-synthesizing old pictures), legal issues will pop up. And when your competitor starts selling your models and/or data, legal issues will resurface. Don't be surprised if the MIT license vs. GPL license vs. Apache License debate resurges in the context of pre-trained deep learning models. Who knows, maybe AI Law will become the next big thing.

References
[1] Deep Speech: Accurate Speech Recognition with GPU-Accelerated Deep Learning: NVIDIA dev blog post about Baidu's work on speech recognition using Deep Learning. Andrew Ng is working with Baidu on Deep Learning.

[2] Text Understanding from Scratch: ArXiv paper from Facebook about end-to-end training of text understanding systems using ConvNets. Yann LeCun is working with Facebook on Deep Learning.

[3] ImageNet Classification with Deep Convolutional Neural Networks. Seminal 2012 paper from the Neural Information Processing Systems (NIPS) conference which showed breakthrough performance from a deep neural network. The paper came out of the University of Toronto, but most of these guys are now at Google.  Geoff Hinton is working with Google on Deep Learning.

[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei, ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.

Jia Deng is now an assistant professor at the University of Michigan, where he is growing his research group. If you're interested in starting a PhD in deep learning and vision, check out his call for prospective students. This might be a younger version of Andrew Ng.

Richard Socher is the CTO and Co-Founder of MetaMind, a new startup in the Deep Learning space. They are VC-backed and have plenty of room to grow.

Jia Li is now Head of Research at Snapchat, Inc. I can't say much, but take a look at the recent VentureBeat article: Snapchat is quietly building a research team to do deep learning on images, videos. Jia and I overlapped at Google Research back in 2008.

Fei-Fei Li is currently the Director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab. See the article on Wired: If we want our machines to think, we need to teach them to see. Yann, you have some competition.

Yangqing Jia created the Caffe project during his PhD at UC Berkeley. He is now a research scientist at Google.

Tomasz Malisiewicz is the Co-Founder of Vision.ai, which focuses on real-time training of vision systems -- something which is missing in today's Deep Learning systems. Come say hi at CVPR.