Tuesday, August 18, 2009

exciting stuff at BAVM2009 #1: joint regularization

There were a couple of cool computer vision ideas that I was exposed to at BAVM2009. First, Trevor Darrell mentioned some cool work by Ariadna Quattoni on L1/L_inf regularization. The basic idea, which has also recently been used in other ICML 2009 works such as Han Liu and Mark Palatucci's Blockwise Coordinate Descent, is that you want to regularize across a bunch of problems. This is sometimes referred to as multi-task learning. Imagine solving two SVM optimization problems to find linear classifiers for detecting cars and bicycles in images. It is reasonable to expect that in high dimensional spaces these two classifiers will have something in common. To provide more intuition, it might be the case that your feature set provides many irrelevant variables, and when learning these classifiers independently much work is spent on removing these dumb variables. By doing some sort of joint regularization (or joint feature selection), you can share information across seemingly distinct classification problems.
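As a rough sketch of why this encourages joint feature selection (using numpy and a made-up weight matrix, not anything from the actual papers): the L1/L_inf penalty sums, over features, the largest absolute weight that any task assigns to that feature. A feature is therefore only "cheap" if every task ignores it, which pushes the tasks to zero out the same irrelevant dimensions.

```python
import numpy as np

def l1_linf_penalty(W):
    """L1/L_inf norm of a (num_features x num_tasks) weight matrix:
    sum over features of the max absolute weight across tasks."""
    return np.sum(np.max(np.abs(W), axis=1))

# Hypothetical weights for two tasks (say, car vs. bicycle classifiers)
# over four features -- just for illustration.
W = np.array([[ 0.5, -0.4],   # relevant to both tasks
              [ 0.0,  0.0],   # irrelevant to both: costs nothing
              [ 0.3,  0.0],   # used by task 1 only
              [ 0.0, -0.2]])  # used by task 2 only
print(l1_linf_penalty(W))  # 0.5 + 0.0 + 0.3 + 0.2 = 1.0
```

Note that a feature used by even one task pays the full per-feature price, so the penalty prefers weight matrices whose rows are either entirely zero or shared across tasks.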

In fact, when I was talking about my own CVPR08 work, Daphne Koller suggested that this sort of regularization might work for my task of learning distance functions. However, I am currently exploiting the decoupling I get from skipping cross-problem regularization and solving each distance function learning problem independently. While regularization might be desirable, it couples the problems together, and it might be difficult to solve hundreds of thousands of such problems jointly.

I will mention some other cool works in future posts.

Friday, August 14, 2009

Bay Area Vision Meeting (BAVM 2009): Image and Video Understanding

Tomorrow (Friday) afternoon is BAVM 2009, a Bay Area workshop on Image and Video Understanding, which will be held at Stanford this year. It is being organized by Juan Carlos Niebles, one of Fei-Fei Li's students, and I will be there representing CMU. I have a poster about some new research, and getting feedback is always good, but I'm really excited about meeting some of the other graduate students who work on image understanding. The Berkeley group has been pushing hard on segmentation-driven image understanding, so seeing what they're up to should be interesting. There will also be many fellow Googlers and researchers from companies in the Bay Area, so it will be a good place to network.


I look forward to hearing the invited speakers and seeing the bleeding-edge stuff during the poster sessions. I'll try to blog a little bit about some of the coolest stuff I encounter when I get back.

Friday, August 07, 2009

Graphviz for Object Recognition Research

Many of the techniques that I employ for object recognition utilize a non-parametric representation of visual concepts. In many such non-parametric models, examples of visual concepts are stored in a database as opposed to "abstracted away" as is commonly done when fitting a parametric appearance model. When designing such non-parametric models, I find it important to visualize the relationships between concepts. The ability to visualize what you're working on creates an intimate link between you and your ideas and can often drive creativity.

One way to visualize a database of exemplar objects, or a "soup of concepts," is as a graph. This generally makes sense when it is meaningful to define an edge between two atoms. While a vector-drawing utility (such as Illustrator) is great for manually putting together graphs for presentations or papers, automated visualization of large graphs is critical for debugging many graph-based algorithms.

A really cool (and secret) figure which I generated using Graphviz somewhat recently can be seen below. I use Matlab to write a simple .dot file and then call something like neato to get the pdf output. Click on the image to see the vectorized pdf automatically produced by Graphviz.
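The pipeline is simple enough to sketch. Here is a rough Python equivalent of what the Matlab side does (the node names and edges are hypothetical placeholders, not the secret figure's actual data): write a plain-text .dot file, then hand it to neato for layout.

```python
import subprocess

def write_dot(edges, path="graph.dot"):
    """Write an undirected Graphviz .dot file from (node_a, node_b) edge pairs."""
    lines = ["graph exemplars {"]
    for a, b in edges:
        lines.append('  "%s" -- "%s";' % (a, b))
    lines.append("}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Made-up exemplar links; in practice these would come from your algorithm.
edges = [("car_001", "car_017"), ("car_017", "bike_003")]
write_dot(edges)

# Layout and render (requires Graphviz to be installed):
# subprocess.run(["neato", "-Tpdf", "graph.dot", "-o", "graph.pdf"], check=True)
```

The nice part is that the .dot format is just text, so dumping it from Matlab (or anything else) is a few fprintf calls, and you can swap neato for dot or fdp to try different layout engines on the same file.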

Graphviz generated graph
What does this graph show? It's a secret... (details coming soon)