
Saturday, September 19, 2009

joint regularization slides

Trevor Darrell posted his slides from BAVM about joint regularization across classifier learning. I think this is a really cool and promising idea and I plan on applying it to my own research on local distance function learning when I get back to CMU in October.

The idea is that there should be significant overlap between what a cat classifier learns and what a dog classifier learns. So why learn the two classifiers independently?

My paper on the Visual Memex got accepted to NIPS 2009, so I will be there presenting my work in December. Be sure to read future blog posts about this work, which strives to break free from using categories in Computer Vision.

On another note, today was my last day interning at Google (a former Robograd was my mentor) and I will be driving back to Pittsburgh from Mountain View this Sunday. Yosemite is the first stop! I plan on doing some light hiking with my new Vibram Five Fingers! I've been using them for deadlifting and they've been great for both working out and just chilling/coding around the Googleplex.


Tuesday, August 18, 2009

exciting stuff at BAVM2009 #1: joint regularization

There were a couple of cool computer vision ideas that I was exposed to at BAVM2009. First, Trevor Darrell mentioned some cool work by Ariadna Quattoni on L1/L_inf regularization. The basic idea, which has also recently been used in other ICML 2009 work such as Han Liu and Mark Palatucci's Blockwise Coordinate Descent, is that you want to regularize across a bunch of problems. This is sometimes referred to as multi-task learning. Imagine solving two SVM optimization problems to find linear classifiers for detecting cars and bicycles in images. It is reasonable to expect that in high dimensional spaces these two classifiers will have something in common. To provide more intuition, it might be the case that your feature set contains many irrelevant variables, and when the classifiers are learned independently, each one wastes effort weeding out these dumb variables on its own. By doing some sort of joint regularization (or joint feature selection), you can share information across seemingly distinct classification problems.
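To make the sharing concrete, here is a minimal sketch of joint feature selection across two tasks. This is not Quattoni et al.'s actual algorithm: for simplicity it uses an L1/L2 mixed norm (row-wise group sparsity across tasks), which has an easy proximal step, rather than the L1/L_inf norm from the talk, and the function name, data layout, and toy data are purely my own illustration.

import numpy as np

def joint_feature_selection(Xs, ys, lam=0.1, lr=0.01, n_iters=1000):
    # Xs: list of (n_t x d) feature matrices, one per task (e.g. cars, bicycles)
    # ys: list of label vectors with entries in {-1, +1}
    # Returns W (d x T): column t is the linear classifier for task t.
    d = Xs[0].shape[1]
    T = len(Xs)
    W = np.zeros((d, T))
    for _ in range(n_iters):
        # gradient of the summed logistic losses, one task per column
        G = np.zeros_like(W)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            margins = y * (X @ W[:, t])
            G[:, t] = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
        W -= lr * G
        # proximal step for lam * sum_j ||W[j, :]||_2:
        # group soft-thresholding that zeros out entire feature rows,
        # i.e. features that get dropped by *all* tasks at once
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
    return W

# toy usage: two tasks that share the same two relevant dimensions
rng = np.random.default_rng(0)
X1, X2 = rng.standard_normal((200, 50)), rng.standard_normal((200, 50))
y1 = np.sign(X1[:, 0] + X1[:, 1] + 0.1 * rng.standard_normal(200))
y2 = np.sign(X2[:, 0] - X2[:, 1] + 0.1 * rng.standard_normal(200))
W = joint_feature_selection([X1, X2], [y1, y2], lam=0.05)
# most feature rows of W end up exactly zero, and the surviving rows
# concentrate on the shared dimensions 0 and 1

The point of penalizing whole rows of W (one row per feature, one column per task) is that a feature is either kept for everyone or discarded for everyone, which is exactly the "share the feature selection across problems" intuition above; learning each column separately with a plain L1 penalty would not give you that coupling.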

In fact, when I was talking about my own CVPR08 work, Daphne Koller suggested that this sort of regularization might work for my task of learning distance functions. However, I currently exploit the independence that comes from not doing any cross-problem regularization: each distance function learning problem can be solved on its own. While joint regularization might be desirable, it couples the problems, and it could be difficult to solve hundreds of thousands of them jointly.

I will mention some other cool works in future posts.