All the cool vision kids are going, so why aren't you?
This will be my first ICCV ever! and my first trip to Spain! Seriously though, if you need to find me over the next week, come to Barcelona. There are lots of great papers out this year and I'll be sure to write about the few which I find interesting (and haven't already blogged about). If you want to learn more about the craziness behind ExemplarSVMs, or just to say 'Hi', don't hesitate to find me walking around the conference. I'll be there during all the workshop days too.
If anybody has a favourite ICCV2011 paper they want me to look at and perhaps write something about (hardcore object recognition please -- I don't care about illumination models), please send me your requests (in the comments below).
Can you post about the Marr Prize winners?
Because a sloppy introduction will not do for a Marr Prize winning paper, I will have to delay my Marr post until next week! Stay posted!
Great blog, nice posts and very interesting way of combining machine learning derived, mathematical concepts with a philosophical perspective.
Have you read this ICCV'11 paper: "Discriminative Learning of Relaxed Hierarchy for Large-scale Visual Recognition"?
I'm glad you enjoy the blog!
Regarding Tianshi's paper, I have looked over it. The most important detail for me to glean from that ICCV paper is that there is growing interest in models which explicitly "choose" the positives. What this means is that for every data point (or at least for every positive), there is a binary variable which modulates that data point's contribution to the loss function.
This simple idea was a key ingredient in my older CVPR2008 paper; it is also a big ingredient in the Lim et al. NIPS 2011 paper (from Torralba's group), as well as in papers from Daphne Koller's group. Pawan Kumar (from Daphne's group) used this idea in their self-paced learning work, and it seems to be popping up all over the place.
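To make the idea concrete, here is a minimal sketch (my own toy illustration, not code from any of the papers above) of how a binary selection variable can gate each example's loss. In the self-paced learning formulation, the selection has a closed form: an example is kept exactly when its current loss falls below a threshold 1/K, where K is a "pace" parameter that is gradually relaxed to admit harder examples.

```python
import numpy as np

def select_positives(losses, K):
    # Self-paced selection rule: v_i = 1 iff the example's current
    # loss is below 1/K; kept examples contribute to the next round
    # of training, discarded ones are ignored.
    return (losses < 1.0 / K).astype(int)

# Toy per-example losses for five positives (made-up numbers).
losses = np.array([0.1, 2.0, 0.05, 0.9, 3.5])
K = 0.8  # pace parameter; decreasing K raises the threshold 1/K
v = select_positives(losses, K)
print(v)  # examples with loss < 1.25 are kept
```

In a full training loop one would alternate between fitting the model on the selected examples and re-running this selection step with a slowly decreasing K, so easy examples shape the model first.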