This past Friday I went to SUNS 2009 at MIT, and in my opinion the coolest talks were by Aude Oliva, David Forsyth, and Ce Liu.
While I will not summarize their talks, which referred to unpublished work, I will pose a few high-level questions that capture (as I understood them) the ideas conveyed by these speakers.
Aude: What is the interplay between scene-level context, local category-specific features, and category-independent saliency that makes us explore images in a certain way when looking for objects?
David: Is naming all the objects depicted in an image the best way to understand an image? Don't we really want some type of understanding that will allow us to reason about never-before-seen objects?
Ce: Can we understand the space of all images by cleverly interpolating between what we are currently perceiving and what we have seen in the past?
Those three ideas are quite interesting.
Update: Aude's slides, "Context rules supreme in visual search through real-world scenes," are now online.