Tuesday, February 28, 2006

some statistics terminology fun

In my Statistical Learning Theory class, the professor told a funny anecdote about the difference in terminology between the fields of computer science and statistics. The point of Larry Wasserman's short discussion was that the term 'inference' means two slightly different things in those two fields. The funny part of the discussion was when John Lafferty told us about a talk he attended a few years ago, in which the speaker also discussed the differences in terminology between those two fields. The speaker pointed out that the term 'data-mining,' often used in computer science, has an analogue in statistics: 'over-fitting.' This should make you laugh, because CS people view 'data-mining' as a good thing while statisticians view 'over-fitting' as a bad thing.

Wednesday, February 22, 2006

i'm not your ordinary sheep: PASCAL averages torralba-style

What do you get when you average 251 images containing sheep?


What do you get when you average 421 bounding boxes of manually segmented sheep?
I averaged the scenes containing each object class (Torralba-style), as well as the bounding boxes of the objects themselves, from the PASCAL 2006 Challenge (see the blog entry below). You can see the results here:

Means for PASCAL Visual Object Classes Challenge 2006: Dataset trainval
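For the curious, here is a minimal sketch of how such a Torralba-style average can be computed (my own illustration; the directory layout, image size, and file names are hypothetical). The bounding-box average is the same idea applied to the cropped boxes instead of the full scenes:

    # Sketch: per-pixel mean of a set of images (hypothetical paths).
    # Assumes Pillow and numpy; every image is resized to a common size first.
    import numpy as np
    from PIL import Image
    from glob import glob

    def average_images(paths, size=(128, 128)):
        """Resize each image to `size` and return the per-pixel mean."""
        acc = np.zeros((size[1], size[0], 3), dtype=np.float64)
        for p in paths:
            img = Image.open(p).convert("RGB").resize(size)
            acc += np.asarray(img, dtype=np.float64)
        return (acc / len(paths)).astype(np.uint8)

    # e.g. all trainval scenes containing sheep (hypothetical layout)
    mean_sheep = average_images(glob("VOC2006/sheep/*.jpg"))
    Image.fromarray(mean_sheep).save("mean_sheep.png")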

Monday, February 20, 2006

thinking about kats

In what follows, I shall explain why I've recently been thinking about cats. It all started last night when my friend thought he saw a paper on my desk titled "Graph Partition by Swendsen-Wang Cats." Of course, what he really saw was the paper called "Graph Partition by Swendsen-Wang Cuts." However, as he laughingly mentioned that he thought I was reading about spectral graph partitioning using cats (those fuzzy cute animals), my mind rapidly explored the consequences of utilizing animals such as cats for solving difficult computational problems.

How can one use cats to solve computationally intractable problems?

Consider the problem of object recognition. The goal is to take an image, perform some low-level image manipulation, and present the image to a cat. Then, utilizing a system that tracks the cat's physical behavior, one need only map the cat's response to the visual stimulus presented into a new signal -- a solution to the more difficult problem. The hypothesis underlying the Swendsen-Wang Cat Theory is that one can exploit the underlying high-level intelligence of primitive life forms to solve problems that are of interest to humans.

Thus I've been thinking about kats all of last night. I guess the word 'thinking' doesn't even do it justice in this context. If anybody is interested in other (perhaps even more credible) applications of cats, I can tell them about dynamic obstacle-avoiding path planning via cats or about space exploration via colonies of ants.

Saturday, February 18, 2006

PASCAL Visual Object Classes Recognition Challenge 2006

The goal of this challenge is to recognize objects from a number of visual object classes in realistic scenes (i.e. not pre-segmented objects). It is fundamentally a supervised learning problem in that a training set of labelled images is provided.

Detailed Information and Development kit at:
http://www.pascal-network.org/challenges/VOC/voc2006/index.html

<><><><><><><><><><><><><><><><><><><><><><><><>
TIMETABLE

* 14 Feb 2006 : Development kit (training and validation data plus
evaluation software) made available.

* 31 March 2006: Test set made available

* 21 April 2006: DEADLINE for submission of results

* 7 May 2006: Half-day (afternoon) challenge workshop to be held in
conjunction with ECCV06, Graz, Austria

<><><><><><><><><><><><><><><><><><><><><><><><>

Anybody down?
It's not like going to class is going to be more fun. :D

Monday, February 13, 2006

Keep Your Friends Close and Your Markov Blankets Closer

Let me begin by explaining the Markov Property and the concept of a Markov Blanket. In probability theory, a stochastic process has the Markov Property if the future state depends only on the current state. In other words, the future state is conditionally independent of the past given the current state.
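For a discrete-time process, this can be written (a standard formulation, not from the original post) as

    P(X_{n+1} \mid X_n, X_{n-1}, \dots, X_0) = P(X_{n+1} \mid X_n)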

Extending the notion of a one-dimensional chain of states to a network of states, the Markov Blanket of a node is its set of neighbouring nodes. In a Bayesian network, the Markov Blanket consists of the node's parents, the node's children, and the node's children's other parents.


Quoting Wikipedia, "The Markov blanket of a node is interesting because it identifies all the variables that shield off the node from the rest of the network. This means that the Markov blanket of a node is the only knowledge that is needed to predict the behaviour of that node."
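To make the definition concrete, here is a minimal sketch (my own illustration, not from the class) that collects the Markov blanket of a node in a Bayesian network represented as a map from each node to its list of parents; the toy network at the bottom is made up:

    # Sketch: Markov blanket in a Bayesian network given as {child: [parents]}.
    # Blanket of n = parents(n) + children(n) + other parents of n's children.
    def markov_blanket(node, parents):
        children = [c for c, ps in parents.items() if node in ps]
        blanket = set(parents.get(node, []))   # the node's parents
        blanket.update(children)               # the node's children
        for c in children:                     # the children's other parents
            blanket.update(parents[c])
        blanket.discard(node)                  # a node is not in its own blanket
        return blanket

    # Toy network (hypothetical): rain and sprinkler both cause wet grass.
    parents = {"grass_wet": ["rain", "sprinkler"], "rain": [], "sprinkler": []}
    print(markov_blanket("rain", parents))     # {'grass_wet', 'sprinkler'}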


I've recently been exposed to the world of Dynamic Programming for my Kinematics, Dynamics, and Control class -- taught by Chris Atkeson. For the first two assignments, I've tried to implement a Dijkstra-like Dynamic Programming algorithm for the optimal control of a robotic arm. I will not get into the technical details here, but the basic idea is to cleverly maintain the Markov Blanket of a set of alive nodes instead of performing full sweeps of configuration space when doing value iteration. It turns out that you cannot simply look at neighbors in a regular grid of quantized configurations; one must model the dynamics of the problem at hand. If anybody cares for a more detailed explanation, visit my KDC hw2 page.
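For a flavor of the idea, here is a rough sketch (my own simplification, not the actual homework code, and it ignores the dynamics issue mentioned above): a Dijkstra-like pass that expands a frontier of 'alive' states outward from the goal instead of sweeping the whole quantized space on every value-iteration pass. The toy 1-D chain at the end is hypothetical:

    # Sketch: Dijkstra-like dynamic programming over a quantized state space.
    # Rather than sweeping every state on every pass, pop the cheapest 'alive'
    # state and relax only its neighbors -- the frontier acts like a Markov
    # blanket around the set of states whose values are already final.
    import heapq

    def dijkstra_dp(states, neighbors, cost, goal):
        """Cost-to-go from every state to `goal`. neighbors(s) yields states
        that can transition into s; cost(t, s) is the cost of that move."""
        value = {s: float("inf") for s in states}
        value[goal] = 0.0
        alive = [(0.0, goal)]                  # frontier, keyed by cost-to-go
        while alive:
            v, s = heapq.heappop(alive)
            if v > value[s]:
                continue                       # stale queue entry
            for t in neighbors(s):
                new_v = v + cost(t, s)
                if new_v < value[t]:           # relax the edge t -> s
                    value[t] = new_v
                    heapq.heappush(alive, (new_v, t))
        return value

    # Toy 1-D chain: states 0..4, unit-cost moves, goal at state 0.
    V = dijkstra_dp(range(5),
                    lambda s: [x for x in (s - 1, s + 1) if 0 <= x <= 4],
                    lambda t, s: 1.0, goal=0)
    print(V)  # {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0}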

Why am I -- a vision hacker -- talking about optimal control of a robotic arm? Why should I care about direct policy search, value iteration, dynamic programming, and planning in high-dimensional configuration spaces? Vision is my main area of research, but I am a roboticist and this is what I do. One day robots are going to need to act in order to understand the world around them (or act because they understand the world around them), and I'm not going to simply pass my vision system over to some planning/control people so they can integrate it into their robotic system.

Friday, February 10, 2006

physical strength = mental health

In 11th grade of high school, I was at the peak of my physical strength. My one-rep max was 225 lbs on the bench press and I weighed 165 pounds. After a few stressful years of college filled with quantum mechanics and 3D laser data analysis, I decided to start a semi-strict weight-lifting schedule when I moved to Pittsburgh. I've been lifting weights and running on a steady basis since August and I feel great!

I'm at 172 lbs now, and almost back at the 225-lb bench max level. Yesterday I had a good lift and was able to get 205 lbs up 4 times. This means that in just 2 weeks I should be back to my 225-lb max! w00t! My goal is to get 240 lbs up once before the summer season starts, but if that doesn't happen then I'd like to at least be able to finish each chest workout with a few reps of 225 lbs. I've realized that the key to moving up in weight on the bench press is making sure that all those little muscles in the chest/shoulder/tricep area get hit pretty hard every once in a while. You are only as strong as your weakest link.


I can't wait until warm weather so I can start running in the mornings. (I also might give yoga a shot in the spring.)


The question at the end of the day is: why bother lifting weights if you're not competing in a sport that requires physical strength? The answer is that physical well-being is intimately related to mental health, which directly influences the progress of my research in computer vision.

Wednesday, February 01, 2006

a jordan sighting, a null space, and a google robot!

A few days ago, my friend John spotted Michael Jordan in Wean Hall. I commented that John should have had Jordan draw a graphical model on a piece of paper (or just draw it on his arm with a permanent marker) and autograph it!


On another note, I saw this image in a paper called "Object Categorization by Learned Universal Visual Dictionary" by Winn, Criminisi, and Minka. This image, which depicts a rough human segmentation of an image of a cow, contains a 'null' space. In Statistical Machine Learning class, we reviewed some linear algebra and thus talked about the 'null space' associated with a linear operator.
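(For the linear-algebra version, which the pun leans on: the null space of a linear operator A is N(A) = {x : Ax = 0}, i.e. the set of vectors that A sends to zero.)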

Ever hear of google robots? Ever read 1984?