Wednesday, April 23, 2008

Newton's Method Fractal

Back in high school I was 'into' Newton's method fractals. Some old images can be seen by clicking on the following image.


When people make fractal videos (check them out on YouTube), they are usually zooming into a fixed fractal. I have generated a fractal video where the axes are fixed and the equation is changing. Check it out!
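For the curious, here is a minimal sketch in Python with NumPy (for exposition only, not the code I actually used) of how one frame of such a video can be rendered: color each pixel by the root that Newton's method converges to from that starting point, and animate by sweeping a coefficient of the polynomial rather than the viewing window.

```python
import numpy as np

def newton_fractal(coeffs, size=400, extent=2.0, iters=30):
    """Label each pixel with the root of the polynomial that Newton's
    method converges to when started from that point in the plane."""
    poly = np.poly1d(coeffs)
    dpoly = poly.deriv()
    roots = poly.roots
    axis = np.linspace(-extent, extent, size)
    z = axis[None, :] + 1j * axis[:, None]     # grid of starting points
    for _ in range(iters):
        z = z - poly(z) / dpoly(z)             # Newton update
    # index of the nearest root for each pixel
    return np.abs(z[..., None] - roots[None, None, :]).argmin(axis=-1)

# fixed axes, changing equation: sweep the linear coefficient t
for t in np.linspace(0.0, 2.0, 5):
    frame = newton_fractal([1.0, 0.0, t, -1.0])   # z^3 + t*z - 1
    # save or display `frame` here, e.g. with matplotlib's imshow
```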

Tuesday, April 08, 2008

Recognition by Association via Learning Per-exemplar Distances

Tomasz Malisiewicz, Alexei A. Efros. Recognition by Association via Learning Per-exemplar Distances. In CVPR, June 2008.

Abstract:

We pose the recognition problem as data association. In this setting, a novel object is explained solely in terms of a small set of exemplar objects to which it is visually similar. Inspired by the work of Frome et al., we learn separate distance functions for each exemplar; however, our distances are interpretable on an absolute scale and can be thresholded to detect the presence of an object. Our exemplars are represented as image regions and the learned distances capture the relative importance of shape, color, texture, and position features for that region. We use the distance functions to detect and segment objects in novel images by associating the bottom-up segments obtained from multiple image segmentations with the exemplar regions. We evaluate the detection and segmentation performance of our algorithm on real-world outdoor scenes from the LabelMe dataset and also show some promising qualitative image parsing results.

http://www.cs.cmu.edu/~tmalisie/projects/cvpr08/
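To give a flavor of the central idea, here is a toy sketch in Python with NumPy (my own illustration for this post; the actual features and learning procedure are described in the paper): each exemplar carries its own non-negative weights over per-feature distances, and because the learned distances live on an absolute scale, a single threshold turns a distance into a detector.

```python
import numpy as np

class ExemplarDistance:
    """Toy per-exemplar distance: a non-negative weighting of per-feature
    distances (e.g. shape, color, texture, position) plus a threshold."""
    def __init__(self, exemplar_features, weights, threshold):
        self.exemplar = exemplar_features   # feature name -> np.ndarray
        self.weights = weights              # feature name -> weight >= 0
        self.threshold = threshold

    def distance(self, segment_features):
        return sum(w * np.linalg.norm(self.exemplar[name] - segment_features[name])
                   for name, w in self.weights.items())

    def associates(self, segment_features):
        # the distance is on an absolute scale, so thresholding it
        # detects whether a segment matches this exemplar
        return self.distance(segment_features) < self.threshold

# hypothetical usage: this exemplar cares mostly about shape, not color
car = ExemplarDistance(
    exemplar_features={'shape': np.ones(4), 'color': np.zeros(3)},
    weights={'shape': 2.0, 'color': 0.1},
    threshold=1.0)
print(car.associates({'shape': 0.9 * np.ones(4), 'color': np.ones(3)}))  # True
```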

Thursday, April 03, 2008

Vocabulary Lesson: Transductive Learning

The goal of this blog post isn't necessarily to provide new insights into the relationship between Transductive Learning and Semi-Supervised Learning. I will simply attempt to answer the question: "What is Transductive Learning?" To understand what Transductive means, we first have to understand what induction (or Inductive Learning) means.

Induction, as opposed to deduction, is a form of reasoning that makes generalizations based on individual instances. It is important to note that induction isn't the kind of reasoning that predicate calculus or any other logic system was meant to handle. The conclusions produced from induction might have a high probability of being true but are never as certain as the inputs. The generalizations obtained from induction can be propagated onto newly observed inputs. One can think of a generalization obtained from induction as a function -- an abstract entity that can always map inputs to outputs.

The Merriam-Webster definition of Transduction states that it is: the transfer of genetic material from one microorganism to another by a viral agent (such as a bacteriophage). While this definition has its roots in one particular branch of science, the crucial component of the definition is still present: transduction is the transfer of something from entity A to entity B.

The Machine Learning definition of Transduction states that it is reasoning from observed inputs to specific test inputs. The key difference between induction and transduction is that induction refers to learning a function that can be applied to any novel inputs, while transduction is only concerned with transferring some property onto a specific set of test inputs.
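Since a one-line definition can feel abstract, here is a toy contrast in Python with NumPy (my own illustration, with made-up data): the inductive learner returns a function that can classify any future point, while the transductive learner (simple label propagation, in the spirit of Zhu and Ghahramani) only assigns labels to the specific unlabeled points it was given.

```python
import numpy as np

# Toy 1-D data: four labeled points, two unlabeled test points.
X_lab = np.array([0.0, 1.0, 8.0, 9.0])
y_lab = np.array([0, 0, 1, 1])
X_unl = np.array([2.5, 7.5])

# --- Inductive: learn a *function* applicable to ANY input ---
def nearest_neighbor_rule(x):
    # works for x seen now or any x observed later
    return y_lab[np.argmin(np.abs(X_lab - x))]

print([nearest_neighbor_rule(x) for x in X_unl])   # [0, 1]

# --- Transductive: label only THESE test points via label propagation ---
X = np.concatenate([X_lab, X_unl])
W = np.exp(-(X[:, None] - X[None, :])**2 / 2.0)    # Gaussian affinities
P = W / W.sum(axis=1, keepdims=True)               # row-stochastic
F = np.zeros((len(X), 2))
F[np.arange(len(y_lab)), y_lab] = 1.0              # one-hot labeled rows
for _ in range(50):
    F = P @ F                                      # diffuse label mass
    F[:len(y_lab)] = 0.0
    F[np.arange(len(y_lab)), y_lab] = 1.0          # re-clamp known labels
print(F[len(y_lab):].argmax(axis=1))               # labels for the test set only
```

Note that the transductive procedure produces no reusable function: if a new unlabeled point arrives, the whole propagation must be rerun with that point included.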

Rather than have me paraphrase Wikipedia, the interested reader should do some follow-up research of their own into the merits of Transductive Learning.

To conclude, Olivier Duchenne, a member of the WILLOW research team, gave a talk about the team's CVPR 2008 work on applying Transductive Learning to the problem of image segmentation. This was my first exposure to the concepts of transductive learning, and it is always a good thing to learn new things.