I'm currently in Chicago, en route to Vancouver for the NIPS 2009 conference, where I'll be defending my research during Tuesday's poster session. Instead of delving into the computational challenges that motivate my research, I want to take a step back and criticize what (sometimes? often?) happens during the publishing cycle.
In my view, good research starts with the passion to solve a particular problem or address a specific concern. Quite often, good research raises more questions than it successfully answers. Unfortunately, when we submit papers to conferences we are judged on clarity of presentation, level of experimental validation, and overall completeness. This means the publishing cycle often promotes writing "cute" papers that have little long-term impact on the field and can only be viewed as thorough and complete because of their narrow scope. This is why we should not rely solely on peer review, nor orient our scientific lives toward pleasing others. Sometimes being a good scientist means breaking free from the norms the world around us rigidly follows; sometimes publishing too often skews our research focus; and sometimes falling off the face of the earth for a period of time is necessary to push science in a new direction.
I want to challenge every scientist to follow their dreams and attempt to solve the problems they truly care about, not just to please peer review. Maybe some think that this perversion of science (evaluating scientists by the number of publications they have) is okay, but in my book a scientific career that produces a single grand idea is superior to a career saturated with myriad "cute and thorough" papers. I'm not particularly upset with the progress of Computer Vision, but I think more people should ponder the negative consequences of pulling the publish-trigger too often.
I agree. I think deadlines should primarily be used as a forcing function to get work done, rather than as an excuse to 'pollute' the field with sub-optimal ideas.
From J Wing: http://portal.acm.org/citation.cfm?id=1610257&dl=GUIDE&coll=GUIDE&CFID=65778347&CFTOKEN=33396118
There are notable counter-examples ... papers that explored "compact" ideas in a rigorous and complete way, and had long-term impact: Canny's edge detector, Perlin's noise, Al Barr's superquadrics paper ...
John: True, but these examples seem to be few and far between. Recently, a very well known computer vision researcher in object recognition came by our institution and gave a talk. He described his recent work in the following manner (not exactly in these words, but the message is the same): I did this two years ago, I did this last year, and currently we're doing this. The same problem, a variety of unrelated techniques, no continuity in thought or practice, but the result was three papers in top conferences. And his justification? 90%+ accuracy on very constrained views of objects. My belief is that object recognition will not be solved by "compact" ideas.
I agree with the anonymous rebuttal to John's comment. The number of compact papers that advance the field has been small. I'm not trying to undermine the work of Canny and others; I just want to point out that when it comes to object recognition we really have no clue what we're doing.
Maybe some will finish grad school and get their PhDs with the feeling that object recognition and artificial intelligence are 'just around the corner,' but not me. I feel that there are lots of deep issues regarding the representation of concepts (visual and non-visual) that aren't being addressed by the computer vision community. Machine Learning has sort of 'exploded' recently with many impressionable students jumping on the learning-bandwagon. What I keep seeing is students downloading some canonical feature set (SIFT, PHOG, ...) for some standard recognition task (PASCAL VOC, Caltech256, ...) and placing all their efforts on the new span(hierarchical, sparse, robust, nonparametric, bayesian, latent, ...) learning algorithm.
It seems that some of these students actually believe that the current formulation of the recognition problem is already well-posed, and that manual feature engineering (which is actually a hallmark of good vision research) is somehow below them. I feel that more students need to question what exactly is going into their learning algorithms and what sorts of tasks new techniques should be evaluated on.
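To make the pattern concrete, here is a minimal sketch of the kind of pipeline I have in mind. This is my own caricature, not any particular student's code: the feature extractor and dataset loader are placeholders standing in for downloaded SIFT/PHOG code and PASCAL/Caltech loading scripts, and I use scikit-learn-style calls just for concreteness.

```python
# A caricature of the bandwagon pipeline: fixed off-the-shelf features,
# a standard benchmark, and all novelty concentrated in the classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def extract_canonical_features(images):
    # Placeholder for SIFT/PHOG/etc. -- downloaded, not designed.
    return np.array([np.random.rand(128) for _ in images])

def load_standard_benchmark():
    # Placeholder for PASCAL VOC / Caltech256 loading code.
    images = [object() for _ in range(100)]
    labels = np.random.randint(0, 2, size=100)
    return images, labels

images, labels = load_standard_benchmark()
X = extract_canonical_features(images)

# The only line that ever changes from paper to paper:
clf = LinearSVC()  # swap in the new hierarchical/sparse/latent learner here
print(cross_val_score(clf, X, labels, cv=5).mean())
```

Everything above the final two lines is treated as fixed infrastructure; the paper's contribution lives entirely in which classifier gets dropped into that one slot.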
There is a lot of room for creativity in computer vision research. We need to become open to asking new questions and to breathing life into a field that has grown stale amidst the advances of machine learning. Publishing just to beat your friend's or enemy's performance curve is silly -- that is research doomed to be forgotten.
Don't get me wrong, I sympathize ... after all, I spent years of my life trying to use biologically plausible representations of early auditory processing to do speech recognition, in a culture where for decades many researchers thought the deepest question left to ask about front end processing was whether to use 8 or 10 cepstral coefficients.
But on the other hand, I was also attending NIPS back in the early years of the conference, and saw the early machine-learning vision successes, like the talk Shumeet Baluja gave in 1995 on the first generation of face-detection neural nets. And like most of the people in the audience, as I recall, I thought it was really, really cool. And 15 years later, after many generations of "incremental improvement" sorts of papers, we have products that, in a conceptual sense at least, are based on the work he showed in that talk.
And so, I can't see how the work he presented at NIPS in 1995 was a bad thing for vision research. Yet your original blog post seems to say that sort of paper is a bad thing.
John, I don't think Baluja's paper on neural-net-based face detection was bad at all! Face detection was far from solved at that time, and showing how neural nets could be used for this problem was truly remarkable. Given that his ideas spawned 15 years of "incremental research," I think he really does deserve credit for advancing the state of the art and opening up a new direction in the field.
I doubt Baluja was simply trying to please reviewers and get one more publication added to his CV. In 1995 it might have been hard to predict the impact of his research, and sometimes we have to wait 15 years and look back on a published work to give it a fair evaluation.
Related to the discussion:
Ramesh C. Jain and Thomas O. Binford, "Ignorance, myopia, and naiveté in computer vision systems," CVGIP: Image Understanding, Vol. 53, Issue 1 (January 1991), pp. 112-117.