Tuesday, July 10, 2012

Machine Learning Doesn't Matter?



Bagpipes and the International Conference on Machine Learning (ICML) in Edinburgh
Two weeks ago, I attended the ICML 2012 conference in Edinburgh, UK.  First of all, Edinburgh is a great place for a conference!  The scenery is marvelous, the weather is comfortable, and most notably, the sound of bagpipes adds an inimitable charm to the city.  I was there to give an invited applications talk.  In case you’re wondering, I did not have a plenary session (a plenary session is one attended by all conference members), which is reserved for titans such as Yann LeCun, David MacKay, and Andrew Ng.  My presentation was on the last day of ICML and was titled “Exemplar-SVMs for Visual Object Detection, Label Transfer and Image Retrieval,” during which I gave an overview of my ICCV 2011 paper on visual object detection as well as the SIGGRAPH Asia 2011 paper on cross-domain image retrieval.  As part of the invited talk, we submitted a 2-page extended abstract which summarizes some key ideas behind the Exemplar-SVM project: you can check out the abstract as well as the presentation slides online.  I believe the talk was recorded, so I will post the video link once it becomes available.  It was a great opportunity to convey some of my ideas to a non-vision audience.  I think I got a handful of new people excited about single-example SVMs (i.e., Exemplar-SVMs)!

Tomasz Malisiewicz, Abhinav Shrivastava, Abhinav Gupta, and Alexei A. Efros. Exemplar-SVMs for Visual Object Detection, Label Transfer and Image Retrieval. To be presented as an invited applications talk at ICML, 2012. PDF | Talk Slides
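For readers who haven’t seen the paper, the gist of an Exemplar-SVM is to train a separate linear SVM for every single positive example, using that one exemplar as the positive set and a large pool of negative feature vectors as the negative set.  Below is a minimal, hypothetical sketch of that idea in Python using scikit-learn’s LinearSVC; the real system uses HOG features, hard-negative mining, and calibration, none of which are shown here, and the function name, cost values, and toy data are made up for illustration.

# Hypothetical sketch of the Exemplar-SVM idea, NOT the authors' released code.
# One linear SVM is trained per positive example (the "exemplar") against a
# large pool of negative feature vectors, with an asymmetric cost so the lone
# positive is not drowned out by the negatives.
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svm(exemplar, negatives, c_pos=0.5, c_neg=0.01):
    """Fit a linear SVM with a single positive row (the exemplar) and many negatives."""
    X = np.vstack([exemplar[None, :], negatives])
    y = np.concatenate(([1], -np.ones(len(negatives), dtype=int)))
    # class_weight scales the misclassification cost per class, giving the
    # single positive a much higher penalty than each individual negative.
    clf = LinearSVC(C=1.0, class_weight={1: c_pos, -1: c_neg})
    clf.fit(X, y)
    return clf

# Toy usage: one 100-D exemplar descriptor vs. 5000 random negative descriptors.
rng = np.random.default_rng(0)
exemplar = rng.normal(size=100)
negatives = rng.normal(size=(5000, 100))
esvm = train_exemplar_svm(exemplar, negatives)
scores = esvm.decision_function(negatives)  # per-window detection scores

The sketch only captures the one-positive-versus-many-negatives training step; in the actual system each exemplar’s detector is calibrated and the whole ensemble of per-exemplar detectors is run at test time.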


Getting Ready for Edinburgh with David Hume
To get ready for my first visit to Edinburgh (pronounced Ed-in-bur-ah, which does not rhyme with Pittsburgh), I bought a Kindle Touch and proceeded to read David Hume’s An Enquiry Concerning Human Understanding.  David Hume is one of the great British Empiricists (together with John Locke and George Berkeley) who stood by the empiricist motto: impressions are the source of all ideas.  Empiricists can be contrasted with rationalists, who appeal to reason as the source of knowledge.  [Of course, I am neither an empiricist nor a rationalist.  Such polarizing extremes are a thing of the past.  I am a pragmatist, and my world-view combines elements from many different philosophies.]  I chose Hume’s treatise because he is the one whom Kant credits with awakening him from his dogmatic slumber.  I found Hume’s words rejuvenating, full of gedankenexperiments which show the limits of radical empiricism, and, most notably, the book is free in the Kindle store!  In your attempts to build intelligent machines, maybe you will also find words of inspiration in the classics.  It was a great book to get into the Edinburgh mindset (although the ICML crowd is probably more familiar with a different University of Edinburgh figure, namely the Reverend Bayes).

Impressions of ICML
I would first like to say that the ICML website is well-organized and serves as a great tool during the conference!  Good job, ICML!  There is a great mobile version of the ICML website which is excellent for pulling up on your iPhone when figuring out which talk to go to next.  The ICML website also provides a forum for discussing papers, and every paper gets both a presentation and a poster.  The discussion boards do not seem heavily utilized, but it would be great to use a moderator-style system in which the actual after-presentation questions come from this forum.  I’m sure something like this will arise in the coming years.  ICML is much smaller than CVPR (roughly 700 attendees versus roughly 2000), which makes for a much more intimate environment.  I was amazed by the number of people proving bounds and doing “theoretical,” non-applied machine learning.  It’s as if some people really don't care about anything other than analysis.  However, this is not my style; I personally prefer to build “real” systems and combine insights from disparate disciplines such as mathematics, cognitive science, philosophy, physics, and computer science.  There is a slice of ICML and machine learning conferences in general which I think of as nothing more than mathturbation.  I understand there's merit to doing analysis of this sort -- somebody’s gotta do it -- but if you’re gonna do it, please at least try to understand the implications of the real-world problem your dataset and task are trying to address.

Machine Learning doesn’t Matter?
The highlight of the conference by far was Kiri Wagstaff’s plenary talk “Machine Learning that Matters.”  Kiri gave an enchanting 30-minute presentation on what is rotten in the state of Edinburgh (i.e., what is wrong with the style of machine learning conferences).  Her words were gentle yet harsh, enlightening yet morbid.  She showed us, machine learning researchers, just how useless much of machine learning research is today.  Let’s not forget that machine learning is one of the most revolutionary ideas of the modern computer science classroom.  Trying to get a PhD in computer science while avoiding machine learning is like avoiding calculus while getting an undergraduate degree in engineering.  There is nothing wrong with machine learning as a discipline, but there is something wrong with researchers making the field overly academic.  Making a discipline overly academic means creating a self-contained, overly mathematical, self-citing, jargon-filled discipline which doesn’t care about world impact but only cares to propagate a small community’s citation count.  Note that many of these arguments also apply to the CVPR world.  But don’t just take my word for it: read Kiri’s treatise yourself.  Abstract below:


"Machine Learning that Matters" Abstract: Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society. From this perspective, there exist glaring limitations in the data sets we investigate, the metrics we employ for evaluation, and the degree to which results are communicated back to their originating domains. What changes are needed to how we conduct research to increase the impact that ML has? We present six Impact Challenges to explicitly focus the field’s energy and attention, and we discuss existing obstacles that must be addressed. We aim to inspire ongoing discussion and focus on ML that matters.

Kiri Wagstaff, "Machine Learning that Matters," ICML 2012.


If you have something to say in response to Kiri's treatise, check out her Machine Learning Impact Forum on http://mlimpact.com/.

6 comments:

  1. Solving computer vision using large-scale feature learning

    http://metaoptimize.com/qa/questions/10848/transferring-to-new-domains#10873

    A discussion of encoder graphs (a massively scalable feature learning method)

    http://metaoptimize.com/qa/questions/10820/graph-database-encoder-graph-human-brain-size-unsupervised-learning

  2. Wow!! Professor Kiri's paper is great!! Thanks for sharing.

    Currently following your blog as a senior undergraduate student interested in Computer Vision.

  3. It is amazing how thoroughly you have covered the subject you picked for this blog entry. By the way, did you draw on similar blog articles as a source of ideas to complete the picture you have presented here?

  4. Your blog is really helpful. I like it. But there is a small problem with readability: my eyes get tired because of the black background and the white text. Would you consider reversing it? I think a white background with black/blue text would be better.

  5. This is an interesting article especially given that I never had a chance to attend ICML.

  6. The Machine Learning Impact discussion forum has moved to http://wkiri.com/mlimpact/. Please join us and contribute!
