
Tuesday, July 10, 2012

Machine Learning Doesn't Matter?



Bagpipes at the International Conference on Machine Learning (ICML) in Edinburgh
Two weeks ago, I attended the ICML 2012 conference in Edinburgh, UK.  First of all, Edinburgh is a great place for a conference!  The scenery is marvelous, the weather is comfortable, and most notably, the sound of bagpipes adds an inimitable charm to the city.  I attended because I was invited to give an applications talk during the invited-talks session.  In case you’re wondering, I did not have a plenary session (a session attended by all conference members), which is reserved for titans such as Yann LeCun, David MacKay, and Andrew Ng.  My presentation, on the last day of ICML, was titled “Exemplar-SVMs for Visual Object Detection, Label Transfer and Image Retrieval”; I gave an overview of my ICCV 2011 paper on visual object detection as well as our SIGGRAPH Asia 2011 paper on cross-domain image retrieval.  As part of the invited talk, we submitted a 2-page extended abstract which summarizes some key ideas behind the Exemplar-SVM project: you can check out the abstract as well as the presentation slides online.  I believe the talk was recorded, so I will post the video link once it becomes available.  It was a great opportunity to convey some of my ideas to a non-vision audience.  I think I got a handful of new people excited about single-example SVMs (i.e., Exemplar-SVMs)!

Tomasz Malisiewicz, Abhinav Shrivastava, Abhinav Gupta, and Alexei A. Efros. Exemplar-SVMs for Visual Object Detection, Label Transfer and Image Retrieval. To be presented as an invited applications talk at ICML, 2012. PDF | Talk Slides
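For readers outside vision, here is a minimal sketch of the core Exemplar-SVM idea: train one linear classifier per positive exemplar, using that single example as the only positive against a large shared pool of negatives.  The toy features, hyperparameter values, and plain subgradient-descent solver below are illustrative stand-ins of my own choosing; the actual system operates on HOG features with hard-negative mining, so treat this as a sketch of the formulation rather than the paper's implementation.

```python
import numpy as np

def train_exemplar_svm(x_pos, X_neg, C_pos=0.5, C_neg=0.01, lr=0.01, epochs=300):
    """Fit one linear SVM whose only positive example is a single exemplar.

    Subgradient descent on an L2-regularized hinge loss with an asymmetric
    cost: one positive (weight C_pos) versus many negatives (weight C_neg).
    All hyperparameters here are illustrative, not the paper's settings.
    """
    w = np.zeros(x_pos.shape[0])
    b = 0.0
    for _ in range(epochs):
        gw, gb = w.copy(), 0.0            # gradient of the 1/2 ||w||^2 regularizer
        if 1.0 - (x_pos @ w + b) > 0:     # hinge term for the lone positive
            gw -= C_pos * x_pos
            gb -= C_pos
        viol = 1.0 + (X_neg @ w + b) > 0  # negatives (label -1) inside the margin
        gw += C_neg * X_neg[viol].sum(axis=0)
        gb += C_neg * viol.sum()
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy data: one positive exemplar well separated from a shared negative pool.
rng = np.random.default_rng(0)
X_neg = rng.normal(0.0, 1.0, size=(200, 16))
x_pos = rng.normal(3.0, 0.5, size=16)
w, b = train_exemplar_svm(x_pos, X_neg)
```

In the full system you would fit one such (w, b) per positive exemplar in the training set and then calibrate the per-exemplar scores against each other, which is what lets the ensemble of highly specific classifiers act as a single category detector.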


Getting Ready for Edinburgh with David Hume
To get ready for my first visit to Edinburgh (pronounced Ed-in-bur-ah, which does not rhyme with Pittsburgh), I bought a Kindle Touch and proceeded to read David Hume’s An Enquiry Concerning Human Understanding.  David Hume is one of the great British Empiricists (together with John Locke and George Berkeley) who stood by the empiricist motto: impressions are the source of all ideas.  Empiricists can be contrasted with rationalists, who appeal to reason as the source of knowledge.  [Of course, I am neither an empiricist nor a rationalist.  Such polarizing extremes are a thing of the past.  I am a pragmatist, and my world-view combines elements from many different philosophies.]  I chose Hume’s treatise because he is the one whom Kant credits with awakening him from his dogmatic slumber.  I found Hume’s words rejuvenating and full of gedankenexperiments which show the limits of radical empiricism, and most notably, the book is free on the Kindle store!  In your attempts to build intelligent machines, maybe you too will find words of inspiration in the classics.  It was a great book to get into the Edinburgh mindset (although the ICML crowd is probably more familiar with a different University of Edinburgh figure, namely Reverend Bayes).

Impressions of ICML
I would first like to say that the ICML website is well-organized and served as a great tool during the conference.  Good job, ICML!  There is a great mobile version of the ICML website, excellent for figuring out on your iPhone which talk to go to next.  The ICML website also provides a forum for discussing papers, and every paper gets a presentation and a poster.  The discussion boards do not seem heavily utilized, but it would be great to use a moderator-style system so that the actual after-presentation questions come from this forum.  I’m sure something like this will arise in the upcoming years.  ICML is much smaller than CVPR (~700 attendees versus ~2000), which makes for a much more intimate environment.  I was amazed by the number of people proving bounds and doing “theoretical,” non-applied machine learning.  It’s like some people really don't care about anything other than analysis.  However, this is not my style; I personally prefer to build “real” systems and combine insights from disparate disciplines such as mathematics, cognitive science, philosophy, physics, and computer science.  There is a slice of ICML and machine learning conferences in general which I think of as nothing more than mathturbation.  I understand there's merit to doing analysis of this sort (somebody’s gotta do it), but if you’re gonna do it, please at least try to understand the implications of the real-world problem your dataset and task are trying to address.

Machine Learning doesn’t Matter?
The highlight of the conference by far was Kiri Wagstaff’s plenary talk “Machine Learning that Matters.”  Kiri gave an enchanting 30-minute presentation on what is rotten in the state of Edinburgh (that is, what is wrong with the style of machine learning conferences).  Her words were gentle yet harsh, simultaneously enlightening yet morbid.  She showed us, machine learning researchers, just how useless much of machine learning research is today.  Let’s not forget that machine learning is one of the most revolutionary ideas in the modern computer science classroom.  Trying to get a PhD in Computer Science while avoiding machine learning is like avoiding calculus while getting an undergraduate degree in engineering.  There is nothing wrong with machine learning as a discipline, but there is something wrong with researchers making the field overly academic.  Making a discipline overly academic means creating a self-contained, overly mathematical, self-citing, and jargon-filled discipline which doesn’t care about world impact but only cares to propagate a small community’s citation count.  Note that many of these arguments also apply to the CVPR world.  But do not take my word for it; read Kiri’s treatise yourself.  Abstract below:


"Machine Learning that Matters" Abstract: Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society. From this perspective, there exist glaring limitations in the data sets we investigate, the metrics we employ for evaluation, and the degree to which results are communicated back to their originating domains. What changes are needed to how we conduct research to increase the impact that ML has? We present six Impact Challenges to explicitly focus the field’s energy and attention, and we discuss existing obstacles that must be addressed. We aim to inspire ongoing discussion and focus on ML that matters.

Kiri Wagstaff, "Machine Learning that Matters," ICML 2012.


If you have something to say in response to Kiri's treatise, check out her Machine Learning Impact Forum at http://mlimpact.com/.

Thursday, March 08, 2012

"I shot the cat with my proton gun."


I often listen to lectures and audiobooks when I drive more than 2 hours because I don't always have the privilege of enjoying a good conversation with a passenger.  Recently I was listening to some philosophy of science podcasts on my iPhone while driving from Boston to New York when the following sentence popped into my head:

"I shot the cat with my proton gun."


I had just listened to three separate podcasts (one about Kant, one about Wittgenstein, and one about Popper) when the sentence came to mind.  What is so interesting about this sentence is that while it is effortless to grasp, it mixes two different types of concepts in a single sentence: a "proton gun" and a "cat."  It is a perfectly normal sentence, and the above illustration describes it fairly well (photo credits to http://afashionloaf.blogspot.com/2010/03/cat-nap-mares.html for the kitty, and http://www.colemanzone.com/ for the proton gun).

Cat == an "everyday" empirical concept
"Cat" is an everyday "empirical" concept, a concept with which most people have first-hand experience (i.e., empirical knowledge).  It is commonly believed that such everyday concepts are acquired by children at a young age; it is an example of a basic-level concept which thinkers like Immanuel Kant and Ludwig Wittgenstein discuss at great length.  We do not need a theory of cats for the idea of a cat to stick.





Proton Gun == a "scientific" theoretical concept
On the other extreme is the "proton gun."  It is an example of a theoretical concept, a type of concept which rests upon classroom (i.e., "scientific") knowledge.  The idea of a proton gun is akin to the idea of Pluto, an esophagus, or cancer: we do not directly observe such entities; we learn about them from books and by seeing illustrations such as the one below.  Such theoretical constructs are the entities which Karl Popper and the Logical Positivists would often discuss.


While many of us have never seen a proton (nor a proton gun), it is a perfectly valid concept to invoke in my sentence.  If you have a scientific background, then you have probably seen so many artistic renditions of protons (see figure below) and spent so many endless nights studying for chemistry and physics exams that the word proton conjures a mental image.  It is hard for me to think of entities which trigger mental imagery as non-empirical.

How do we learn such concepts?  The proton gun comes from scientific education!  The cat comes from experience!  But since the origins of the concept "proton" and the concept "cat" are so disjoint, our (human) mind/brain must be more amazing than previously thought, because we have no problem mixing such concepts in a single clause.  It does not feel like these two different types of concepts are stored in different parts of the brain.

The idea which I would like you, the reader, to entertain over the next minute or so is the following:

Perhaps the line between ordinary "empirical" concepts and complex "theoretical" concepts is an imaginary boundary -- a boundary which has done more harm than good.  

One useful thing I learned from the philosophy of science is that it is worthwhile to doubt the existence of theoretical entities.  Not for iconoclastic ideals, but for the advancement of science!  Descartes' hyperbolic doubt is not dead.  Another useful thing to keep in mind is Wittgenstein's Philosophical Investigations and his account of the acquisition of knowledge.  Wittgenstein argued elegantly that "everyday" concepts are far from easy to define (see his family-resemblances argument and his discussion of defining a "game").  Kant, with his transcendental aesthetic, has taught me to question a hardcore empiricist account of knowledge.

So then, as good cognitive scientists, researchers, and pioneers in artificial intelligence, we must also doubt the rigidity of those everyday concepts which appear to us so ordinary.  If we want to build intelligent machines, then we must be ready to break down our own understanding of reality, and not be afraid to question things which appear unquestionable.

In conclusion, if you find popular culture reference more palatable than my philosophical pseudo-science mumbo-jumbo, then let me leave you with two inspirational quotes.  First, let's not forget Pink Floyd's lyrics which argued against the rigidity of formal education: "We don't need no education, We don't need no thought control." And the second, a misunderstood, yet witty aphorism which comes to us from Dr. Timothy Leary reminds us that there is a time for education and there is a time for reflection.  In his own words:  "Turn on, tune in, drop out."