Saturday, November 12, 2005

softcore study of consciousness is for wimps

What is the softcore study of consciousness? My personal view is that the softcore study of anything is study performed by people who lack hardcore quantitative skills. For example, consider the contemporary philosopher who conveys his ideas by writing large corpora of text instead of performing any type of analysis (whether it be an empirical study or dabbling in gedanken-Hilbert space).

If somebody wants to convince me that I should read their long publications on consciousness, they had better be a hardcore scientist and not some kind of calculus-avoiding softie.

Allow me now to boast of MIT's Center for Biological & Computational Learning. It's not like I one day decided to learn about biological research; I know of Tomaso Poggio (the big name associated with this lab) because a few weeks ago I wanted to learn about Reproducing Kernel Hilbert Spaces. Awesome! These guys are no dabblers, and I personally encourage them to speak of consciousness. If you take a look at their publications list, you'll notice that it is well aligned with my current academic interests. Their entire research plan supports the lemma that computer vision isn't all about machines!
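Since I name-dropped Reproducing Kernel Hilbert Spaces, here is a minimal sketch of what they buy you in practice. This is my own toy example, not CBCL code; all function names and parameter values are mine. It shows kernel ridge regression, where the representer theorem guarantees the learned function is a finite sum of kernels centered at the training points:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2).
    # The RBF kernel induces an (infinite-dimensional) RKHS.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    # Representer theorem: the minimizer of regularized squared loss in the
    # RKHS has the form f(x) = sum_i alpha_i k(x_i, x), with coefficients
    # alpha = (K + lam * n * I)^{-1} y.
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    # Evaluate f at new points: kernel expansion around the training set.
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy 1-D regression: recover sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
alpha = kernel_ridge_fit(X, y)
y_hat = kernel_ridge_predict(X, alpha, X)
```

The point of the exercise: the optimization happens over an infinite-dimensional function space, yet the solution is a 40-dimensional linear system.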


  1. Anonymous 7:26 PM

    I agree with you: in order to fully understand the "concept" of consciousness, one must be able to quantify the order or lack thereof... however, this is not enough... one must be able to connect these quantitative models to actual physicality. The mind (brain, ego, I) is not just a computational machine but also a dynamic, balanced, and LIVING chemical concoction. So in order to fully represent consciousness in a machine, one cannot overlook this. Not to mention the metaphysical arguments that no machine will ever actually have a true consciousness, as they have no livelihood in which it is necessary to choose and refute certain information attainable in the world (though this is usually done unconsciously, unbeknownst to the beholder). I personally believe a machine can simulate what consciousness is like; however, it will never hold that consciousness true to itself (not definable by a 0 or a 1).

  2. Do you personally believe that a machine can simulate what consciousness is like?

    You mentioned that [we are] "not just a computational machine but also a dynamic, balanced, and LIVING chemical concoction" and that machines have "no livelihood in which it is necessary to choose and refute certain information attainable in the world," and I strongly agree with these statements. However, I personally believe that unless machines have more control (more action), they will be unable to simulate consciousness.

    There is more to consciousness than statistical machine learning, but I believe that machine learning is very important. Once a machine learns something (posits some hypothesis after analyzing some data), it must take the initiative and perform some action that is consistent with its new knowledge. As long as we feed machines data and expect them to take no role in data collection, we will not attain a level worthy of the term 'consciousness simulation.'

    An interesting book I'm currently reading is "Action in Perception" by Alva Noe (a philosopher at Berkeley), whose thesis is that in order to truly perceive the world, a machine must be able to navigate (interact with) the world. He presents his ideas in the context of vision, where in order to understand the visual world it is not enough to present a system with pictures. A system must see an image of the world and then decide to do something such as {look at the object from another viewpoint, stare at it a while longer, walk away and start looking at another object}. This immersion of a system in the world is the "connect these quantitative models to actual physicality" idea that you mentioned.

    I must remind you that although I am studying Computer Vision and Machine Learning, I am a member of the Robotics Institute. Robots are essentially systems which integrate action (movement in the physical world) with perception (computer vision or some other imaging technology), and although I personally choose not to work on them at this point in my life, I believe that such systems are key to gaining invaluable insight into the problem of consciousness.
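    To make the 'take a role in data collection' point concrete, here is a toy sketch of my own (the scenario, numbers, and names are all hypothetical): an agent learning a hidden threshold on [0, 1]. A passive learner that is simply fed random labeled points needs on the order of n labels to pin the threshold down; an active learner that chooses which point to query next needs only about log(n).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden concept the agent must discover: a threshold classifier on [0, 1].
TRUE_THRESHOLD = 0.37

def oracle(x):
    """Labeling 'action': the agent asks the world for the label of x."""
    return int(x >= TRUE_THRESHOLD)

# Pool of unlabeled points the agent may choose to query.
pool = np.sort(rng.uniform(0.0, 1.0, 200))

# Active strategy: always query the pool point nearest the middle of the
# current region of uncertainty (binary search in disguise).
lo, hi = 0.0, 1.0
queries = 0
while True:
    candidates = pool[(pool > lo) & (pool < hi)]
    if len(candidates) == 0:
        break  # uncertainty region contains no unqueried points
    x = candidates[np.argmin(np.abs(candidates - (lo + hi) / 2.0))]
    queries += 1
    if oracle(x):
        hi = x  # label 1: threshold is at or below x
    else:
        lo = x  # label 0: threshold is above x

estimate = (lo + hi) / 2.0
```

    With 200 pool points the active agent localizes the threshold after roughly log2(200), about 8, queries, whereas a passive learner would on average need to see a large fraction of the 200 labels to do as well. The labels the agent receives depend on the actions it takes; that coupling is exactly what a fixed dataset lacks.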

  3. Anonymous 3:19 AM

    Ahaha Tombone... so your goal is not to create a consciousness but rather to simulate it? Which I feel is more than an attainable goal. However, I feel that to fully simulate it, the "machine" will in a sense have to be dependent on some outside source for its survival; knowingly. I do not believe the basis of our consciousness is our personal computational and analytic skills carried out by our brains (which is what current opinion says), but rather that this (which we hold, alone) was one form of progression from the first reason a consciousness was necessary, which was the survival and proliferation of our "likelihood". Our consciousness owes a great deal towards our mortality. So in some way I do believe it is possible for a computer to simulate the "outwardly" forms of our consciousness, but I do not see it as possible that a computer would be self-aware of this consciousness and inescapable projection of self, as the necessity for "selfwardly" thinking would be absent. Therefore I think I foresee a partial simulation of consciousness... however, consciousness as the human LIVES it may not be so attainable. Semantical, I know... consciousness comes in many forms, and we could be blindly programmed just like a computer (highly doubtful, though believable), but in that case I guess I'd say "take me to my maker"...

    Also, with learning specifically in the visual field... I believe that the beginning emergence of this sense was based on the necessity to perceive nutrition in the environment around us and to perceive possible threats. When we are little kids we pick up everything and try to eat it (stick it in our mouths) because we are interested in nutrition and survival (it's in the back of our heads at all times: food and sex)... then we learn, hey, you can't eat plastic. The spawning of the statistics you use in your programs began with humans being able to make sense of the probability of a calamity, or the luck of tracking down a herd in a location where it has been at this time of year for the past 3 years (as nomadic people did). I hope you see my point: our ability to learn and the progression our minds took is based largely on our own mortality, and this concept may help you in producing your learning visual system; just one more aspect to take on.

  4. I thank you for your insightful comments, which demonstrate your skepticism regarding what I'd like to call vision for vision's sake. Your last comments were centered around the main point that humans' ability to parse the visual world emerged to serve a useful purpose, that purpose being the "survival and proliferation of our 'likelihood'."

    I am particularly fond of the quote, "Our consciousness owes a great deal towards our mortality." We can attribute the mainstream research paradigm in Computational Intelligence (AI/Vision/ML) to Rene Descartes. Since the beginning of modern philosophy we have been trying to separate the mind from the body, and this underlying principle has culminated in a modern research program that is focused on studying intelligence apart from the body. Vision for vision's sake adheres to the thesis that it makes sense to speak of an intelligent mind that exists independent of a mortal body. Perhaps this is not the proper time to expatiate on the subject of God, but it should not be too difficult for one to see how my former posts on the 'omnipotency problem of vision' relate to Cartesian Duality.

    Your main points result in a transformation of the current problem from "How vision?" to "Why vision?". This is a bold question to ask; in fact, one that has been neglected by the modern, overly mathematical treatment of computational intelligence. I also believe that studying the "progression our minds took" is necessary to understand what constitutes intelligence. The theory of evolution has recently percolated towards the top of my to-think-about list, and this new interest is consistent with your insightful comments.

    By looking at evolution we can start asking questions such as: what external factors in the physical world are necessary for high-level tasks such as object recognition to emerge? As I mentioned earlier, the Cartesian paradigm reverberates through the walls of the modern research institution, and the community is convinced that it can solve the object recognition problem by trying to recognize objects. From a theoretical-evolutionary point of view, there is no a priori reason why object recognition is truly necessary for the survival of a species. However, from a 'man must eat, find shelter, and reproduce' point of view we can start to see that 'efficient parsing of the visual world' would promote the survival and propagation of man. To conclude, I am not as optimistic as you, and I do not "foresee a partial simulation of consciousness" with respect to the current mainstream paradigm of machine intelligence. An intelligent agent must be 'one with the world' in a way much deeper than 'obtaining images of the world' before the emergence of anything resembling consciousness.