Cognitive Science is a computational study of the mind: McGill Cognitive Science
One of the biggest accomplishments in the field of Artificial Intelligence came when Deep Blue, a chess-playing program developed at IBM, beat the world chess champion, Garry Kasparov. But this was in the early days of artificial intelligence -- when computer scientists still weren't sure what it means for a machine to be intelligent. Chess is a well-known thinking man's game, and at first glance it seems that a machine can only be worthy of being dubbed intelligent if it performs competitively at activities, such as chess, that we associate with intelligent people.
Chess: Human vs. Machine: Slate article about Deep Blue
Given the plethora of tasks that humans can effortlessly perform in daily life, is engineering a machine to rival humans on just one such task bringing researchers any closer to building truly intelligent machines?
The problem with chess is that it has a "finite universe problem" -- there is a finite set of primitives (the chess pieces) which can be manipulated by choosing a move from a finite set of allowable actions. But if we think of everyday life (going to work, eating dinner, talking to a friend) as a game, then it is not hard to see that most everyday situations involve a seemingly infinite sea of objects (just look around and name all the different things you can see!) and an equally capacious space of allowable actions (consider all the things you could do with those objects!). Intelligence is what allows us to cope with the complexities of the universe by focusing our attention on a limited set of relevant variables -- but the working set of objects/concepts we must consider at any single instant is chosen from a seemingly infinite set of alternatives.
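To make the contrast concrete, here is a minimal sketch (assuming the third-party python-chess library, which is not part of the original post) showing how completely a chess position's universe of actions can be enumerated; no analogous call exists for an everyday scene.

```python
# Minimal sketch of the "finite universe" point, assuming the third-party
# python-chess library (pip install python-chess).
import chess

board = chess.Board()            # the standard starting position
moves = list(board.legal_moves)  # every allowable action, explicitly listed
print(len(moves))                # 20 legal first moves for White

# Nothing comparable exists for "going to work" or "talking to a friend":
# there is no function that enumerates the objects in view or the actions
# they afford, which is exactly the gap described above.
```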
I believe that everyday human-level visual intelligence is greatly undervalued -- and there is a very good reason for this! The ability to make sense of what is going on in a single picture is such a trivial and automatic task for humans that we don't even bother quantifying just how good we are at it. But let me assure you that automated image understanding is no trivial feat. The world is not composed of 20 visual object categories, and the space of allowable and interpretable utterances we could associate with a static picture is seemingly infinite. While the 20-category object detection task (as popularized by the PASCAL VOC challenge) does have a finite universe problem, the grander version of the vision master problem (a combination of detection/recognition/categorization where you can interpret an input any way you like) is much more complex and mirrors the structure of the external world far more faithfully.
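The difference between the two output spaces is easy to state in code. Below is an illustrative sketch only: the `score` function is a hypothetical stand-in, not any particular detector. The VOC-style task only ever has to choose among a fixed list of 20 labels, while the grander problem has no finite list to maximize over.

```python
# Illustrative sketch: `score` is a hypothetical stand-in for a trained
# detector/classifier, not a real model.
VOC_CLASSES = [
    "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat",
    "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person",
    "pottedplant", "sheep", "sofa", "train", "tvmonitor",
]

def recognize_closed_world(image, score):
    # The finite-universe version: the answer is forced to be one of 20 labels.
    return max(VOC_CLASSES, key=lambda label: score(image, label))

def interpret_open_world(image):
    # The "master problem": the output space is unbounded free-form
    # interpretation, with no fixed label set to take a max over.
    raise NotImplementedError("no finite label set to choose from")
```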
Robotics Challenge: Build a Robot like Bender
Any application which calls for automated analysis of images requires vision. A robot, if it is to interact successfully with the world and perform useful tasks, needs to perceive the external world and organize it. While some see vision as just one small piece of the "Robotics Challenge" (build a robot and make it do cool stuff), it is totally unclear to me where to draw the boundary between low-level pixel analysis and high-level cognitive scene understanding. Over the years, I have been thinking more and more about this problem, and I've convinced myself that the interesting part of vision lies precisely at the boundary between what is commonly thought of as low-level representation of signal and what is considered high-level representation of visual concepts. While some view computer vision as "applied mathematics" or "applied machine learning" or "image processing in disguise", I passionately believe the following:
Computer Vision is Artificial Intelligence
Feel free to comment if your own computer vision philosophy is at odds with anything I said.
With the same argument, one can also say that Natural Language Processing is Artificial Intelligence too. If you think about it, to understand a sentence it is not sufficient to have a dictionary giving the meaning of every word in the sentence. One needs context and knowledge about the real world for a deeper understanding. How to represent knowledge itself is a problem.
In fact, Natural Language Processing is Artificial Intelligence, just like speaking a language is. See cleverbot.com. Using language in the right context and understanding the context of a conversation is AI. Maybe that is only my opinion, but I am sure about it.
The subset of computer vision that you define here would simply be the identification of objects in visual space, which I think is probably not high-level enough to be considered real AI.
At the same time, from my limited knowledge of the AI domain, I don't think we have anything like that yet. Perhaps we should take baby steps on our march towards AI and knock these issues out one at a time?
Perhaps start by learning from the human vision system, the language system, etc.
I think computer vision is an AI subfield.
On the other hand, a bat or another creature without vision surely could become intelligent.
www.robert-w-jones.com
www.robertwilliamjones.blogspot.com
I think 'making sense of' any kind of sensor data is AI, which brings vision, sound, touch, and any other ubiquitous sensing modality under its purview. In that sense Tomasz's statement is correct, and being a computer vision student myself, I do not begrudge his loyalty to the field. Furthermore, if you consider us humans, a large part of our intelligence (in > 90% of humans) has evolved from our visual systems. If you have read Hans Moravec's memo "Locomotion, Vision and Intelligence", you would know that there is in fact a correlation between the evolution of vision and locomotion (which in itself was an evolutionary necessity for food gathering).
A common feature of the computer vision systems that appear in student projects and theses is that "the output" of the vision system is some sort of data structure that attempts to give complete information about the scene being analyzed. However, when I look at a scene, I don't have the sense of any comprehensive set of information that is perceived simultaneously. I can answer questions about a scene such as "is that printer in front of the stack of books or behind it?". If I form the intent to pick up a book from the stack of books, I can guide my hand to it. From the point of view of engineering a robot, it would seem convenient to have a data structure with "complete" information about the scene, but it's unclear whether such a structure exists for human beings.
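One way to picture the distinction this comment draws is as two different interfaces; the class and method names below are purely hypothetical, a sketch of the design choice rather than any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class CompleteSceneDescription:
    """The student-project style: one exhaustive structure emitted up front."""
    objects: list = field(default_factory=list)    # every detected object
    relations: list = field(default_factory=list)  # every spatial relation

class QueryDrivenVision:
    """The alternative: compute only what a question or an intent demands."""
    def __init__(self, image):
        self.image = image

    def is_in_front_of(self, a, b):
        # Hypothetical: run just enough geometric reasoning to answer
        # "is that printer in front of the stack of books or behind it?"
        raise NotImplementedError

    def locate_for_grasp(self, target):
        # Hypothetical: estimate only the target's pose, enough to guide a hand.
        raise NotImplementedError
```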
I agree with the title "Computer Vision Is Artificial Intelligence". Implementing the whole process may include building a system that can formulate a question.
Here is how I view intelligence... Complex creatures have goals. Humans are complex creatures. Humans have goals. What makes us unique compared to other creatures is how rapidly we adapt within a single lifetime to accomplish our goals. When I say 'goal' I'm speaking very generally about what I view as 'the main goal' of all creatures: survival. Most people reading this blog are of the intellectual type. We survive by convincing others that the thoughts that bubble out of our heads are worth money. We take this money and buy food. Yada, yada, division of labor, no big deal. However, the big deal is this: if civilization were to end tomorrow (i.e., there were no more use for our brains), every last one of us could start foraging for food, or start a garden, etc. To say it another way, most creatures are hard-wired to perform a small array of tasks to accomplish their ultimate goal of survival. If something unexpected comes up that interrupts whatever they do to survive, they perish. To put it yet another way: fleas have a very narrow spectrum of potential behavioral operations; humans, compared to fleas and every other creature we are aware of, have a very, very broad spectrum of what we are capable of doing. And not only is our behavioral spectrum immense, it isn't static.
Human beings don't experience 'reality'; we do, however, experience a slice of reality. Vision is a slice of that slice. Any slice of reality that we do experience is dictated by the hardware that we are equipped with, or that we have built. Our eyes are hardware in this sense, but so is a spectrometer. We call ourselves intelligent because we navigate through this narrow band of reality successfully. We navigate successfully because we have sensors, but what propels us is the nagging, chemically ingrained thought that we must accomplish our goals. So to give another piece of equipment intelligence is to give it sensors and appendages that mesh with its goals, but also to give it the ability to analyze what it is sensing and place it within the context of its survival.
It's interesting to take the thing from the opposite side. We don't see and know -- we know because we see -- so intelligence can be created from the visual learning experience. Epistemology aside, I think it's brilliant to turn the thing on its head and craft an approach. Bravo!
I can say one thing (from experience): general computer vision is not possible without AI. Some simpler tasks are, but general computer vision with recognition and object matching is not possible without AI. I have experienced this myself (I am a software engineer) when I was working on a computer vision project which I thought would take 2 months, and it took 3 years. It worked out in the end, but even now it is not 100% (too many statistics, assumptions, and thresholds which are, in my opinion, too inflexible for general use).
Plenty of animals can perform sophisticated vision tasks, yet we don't consider them to have human-level intelligence. Lower mammals, birds, and reptiles can recognize objects, perceive spatial relationships, and generally navigate the world using vision. The vast majority of visual perception does not require sophisticated cognitive capabilities at the human level. Therefore, there will be quite a bit of work left in the field of AI beyond the point at which computer vision is considered "solved."
Computer vision, in its early days, was nothing more than signal processing. But today's systems are able to perform reasonably well on object recognition benchmarks. And if you look closely at the research community, you'll see plenty of attempts at tasks like "action recognition" and "emotion recognition" under the computer vision category. As with any other hard CS problem, we keep raising the bar.
We'll keep wanting more out of our vision systems, and once we get to 90% on a task, we'll strive for the next human-like ability. It used to be edge detection, then segmentation, then classification, then emotions, then actions, and the push won't stop.
I'm not entirely sure where to draw the boundary between AI and Computer Vision. There's a lot to the perception problem, and the kind of "world knowledge" required to fully understand a picture goes beyond a mere 100,000-way classification problem. It's almost as if the software has to first live in the world, learn from the world, and then be applied to an image recognition task. Maybe embodiment is necessary for learning. Maybe not.
Learning architectures of today seem to be converging, but we've been feeding object recognition algorithms the same kind of data for the past 20 years. There's a lot we know, and much more that we don't know.
I'll let you know if I ever do that. I'd be interested to see those distributions as well.