Tuesday, November 08, 2005

Action at a Distance and Computer Vision

The problem of action at a distance, which has been around since the time of Newton, still plagues us. While introduced in the context of gravitational attraction between two heavenly bodies, it has recently come up again in the context of object independence. Allow me to quickly explain.

The original problem was: how can two objects instantaneously 'communicate' via a gravitational attraction? How can scientists make sense of this action at a distance?

In the context of vision, how does the localization of one object influence the localization of another object in a scene? In other words, how can information about object A's configuration be embedded in object B's configuration?
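One concrete way this question shows up is contextual rescoring: an object's detection score is adjusted by its spatial compatibility with another, already-localized object. The sketch below is purely illustrative, not any particular system; the function names, the toy compatibility prior, and the blending weight are all my own assumptions.

```python
# Hypothetical sketch: information about object A's configuration
# influencing object B's localization via a pairwise spatial prior.
# The prior and all names here are illustrative assumptions.

def pairwise_compatibility(box_a, box_b):
    """Toy spatial prior: favor B appearing to the right of and
    roughly level with A (think: a mouse beside a keyboard)."""
    ax, ay = box_a
    bx, by = box_b
    horizontal = 1.0 if bx > ax else 0.2       # B to the right of A
    vertical = 1.0 / (1.0 + abs(by - ay))      # roughly the same height
    return horizontal * vertical

def contextual_rescore(score_b, box_a, box_b, weight=0.5):
    """Blend B's independent detector score with its compatibility
    with A's configuration: A 'acts' on B at a distance."""
    return (1 - weight) * score_b + weight * pairwise_compatibility(box_a, box_b)

keyboard = (0.0, 0.0)     # (x, y) of a confidently localized object A
mouse_beside = (2.0, 0.1)
mouse_above = (-1.0, 3.0)

print(contextual_rescore(0.4, keyboard, mouse_beside))  # boosted above 0.4
print(contextual_rescore(0.4, keyboard, mouse_above))   # suppressed below 0.4
```

The point of the toy is only that B's final score is no longer a function of B's pixels alone, which is one literal reading of the "action at a distance" worry.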

Being the postmodern idealist that I am, I am not afraid to posit the thesis that we, the perceivers, are the quark-gluon plasma that binds together the seemingly distinct bits of information we acquire from the world. Perhaps what we semantically segment and label as object A is nothing but a subjective boundary that allows our perception to relate it to another subjective semantic segmentation called object B. When working on your next research project, remember that maybe the world isn't made up of things that you can see.


  1. Yo.... Read Science and The Akashic Field... exactly what you are talking about. I'll bring it home for thanksgiving.

  2. Never heard of it.

    Give a friend a book for a week and enlighten a mind for eternity.