I am going to Mountain View, California next week to start my three-month summer internship at Google. While I don't know the specific details of what I'll be working on, I will be with the Computer Vision group there.
Here is an idea: imagine making sense of the billions of objects embedded in the images of Google's Street View database. Google is already blurring faces in these images -- which means they are running vision algorithms on this dataset -- but are Google researchers finding makes/models of cars, reading street signs, analyzing building facades to see which homes are Victorian/ranch/etc., aligning visual information with Google Maps, and so on?
Google Street View is an excellent portal from the machine to the world. If there is ever any hope of visual recognition happening on a robot, then it will have to happen at Google first, using immense computational power. If that works, why not outsource visual recognition capabilities to a company like Google? Imagine a little computer onboard your favorite humanoid robot communicating via some standard recognition API with Google's servers. What the robot sees is sent over to Google for analysis -- then 'image understanding' data is propagated back. I imagine such a service could be set up, and for a fairly cheap price.
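To make the idea concrete, here is a minimal sketch of what such a client might look like. Everything here is invented for illustration -- the request format, the task names, and the server stub are all hypothetical; no such recognition API exists.

```python
import json

def build_request(image_bytes, tasks):
    """Package the robot's camera frame and a list of desired
    recognition tasks (e.g. ["objects", "street_signs"]) into a
    request for the imagined remote service."""
    return {
        "encoding": "jpeg",            # assumed wire format
        "image_size": len(image_bytes),
        "tasks": tasks,
    }

def mock_server(request):
    """Stand-in for the remote recognition service: returns an empty
    list of detections per requested task. A real service would run
    its vision algorithms here and fill these in."""
    return {task: [] for task in request["tasks"]}

# The robot's control loop would then be a thin client:
req = build_request(b"\xff\xd8...", ["objects", "street_signs"])
understanding = mock_server(req)       # in reality: an HTTP round-trip
print(json.dumps(understanding))
```

The point of the sketch is the division of labor: the onboard computer only serializes frames and parses structured results, while all the heavy recognition runs server-side.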