About this Blogosphere:

This blogosphere attempts to capture, catalog, and share resources relating to the visual perception of information. It deals mostly with the physical senses (touch, taste, sight, smell, and hearing) and sometimes with the metaphysical (the none-of-the-above category). The physical, for instance touch (feel, felt, found), looking, and visualization, is presented here in an attempt to combine the verbal, vocal, and visual: to synchronously see, hear, share, and do much more. Interestingly, visualizing requires no special skills or competencies; it is all about common sense, especially with human visualization. In short, "information is in the eye of the beholder." Continue reading more about this Blogosphere.


September 08, 2012

Help teach robots to see - with your Kinect

Visual Dictionary and Visual Browsing revisited by Devin Coldewey

Swedish researchers are hoping to create a visual dictionary to improve the ability of robots to understand the world around them, but they can't do it alone. They need your help — if you've got a Kinect.

Robots are smart in that they can store tons of information and process it quickly, but unlike humans they haven't been walking the Earth for long, and have very little practical knowledge of common objects. How heavy is a coffee mug? No data, though a human could easily size it up and make a good estimate. What's that thing on the floor? A human would recognize it as a shoe, but to a robot, it could be anything.

A project called Kinect@Home is being led by Alper Aydemir at Sweden's Royal Institute of Technology. Aydemir hopes that by outsourcing that common knowledge to humans, robots will develop a better idea of their surroundings. To that end, he has started a database of 3-D models of everyday objects like books and shoes, captured with the cheap and effective Kinect. continue reading
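The Kinect makes this kind of crowdsourced capture cheap because it reports a per-pixel depth image, and a 3-D model of an object begins as a point cloud back-projected through the camera's intrinsics. Here is a minimal sketch of that back-projection step, using the standard pinhole camera model; the function name and the toy intrinsics are illustrative assumptions, not part of the Kinect@Home project's actual pipeline:

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an N x 3 point cloud
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# Toy 2x2 "depth frame": three pixels see a surface 1 m away,
# one pixel (depth 0) returned no reading and is discarded.
depth = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
```

Stitching many such clouds from different viewpoints, and having humans label which cluster of points is "a shoe" or "a mug", is essentially the common knowledge the project is trying to collect.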
