Researchers Create Technology to Help Robots Learn to ‘See’ in 3D

Published July 17, 2017, 11:56 am

(EurekAlert)  It’s relatively straightforward for robots to “see” objects with cameras and other sensors; interpreting what they see from a single glimpse is more difficult. Humans can glance at a new object and intuitively know what it is, whether it is right side up, upside down, or sideways, in full view or partially obscured by other objects. Humans mentally fill in the parts they can’t see. Duke University graduate student Ben Burchfiel says the most sophisticated robots in the world can’t yet do what most children do automatically, but he and his colleagues may be closer to a solution.
Burchfiel and his thesis advisor George Konidaris, now an assistant professor of computer science at Brown University, have developed new technology that enables machines to make sense of 3D objects in a richer and more human-like way. Burchfiel’s robot perception algorithm can simultaneously guess what a new object is and how it’s oriented, without examining it from multiple angles first. It can also “imagine” any parts that are out of view.
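The article doesn’t spell out how the algorithm works, but the core idea it describes, guessing an object’s class and filling in its hidden geometry from a single partial view, can be illustrated with a toy sketch. The snippet below assumes a PCA-style low-dimensional shape basis learned per object class over voxel grids; the class names, array sizes, and helper functions are illustrative placeholders, not the researchers’ actual code, and orientation estimation is omitted for brevity.

```python
# Toy illustration (not the Duke/Brown implementation): jointly guessing an
# object's class and "imagining" its hidden geometry from one partial view,
# using a per-class low-dimensional shape basis learned over voxel grids.
import numpy as np

GRID = 32          # assumed voxel resolution
D = GRID ** 3      # voxels per object

class ClassShapeModel:
    """Mean shape + orthonormal shape basis for one object class (e.g. 'mug')."""
    def __init__(self, name, mean, basis):
        self.name = name      # class label
        self.mean = mean      # (D,) mean voxel occupancy
        self.basis = basis    # (D, k) learned shape basis

def complete_partial_view(model, observed_vox, visible_mask):
    """Fit the class model to the visible voxels only, then reconstruct all of them.

    observed_vox : (D,) occupancy values, meaningful only where visible_mask is True
    visible_mask : (D,) boolean array marking voxels the depth sensor actually saw
    Returns (full reconstruction, fitting residual on the visible voxels).
    """
    A = model.basis[visible_mask]                        # (n_visible, k)
    b = observed_vox[visible_mask] - model.mean[visible_mask]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)       # least-squares shape coefficients
    full = model.mean + model.basis @ coeffs             # "imagined" complete shape
    residual = np.linalg.norm(A @ coeffs - b)
    return full, residual

def classify_and_complete(models, observed_vox, visible_mask):
    """Pick the class whose basis best explains the visible voxels."""
    best = min(
        ((m, *complete_partial_view(m, observed_vox, visible_mask)) for m in models),
        key=lambda t: t[2],
    )
    model, completion, _ = best
    return model.name, completion

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake "pre-trained" models for two classes, just to make the sketch runnable.
    models = [
        ClassShapeModel("mug", rng.random(D), np.linalg.qr(rng.standard_normal((D, 10)))[0]),
        ClassShapeModel("chair", rng.random(D), np.linalg.qr(rng.standard_normal((D, 10)))[0]),
    ]
    observed = rng.random(D)
    mask = rng.random(D) < 0.3    # pretend only ~30% of the voxels were visible
    label, completed = classify_and_complete(models, observed, mask)
    print(label, completed.shape)
```

The point of the sketch is the single-pass structure: one fit against the visible voxels yields a class guess and a completed shape at the same time, rather than requiring the robot to gather views from multiple angles first.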
