(MIT Technology Review) — Rather than taking weeks to reprogram an industrial robot to perform a complicated new task, a robot training academy developed at the Autonomy, Robotics and Cognition Lab at the University of Maryland lets a robot watch a task being performed, work out most of the sequence of actions it needs to take, and then be fine-tuned until the task works. The approach involves training a computer system to associate specific robot actions with video footage of people performing various tasks. A recent paper from the group, for example, shows that a robot can learn how to pick up different objects, using two recognition systems, by watching thousands of instructional YouTube videos.
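The article doesn't detail the group's implementation, but the idea of mapping video-derived predictions to robot actions can be sketched roughly. In the minimal sketch below, all names and the planning rule are illustrative assumptions: one (hypothetical) classifier is assumed to label the grasp type in each video segment, another the object, and a third the manipulation action, and the per-segment labels are strung into an ordered plan of robot commands.

```python
# Illustrative sketch only: the label sets, dataclass, and planning rule are
# assumptions, not the Maryland group's actual system.
from dataclasses import dataclass

# Assumed grasp-type vocabulary produced by a video grasp classifier.
GRASP_TYPES = {"power", "precision"}

@dataclass
class VideoObservation:
    object_label: str  # e.g. "bottle", from a hypothetical object recognizer
    grasp_label: str   # e.g. "power", from a hypothetical grasp classifier
    action_label: str  # e.g. "pour", from a hypothetical action recognizer

def plan_from_observations(observations):
    """Turn per-segment video predictions into an ordered robot action plan."""
    plan = []
    for obs in observations:
        if obs.grasp_label not in GRASP_TYPES:
            raise ValueError(f"unknown grasp type: {obs.grasp_label}")
        # Each observed segment becomes a grasp command followed by the action.
        plan.append(("grasp", obs.object_label, obs.grasp_label))
        plan.append((obs.action_label, obs.object_label))
    return plan

# A drink-mixing demonstration might decompose into segments like these.
demo = [
    VideoObservation("bottle", "power", "pour"),
    VideoObservation("jug", "precision", "place"),
]
print(plan_from_observations(demo))
```

The point of the sketch is the division of labor the article hints at: perception modules turn raw video into discrete labels, and a simple symbolic layer assembles those labels into an executable sequence the robot can then fine-tune.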
At a recent conference in St. Louis, the researchers demonstrated a cocktail-making robot that uses the approaches they're working on. The robot, a two-armed industrial machine made by a Boston-based company called Rethink Robotics, watched a person mix a drink by pouring liquid from several bottles into a jug, and then copied those actions, grasping the bottles in the correct order before pouring the right quantities into the jug. Yezhou Yang, a graduate student in the Autonomy, Robotics and Cognition Lab at the University of Maryland, carried out the work with Yiannis Aloimonos and Cornelia Fermüller, two professors of computer science at the University of Maryland.