Pointers at a Glance
- Finding a targeted item in a pile is a challenging task for robots, unlike humans, because it involves complex reasoning.
- MIT researchers have developed FuseBot, an algorithm that lets robots overcome the uncertainty of retrieving buried objects.
- Previously, the robot could only find RFID-tagged objects; now it can identify an untagged target item as long as some of the objects in the pile are RFID-tagged.
Finding an item buried under a pile is easy for humans: we move things aside until we find what we need. For robots, however, retrieving a buried item from a pile is a steep challenge.
Massachusetts Institute of Technology (MIT) researchers had previously developed a robotic arm that combines visual information and radio frequency signals to find hidden objects carrying RFID tags. Now, they have developed a new system that finds a hidden target object even without an RFID tag, as long as some items in the pile are tagged.
The algorithm behind this system is known as FuseBot. It is designed so that the robot can infer the probable location and orientation of objects under the pile. FuseBot then determines the most efficient sequence for removing obstructing objects and retrieving the target. The researchers achieved this by adding multimodal reasoning to the system: FuseBot reasons over both visual information from a camera and RF signals to understand the pile of items.
A recent market report inspired the researchers: roughly 90% of US retailers use RFID tagging, but the technology is not universal, so piles of items are often only partially tagged.
A robotic arm running FuseBot uses an attached video camera and RF antenna to retrieve an untagged target item from a mixed pile. The system scans the pile with its camera to create a 3D model of the environment. At the same time, it sends signals from its antenna to locate RFID tags. These radio waves can pass through most solid surfaces, so the robot can "see" deep into the pile. Since the target item is not tagged, FuseBot knows the item cannot be located at the same spot as any RFID tag.
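The exclusion step above can be sketched as an update to a probability distribution over a voxel grid. The function below is a minimal illustrative sketch, not FuseBot's actual implementation: the function name, the voxel-grid representation, and the fixed exclusion radius around each tag are all assumptions made for the example.

```python
import numpy as np

def exclude_tagged_voxels(prior, tag_positions, radius):
    """Zero out the untagged target's location prior near detected RFID tags.

    prior: 3D array giving the probability that the target occupies each voxel.
    tag_positions: (x, y, z) voxel indices where RFID tags were localized.
    radius: exclusion radius in voxels around each tag (illustrative parameter).
    """
    posterior = prior.copy()
    xs, ys, zs = np.indices(prior.shape)
    for tx, ty, tz in tag_positions:
        # The target cannot occupy the space where a tagged item sits.
        near_tag = (xs - tx) ** 2 + (ys - ty) ** 2 + (zs - tz) ** 2 <= radius ** 2
        posterior[near_tag] = 0.0
    total = posterior.sum()
    if total > 0:
        posterior /= total  # renormalize to a valid probability distribution
    return posterior
```

Each detected tag therefore sharpens the distribution: ruling out regions concentrates the remaining probability mass on fewer candidate voxels.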
Algorithms fuse this information to update the 3D model of the environment and highlight potential locations of the target item, whose size and shape the robot knows. The system then reasons about the objects in the pile and the RFID tag locations to determine which item to remove, so it can reach the target in the fewest moves.
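One simple way to frame "fewest moves" is a greedy choice: remove the occluding item that hides the most target probability mass. This is an illustrative simplification under assumed data structures (per-item boolean occupancy masks over the same voxel grid), not the paper's actual cost function.

```python
import numpy as np

def choose_next_removal(target_prob, item_masks):
    """Pick the occluding item whose removal reveals the most target probability.

    target_prob: 3D array, probability the target occupies each voxel.
    item_masks: dict mapping item id -> boolean 3D array of voxels that
                item likely occupies (hypothetical representation).
    """
    best_item, best_gain = None, -1.0
    for item_id, mask in item_masks.items():
        gain = target_prob[mask].sum()  # probability mass hidden under this item
        if gain > best_gain:
            best_item, best_gain = item_id, gain
    return best_item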
The robot cannot know exactly how objects are oriented under the pile, or how a squishy item might be deformed by heavier items pressing on it. It overcomes this challenge with probabilistic reasoning: using what it knows about an object's size and shape and its RFID tag location, it models the 3D space that the object is likely to occupy. As it removes items, it reasons about which item would be "best" to remove next.
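Handling unknown orientation can be illustrated by averaging an item's footprint over sampled rotations, yielding a soft per-cell occupancy probability. The sketch below is a simplified 2D version under assumed parameters (rectangular footprint, uniformly sampled yaw angles); FuseBot's actual model reasons in 3D.

```python
import numpy as np

def soft_occupancy_2d(center, half_len, half_wid, grid_shape, n_angles=16):
    """Average a rectangular item's footprint over sampled orientations.

    Because the item's orientation under the pile is unknown, we sample
    yaw angles uniformly and average the resulting footprints, producing
    a per-cell probability that the item covers that cell.
    """
    occ = np.zeros(grid_shape)
    ys, xs = np.indices(grid_shape)
    dx, dy = xs - center[0], ys - center[1]
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        # Rotate grid coordinates into the item's body frame at this angle.
        u = dx * np.cos(theta) + dy * np.sin(theta)
        v = -dx * np.sin(theta) + dy * np.cos(theta)
        occ += ((np.abs(u) <= half_len) & (np.abs(v) <= half_wid)).astype(float)
    return occ / n_angles
```

Cells near the item's center are covered under every orientation (probability 1), while cells near the ends of the rectangle are covered only for some angles, capturing the robot's uncertainty.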
FuseBot retrieved the target item successfully 95 percent of the time, compared to 84 percent for the baseline robotic system it was tested against. It accomplished this using 40 percent fewer moves, and it could locate and retrieve targeted items more than twice as fast.
In the near future, the researchers plan to incorporate more complex models into FuseBot so it performs better on deformable objects. They are also interested in exploring different manipulations, such as a robotic arm that pushes items out of the way. Future versions of the system could be used with a mobile robot that searches multiple piles for lost objects.