System lets robots identify an object’s properties through handling

Calibrating object parameters through differentiable physics using proprioceptive signals. Left: Our method aims to identify object parameters, such as the mass and material properties of the purple sphere. Middle: We utilize differentiable physics to simulate interactions between the robot and the object. Right: Object parameters are identified by supervising the differentiable physics simulation (top) using proprioceptive signals (joint positions, shown as green circles) from the real robot (bottom). Notably, our approach does not require tracking the object’s trajectory (red circles); instead, it relies solely on the robot’s internal sensors for the calibration process. Credit: arXiv (2024). DOI: 10.48550/arxiv.2410.03920

A human clearing junk out of an attic can often guess the contents of a box simply by picking it up and giving it a shake, without the need to see what’s inside. Researchers from MIT, Amazon Robotics, and the University of British Columbia have taught robots to do something similar.

They developed a technique that enables robots to use only internal sensors to learn about an object’s weight, softness, or contents by picking it up and gently shaking it. With their method, which does not require external measurement tools or cameras, the robot can accurately guess parameters like an object’s mass in a matter of seconds.

This low-cost technique could be especially useful in applications where cameras might be less effective, such as sorting objects in a dark basement or clearing rubble inside a building that partially collapsed after an earthquake.

Key to their approach is a simulation process that incorporates models of the robot and the object to rapidly identify characteristics of that object as the robot interacts with it.

The researchers’ technique is as good at guessing an object’s mass as some more complex and expensive methods that incorporate computer vision. In addition, their data-efficient approach is robust enough to handle many types of unseen scenarios.

“This idea is general, and I believe we are just scratching the surface of what a robot can learn in this way. My dream would be to have robots go out into the world, touch things and move things in their environments, and figure out the properties of everything they interact with on their own,” says Peter Yichen Chen, an MIT postdoc and lead author of the paper on this technique.

His co-authors include fellow MIT postdoc Chao Liu; Pingchuan Ma, Ph.D.; Jack Eastman, MEng; Dylan Randle and Yuri Ivanov of Amazon Robotics; MIT professors of electrical engineering and computer science Daniela Rus, who leads MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Wojciech Matusik, who leads the Computational Design and Fabrication Group within CSAIL.

The research will be presented at the International Conference on Robotics and Automation, and the paper is available on the arXiv preprint server.

Sensing signals

The researchers’ method leverages proprioception, which is a human or robot’s ability to sense its movement or position in space.

For instance, a human who lifts a dumbbell at the gym can sense the weight of that dumbbell in their wrist and biceps, even though they are holding the dumbbell in their hand. In the same way, a robot can “feel” the heaviness of an object through the multiple joints in its arm.

“A human doesn’t have super-accurate measurements of the joint angles in our fingers or the precise amount of torque we are applying to an object, but a robot does. We take advantage of these abilities,” Liu says.

As the robot lifts an object, the researchers’ system gathers signals from the robot’s joint encoders, which are sensors that detect the rotational position and speed of its joints during movement.
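As a toy illustration of what those encoder signals look like, here is a short sketch (not the authors' code) that recovers a joint's angular velocity from sampled positions using finite differences; the 1 kHz sampling rate and the quadratic angle profile are assumptions made up for the example:

```python
import numpy as np

dt = 0.001                       # assumed 1 kHz encoder sampling rate
t = np.arange(0.0, 1.0, dt)      # one second of motion
theta = 0.5 * t**2               # joint angle in radians (made-up trajectory)

# Estimate angular velocity from position samples via central differences,
# as one might when an encoder reports only positions.
omega = np.gradient(theta, dt)
# For theta = 0.5*t^2 the true velocity is t, which omega recovers
# (up to boundary effects at the first and last samples).
```

Real encoder streams are noisy, so a practical pipeline would filter the signal before differentiating, but the principle is the same.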

Most robots have joint encoders within the motors that drive their movable parts, Liu adds. This makes their technique more cost-effective than some approaches because it doesn’t need extra components like tactile sensors or vision-tracking systems.

To estimate an object’s properties during robot-object interactions, their system relies on two models: one that simulates the robot and its motion and one that simulates the dynamics of the object.

“Having an accurate digital twin of the real world is really important for the success of our method,” Chen adds.

Their algorithm “watches” the robot and object move during a physical interaction and uses joint encoder data to work backwards and identify the properties of the object.

For instance, if the robot applies the same amount of force, a heavier object will move more slowly than a lighter one.
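In the simplest case, this inverse reasoning is just Newton's second law run backwards: given a known applied force and an observed acceleration, the mass follows directly. The numbers below are made up purely for illustration:

```python
# Newton's second law: F = m * a, so m = F / a.
applied_force = 6.0      # newtons, known from the robot's commanded torques
observed_accel = 2.0     # m/s^2, inferred from joint-encoder readings
estimated_mass = applied_force / observed_accel   # -> 3.0 kg
```

The actual system solves a far richer version of this inversion, accounting for the full dynamics of the arm and the object, but the underlying logic is the same.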

Differentiable simulations

They utilize a technique called differentiable simulation, which allows the algorithm to predict how small changes in an object’s properties, like mass or softness, impact the robot’s ending joint position. The researchers built their simulations using NVIDIA’s Warp library, an open-source developer tool that supports differentiable simulations.

Once the differentiable simulation matches up with the robot’s real movements, the system has identified the correct property. The algorithm can do this in a matter of seconds and only needs to see one real-world trajectory of the robot in motion to perform the calculations.
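The loop below is a minimal, self-contained sketch of that idea in plain Python (it does not use Warp or the authors' code): a one-dimensional point mass is pushed with a known force, and gradient descent adjusts an inverse-mass parameter until the simulated trajectory matches an "observed" one. Because the Euler-integrated positions here happen to be linear in the inverse mass, the gradient can be written in closed form:

```python
def simulate(inv_mass, force=2.0, dt=0.01, steps=100):
    """Euler-integrate a 1-D point mass pushed by a constant force."""
    x, v, xs = 0.0, 0.0, []
    for _ in range(steps):
        v += force * inv_mass * dt   # a = F / m
        x += v * dt
        xs.append(x)
    return xs

# "Observed" trajectory generated from a hidden ground-truth mass of 3 kg.
observed = simulate(1.0 / 3.0)

# Positions are linear in inv_mass, so simulate(1.0) gives the
# sensitivity d x_t / d inv_mass at every timestep.
basis = simulate(1.0)

p, lr = 1.0, 0.02            # initial guess: mass of 1 kg; learning rate
for _ in range(200):
    sim = [p * b for b in basis]
    grad = 2.0 * sum((s - o) * b for s, o, b in zip(sim, observed, basis))
    p -= lr * grad           # gradient step on the squared trajectory error

estimated_mass = 1.0 / p     # converges to roughly 3.0 kg
```

In the real system, this role is played by a differentiable physics engine, which provides gradients through much richer robot-object dynamics than this one-line force law, and the "observed" trajectory comes from the robot's joint encoders rather than a synthetic ground truth.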

“Technically, as long as you know the model of the object and how the robot can apply force to that object, you should be able to figure out the parameter you want to identify,” Liu says.

The researchers used their method to learn the mass and softness of an object, but their technique could also determine properties like moment of inertia or the viscosity of a fluid inside a container.

Plus, because their algorithm does not need an extensive dataset for training like some methods that rely on computer vision or external sensors, it would not be as susceptible to failure when faced with unseen environments or new objects.

In the future, the researchers want to try combining their method with computer vision to create a multimodal sensing technique that is even more powerful.

“This work is not trying to replace computer vision. Both methods have their pros and cons. But here we have shown that without a camera, we can already figure out some of these properties,” Chen says.

They also want to explore applications with more complicated robotic systems, like soft robots, and more complex objects, including sloshing liquids or granular media like sand.

In the long run, they hope to apply this technique to improve robot learning, enabling future robots to quickly develop new manipulation skills and adapt to changes in their environments.

“Determining the physical properties of objects from data has long been a challenge in robotics, particularly when only limited or noisy measurements are available. This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools,” says Miles Macklin, senior director of simulation technology at NVIDIA, who was not involved with this research.

More information:
Peter Yichen Chen et al, Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction, arXiv (2024). DOI: 10.48550/arxiv.2410.03920

Journal information:
arXiv


Provided by
Massachusetts Institute of Technology


This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and teaching.

Citation:
System lets robots identify an object’s properties through handling (2025, May 8),
retrieved 8 May 2025




