When you pick up one pencil from a pile on a table, how does your hand know where to go or what to grab? Your eyes help out.
For robots to do the same, they need their own way to see the object they want to pick up or put down. It takes perception: a sensor-based system that can accurately identify an object's position, orientation and dimensions, sometimes to within a millimeter or less.
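To give a concrete sense of what those three quantities are, here is a minimal sketch of how a perception pipeline might recover them from a 3D point cloud. The function, the use of principal component analysis, and the synthetic box-shaped cloud standing in for sensor data are all illustrative assumptions, not a description of any particular commercial system.

```python
import numpy as np

def estimate_pose_and_size(points: np.ndarray):
    """Estimate position, orientation and dimensions of an object from
    an N x 3 point cloud, using principal component analysis.

    `points` is a hypothetical input: one frame from a 3D sensor,
    already segmented so it contains only the object of interest.
    """
    # Position: the centroid of the cloud.
    position = points.mean(axis=0)

    # Orientation: the principal axes of the cloud, taken from the
    # eigenvectors of its 3x3 covariance matrix.
    centered = points - position
    _, axes = np.linalg.eigh(np.cov(centered.T))

    # Dimensions: the extent of the cloud along each principal axis.
    projected = centered @ axes
    dimensions = projected.max(axis=0) - projected.min(axis=0)

    return position, axes, dimensions

# Toy example: a synthetic box-shaped cloud standing in for sensor data.
rng = np.random.default_rng(0)
box = rng.uniform([-0.05, -0.02, -0.01], [0.05, 0.02, 0.01], size=(2000, 3))
pos, orient, dims = estimate_pose_and_size(box)
print("position (m):", pos)
print("dimensions (m):", dims)
```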
Currently, robots on a factory line require special, time-consuming and expensive accommodations to function. A product’s parts need to be held at just the right height and just the right angle so that the preprogrammed robots can get the job done. If that job changes, manufacturers need to reprogram the robot’s movements and build an entirely new set of fixtures to position the parts all over again.
Just as your eyes would help with that pencil, 3D perception systems let robots switch from one task to the next, identify specific parts and pick them up.
But just as no two pairs of eyes are exactly the same, no two manufacturers' robot sensors perform in exactly the same way. Sensors respond differently to variations in lighting, an object's shininess, its shape and more. Right now, there are very few ways to compare one company's 3D sensor performance with another's.
That's the space NIST engineers are entering. We're developing standards for robot perception systems and testing their capabilities against measurements from highly accurate instruments.
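As a toy illustration of that kind of testing, the sketch below compares a sensor's measurements of a set of points against reference measurements from a more accurate instrument and reports a root-mean-square error. The data, the noise level and the choice of metric are assumptions made for illustration; real test methods are more involved.

```python
import numpy as np

def rms_error(sensor_points: np.ndarray, reference_points: np.ndarray) -> float:
    """Root-mean-square distance between corresponding points measured
    by the sensor under test and by a reference instrument.

    Both arrays are hypothetical N x 3 clouds, assumed to be already
    registered in a common coordinate frame with known correspondences.
    """
    residuals = np.linalg.norm(sensor_points - reference_points, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))

# Toy example: simulate a sensor that adds 0.5 mm of noise to ground truth.
rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 1.0, size=(500, 3))            # coordinates in meters
measured = truth + rng.normal(0.0, 0.0005, truth.shape)  # 0.5 mm noise
print(f"RMS error: {rms_error(measured, truth) * 1000:.2f} mm")
```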
And we’re looking for friends in the field. Developing standards depends on a community of experts and users reaching consensus. If you’re interested in this topic or know someone who is, please spread the word and get in touch with our researchers.