Robots ‘capture’ shadows to sense touch

Researchers in the US have developed a low-cost method for soft, deformable robots to detect physical interactions, from pats to punches to hugs, without relying on touch sensors.

The method, devised by a team at Cornell University in New York, uses a USB camera located inside the robot that “captures” the shadow movements of hand gestures on the robot’s skin and classifies them with a machine-learning algorithm.
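The basic pipeline can be pictured as two stages: an internal camera turns each frame into a shadow silhouette, and a classifier labels that silhouette as a gesture. The snippet below is only a minimal sketch of the first stage using OpenCV, not the Cornell code, and the thresholding approach is an assumption made for illustration.

```python
# Illustrative sketch only, not the ShadowSense code: read the internal
# USB camera with OpenCV and reduce each frame to a shadow silhouette
# that a downstream classifier could label.
import cv2

def shadow_silhouette(frame):
    """Return a binary mask of dark (shadowed) regions on the skin."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu thresholding, inverted so shadows (dark pixels) become white.
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

camera = cv2.VideoCapture(0)   # USB camera mounted inside the robot
ok, frame = camera.read()
if ok:
    mask = shadow_silhouette(frame)
    coverage = mask.mean() / 255.0      # fraction of the view in shadow
    print(f"Shadow covers {coverage:.0%} of the view")
camera.release()
```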

The technology, known as ShadowSense, is the latest project from the Human-Robot Collaboration and Companionship Lab, led by senior author Professor Guy Hoffman.

“Touch is such an important mode of communication for most organisms, but it has been virtually absent from human-robot interaction. One of the reasons is that full-body touch used to require a massive number of sensors, and was therefore not practical to implement,” Hoffman explained. “This research offers a low-cost alternative.”

The technology originated as part of a collaboration with Professors Hadas Kress-Gazit and Kirstin Petersen to develop inflatable robots that could guide people to safety during emergency evacuations. Such robots would need to be able to communicate with humans in extreme conditions and environments. For example, these robots could physically lead someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.

Rather than installing a large number of contact sensors to gauge touch – which would add weight and complex wiring to the robot, and would be difficult to embed in a deforming skin – the team looked to sight.

“By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” said doctoral student Yuhan Hu, lead author on the project. “We think there is interesting potential there because there are lots of social robots that are not able to detect touch gestures.”

The prototype robot consists of a soft inflatable bladder of nylon skin stretched around a cylindrical skeleton and mounted on a mobile base. Under the robot’s skin is a USB camera, which connects to a laptop.

The researchers also developed a neural network-based algorithm that distinguishes between six touch gestures (touching with a palm, punching, touching with two hands, hugging, pointing, and not touching at all) with an accuracy of 87.5 to 96 per cent, depending on the lighting.
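A classifier of this kind might look something like the small convolutional network sketched below in PyTorch. The architecture, input size and class names are assumptions for illustration; the team's actual network may differ.

```python
# Minimal sketch of a six-class shadow-gesture classifier in PyTorch.
# Layer sizes and gesture labels are illustrative assumptions.
import torch
import torch.nn as nn

GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

class ShadowGestureNet(nn.Module):
    def __init__(self, num_classes: int = len(GESTURES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                        # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One grayscale 64x64 shadow image in, one gesture label out.
model = ShadowGestureNet()
dummy_shadow = torch.rand(1, 1, 64, 64)
pred = model(dummy_shadow).argmax(dim=1).item()
print("Predicted gesture:", GESTURES[pred])
```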

The robot can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker. The robot’s skin also has the potential to be turned into an interactive screen. By collecting enough data, a robot could be trained to recognise an even wider vocabulary of interactions, custom-tailored to fit the robot’s task, Hu added.
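Mapping recognised gestures to behaviours is then a matter of a simple dispatch table, as in the sketch below. The action names and helper functions are hypothetical placeholders, not part of the researchers' system.

```python
# Sketch of mapping recognised gestures to robot responses.
# `speak` and `roll_away` are hypothetical stand-ins for real actuators.
def speak(message: str) -> None:
    print(f"[loudspeaker] {message}")    # stand-in for audio output

def roll_away() -> None:
    print("[base] rolling away")         # stand-in for a motion command

RESPONSES = {
    "punch": roll_away,
    "hug": lambda: speak("Thank you!"),
    "palm": lambda: speak("Hello, I feel your hand."),
    "point": lambda: speak("Which way should we go?"),
}

def respond(gesture: str) -> None:
    action = RESPONSES.get(gesture)
    if action is not None:
        action()

respond("hug")   # -> [loudspeaker] Thank you!
```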

The ability to physically interact and understand a person’s movements and moods could ultimately be just as important to the person as it is to the robot, the researchers believe. “Touch interaction is a very important channel in terms of human-human interaction. It is an intimate modality of communication,” Hu said. “And that’s not easily replaceable.”