Wednesday, 6 March 2019

Generating cross-modal sensory data for robotic visual-tactile perception

Perceiving an object only visually (e.g., on a screen) or only by touching it can limit what we are able to infer about it. Human beings, however, have the innate ability to integrate visual and tactile stimuli, leveraging whatever sensory data is available to complete their daily tasks.
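The post gives no implementation details, but the underlying idea of "generating" one modality from another can be sketched as learning a mapping between paired sensory observations. The toy example below is a hypothetical setup, not the authors' method: it fits a simple linear visual-to-tactile generator on synthetic paired data, then predicts a tactile reading for a new, vision-only observation.

```python
# Illustrative sketch only: all feature dimensions and data are hypothetical,
# standing in for real paired visual/tactile recordings of the same objects.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired dataset: each object observed both visually and tactilely.
n_samples, vis_dim, tac_dim = 500, 64, 16
W_true = rng.normal(size=(vis_dim, tac_dim))       # unknown cross-modal relation
visual = rng.normal(size=(n_samples, vis_dim))     # e.g. image embeddings
tactile = visual @ W_true + 0.1 * rng.normal(size=(n_samples, tac_dim))

# Fit a linear visual-to-tactile generator by least squares.
W_hat, *_ = np.linalg.lstsq(visual, tactile, rcond=None)

# "Generate" a tactile estimate for an object we can only see, not touch.
new_visual = rng.normal(size=(1, vis_dim))
predicted_tactile = new_visual @ W_hat
print(predicted_tactile.shape)  # (1, 16)
```

Real systems would replace the linear map with a learned deep generative model, but the structure is the same: paired multimodal data in, a cross-modal predictor out.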
