Our sense of touch endows us with an exquisite sensitivity to surface texture. We can discern surfaces whose elements are tens of nanometers in size and hundreds of nanometers apart. The perception of texture not only allows us to make fine discriminations – like telling real silk from fake silk – but also guides object manipulation. For example, our perception of the surface properties of objects informs how much grip force we apply to them: slippery objects require more force.

One of the remarkable aspects of tactile texture processing is that it operates over six orders of magnitude in element size, from the smallest discernible elements (on the order of tens of nanometers) to the largest elements that can fit on a fingertip, measured in tens of millimeters. We have shown that this wide range of scales is accommodated by distributing information across three types of nerve fibers, each sensitive to surface elements at a different spatial scale. Importantly, these afferent populations convey texture information in different ways. Coarse textural features, on the order of millimeters, are conveyed in the spatial pattern of activation across one afferent population, drawing analogies to visual texture representations on the retina.
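The six-orders-of-magnitude claim follows directly from the two scale limits stated above. A minimal sketch, using illustrative round values (~10 nm and ~10 mm) rather than measured data:

```python
import math

# Illustrative values from the text: smallest discernible surface elements
# are on the order of tens of nanometers; the largest elements that fit on
# a fingertip measure tens of millimeters.
smallest_m = 10e-9   # ~10 nm, in meters
largest_m = 10e-3    # ~10 mm, in meters

# Range of element sizes spanned by tactile texture processing.
orders_of_magnitude = math.log10(largest_m / smallest_m)
print(orders_of_magnitude)  # → 6.0
```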
In contrast, fine textural features – with sizes in the tens of nanometers – are conveyed in temporal spiking patterns in two other afferent populations, driven by the skin vibrations elicited as the textured surface moves across the skin, drawing analogies to audition. How these two types of representations are integrated into a unitary sensory experience of texture remains a mystery. Furthermore, while afferent responses depend strongly on exploratory parameters, such as contact force and scanning speed, the perception of texture is largely invariant to these parameters. Neural signals must therefore be interpreted in the context of how they are acquired, and nothing is known about how this is achieved.
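The speed-dependence problem described above can be sketched with the standard physical relation for a periodic surface scanned across the skin: the elicited vibration frequency equals scanning speed divided by spatial period. This relation and the numbers below are illustrative, not taken from the text; the function name is hypothetical.

```python
def vibration_frequency_hz(scanning_speed_mm_s: float, spatial_period_mm: float) -> float:
    """Temporal frequency of the skin vibration elicited when a periodic
    texture moves across the skin: frequency = speed / spatial period."""
    return scanning_speed_mm_s / spatial_period_mm

# The same 1-mm grating drives vibrations (and thus temporal spiking
# patterns) at different frequencies depending on scanning speed, yet it
# is perceived as the same texture:
print(vibration_frequency_hz(80.0, 1.0))   # → 80.0  (Hz, at 80 mm/s)
print(vibration_frequency_hz(120.0, 1.0))  # → 120.0 (Hz, at 120 mm/s)
```

This is one concrete sense in which the neural signal varies with how it is acquired while the percept does not.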