
Cross-cutting Challenges - Theme 2: Keynote Talks
California West
Talks will begin every 30 minutes.
Parameterizing the Tactile Gamut (Keynote)
J. Edward Colgate
(Northwestern University, USA)
Abstract: Among the challenges associated with this theme are those that apply specifically to haptic displays. Here are a few: What is the range of experiences (the “tactile gamut”) that a given display can support? How can the largest possible tactile gamut be achieved? How can the realism of surface haptics be improved? Satisfactory answers to these questions don’t seem particularly close at hand, I’m afraid. In this talk, therefore, I am going to impose a few constraints in the hopes that they will lead to more narrowly defined and actionable problems … that are nonetheless interesting!

I will begin by limiting myself to variable friction displays in which the gross level of friction experienced by the fingertip (µ) can be tightly controlled as a function of measured variables including finger position (x), velocity (v), normal force (Fn), and time (t). For much of the talk, I will focus mainly on µ(x), where x is a one- or two-dimensional vector. Unfortunately, this doesn’t do much to limit the complexity of the problem. To illustrate why, suppose that µ(x) could take on two possible values (high or low) at each pixel on a typical smartphone screen (about two million pixels). Then the number of tactile patterns that we could generate would be two raised to the power of two million … basically infinity. And yet, the vast majority of those patterns would feel much the same: like tactile noise!

This framing of the problem may seem like academic folly, a tactile version of monkeys sitting at typewriters! But, in my experience, it is thoroughly practical. One of the first tools that we built for programming haptics on the TPaD Phone was the tactile bitmap: start with an 8-bit grayscale bitmap, and convert the gray of each pixel into a friction value to be experienced as the center of the finger passes over that pixel. It is a simple tool that anyone can use (and hundreds of people have!).
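The tactile-bitmap idea can be sketched in a few lines of code. This is a minimal illustration, not the TPaD Phone's actual implementation: the linear gray-to-friction mapping and the endpoint values `mu_min`/`mu_max` are illustrative assumptions, as is the convention that brighter pixels map to higher friction.

```python
# Hypothetical sketch of a "tactile bitmap": each pixel's 8-bit gray level
# is mapped onto a friction coefficient mu(x). A linear mapping and the
# endpoints mu_min/mu_max are assumptions for illustration only.

def make_friction_map(gray, mu_min=0.1, mu_max=0.9):
    """Convert an 8-bit grayscale image (list of rows of 0-255 values)
    into a per-pixel friction map."""
    return [[mu_min + (g / 255.0) * (mu_max - mu_min) for g in row]
            for row in gray]

def friction_at(mu_map, x, y):
    """Friction commanded as the center of the finger passes pixel (x, y)."""
    return mu_map[y][x]

# A tiny 2x2 "image": dark pixels command low friction, bright ones high.
gray = [[0, 255],
        [128, 128]]
mu_map = make_friction_map(gray)
```

A display controller would then call something like `friction_at(mu_map, x, y)` on every update of the measured finger position, which is exactly why the tool is easy to use and, as the abstract goes on to argue, hard to design with.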
But, if truth be told, it is a terrible way to program haptics: for anything beyond the simplest patterns, it is impossible to predict the tactile experience by looking at the grayscale image; pictures that look very different may feel much the same; it is rarely clear how to change an image to achieve a particular tactile intent; and so on. We need a better parameterization. We need to know how to throw away extraneous information while scaffolding designers’ mental models and giving them control over a broad tactile gamut. In this talk, I’ll review the work that we have been doing toward those goals, including: the notion of a “texel”, the largest region for which phase information may be disregarded; the use of low-order statistics to capture macro textures (the “Tactile Paintbrush”); the reduction of spectral information associated with fine texture; and the importance of other variables, such as velocity and time.
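The idea of disregarding phase within a texel has a simple mathematical illustration (not the talk's actual algorithm): when a texture patch is characterized only by the magnitude of its 2-D Fourier transform, any circular shift of the patch, which changes only the phase, yields an identical description. A short sketch using NumPy, with a random patch standing in for a texture:

```python
import numpy as np

# Illustrative sketch: discarding phase within a "texel" means describing a
# texture patch by its magnitude spectrum alone. A circular shift of the
# patch changes only the phase of its DFT, so shifted copies are identical
# under this description. The random patch is a stand-in for real texture data.

def magnitude_spectrum(patch):
    """Magnitude of the 2-D DFT: a phase-free description of the patch."""
    return np.abs(np.fft.fft2(patch))

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
shifted = np.roll(patch, shift=(3, 5), axis=(0, 1))

# The two patches look different pixel-by-pixel but share one magnitude spectrum.
same = np.allclose(magnitude_spectrum(patch), magnitude_spectrum(shifted))
```

Intuitively, this is why phase can be thrown away over a small enough region: within a texel, a spatially shifted copy of the same micro-texture should feel the same under the finger.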

