
Appearance-Preserving Tactile Optimization

CHELSEA TYMMS, New York University
SIQI WANG, New York University
DENIS ZORIN, New York University

Fig. 1. Our optimization procedure enables the control of a texture's tactile roughness while maintaining its visual appearance. Starting with a target texture (left), the procedure optimizes toward a desired tactile roughness while preserving the visual appearance (center). The resulting textures can be used to fabricate visually similar but tactually different objects, such as these 3D-printed starfish (right, photographed).

Textures are encountered often on various common objects and surfaces. Many textures combine visual and tactile aspects, each serving important purposes; most obviously, a texture alters the object's appearance or tactile feeling as well as serving for visual or tactile identification and improving usability. The tactile feel and visual appearance of objects are often linked, but they may interact in unpredictable ways. Advances in high-resolution 3D printing enable highly flexible control of geometry to permit manipulation of both visual appearance and tactile properties. In this paper, we propose an optimization method to independently control the tactile properties and visual appearance of a texture. Our optimization is enabled by neural network-based models, and allows the creation of textures with a desired tactile feeling while preserving a desired visual appearance at a relatively low computational cost, for use in a variety of applications.

CCS Concepts: • Human-centered computing → User studies; • Computing methodologies → Perception.

Additional Key Words and Phrases: Roughness, fabrication, perception

Authors' addresses: Chelsea Tymms, New York University, tymms@nyu.edu; Siqi Wang, New York University, swang@nyu.edu; Denis Zorin, New York University, dzorin@cs.nyu.edu.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
0730-0301/2020/12-ART212 $15.00
https://doi.org/10.1145/3414685.3417857

ACM Reference Format: Chelsea Tymms, Siqi Wang, and Denis Zorin. 2020. Appearance-Preserving Tactile Optimization. ACM Trans. Graph. 39, 6, Article 212 (December 2020), 16 pages. https://doi.org/10.1145/3414685.3417857

1 INTRODUCTION
Tactile textures are ubiquitous in everyday life. We encounter tactile textures on the surfaces of fruits and plants, skin, woven fabrics, and many manufactured surfaces. Tactile texture often serves a specific purpose, practical or aesthetic (an object should feel good, not just look good). Creating a particular tactile feeling is a common task that receives less attention than visual appearance, although it is often just as important. Tactile feeling plays a particularly important role for people who are visually impaired, who rely much more on the sense of touch.

The tactile feeling and visual appearance of objects can interact in unpredictable ways; for example, the tactile texture may be a byproduct of creating a particular appearance (e.g., an etched pattern), or vice versa (e.g., knurled grips have a particular look). The goals of achieving particular visual and tactile appearances may conflict: e.g., one may want a particular visual pattern on a tool handle while achieving specific tactile properties optimal for usability. While in many cases little can be done about the interaction of visual and tactile properties, advanced fabrication technologies like high-resolution 3D printing enable highly flexible control of both visual and tactile texture.

A characteristic feature of both visual and tactile textures is their statistical nature: many distinct patterns and geometries may look or feel the same. We refer to distinct (in the sense of per-point equality) textures that are perceived in a similar way as perceptually equivalent. The large space of perceptually equivalent textures makes it possible to adjust one aspect of a texture (e.g., tactile) without affecting the other (visual). This type of adaptation makes it possible to separate the process of visual and tactile design.

In this paper, we propose an efficient optimization method for independent control of the tactile feeling and visual appearance of a surface. More precisely, the problems we solve can be formulated as follows: given input texture geometry, how can we modify it to achieve certain target tactile properties while minimizing changes to its visual appearance? And conversely, how can we achieve a specific visual appearance by modifying geometry, while preserving tactile properties? Our method builds on previous work on quantitative modeling of perceptual roughness, as well as on visual appearance perception. The main drawbacks of the highly accurate roughness model we use are the relative expense of its evaluation and its lack of differentiability, which make it difficult to apply in an optimization context. One of the main contributions of our work is efficient neural network-based differentiable versions of models for tactile roughness, visual appearance, and contact area. The roughness model is in close agreement with an accurate but expensive-to-evaluate model, while it does not require expensive 3D meshing and FEM simulation and can be evaluated directly on the input texture geometry. The speedups we obtain are on the order of 10,000 times for roughness evaluation (although the original FEM model we compare to was not fully optimized), making it possible to use this model in the inner optimization loop. In addition, the resulting neural network model provides gradients, making it trivial to plug into an efficient optimizer.

Using the same basic approach, we also constructed a similar neural network model for contact area and for a visual similarity measure for geometric textures involving advanced lighting effects, both with multiple-orders-of-magnitude speedups.

Using these models, we developed an optimization method that allows for controlling the changes in visual appearance and tactile roughness. With the same approach, it can also control another aspect of tactile perception, temperature sensation. We demonstrate the behavior of our system for a variety of examples in different contexts and validate our approach with several visual and tactile experimental studies on flat and curved surfaces.

2 RELATED WORK
Our work is related to previous work in several domains. Two of the most important works we build on are [Tymms et al. 2018] (we use the roughness model described in that paper as a starting point) and [Isola et al. 2017], which describes an image-to-image CNN that we adapt to our purposes. Our work is connected to a spectrum of work in visual and tactile perception modeling, texture synthesis, and applications of CNNs to optimization.

Tactile perception. Research on the sense of touch has found that tactile perception consists of 4-5 dimensions ([Tiest 2010]), including large-scale and small-scale roughness; compliance; friction; and temperature. Here we focus on large-scale roughness, elicited by features larger than 0.1 mm in size and detected through strain; we also consider temperature, controlled here by mediating the area of contact between the skin and a surface. Most previous research in roughness perception has used different types of natural or artificial stimuli that are difficult to control, e.g., [Manfredi et al. 2014], [Connor et al. 1990]. We use 3D printing to allow the creation of higher-resolution, more precisely controllable surfaces. We also gain insights from [Tiest and Kappers 2009], who performed experiments on temperature perception based on thermal diffusivity and found a relative discrimination threshold of 43%.

Tactile fabrication. [Piovarči et al. 2016] developed a quantitative model for tactile compliance perception using stimuli fabricated from materials with different perceived tactile compliance, and demonstrated its applications to fabricating shapes with variable properties. Compared to roughness, compliance rarely affects the visual appearance of an object, so combining the two is relatively straightforward. In [Elkharraz et al. 2014], a roughness model was obtained using tactile textures fabricated from a set of visual textures converted to shallow height maps, implicitly creating a close connection between visual and tactile appearance. In our work, we aim to decouple these.

Other recent work in the fabrication domain has aimed to facilitate the incorporation of tactile properties in 3D printed models. [Torres et al. 2015] provides an interface to fabricate objects with a user-specified weight, compliant infill, and rough displacement map. However, their roughness metric relies on texture feature size, which is not always definable and does not provide a comprehensive model for all textures. [Chen et al. 2013] develops methods to fabricate objects with specified deformation behavior and textured surface displacement, but does not allow direct perceptual control. [Degraen et al. 2019] addresses a more specific question, using 3D-printed hair structures to adequately simulate material roughness and softness for use in immersive virtual reality.

Thermal conductivity is of interest in fabrication but is typically controlled by altering the base material or creating a composite; [Wang et al. 2017] reviews several options to vary thermal conductivity and other material properties. We aim to control conductivity for tactile contexts by altering geometry. In a related application, [Zhang et al. 2017] optimizes the tessellation pattern of 3D-printed orthopedic casts for thermal comfort.

Texture synthesis. [Portilla and Simoncelli 2000] created a model for texture synthesis based on a set of image statistics. Their method performs well on some natural and artificial textures, but fails for others; it also requires a significant amount of time and is therefore poorly suited to optimization. [Wallis et al. 2017] is based on a CNN feature-based model (VGG-19) but similarly does not provide a close match for many textures. Classical non-parametric texture synthesis works, e.g., [Efros and Leung 1999] and [Wei and Levoy 2000], yield high-quality results for many textures, but are not readily adaptable for our optimization purposes. A recent survey of synthesis methods can be found in [Barnes and Zhang 2016]. Works such as [Gatys et al. 2015] and [Ulyanov et al. 2016] present synthesis methods based on CNNs but are not robust enough for our optimization purposes. [Zhou et al. 2018] presents a recent GAN-based texture synthesis method with impressive results, but it requires several hours of training for each image; similarly, [Yu et al. 2019] provides perceptually based texture synthesis but requires days of training for a set of similar textures. Neither is suitable for optimization in its current form. In contrast, we seek a method that is robust for all textures and whose loss computation does not require a large amount of time.

Optimizing fabricated visual appearance. Several works use optimization to accomplish a similar goal of appearance preservation for 3D printing. [Schüller et al. 2014] uses optimization to alter the geometry of 3D objects to maintain visual appearance subject to other geometric constraints, to produce bas-reliefs for fabrication. [Rouiller et al. 2013] designed a pipeline to optimize a 3D printed surface's microgeometry to replicate a desired BRDF. [Elek et al. 2017] employs optimization to correct for light scatter to more accurately reproduce color in 3D printing, and [Shi et al. 2018] uses optimization of the internal layer structure of color multimaterial 3D printing to replicate the full spectrum of color of 2D art, invariant to illumination, more accurately than traditional 2D printing.

Visual similarity of images and textures. Visual similarity metrics are designed to quantify perceptual similarity, with consistency with perception measured by pairwise or three-way comparisons: if the numerical indicator of similarity for one pair of images is higher than for another, then we expect the first pair to be perceived as more different. Well-established visual metrics include those based on structural similarity: SSIM [Wang et al. 2004], FSIM [Zhang et al. 2011], and MSSIM [Wang et al. 2003]. A different metric, designed primarily for evaluation of image compression quality and based on a complex visual system model, is found in [Mantiuk et al. 2011]. [Zhang et al. 2018] presents a metric based on deep features learned for, e.g., a classification task and combined with a simple metric in the feature space. These metrics were demonstrated to be closer (on relevant datasets) to human perception compared to SSIM. We use a simple, tighter metric based on surface normals, discussed in Section 3, where we also discuss our experiments with other measures. This is consistent with some of the work on depth images: e.g., [Haefner et al. 2018], a method for increasing the resolution of depth images using an additional color channel, uses a metric including an estimation of the normal difference. [Martín et al. 2019] develops a procedure to measure texture similarity by matching a localization task to texture statistics, but the current implementation was not shown to be successful for diverse textures.

Neural networks in model reduction. Model reduction is a well-established area that uses a variety of machine learning-related techniques to decrease the number of parameters needed to simulate a physical model, with the goal of reducing the cost of the simulation, which is particularly important in an optimization context. We share this motivation, although we do not aim to achieve this goal through explicitly reducing the number of parameters of the model. Older methods are relatively well covered in the survey [Forrester and Keane 2009]. Very recently, and concurrently with this work, neural networks were applied to reduced-order modeling of Poisson problems and 2D fluids [Hesthaven and Ubbiali 2018]. Other model examples are considered in [Raissi et al. 2019].

Steganography. Steganography algorithms aim to hide watermarking or other types of information in data, with a few papers focusing on 3D data; see, e.g., [Wang et al. 2008] for a survey, and more recently [Yang et al. 2017]. As we do in our work, these methods aim to preserve visual appearance, but their goal is to conceal the hidden information from the naive observer; in our case, we do not want to make the modification of tactile properties apparent.

3 OVERVIEW
The main goal of this work is to develop a process to allow the control of a texture's tactile roughness or tactile temperature while maintaining its visual appearance, which can produce a range of effects.

Summary. Given an input 2D height field and a desired tactile roughness value or contact area, the model uses learned functions – one for appearance, based on rendered shading, and one either for tactile roughness, based on variation of strain in simulated skin, or for tactile temperature, based on a simulated skin contact area – to perform an optimization for roughness or contact area while minimizing visual distortion. We use psychophysical experiments to validate the results. A general overview of the process is shown in Figure 1.

The development of our optimization process consists of the following steps:

• We create a set of 6300 height maps comprising a variety of textures and grayscale images. We run simulations of the human finger contacting these heightmaps, and find the resulting field of maximum compressive strain.

• We use a convolutional neural network to learn a function taking the input heightmap and outputting the maximum compressive strain field, and we compute tactile roughness on this field.

• We use a similar neural network to learn a function taking the input heightmap and outputting the contact area between the skin and the texture.

• We learn a function for the height field's visual appearance, using a CNN to learn the rendering with shadows and lighting.

• We develop an optimization procedure taking the losses from the learned roughness or contact function and the learned rendering function to optimize for a target tactile roughness or temperature while minimizing the change in appearance.

• We validate this procedure by testing several textures, both as renderings and as 3D-printed textures, and running human psychophysical experiments. We compare against the simpler method of altering tactile feeling using linear scaling.

4 OPTIMIZATION
The optimization procedure acts to alter the geometry of the input texture height field, in order to modify the tactile feeling of the input while minimizing its change in visual appearance.

4.1 Optimization Overview
We use three functions in our optimization process to compute tactile and visual difference estimates:

• Roughness: ϕ_r: R^n → R^n, where n is the number of pixels in the height and stress maps, mapping the height field to stress magnitudes at a plane inside the skin where tactile sensors are located. The stresses are sampled at the same resolution as the input height field.

• Visual appearance: ϕ_v: R^n → R^{kn}, mapping the height field to the pixel values of k rendered images with different lighting.

• Contact area: ϕ_c: R^{2n} → R^n, where n is the number of pixels in the height and contact maps, mapping the height field and corresponding strain field to the distance between the skin and the surface at each point.

In addition, we use a function V: R^n → R to evaluate the perceptual roughness estimate from the stress field σ = ϕ_r(H) of height field H.

Using these functions, which we define precisely below, our target functional is defined as follows. For a given input texture height field H_0, a target perceptual roughness r_trg, a target contact area c_trg, and a target height range [0, H_trg], we define the following energy terms:

(1) E_rough(H) = |r_trg − V(ϕ_r(H))|: the difference between the current roughness and the target roughness, with the strain variation function V defined in Section 4.2.

(2) E_contact(H) = |c_trg − A(H)|: the difference between the current contact and the target contact, where A is the weighted contact distance function defined in Section 4.3.

(3) E_vis(H, H_0) = Σ_k (1/n) ‖ϕ_v,k(H_0) − ϕ_v,k(H)‖²: the visual difference, computed as the L2 norm of the pixel-wise difference between the current rendered image and the target rendered image, summed over the three different rendering conditions used.

(4) E_reg(H, H_0) = Σ_k (1/n) (‖Δ_x(ϕ_v,k(H_0) − ϕ_v,k(H))‖² + ‖Δ_y(ϕ_v,k(H_0) − ϕ_v,k(H))‖²): the sum of difference-variation regularization energies over all rendering conditions, where Δ_x and Δ_y are finite-difference matrix operators for the horizontal and vertical directions; i.e., an approximation of ∫ ‖∇(ϕ_v(H_0) − ϕ_v(H))‖² dA.

(5) E_clamp(H) = ‖H − clamp_[0,H_trg](H)‖₂²: the clamping energy to keep the result in the [0, H_trg] range.

The total energy we minimize is defined as

$$E(H, H_0) = E_{rough}(H) + w_1 E_{vis}(H, H_0) + w_2 E_{reg}(H, H_0) + w_3 E_{clamp}(H) \quad (1)$$

To make the optimization of this function practical, we need to compute E(H, H_0) as well as ∇_H E(H, H_0) efficiently. However, computation of E_rough involves a 3D finite element simulation, including 3D domain meshing and contact resolution; computation of E_vis requires rendering of textures with some global illumination effects.

We address both of these problems by approximating ϕ_r, ϕ_c, and ϕ_v using neural networks, as these provide (a) fast evaluation of function values and (b) evaluation of derivatives with respect to the input parameters. The details of the approximations are discussed below.

Convergence criteria and weight choices. The main parameter of the optimization is w_1, controlled by the user, which represents the trade-off between visual fidelity and closeness to the target roughness.

Fig. 2. Parameter convergence during optimization for roughness and visual appearance. The goal is to alter the roughness of the input texture (iteration 0) while preserving its visual appearance, which is achieved by the final iteration.

The weight w_3 is chosen to be relatively high, 10^5, so that the last term operates as a constraint. The weight w_2 is chosen to be lower compared to w_1, as E_reg acts as a regularizing term, minimizing small-scale noise by picking smoother solutions among those with low values of the first two terms. We use w_2 = 0.06.

For contact area, which has values on the order of 100 mm², approximately 1000 times the typical roughness values, these weights were scaled up by 1000.

We use a stopping criterion for optimization that places bounds on three of the energy components. For roughness, E_rough < ε_r r_trg, with ε_r = 0.1, about half of the 19% threshold for tactile roughness discrimination described in [Tymms et al. 2018]. For visual difference, E_vis < ε_v ‖ϕ_v(H_0)‖₂, with ε_v = 8; this is proportional to image resolution, and was experimentally found to be a conservative goal to avoid visible changes, corresponding to a 2% change in pixel values.

The height constraint is expected to be satisfied nearly precisely: E_clamp < ε_c, with ε_c = 10^-4. We used H_trg = 3 to ensure the height remains below 3 mm, selected as a reasonable maximum height for a fabricable tactile texture.

An example of the optimization process for a texture is shown in Figure 2. The effect of altering the weights on convergence is shown in Figure 3.

The Adam optimizer ([Kingma and Ba 2014]) implemented in PyTorch is used for optimization. A learning rate of 0.027 was chosen through trials with single parameters to permit convergence of the parameters while avoiding excessive oscillation. In the next sections, we explain how the roughness, contact, and visual functions, respectively ϕ_r, ϕ_c, and ϕ_v, are defined.
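Before turning to those definitions, here is a minimal PyTorch sketch of the optimization loop itself, assuming pretrained differentiable surrogates phi_r (height field → strain field), a list phi_vs of per-lighting-condition renderers, and a strain-variation function V with the interfaces described above. The function names, defaults, and simplified stopping checks are illustrative, not the authors' released code.

```python
import torch

def optimize_height_field(H0, r_target, phi_r, phi_vs, V,
                          w1=1.0, w2=0.06, w3=1e5, H_max=3.0,
                          lr=0.027, max_iters=2000,
                          eps_r=0.1, eps_v=8.0, eps_c=1e-4):
    """Minimize Eq. (1) over the height field H (e.g., shape (1, 1, 128, 128))."""
    H = H0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([H], lr=lr)            # lr = 0.027, as in Section 4.1
    targets = [pv(H0).detach() for pv in phi_vs]  # renders of the input texture
    n = float(H0.numel())
    for _ in range(max_iters):
        opt.zero_grad()
        E_rough = (r_target - V(phi_r(H))).abs()
        E_vis = H.new_zeros(())
        E_reg = H.new_zeros(())
        for pv, I0 in zip(phi_vs, targets):
            d = I0 - pv(H)                        # per-condition image difference
            E_vis = E_vis + d.pow(2).sum() / n
            dx = d[..., :, 1:] - d[..., :, :-1]   # horizontal finite differences
            dy = d[..., 1:, :] - d[..., :-1, :]   # vertical finite differences
            E_reg = E_reg + (dx.pow(2).sum() + dy.pow(2).sum()) / n
        E_clamp = (H - H.clamp(0.0, H_max)).pow(2).sum()
        E = E_rough + w1 * E_vis + w2 * E_reg + w3 * E_clamp
        E.backward()
        opt.step()
        # Simplified stopping checks mirroring Section 4.1: bound the roughness
        # error, the visual difference (relative to ||phi_v(H0)||), and the clamp term.
        if (E_rough.item() < eps_r * r_target
                and all(E_vis.item() < eps_v * I0.norm().item() for I0 in targets)
                and E_clamp.item() < eps_c):
            break
    return H.detach()
```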


Fig. 3. a) When a significantly (10x) lower weight is used for w_1, convergence of roughness to the target may not occur. b) A significantly (10x) higher weight for w_1 causes the visual energy to converge more slowly, and it may not reach the target threshold.

Fig. 4. In the original roughness model, a 3D FEM simulation was used to simulate the skin touching a textured surface, and the maximum compressive strain field was sampled at a depth of 0.75 mm.

4.2 Tactile roughness
We use a modified version of the model developed in [Tymms et al. 2018], which computes the tactile roughness of a surface by simulating the strain variation field resulting from skin contact on the surface. The computation of the model is relatively expensive; we briefly summarize the model here for completeness. The main step of the model is a finite element method simulation of the contact of the skin with the tactile texture defined by H(x, y), to obtain a corresponding displacement field u_H(u, v, w), where u, v, w are 3D coordinates in the undeformed layer of skin, with w = 0 corresponding to the surface and w_0 = 0.75 mm corresponding to the approximate depth of the tactile receptors.

To approximate the skin, we use the same two-layer block model as [Tymms et al. 2018]. The block is 1 cm² in surface area and 0.5 cm in height, with a rigid upper half and soft lower half, and a force of 10 N is used. A model of the simulation is shown in Figure 4.

For a displacement field u, ϵ[u] = (1/2)(∇u + ∇ᵀu) is the small-deformation strain tensor. If λ_3(ϵ) is the largest-magnitude negative (compressive) eigenvalue of the strain tensor, our perceptual roughness estimate f(H) can be written as

$$f(H) = V(\lambda_3(\epsilon(u_H(\cdot, \cdot, w_0))))$$

where V is the strain variation function on the plane w = w_0. We replace a stochastic function defined in the original model with a deterministic function described in more detail below.
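As an illustration of these definitions, the following NumPy sketch computes λ_3 from a displacement field that has already been sampled from the simulation onto a regular grid; the array layout, grid spacing, and names are assumptions for illustration, not the FEM pipeline itself.

```python
import numpy as np

def max_compressive_strain(u, spacing=(0.1, 0.1, 0.1)):
    """u: displacement field on a grid, shape (nx, ny, nz, 3); spacing in mm.
    Returns lambda_3: the most negative eigenvalue of the small-deformation
    strain tensor eps[u] = (grad u + grad u^T) / 2 at each grid point."""
    # grads[..., i, j] = d u_i / d x_j
    grads = np.stack([np.stack(np.gradient(u[..., i], *spacing), axis=-1)
                      for i in range(3)], axis=-2)
    eps = 0.5 * (grads + np.swapaxes(grads, -1, -2))   # symmetrize
    eig = np.linalg.eigvalsh(eps)                      # ascending eigenvalues
    return np.minimum(eig[..., 0], 0.0)                # compressive part only
```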

The expensive step is the computation of the displacements u_H for a given H: it requires sufficiently fine 3D meshing to resolve detail at the scale of the smaller texture features, and solving a nonlinear (due to contact) constrained elastic deformation problem, which in our current implementation has a computation time of 20-40 minutes and uses 15 GB of memory with the required highly refined mesh.

Fig. 5. a) Stochastic sampling; b) Equivalent deterministic sampling

In addition to the cost of evaluation, it is difficult to obtain an approximation of the derivative of this function other than by extremely expensive finite differences, so optimizing a functional depending on u_H can only be done with gradient-free methods.

This is the step that we replace with a direct map

$$\phi_r(H)(u, v) \approx \lambda_3(\epsilon(u_H(u, v, w_0))),$$

represented with a neural network.

Strain variation function. In [Tymms et al. 2018], the strain variation function V(σ) was computed using a large set S of N randomized pairs of samples (p_1, p_2), p_i = (u_i, v_i), separated, on average, by a distance d. Denoting σ(u, v) = λ_3(ϵ[u_H](u, v, w_0)),

$$V(H) = \frac{1}{N} \sum_{(p_1, p_2) \in S} |\sigma(u_1, v_1) - \sigma(u_2, v_2)|$$

N = 8000 sample pairs were used, sampled from disks of radius 0.8 mm placed at the endpoints of randomly selected segments of length 2.2 mm.

Instead of using a random sampling of points, here we use a deterministic evaluation of the variation between each point and its neighbors within the desired distance, in order to derive a strain variation field (Figure 5):

$$V(H) = \frac{1}{2rl} \int_{x=0}^{l} \int_{y=0}^{l} \int_{\Delta=d-r}^{d+r} \int_{\theta=0}^{\pi} |\sigma(x, y) - \sigma(x + \Delta\cos\theta,\ y + \Delta\sin\theta)| \ d\theta\, d\Delta\, dy\, dx \quad (2)$$

This function is smooth, so the gradient of the complete roughness estimate can be computed.
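A direct NumPy discretization of this deterministic variation measure might look as follows; the sampling counts and the wrap-around shifts (np.roll) are simplifications for illustration, using the 2.2 mm / 0.8 mm values from the stochastic version.

```python
import numpy as np

def strain_variation(sigma, px_per_mm, d=2.2, r=0.8, n_delta=5, n_theta=8):
    """Deterministic strain variation (cf. Eq. 2): mean absolute difference
    between sigma and copies of itself shifted by distances in [d - r, d + r]
    along directions theta in [0, pi)."""
    total = 0.0
    count = 0
    for delta in np.linspace(d - r, d + r, n_delta):
        for theta in np.linspace(0.0, np.pi, n_theta, endpoint=False):
            sx = int(round(delta * np.cos(theta) * px_per_mm))
            sy = int(round(delta * np.sin(theta) * px_per_mm))
            shifted = np.roll(np.roll(sigma, sy, axis=0), sx, axis=1)
            total += np.abs(sigma - shifted).mean()
            count += 1
    return total / count
```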

Learning the strain field. The FEM simulation used to compute σ(u, v) in [Tymms et al. 2018] is used solely to find a single 2D strain field; that is, the simulation takes as input a 2D grid (the heightmap defining the boundary conditions for the contact area), and returns as output a 2D grid (the maximum compressive strain at a depth of 0.75 mm). Image-to-image translation problems have been studied extensively in machine learning, and here we adapt a convolutional neural network described in [Isola et al. 2017] to learn a relationship ϕ_r between the input height map and the output maximum compressive strain.


Fig. 6. Two examples from the learned CNN test set show the learned and ground-truth maximum compressive strain fields from the input heightmaps. Strain fields are shown with increased contrast for visual clarity.

Fig. 7. The difference in computed roughness between the learned strain field and the real strain field is typically very low, with a median of 5.3%.

To acquire ground truth simulation data, we ran the FEM simulation for the 3D skin model on a heightmap dataset of 6307 image pairs, similar in size to several datasets successfully trained with this neural network structure. We use a set of black and white images and textures (including the Describable Textures Dataset [Cimpoi et al. 2014], VisTex [MIT Media Lab 1995], and the Brodatz texture database [Brodatz 1966]) and procedural textures to enrich the dataset. In some cases, images were randomly cropped and/or scaled, and in some cases procedural noise was added. Heightmaps had a maximum vertical height of 3 mm and represented a texture of size 100 mm². As suggested in [Tymms et al. 2018], for each simulation we found the maximum compressive strain field at a depth of 0.75 mm, and the strain field of a flat-texture simulation was subtracted to discount any effect from edges.

Inputs and outputs were scaled to 128×128 px images. The set was split randomly into three sets: testing (312 images), training (4918), and validation (1077). We used the convolutional neural network used as the generator in [Isola et al. 2017], with no dropout and using BCE loss, and trained for 200 epochs with batch size 1.

The learned strain field and its resulting tactile roughness value were computed for an unseen testing set, and the learned value was compared against the actual value. The median error in roughness was 5.3% and the average error was 8.0%, well below the 19% threshold of discrimination described in [Tymms et al. 2018]. The error distribution is shown in Figure 7.
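Schematically, the surrogate training setup looks like the sketch below. The tiny encoder-decoder stands in for the pix2pix generator of [Isola et al. 2017], and plain L1 regression replaces the GAN objective for brevity, so every name and hyperparameter here is a placeholder rather than the trained model itself.

```python
import torch
import torch.nn as nn

class TinySurrogate(nn.Module):
    """Stand-in for the pix2pix generator: 128x128 heightmap -> strain map."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1))

    def forward(self, height):
        return self.dec(self.enc(height))

def train_surrogate(loader, epochs=200, lr=2e-4):
    net = TinySurrogate()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        for height, strain in loader:   # (B, 1, 128, 128) pairs from FEM runs
            opt.zero_grad()
            loss = nn.functional.l1_loss(net(height), strain)
            loss.backward()
            opt.step()
    return net
```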

Fig. 8. The contact area function takes as input the heightmap (left, red channel) and the strain field (left, green channel) and outputs the distance field (center, where black indicates a distance of 0). The distance field can be used to compute the contact area (right, where the contact area is black).

Fig. 9. The learned contact area matches the actual contact area very closely, with an error of 2.7%.

The network allows the roughness to be computed in an average of 0.05 seconds, a significant speedup compared to the 20-40 minutes required to run the full FEM simulation.

The learned function and its gradient are used in optimization for a texture to converge toward a desired tactile roughness, as shown in Figure 2.

4.3 Contact area
Computing the contact area requires the same time-intensive FEM simulation as computing the roughness field. To compute the contact area, we use a function taking as input the height field and outputting the field of distances between the surface and the simulated skin at each point. The computation of this distance field is expensive and requires an FEM simulation as described in Section 4.2. Therefore, we replace this step with a neural network.

Learning the contact distance field. The FEM simulation takes in the input height field H and outputs a mesh displacement field u_H describing the displacement of the skin when in contact with height field H. From this displacement field and the height field, we can acquire the field of the distance d between the skin and the input texture at each point, where a distance of 0 indicates skin contact with the texture surface.

We adapt a similar convolutional neural network to learn the relationship between the input heightmap H and the output distance field d = ϕ_c(H). We used the same height field training set as in Section 4.2, which had about 6300 pairs. To improve the accuracy of the learned function, we also provided the strain field as input, so that the input to the function has two channels: the height field and the strain field. An example of the function's input and output is shown in Figure 8.
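For illustration, assembling this two-channel input is a one-line stacking operation; the tensor names here are hypothetical.

```python
import torch

height = torch.rand(1, 1, 128, 128)     # input heightmap
strain = torch.rand(1, 1, 128, 128)     # strain field predicted by phi_r
x = torch.cat([height, strain], dim=1)  # (1, 2, 128, 128): two input channels
# distance = phi_c(x)                   # per-pixel skin-surface distance; 0 = contact
```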

To compute the error for the testing set of size 250, the learned distance field was computed for each input heightmap with its learned strain field, and the contact area was computed and compared to the actual contact area derived from simulation. The errors in computed contact area for this set had a mean of 2.7%, as shown in Figure 9.

Contact optimization. The optimization aims to modify a texture so that its total contact area moves to a particular target. Because contact area itself is a discontinuous function, the optimization process was often unable to converge. Therefore, we use a smooth function weighting the contact area at each point proportionally to the inverse of its distance. That is, for contact distance field d, the contact area is approximated by:

$$A(H) = \int_{x=0}^{l} \int_{y=0}^{l} \frac{1}{1 + 80\, d(x, y)}\ dx\, dy \quad (3)$$

This function provides a smooth weighted contact distance, so that a distance of 0 has a weight of 1; weights decay rapidly, so that a distance of 0.01 mm has a weight of about 0.5 and a distance of 0.1 mm has a weight of about 0.1.
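A minimal sketch of this weighting, assuming the distance field is sampled per pixel; the function name and the pixel-area argument are illustrative.

```python
import numpy as np

def weighted_contact_area(dist, pixel_area_mm2):
    """Smooth contact-area proxy of Eq. (3). Weights: d = 0 mm -> 1.0,
    d = 0.01 mm -> ~0.56, d = 0.1 mm -> ~0.11, so near-contact pixels dominate."""
    weights = 1.0 / (1.0 + 80.0 * dist)
    return pixel_area_mm2 * weights.sum()
```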

4.4 Visual appearance
To preserve a texture's visual appearance during optimization, we use a custom function based on the visual similarity of the original height field and the optimized one. Ideally, to measure visual similarity, we would consider all possible views of a pair of textures under different lighting conditions, apply a visual difference metric between each pair, and compute an aggregate metric. We follow these steps, but use a restricted set of lighting conditions and the simplest visual metric to compare the images. In Section 5, we validate the setup we use by comparing it with a more expensive multiview optimization.

We found that some features of the images used to evaluate visual similarity are critical to ensuring realistic results. Specifically, we have observed that shadows, ambient occlusion, and gloss affect visual texture perception in a critical way (Figure 10), as a texture comprises many small elements that cast shadows over the surface. For this reason, we must opt for a rendering pipeline supporting these features to generate views of the texture, rather than, e.g., approximating the texture image with the dot products of the normal with the light direction.

As discussed in Section 2, a variety of measures of visual similarity exist and are widely used. Most could be used in our context in a way similar to the function V above used for roughness; e.g., [Zhang et al. 2018] describes a perceptual measure of visual similarity represented with a neural network that can easily be applied in our context. However, we found that in the optimization context these measures tend to be too "permissive": while these metrics are good for measuring distance between real images, synthetic images can be far from a given image perceptually but close in the sense of these metrics. For this reason, we opt for a relatively conservative L2 norm of the difference between images.

Fig. 10. A texture heightmap rendered with (center) or without (right) shadowing and ambient occlusion. Shadowing in small regions of lowered height is critical to a texture's visual appearance.


Fig. 11. Plot showing render DSSIM and L2 difference errors for a set of textures during optimization steps.

Figure 11 shows a scatter plot exhibiting that the L2 difference correlates with DSSIM.
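The comparison behind Figure 11 can be reproduced schematically with off-the-shelf metrics; using scikit-image's SSIM is an assumption on our part, with DSSIM taken as (1 − SSIM)/2.

```python
import numpy as np
from skimage.metrics import structural_similarity

def render_errors(ref, img):
    """Per-image-pair errors for renderings in [0, 1]: mean squared L2
    difference and DSSIM = (1 - SSIM) / 2, the two axes of Figure 11."""
    l2 = float(np.mean((ref - img) ** 2))
    ssim = structural_similarity(ref, img, data_range=1.0)
    return l2, (1.0 - ssim) / 2.0
```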

Rendering. Heightmaps were rendered in a gray material with low specularity, similar to matte plastic, using a Phong shader. Objects were rendered under three different lighting conditions, with a single constant-direction parallel-ray light source at an angle of 35° from the x-y plane, rotated about the z-axis by 10°, 130°, or 250°. Images were rendered at 128×128 px using Blender.

While differentiable renderers have recently appeared [Li et al. 2018], given the highly restricted nature of the renderings that we need to compute (square texture samples), we opted for an approach similar to the one we use for the stress maps in the roughness measure. As an additional benefit, this approach also provides a gradient of the rendered image with respect to the height field.

For each lighting condition, we trained the generative adversarial network of [Isola et al. 2017] on a set of 4764 images, with a validation set of size 1059. The network was trained for 200 iterations. Results showed high accuracy, as seen in Figures 12 and 13. All three lighting conditions had similarly high accuracy (with mean pixel errors of 2.2%, 2.3%, and 2.4%).


Fig. 12. Two examples from the test set for visual rendering. The learned function for the rendering of heightmaps was learned with high accuracy: in most cases generated and real renderings are visually indistinguishable.

Fig. 13. Left: L1 loss convergence of the generator during training of the GAN on texture rendering for one lighting condition. Right: Histogram of the percent differences of all rendered pixel values across 300 real and generated texture pairs in the test set. Real pixel values are approximated very closely by the network, with most pixels changing by less than 5%. The mean difference is 2.3%.

Fig. 14. Renderings of two textures optimized for different contact areas.

The neural network offers a significant speedup to rendering: the neural network render computation takes only 0.05 seconds once the network is loaded, while traditional rendering takes approximately 15 seconds with ray-traced shadows using Adaptive QMC with 20 samples for the light source, plus ambient lighting and occlusion.

5 RESULTS

5.1 Optimization results
Figure 15 shows the results of altering the roughness of a selection of textures using our optimization for a desired tactile roughness while maintaining visual appearance. Textures are rendered here using a different lighting setup than the ones used for learning. The textures shown represent 10 × 10 mm² in size.

Choosing a ground truth to compare to in our experiments is somewhat difficult, as we are not aware of any previous work on optimizing tactile properties for complex textures. We have chosen linear scaling as one obvious way to change geometry to increase texture roughness while maintaining similarity to the original texture; this method was used in [Tymms et al. 2018].

On the right-hand side of Figure 15, we show the results of using linear scaling of textures to achieve the desired roughness. Textures are first scaled in height, up to a limit of 3 mm; then, if necessary, they are scaled in the x-y direction. The optimization-generated textures are nearly indistinguishable from the original textures, while the textures modified with linear scaling are almost always noticeably different, except in cases where the desired roughness is very close to the original roughness. Making the texture flatter or scaling it upwards results in obvious differences. Additionally, for some textures, a sufficient change in roughness is not achievable through linear scaling alone.
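For reference, the baseline itself is simple; here is a sketch under the stated 3 mm cap (the search for the scale factors that actually hit a target roughness, which is driven by the roughness model, is omitted).

```python
import numpy as np
from scipy.ndimage import zoom

def linear_scale_baseline(height, z_scale, xy_scale=1.0, h_max=3.0):
    """Scale the heightmap vertically, capped at the 3 mm fabrication limit;
    if the cap is reached, rescale in the x-y directions as well."""
    h = np.clip(height * z_scale, 0.0, h_max)
    if xy_scale != 1.0:
        h = zoom(h, xy_scale, order=1)   # bilinear resampling in x-y
    return h
```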

5.1.1 Errors.

Roughness. To ensure that the learned functions were robust to the types of textures generated by optimization, the errors in computed roughness were also computed for a set of 150 optimized textures, with three different target roughness values. The average error between the simulated and learning-computed roughness for the optimized textures was 8.4%, with a median of 6.4%, compared to an average of 8.0% and a median of 5.3% for the overall test set.

Contact area. The same test was performed for a set of 100 textures optimized to have significantly different target contact areas. Here the average error in contact area between the simulation and the learned data was 9%, with a median of 4.1%. The average error for the non-optimized input set was 7.9%, with a median of 2.4%.

5.2 Evaluation and comparisons
The relationship between a surface's geometry and its tactile properties is intricate, as it depends on the difficult-to-predict way the elastic skin contacts the texture geometry. The tactile roughness depends on the uneven distribution of pressure resulting from that contact area. Optimization for these tactile properties while maintaining a similar appearance results in subtle changes to the texture geometry, as shown in Figure 16. It typically is not as simple as, for example, using height modification or frequency filtering: the target may be impossible to achieve, and the visual appearance may not be preserved well, as discussed below.

5.2.1 Comparison with other methods.

Height modification. Linear scaling is a simple method of altering a texture's roughness or contact area. If a texture is scaled up vertically, the contact area will decrease and the roughness will increase. The relative geometry is preserved, which suggests the appearance is also preserved to an extent. However, as seen in Figures 17 and 15, the appearance often cannot be preserved.


Fig. 15. Seven example textures optimized for a desired roughness. The leftmost column shows the original, target visual texture; the next three columns show the results when the roughness is achieved through our optimization process; the final three columns show the results when the same roughness is achieved through linear scaling in the z and/or xy directions. The optimization process achieves the desired roughness with nearly imperceptible changes to the visual appearance.


Fig. 16. Example of a texture cross-section for textures optimized for roughness. Changes to peaks and troughs are not easily predictable.

Fig. 17. Comparison of textures optimized or vertically scaled to alter contact area. The optimization results in smaller changes to the geometry and better preservation of visual appearance.

Fig. 18. Comparison of modifying a texture's roughness by modifying the dominant frequency of the texture, versus using the optimization process. Modifying the frequency adds large-scale noise to the geometry, which is clearly visible on the texture.

In contrast, our contact and roughness optimizations alter the geometry in precise and small ways to change the contact while preserving appearance.


Fig. 19. Top: We tested the optimization with the addition of more points of view, 45° from vertical on the yz and xz planes, along with the original single top point of view. Bottom: The table shows the percent pixel difference between the optimization height map results, for the original top point of view, one additional point of view, or all five points of view. Only small changes occur when one or more additional points of view are added.

Frequency modification. Another intuitive method of altering a texture's roughness is bandpass filtering: altering the texture in the frequency domain to reduce or amplify certain frequencies. Literature and psychophysics studies suggest that roughness perception is highest when features are spaced at a wavelength of 2-3 mm [Hollins and Bensmaïa 2007].

However, we have observed that modifying a texture to alter the frequencies in that range does not alter the roughness in a reliable manner for all textures. For example, if the frequency content is increased but the amplified areas are not contacted by the skin, the roughness will not be affected. More importantly, modifying the frequencies does not guarantee preservation of visual appearance. Figure 18 shows an example of modifying a texture's roughness by 4 just-noticeable-difference (JND) thresholds by increasing the geometry's frequencies in the 2.5 mm wavelength range. The large-scale noise added to the geometry to achieve the target roughness is visible in the texture. Our optimization produces smaller changes in the geometry that are not easily apparent.
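A sketch of this baseline, amplifying the roughness-relevant wavelength band with a hard FFT mask; the band edges, gain, and pixels-per-mm argument are illustrative choices, not the exact filter used for Figure 18.

```python
import numpy as np

def amplify_band(height, px_per_mm, lo_mm=2.0, hi_mm=3.0, gain=2.0):
    """Amplify spatial wavelengths in the lo_mm..hi_mm band of a heightmap,
    the 2-3 mm range most relevant to perceived roughness."""
    F = np.fft.fft2(height)
    fy = np.fft.fftfreq(height.shape[0], d=1.0 / px_per_mm)   # cycles per mm
    fx = np.fft.fftfreq(height.shape[1], d=1.0 / px_per_mm)
    fxx, fyy = np.meshgrid(fx, fy)
    radial = np.hypot(fxx, fyy)                               # radial frequency
    band = (radial >= 1.0 / hi_mm) & (radial <= 1.0 / lo_mm)
    F[band] *= gain
    return np.real(np.fft.ifft2(F))
```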

Other filtering methods. Contact area and tactile roughness depend on the way the elastic skin conforms around the geometry, which changes in nontrivial ways when the geometry is, e.g., smoothed using a filter. For example, smoothing a sharp peak results in increased contact area and decreased height, which decreases roughness; but smoothing a round peak results in decreased contact area and therefore may increase perceived roughness.

5.2.2 Alternate points of view. Our visual optimization used a single, top point of view and multiple lighting conditions. To determine the value of utilizing additional view directions, we ran the optimization with four additional points of view, 45° from the vertical direction at four angles (Figure 19, top), and with the same three lighting conditions. As with the top view, additional neural networks (one per view) were successfully trained to produce the rendered image for the specified angle and light source. Rendered images were rescaled to 128×128 pixels. The roughness optimization was run with visual weight assigned to the new points of view: a single additional point of view was weighted 40%; when using all five points of view, 40% was given to the four new points of view. The remainder of the visual weight was given to the top view direction. We chose a higher weight for the top view to reflect the higher importance of direct viewing in surface perception. As the sensitivity of the results to the addition of new points of view was shown to be low, we did not explore other options for weight allocation further.

We evaluated the results of the new optimizations against each other and against the results from the previous optimization, using pairwise comparisons. As shown in the table in Figure 19, the optimized heightmap did not change significantly when new points of view were incorporated: the image pixel difference between the results differed by less than 2.1%. For this reason, we determined that using a single top viewpoint in the visual difference functional is an adequate choice; this agrees with the intuition that views of a texture from a broad range of directions are highly correlated.

5.3 Visual Experiments
In a set of visual psychophysics user studies, we tested the accuracy of our visual optimization by comparing the source texture appearance to the optimized texture appearance and a simple baseline method. For the baseline, we used a version of the texture scaled linearly in the direction perpendicular to the surface to achieve the same roughness. We also tested the validity of our single-view formulation by comparing it with a more expensive multiple-view formulation.

5.3.1 Stimuli. Six source textures were tested (shown in Figure 20). The source textures comprised different types of natural and manufactured textures and had different base tactile roughness values.

For each source texture, four target roughness values were selected, and textures were optimized to achieve those four roughness values. Additionally, successive linear scaling was used to create alternate textures with the same four target roughnesses.

For each source texture, a set of eleven 25 mm square textured stimulus plates were 3D-printed using a B9Creator DLP stereolithography printer at 50 µm resolution. Three of the textures were derived from different patches of the source texture; four were the optimized textures; and four were linearly scaled textures.

As we used B9 Black resin to yield the most accurate geometric results, to improve visibility, textures were spray-painted with matte gray primer (Rust-Oleum Flat Gray Primer) and a coat of clear matte varnish.

5.3.2 Experiments. In each trial, two textures were placed in a case that slides beneath a circular window, through which one of the textures could be seen. Observers viewed the textures from overhead at a distance of 40 cm, through a mirror placed at an angle of 45°, as seen in Figure 21.

Fig. 20. Heightmaps of the six textures used in visual experiments.

Fig. 21. The experimental setup for visual experiments, which allows the subject to comfortably view the three trial textures from an overhead view.

During each trial, observers were presented with two different textured surfaces sequentially. One of the pair was derived from the original source texture, and the other could be another patch of the source texture; a version scaled to a different roughness using linear scaling; or a version scaled to a different roughness using our optimization process. Observers were tasked with choosing whether the two textures appeared the same (i.e., derived from the same texture source) or different. The locations of the pair of textures were switched with equal probability. Observers were given 4 seconds to view the textures, two times each.

Trials were presented in a pseudorandom order, with the constraint that trials using the same source texture were separated by at least two trials.

Six subjects took part in the experiments and performed four repetitions per texture pair. Experiments took place in an office setting with ambient fluorescent lighting.

Experiment results. Our experiment results showed that the optimization process performed substantially better than linear scaling. Figure 22 shows the proportions for the 48 test textures. The dotted black line on each graph shows the threshold at which subjects judged the reference textures from the same patch as similar to each other. Of the 24 optimized textures tested, 20 were judged the same as the source at least 50% of the time. In contrast, only 4 of the linearly scaled textures were judged the same as the source at least 50% of the time. In fact, half of the linearly scaled textures were judged different from the reference textures over 90% of the time throughout all trials.


Fig. 22. Top panel: results from the experiments for each of our six textures. Bottom panel: proportions for all textures accumulated by JND distance from the reference texture. The x-axis for each graph shows the distance in just-noticeable-differences in roughness values between the test texture and the reference, and the y-axis shows the proportion judged the same. The dotted black line shows the reference threshold at which the reference textures were judged the same as each other.

23 of the 24 optimized textures were judged more similar than their non-optimized versions. The remaining one was derived from T1, a texture with high sensitivity to small changes, as shown by the fact that only 50% of textures from the same source were judged the same.

The bottom panel of Figure 22 shows textures accumulated by JND threshold distance from the reference texture, according to the difference threshold of 19% found in [Tymms et al. 2018]. In all cases, the optimized textures match the references better than the linearly scaled textures. In general, linear scaling tends to perform more successfully for small decreases in roughness, but performs poorly for larger decreases or increases in roughness. Optimization creates textures that appear very similar for small differences in roughness; the visual difference is visible only when the target roughness is much larger.

5.3.3 Alternate points of view. To validate the visual similarity for different points of view and complex geometry, a subset of three textures was chosen for a less restricted version of the experiments. In these experiments, three texture heightmaps (T4, T5, T6) were used to fabricate a new set of textured objects whose curved geometry includes a local maximum and a saddle (Figure 23).

Fig. 23. A shape with curvature, including a local maximum and a saddle, was used for a less-restricted visual experiment. A rendering of the textured shape with the T5 input texture and a two-JND-rougher optimized output texture is shown here.

test stimuli included a source texture along with three optimizedtextures and three linearly-scaled textures of different roughnessvalues. The textures on the test stimuli were shifted by 50% so as tonot appear identical to the reference source texture. Protocols weresimilar to the previous experiment: the reference and test stimuluswere displayed sequentially for one second each, and the participantwas asked whether the two plates had the same or different textures.The participant sat around 30 cm from the textures on the table, andwas free to move their head and rotate the textures; in combinationwith the shape’s curvature, the viewer was able to see the texturegeometry from many directions.Eight people participated in the study, and each performed two

evaluations of each texture against the source. As shown in the results in Figure 24, participants judged the optimized textures the same as the reference a majority of the time, but the linearly-scaled textures were almost never judged the same. The linearly-scaled textures were judged the same as the reference at a lower rate than in the previous experiment as a result of the new viewing angles, suggesting that differences are more apparent when many view directions are allowed; in contrast, the optimized textures were judged the same at a rate similar to the previous experiments, showing that our optimization is robust to different viewing directions.


5.4 Tactile roughness experiments
Tactile roughness experiments were used to validate the tactile roughness optimization. Stimuli for this experiment were the same six sets of five 3D-printed texture plates used in the first visual experiments. In the tactile experiments, ten participants were asked to sort

groups of five texture plates by touch from smoothest to roughest. In each trial, the five plates were placed in a random order beneath a translucent panel that obscured the textures' fine-scale appearance. Participants used their dominant index finger to press each plate and determine a sorted order. They were free to feel the plates multiple times and to use as much time as needed.



Fig. 24. Texture similarity results from the experiments on a subset of three textured shapes with curved geometry. The x-axis shows the difference in roughness just-noticeable-difference intervals between the test texture and the reference, and the y-axis shows the proportion judged the same. The dotted line shows the reference threshold at which the reference textures were judged the same as each other.

Fig. 25. This map shows the average proportion of trials in which each texture (vertical) was sorted as tactually rougher than each other texture (horizontal). Almost all discrepancies were between textures designed to differ by one threshold, and the sorting accuracy for these pairs is close to the expected 84%.


5.4.1 Results. The heat map in Figure 25 shows the mean proportion with which each texture plate was judged rougher than each other plate from the same texture source. Textures are numbered from 1 to 5 according to the designed JND level from smoothest to roughest. Participants were able to reliably sort the plates, including pairs differing by only one threshold, a majority of the time. Participants sorted these most-similar pairs according to the designed ordering 85% of the time (across-subject standard deviation 4%), close to the expected 84% rate with which consecutive plates were designed to be discriminable. Only three of the 60 total comparisons resulted in an ordering discrepancy between a pair of textures differing by more than one threshold step.
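The proportions plotted in Figure 25 can be derived from the recorded sort orders roughly as follows (a hypothetical sketch; the data layout and function name are ours, introduced for illustration):

```python
import numpy as np
from itertools import combinations

def rougher_than_matrix(orderings, n_plates=5):
    """orderings: one list per trial, giving plate indices
    (0 = designed smoothest ... 4 = designed roughest) in the order
    the participant placed them, smoothest first. Returns the
    proportion of trials in which plate i (row) was sorted rougher
    than plate j (column)."""
    wins = np.zeros((n_plates, n_plates))
    for order in orderings:
        rank = {plate: pos for pos, plate in enumerate(order)}
        for i, j in combinations(range(n_plates), 2):
            if rank[i] > rank[j]:   # i was placed rougher than j
                wins[i, j] += 1
            else:
                wins[j, i] += 1
    return wins / len(orderings)
```

A perfect sorter would yield 1 everywhere below the diagonal and 0 above; the confusions reported above would show up as off-diagonal entries near 0.85 for one-threshold neighbors.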

5.5 Contact area experiments
Twelve textures were fabricated to experimentally verify the change in contact area. For each texture, three versions were fabricated: the original texture, a texture optimized to have 70% of the original contact area, and a texture optimized to 140% of the contact area.

Fig. 26. The results of one texture optimized for a lower (top) or higher (bottom) contact area. From left to right: texture rendering, texture fingerprint, thresholded finger contact, and simulated contact.

Fig. 27. Comparison of the experimental and simulated contact area of 36 textures.

These 36 textures were fabricated as 10 mm squares using a B9Creator V1.2 printer with a resolution of 50 µm in B9 black resin.

To compute the contact area, the fabricated texture surfaces were coated in ink using a compliant sponge and an ink pad. The thumb or second finger of each participant was covered with Tegaderm (3M), a thin layer of transparent plastic with a thickness of 0.1 mm. The Tegaderm was used to avoid discrepancies due to the fingerprint ridges and to allow easier cleaning of the finger surface between trials to avoid ink residue. The finger was pressed against the texture with a weight of 8.8 N placed on the finger to ensure uniform force. Then the finger was pressed to a sheet of paper to derive an inkprint of the contact surface. Nine participants provided texture fingerprints in this manner.

The prints were scanned at 600 dpi in 8-bit grayscale, and the contact areas were computed and averaged over all subjects. The pipeline is shown in Figure 26. All optimized contact surfaces fell within 20% of their target contact area, with an average difference of 9.2%. Additionally, the contact areas of the 36 printed textures were compared against the simulated contact areas. Figure 27 shows the result of this comparison: a close linear correlation with a slope of approximately 1.
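The image-processing step of this pipeline reduces to thresholding the scan and converting pixel counts to area; a minimal sketch follows (our reconstruction, not the published pipeline; the fixed ink threshold and library choice are assumptions):

```python
import numpy as np
from PIL import Image

DPI = 600                  # scan resolution stated above
MM_PER_PX = 25.4 / DPI     # 25.4 mm per inch

def contact_area_mm2(scan_path, ink_threshold=128):
    """Estimate contact area from an 8-bit grayscale inkprint scan:
    pixels darker than the threshold count as finger contact."""
    img = np.asarray(Image.open(scan_path).convert("L"))  # 0=black..255=white
    contact_px = np.count_nonzero(img < ink_threshold)
    return contact_px * MM_PER_PX ** 2
```

In practice the threshold would be calibrated against the scans themselves, for instance from the histogram of ink intensities.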



Fig. 28. Photograph of sets of bronze-cast textures used for tactile temperature experiments (T3 and T5). Textures are ordered from less to more contact area. Inconsistencies in appearance may be due to the manufacturing process and polishing.

5.5.1 Temperature Experiments. An additional set of experiments was used to determine whether the printed textures felt different from one another in tactile temperature.

Stimuli. The stimuli for this experiment were twelve textures: four base textures (T2, T3, T4, T5), each optimized with three different contact areas differing by 40%. Each texture surface was applied to the top of a flat plate 1.2 mm in height. To enable tactile discriminability at room temperature for the purposes of the experiment, we used metal rather than plastic, due to its higher thermal conductivity: texture models were 3D printed and cast in bronze. A photograph is shown in Figure 28.

Setup and protocols. In the experiments, textures were placed on a flat cast-iron plate over ice, which maintained a temperature at the top surface of approximately 16°C as measured by a laser thermometer. In each trial, the participant was presented with two textures

of the same class with different optimized contact areas (either 40% smaller or 40% greater, where 40% is approximately the JND threshold for thermal discrimination described by [Tiest and Kappers 2009]). A cover was placed over the experiment area to hide the textures from view. In the experiments, eight participants were asked to feel the two

textures using static pressure with the index finger and to answer which texture felt colder. Participants were allowed as much time as needed to feel the textures, and were given time between trials to ensure the finger itself was not too cold.

Results. Throughout the trials, participants responded that the texture with more contact area felt colder 83.1% of the time, which suggests that the threshold of discrimination is indeed approximately 40%, as found by previous research. For pairs that differed by two JNDs, the texture with more contact area was judged as colder 91% of the time.
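These rates line up with the common psychophysical convention, also behind the 84% criterion in Section 5.4.1, that one JND corresponds to one standard deviation of a cumulative-Gaussian psychometric function. A back-of-the-envelope check (ours, not an analysis from the paper):

```python
from statistics import NormalDist

phi = NormalDist().cdf   # standard normal CDF
print(f"{phi(1.0):.3f}")  # 0.841: predicted "colder" rate at one 40% JND
print(f"{phi(2.0):.3f}")  # 0.977: predicted rate at two JNDs
```

The observed 83.1% at one JND matches the prediction closely; the 91% at two JNDs falls somewhat short of the 97.7% prediction, as is common once response noise is taken into account.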

6 APPLICATIONS
Applying tactile textures to fabricated objects is useful for both aesthetic and practical purposes. We have formulated several examples and have fabricated a subset as textured 3D models (Figure 29).

Modeling. Often, one might prefer a particular visual texture for an object while preferring a distinct tactile feeling. For example, imitation plastics are often used to match a specific material's appearance, and our model could help match the material's desired feeling. Our model could enable the creation of multiple surfaces that look similar but feel different, either for aesthetic purposes or to serve as a tactile signifier of another characteristic. It could also be used to make surfaces that feel similar but look different, which could be combined in a visual pattern or logo, for example on a mat, that feels uniform when touched.

We manufactured two different animal models as examples. First, a starfish model was textured with a relatively smooth surface texture (roughness 0.05). The texture was altered to feel rougher (roughness 0.092) and was applied to produce another, rougher fabricated starfish with the same appearance (shown in Figure 1). We also fabricated a textured model of a tree frog with a keeled wood pattern. The initial texture had a roughness of 0.045, and was modified to produce a smoother texture of roughness 0.03, used to fabricate a smoother frog with the same appearance (Figure 29a).

Wearables. Tactile and visual aesthetics are common to clothing, jewelry, and other wearables, which often touch the skin. Tactile properties may also serve functional purposes. Some wearable devices, such as headphones with buttons, have areas that the user finds and uses by touch rather than sight; these areas could be hidden visually for aesthetic appearance or for more discreet use. Wearables could also use roughness actively to convey haptic signals that are unobtrusive to the user and invisible to others: a watchband with a high-resolution pin array could produce different tactile textures that could be felt by the user to convey different signals; similarly, altering the contact area could allow different rates of thermal transfer between a wearable and the user's skin.

As an example of a wearable with tactile aesthetics, we fabricated a bracelet band with links having the same texture appearance but alternating smoother and rougher tactile feelings (Figure 29b). Smoother links had a roughness of 0.06, and the rougher links had a roughness of 0.09.

Accessibility. Tactile items and textures are particularly useful for people with visual impairments. If a designer creates two objects that look different, our model could be used to tune the textures so that they also feel different, while preserving the designed visual appearance. Textured objects are commonly used in pieces for board games, puzzles, and household items, where colors or visual labels are often used to distinguish between otherwise similar objects or regions. A variation in tactile feeling can provide similar cues for a person unable to see the differences. As an example, we produced a model for a dimmer light switch slider. The texture looks the same throughout, but its roughness increases so that it corresponds to the light intensity as the slider is moved (Figure 29c); one possible spacing of the roughness targets is sketched below.
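One plausible way to space the slider's roughness targets is uniformly in JND steps, so that equal slider travel produces a perceptually equal change (a hypothetical sketch; the endpoint roughness values are illustrative, not taken from the fabricated model):

```python
WEBER = 0.19  # one JND = a 19% multiplicative roughness step

def slider_roughness(p, r_min=0.03, n_jnd=4):
    """Target roughness at slider position p in [0, 1], spaced
    uniformly in JND steps along the slider's travel."""
    return r_min * (1.0 + WEBER) ** (p * n_jnd)

# e.g. roughness 0.03 at p = 0 rising to about 0.06 at p = 1
```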

7 CONCLUSION
We have presented an optimization procedure to preserve texture appearance while altering tactile roughness or temperature.



Fig. 29. From left to right: textured models; models colored by roughness; photographs of 3D-printed models. a) Two frogs textured with a tactile wood texture optimized for different roughnesses (modified from [YahooJAPAN 2013]). b) A bracelet with textured links that have the same visual texture but alternating roughnesses. c) A procedural texture slider for a light switch, where the tactile roughness corresponds to light intensity.

We used neural networks to enable computation of tactile roughness, contact area, and visual appearance at speeds several orders of magnitude faster than the standard methods, providing differentiable functions usable in optimization for a target appearance and feeling. We used psychophysical experiments to demonstrate that our method provides a significant improvement over simple linear scaling in controlling tactile roughness, and we provided several examples of how our procedure can be used to produce interesting and useful textured objects.

7.1 Limitations and Future Work
While the tactile model has been tested on objects with moderate curvature [Tymms et al. 2018], it may not be usable for high-curvature 3D objects. Furthermore, it was tuned for hard materials, and it has a minimum feature resolution; using it for soft or fine materials may require changes. Our static touch simulation is adequate for dynamic touch up to a certain resolution, as static touch receptors dominate perception for features over 100 µm [Hollins and Risner 2000]; nevertheless, it may be improved by dynamic simulation. The model is also based on a simulation of a simplified model of human skin, which, while found to be robust, may be improved by a more complex model. Our procedure could be used for a different material or a more physically complex skin structure by retraining the neural network on a new set of simulation field outputs.

Similarly, our procedure describing visual appearance was tuned for the shading of our material (diffuse plastic resin) and may not be directly usable for surfaces that are much more glossy, translucent, or non-smooth. In these cases, our method could be modified to learn the rendered appearance for a particular desired material given a suitable training set. However, as seen in Figure 28, our current visual model can still preserve visual appearance fairly well even for non-matte materials. Our model presents a tradeoff between preserving exact visual

appearance and achieving an exact tactile roughness. Very large changes in tactile roughness may not be achievable while fully preserving visual appearance. We found that similarly-appearing textures can typically be produced within a range of 3-4 JND thresholds in each direction. Our metric for visual appearance similarity is likely a lower bound for perceptual similarity, so a fast perceptually-based method for texture similarity could be used instead in the optimization and could improve texture generation. Our model was evaluated to target either a tactile roughness or a contact area; optimizing for both or more quantities is future work. Other optimization parameters could also be used: for example, we could enforce printability constraints depending on the printer used to manufacture a model.

Our visual model uses shading from an overhead view with ambient lighting. At severe angles or under severe lighting conditions, the differences may be more apparent. Our model could be tuned to a particular lighting condition or viewpoint if it were used in the training set, but any optimized texture likely will not appear exactly the same under all lighting and viewing conditions, as some geometric changes will always be present near the surface. However, as we found in both optimization tests and human user studies with curved objects, our optimization using a single viewpoint is robust, and the results look similar to the target even when viewed at other angles.

Our model limits texture height to 3 mm as a manufacturing constraint. We have observed that, due to the limited elasticity of the finger, textures deeper than this are not tactually different from versions truncated to a 3 mm depth. However, we could easily optimize a taller texture by optimizing its top 3 mm and preserving the remainder, as sketched below.
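A minimal sketch of this truncation, assuming the texture heightmap is stored as a NumPy array with values in millimeters (the function name is ours, for illustration):

```python
import numpy as np

def split_heightmap(h, max_depth=3.0):
    """Split a heightmap (mm) into a base level and the top
    `max_depth` mm. Only the top slab would be optimized; adding
    the base level back afterwards restores the full-height texture."""
    base_level = h.max() - max_depth
    top = np.clip(h, base_level, None) - base_level
    return base_level, top
```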

To aid in the fabrication process, our method could be integrated into a 3D modeling tool to provide precise control of tactile feeling



when modeling a textured fabricable object. Using our model together with existing models for compliance based on compliant microstructures, it would be possible to control three of the major dimensions of touch, compliance, temperature, and roughness, and to study the unknown interactions between these properties. Creating objects with tactile properties decoupled from appearance may also be of interest to the fields of neuroscience and neurophysiology in future studies of psychophysics and multi-modal perception.

REFERENCES
Connelly Barnes and Fang-Lue Zhang. 2016. A survey of the state-of-the-art in patch-based synthesis. Computational Visual Media (2016), 1–18. https://doi.org/10.1007/s41095-016-0064-2
Phil Brodatz. 1966. Textures: A Photographic Album for Artists and Designers. Dover.
Desai Chen, David I. W. Levin, Piotr Didyk, Pitchaya Sitthi-Amorn, and Wojciech Matusik. 2013. Spec2Fab: a reducer-tuner model for translating specifications to 3D prints. ACM Transactions on Graphics (TOG) 32, 4 (2013), 135.
M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. 2014. Describing Textures in the Wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
Charles E. Connor, Steven S. Hsiao, John R. Phillips, and Kenneth O. Johnson. 1990. Tactile roughness: neural codes that account for psychophysical magnitude estimates. The Journal of Neuroscience 10, 12 (1990), 3823–3836.
Donald Degraen, André Zenner, and Antonio Krüger. 2019. Enhancing Texture Perception in Virtual Reality Using 3D-Printed Hair Structures. (2019).
Alexei A. Efros and Thomas K. Leung. 1999. Texture synthesis by non-parametric sampling. In ICCV. IEEE, 1033.
Oskar Elek, Denis Sumin, Ran Zhang, Tim Weyrich, Karol Myszkowski, Bernd Bickel, Alexander Wilkie, and Jaroslav Křivánek. 2017. Scattering-aware texture reproduction for 3D printing. ACM Transactions on Graphics (TOG) 36, 6 (2017), 241.
Galal Elkharraz, Stefan Thumfart, Diyar Akay, Christian Eitzinger, and Benjamin Henson. 2014. Making tactile textures with predefined affective properties. IEEE Transactions on Affective Computing 5, 1 (2014), 57–70.
Alexander I. J. Forrester and Andy J. Keane. 2009. Recent advances in surrogate-based optimization. Progress in Aerospace Sciences 45, 1-3 (2009), 50–79.
Leon Gatys, Alexander S. Ecker, and Matthias Bethge. 2015. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems. 262–270.
Bjoern Haefner, Yvain Quéau, Thomas Möllenhoff, and Daniel Cremers. 2018. Fight ill-posedness with ill-posedness: Single-shot variational depth super-resolution from shading. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 164–174.
Jan S. Hesthaven and Stefano Ubbiali. 2018. Non-intrusive reduced order modeling of nonlinear problems using neural networks. J. Comput. Phys. 363 (2018), 55–78.
Mark Hollins and Sliman J. Bensmaïa. 2007. The coding of roughness. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale 61, 3 (2007), 184.
Mark Hollins and S. Ryan Risner. 2000. Evidence for the duplex theory of tactile texture perception. Perception & Psychophysics 62, 4 (2000), 695–705.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-image translation with conditional adversarial networks. arXiv preprint (2017).
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
Tzu-Mao Li, Miika Aittala, Frédo Durand, and Jaakko Lehtinen. 2018. Differentiable Monte Carlo Ray Tracing through Edge Sampling. ACM Trans. Graph. (Proc. SIGGRAPH Asia) 37, 6 (2018), 222:1–222:11.
Louise R. Manfredi, Hannes P. Saal, Kyler J. Brown, Mark C. Zielinski, John F. Dammann, Vicky S. Polashock, and Sliman J. Bensmaia. 2014. Natural scenes in tactile texture. Journal of Neurophysiology 111, 9 (2014), 1792–1802.
Rafat Mantiuk, Kil Joong Kim, Allan G. Rempel, and Wolfgang Heidrich. 2011. HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions. In ACM Transactions on Graphics (TOG), Vol. 30. ACM, 40.
Rodrigo Martín, Min Xue, Reinhard Klein, Matthias B. Hullin, and Michael Weinmann. 2019. Using patch-based image synthesis to measure perceptual texture similarity. Computers & Graphics 81 (2019), 104–116.
MIT Media Lab. 1995. Vision texture, VisTex database. http://vismod.media.mit.edu/vismod/imagery/VisionTexture/
Michal Piovarči, David I. W. Levin, Jason Rebello, Desai Chen, Roman Ďurikovič, Hanspeter Pfister, Wojciech Matusik, and Piotr Didyk. 2016. An Interaction-Aware, Perceptual Model For Non-Linear Elastic Objects. ACM Transactions on Graphics (Proc. SIGGRAPH) 35, 4 (2016).
Javier Portilla and Eero P. Simoncelli. 2000. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision 40, 1 (2000), 49–70.
M. Raissi, P. Perdikaris, and G. E. Karniadakis. 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378 (2019), 686–707.
Olivier Rouiller, Bernd Bickel, Jan Kautz, Wojciech Matusik, and Marc Alexa. 2013. 3D-printing spatially varying BRDFs. IEEE Computer Graphics and Applications 33, 6 (2013), 48–57.
Christian Schüller, Daniele Panozzo, and Olga Sorkine-Hornung. 2014. Appearance-mimicking surfaces. ACM Transactions on Graphics (TOG) 33, 6 (2014), 216.
Liang Shi, Vahid Babaei, Changil Kim, Michael Foshey, Yuanming Hu, Pitchaya Sitthi-Amorn, Szymon Rusinkiewicz, and Wojciech Matusik. 2018. Deep multispectral painting reproduction via multi-layer, custom-ink printing. In SIGGRAPH Asia 2018 Technical Papers. ACM, 271.
Wouter M. Bergmann Tiest. 2010. Tactual perception of material properties. Vision Research 50, 24 (2010), 2775–2782.
Wouter M. Bergmann Tiest and Astrid M. L. Kappers. 2009. Tactile perception of thermal diffusivity. Attention, Perception, & Psychophysics 71, 3 (2009), 481–489.
Cesar Torres, Tim Campbell, Neil Kumar, and Eric Paulos. 2015. HapticPrint: Designing Feel Aesthetics for Digital Fabrication. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology. ACM, 583–591.
Chelsea Tymms, Esther P. Gardner, and Denis Zorin. 2018. A Quantitative Perceptual Model for Tactile Roughness. ACM Transactions on Graphics (TOG) 37, 5 (2018), 168.
Chelsea Tymms, Denis Zorin, and Esther P. Gardner. 2017. Tactile perception of the roughness of 3D-printed textures. Journal of Neurophysiology 119, 3 (2017), 862–876.
Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, and Victor S. Lempitsky. 2016. Texture Networks: Feed-forward Synthesis of Textures and Stylized Images. In ICML. 1349–1357.
Thomas S. A. Wallis, Christina M. Funke, Alexander S. Ecker, Leon A. Gatys, Felix A. Wichmann, and Matthias Bethge. 2017. A parametric texture model based on deep convolutional features closely matches texture appearance for humans. Journal of Vision 17, 12 (2017), 5–5.
Kai Wang, Guillaume Lavoué, Florence Denis, and Atilla Baskurt. 2008. A comprehensive survey on three-dimensional mesh watermarking. IEEE Transactions on Multimedia 10, 8 (2008), 1513–1527.
Xin Wang, Man Jiang, Zuowan Zhou, Jihua Gou, and David Hui. 2017. 3D printing of polymer matrix composites: A review and prospective. Composites Part B: Engineering 110 (2017), 442–458.
Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. 2004. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13, 4 (2004), 600–612.
Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. 2003. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, Vol. 2. IEEE, 1398–1402.
Li-Yi Wei and Marc Levoy. 2000. Fast texture synthesis using tree-structured vector quantization. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., 479–488.
YahooJAPAN. 2013. Frog. https://www.thingiverse.com/thing:182144 Modified.
Ying Yang, Ruggero Pintus, Holly Rushmeier, and Ioannis Ivrissimtzis. 2017. A 3D steganalytic algorithm and steganalysis-resistant watermarking. IEEE Transactions on Visualization and Computer Graphics 23, 2 (2017), 1002–1013.
Ning Yu, Connelly Barnes, Eli Shechtman, Sohrab Amirghodsi, and Michal Lukac. 2019. Texture Mixer: A Network for Controllable Synthesis and Interpolation of Texture. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Lin Zhang, Lei Zhang, Xuanqin Mou, David Zhang, et al. 2011. FSIM: a feature similarity index for image quality assessment. IEEE Transactions on Image Processing 20, 8 (2011), 2378–2386.
Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Xiaoting Zhang, Guoxin Fang, Chengkai Dai, Jouke Verlinden, Jun Wu, Emily Whiting, and Charlie C. L. Wang. 2017. Thermal-comfort design of personalized casts. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology. ACM, 243–254.
Yang Zhou, Zhen Zhu, Xiang Bai, Dani Lischinski, Daniel Cohen-Or, and Hui Huang. 2018. Non-stationary Texture Synthesis by Adversarial Expansion. ACM Transactions on Graphics (Proc. SIGGRAPH) 37, 4 (2018), 49:1–49:13.


