A Comparison of Tactile Sensors for In-Hand Object Location

This work presents an extensive analysis of the usefulness of tactile sensors for in-hand object localization. Our analysis builds on a previous work in which we proposed a method for the evaluation of tactile data using two algorithms: a Particle Filter algorithm and an Iterative Closest Point algorithm. In particular, we present a comparison of six different sensors, including two pairs of sensors based on similar technology, showing how the design and distribution of tactile sensors can affect performance. Also, together with previous results where we demonstrated the importance of the synergy between tactile data and hand geometry, we corroborate that a simple fingertip sensor can achieve performance similar to that of more complex and expensive tactile sensors.


Introduction
Touch is an important human sense and a pathway for our understanding of the environment. Indeed, the human hand is an excellent tool as much for manipulation [1] as for sensing. In the same way, robots need hands with tactile sensors for object manipulation and recognition [2]. Over the past two decades, the improvement of tactile sensors for robots has resulted in many touch sensors exploring different modes of transduction. In fact, the production of tactile sensors with new designs continues today [3][4][5].
Humans have the ability to recognize common objects by touch alone [6]. For example, in the presence of visual uncertainty due to lighting, shadows, and occlusions, tactile information is absolutely necessary to estimate object localization. In particular, shape and material properties (e.g., texture, temperature, and stiffness) are critical. Like humans, robots must sometimes localize objects using tactile sensing alone. In fact, both human and robot tactile sensors look for the same important information and also help during grasping tasks, giving information about contact forces or detecting slipping [7].
There are many works that deal with object positioning using tactile data; most of them are based on probabilistic state estimation [8,9]. The great advantage of these probabilistic techniques is that they allow robots to work with the high uncertainty of tactile data. In particular, Bayesian techniques have shown such good performance in mobile robot localization [10] that many works have adapted them to the haptic localization problem. For example, Chhatpar and Branicky [11] use particle filtering to deal with the uncertainty in location during robotic assemblies. Schaeffer and Okamura [12] explore Simultaneous Localization and Mapping (SLAM) methods in order to improve recognition during tactile surface exploration. Gadeyne and Bruyninckx [13] implement a Monte Carlo algorithm for object localization using the information of a force sensor installed at the end-effector of a manipulator robot. They conclude that, due to computational and real-time requirements, it was mandatory to implement improved optimization methods over the basic Bayesian algorithm. In this sense, Petrovskaya and Khatib have proposed a Monte Carlo optimization, called Scaling Series, that performs a series of refinements using annealing [14]. More recently, in [15,16], Lepora proposes active Bayesian perception for simultaneous object localization and identification.
Not many works deal with measuring the performance of tactile sensors for object positioning. In our previous work [17] we presented two methods to evaluate the usefulness of tactile data. The first method, based on particle filtering, was intended to evaluate the quality of the tactile data. The second method, based on ICP, evaluated the synergy of the tactile data with the geometry of robot hands. This paper extends that work in several ways. First, we present the information of five real sensors of different complexity and compare their performance with an ideal sensor. Second, besides comparing sensors with different technologies, we also compare a set of sensors based on the same fundamentals, with the goal of clearly understanding the importance of the sensor design. Finally, in this work we focus on tactile data alone, to obtain a comparison decoupled from the robot hand geometry.
The paper is organized as follows. First, we discuss the results obtained in [17] that justify this work; we also explain the MCL algorithm. Section 3 presents the sensors used for testing. Section 4 shows the localization results obtained with different objects. Section 5 presents an analysis of the results together with the sensor performance comparison.

Evaluation Method
As seen in the previous section, many authors propose techniques to optimize the localization and identification of objects by touch sensors. However, our intention is not to optimize the localization process but to obtain a measurement of how good a sensor is in comparison with others. Nikandrova and Kyrki presented in [18] a similar approach, in which the capability of a tactile sensor was evaluated by means of the error between predicted and actual contact readings during a manipulation task. In our previous work [17], we proposed two evaluation methods: the first was based on the Iterative Closest Point (ICP) algorithm and included the synergy between the tactile sensor data, the robot hand, and the object geometry. The second was based on the Monte Carlo Localization (MCL) algorithm, which only uses contact points for object localization (i.e., it relies exclusively on the data from tactile sensors).
The comparison of four different sensors using the ICP method, shown in Figure 1, demonstrated that the quantity and quality of the tactile data are not so important when other data, such as the object and hand geometry and the robot configuration, are available. Basically, the number of possible locations of the object in the hand is reduced drastically if we know the 3D models of both the object and the hand (i.e., only a few object positions will be feasible in a closed hand). Thus, with little tactile information, we can disambiguate and find the right object location. In other words, an increment in tactile sensor complexity does not improve the object pose precision.
Therefore, the comparison with ICP was very conclusive about the usefulness of simple sensors in localization problems (i.e., quite similar object localization results can be obtained with simple or complex sensors). This justifies the use of simple tactile sensors for object localization. However, this method is not good enough if we want to compare two sensors based on similar technology (e.g., two simple sensors), as such sensors, as we have already said, achieve similar results. Also, the object and its model are not always known a priori. For example, in recognition problems, the object can be any of a long list of candidates, which makes the use of ICP algorithms unfeasible. That is why in this paper we have chosen the MCL evaluation algorithm to perform a more extensive analysis of different tactile sensors.

MCL Based Evaluation Algorithm.
In mobile robots, Monte Carlo localization (MCL) is a recursive Bayes filter that estimates the posterior distribution of robot poses conditioned on sensor data [10]. In this work, however, what we estimate is the object pose through tactile data. Note that our intention is not to improve the MCL algorithm but to obtain a measurement of what can be done with any sensor. In other words, we adapt the classic MCL algorithm to work with tactile data for object positioning.
In in-hand object localization, the dynamical system is the robot hand grasping an object, the state is the 6D pose of the object inside the hand, and the measurements include tactile data together with robot and hand kinematics (joint encoder readings). To follow the analogy with mobile robot localization, we consider the 3D model of the object, formed by vertices, as the map. The steps of our algorithm are as follows. First, the 3D model of the sensed object is imported; Figure 2(a) shows a simplified 2D object model, whose pose is represented by a vector. Second, data is obtained from the tactile sensors (blue vertices in Figure 2(b)). Third, a set of particles is generated; each particle corresponds to a possible location of the object inside the hand (yellow vectors in Figure 2(b)). To reduce the processing time of the algorithm, the particles are generated around an initial estimation, following a normal distribution with a given standard deviation (std). The weight of each particle is then calculated by measuring the distance from the tactile data to the closest vertices in the object model. Figure 2(c) shows the two particles (red and green) with the best evaluation; the real pose of the object is represented by the black particle. This process is repeated using additional tactile data provided by a feeling-like action in which the hand moves and closes the fingers in different positions around the object. The movement of the hand with respect to the object changes the probability of the particles, in a similar way to classic MCL in mobile robots. The resampling step of MCL is also performed, removing low-probability particles and generating more particles where the object pose is more probable. For more specific conclusions about the performance of the different kinds of tactile sensors, it would be interesting to extend this MCL study to a larger number of sensors.
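The weighting and resampling steps described above can be sketched as follows. This is a minimal 2D illustration, not the paper's implementation: the pose parametrization, the Gaussian sensor model, and the value of sigma are our assumptions.

```python
import math
import random

SIGMA = 0.01  # assumed sensor-noise scale (same units as the model)

def weight(particle_pose, contacts, model_vertices):
    """Score a candidate 2D object pose (x, y, theta) by how close the
    measured contact points fall to the model's nearest vertices."""
    x, y, th = particle_pose
    total = 0.0
    for cx, cy in contacts:
        # express the contact point in the candidate object frame
        lx = math.cos(-th) * (cx - x) - math.sin(-th) * (cy - y)
        ly = math.sin(-th) * (cx - x) + math.cos(-th) * (cy - y)
        d = min(math.hypot(lx - vx, ly - vy) for vx, vy in model_vertices)
        total += d * d
    return math.exp(-total / (2 * SIGMA ** 2))  # Gaussian likelihood

def resample(particles, weights):
    """Draw a new particle set proportionally to the weights, so that
    low-probability particles are removed and likely poses multiply."""
    return random.choices(particles, weights=weights, k=len(particles))
```

A particle at the true pose receives a weight near 1, while a displaced particle is driven toward 0 and is eliminated by the resampling step.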

Sensor Description
We have compared six different tactile sensors with our MCL algorithm. In the following we summarize their characteristics, starting with the most complex sensor and finishing with the simplest.

Ideal Sensor
An ideal tactile sensor is defined as one that returns as tactile data the exact contact location when an object collides with any part of the robot hand (see Figure 3). This sensor would act like human skin, and its implementation would therefore be extremely complex with current technology. This is why we have simulated it, using ODE's collision detection to detect contacts between the trimesh geometries of the robot hand and the object (see Figure 3(a)).
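The paper uses ODE's trimesh-trimesh collision detection for this simulation. Only as a rough sketch of the underlying idea (hypothetical hand-surface sample points and a sphere primitive instead of a trimesh), a contact test could look like:

```python
import math

def sphere_contacts(points, center, radius, tol=1e-3):
    """Flag hand-surface sample points that lie on a sphere object.
    This is only an illustration of the ideal-sensor idea; the paper
    actually relies on ODE's trimesh collision detection."""
    return [p for p in points if abs(math.dist(p, center) - radius) <= tol]
```

Each returned point plays the role of an exact contact location reported by the ideal sensor.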

Weiss Matrix Based Tactile Sensors.
The DSA 9210 tactile sensor (Weiss Robotics) is a resistive touch sensor designed to be used in the fingertips of robot hands. The DSA 9210 is equipped with a sensing matrix of 70 sensing cells (taxels) with a spatial resolution of 3.4 mm; this matrix is covered by rubber. We used one Weiss DSA 9210 on each finger of a BarrettHand, as shown in Figure 4(a). The tactile data was acquired with the DSACOM-32-M controller (see Figure 4(b)). In Figure 4(c), we can see that contacts generate the activation of the taxels.

Barrett Matrix Based Tactile Sensors.
The Barrett matrix sensor is a capacitive tactile sensor, not a resistive one like the Weiss sensor. However, its design is also matrix based, so both sensors are classified as the same type (see Figure 5(b)). The main difference lies in the matrix resolution (number of taxels): the Barrett matrix sensor has 24 taxels per finger with a spatial resolution of 5 mm (see Figure 5(a)).

Cylindrical Beam Strain-Gauge Based Tactile Sensor
This sensor, presented in a previous work [7], is based on measuring the deformation of a cylindrical elastic beam covered with a rigid metallic capsule. As seen in Figure 6, the forces applied on the surface of the capsule are transmitted to the elastic beam, and their magnitude and direction are measured by strain gauges. The contact location can be obtained from the analysis of these forces, as depicted in Figure 6(c). As can be seen in Figure 8(c), instead of giving an exact contact point location, this sensor returns as tactile data a line that contains the contact point. We have defined this type of sensor as a linear tactile sensor.
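For a sensor of this kind, which measures both the contact force and its moment about the sensor frame, the contact line (the force's line of action) follows from the standard wrench relation M = r x F. A minimal sketch, with hypothetical measurements expressed in the sensor frame:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def line_of_action(F, M):
    """Recover the contact line from a measured force F and moment M
    (both about the sensor frame origin). Every contact candidate r on
    the line satisfies M = r x F, so the line is r0 + t*F with
    r0 = (F x M) / |F|^2, the point on the line closest to the origin."""
    f2 = sum(c * c for c in F)
    r0 = tuple(c / f2 for c in cross(F, M))
    return r0, F  # a point on the line and its direction vector
```

The returned line, rather than a single point, is exactly the "linear" tactile datum described above; the true contact is somewhere along it.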

Square Beam Strain-Gauge Based Tactile Sensor
This sensor is an evolution of the previous one. Its design has been changed so that it can be built with 3D printers using ABS. Basically, in this sensor the elastic beam has a square section and is clamped at only one side of the robotic finger. Forces on the fingertip are transmitted to the elastic beam and measured with strain gauges. Unlike in the previous sensor, torque is not measured (see Figure 8(c)). However, the remaining forces are enough to obtain the line that contains the contact location; therefore, this sensor is also defined as a linear tactile sensor.
Basically, this sensor has been designed following the same principles as the previous model. However, as this evolution is presented here for the first time, we explain it in detail. First, we assume that the applied force F lies in the plane shown in Figure 8(a). The cover is clamped to the elastic beam as shown in Figure 7(a), and the applied force is therefore transmitted to the elastic beam through this point. The applied force F can be decomposed into normal and tangential components (F_n and F_t) relative to the surface of the metallic cover, as shown in Figure 8(a). In addition, the normal force can be decomposed into Cartesian components relative to the axes (x, y) associated with the elastic beam, as shown in Figure 8(b). The magnitude and direction of the applied normal force F_n can then be calculated from these components.
We define this sensor as a linear sensor, since the data obtained from it is the complete line that includes the contact point, as can be seen in Figure 8(c).
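Combining the two beam-frame components into the magnitude and direction of the normal force is a direct vector composition. A sketch, assuming hypothetical gauge readings already converted to force units:

```python
import math

def normal_force_from_gauges(f_x, f_y):
    """Combine the two beam-frame force components (hypothetical gauge
    readings in newtons) into the magnitude and direction of the
    applied normal force in the beam cross-section plane."""
    magnitude = math.hypot(f_x, f_y)
    direction = math.atan2(f_y, f_x)  # angle relative to the beam x axis
    return magnitude, direction
```

The direction of this resultant, together with the known geometry of the clamping point, is what fixes the line containing the contact.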

BarrettHand Torque Finger Sensor
We define the simplest tactile sensor as one that can detect contacts but not their location. In real hands, this is implemented as joint torque sensors. In our case, we have used the BarrettHand torque finger sensor, consisting of a flexible beam, a free-moving pulley, a pair of cables, and two strain gauges, as shown in Figure 9. Basically, when a force is applied to the last phalange of the finger, the cables tighten, which moves the pulley and bends the flexible beam built into the inner finger. This deformation is detected by the strain gauges.

Experimentation
We performed a series of tactile explorations with the aforementioned sensors in order to obtain tactile data for our evaluation algorithm. In particular, seven different objects were explored (including a sphere, a cube, a canteen, a piston, a rabbit statue, and a flowerpot) with a BarrettHand mounted as the end-effector of a Staubli Rx90 robot (see Figure 10); only the tactile data from the ideal sensor was obtained via simulation (Figure 3(a)). The fingertips were changed to the corresponding sensor for each series. For every location (hereinafter, attempt) the tactile data was stored together with the robot and hand kinematics. This information was used to obtain the contact point locations in a common frame of reference used by the MCL evaluation algorithm (see Figure 11). To better understand the quality and completeness of the tactile data, MCL was executed over the whole series (i.e., MCL gave a pose estimation at every new attempt using the accumulated tactile data from previous attempts). This process was carried out six times for every explored object, changing the number of particles (1000 or 10000) and their standard deviation (25 mm, 50 mm, or 100 mm).
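Expressing a sensor-frame contact point in the common frame amounts to one homogeneous transform obtained from the hand kinematics. A minimal sketch (the transform values and frame names are hypothetical):

```python
def to_hand_frame(contact_local, T_sensor_to_hand):
    """Express a contact point measured in a fingertip sensor frame in
    the common hand frame. T_sensor_to_hand is the top 3 rows of the
    4x4 homogeneous transform computed from the hand kinematics
    (hypothetical values here; the real chain includes the robot arm)."""
    x, y, z = contact_local
    p = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(T_sensor_to_hand[i][j] * p[j] for j in range(4))
                 for i in range(3))
```

Chaining such transforms over arm, wrist, and finger joints is what places every attempt's contacts in the single reference frame used by MCL.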
As mentioned in previous sections, as well as obtaining a global comparison of tactile sensors, in this work we place special emphasis on comparing sensors with similar characteristics. For this reason we will first analyze the results for pairs of similar kinds of sensors, and then perform a global evaluation.

Pairwise Comparison.
There are two pairs of sensors with similar characteristics in our list. On the one hand, we will look at complex matrix based sensors, comparing the original BarrettHand fingertip sensor with the Weiss Robotics DSA 9210. On the other hand, we will compare simple sensors, testing our two strain-gauge based fingertip force sensors. A detailed description of the procedure followed for tactile data acquisition and sensor evaluation, together with examples, will be given for each comparison.

Matrix Based Fingertip Sensors
The Barrett fingertip sensor and the Weiss Robotics DSA 9210 sensor were installed on the three fingertips of the BarrettHand to carry out their respective series of tactile explorations. Figure 12 illustrates one example series, with the DSA 9210 touching a piston.
Tactile data for each attempt is processed together with the robot and hand kinematics to obtain the contact point locations. Figure 13 shows the contact points returned by both sensors after completing the tactile exploration series of the piston. The contact points are given to the MCL algorithm, which calculates the object pose that best fits the contacts. Figure 13 also shows the 3D model of the piston posed with the MCL output; it can be seen that the contact points fit the object model closely.
The error between the algorithm output and the real object pose is then calculated. Figure 14 shows the pose error (in position and orientation) obtained with 10000 particles and 100 mm of std during the execution of the MCL algorithm with the data from the piston tactile exploration. It can be seen that the error output stabilizes after 10 attempts with the Weiss sensor and 14 attempts with the Barrett sensor. We also see that the error is smaller with the Weiss sensor.
The same process was performed with the other six objects. The best results for both sensors were obtained using 1000 particles. Figure 15 shows the mean pose error (i.e., using data from the seven objects) obtained for this number of particles and different standard deviations (vertical lines). Horizontal lines represent the evolution of the MCL evaluation for 100 mm of standard deviation. Interestingly, this mean error now shows a better behaviour of the Barrett sensor than of the Weiss sensor. We attribute this to the external geometric design of the sensors. Both sensors have designs that allow the fingertip to contact easy objects like the piston, and since the Weiss sensor has better spatial resolution, its algorithm output is better for such simple objects. However, the curved design at the end of the Barrett fingertip allows that sensor to touch the surface of complex objects (see Figure 16(b)), while the Weiss design presents some contact problems, as shown in Figure 16(a).

Strain Gauges Based Fingertip Sensors.
Our two simple sensors were installed on the three fingertips of the BarrettHand to carry out their respective series of tactile explorations. Figure 17 illustrates one example series, with the square beam linear sensor touching a flowerpot.
Tactile data for each attempt is processed together with the robot and hand kinematics to obtain the contact line locations. Figure 18 shows the contact lines returned by both sensors after completing the tactile exploration series of the flowerpot. The contact lines are given to the MCL algorithm, which calculates the object pose that best fits these lines. Figure 18 also shows the 3D model of the flowerpot posed with the MCL output; it can be seen how the lines fit the object model.
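For the linear sensors, particle weighting uses point-to-line rather than point-to-point distances. A minimal sketch of that distance computation (our own illustration, not the paper's code):

```python
import math

def point_to_line_distance(p, line_point, line_dir):
    """Distance from a model vertex p to a sensed contact line, given by
    a point on the line and a direction vector. Used in place of
    point-to-point distance when weighting particles for linear sensors."""
    v = tuple(p[i] - line_point[i] for i in range(3))
    n = math.sqrt(sum(c * c for c in line_dir))
    u = tuple(c / n for c in line_dir)             # unit direction
    proj = sum(v[i] * u[i] for i in range(3))      # projection onto the line
    closest = tuple(line_point[i] + proj * u[i] for i in range(3))
    return math.dist(p, closest)
```

A particle then scores well when the model surface passes close to every sensed line, even though the exact contact point along each line is unknown.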
Figure 19 shows the pose error (in position and orientation) obtained with 10000 particles and 100 mm of std during the execution of the MCL algorithm with the data from the flowerpot tactile exploration. It can be seen that the error output stabilizes after 21 attempts with the square beam sensor and 24 attempts with the cylindrical beam sensor. We see that both sensors have similar errors for this example.
The same process was performed with the other six objects. As with the matrix sensors, the best results were obtained using 1000 particles. Figure 20 shows the mean pose error (i.e., using data from the seven objects) obtained for this number of particles and different standard deviations. The cylindrical beam sensor behaves slightly better than the square beam sensor. A possible explanation lies in the fact that cylindrical beams bend more uniformly than square beams, which produces a better force analysis and therefore a better contact location with the cylindrical beam sensor.

Global Comparison.
Figure 21 shows a global comparison, using the MCL evaluation algorithm explained in Section 2, for all the sensors presented in Section 3. As in the pairwise comparisons, the MCL was run six times for every explored object, changing the number of particles (1000 or 10000) and their standard deviation (25 mm, 50 mm, or 100 mm). The best results for every sensor were obtained with 10000 particles. The mean error obtained for this number of particles and different standard deviations is represented in Figure 21. Horizontal continuous lines represent the evolution of the MCL evaluation for 100 mm of standard deviation.
The best results were obtained with the ideal sensor, followed by the Barrett matrix sensor. The poorer results of our sensors are caused by the uncertainty of the contact line location (i.e., the real contact could be at any point of the line). As was demonstrated in [17], this uncertainty can be resolved by using the 3D models of both the hand fingers and the object in the localization algorithm. The torque finger data was insufficient for the MCL algorithm, producing very poor performance (not shown in Figure 21).

Conclusions
We have evaluated the usefulness of six different tactile sensors for object localization, using a method based on the Monte Carlo localization algorithm. Our method has revealed the main differences between the compared sensors. We have demonstrated that both the external and internal geometrical designs affect tactile data quality. For matrix sensors, the number of taxels and the spatial resolution are not as important as a curved geometrical design that allows the sensor to contact the object effectively. For strain-gauge fingertip force sensors, we have demonstrated the importance of the internal design of the flexible beam responsible for measuring contact forces.
Finally, the best performance was obtained with the ideal sensor, followed by the matrix based sensors. However, as was demonstrated in [17], the performance of our two sensors is comparable to that of the others when the 3D models of both the robot hand and the object are used. This proves that, for object localization purposes, simple and low-cost sensors can achieve a performance similar to more complex sensors when the sensor-hand synergy is taken into account.

Figure 2:
Figure 2: (a) Simplified 2D object model; the object pose is represented by a vector. (b) Tactile data (blue vertices) and particles representing possible object locations (yellow vectors). (c) The two best-evaluated particles (red and green) and the real object pose (black particle).

Figure 3 :
Figure 3: (a) Ideal sensor with a sphere. (b) Data extracted from the ideal sensor.

Figure 6 :
Figure 6: (a) Sensor parts. (b) Cylindrical beam sensor in the BarrettHand. (c) Forces in the sensor and contact point location.

Figure 11 :
Figure 11: (a) Contact points from a linear sensor referred to the hand frame. (b) Contact points from a matrix sensor referred to the hand frame.

Figure 12 :
Figure 12: Sequence of the piston tactile exploration with the Weiss sensor.

Figure 13 :Figure 14 :
Figure 13: (a) Contact points from the Barrett matrix sensor. (b) Contact points from the Weiss matrix sensor.

Figure 15 :
Figure 15: (a) Mean distance error with the matrix sensors. (b) Mean rotation error with the matrix sensors.

Figure 16 :
Figure 16: (a) Incorrect contact with the Weiss sensor. (b) Correct contact with the Barrett sensor.

Figure 17 :Figure 18 :Figure 19 :
Figure 17: Tactile exploration example with the square beam linear sensor.

Figure 20 Figure 21 :
Figure 20: (a) Mean distance error with linear sensors. (b) Mean rotation error with linear sensors.