In this paper, we propose a robust tactile sensing image recognition scheme for automatic robotic assembly. First, an image preprocessing procedure is designed to enhance the contrast of the tactile image. In the second layer, geometric features and Fourier descriptors are extracted from the image. Then, kernel principal component analysis (kernel PCA) is applied to transform the features into ones with better discriminating ability; this is the kernel PCA-based feature fusion. The transformed features are fed into the third layer for classification. We design a classifier by combining the multiple kernel learning (MKL) algorithm and the support vector machine (SVM). We also design and implement a tactile sensing array consisting of 10-by-10 sensing elements. Experimental results, obtained on real tactile images acquired by the designed tactile sensing array, show that the kernel PCA-based feature fusion significantly improves the discriminating performance of the geometric features and Fourier descriptors. Moreover, the designed MKL-SVM outperforms the regular SVM in terms of recognition accuracy. The proposed recognition scheme achieves a high recognition rate of over 85% for the classification of 12 metal parts commonly used in industrial applications.
In an automated assembly line, information about objects (e.g., shape and orientation) is necessary for robotic manipulation. Based on the information received, a robot can assemble products from the objects or parts in an automated manner. Previously, vision-based sensing techniques (e.g., CCD cameras) were often applied to recognize the shape and orientation of objects in an automated manufacturing line. Although this approach can provide good temporal and spatial resolution, its recognition accuracy is easily affected by environmental factors such as lighting conditions. When a robot operates in a dark environment, the visual sensing quality becomes poor. Conversely, the visual sensing approach may suffer from light reflections when the environment becomes brighter, especially when the objects to be assembled are made of metal. Moreover, the objects are sometimes hidden from the visual sensors during manipulation. In contrast, tactile sensing is less sensitive to these conditions. Therefore, tactile image-based object recognition has received increasing attention from researchers and engineers over the past decade [
When the tactile sensing approach is adopted, a two-dimensional tactile sensing array consisting of multiple sensing elements is attached to a robotic hand or finger. When the robotic finger touches an object, each sensing element in the tactile array measures the contact force or pressure applied on a specific and small area of the object. The pressure values of the sensing elements are then transformed into integer ones within the range of
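The mapping from raw sensor readings to integer gray levels can be sketched as follows. This is an illustrative example only: the pressure range, the 8-bit output range, and the function name are assumptions for illustration, not the actual settings used in this paper.

```python
# Hypothetical sketch: mapping raw analog pressure readings from the
# sensing elements of a 2-D tactile array onto integer gray levels.
# The pressure span (0-500, e.g., kPa) and the 8-bit output range
# (0-255) are assumed purely for illustration.

def quantize_pressure(raw, p_min=0.0, p_max=500.0, levels=256):
    """Clamp a raw pressure value to [p_min, p_max] and map it
    linearly onto the integers 0 .. levels-1."""
    clamped = max(p_min, min(p_max, raw))
    return round((clamped - p_min) / (p_max - p_min) * (levels - 1))

# A 2x2 patch of raw readings becomes a small gray-level image;
# readings above p_max saturate at the top gray level.
patch = [[0.0, 250.0], [500.0, 600.0]]
image = [[quantize_pressure(v) for v in row] for row in patch]
```

Applying the same mapping element-by-element over the full array yields the gray-level tactile image processed in the subsequent layers.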
Previous works mainly solve the problem in which the object is larger, or much larger, than the tactile sensing array, by means of edge tracking/following [
Due to the factors above, it is difficult to identify the shape of an object from its tactile image. To achieve high-reliability automated robotic assembly, it is thus necessary to develop a high-accuracy tactile image recognition scheme. To this end, we propose in this paper a scheme composed of three main layers. In the first layer, an image preprocessing procedure is performed to enhance the contrast of the tactile images. In the second layer, geometric features and Fourier descriptors are extracted from a given image. The extracted geometric features and Fourier descriptors form a feature vector, which is high-dimensional and does not necessarily achieve satisfactory recognition accuracy. Kernel principal component analysis (kernel PCA) [
The rest of this paper is organized as follows. In Section
The piezoresistive layer of the sensor is a functional material [
Fabrication processes of tactile sensor arrays using screen printing technology. (I) Print the row and column electrodes on the PET films, respectively. (II) Print the piezoresistive material. (III) Bottom PET film with adhesion resin. (IV) The top and bottom PET films are laminated into a large area tactile array sensor.
The tactile sensor array was fabricated on a flexible film (
The pressure-piezoresistivity characteristics of the proposed sensor were measured by a customized instrument developed in the LabVIEW environment, which includes a pressure chamber, a multifunction switch/measure unit (Agilent 34980A), and a National Instruments data acquisition (NI DAQ) card. In the calibration process, the sensor was placed in the chamber and subjected to a static uniform load (to ensure that each cell experienced the same pressure). The pressure in the chamber was controlled through a LabVIEW interface, and the measured data were scanned by the Agilent 34980A and recorded via the NI DAQ card. Figure
Sensor array calibration device and the multifunction switch/measure unit. (a) The pressure chamber. (b) Agilent 34980A. (c) The pressure-piezoresistivity characteristics of one cell (position (3, 3)).
To determine the characteristics of a sensor cell, the experimental setting in Figure
Pressure testing machine and experimental setting.
In addition, to determine the appropriate stress for collecting tactile images, five different loads (1 kgf, 2 kgf, 3 kgf, 4 kgf, and 5 kgf) were generated by the indenter. Raw images of a bar-shaped object with a fixed cover under the five different loads are shown in Figure
Raw images of a bar-shaped object with a fixed cover under various loads from 1 kgf to 5 kgf.
The contact behavior is mainly determined by the surface flatness and roughness of the two contacting objects. That is, local stress is concentrated on the first contacting area, and this phenomenon leads to fragmentation of the tactile image. To avoid this, we place an elastic cover on the tactile sensor as a buffer layer. Several commercially available cover layers with similar hardness were examined under a load of 4 kgf, as shown in Figure
Raw images of a bar-shaped object with various covers under a fixed load of 4 kgf.
Moreover, the tactile sensor array is fabricated on a flexible film (
Layout of the designed tactile sensor array.
In this study, 12 metal objects with different shapes and sizes are designed as the testing objects. Samples of the designed objects are shown in Figure
Descriptions of the 12 objects.
| Object number | Description |
| --- | --- |
| 1 | Bar shape with 10 mm length |
| 2 | Bar shape with 35 mm length |
| 3 | Hexagon with flat size of 13 mm and a Φ8 mm hollow hole |
| 4 | Solid hexagon with flat size of 13 mm |
| 5 | Hexagon with flat size of 10 mm and a Φ6 mm hollow hole |
| 6 | Solid hexagon with flat size of 10 mm |
| 7 | Square with flat size of 13 mm and a Φ8 mm hollow hole |
| 8 | Solid square with flat size of 13 mm |
| 9 | Square with flat size of 10 mm and a Φ6 mm hollow hole |
| 10 | Solid square with flat size of 10 mm |
| 11 | Φ13 mm circle with a Φ8 mm hollow hole |
| 12 | Solid Φ13 mm circle |
Samples of the 12 objects to be recognized in this study.
Examples of the tactile images. The images in the first row of this figure are the tactile images of the first six objects (class 1–class 6), respectively. The second row displays the examples of the tactile images of class 7–class 12, respectively. Each image is a 10-by-10 gray-level matrix.
As can be observed from these examples, the spatial resolution of the tactile image is extremely low, and it is very difficult to discriminate between objects by visual inspection. For example, the object in the last (i.e., sixth) image of the first row and the one in the last image of the second row are actually different: the former is a solid hexagon, while the latter is a solid circle. However, due to the low resolution and the aforementioned diffusion and fence effects, the two objects look very similar and are thus difficult to discriminate. Therefore, a robust recognition scheme is required. In the following, we introduce our recognition scheme in detail.
Each tactile image is originally a pixel matrix of
An illustrative example for the image preprocessing stage, where (a) is the testing object, (b) is the corresponding 10-by-10 tactile image, (c) is the resized 33-by-33 image, (d) is the image after Gamma correction-based contrast enhancement, (e) is the image after noise reduction, and (f)–(h) are the binarized images with
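The preprocessing steps above can be sketched in code. This is a minimal illustration only: nearest-neighbor upsampling stands in for the paper's resizing method, and the gamma value and binarization threshold are assumptions, not the actual settings.

```python
# Illustrative preprocessing sketch: upsample a 10x10 tactile image to
# 33x33, apply gamma correction for contrast enhancement, and binarize.
# The gamma value (0.5) and threshold (128) are assumed for illustration.

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbor upsampling of a 2-D gray-level image."""
    h, w = len(img), len(img[0])
    return [[img[i * h // new_h][j * w // new_w] for j in range(new_w)]
            for i in range(new_h)]

def gamma_correct(img, gamma=0.5, max_val=255):
    # out = max_val * (in / max_val) ** gamma; gamma < 1 brightens
    # mid-range pixels, enhancing faint contact regions.
    return [[round(max_val * (p / max_val) ** gamma) for p in row]
            for row in img]

def binarize(img, threshold=128):
    return [[1 if p >= threshold else 0 for p in row] for row in img]

# A synthetic 10x10 image with a faint square contact patch.
tactile = [[0] * 10 for _ in range(10)]
for i in range(3, 7):
    for j in range(3, 7):
        tactile[i][j] = 100

big = resize_nearest(tactile, 33, 33)
enhanced = gamma_correct(big)   # gray level 100 is boosted to 160
binary = binarize(enhanced)     # the boosted patch now survives thresholding
```

Without the gamma step, the faint patch (gray level 100) would fall below the threshold and vanish from the binarized image; after enhancement it is retained.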
Two kinds of geometric features are extracted from each binarized image: area and edge-to-mean variance (called variance hereafter). Area denotes the number of pixels labeled as 1 in the binarized image. To compute the variance, we first detect the edge points and the centroid of the object within one binarized image and then compute the distance between each edge point and the centroid. Finally, the variance of the computed distances is calculated.
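The two geometric features can be sketched as follows. The 4-neighbor edge rule used here is an assumption for illustration; the paper's exact edge detector may differ.

```python
# Sketch of the two geometric features described above: area and the
# edge-to-mean variance. An object pixel is treated as an edge point
# if any of its 4-neighbors is background (or lies outside the image).
import math

def geometric_features(binary):
    h, w = len(binary), len(binary[0])
    obj = [(i, j) for i in range(h) for j in range(w) if binary[i][j] == 1]
    area = len(obj)                              # number of pixels labeled 1
    ci = sum(i for i, _ in obj) / area           # centroid row
    cj = sum(j for _, j in obj) / area           # centroid column

    def is_edge(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < h and 0 <= nj < w) or binary[ni][nj] == 0:
                return True
        return False

    # Distance from each edge point to the centroid; the feature is
    # the variance of these distances.
    dists = [math.hypot(i - ci, j - cj) for i, j in obj if is_edge(i, j)]
    mean = sum(dists) / len(dists)
    variance = sum((d - mean) ** 2 for d in dists) / len(dists)
    return area, variance

# A solid 4x4 square: edge-to-centroid distances are nearly equal,
# so the variance is small; the area is 16.
square = [[0] * 6] + [[0] + [1] * 4 + [0] for _ in range(4)] + [[0] * 6]
area, var = geometric_features(square)
```

Compact shapes such as circles and squares yield small variances, while elongated shapes such as bars yield larger ones, which is what gives this feature its discriminating power.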
To compute the Fourier descriptors, the boundary extraction algorithm [
Examples of boundary extraction result. The corresponding gray-level tactile images are displayed in Figure
Suppose that the Cartesian coordinates of the boundary pixels of an object are
The Fourier descriptors
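A minimal sketch of the descriptor computation is given below, assuming the common construction in which ordered boundary coordinates are treated as a complex signal, a DFT is taken, the DC term is dropped for translation invariance, magnitudes are used for rotation and start-point invariance, and division by the first coefficient gives scale invariance. The paper's exact normalization may differ.

```python
# Hedged sketch of Fourier descriptors from an ordered object boundary.
import cmath

def fourier_descriptors(boundary, num=4):
    """boundary: list of (x, y) pixels traced in order around the object.
    Returns num normalized DFT magnitudes |a_u| / |a_1| for u = 2..num+1."""
    s = [complex(x, y) for x, y in boundary]     # boundary as complex signal
    n = len(s)
    a = [sum(s[k] * cmath.exp(-2j * cmath.pi * u * k / n) for k in range(n))
         for u in range(n)]
    scale = abs(a[1])                            # scale normalization
    return [abs(a[u]) / scale for u in range(2, 2 + num)]

# Boundary of a square traced counterclockwise, 8 samples; its 4-fold
# symmetry forces most low-order descriptors to (numerically) zero.
square = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
fd = fourier_descriptors(square)
```

For this symmetric square only the coefficients with index congruent to 1 modulo 4 are nonzero, so the descriptor vector is mostly zeros with a single nonzero entry, a compact signature of the shape.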
The kernel PCA feature fusion consists of a training phase and a testing phase. Suppose that there is a set of training data
In the testing phase, the projection of the testing data
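The two phases can be sketched as below. This is a minimal, assumption-laden illustration: it extracts only the single leading component via power iteration, uses an RBF kernel with an arbitrary width, and all function names are hypothetical; the actual method keeps multiple components and tunes the kernel parameter.

```python
# Minimal kernel PCA sketch: train on a kernel matrix over the training
# set, center it, extract the leading eigenvector, and project a test
# point using the consistently centered test kernel vector.
import math

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def center_kernel(K):
    n = len(K)
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - row[j] + tot for j in range(n)]
            for i in range(n)]

def top_eigenvector(K, iters=500):
    # Power iteration for the leading eigenpair of a symmetric PSD matrix.
    # (Start vector must not be uniform: centered kernels annihilate it.)
    n = len(K)
    v = [float(i + 1) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
    lam = 0.0
    for _ in range(iters):
        w = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam, v

def kpca_project(train, test, gamma=1.0):
    n = len(train)
    K = [[rbf(a, b, gamma) for b in train] for a in train]
    lam, alpha = top_eigenvector(center_kernel(K))
    alpha = [a / math.sqrt(lam) for a in alpha]  # so lam * (alpha . alpha) = 1
    # Center the test kernel vector consistently with the training kernel.
    k = [rbf(x, test, gamma) for x in train]
    kmean = sum(k) / n
    row = [sum(r) / n for r in K]
    tot = sum(row) / n
    kc = [k[i] - kmean - row[i] + tot for i in range(n)]
    return sum(alpha[i] * kc[i] for i in range(n))

# Two well-separated 1-D clusters: the leading kernel principal
# component projects them to opposite signs.
train = [(0.0,), (0.1,), (5.0,), (5.1,)]
proj_a = kpca_project(train, (0.05,))
proj_b = kpca_project(train, (5.05,))
```

The key subtlety, reflected above, is that the test kernel vector must be centered with the statistics of the training kernel matrix, not its own.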
Given a training set
MKL is a data-driven learning algorithm which learns the kernel from the given training data [
The dual problem of (
Finally, for a test data point
In this paper, we solve the optimal values of
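The core idea of MKL-SVM can be illustrated with a small sketch: the effective kernel is a convex combination of base kernels, k(x, z) = Σ_m β_m k_m(x, z) with β_m ≥ 0 and Σ_m β_m = 1, and the SVM decision function is evaluated with this combined kernel. In this sketch the weights and the SVM coefficients are fixed by hand purely for illustration; in the actual method both are learned from the training data.

```python
# Sketch: evaluating a convex combination of base kernels and an SVM
# decision function built on it. Weights, support vectors, labels and
# dual coefficients below are placeholders, not learned values.
import math

def linear_kernel(x, z):
    return sum(a * b for a, b in zip(x, z))

def rbf_kernel(x, z, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def combined_kernel(x, z, betas=(0.3, 0.7)):
    """Convex combination of base kernels: betas sum to 1, all >= 0."""
    assert abs(sum(betas) - 1.0) < 1e-12 and all(b >= 0 for b in betas)
    base = (linear_kernel(x, z), rbf_kernel(x, z))
    return sum(b * k for b, k in zip(betas, base))

def decision(x, support, labels, alphas, b=0.0):
    """SVM decision value f(x) = sum_i alpha_i y_i k(s_i, x) + b."""
    return sum(a * y * combined_kernel(s, x)
               for s, y, a in zip(support, labels, alphas)) + b
```

Because any convex combination of positive semidefinite kernels is itself positive semidefinite, the combined kernel remains a valid SVM kernel for every admissible choice of weights.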
In this section, we first test the recognition accuracies of the geometric features (GE) and Fourier descriptors (FD) using a simple classifier, that is, the
It can be seen from Figure
Next, we test the proposed recognition scheme (combination of the kernel PCA-based feature fusion and MKL-SVM) and compare the proposed scheme with other combinations. Similarly, the 10-run twofold cross validation is performed to optimize the parameters of the methods. For kernel PCA, the parameters to be optimized include the kernel parameter and the number of eigenvectors. The parameters of SVM are the penalty weight
Comparison of recognition accuracies among different methods.
| Feature | Classifier | Recognition accuracy (%) |
| --- | --- | --- |
| FD + GE | | 68.69 |
| FD + GE | SVM | 76.17 |
| Kernel PCA-based | SVM | 82.13 |
| Kernel PCA-based | MKL-SVM | 85.54 |
As can be seen from Table
In this paper, we have presented a recognition scheme for solving the difficult tactile image recognition problem, which plays a critical role in automated robotic assembly. The proposed kernel PCA-based feature fusion technique largely improves the recognition accuracy of the frequently used geometric features and Fourier descriptors, and the multiple kernel learning (MKL)-based SVM performs much better than the regular SVM in object recognition from tactile images. Experimental results have demonstrated the effectiveness of the proposed recognition scheme. Nevertheless, several issues worth studying remain that may further improve the current results. For example, other types of kernels can be included in the MKL-SVM to obtain a better kernel combination, which will be our future work.
The authors declare that there is no conflict of interest regarding the publication of this paper.