This paper focuses on the problem of visual servo grasping of household objects for a nonholonomic mobile manipulator. First, a new kind of artificial object mark based on QR (Quick Response) Code is designed, which can be affixed to the surface of household objects. Second, after the vision-based autonomous mobile manipulation system is summarized as a generalized manipulator, the generalized manipulator’s kinematic model is established, its analytical inverse kinematic solutions are derived, and a novel active-vision-based camera calibration method is proposed to determine the hand-eye relationship. Finally, a visual servo switching control law is designed to control the service robot so that it completes the object grasping operation. Experimental results show that the QR Code-based artificial object mark can overcome the difficulties brought by household objects’ variety and operation complexity, and that the proposed visual servo scheme makes it possible for a service robot to grasp and deliver objects efficiently.
A classical mobile manipulator system (MMS) consists of a manipulator mounted on a nonholonomic mobile platform. This arrangement considerably extends the manipulator’s workspace and is widely used in service robot applications [
When robots operate in unstructured environments, it is essential to include exteroceptive sensory information in the control loop. In particular, visual information provided by vision sensors such as charge-coupled device (CCD) cameras guarantees accurate positioning, robustness to calibration uncertainties, and reactivity to environmental changes. Much of the work relating CCD cameras and manipulators has focused on the manipulator’s visual servo control, which specifies robotic tasks (such as object grasping and assembly) in terms of desired image features extracted from a target object. An overview of visual servoing can be found in [
Combining CCD cameras with mobile robots leads to applications in vision-based autonomous navigation control. Ma et al. [
The newest trend is to integrate CCD cameras into a mobile manipulator to form a vision-based mobile manipulation system (VBMMS). Thanks to the capabilities of the vision subsystem, a VBMMS can work in an unstructured environment and has wider applications than a fixed-base manipulator or a mobile platform alone. Because accurate and robust positioning of a VBMMS is hard to achieve, very few physical implementations have been reported. de Luca et al. [
This paper presents a physical implementation of a VBMMS in a service robot intelligent space. It makes two basic contributions. First, after summarizing the VBMMS as a generalized manipulator, its kinematics is analyzed analytically, and an active-vision-based camera calibration method is then proposed to determine the hand-eye relationship. Second, a novel switching control strategy is proposed which switches between eye-fixed approximation and position-based static look-and-move grasping. The remainder of the paper is organized as follows. Section
As shown in Figure
QR Code-based artificial object mark.
The coding of information stored in internal information representation part of the mark is shown in Table
Coding of information stored in QR Code-based artificial object mark.
Info type | Description | Bytes
---|---|---
Name | First two characters of the object’s name | 2
Serial number | Such as | 1
Sizes | Length × width × height, 2 bytes for each item | 8
Material | p (plastic), g (glass), z (paper), w (wood), s (metal), t (textile) | 1
Operation force | H (huge), L (large), M (middle), S (small), T (tiny) | 1
Operation position | u (upper), m (middle), b (bottom) | 1
Operation orientation | a (above), f (front), b (back), l (left), r (right) | 1
It can be seen from Table
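As an illustration of how the fields listed in the table can be packed into and recovered from the mark’s internal information representation, consider the following sketch. The byte-level layout is not given in the text, so a simple delimited ASCII record and all field names here are assumptions:

```python
from dataclasses import dataclass

# Legal field values, taken from the coding table above.
MATERIALS = set("pgzwst")    # plastic, glass, paper, wood, metal, textile
FORCES = set("HLMST")        # huge, large, middle, small, tiny
POSITIONS = set("umb")       # upper, middle, bottom
ORIENTATIONS = set("afblr")  # above, front, back, left, right

@dataclass
class ObjectMark:
    name: str          # first two characters of the object's name
    serial: int        # serial number
    size_mm: tuple     # (length, width, height)
    material: str
    force: str
    position: str
    orientation: str

    def encode(self) -> str:
        """Pack the fields into one record string for the QR symbol."""
        l, w, h = self.size_mm
        return ";".join([self.name[:2], str(self.serial), f"{l}x{w}x{h}",
                         self.material, self.force,
                         self.position, self.orientation])

def decode(record: str) -> ObjectMark:
    """Recover the mark's fields from a scanned record string."""
    name, serial, size, mat, force, pos, ori = record.split(";")
    l, w, h = (int(v) for v in size.split("x"))
    assert mat in MATERIALS and force in FORCES
    assert pos in POSITIONS and ori in ORIENTATIONS
    return ObjectMark(name, int(serial), (l, w, h), mat, force, pos, ori)
```

A round trip through `encode`/`decode` preserves all fields, which is the property the VBMMS relies on when it scans a mark at grasping time.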
As shown in Figure
Vision-based mobile manipulation system and its link coordinate frames.
Due to the nonholonomic constraint of the TMR and the nonredundancy of the 4-DOF manipulator, completing a grasping task with the VBMMS is very difficult, and so far hardly any related work can be found. Taking into account the difficulty of controlling the TMR and the manipulator separately, the VBMMS is summarized as a generalized manipulator, shown in Figure
Table
D-H parameters of the generalized manipulator.
Link | | | | | Type
---|---|---|---|---|---
1 | 0 | 0 | | 0 | R
2 | | 0 | 0 | | T
3 | | 0 | | 25 | R
4 | | −6.5 | | 0 | R
5 | | 6.5 | | 28.5 | R
6 | | 0 | | 0 | R
Tool | | 0 | 0 | 28 |
Table
It is well known that the symbol
From (
Note that
Given an arbitrary end-effector pose, whose orientation is expressed by RPY description method, the four sets of inverse kinematic solutions can be solved by (
Inverse kinematic results of the generalized manipulator.
Roll/(rad) | Pitch/(rad) | Yaw/(rad) | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---
0.4449 | 0.2566 | −1.2466 | 129.4143 | 15.7087 | 76.7968 | 0.2417 | 122.6 | −0.7854 | 0.1366 | −0.6283 | −0.3927
| | | | | | 0.1922 | 123.7487 | 1.1579 | 0.1366 | −2.5133 | −0.6142
| | | | | | 0.0880 | 127.4757 | −1.3771 | −0.5851 | 0.1454 | −1.0836
| | | | | | −5.9305 | 121.3515 | 1.7428 | −0.5851 | 2.9962 | 0.0767
−0.2411 | −0.4617 | −0.2266 | 12.5929 | 22.4547 | 69.2228 | 1.0203 | 19.3710 | −1.0229 | 0.5987 | −0.2428 | 1.0851
| | | | | | 0.2661 | 40.1966 | 2.4693 | 0.5987 | −2.8988 | −0.0848
| | | | | | 0.1653 | 52.60 | −0.4488 | −1.0472 | 0.1571 | −0.5417
| | | | | | −4.5806 | 20.1375 | 1.3136 | −1.0472 | 2.9845 | 1.5420
After all the solutions are obtained, they are checked against the joints’ value ranges, and the optimal set of solutions can then be selected using a certain optimality criterion such as the shortest path.
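The post-processing just described can be sketched as follows; the joint-limit values and the tuple representation of a configuration are illustrative assumptions:

```python
def select_solution(candidates, limits, current):
    """Keep candidate joint solutions inside the joint value ranges,
    then pick the one closest to the current configuration
    (shortest-path optimality criterion)."""
    feasible = [q for q in candidates
                if all(lo <= qi <= hi for qi, (lo, hi) in zip(q, limits))]
    if not feasible:
        return None  # pose unreachable within the joint limits
    # squared joint-space distance to the current configuration
    return min(feasible,
               key=lambda q: sum((qi - ci) ** 2 for qi, ci in zip(q, current)))
```

Among the four analytical solution sets, the feasible one nearest the current joint values is executed, which avoids unnecessary large joint motions during grasping.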
Hand-eye relationship determination is a key issue in the visual servo control of a robot hand-eye system, and much work has been done on it. In this paper, we propose a novel method based on Zhang’s camera calibration method and tensor theory. Zhang’s algorithm, proposed in 1999, is highly representative because of its ease of use, flexibility, and high accuracy [
Figure
Scheme of hand-eye relationship determination.
When the manipulator executes the
From Figure
Note that
It can be seen from (
According to the different values of the free indexes
Consider that there are
Based upon least-squares solution of a homogeneous system of linear equations,
Till then, the unique
After solving
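The homogeneous least-squares step can be sketched with the standard SVD construction: the minimizer of ‖Ax‖ subject to ‖x‖ = 1 is the right singular vector associated with the smallest singular value of A:

```python
import numpy as np

def homogeneous_lstsq(A):
    """Least-squares solution of the homogeneous system A @ x = 0:
    minimise ||A @ x|| subject to ||x|| = 1. The minimiser is the
    right singular vector of the smallest singular value of A."""
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1]  # last row of V^T, a unit vector
```

The solution is unique up to sign whenever the smallest singular value is simple, which is the situation assumed in the derivation above.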
The visual servo control scheme designed in this section consists of two steps: eye-fixed approximation of the household object and static look-and-move grasping. Once the VBMMS is commanded to grasp a household object that is in its camera’s FOV, the VBMMS starts to approach the object with its eye gazing at it. When the distance between the VBMMS’s camera and the household object reaches a certain value, the approximation process switches to static look-and-move grasping.
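The two-phase switching logic can be sketched as a simple state transition; the threshold distance below is an assumed placeholder for the "certain value" at which the paper switches phases:

```python
SWITCH_DISTANCE = 0.5  # metres; assumed placeholder for the switching threshold

def control_phase(distance_to_object, phase):
    """Return the next control phase given the current camera-object
    distance. The switch is one-way: once grasping starts, the VBMMS
    does not return to the approximation phase."""
    if phase == "eye-fixed approximation" and distance_to_object <= SWITCH_DISTANCE:
        return "static look-and-move grasping"
    return phase
```

During the first phase the eye-fixed control law of the next subsection drives the VBMMS; after the switch, the position-based look-and-move scheme takes over.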
Figure
Scheme of eye-fixed approximation.
It is well known that for nonhomogeneous projective coordinate
We partition the interaction matrix so as to isolate the second degree of freedom of the generalized manipulator. Note the following:
Note
The vector
Under the influence of control input shown in (
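For reference, a minimal sketch of a classic image-based visual servo law for point features is given below. Note that the paper partitions the interaction matrix to isolate the second degree of freedom of the generalized manipulator, whereas this sketch uses the standard unpartitioned form with a pseudo-inverse; all names here are illustrative:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic interaction (image Jacobian) matrix of one normalised
    image point (x, y) observed at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw v = -gain * pinv(L) @ e for stacked
    point features, where e is the image feature error."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

When the current features coincide with the desired ones the commanded velocity vanishes, which is the equilibrium the eye-fixed approximation drives the system towards.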
Figure
Scheme of static look-and-move grasping.
In Figure
Structure reconstruction of plane
Take current camera frame
After solving
By solving (
Computation of
Similar to Step
Adjust the solved
As we all know, the rotation matrix
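Where a rotation matrix recovered from noisy image measurements must be made orthonormal, a common approach (assumed here, since the paper's exact refinement step is not reproduced in this excerpt) is to project the estimate onto SO(3) via the SVD:

```python
import numpy as np

def nearest_rotation(M):
    """Project a noisy 3x3 matrix onto SO(3): with M = U S V^T, the
    closest rotation in the Frobenius norm is R = U @ V^T, with a
    sign correction so that det(R) = +1."""
    U, _, Vt = np.linalg.svd(np.asarray(M, dtype=float))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

The projected matrix satisfies the orthonormality and unit-determinant properties required of a rotation matrix before it is substituted back into the homogeneous transformation.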
Substitute
Equation (
To refine the solved
Till then, the homogeneous transformation
In (
In this section, we discuss two experiments separately: hand-eye relationship determination and the VBMMS’s switching control scheme.
To simplify grid corner extraction, a model plane containing a pattern of 4 × 4 squares is chosen as the calibration object; the size of each square is 22 mm × 22 mm. Attach an object frame
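The planar object points of such a calibration pattern, expressed in the object frame attached to the model plane, can be generated as follows. The 22 mm square size is taken from the text; the number of extracted corners per side is an assumption:

```python
import numpy as np

def model_plane_points(n_corners=5, square_mm=22.0):
    """Planar object points (Z = 0) of the calibration pattern,
    one point per extracted grid corner, spaced by the square size."""
    pts = [(i * square_mm, j * square_mm, 0.0)
           for j in range(n_corners) for i in range(n_corners)]
    return np.array(pts)
```

These object points, paired with the extracted image corners of each of the six views, are the input to Zhang's calibration procedure.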
With the TMR kept static, six images of the plane are taken under different orientations produced by several known manipulator movements. The images are shown in Figure
Six images of a model plane under different orientations, together with the corners (indicated by red crosses and blue squares).
Using Zhang’s calibration method, the refined camera intrinsic parameter matrix
The known movements of the manipulator corresponding to six images are shown in Table
The known movements of the generalized manipulator.
Image | | | |
---|---|---|---|---
1 | 0 | 0.5756 | −0.0245 | −1.1241
2 | 0 | 0.5710 | −0.2224 | −1.0983
3 | 0 | 0.4560 | −0.2256 | −1.2930
4 | 0 | 0.2143 | 0.0360 | −1.5370
5 | 0 | −0.0012 | 0.1302 | −1.6249
6 | 0 | −0.3974 | −0.1872 | −1.9276
From Table
The pose information corresponding to known movements of the generalized manipulator.
| | Roll | Pitch | Yaw | | |
---|---|---|---|---|---|---|---
I1 | | 2.9713 | 1.4401 | −2.9833 | 42.2281 | −0.6184 | 41.7720
| | −2.5304 | −0.1911 | −1.6459 | −6.2760 | −7.2463 | 45.1793
I2 | | 1.9856 | 1.3546 | −2.0969 | 41.7209 | −5.4980 | 43.0438
| | −2.6322 | −0.0152 | −1.7485 | −17.8379 | −1.7928 | 44.5287
I3 | | 2.2297 | 1.2953 | −2.3112 | 38.8304 | −6.0230 | 43.0616
| | −2.5396 | 0.0100 | −1.7451 | −20.4513 | −6.5954 | 45.0891
I4 | | −2.9439 | 1.3868 | 2.9484 | 33.4385 | 1.0060 | 46.4443
| | −2.4884 | −0.2583 | −1.6244 | −2.9982 | −11.5786 | 54.2764
I5 | | −1.9581 | 1.4303 | 1.9545 | 27.6891 | 3.6303 | 52.0272
| | −2.6579 | −0.3423 | −1.6116 | 2.7396 | −2.0687 | 63.8751
I6 | | 1.3769 | 1.3922 | −1.3078 | 16.0177 | −4.8820 | 54.7535
| | −2.7377 | −0.0032 | −1.5332 | −23.9188 | 1.5681 | 71.5180
Combine image 1 and image 2 (denoted as
The images corresponding to the desired and initial camera position and the object mark recognition using Gaussian model and Hough transformation are illustrated in Figure
Images of the target for the desired/initial camera position and object mark recognition using Gaussian model and Hough transformation.
Desired image
Initial image
Desired binary image
Initial binary image
As can be seen in Figure
Choose
The computed eye-fixed approximation control input.
Figure
Two images of the object mark corresponding to
Image corresponding to
Image corresponding to
The extracted corner correspondences matched by RANSAC
The corner correspondences given in Figure
Figure
Two images of the object mark corresponding to
Image corresponding to
Image corresponding to
The extracted corner correspondences matched by RANSAC
Applying DLT to the corner correspondences, the homography
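The DLT estimation of a homography from corner correspondences can be sketched as follows (standard formulation; function and variable names are illustrative). Each correspondence contributes two rows to a homogeneous system whose null vector, found by SVD, is the homography up to scale:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the homography H (up to scale) mapping points (x, y)
    in the first image to points (u, v) in the second, via the
    Direct Linear Transform: stack two equations per correspondence
    and take the null vector of the system by SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 3)
```

At least four correspondences are needed; with the RANSAC-filtered inliers of the corner matching step, the system is overdetermined and the SVD gives the least-squares estimate.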
Affected by the computed control input, the VBMMS moved to a position corresponding to the desired camera frame
Without some prior knowledge (such as objects’ color, texture, sizes, and locations) provided by people, it is nearly impossible for a VBMMS to grasp and deliver household objects, not only because of the objects’ variety and operation complexity, but also because of the difficulties of modeling the VBMMS’s kinematics and handling its nonholonomic constraint. On the one hand, a new QR Code-based artificial object mark is designed, which stores an object’s property and operation information and can easily be distinguished from a complex family environment. On the other hand, to model the VBMMS, we summarize it as a generalized manipulator, then acquire its analytical inverse kinematic solutions and determine the hand-eye relationship based upon active vision. Meanwhile, to deal with the VBMMS’s nonholonomic constraint, a visual servo switching control law composed of an eye-fixed approximation part and a static look-and-move grasping part is designed. The proposed scheme solves the household objects’ grasping and delivering problem well and makes it possible for a VBMMS-type service robot to provide better housekeeping service.
The authors declare that there is no conflict of interests regarding the publication of this paper.
This project was supported by the National Natural Science Foundation of China (no. 61375084) and the Advanced Mechanical Design and Manufacturing Public Service Platform Building, Longyan Science and Technology Plan Project (no. 2012ly01).