Research Article

Study on the Impact Test System of a Manipulator Based on a Six-Axis Force Sensor and an Intelligent Controller

With the development and application of intelligent sensing systems, intelligent control systems represented by machine vision have been promoted and used in various fields, including manipulator control. The mechanical arm is the main actuator of robots and other mechanical systems, but its application environment is extremely complex, and as industrial processes develop, the sensitivity demanded of its technology and operating system for arbitrary-posture operation also rises steadily. To address these problems, this paper designs a manipulator control system based on machine vision theory and an orthogonal parallel six-dimensional force sensor to meet the working requirements of the manipulator in high-precision environments. The system hardware consists of a machine vision structure and a six-dimensional force sensor. The practical effect of the model was verified by a collision detection experiment. This research can provide theoretical guidance and a practical reference for the field of manipulator control.


Introduction
The control of the manipulator has always been an important and active issue in mechanical control. At present, the control theory and methods of the manipulator have been deeply studied [1][2][3]. Machine vision technology takes the photosensitive element as its processing core and uses the analog signal transmitted by the optical sensor to convert between and collect the electrical signal and the optical signal. With the development of camera and image processing technology, the data processed by machine vision can be extended from two-dimensional to three-dimensional. Under the influence of automated production, the industrial manipulator has become an important piece of equipment supporting modern production, and how to realize its automatic control has become a current research hotspot.
Early vision-based manipulator control systems process the image and obtain the target position from it, so as to achieve intelligent, closed-loop control [4,5]. For example, Li Y. et al. used a closed-loop vision algorithm developed from monocular and binocular vision to make the step from target recognition to pose estimation completely automatic [6][7][8]. However, a manipulator control system relying only on vision cannot meet the needs of modern industry: its accuracy is limited, the resulting error is large, and prior knowledge of the environment is required [9]. Literature [10] developed a controller for the manipulator based on the behavior data of a double-rotating-joint manipulator and a reinforcement learning algorithm, but it has defects such as limited data points and the need to configure a large amount of prior knowledge for the manipulator model, so one trained controller cannot control manipulators with different structures. A visual sensor can collect rich information from the image. By combining it with a reinforcement learning algorithm, an intelligent controller can learn information from the image independently, optimizing the control strategy through a reward-and-punishment mechanism, so that the intelligent controller can be trained to control manipulators with different structures without any prior knowledge of the manipulator's structural model [10][11][12].
The above shows that machine learning combined with the manipulator alone cannot meet the high-precision requirements placed on today's manipulators [13]. Therefore, many scholars have turned their research toward the six-dimensional force sensor with its high-precision performance. The core technology in developing a six-dimensional force sensor is the structural design of the sensor elastomer, whose structural form directly affects the sensor's performance [14]. As robot applications move toward high speed and high precision, the problem of dynamic force measurement is becoming more and more prominent. The six-dimensional force sensor not only requires high sensitivity and small cross-interference on each axis but also requires a certain working bandwidth to meet the needs of dynamic force measurement. In order to adapt to different working environments, the elastomer structures of the six-dimensional force sensors developed by experts and scholars are also diverse. Among them, Korean scholars have designed a six-dimensional force/torque sensor to detect the full force information at the foot of a humanoid robot; the sensor is small and has adjustable, independent sensitivity to different force components [15,16]. Ge Yu of the Institute of Intelligent Machines, Chinese Academy of Sciences, and others invented an ultrathin six-dimensional force/torque sensor based on a cross beam and a double E-shaped film, used to obtain force sensing information at an underwater robot's wrist [17][18][19]. In addition, because the cross-beam sensor has the advantages of a symmetrical structure, high sensitivity, and small interdimensional coupling, more and more scholars have improved it in order to obtain a six-dimensional force sensor with better performance [20][21][22].
Based on the traditional cross-beam sensor, Wu C. et al. [23][24][25] proposed a new six-axis wrist force sensor by setting through holes in each sensitive beam. The sensor has good static and dynamic characteristics.
To sum up, machine vision can better meet the intelligent recognition requirements of the manipulator system, while the six-dimensional force sensor can better meet the sensitivity and accuracy requirements of the manipulator during operation. Therefore, this paper applies machine vision and a six-dimensional force sensor to a manipulator control system with high accuracy requirements, in order to meet the demands placed on manipulator control systems in environments requiring high intelligence and high accuracy.

Basic Theory
2.1. Design of Controller. In order to realize autonomous control of the manipulator, an intelligent controller based on machine vision is designed. By learning step by step, the intelligent controller eventually learns to control the movement of the manipulator, so that the end of the manipulator is driven from the initial position to the target position. In each step, the intelligent controller collects an image of the environment through the visual sensor and obtains the next control quantity through online learning.
As shown in Figure 1, the intelligent controller includes a vision sensor, an image feature processor, a control actuator, and a learning arithmetic unit.
2.1.1. Vision Sensor. The visual sensor collects the actual information of the current environment in the form of color image, and the image matrix is the three-dimensional image matrix of RGB mode. The image matrix contains the current state and target state of the manipulator, as well as environmental information.
2.1.2. Image Feature Processor. After graying and thresholding the received image matrix I, the image feature processor adjusts its size and shape and then outputs the state matrix s_t to the core of the reinforcement learning algorithm. Figure 2 describes in detail how the RR-type, XR-type, and XY-type manipulators acquire the image matrix I through the visual sensor and input it to the image feature processor, which processes it into the state matrix s_t.
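The graying-thresholding-resizing pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the luminance weights, threshold value, state size, and nearest-neighbour resize are all assumptions.

```python
import numpy as np

def image_to_state(rgb, size=(64, 64), thresh=0.5):
    """Hypothetical sketch of the image feature processor:
    gray -> threshold -> resize -> state matrix s_t."""
    # graying: luminance-weighted combination of the RGB channels
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    gray = gray / 255.0
    # thresholding to a binary image
    binary = (gray > thresh).astype(np.float32)
    # naive nearest-neighbour resize to the fixed state size
    h, w = binary.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return binary[np.ix_(rows, cols)]
```

The resulting binary matrix is what the reinforcement learning core would consume as s_t.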

2.1.3. Control Actuator. The reinforcement learning arithmetic unit generates a control quantity a_t(a_1, a_2) according to the state matrix s_t and the reward value r_t produced by the image feature processor. The control actuator calculates the feed increment required to control the rotating joint or slider of the manipulator from the control quantity a_t and then maps the rotation angle increment directly to the joint torque M_t(m_1, m_2),
where x_i and y_i represent the coordinate position in the manipulator control system.

According to Equation (1), the principle in learning is as follows: if the learning process deteriorates, the distance between the end position of the manipulator and the target position increases, and the reward value is reduced as a punishment; if the learning process improves, the distance between the end position of the manipulator and the target position decreases, and the reward value is increased as a reward.
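A distance-based reward of the kind Equation (1) describes can be sketched as below. The exact form of Equation (1) is not reproduced in the text, so the specific shaping (reward equals the reduction in distance per step) is an assumption for illustration.

```python
import math

def reward(end_xy, target_xy, prev_dist):
    """Hypothetical distance-based reward: punish when the end
    effector moves away from the target, reward when it approaches.
    Returns the reward r_t and the new distance for the next step."""
    dist = math.hypot(end_xy[0] - target_xy[0], end_xy[1] - target_xy[1])
    # positive reward if the distance shrank, negative if it grew
    return prev_dist - dist, dist
```

Any monotone function of the distance change would implement the same punish/reward principle.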

2.2. Machine Vision Learning Model
2.2.1. Camera Selection. When the industrial manipulator completes its work, it needs to obtain the visual information in the environment and locate the workpiece position or target position. Therefore, it is necessary to map the position information in the image to the real-world coordinate system, so it is necessary to calibrate the camera and establish the relationship between the coordinate systems.
When collecting information, we generally need to select an appropriate camera to collect the required information.
The vision system in this paper is mainly composed of two cameras. The remote camera is fixed in front of the manipulator to shoot the target point, and the end camera is fixed at the end of the manipulator to form a hand-eye system, used to complete tasks such as defect detection. Therefore, the cameras need an appropriate focal length, good definition, and adequate resolution. The detailed parameters of the cameras are shown in Table 1.

2.2.2. Camera Calibration. The process of camera calibration is to determine the projection matrix P that converts the world coordinate system into the image coordinate system. Camera calibration is generally divided into two parts. The first is the conversion from the world coordinate system to the camera coordinate system; in this step, the camera extrinsic parameters R and T (rotation matrix and translation vector) are determined. The second is the conversion from the camera coordinate system to the image coordinate system; in this step, the camera intrinsic parameter matrix K is determined. The camera imaging model is divided into a linear model and a nonlinear model. The perspective projection model of the camera, which ignores lens distortion, is shown in Figure 3. It contains four coordinate systems. The world coordinate system O_w − X_wY_wZ_w is a real coordinate system defined by the user in real space according to need, reflecting the actual spatial position of the camera and the measured object. The camera coordinate system O_c − X_cY_cZ_c has its origin at the optical center of the camera; its Z_c axis coincides with the camera's optical axis, and its X_c and Y_c axes are parallel to the x and y axes of the imaging plane coordinate system. It is defined to describe the object's position from the perspective of the camera.
The image plane coordinate system o′ − xy is the plane on which the camera's photosensitive element forms the image; it is obtained by translating the camera coordinate system along its z-axis by the focal length f. The image coordinate system o − uv is the coordinate system of the image we actually see; its origin is in the upper left corner of the image, and its unit is the pixel. The optical signal collected by the camera through the sensor is converted into a digital signal, that is, the image we see. An image can be regarded as an M × N matrix, and each value in the matrix is the gray value of one pixel.
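The chain of coordinate transformations described above (world → camera via R, T; camera → pixel via K) can be sketched as a single projection function. The matrix names follow the text; the concrete values used in any test are illustrative only.

```python
import numpy as np

def project(world_pt, K, R, T):
    """Sketch of the pinhole projection, ignoring lens distortion:
    world coordinates -> camera coordinates -> pixel coordinates (u, v).
    K is the intrinsic matrix, (R, T) are the extrinsic parameters."""
    pc = R @ np.asarray(world_pt, float) + T   # world -> camera frame
    uvw = K @ pc                               # camera frame -> image plane
    return uvw[:2] / uvw[2]                    # homogeneous -> pixel (u, v)
```

A point on the optical axis projects to the principal point (c_x, c_y) of K, which is a quick sanity check for a calibration.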

Six-Dimensional Force Sensor Model
In the manipulator control system, the dynamic performance of the force sensor directly affects the practical performance of the manipulator. A force sensor is an electrical component that converts a force signal into a corresponding electrical signal. Among force sensors, the six-dimensional force sensor can simultaneously measure the force and torque components about the three coordinate axes in space; it is the most complete form of force sensor.

Orthogonal Parallel Six-Dimensional Force Sensor Model

The orthogonal parallel six-dimensional force sensor is composed of three parts: a force measuring platform, a fixed platform, and force measuring branches. The structure of the force measuring platform is exactly the same as that of the fixed platform. Each platform contains three support columns, distributed symmetrically around the z-axis of the coordinate system. There are six force measuring branches in total, all S-type single-dimensional force sensors, with a strain gauge fixed at the middle to detect the axial force on the branch. Of the six force measuring branches, three are placed vertically and connect the two platforms, and three are placed horizontally and connect the respective support columns of the two platforms. All connections are elastic spherical joints, and the three vertical branches and three horizontal branches are also circumferentially symmetrical about the z-axis of the coordinate system.
Because the force measuring branches and the two platforms are connected by elastic spherical joints, each branch can ideally be regarded as a two-force member bearing only its own axial tension or compression. Moreover, because the vertical and horizontal branches are arranged orthogonally in space, when the six-dimensional force sensor receives forces in the x and y directions or a torque about the z direction, the branches that mainly respond are those arranged horizontally; when the applied load is a moment about the x or y direction or a force in the z direction, the branches that mainly respond are those arranged vertically. This arrangement of the force measuring branches makes the measurement of the six-dimensional external force more accurate by structural principle and reduces the interdimensional coupling of the sensor. Figure 4 shows the structural diagram of the orthogonal parallel six-dimensional force sensor. b_1, b_2, b_3 denote the force measuring platform, and B_1, B_2, B_3 denote the fixed platform. b_1B_1, b_2B_2, b_3B_3 are the three vertically arranged force measuring branches, and the included angle between any two of b_1, b_2, b_3 as seen from the origin o is 120°. b_4B_4, b_5B_5, b_6B_6 are the three horizontally arranged force measuring branches; their axes are tangent to a circle centered at o′, the three tangent points are the midpoints of the horizontal branches, and the included angle between any two tangent points as seen from o′ is also 120°.
The rectangular coordinate system o − xyz is the measurement reference coordinate system of the six-dimensional force sensor, and the origin o is the geometric center of the lower surface of the force measuring platform; the x-axis is perpendicular to the horizontal force measuring branch b_4B_4, and the z-axis is perpendicular to the fixed platform B_1B_2B_3 and points upward.

Vibration Model of the Orthogonal Parallel Six-Dimensional Force Sensor

Taking the whole orthogonal parallel six-dimensional force sensor as the research object, when the force measuring platform is subjected to an external force F_w, its spatial motion is described by the generalized coordinate q = [q_1 q_2]^T = [q_x q_y q_z q_mx q_my q_mz]^T, where q_1 = [q_x q_y q_z] defines the translation of the force measuring platform along the three coordinate axes and q_2 = [q_mx q_my q_mz] defines its rotation about them. The vibration model of the orthogonal parallel six-dimensional force sensor can be regarded as composed of six spatial single-degree-of-freedom second-order mechanical vibration systems and one mass block M. The reference coordinate system o − xyz of the whole system is selected as in Figure 4.
Firstly, the relationship between the generalized coordinate q and the axial displacement l of the force measuring branches is established. Based on screw theory, l = Jq, where the ith row of the Jacobian J is [S_i^T, (r_i × S_i)^T], S_i is the direction vector of the ith force measuring branch axis in the reference coordinate system o − xyz, and r_i is the position vector of a point on the axis of the ith force measuring branch in the same coordinate system.
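The screw-theory mapping from platform motion to branch displacements can be sketched as below. This is a generic small-displacement sketch under the stated definitions of S_i and r_i; the branch geometry passed in is an assumption for illustration, not the sensor's actual dimensions.

```python
import numpy as np

def branch_displacements(q, S, r):
    """Sketch of l = J q: the axial displacement of each force measuring
    branch caused by a small platform motion q = [qx qy qz qmx qmy qmz].
    S[i] is the ith branch's axis direction, r[i] a point on that axis."""
    q = np.asarray(q, float)
    # row i of the Jacobian is [S_i^T, (r_i x S_i)^T]
    rows = [np.hstack([Si, np.cross(ri, Si)]) for Si, ri in zip(S, r)]
    J = np.vstack(rows)          # 6x6 for six branches
    return J @ q                 # axial displacement of each branch
```

Consistent with the text, a pure z-direction motion loads only branches whose axes have a z component, i.e. the vertical branches.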

Design of Collision Point Detection System for Manipulator Control

In order to test the feasibility of applying the above six-dimensional force sensor model in the manipulator control system, this paper proposes a manipulator control system based on machine vision and a six-dimensional force sensor. Unlike traditional applications of the six-dimensional force sensor, in order to achieve the required sensitivity of the manipulator control system, a collision detection method is used in the system design. The system design is shown in Figure 5.

Collision Point Detection Model.
According to the different impact points and external forces, body collisions can be roughly divided into single-point single-external-force collisions, single-point mixed external-force/external-torque collisions, multipoint external-force collisions, and multipoint mixed external-force/external-torque collisions. A multipoint, multi-external-force/external-torque collision can be regarded as a special combination of single-point, single-external-force collisions. In order to achieve high-precision collision detection, the installation position of the sensor in this paper is shown in Figure 6. The six-dimensional force sensor can measure the component information of force and torque on three axes. Assuming that the sensor at the base collects F(F_x, F_y, F_z) and M(M_x, M_y, M_z), the position of the collision point satisfies M = P × F, where P(P_x, P_y, P_z) is the position vector of the collision point relative to the sensor coordinate system. Equation (3) can be rewritten in component form. Substituting the data collected by the sensor in one frame into Equation (2) yields an external force action line L_c(p). Different parameters collected in each sensor frame determine different external force action lines. In practice, the direction of the collision's external force usually forms a certain angle with the motion direction of the robot. Therefore, the external force action lines calculated from different sensor frames will intersect at a certain point P_c in space, which is the collision point. The specific calculation method is as follows.
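Recovering the external-force action line from one frame of sensor data follows directly from M = P × F: the point on the line closest to the sensor origin is (F × M)/|F|², and the line runs along F. The sketch below illustrates this; it is a standard identity, not the paper's exact Equation (2).

```python
import numpy as np

def force_line(F, M):
    """Sketch of one external-force action line L_c(p) from a single
    sensor frame, using M = P x F. Returns a point on the line and the
    line direction; the collision point lies somewhere on this line."""
    F = np.asarray(F, float)
    M = np.asarray(M, float)
    p0 = np.cross(F, M) / np.dot(F, F)   # point on the line nearest the origin
    d = F / np.linalg.norm(F)            # the line runs along the force
    return p0, d
```

Each sensor frame yields one such line; intersecting lines from different frames localizes the collision point P_c.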

Due to the measurement error of the sensor, the calculated spatial force action lines will not intersect exactly at the collision point in space, but their projections in a suitable plane will intersect near the projection of the actual collision point. Therefore, the intersection of the external force action lines in that plane can be obtained by the projection method, and the intersection coordinates can then be substituted back into the original equations to obtain the collision point information, as shown in Figure 7.
Suppose L′_XOY is the projection of L_c(p) in the XOY plane, L′_XOZ its projection in the XOZ plane, and L′_YOZ its projection in the YOZ plane. In Figure 7, P′(x, y) is the intersection of the projections L′_XOY,1 and L′_XOY,2 of two external force action lines, and P_1 and P_2 are the points on the external force action lines corresponding to the projected intersection. η is the selection factor that determines the best projection plane; the plane is selected according to the included angles θ_XOY, θ_XOZ, and θ_YOZ between the two projected lines in the XOY, XOZ, and YOZ planes, respectively.
If the XOY plane is the optimal projection plane, P_1(x_1, y_1, z_1) and P_2(x_2, y_2, z_2) can be calculated accordingly. The real collision point lies near these two points, but its exact position cannot be calculated directly; therefore, the collision point P is preliminarily taken as the midpoint of P_1 and P_2. At the moment of collision, the sensor can collect multiple groups of data, and a schematic diagram of the solved spatial external force action lines is shown in Figure 8. In order to search for the optimal solution closest to the real collision point among the multiple groups of calculation results, let ζ(ζ_Mx, ζ_My, ζ_Mz) be the error factor and P″(x″, y″, z″) the preliminarily calculated collision point coordinate. Among the preliminarily determined collision points, the point that minimizes |ζ| is considered the optimal solution closest to the real collision point. The optimal search results are shown in Figure 9.
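The midpoint step can be sketched with the standard closest-approach construction for two nearly intersecting lines. Note this is a stand-in for the paper's plane-projection formulas, which are not reproduced in the text: instead of projecting onto the best plane, the sketch below finds the closest points of the two 3D lines directly and takes their midpoint.

```python
import numpy as np

def collision_point(p1, d1, p2, d2):
    """Sketch of estimating the collision point from two external-force
    action lines (point + direction each). Because of sensor error the
    lines are generally skew, so take the midpoint of their closest
    approach as the preliminary collision point P."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b              # zero only for parallel lines
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = p1 + t1 * d1                  # closest point on line 1
    q2 = p2 + t2 * d2                  # closest point on line 2
    return (q1 + q2) / 2
```

With more than two sensor frames, the same midpoint can be computed pairwise and the candidate minimizing the error factor |ζ| retained, as described above.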

Simulation Experiment
In order to verify the effectiveness and accuracy of the collision point detection model proposed in this paper, a 3-DOF manipulator model is constructed for simulation experiments, as shown in Figure 10.
In order to reduce the amount of calculation, the connecting rods and joint centers of the manipulator are symmetrical, and the center of gravity of each connecting rod lies on its own central axis. Table 2 shows the structural parameters of each connecting rod of the 3-DOF robot in this paper. The material is alloy with ρ = 2.7 × 10³ kg/m³. In Figure 10, l_1 = 120 mm, l_2 = 150 mm, l_3 = 200 mm, l_4 = 200 mm, and h = 30 mm.
The inertia tensor of each connecting rod relative to its center of mass is obtained accordingly. The data collected by the sensor at the base change with time; after dynamic compensation processing, they are used as the input of Equation (4) to calculate the collision point. The manipulator model is imported into Adams for verification. In the experiment, the magnitude, direction, and position of the simulated impact force are known, and the experimental result data are expressed in the sensor coordinate system. The experimental settings are θ_1 = −π/6, θ_2 = −π/6, θ_3 = −π/4, θ̇_1 = θ̇_2 = θ̇_3 = 0, θ̈_1 = 0.3491 rad/s², θ̈_2 = 0.1746 rad/s², and θ̈_3 = 0.4363 rad/s². The robot repeats the motion 6 times according to the set parameters and carries out a collision test at different points with the same collision force (100 N) when running to the third second. The experimental results are shown in Table 3.
In order to verify the accuracy of the proposed system, the error analysis of the experimental results is carried out.
where ΔF_n is the absolute error of the force in the three directions and |F_C| is the modulus of the measured force vector. The relative error of the resultant force is calculated from the three directional components f_Cx, f_Cy, f_Cz of the measured force and f_Dx, f_Dy, f_Dz of the calculated force, and the relative error in each direction of the position is calculated analogously. The calculation results are shown in Figures 11 and 12. Figure 11(a) shows the error of the experimentally measured resultant force on the manipulator, Figure 11(b) the error of the formula-calculated resultant force, Figure 11(c) the relative error of the measured resultant force, and Figure 11(d) the relative error of the calculated resultant force. Comparing Figures 11(a) and 11(b), the fluctuation of the experimentally measured resultant force is significantly larger than that of the formula-calculated value and shows no obvious regularity; the gap between the maximum and minimum error values is large, with a maximum of 2.26 N and a minimum of only 0.49 N. The formula-calculated value increases with the distance from the collision point and reaches its maximum, close to 1.5 N, at the position farthest from the force sensor coordinate system. Comparing Figures 11(c) and 11(d), their patterns are roughly the same as those in Figures 11(a) and 11(b); the difference is that, for the relative error, the maxima of the two are close, and the relative error of the measured value is smaller than that of the formula-calculated value. Therefore, the formula calculation can, to some extent, reflect the relative error of the measured manipulator control system.
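The error metrics discussed above can be sketched as follows. The exact formulas are not reproduced in the text, so these standard definitions (per-axis absolute error and Euclidean relative error of the resultant) are an assumption.

```python
import numpy as np

def force_errors(f_meas, f_calc):
    """Hypothetical sketch of the error analysis: per-axis absolute
    error (ΔF_n) and relative error of the resultant force, comparing
    measured components f_C* against calculated components f_D*."""
    f_meas = np.asarray(f_meas, float)
    f_calc = np.asarray(f_calc, float)
    abs_err = np.abs(f_meas - f_calc)                 # ΔF_n per axis
    rel_err = np.linalg.norm(f_meas - f_calc) / np.linalg.norm(f_meas)
    return abs_err, rel_err
```

The same pattern applies to the position components when computing the relative displacement error.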
Figure 12 shows the relative distance errors of the collision points. The position errors, both experimentally measured and formula-calculated, are all within the allowable error range, but they change with the distance of the collision point: the collision error first increases with distance, reaching a maximum of 16 mm, while the relative displacement error in each direction fluctuates within a range of about 5%. The experimental error values are basically consistent with the formula calculation, and the relative error at the farthest point does not exceed 5%. It can be seen that this method meets the collision accuracy requirements at the farthest point of the manipulator.

Conclusion
The sensitivity of the manipulator control system conditions the process of industrialization. This paper proposes a manipulator control system based on machine vision and a six-dimensional force sensor, which can directly realize intelligent and accurate control of the manipulator, solve the problem of low precision in manipulator control, and test the effectiveness of the system through collision point experiments. Through actual measurement, inspection, and analysis, the following conclusions are drawn: (1) A manipulator control system based on machine vision and a six-dimensional force sensor is proposed. The visual servo structure in the system can effectively capture the target and transmit it to the six-dimensional force sensor to control the motion thread of the manipulator and guide its precise operation. (2) Through the simulated impact point experiment, it is found that the manipulator control system constructed in this paper can effectively detect the impact point. At the same time, the formula calculation shows that the accuracy of the system does not change significantly with the position of the impact point. It has high accuracy and small error, with a maximum error of no more than 5%, meeting the accuracy requirements for detecting the impact point of the manipulator.

Data Availability
The experimental data used to support the findings of this study are included within the article.