Aircraft surface inspection includes detecting surface defects caused by corrosion and cracks, as well as stains from oil spills, grease, dirt sediments, etc. In the conventional aircraft surface inspection process, human visual inspection is performed, which is time-consuming and inefficient, whereas robots with onboard vision systems can inspect the aircraft skin safely, quickly, and accurately. This work proposes an aircraft surface defect and stain detection model using a reconfigurable climbing robot and an enhanced deep learning algorithm. A reconfigurable, teleoperated robot, named “Kiropter,” is designed to capture aircraft surface images with an onboard RGB camera. An enhanced SSD MobileNet framework is proposed for stain and defect detection from these images. A self-filtering-based periodic pattern detection filter has been included in the SSD MobileNet deep learning framework to achieve enhanced detection of stains and defects in aircraft skin images. The model has been tested with real aircraft surface images acquired from a Boeing 737 and a compact aircraft’s surface using the teleoperated robot. The experimental results prove that the enhanced SSD MobileNet framework achieves improved detection accuracy of aircraft surface defects and stains compared to conventional models.
Aircraft skin inspection is essential under the Corrosion Prevention and Control Program (CPCP) to ensure the aircraft's structural integrity [
Human visual inspection is, by far, the most widely used method in aircraft surface inspection [
Designing a robotic inspection platform with good adherence, mobility, and flexibility is a key challenge. Typically, fixed-morphology climbing robots are used for aircraft inspection. They use magnetic devices, vacuum suction cups, or propeller force to adhere to and climb the aircraft surface [
Another constraint for the aircraft visual inspection technique is developing a detection algorithm to recognize stains and defects automatically. In the last decade, various visual inspection algorithms have been applied to the field of aircraft inspection and surface defect detection. These visual inspection algorithms fall into two types: traditional image processing-based visual inspection [
Typically, the key challenge of deep learning algorithms for this application is the requirement of a large amount of image data and an optimal preprocessing algorithm. Preprocessing plays a vital role in helping the network recognize low-contrast objects and differentiate between objects with similar features, such as dirt, stains, and scratches on the aircraft surface, all at a small cost, which is negligible compared to increasing the complexity of the CNN architecture [
In order to overcome the shortcomings mentioned earlier, this paper proposes a reconfigurable suction-based robot named “Kiropter” for aircraft inspection, along with an enhanced SSD MobileNet deep learning framework for recognizing and classifying stains and defects on the aircraft surface. The reconfigurable robot is capable of accessing confined areas, overlapped joints, and the fuselage of the aircraft body by dynamically changing its shape and functionality. The self-filtering-based periodic pattern detection filter is integrated with the SSD MobileNet deep learning framework to effectively enhance recognition results in low-contrast stain and defect areas in the aircraft skin image. This article is organized as follows: related work is reported in Section
Very few works on aircraft skin inspection exist in the literature. Some of these works focus on developing a robotic platform for inspection, while others focus on the detection algorithm. Siegel and Gunatilake [
The functional block diagram of the proposed visual inspection model is shown in Figure
Proposed scheme.
The Kiropter is a semiautonomous teleoperated differential 8W 2
Figure
Hardware components and communication networks.
The central control unit (CCU) is powered with an Arduino Mega 2560 microcontroller. It handles the wireless communication interface and generates the required control signals to the locomotion unit and shape changing unit according to the control flow chart as shown in Figure
Control of the Kiropter flow chart, for three simultaneous commands.
Locomotion of the Kiropter robot is achieved through three functions: adhesion, rolling, and transformation. Electric turbines are used for adhesion, and servomotors are used for navigation around the surface of the aircraft and for transformation. The electric turbines are controlled from the CCU through the HV75A brushless motor controller. The CCU generates the required PWM signal to the driving unit (HV75A) to adjust the turbine speed. For transformation, a servomotor is placed in the central articulation of the robot. It has a torque of 1
Configurations of the Kiropter for different situations: (a) flat, (b) 90
An EDF is an impeller driven by a brushless motor mounted inside a circular duct. It provides a thrust of 3.5 kg. The EDF receives its power directly from the batteries. The speed of the EDF and the energy used are controlled by changing the pulses from the CCU independently for each EDF using the electronic speed control (ESC) (Figure
EDF connections to the energy and control unit.
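The ESC-based speed control described above can be illustrated with a small sketch. Hobby-grade ESCs typically expect an RC-style PWM pulse of roughly 1000–2000 µs; this range and the function below are illustrative assumptions, not parameters taken from the Kiropter's firmware.

```python
def esc_pulse_us(throttle, min_us=1000.0, max_us=2000.0):
    """Map a normalized throttle command (0..1) to an RC-style ESC
    pulse width in microseconds. The 1000-2000 us range is a common
    hobby-ESC convention, assumed here for illustration."""
    throttle = max(0.0, min(1.0, throttle))  # clamp to the valid range
    return min_us + throttle * (max_us - min_us)
```

For example, a half-throttle command maps to a 1500 µs pulse, which the CCU would emit on the ESC signal line at the ESC's expected frame rate (commonly 50 Hz).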
The vision system consists of a WiFi-enabled HD 1080p camera (HDDB 10AD). The camera is placed 72 mm above the surface of the plane, at the center of the robot's body. The camera is inclined at an angle
Position of the camera in the Kiropter: front and side views.
Angle of the view of the camera with respect to the inertial system of the robot.
This section describes the vision-based aircraft surface stain and defect detection based on the enhanced deep learning technique as shown in Figure
Enhanced deep learning scheme.
Generally, the backgrounds present in training and test image data can affect the learning and recognition abilities of all detection algorithms [
Preprocessing algorithm.
Data: grayscale image
Result:
Step 1: transform the image to a frequency domain (coordinates
Step 2: apply the log absolute function on the Fourier transformed source image
Step 3: compute the self-filtering function
Step 4: suppress the periodic patterns in the frequency image using self-filtering function
Step 5: transform the filtered image
Step 6: edge enhancement is performed in this step. Due to the strong filtering effect in the prior stage, the edges of the stains and defects are slightly blurred, which may affect the detection accuracy of the algorithm. Hence, the Sobel filter is adopted to enhance the edges of the defect and stain regions in the periodic-pattern-suppressed image.
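The six preprocessing steps can be sketched as follows. Since the exact self-filtering function is not reproduced here, the code approximates it by clamping the log-magnitude spectrum to a locally smoothed envelope (a box blur stands in for the smoothing kernel), and `np.gradient` stands in for the Sobel operator; window sizes and weights are illustrative assumptions.

```python
import numpy as np

def box_blur(a, k=7):
    """Simple k x k box blur used to estimate the smooth spectral envelope."""
    pad = k // 2
    p = np.pad(a, pad, mode='wrap')
    out = np.zeros_like(a)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def suppress_periodic_patterns(gray, k=7):
    """Steps 1-5: FFT, log-magnitude, self-filtering approximation,
    suppression of periodic spectral peaks, inverse FFT."""
    F = np.fft.fft2(gray.astype(np.float64))           # step 1: frequency domain
    log_mag = np.log1p(np.abs(F))                      # step 2: log-absolute spectrum
    envelope = box_blur(log_mag, k)                    # step 3: assumed self-filter form
    filtered = np.minimum(log_mag, envelope)           # step 4: clamp periodic peaks
    new_mag = np.expm1(filtered)
    F2 = F * new_mag / np.maximum(np.abs(F), 1e-12)    # rescale magnitudes, keep phase
    return np.real(np.fft.ifft2(F2))                   # step 5: back to spatial domain

def enhance_edges(img, weight=0.5):
    """Step 6: add gradient magnitude to sharpen blurred stain/defect
    edges (np.gradient as a lightweight Sobel stand-in)."""
    gy, gx = np.gradient(img)
    return img + weight * np.hypot(gx, gy)
```

Applied to an aircraft skin image, the clamping step strongly attenuates the sharp spectral peaks produced by periodic backgrounds such as rivet rows, while the smooth spectral content of stains and defects passes largely unchanged.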
SSD MobileNet is an object detection framework trained to detect and classify defects and stains in the captured image. Here, MobileNet v2 is the base network used to extract high-level features from the images for classification and detection, while SSD is the detection model that uses the MobileNet feature map outputs and convolution layers of different sizes to classify and regress bounding boxes. The connection of MobileNet and SSD is shown in Figure
SSD MobileNet.
MobileNet v2 [
MobileNet.
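MobileNet's efficiency comes from replacing standard convolutions with depthwise separable convolutions: a per-channel spatial filter followed by a 1×1 pointwise convolution. A small sketch of the multiply-count comparison, with illustrative layer dimensions:

```python
def standard_conv_mults(h, w, c_in, c_out, k):
    """Multiplies for a standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def separable_conv_mults(h, w, c_in, c_out, k):
    """Depthwise (k x k per input channel) plus pointwise (1 x 1) multiplies."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# The savings ratio is exactly 1/c_out + 1/k**2, independent of the map size.
```

For a 14×14 map with 256 input and output channels and 3×3 kernels, the separable form needs roughly 11.5% of the multiplies of the standard convolution, which is why the backbone can run in real time on embedded hardware such as the Jetson Nano.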
SSD [
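SSD classifies and regresses a fixed set of default boxes at several feature-map scales; during training, each default box is matched to a ground-truth box when their intersection-over-union (IoU) exceeds a threshold (0.5 in the original SSD formulation). A minimal sketch of that matching step; the box format and helper names are illustrative:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_anchors(anchors, ground_truths, thresh=0.5):
    """SSD-style matching: default boxes with IoU >= thresh against any
    ground-truth box are treated as positives for that class."""
    return [i for i, a in enumerate(anchors)
            if any(iou(a, g) >= thresh for g in ground_truths)]
```

Unmatched default boxes are treated as background, and the localization loss is computed only over the matched (positive) boxes.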
This section describes the experimental results of the proposed scheme. The experiment has been performed in two phases. The first phase validates the Kiropter robot's performance on different aircraft surfaces and captures the aircraft surface for visual inspection. The second phase involves validation of the detection algorithm with the captured aircraft skin images. These images of defects and stains are captured by operating the robot in a semiautonomous mode. In the semiautonomous mode, the navigation control of the robot is performed manually through teleoperation. However, during the semiautonomous mode, the robot automatically avoids the windows and the nose of the plane using an inductive sensor and also changes shape automatically when it moves on the fuselage area.
The performance of the Kiropter robot was tested in two environments, at the RoAR laboratory and ITE, Singapore. In the RoAR laboratory, the platform was tested on curved aircraft skin, vertical flat, and glass surfaces. At the Institute of Technical Education (ITE) College, Singapore, the Kiropter robot was tested on actual aircraft, specifically a Boeing 737 and combat aircraft models. These results are shown in Figure
Kiropter robot in operation. The robot has been highlighted using yellow circles.
During the inspection, the robot was controlled through a GUI using Bluetooth communication. Through the GUI, the robot was paused for a few seconds at each location where stains and defects were visible, to capture the surface image with better quality. The captured images are instantaneously sent to the remote inspection console and are also recorded in parallel on a 32 GB SD card in the robot. Trials were performed in different regions of the aircraft surface, including the fuselage section, wings, and bottom of the aircraft. Figure
Captured defect and stain images. (a–c) have stains and (d–f) have defects.
The effectiveness of the detection algorithm has been tested with the Kiropter-captured aircraft skin images. This dataset contains about 2200 images from 15 different aircraft located at ITE, Singapore. The images are balanced across the two classes—stains (mainly from oil and liquid spills) and defects (which include cracks, scratches, and patches). Each image is resized to a
Standard performance metrics such as accuracy, precision, recall, miss rate, and F1 score have been used to evaluate the detection performance.
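For reference, a sketch of how such per-class metrics are computed from true-positive, false-positive, and false-negative counts; the exact counting convention for missed (undetected) images used here may differ, so the definitions below are assumptions:

```python
def detection_metrics(tp, fp, fn):
    """Per-class precision, recall, and F1 from raw counts.

    tp: detections of the class that are correct
    fp: detections assigned to the class that are wrong
    fn: class instances the detector failed to assign to the class
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, 8 correct detections with 2 false positives and 2 false negatives yield precision, recall, and F1 of 0.8 each.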
Figures
Stain detection results.
Defect detection results.
True and false detections. There are 140 images from each class.
| Class (no.) | SSD MobileNet predictions | | Enhanced SSD MobileNet predictions | |
| --- | --- | --- | --- | --- |
| | Stain | Defect | Stain | Defect |
| Stain (140) | 106 | 22 | 130 | 4 |
| Defect (140) | 30 | 94 | 6 | 128 |
Detection results.
| Metric | SSD MobileNet | | Enhanced SSD MobileNet | |
| --- | --- | --- | --- | --- |
| | Stain | Defect | Stain | Defect |
| Accuracy (%) | 79.4 | | 96.2 | |
| Precision (%) | 82.8 | 75.8 | 97.0 | 95.4 |
| Recall (%) | 77.9 | 81.0 | 95.5 | 96.8 |
| F1 score (%) | 80.3 | 78.3 | 96.3 | 96.1 |
| Miss rate (%) | 8.6 | 11.4 | 4.2 | 7.1 |
| Run time (Jetson Nano) | 73 ms | | 129 ms | |
| Run time (workstation) | 32 ms | | 51 ms | |
The performance of the algorithm has been compared with the standard SSD MobileNet (without a preprocessing stage) in terms of the abovementioned performance metrics. Both networks were trained for the same number of steps. Both stain detection and defect detection performance improve when preprocessing is used. In some cases, false classifications are avoided when preprocessing is used; this is evident in cases where defects and stains look similar but the difference is enhanced by preprocessing. A few of these cases are shown in Figure
Comparison with SSD MobileNet based on false detection.
Comparison with SSD MobileNet based on prediction confidence.
Comparison with SSD MobileNet based on miss detection.
This section describes the comparative analysis of the proposed algorithm with existing aircraft surface inspection algorithms. The comparison has been performed based on the detection accuracy of each model. Table
Comparison with aircraft inspection schemes.
| Algorithm and inspection module | Class | Accuracy (%) |
| --- | --- | --- |
| ANN for the CIMP robot [ | Crack and corrosion | 83.5 (avg.) |
| Contourlet transform [ | Crack and scratch | 70 |
| VGG Net [ | Defect detection | 87.62 |
| AlexNet [ | Defect detection | 84.73 |
| Enhanced SSD MobileNet | Stain and defect | 96.2 |
The effectiveness of the proposed algorithm is further analyzed with the deep learning framework on the DAGM 2007 defect image dataset. The DAGM 2007 dataset contains stain, crack, and pitted surfaces captured under different lighting conditions; on this dataset, the network achieved 93.2% accuracy.
Table
Comparison with other defect detection schemes.
| Algorithm | Class | Accuracy (%) | Application |
| --- | --- | --- | --- |
| AlexNet CNN [ | Crack | 90 | Cracks on concrete surfaces |
| SSD MobileNet [ | Surface defect detection | 95 | Surface defect detection |
| Compact CNN [ | Damaged spots, dust, and scratches | 86.82 | Metallic surface defect detection |
| 8-layer CNN in the UAV [ | Crack | 96.6 | Cracks on concrete surfaces |
| Faster RCNN [ | Crack, corrosion, and delamination | 84.7 | Metal surface inspection |
| Enhanced SSD MobileNet | Stain and defect | 96.2 | Aircraft inspection |
Generally, UAV-based inspection has several advantages over climbing-robot-based inspection due to its high mobility [
SSD MobileNet is a lightweight scheme that can perform real-time detection at the cost of some accuracy. Faster RCNN has better detection results but is larger and takes longer to run. The enhancement of images through preprocessing increases the accuracy of the proposed model while still allowing inference in real time.
This work proposed aircraft surface inspection using an indigenously developed reconfigurable climbing robot (Kiropter) and an enhanced visual inspection algorithm. An enhanced SSD MobileNet-based deep learning framework was proposed for detecting stains and defects on the aircraft surface. In the preprocessing stage, a self-filtering-based periodic pattern detection filter was included in the SSD MobileNet deep learning framework to reduce unwanted background information and enhance the defect and stain features. The feasibility of the proposed method was verified with parts of the aircraft skin in the RoAR lab and real aircraft at ITE Aerospace, Singapore. The experimental results proved that the developed climbing robot can successfully move around complex regions of the aircraft, including the fuselage and confined areas, and capture the defect and stain regions. Further, the efficiency of the detection algorithm was verified with the captured images, and its results were compared with the conventional SSD MobileNet and existing defect detection algorithms. The statistical results show that the proposed enhanced SSD MobileNet framework achieves improved detection accuracy (96
The data used to support the findings of this study are available from the corresponding author upon request.
The authors declare that there is no conflict of interest regarding the publication of this paper.
Sample video of the robot Kiropter, for demonstration purposes.