Small Object Detection Network Based on Feature Information Enhancement

Due to the small size and weak features of small objects, the performance of existing object detection algorithms on small objects is not ideal. In this paper, we propose a small object detection network based on feature information enhancement to improve the detection of small objects. In our method, two key modules, an information enhancement module and a dense atrous convolution module, are proposed to strengthen the expression and discrimination ability of feature information. The detection accuracy of this method on the PASCAL VOC, MS COCO, and UCAS-AOD data sets is 81.3%, 34.8%, and 87.0%, respectively. In addition, on small objects our detection results are slightly higher (by 0.2% and 0.1%) than those of the current advanced algorithms YOLOv4 and DETR, respectively. Moreover, when these two modules are integrated into other algorithms, such as RFBNet, they also bring considerable improvement.


Introduction
As one of the basic tasks of computer vision, object detection has been widely used in a variety of scenes, such as object tracking, intelligent monitoring, and automatic driving. With the development of deep convolutional neural networks [1,2], object detection has made a great leap from traditional hand-crafted feature methods to deep learning methods, and many object detection algorithms based on deep learning have been proposed. However, most existing object detection algorithms [3] can only successfully detect medium-sized and large objects in natural scenes, and their detection of small and dense objects is not satisfactory.
Object detection algorithms based on convolutional neural networks can be roughly divided into two-stage detection algorithms [4][5][6] and one-stage detection algorithms [7][8][9][10][11]. A two-stage detection algorithm first generates multiple candidate regions through selective search, then uses a convolutional neural network to extract features from the candidate regions, and finally carries out object category estimation and bounding box regression. Two-stage algorithms have high detection accuracy, but their detection speed is slow due to the large number of candidate regions. A one-stage detection algorithm regards object detection as a regression problem, directly extracts features from the input image, and carries out bounding box regression, which gives it an advantage in detection speed. These algorithms have achieved good detection results on general data sets such as PASCAL VOC [12] and MS COCO [13], but detecting small and dense objects remains a great challenge. Small or dense objects occupy few pixels in the original image and carry limited information. After multiple downsampling steps in a deep network, their resolution is further reduced, weakening or even losing the feature information and increasing the difficulty of detection. Therefore, small object detection is still a difficult problem in computer vision.
In recent years, with the continuous development of deep learning, small object detection research has attracted wide attention and has been widely applied in urban intelligent transportation, logistics management, agricultural and forestry development, public safety, disaster relief deployment, and other task scenarios. Small object detection therefore has important research significance and practical application value. Existing small object detection algorithms are mostly proposed on the basis of general object detection methods. They enhance the spatial details and semantic information of small objects by fusing multiscale features [14][15][16], increasing the receptive field of fine-grained features [17][18][19][20][21], and introducing contextual information around the object [22,23] to enrich the expression of feature information. In addition, anchor-free algorithms [24], attention module enhancement [25,26], super-resolution feature representation [27], and data augmentation [28] have also been studied to address small object detection.
In order to improve the detection accuracy of small objects while meeting the real-time requirements of detection, a small object detection network based on feature information enhancement (FIEN) is proposed in this paper, built on the classical one-stage algorithm SSD [7]. We design an information enhancement module (IEM) and a dense atrous convolution module (DAM) in FIEN, which enhance the context information around small objects and learn features with large receptive fields, so as to improve the detection accuracy of small objects. The main contributions of this paper are as follows: (1) Based on the fusion structure of the feature pyramid network [14], we design an information enhancement module. This module adds global and local information branches to enhance the context information of small objects by learning the global information, local information, and multiscale information of the input features.
(2) In order to reduce the loss of small object information, we propose a dense atrous convolution module, which uses atrous convolution to obtain features with receptive fields of different scales and then fuses them. It is worth noting that dense connections are used both to obtain the features of receptive fields at different scales and to establish connections between them. The receptive field is expanded without adding extra parameters or computation, so as to improve the detection of small objects. (3) The FIEN algorithm proposed in this paper achieves good detection accuracy on the PASCAL VOC [12], UCAS-AOD [29], and MS COCO [13] data sets. We also integrate the IEM and DAM modules into the RFB [17] network to further verify their effectiveness.

Small Object Detection Algorithm.
According to the definition of SPIE (Society of Photo-Optical Instrumentation Engineers), an object whose area is less than 80 pixels in a 256 × 256 image is a small object. Alternatively, according to the definition of the COCO data set, objects smaller than 32 × 32 pixels are considered small objects.
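The two definitions above can be expressed as a simple helper. The function name and interface are illustrative, not from the paper:

```python
def is_small_object(w_px, h_px, definition="coco"):
    """Return True if a bounding box of w_px x h_px pixels counts as a small object.

    'coco': area below 32 x 32 pixels (MS COCO definition).
    'spie': area below 80 pixels in a 256 x 256 image (SPIE definition).
    """
    if definition == "coco":
        return w_px * h_px < 32 * 32
    if definition == "spie":
        return w_px * h_px < 80
    raise ValueError(f"unknown definition: {definition}")
```

For example, a 20 × 20 instance (400 px²) is small under the COCO definition but not under the stricter SPIE definition.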
In recent years, small object detection has attracted extensive attention, but it remains very challenging because small objects have low resolution, carry little pixel information, and are easily disturbed by complex environments. Many scholars have done a great deal of research to improve the detection accuracy of small objects. Super-resolution technology has been applied to small object detection with the aim of increasing the resolution of small objects. This technology mainly generates super-resolution feature representations by increasing the resolution of the high-level feature map [27] or by using a GAN [30,31], so as to improve detection results for small objects. To address the problem that small objects carry little pixel information, many studies use multiscale fusion to construct features with both edge detail information and semantic information, which benefits small object detection. At present, there are three methods of feature fusion: element-by-element addition, element-by-element multiplication, and channel splicing. For example, the works in [10,14,32,33] added low-level feature maps of different scales extracted by the feature extraction network to high-level feature maps, so as to detect objects of different scales. DSSD [34] added deconvolution layers on top of the SSD [7] algorithm and multiplied the high-level features extracted by the deconvolution layers with low-level features of the same scale, so as to highlight the object area and improve small object detection. FSSD [16] resampled feature maps of different scales to the same scale for fusion, so as to further enhance the features used for detection. To counter the interference of complex backgrounds on small object information, introducing an attention mechanism is an effective method.
It makes the network pay more attention to the areas containing small object information, reduces the impact of background information on detection results, and improves the accuracy of small object detection. Li [35] introduced a channel attention module and used the correlation between channels to selectively enhance areas rich in discriminative information. YOLOv3-A [36] optimizes the redundant channel problem of different levels of features in the channel attention operation and uses a spatial attention mechanism to obtain the distribution of input features over spatial positions, so as to retain information that is effective for detection. Besides, research based on anchor-free methods [24], data augmentation [28], and other approaches has also been applied to small object detection. Different from the above methods, FIEN uses two key components, IEM and DAM, to obtain more feature information and thereby improve detection performance on small objects.

Multiscale Information Enhancement.
Multiscale information enhancement addresses the problem of object scale change by acquiring and fusing feature information at multiple scales, so as to improve the final detection results. For example, FPN [14] used a top-down structure with horizontal connections to detect objects of different scales using features at different scales, which effectively alleviated the scale-change problem. Since then, many works have improved on the FPN structure, such as adding additional pathways to further fuse deep and shallow features [32,37]. The attention mechanism [25,38] has been introduced to guide the fusion of feature information at different levels. Moreover, feature maps with different atrous rates have been fused to learn multiscale information [17][18][19]. Methods such as deconvolution and element-by-element multiplication to fuse high- and low-level features [34] are also widely used to enhance multiscale information. Different from the above work, this paper proposes the IEM and the DAM from the perspective of enhancing the context information around the object and enriching the receptive field information. On the basis of FPN, the IEM adds two branches, a global information branch and a local information branch, which enhance the context information of small objects by fusing the global, local, and multiscale information around the objects. The DAM uses atrous convolutions with different expansion rates to obtain features with receptive fields at multiple scales and then splices and fuses them to expand the receptive field without adding extra parameters or computation.

Methods
SSD [7], as a representative one-stage object detection algorithm, achieves good detection accuracy and speed. The SSD algorithm uses large-scale shallow features to detect small objects in the image and small-scale deep features to detect medium-sized or large objects, so as to achieve multiscale detection. However, the shallow features of SSD lack the guidance of global semantic information, resulting in low accuracy for small object detection. In order to improve SSD's ability to detect small objects, this paper proposes an object detection method based on feature information enhancement built on the SSD algorithm. The network can establish and strengthen the exchange and connection between information and produce more discriminative features. The overall structure is shown in Figure 1. Firstly, to help locate small objects, we reuse shallow features: Conv3-3 in the VGG-16 network is downsampled and Conv5-3 is upsampled to the scale of Conv4-3, and the three feature layers are then spliced in the channel dimension to obtain a multiscale feature map F, which contains both texture information and semantic information. On this basis, this paper proposes an information enhancement module (IEM) and a dense atrous convolution module (DAM), which use high-level semantic information to guide and enhance the detailed information of shallow small object areas. Based on the enhanced feature map, downsampling is carried out to obtain six feature maps with different scales, P6, P5, P4, P3, P2, and P1, whose sizes are 38 × 38, 19 × 19, 10 × 10, 5 × 5, 3 × 3, and 1 × 1, respectively. These six feature maps are used for object detection. Multiple prior boxes with different aspect ratios are set at each grid point in each feature map, and multiple bounding boxes are generated through classification and regression.
Finally, the bounding boxes obtained from the different scale feature maps are filtered by non-maximum suppression to produce the final detection results.
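The final filtering step is standard greedy non-maximum suppression, which can be sketched as follows. This is a generic illustration of the procedure, not the paper's exact implementation; the 0.45 suppression threshold is an assumption:

```python
import numpy as np

def iou(box, boxes):
    """IOU between one box and an array of boxes, all as [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, thresh=0.45):
    """Greedily keep the highest-scoring box, drop boxes overlapping it too much."""
    order = scores.argsort()[::-1]          # indices sorted by descending confidence
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # the second box overlaps the first heavily and is dropped
```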

Information Enhancement Module.
The traditional FPN fuses semantic information from the high levels with low-level feature maps, but the high-level features contain only single-scale semantic information and cannot provide more comprehensive and richer context information. To solve this problem, this paper adopts the structure of feature pyramid attention [38] and designs the information enhancement module, which aims to extract more semantic information from feature map F, fuse feature maps of different scales, and establish semantic communication between the information.
The core idea of the proposed information enhancement module is to integrate the multiscale semantic information of high-level features while introducing local and global information, so as to establish communication and learning between the different kinds of information. The semantic information is then used to enhance attention to spatial detail information and generate more discriminative features.
Assume the size of the input high-level feature map F is 2W × 2H × C. We obtain global information, local information, and multiscale semantic information through three parallel paths, as shown in Figure 2. The calculation process is as follows:

B1 = Conv(global(F)),
B2 = Conv(F),
B3 = FPN(F),
F_IEM = Add(B1, B2, B3),

where B1, B2, and B3 represent the feature maps obtained by the first, second, and third branches, respectively; global(·) represents global average pooling; FPN(·) represents the feature pyramid network; Add(·) represents element-wise addition; and Conv(·) represents a convolution operation. The first branch uses global average pooling to obtain the global information of each channel and then adjusts the number of channels through a 1 × 1 convolution layer to fuse and learn the global channel information. The second branch uses a 3 × 3 convolution to obtain the local information of the feature map. The third branch is a feature pyramid network that integrates features at three different scales. It uses a three-level convolution network with stride 2, with kernel sizes of 5 × 5, 3 × 3, and 1 × 1 in turn. The pyramid network fuses the information of the different scales in sequence, which more accurately fuses the context information of adjacent scales and yields richer multiscale semantic information. Finally, the output features of the three branches are added element-wise to obtain the final enhanced features.
The information enhancement module designed in this paper integrates context information at different scales and establishes relationships between multiscale information and global and local information, so as to obtain enhanced features with stronger representation ability.
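The three branches described above can be sketched in PyTorch as follows. This is a minimal sketch under stated assumptions: the channel counts, nearest-neighbor resampling in the pyramid, and the absence of normalization and activation layers are our choices, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IEM(nn.Module):
    """Information enhancement module: global + local + pyramid branches, added element-wise."""
    def __init__(self, channels):
        super().__init__()
        self.global_conv = nn.Conv2d(channels, channels, 1)               # branch 1: GAP + 1x1 conv
        self.local_conv = nn.Conv2d(channels, channels, 3, padding=1)     # branch 2: 3x3 conv
        # Branch 3: three-level pyramid, stride-2 convs with 5x5 / 3x3 / 1x1 kernels
        self.down1 = nn.Conv2d(channels, channels, 5, stride=2, padding=2)
        self.down2 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.down3 = nn.Conv2d(channels, channels, 1, stride=2)

    def forward(self, x):
        b1 = self.global_conv(F.adaptive_avg_pool2d(x, 1))   # N x C x 1 x 1, broadcasts on add
        b2 = self.local_conv(x)
        d1 = self.down1(x)                                   # pyramid level 1
        d2 = self.down2(d1)                                  # pyramid level 2
        d3 = self.down3(d2)                                  # pyramid level 3
        u = F.interpolate(d3, size=d2.shape[-2:]) + d2       # fuse adjacent scales in sequence
        u = F.interpolate(u, size=d1.shape[-2:]) + d1
        b3 = F.interpolate(u, size=x.shape[-2:])             # back to the input resolution
        return b1 + b2 + b3                                  # element-wise Add of the branches

out = IEM(64)(torch.randn(1, 64, 38, 38))                    # same spatial size as the input
```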

Dense Atrous Convolution Module.
In object detection tasks, there are usually many small objects or objects with large scale changes. To handle this, the feature map must cover receptive fields of different scales. Inspired by [39,40], this paper designs a dense atrous convolution module using dilated convolutions and dense connections, which obtains denser sampling of high-level features and larger receptive fields, establishes and strengthens the relationships between feature maps with different receptive fields, and learns richer information. Its structure is shown in Figure 3, in which the four branches are denoted F1, F2, F3, and F4. F1 is the original input feature, which is directly spliced with the output features of the other three branches, so as to preserve the spatial and semantic information of the original input and act as a residual connection. The F2 branch aims to enhance spatial information in the vertical direction: firstly, a 1 × 1 convolution reduces the number of channels; then a 3 × 1 convolution performs one-dimensional convolution along the column dimension to learn the vertical spatial relationships between feature points; and finally, a 3 × 3 convolution with an expansion rate of 3 further learns context information over a larger receptive field. The output features of F2 and F1 are spliced in the channel dimension and enter the F3 branch. The F3 branch aims to enhance spatial information in the horizontal direction: firstly, a 1 × 1 convolution reduces the number of channels; then a 1 × 3 convolution performs one-dimensional convolution along the row dimension to learn the horizontal spatial relationships between feature points; and finally, a 3 × 3 convolution with an expansion rate of 3 further learns context information over a larger receptive field.
F4 takes the spliced output features of F1, F2, and F3 as input, applies 1 × 1, 1 × 3, and 3 × 1 convolutions in turn, and then a 3 × 3 convolution with an expansion rate of 5 to enlarge the receptive field along both the column and row dimensions of the input feature. Finally, the output features of the four branches are spliced, and the final output features are obtained by adjusting the number of channels with a 1 × 1 convolution. The calculation process is as follows:

F2 = Conv_{3×3,d=3}(Conv_{3×1}(Conv_{1×1}(F1))),
F3 = Conv_{3×3,d=3}(Conv_{1×3}(Conv_{1×1}(C{F1, F2}))),
F4 = Conv_{3×3,d=5}(Conv_{3×1}(Conv_{1×3}(Conv_{1×1}(C{F1, F2, F3})))),
F_out = Conv_{1×1}(C{F1, F2, F3, F4}),

where Conv_{3×3,d=3} and Conv_{3×3,d=5} represent atrous convolution layers with expansion rates 3 and 5, Conv represents a convolution operation, C{·} represents splicing along the channel dimension, and F_out represents the final output feature.
In order to make full use of the specific information learned by each branch and enhance the flow and propagation of information, the four branches of this module are connected serially; that is, the output features of the preceding branches and the original input feature are spliced as the input of the following branch. Reusing the features of previous branches also avoids the information loss caused by convolution operations and further enriches the information.
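The densely connected branches above can be sketched in PyTorch as follows. The reduced per-branch channel width (C/4) and the absence of normalization and activation layers are our assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class DAM(nn.Module):
    """Dense atrous convolution module: each branch sees the outputs of all earlier ones."""
    def __init__(self, c):
        super().__init__()
        r = c // 4  # reduced channel count per branch (an implementation choice)
        self.f2 = nn.Sequential(
            nn.Conv2d(c, r, 1),
            nn.Conv2d(r, r, (3, 1), padding=(1, 0)),      # column-wise 3x1 conv
            nn.Conv2d(r, r, 3, padding=3, dilation=3))    # dilated 3x3, expansion rate 3
        self.f3 = nn.Sequential(
            nn.Conv2d(c + r, r, 1),
            nn.Conv2d(r, r, (1, 3), padding=(0, 1)),      # row-wise 1x3 conv
            nn.Conv2d(r, r, 3, padding=3, dilation=3))
        self.f4 = nn.Sequential(
            nn.Conv2d(c + 2 * r, r, 1),
            nn.Conv2d(r, r, (1, 3), padding=(0, 1)),
            nn.Conv2d(r, r, (3, 1), padding=(1, 0)),
            nn.Conv2d(r, r, 3, padding=5, dilation=5))    # dilated 3x3, expansion rate 5
        self.fuse = nn.Conv2d(c + 3 * r, c, 1)            # adjust channels after splicing

    def forward(self, x):
        f2 = self.f2(x)                                   # x plays the role of F1
        f3 = self.f3(torch.cat([x, f2], dim=1))
        f4 = self.f4(torch.cat([x, f2, f3], dim=1))
        return self.fuse(torch.cat([x, f2, f3, f4], dim=1))

y = DAM(64)(torch.randn(1, 64, 19, 19))                   # spatial size is preserved
```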

Experiment and Analysis
The FIEN network proposed in this paper is evaluated on the PASCAL VOC 2007 [12], MS COCO 2017 [13], and UCAS-AOD [29] data sets. The PASCAL VOC 2007 data set has 9,963 images in 20 categories, of which small objects account for about 57%. The MS COCO 2017 data set contains 80 categories, with 118,287 training images, 5,000 validation images, and 40,670 test images. Its images have complex backgrounds and a large number of instance objects per image, so the number of small objects is larger and the evaluation standard is stricter. The UCAS-AOD data set is a remote sensing image data set containing only planes and cars. However, because the images are captured from a high-altitude top-down view, they have a large field of view, resulting in many small objects and high background complexity, which poses a great challenge for detection.

Experimental Setup.
The experiments in this paper are implemented in PyTorch; the hardware environment is an NVIDIA GeForce RTX 2080Ti. In the training process, this paper follows the training strategy of the baseline detector and uses a backbone pretrained on ImageNet. The loss function is the sum of the localization loss L_loc and the classification loss L_conf:

L = (1/N)(L_conf + L_loc),    (9)

where N represents the number of prior boxes. Stochastic gradient descent is used to optimize the network weights; the momentum is set to 0.9, the learning rate to 0.004, and the weight decay to 0.0005. At the beginning of training, the model weights are randomly initialized, and immediately adopting the large learning rate of 0.004 can make training unstable. To ensure stable training, this paper warms up the learning rate:

l = (N_iter / (6 × epoch_size)) × l_rate,    (10)

starting from a small learning rate of 1e-6 and increasing it a little at each step. After six epochs, the learning rate reaches the preset 0.004, which is then used for the remaining training to speed up convergence.
Here N_iter represents the number of training iteration steps, l_rate is the preset network learning rate, and epoch_size is the number of batches contained in one epoch. Considering GPU memory limits, when training on the PASCAL VOC data set, the batch sizes for input image sizes of 300 × 300 and 512 × 512 are set to 16 and 14, respectively. When training on the UCAS-AOD data set, the batch size is set to 16, and when training on the MS COCO data set, the batch size is set to 8. In this paper, the mean average precision (mAP) over all object categories is used as the evaluation index:

mAP = (1/Q) Σ_q AP(q),

where Q represents the total number of detection object categories, q represents a particular category, and AP(q) represents the average precision of that category.
The average precision AP represents the area under the precision-recall curve:

AP = ∫₀¹ P(R) dR,

where P stands for precision and R for recall, calculated as

P = TP / (TP + FP),
R = TP / (TP + FN),

where TP represents the number of correctly identified positive samples, FP represents the number of negative samples incorrectly identified as positive, and FN represents the number of positive samples predicted as negative. Positive and negative samples are distinguished according to the selected IOU threshold: detections whose IOU exceeds the threshold are positive samples; otherwise, they are negative. In this paper, the IOU threshold is set to 0.5.
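The metrics above can be checked with a small script. Here we assume the matching of detections to ground truth has already been done, so each detection, sorted by descending confidence, is simply marked true or false positive at the chosen IOU threshold:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from raw counts, as in the equations above."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(is_tp, num_gt):
    """AP as the area under the precision-recall curve, accumulated point by point."""
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for hit in is_tp:                      # detections sorted by descending confidence
        tp, fp = tp + hit, fp + (not hit)
        p, r = tp / (tp + fp), tp / num_gt
        ap += p * (r - prev_recall)        # rectangle under the curve at this recall step
        prev_recall = r
    return ap

def mean_average_precision(aps):
    """mAP: the mean of per-category AP values."""
    return sum(aps) / len(aps)
```

For instance, four detections of which the first, second, and fourth match a ground-truth box (with four ground-truth boxes in total) yield `average_precision([True, True, False, True], 4) == 0.6875`.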

Results on PASCAL VOC 2007 Data Set.
To verify the effectiveness of the FIEN network, in this section the model is trained on the joint training set of VOC 2007 and VOC 2012 (16,551 images) and tested on the VOC 2007 test set (4,952 images). During training, the input image size is set to 300 × 300 and 512 × 512, respectively. To further verify the effectiveness and generality of IEM and DAM, this paper also integrates the two modules into the RFB algorithm for experimental analysis. Due to differences in experimental environments, SSD [7] and RFB [17] are reproduced in this paper. (1) Compared with the baseline networks SSD and RFB, FIEN and FIEN_RFB significantly improve detection performance. When the input size is 300 × 300, the overall detection accuracy is improved by 3% and 0.4%, respectively; when the input size is 512 × 512, it is improved by 1.5% and 0.4%, respectively. (2) Meanwhile, compared with other SSD-based algorithms, such as RSSD [42] and FSSD [16], FIEN performs significantly better. At the 300 × 300 scale, mAP increases by 1.7% and 1.4%, respectively; at the 512 × 512 scale, mAP increases by 0.5% and 0.4%, respectively. This shows that the proposed FIEN can capture more effective information to further improve detection accuracy. (3) As the input scale grows, the detection accuracy of the model improves, because larger images retain more information during feature extraction, which benefits object detection. However, blindly increasing the input image size during training and testing consumes more computing resources and time. Therefore, this experiment is carried out only for 300 × 300 and 512 × 512.
In addition, to further explore the detection performance of FIEN on small objects, the detection accuracy of different algorithms in each category is listed in Table 2. The results in Table 2 show that the detection accuracy of the proposed algorithm is higher than that of the SSD algorithm in all categories, especially in the bottle and plant categories, which contain more small objects. For some categories with a large proportion of small objects, such as boat, chair, and bird, the detection accuracy of FIEN_RFB is 2.6%, 1.4%, and 1.1% higher, respectively, than that of RFB, which shows that the two proposed modules can extract richer context information, benefiting the detection of small objects.

Results on MS COCO 2017 Data Set.
MS COCO 2017 is a comprehensive data set covering object detection, semantic segmentation, and instance segmentation. For the object detection task, the MS COCO 2017 data set contains a large number of objects with large scale changes, dense objects, and small objects, covering 80 categories, with 118,287 training images, 5,000 validation images, and 40,670 test images. The performance evaluation uses the average precision AP and the average recall AR, where IOU = 0.5:0.95 means that 10 thresholds are set in steps of 0.05 and the results are averaged over them. S, M, and L denote small, medium, and large objects, respectively.
During the experiment, this paper takes SSD512 as the baseline detector, trains FIEN512, and compares its performance with other algorithms. The experimental results are shown in Table 3; visual detection results are shown in Figures 4 and 5. From Table 3 and Figures 4 and 5, the results are summarized as follows:

Method          Backbone  mAP (%)
SSD300 [7]      VGG-16    81.2
FSSD300 [16]    VGG-16    81.7
MultDet300 [45] VGG-16    85.9
FIEN300         VGG-16    87.0

(1) The detection results of FIEN512 are significantly better than those of SSD512 [7], YOLOv3 [10], and DSSD513 [34], and comparable to RFB512 [17]. (2) Compared with YOLOv4 [11] and DETR [44], the overall detection result of FIEN512 is slightly lower, but FIEN512 effectively improves the detection accuracy of small objects while maintaining the detection accuracy of large and medium objects, which shows that the FIEN network has good advantages in detecting small and dense objects. (3) Figure 4 shows images with complex environments, variable object scales, and dense objects selected for detection. From the detection results in the fourth and eighth rows, it can be seen that FIEN512 performs well in detecting various types of objects, dense objects, and small objects, which further proves the effectiveness of the FIEN network on small and dense objects.

Results on UCAS-AOD Data Set.
To verify the detection performance of the FIEN network on small objects, the UCAS-AOD data set is selected for experiments. UCAS-AOD contains only two object categories, car and plane, plus background negative samples, with a total of 1,510 images. In this paper, 1,057 images are used as the training set and 453 images as the test set. Although the number of object categories and instances in UCAS-AOD is far lower than in the COCO data set, the correlation between objects is strong, making it suitable for verifying the effectiveness of this method for small object detection. Although increasing the input scale would improve detection accuracy, it would also slow down training and testing. Therefore, this paper compares only FIEN300 with other algorithms on the UCAS-AOD data set. The experimental results are shown in Table 4.
It can be seen from Table 4 that the mAP of the FIEN300 model on the UCAS-AOD data set is 87.0%, which is 2.8%, 2.3%, and 1.1% higher than SSD300 [7], FSSD300 [16], and MultDet300 [45], respectively, reflecting its superiority in small object detection. Figure 6 shows some test results of SSD and FIEN on the UCAS-AOD data set. It can be seen from the detection results in Figure 6 that the SSD algorithm misses small and dense objects; for example, several cars in the third row and second column are not detected, while FIEN detects them. The ablation results are shown in Table 5. According to Table 5, all three experimental schemes improve the detection accuracy of the baseline model, and adding both modules at the same time gives the best detection performance. This shows that the IEM and DAM modules designed in this paper are effective in capturing context information and establishing relationships between the information. The joint use of the two modules enhances the performance of the network and improves the detection accuracy of the model.

Verifying the Effectiveness of Each Branch in the IEM.
The IEM consists of a global information branch, a local information branch, and a multiscale semantic information branch. To further study the contribution of each branch, this paper designs four types of experiments: (1) adding only the multiscale semantic information branch; (2) adding the multiscale semantic information and global information branches; (3) adding all three branches; and (4) using convolution kernels of different sizes in the multiscale semantic information branch. The experimental results are shown in Table 6. The baseline model is SSD300, which achieves 77.2% mAP on the VOC 2007 data set. Following the settings in Section 2.1, we first add the multiscale semantic information branch with a convolution kernel size of 3 × 3 to the baseline model, and its performance improves from 77.2% to 78.4%. Then we use convolution kernels of 5 × 5, 3 × 3, and 1 × 1 instead of 3 × 3 kernels throughout, and the detection performance improves from 78.4% to 78.6%, which shows that using different convolution kernels captures richer information. Then we add the global information branch on this basis, and the detection accuracy reaches 79.0%. Finally, after adding the local information branch, the detection accuracy reaches 79.3%, effectively improving detection performance. In Table 6, C333 and C531 indicate that the convolution kernel sizes are 3 × 3 throughout and 5 × 5, 3 × 3, and 1 × 1, respectively; glo represents the global information branch; and loc represents the local information branch.

Verifying the Effectiveness of Different Expansion Rates in DAM.
To explore the influence of the expansion rates of the atrous convolutions in the three DAM branches on detection performance, this experiment analyzes and compares four schemes: (1) expansion rates of 3, 3, and 3; (2) expansion rates of 3, 3, and 5; (3) expansion rates of 3, 5, and 5; and (4) expansion rates of 3, 5, and 7. The experimental results are shown in Table 7. The best results are obtained when the expansion rates are set to 3, 3, and 5. A possible reason is that the branches enhancing the row and column spatial relationships are suited to the same moderate expansion rate, while the branch enhancing the overall spatial relationship needs a larger receptive field to obtain richer semantic and spatial information. Therefore, in this paper, atrous convolutions with expansion rates of 3, 3, and 5 are used to form the DAM for better object detection.
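The trade-off between expansion rates can be seen from the effective kernel size of a dilated convolution, k_eff = d × (k − 1) + 1. A quick check (illustrative helper, not from the paper):

```python
def effective_kernel(k, d):
    """Effective spatial extent of a k x k convolution with dilation (expansion) rate d."""
    return d * (k - 1) + 1

# The 3 x 3 atrous convolutions used in the DAM branches:
sizes = {d: effective_kernel(3, d) for d in (3, 5, 7)}   # {3: 7, 5: 11, 7: 15}
```

With rates 3, 3, and 5, the branches cover 7 × 7, 7 × 7, and 11 × 11 regions; a rate of 7 in the last branch would widen this to 15 × 15, which the Table 7 results suggest samples too sparsely to help.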

Verifying the Connection Mode of IEM and DAM.
To explore the impact of different connection modes of the two modules on detection performance, this paper evaluates two connection modes for the IEM and the DAM, cascade and parallel, as shown in Figure 7. The final detection results are shown in Table 8. It can be seen from Table 8 that connecting the two modules in parallel performs better than connecting them in cascade. This suggests that with a cascade connection, because the input feature scale is small, although richer information is captured after the information enhancement module, some spatial features of the original input are lost after the series of convolution operations, so the DAM receives less useful information and the detection effect is unsatisfactory. With a parallel connection, the two modules operate on the input features independently, and the obtained features do not affect each other.
They not only obtain rich context information but also establish relationships between the information. Finally, they are fused to complement each other and yield more discriminative features.

Conclusion
In this paper, we propose a novel small object detection network based on feature information enhancement (FIEN) with two simple yet effective components to alleviate information loss. Specifically, the IEM extends the function of FPN to utilize local and global information in the input features. We then introduce the DAM to enhance information flow and propagation between features and reduce the loss of small object information. Extensive evaluations on three data sets demonstrate that the proposed approach outperforms previous state-of-the-art methods in detecting small objects, and the two proposed modules generalize well to other algorithms with significant improvement. In addition, the detection algorithm can provide technical support for medical auxiliary diagnosis, intelligent agriculture, automatic driving, and other scenes. Although our method achieves good results on small objects, some missed and false detections remain for small objects with similar features or occlusion. In future work, we will adjust and optimize the IEM and DAM and verify the generalization of the two modules on more detectors. Meanwhile, we will also optimize the composition of the data set and increase the training on occluded small objects.

Conflicts of Interest
The authors declare that they have no conflicts of interest.