Feature Extraction of Plant Leaf Using Deep Learning



Introduction
Botanical research has catalogued millions of plant species on the Earth's surface. The literature proposes a wide range of techniques to recognize plant types, most commonly leaf recognition and visual classification of plants through image processing and computer vision [1]. It is a challenging problem that requires careful handling of intraclass texture variation and asymmetrical shapes. Commonly, a plant is recognized by recognizing its specific organs, such as leaves, flowers, fruits, or bark, or a combination of them. Belhumeur et al. introduced such a system for quick classification and recognition of plant species from an entire collection; i.e., a process of hours can be accomplished within seconds [2]. Similarly, SIFT descriptors integrated with a bag-of-words model were applied for leaf recognition in [3]. Biologists have identified numerous types of leaves using machine learning classifiers [4-7] and computer vision techniques [1], but some kinds of leaves still remain to be identified and demarcated.
An open-source plant recognition problem was given as a challenge to the research community in the 2016 edition of LifeCLEF, targeting identification of unknown, never-seen categories on the basis of plant characteristics such as leaf shape, leaf veins, flower, fruit, stem, and branches of the entire plant [8]. With the advancement of artificial intelligence and neural networks, the research community can make its solutions more optimal in several domains. Automated plant recognition via neural networks and image processing is a critical area that allows recognition of leaf images with an accuracy of 80% to 97% [9,10].
A neural network works in the same fashion as the human brain, founded on mathematical models. The functional principle of neural networks is to understand and recognize patterns among different components. The fundamental unit of a neural network is the neuron, which is trained through repetitive tasks and, like the human brain, gains experience through knowledge acquired during training. The focus of training is to establish a connection between input and output; after training, the system can make predictions about what it has been trained on. The convolutional neural network (CNN) is a class of deep neural networks first proposed by LeCun [11]. Common applications of CNNs can be found in computer vision, natural language processing, and speech recognition [12-15]. CNNs function like the human vision processing system: a feature is detected using a local receptive field and shared weights and is associated with a feature map, saving computational load. Furthermore, subsampling is performed to achieve invariance of features with respect to geometric distortion. CNNs are considered better than classic neural networks on images because CNN layers exploit the spatial structure of input images, whereas feedforward neural networks cannot make sense of the order of their inputs.
This study provides a method to analyze leaf gas and leaf features such as area, diameter, perimeter, circularity, aspect ratio, solidity, eccentricity, and narrow factor on a dataset of healthy and dead leaves. Leaf attributes such as leaf area, diameter, chlorophyll, and nitrogen are calculated and analyzed through a CNN implemented in MATLAB. The results show the method to be an efficient one. The paper is organized into five sections. Related work in the field is presented in Section 2 with a close comparison. Section 3 illustrates the methodology with the proposed cluster. Simulated results are discussed in Section 4. Section 5 concludes the paper and discusses future directions of our work.

Related Work
In the past few decades, the research community has focused on the field of artificial intelligence, working in digital image processing, computer vision, and machine learning to bridge human and machine understanding [2,16-22].
This work is widely used in industry and medicine for the classification and identification of plants, which play a vital role in the Earth's ecosystem. Many plant species are at the edge of extinction. To save the Earth's biosphere, cataloguing flora diversity and studying plant databases is an important step. Techniques for leaf recognition using shape, descriptors, size, and texture have been studied for many years. Wu et al. [23] used the probabilistic neural network (PNN) to automatically classify the leaf features of 32 plants; the accuracy rate is above 90%, and the algorithm is fast in execution. Automatic plant identification via leaf characteristics is a challenging task constrained by many complications, including geometric deformations, illumination variations, and interspecies and intraspecies levels. To overcome these constraints, Yahiaoui et al. [24] proposed a boundary-based approach using the Otsu algorithm with a better classification rate: a scanned or scanned-like image is segmented into foreground and background pixel sets, yielding a binary image from which the boundary is extracted for the description stage. In the past, researchers used the shape of leaves as one of the classification features of plants [4,17,25,26], because plants can be identified through distinct shape attributes of the leaf even by nonexperts. Figure 1 shows a comparison of related studies. Enhanced neural networks like PNN, ANN, and CNN have significantly improved accuracy rates at a minimal cost in iterations. A pretrained CNN model for plant recognition was also proposed by Lee et al. [27], which achieved a performance of up to 99.6% verified through DN visualization. Lee et al. deduced that shape attributes of a leaf should be avoided as a basis for plant classification; the venation structure, however, is an important feature for distinguishing plant species.
Nitrogen (N), being an integral part of chlorophyll (Ch), plays a vital role in plant growth, as Ch absorbs light energy for photosynthesis [28]. Plants with sufficient N content in Ch are green and healthy; otherwise, plants are pale-green or yellow. Therefore, the status of Ch and N can be determined by exploiting the leaf color property using image processing. Ali et al. [29] used the Dark Green Color Index (DGCI) model to find the levels of Ch and N content from color images of leaves. The DGCI measures dark green color on a scale of 0 to 1. The measurements of Ch and N were recorded at three different stages of plant development through laboratory-based methods and using a SPAD-502 device (a hand-held absorbance meter used to measure relative greenness and Ch and N content).

Methodology with the Proposed Cluster
Extraction of leaf features using the principles of visual image processing helps in plant classification, and a training dataset is used to train the CNN. The input image needs to be preprocessed and is recognized after passing through a series of steps. A color image is composed of color pixels, where each pixel has red, green, and blue color planes. The input image can therefore be treated as a three-dimensional matrix corresponding to three color planes, with the pixel color values as matrix entries. Figure 2 represents the procedural flow of the proposed model as a block diagram. Images can exist in several color spaces, e.g., grayscale, RGB, HSV, and CMYK. The computational intensity is directly proportional to the number of dimensions of the image. The CNN plays a role in reducing images to a form that is easier to process without losing the features necessary for making a good prediction. This makes the CNN design not only good at learning features but also scalable to massive datasets. The Otsu algorithm, as used by Yahiaoui et al. [24], segments the input image into foreground and background before the three color planes are input to the HSV model.
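The Otsu segmentation step described above can be sketched in a few lines of numpy. This is a minimal reimplementation of the standard algorithm for illustration, not the authors' MATLAB code:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    probs = hist / gray.size
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0 = probs[:t].sum()        # background weight
        w1 = 1.0 - w0               # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * probs[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * probs[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy bimodal image: dark background rows, bright foreground rows
img = np.array([[10, 12, 11], [200, 210, 205]], dtype=np.uint8)
t = otsu_threshold(img)
mask = img >= t   # foreground (leaf) pixels
```

Thresholding with the returned value splits the toy image cleanly into its three dark and three bright pixels.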

Preparing the Training Dataset.
The proposed model uses the Caltech dataset for leaf gas analysis by extracting individual leaf features, which can be statistical or geometric.
The MNIST database, containing 60,000 training image examples, is used to train the convolutional neural network, with 10,000 image examples for testing. As a result of training, the network generates two datasets of images and labels necessary for the classification of input images, as shown in Table 1.
After downloading the MNIST data files, we unzip them into the MNIST directory. The database contains both healthy and dead leaf images. Figure 3 shows samples of the healthy and dead leaves on which the CNN has been trained.

Image Processing.
In computer science, image processing is used to obtain an improved version of a digital image, or to extract useful information from it, by applying different operations.
The objectives of image processing may include visualization, image sharpening and repair, image restoration, pattern measurement, and recognition. A digital image can be thought of as a 2D matrix of pixel values. In a CNN, an image should be thought of as a 3D matrix whose depth represents the color channels red, green, and blue; a grayscale image must first be converted to this 3D form. Most commonly, a pixel is 8 bits (1 byte), so a single pixel can represent a value between 0 and 255: the intensity of a color in color images, or, in grayscale images, a level where 0 corresponds to black and 255 to white. The proposed model uses RGB, HSV, and HSB color conversions.
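The 3D-matrix view of an image described above can be made concrete with numpy. The arrays below are toy examples, not data from the paper's datasets:

```python
import numpy as np

# A 2x2 RGB image: height x width x 3 channels, 8-bit pixels in [0, 255]
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# A grayscale image is 2-D; stacking the single plane three times gives
# the 3-D form a CNN input layer expects
gray = np.array([[0, 128], [64, 255]], dtype=np.uint8)
gray3d = np.stack([gray] * 3, axis=-1)
```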

Table 1: Image and label files generated for training and testing.

Dataset       | Images                  | Labels
Train dataset | train-images-idx3-ubyte | train-labels-idx1-ubyte
Test dataset  | t10k-images-idx3-ubyte  | t10k-labels-idx1-ubyte

Chlorophyll is estimated from the RGB image; the hue-saturation-value (HSV) representation is used to identify leaf color and histogram; and hue, saturation, and intensity are used for measuring leaf nitrogen. Figure 4 shows the flow chart of the HSV model. Counting the number of colour bands/channels of a healthy or dead leaf image is a preprocessing step, followed by physical rotation to direct the leaf apex to the right side, which is considered the initial point of the process. Then the image is resized and its pixels are separated into the three RGB color planes of a 400 x 300 x 3 matrix. The corresponding grayscale value of the RGB color image can be found by averaging or weighted averaging. The two methods are the same except that in weighted averaging each color intensity is given a weighting factor, defined as

Gray = 0.3 * I_R + 0.78 * I_G + 0.14 * I_B. (1)

Here, I_R, I_G, and I_B are the red, green, and blue intensities of a pixel, each multiplied by its predefined weight. The result of (1) is a grayscale image.
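The weighted-averaging conversion of equation (1) can be sketched as follows. Note that the paper's weights (0.3, 0.78, 0.14) sum to more than 1, so the result is clipped to [0, 255]; the common ITU-R BT.601 choice is (0.299, 0.587, 0.114):

```python
import numpy as np

def to_gray_weighted(rgb, w=(0.3, 0.78, 0.14)):
    """Weighted-average grayscale conversion per equation (1).
    Default weights are the paper's; they sum to 1.22, so bright
    pixels are clipped to the 8-bit maximum."""
    rgb = rgb.astype(np.float64)
    gray = w[0] * rgb[..., 0] + w[1] * rgb[..., 1] + w[2] * rgb[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)

# One pixel each of pure red, pure green, and white
px = np.array([[[255, 0, 0], [0, 255, 0], [255, 255, 255]]], dtype=np.uint8)
gray = to_gray_weighted(px)
```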
HSV is a cylindrical-coordinate representation of RGB, obtained here via the Gray World or White Patch algorithm.
This color space is used for the illumination value, which indicates the light source. Figure 5 shows the HSV color representation of healthy and dead leaves. The conversion of the RGB digital image into HSV color space includes color detection, mask recognition, and finding the number of blobs in the image. Parameters such as area, mean, max-mean, and min-mean are determined for the identified blobs, but only when a blob is larger than 10 pixels; for smaller blobs, the parameters are not computed. HSV low and high thresholding is also applied, followed by histogram analysis of the processed image. Finally, the mean values of H, S, and V are computed.
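The HSV conversion and thresholding steps above can be sketched with the standard-library colorsys module. The threshold values below are illustrative assumptions, not the paper's settings:

```python
import colorsys
import numpy as np

def mean_hsv(rgb):
    """Mean H, S, V of an RGB image (channels in [0, 255]); each in [0, 1]."""
    flat = rgb.reshape(-1, 3) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    return hsv.mean(axis=0)

def hsv_mask(rgb, h_lo, h_hi, s_min=0.2, v_min=0.2):
    """Binary mask of pixels whose hue lies in [h_lo, h_hi] with minimum
    saturation/value -- a simple low/high HSV-thresholding step."""
    flat = rgb.reshape(-1, 3) / 255.0
    hsv = np.array([colorsys.rgb_to_hsv(*px) for px in flat])
    keep = ((hsv[:, 0] >= h_lo) & (hsv[:, 0] <= h_hi)
            & (hsv[:, 1] >= s_min) & (hsv[:, 2] >= v_min))
    return keep.reshape(rgb.shape[:2])

# A 2x2 patch of pure green (hue = 1/3) passes a green-band threshold
green = np.tile(np.array([0, 255, 0], dtype=np.uint8), (2, 2, 1))
h, s, v = mean_hsv(green)
mask = hsv_mask(green, h_lo=0.25, h_hi=0.42)
```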

CNN Architecture.
Typical machine learning models used for classification are the support vector machine [30] and AdaBoost [31], whose performance depends on the extracted feature points. However, these models cannot extract the optimal feature points, because learning and classification proceed independently. The CNN is a neural network model mirroring the human visual system [32]. The layers of a CNN can be categorized as follows: (i) the convolution layer works like the lateral geniculate nucleus (LGN), detecting the boundary edges of objects; (ii) the pooling layer corresponds to the visual cortex (V3), used to identify the color of the whole object; and (iii) the fully connected layer acts as the lateral occipital cortex (LOC), detecting the color and shape of objects. The proposed CNN has a 9-layer structure in which each layer filters distinct features of the processed image. The input image is transformed into an array of pixel values of volume 28 x 28 x 3. The front layer is always a convolution layer, extracting the maximum or accumulated leaf features, which are passed to the pooling layer. The output of the model can be a single class or a group of classes that best describes the leaf image. There are three nodes in total. Figure 6 depicts the proposed CNN architecture.
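A single convolution, ReLU, and pooling stage, the building block of the architecture described above, can be sketched in plain numpy. The edge kernel and random input are toy examples, not the trained network's weights:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling over size x size windows."""
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    return fmap[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

def relu(x):
    return np.maximum(x, 0)

# One conv -> ReLU -> pool stage on a toy 28x28 single-channel input
x = np.random.default_rng(0).random((28, 28))
edge_kernel = np.array([[1, 0, -1]] * 3, dtype=float)  # vertical-edge detector
fmap = max_pool(relu(conv2d(x, edge_kernel)))
```

A 28x28 input under a 3x3 kernel gives a 26x26 feature map, which 2x2 pooling reduces to 13x13.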

Edge and Boundary Detection.
Edge detection is used in many applications of image processing, particularly object recognition and classification systems. Edge detection is the process of identifying points in a digital image at which the pixel value (brightness) changes sharply or has discontinuities. These points can be organized as a set of curved line segments known as edges. Edge detection techniques are popular because they are robust to conditions where illumination changes abruptly. In the proposed model, edge detection of leaf images follows a region-based approach employing the Prewitt and Roberts filter methods, which sharply identify the discontinuities between two regions in a greyscale image. The Prewitt operator is a discrete differentiation operator that approximates the gradient of the image intensity function; it returns the gradient vector, or the norm of that vector, at every pixel of the processed image. The Prewitt operator is relatively inexpensive in terms of computation because it convolves the image with a small, separable, integer-valued filter and its 90-degree rotation. To emphasize the edges, Sobel and Canny filters are also used in the edge detection algorithm. The mean square error (MSE) between the original and filtered images can be computed as

MSE = (1 / (M * N)) SUM_i SUM_k (m_ik - m'_ik)^2,

where m_ik is the mean of the original image and m'_ik the mean of the filtered image for the ith color plane at the kth pixel.
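The Prewitt gradient computation described above can be sketched as follows. This is a plain reimplementation of the standard operator, not the paper's MATLAB code:

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T   # the same filter rotated 90 degrees

def prewitt_edges(gray):
    """Gradient magnitude via the two Prewitt kernels (valid convolution)."""
    H, W = gray.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros_like(gx)
    for i in range(H - 2):
        for j in range(W - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * PREWITT_X)
            gy[i, j] = np.sum(patch * PREWITT_Y)
    return np.hypot(gx, gy)   # gradient norm at each pixel

# A vertical step edge: dark left half, bright right half
step = np.zeros((5, 6))
step[:, 3:] = 255.0
edges = prewitt_edges(step)
```

The response is zero in the flat regions and large only along the step, which is exactly the discontinuity behaviour the text describes.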
where e = 10 and v lies between MN - 1 and 255. The use of the Canny filter emphasizes the whole leaf edge, whereas the Prewitt and Roberts filters focus only on the upper part of the leaf.

Feature Extraction.
Separation of the leaf object (foreground) from its background is known as segmentation.
(Figure 5 panels: original color image, hue image, saturation image, and value image, for a healthy and a dead leaf.) This process adopts an adaptive-threshold K-means method. After segmentation, geometric features are extracted from the segmented image; for example, the aspect ratio and roundness (R) of the leaf can be computed. Leaf color falls under morphological features. Several statistical parameters, such as mean, skewness, and kurtosis, can be computed in the color space to represent the color features of the leaf. This method has low computational complexity and is applicable to real-time processing. As the processed image comprises three color planes (red, green, and blue), Ch and N can be estimated using the means of all three color planes after leaf contour extraction. In (6), the red and blue values are included for normalization, giving the estimate of Ch and N for healthy and dead leaves. Nitrogen can be computed using (7), where HE, St, and Bg are the hue value, saturation intensity, and brightness intensity of the colored image, respectively.
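Since the exact equations are not reproduced in this text, the geometric features above can be illustrated with their standard definitions (roundness R = 4*pi*A / P^2, aspect ratio = length/width), which may differ in detail from the paper's equations:

```python
import math

def roundness(area, perimeter):
    """R = 4*pi*A / P^2: equals 1 for a perfect circle, < 1 otherwise.
    (Standard definition; assumed, as the paper's equation is not shown.)"""
    return 4 * math.pi * area / perimeter ** 2

def aspect_ratio(length, width):
    """Ratio of the leaf's major-axis length to its minor-axis width."""
    return length / width

# Sanity check: a circle of radius r has A = pi*r^2, P = 2*pi*r, so R = 1
r = 3.0
R_circle = roundness(math.pi * r ** 2, 2 * math.pi * r)
```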
The maximum distance between two points on the boundary of the leaf object in the processed image is known as the effective diameter; it can be calculated from another morphological feature of the leaf, its area. Skewness (S_i) and kurtosis (K_i) are the imperative color moments used to represent color distribution when processing and retrieving images. Equations (9) and (10) formulate the two parameters for the ith color plane, respectively.
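The effective diameter and the color moments above can be sketched using their standard definitions; since equations (9) and (10) are not reproduced in this text, the moment formulas below are the usual ones and are assumed, not quoted from the paper:

```python
import math
import numpy as np

def effective_diameter(area):
    """Diameter of a circle with the same area: D = sqrt(4*A / pi)."""
    return math.sqrt(4 * area / math.pi)

def color_moments(plane):
    """Mean, skewness, and kurtosis of one color plane (standard moments)."""
    p = np.asarray(plane, dtype=np.float64).ravel()
    mu = p.mean()
    sigma = p.std()
    skew = ((p - mu) ** 3).mean() / sigma ** 3
    kurt = ((p - mu) ** 4).mean() / sigma ** 4
    return mu, skew, kurt

# A leaf of area pi has effective diameter 2; a symmetric intensity
# distribution has zero skewness
d = effective_diameter(math.pi)
mu, skew, kurt = color_moments([0, 1, 2, 3, 4])
```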

Numerical Results and Discussion
This work implements leaf recognition and Ch and N analysis using a CNN and contributes to the domain of leaf classification whether the leaf is healthy or dead. Two databases, namely, Caltech and Flavia, are used for training and experimentation. The Caltech database, prepared by Caltech University, contains 102 categories of leaves comprising 8000 images.
This database contains at least 40 images per category, which makes it attractive for large-scale systems developed to recognize and analyze leaf images. The second database, Flavia, contains healthy leaf images collected from different areas. A set of diseased leaf images is also included to analyze dead leaves and achieve diversity of experimentation. The final dataset contains 467 healthy and 60 dead leaf images, corresponding to 70% from Caltech and 30% from other datasets. Tables 2 and 3 show that all results are obtained within an error limit of 5 x 10^-7 using the backpropagation process. Table 2 characterizes the numerical results of dead and healthy leaves for Ch, N, area, and diameter. First, leaves are categorized by state, i.e., dead and healthy. Healthy leaves are further divided into two classes, 1 and 2: the first class contains normal leaf images, whereas the second comprises noisy images. Table 3 presents the recognition rates of dead and healthy leaves for Ch and N. The recognition rate of our proposed model drops by 1.4% when it is asked to recognize the noisy versions of the normal healthy-class images. To test the trained CNN's efficiency, dead leaves of three classes were examined. Several leaf features, as discussed above, are extracted to test the proposed model. It is observed that the Ch ratio of a healthy leaf is greater than that of a dead leaf, and vice versa for the nitrogen ratios, because more bacterial blobs are identified on the dead leaf than on the healthy one.
Comparing the computed performance of the plant leaf recognition system with existing research is essential. This study achieved outstanding results compared to existing studies [4,33-35]. However, that work uses different datasets and different types of classes, so a direct comparison of results is not justified. Table 4 shows the comparative results related to this study.

Conclusion and Future Directions
Leaf recognition plays a significant role in the science of plant classification. Convolutional neural networks (CNNs) have proven their capability in various applications of image processing and computer vision. This paper critically discusses the related work and then performs feature extraction of plant leaves with gas analysis, specifically under two leaf states, healthy and dead, using a CNN. The proposed model separates integrated complications, such as geometric deformations, varying illumination in the sample images, and interspecies and intraspecies levels, so they can be handled with lower complexity. The work shows how feature extraction of the leaf and gas analysis can be accomplished using the HSV model, while the trained CNN classifies the leaves using color specifications without a mathematical or statistical study. Two leaf databases, Caltech and Flavia, are used to train and test the CNN's efficiency, along with a set of dead leaves. The recognition accuracy of the proposed model is almost 98%.
This work can be extended by training on larger datasets, particularly of dead leaves, and validating the recognition ratio using CNNs. The extraction of advanced features from digital images, with image acquisition, adaptive image enhancement, and various boundary detection algorithms using newly developed tools, is required. The work can also be evolved using different machine learning and deep learning techniques, e.g., autoencoders.
Learning domain adaptation is another big task for researchers in the future.

Data Availability
The processed data are available upon request from the corresponding author.

Conflicts of Interest
The authors declare no conflicts of interest.