Detection and Analysis of Human Cells Based on Artificial Neural Network

The detection and classification of histopathological cell images is a hot topic in current research. Medical images are an important research direction and are widely used in computer-aided diagnosis, biological research, and other fields. Neural network models based on deep learning are also common in medical image analysis and in the automatic detection and classification of tissue and cell images. Current medical cell detection methods generally do not consider that the output is affected by other factors in the topological region, which leads to unavoidable errors in the accuracy and generalization of the algorithm; at the same time, current medical cell imaging methods predict classification markers too simplistically, which affects the accuracy of cell image classification. This study introduces the concepts of two kinds of neural networks and then constructs a cell recognition model based on the convolutional neural network principle and the staining principle. In the experimental part, we designed three groups of experiments under the same settings and tested the cell recognition model proposed in this study.


Introduction
One study trains neural network simulators using biosphere flux data collected by the EURO-FLUX project to provide spatial and temporal estimates of European forest carbon flux on the continental scale. The novelty of this method is that the neural network structure is constrained and parameterized using flux data, and a limited number of input driving variables are used [1]. Another study proposes a hybrid intelligent system based on past financial performance data, which combines a rough set method with a neural network to predict enterprise failure; by comparing traditional discriminant analysis and neural network methods with the hybrid method, its effectiveness is verified [2]. The artificial neural network method has been used to predict short-term load in large-scale energy systems: networks with one or two hidden layers and different neuron combinations were tested, and the prediction errors of the results were compared; when the neural network is partitioned by load pattern, it gives a good load forecast [3]. Improved criteria for WG and MPA were established and verified using artificial neural networks and traditional methods in a multicenter study of 240 WG patients and 78 MPA patients; appropriately trained neural networks combined with CT can distinguish these diseases and perform better than LR [4]. Support vector machine (SVM) and artificial neural network (ANN) systems have been applied to the drug/non-drug classification problem as an example of a binary decision-making problem in early virtual compound filtering; the results show that, compared with the artificial neural network, the solution obtained by support vector machine training has better robustness and a smaller standard error [5]. A further study proposes a new method based on the artificial neural network to identify MHC-II binding cores and binding affinities simultaneously.
A new training algorithm is used for training, which allows the correction of deviations in training data caused by redundant binding-core representations [6]. Another study introduces the implementation of FANN, a fast artificial neural network library written in ANSI C; the results show that the FANN library is markedly faster than other libraries on systems without a floating-point processor, while its performance on systems with a floating-point processor is equivalent to other highly optimized libraries [7]. A further study aimed to determine whether circulating tumor cells were present in the blood of patients with large operable or locally advanced breast cancer before and after neoadjuvant chemotherapy. The authors concluded that, in patients receiving neoadjuvant chemotherapy, the CELLSEARCH system can detect circulating tumor cells down to a low cutoff of 1 cell, and that detection of circulating tumor cells is not associated with primary tumor response but is an independent prognostic factor for early recurrence [8]. The pathological TNM stage is the best factor for judging the prognosis of non-small cell lung cancer; after isolating tumor cells from NSCLC patients by epithelial tumor cell size, cytological analysis was used to evaluate the presence of CTCs in surgical patients [9]. Another study proposes a microbial electronic manipulation and detection lab-on-a-chip based on a closed dielectrophoresis cage combined with impedance sensing.
This method is suitable for implementation in integrated-circuit technology, which makes it possible not only to manipulate and detect a single cell but also to reduce the scale of the system [10]. Circulating tumor cells have long been considered to reflect the invasiveness of tumors. Therefore, many groups have tried to develop analytical methods to reliably detect and enumerate CTCs, but such methods have become available only recently. One article reviews CTCs, especially the technical problems of their detection, the clinical results obtained so far, and future prospects [11]. To determine the clinical applicability of immunoglobulin heavy-chain gene rearrangement identification for multiple myeloma tumor cell detection, 36 consecutive newly diagnosed patients intending to receive high-dose chemotherapy were investigated in a research program; there was no consistent relationship between bone marrow MRD status and clinical course, and patients who were PCR-negative also had early recurrences [12]. Using yeast cells as a model system, a piezoelectric lead zirconate titanate-stainless steel cantilever beam was studied as a real-time cell detector in water. Under the experimental conditions, when the cell diffusion distance is less than the linear size of the adsorption area, the resonance frequency shift rate has a linear relationship with the cell concentration and can therefore be used to quantify it [13]. Although optical cell counting and flow cytometry devices have been widely reported, sensitive and effective nonoptical methods to detect and quantify cells attached to large surface areas of micro-devices are usually lacking. One study describes an electrical method based on measuring changes in the conductivity of the surrounding medium due to ions released by cells immobilized on the inner surface of a microfluidic channel [14].
The diagnostic value and prognostic significance of circulating tumor cell detection in bladder cancer are still controversial. One meta-analysis consolidates the current evidence on using CTC detection methods to diagnose bladder and other urothelial cancers and on the association between CTC positivity and advanced and metastatic disease; its conclusion is that CTC evaluation can support the diagnosis and differential diagnosis of bladder cancer [15].

Artificial Neural Network
2.1. RBF Neural Network. The RBF neural network is a kind of radial basis function network. When there are enough neurons in the hidden layer, it can approximate any continuous function arbitrarily well. It performs very well in local approximation, classification, and pattern recognition, and the training time of the algorithm is very short. The mapping relation in an RBF neural network is expressed as f(x): R^n ⟶ R^o, as shown as follows:

f(x) = Σ_{i=1}^{C} ω_i φ(‖x − c_i‖),

where C is the number of neurons in the hidden layer of the network, c_i is the center of the radial basis function of each hidden neuron, σ_i is its width, and ω_i is the weight between the ith hidden neuron and the output neuron. The RBF neural network must be trained to determine the radial basis center c_i and width σ_i of each hidden neuron and the weights ω_i between the hidden layer and the output layer, thereby determining the mapping relationship between inputs and outputs.
To ensure that each activation function is neither too flat nor too sharp, the activation function of the hidden neurons is taken to be a fixed radial basis function, and the center c_i of each hidden radial basis function is randomly selected from the training samples. The width of the radial basis function is defined as follows:

σ = d_max / √(2K),

where K represents the number of neurons in the hidden layer and d_max is the maximum distance between any two centers; this formula shows that the width of the hidden-layer neurons is constant.
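The center-selection and fixed-width heuristic above can be sketched as follows; this is an illustrative NumPy sketch (variable names, toy data, and the random weight vector are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

X_train = rng.normal(size=(100, 3))   # toy training data, inputs in R^3
K = 10                                # number of hidden neurons

# 1) choose K radial-basis centers at random from the training samples
centers = X_train[rng.choice(len(X_train), size=K, replace=False)]

# 2) fixed width from the maximum inter-center distance: sigma = d_max / sqrt(2K)
d_max = max(np.linalg.norm(a - b) for a in centers for b in centers)
sigma = d_max / np.sqrt(2 * K)

def rbf_forward(x, centers, sigma, weights):
    """f(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 sigma^2))"""
    dists = np.linalg.norm(centers - x, axis=1)
    phi = np.exp(-dists ** 2 / (2 * sigma ** 2))
    return phi @ weights

weights = rng.normal(size=K)   # in practice fitted, e.g. by least squares
y = rbf_forward(X_train[0], centers, sigma, weights)
```

In practice the output weights would be fitted by linear least squares once the centers and width are fixed, which is what makes RBF training fast.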

BP Neural Network.
The BP neural network is a multilayer feedforward neural network trained by error back-propagation. The learning process can be divided into forward signal transmission and backward error propagation. A schematic diagram of the BP neural network back-propagation algorithm is shown in Figure 1.
From this, it can be deduced that the weight correction value is as follows:

Δω_ji(n) = η δ_j(n) y_i(n),

where M represents the set of all inputs that affect neuron j, η is the learning rate, δ_j(n) is the local gradient, y_i(n) is the output of neuron i, and Ψ is the activation function.
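The correction rule above can be sketched for a single sigmoid output neuron; this is a minimal illustration (the function names and toy values are ours), where the local gradient is the error signal times the derivative of the activation:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def bp_update(w, y_in, target, eta=0.1):
    """One back-propagation step for an output neuron with sigmoid activation."""
    v = w @ y_in                        # induced local field v_j(n)
    y_out = sigmoid(v)                  # neuron output
    e = target - y_out                  # error signal e_j(n)
    delta = e * y_out * (1.0 - y_out)   # local gradient: e_j(n) * psi'(v_j(n))
    return w + eta * delta * y_in       # w_ji(n+1) = w_ji(n) + eta*delta_j*y_i

w = np.array([0.5, -0.3])
y_in = np.array([1.0, 0.8])
w_new = bp_update(w, y_in, target=1.0)  # output moves toward the target
```

For hidden neurons the local gradient is instead back-propagated from the layer above, but the weight correction Δω = η δ y takes the same form.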
2 Computational Intelligence and Neuroscience

Principle of Convolution Neural Network.
The convolutional neural network is based on a mathematical mapping: it can learn a mapping of this form independently. It specializes in learning that must be practiced in a specific space, so training can make it learn the mapping relationship between input and output. The process is shown as follows:

y = g(x) = (g_L ∘ g_{L−1} ∘ · · · ∘ g_1)(x),

where y represents the output vector, x represents the input vector, g represents the CNN, g_l represents the lth layer of the CNN, w_l represents the weight and bias vector of layer g_l, and ∘ represents the composition of the layer operations.
A convolutional neural network is usually composed of the following layer types (as shown in Figure 2). The convolution layer is used to extract important features, the pooling layer is used to reduce the number of parameters and overfitting, and the fully connected layer is usually used for network output after all convolution operations.
Input Layer: this layer is used to input data. In multidimensional data processing, the input data are usually images, so this study mainly introduces the input layer for image objects. First, the image information is converted into feature data and input into the convolutional neural network. The image structure is the embodiment of the image information; the CNN input layer keeps the original data when processing it. Images are usually divided into grayscale images and color images, and when a CNN analyzes different types of images, the inputs differ.
Convolution Layer: the convolution layer first detects each feature of the image locally and then aggregates the results at a higher level to obtain global information. The core of the convolution operation is a mathematical operation, which in convolutional neural networks usually means discrete convolution. The convolution formula is as follows:

x_j^l = F( Σ_{i∈M_j} x_i^{l−1} * k_{ij}^l + b_j^l ),

where x_i^{l−1} is the ith feature map of level l − 1, k_{ij}^l is the convolution kernel connecting it to the jth feature map of level l, M_j is the set of input feature maps, b_j^l represents the offset (bias) of the jth feature map of level l, and F represents the activation function. The most common activation function is the ReLU function, whose principle is as follows:

f(x) = max(0, x).

Pooling Layer: the pooling layer is usually combined with the convolution layer; it is mainly used to reduce the feature scale, compress data, reduce the number of network parameters, reduce overfitting, and improve the fault tolerance of the model. Fully Connected Layer: after several convolution layers and pooling layers, the convolutional neural network is followed by the fully connected layer. Output Layer: the focus of the output layer of the convolutional neural network is to produce the desired results for the task at hand. After calculation, different probability values are obtained from input to output.
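The convolution, ReLU, and pooling operations described above can be illustrated with a minimal NumPy sketch (the toy image and kernel values are ours, for illustration only; valid convolution, stride 1):

```python
import numpy as np

def conv2d(img, kernel, bias=0.0):
    """Valid 2-D convolution with stride 1."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel) + bias
    return out

def relu(x):
    """ReLU activation: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit."""
    H, W = x.shape
    return x[:H - H % size, :W - W % size] \
        .reshape(H // size, size, W // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)        # toy 6x6 "image"
kernel = np.array([[-1.0, 0.0], [0.0, 1.0]])          # toy 2x2 kernel
feat = max_pool(relu(conv2d(img, kernel)))            # one conv stage
```

Each stage shrinks the spatial size exactly as described: the 6 × 6 input becomes 5 × 5 after valid convolution and 2 × 2 after 2 × 2 pooling.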

Proposition and Construction of Cell Image Detection Model.
Assuming that the time domain remains constant, Ω is defined as the state region of the output y, which is based on the finite state of the model. Suppose that a spatially constrained regression model g of the form y = g(Ω; s(x)) is used, where s(x) is an unknown parameter vector. The result of the last layer of an ordinary CNN is as follows:

x^L = f_L(w_L x^{L−1}),   (8)

where x^{L−1} is the output of layer (L − 1) of the neural network and w_L is the weight of the last layer, whose output is produced under the mapping f_L. Based on the theoretical analysis of spatial constraints in this study, we extend the standard CNN to estimate s(x), so that the last two layers (f_{L−1}, f_L) of the network are defined as follows:

s(x) = f_{L−1}(w_{L−1} x^{L−2}),   (9)
y = f_L(Ω; s(x)),   (10)

where x^{L−2} is the output of layer (L − 2). Formula (9) is the parameter estimation layer: according to the weights w_{L−1}, it maps the image to a parameter vector. Formula (10) is the spatial constraint layer, which applies the parameter vector within the regression model.

Figure 1: Schematic diagram of the BP neural network back-propagation algorithm.

At the beginning of kernel image recognition, an image plane x ∈ R^{H×W×D} with height H, width W, and feature number D is given, and the goal is to detect the center point of each kernel. In this study, the Euclidean distance from a pixel to a core, i.e., ‖z_j − z_m^0‖_2, is used when a core is detected, where z_j and z_m^0 represent the coordinates of y_j and the center coordinates of the mth core, respectively. The weights are reduced, i.e., normalized, by a regularization formula. Let Ω = {1, · · · , H′} × {1, · · · , W′} be the spatial region of y, whose jth element is indexed by j = 1, . . . , |Ω|. Equation (12) is defined as follows:

y_j = 1 / (1 + ‖z_j − z_m^0‖_2^2 / 2)  if ‖z_j − z_m^0‖_2 ≤ d, and y_j = 0 otherwise,   (12)

where z_j and z_m^0 represent the coordinates of y_j and the center coordinate of the mth core, respectively, and d is a constant radius. The probability map defined by Equation (12) has a maximum value near the center of each core z_m^0 and is flat elsewhere. Next, the predicted output ŷ generated by the space-constrained layer of the network is determined. Based on the known structure of the probability map described in Equation (12), we define the predicted output ŷ_j of the jth element as in Equation (13).
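The proximity map of Equation (12) can be sketched as follows. Note this is our illustrative reading of the equation (an inverse-quadratic peak at each annotated center, flat zero outside a radius); the function name and toy values are ours:

```python
import numpy as np

def proximity_map(shape, centers, r=4.0):
    """Map that peaks (value 1) at each nucleus center and is 0 elsewhere.

    y_j = 1 / (1 + ||z_j - z_m||^2 / 2) inside radius r of a center z_m,
    and 0 outside, following the structure described for Equation (12).
    """
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.zeros(shape)
    for cy, cx in centers:
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2      # ||z_j - z_m||^2
        mask = d2 <= r ** 2                       # flat (zero) outside r
        out[mask] = np.maximum(out[mask], 1.0 / (1.0 + d2[mask] / 2.0))
    return out

pm = proximity_map((11, 11), centers=[(5, 5)])    # one annotated nucleus
```

The maximum of the map sits exactly at the annotated center, which is what lets the network regress nucleus positions instead of per-pixel labels.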
The purpose of formulas (14) and (15) is to take the weights and deviations output by the previous layer, normalize them, and combine them with the previous predictions to obtain a parameter estimate. Formula (16) plays the same role for the upper output: after the corresponding weights and deviations are given, normalization is carried out to obtain the parameter vectors, where the former term represents the deviation (bias) and the latter represents the weighting. Here sigm(·) represents the sigmoid function commonly used in convolutional neural networks, often applied to the output of hidden neurons; its value range is (0, 1), i.e., it maps any real number into (0, 1) and is thus used for normalization. The principle is shown as follows:

sigm(x) = 1 / (1 + e^{−x}),

where x represents the data after zero-mean processing and S(x) represents the data after normalization. The learning method uses a loss function in which ε is a small constant representing the ratio of nonzero-probability pixels to the total number of zero-probability pixels in the training input, and H(y_j, ŷ_j) is the cross-entropy loss, which is specifically defined as follows:

H(y_j, ŷ_j) = −[y_j log ŷ_j + (1 − y_j) log(1 − ŷ_j)].

When the actual value is y_j = 1, H(y_j, ŷ_j) = −log(ŷ_j): as the predicted value ŷ_j approaches 1, −log(ŷ_j) approaches 0, the minimum error value; as ŷ_j approaches 0, −log(ŷ_j) approaches positive infinity, the maximum error value. When the actual value is y_j = 0, H(y_j, ŷ_j) = −log(1 − ŷ_j): as the predicted value ŷ_j approaches 0, the loss approaches 0, the minimum error value, while as ŷ_j approaches 1, the loss approaches positive infinity, the maximum error value.
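The sigmoid normalization and cross-entropy behavior described above can be checked numerically with a short sketch (function names and sample values are ours):

```python
import numpy as np

def sigm(x):
    """Sigmoid: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy H(y, y_hat); eps guards against log(0)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

# The loss is near 0 when the prediction matches the true label, and grows
# without bound as the prediction approaches the wrong label.
loss_good = cross_entropy(1.0, sigm(4.0))    # confident and correct
loss_bad = cross_entropy(1.0, sigm(-4.0))    # confident and wrong
```

This matches the limiting behavior in the text: the minimum loss is approached as the prediction nears the true label, and the loss diverges as it nears the opposite label.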
The detailed parameters of each convolution layer are shown in Table 1.
In Table 1, it can be seen that the input is a patch of size 27 × 27 and that the output feature map after the final network frame is 11 × 11. To extract and merge all feature information, the sliding-window stride is always set to 1, and the activation function is uniformly the ReLU function. The structure of the network model described in this article is shown in Figure 3.
F is the fully connected layer, whose neurons represent medical image information without spatial information; S1 is the new parameter estimation layer, whose neurons represent the estimated position information; S2 is the spatial constraint layer; L is the total number of layers in the network, and each neuron represents the medical image information with state parameter information.

Coloring Principle of Stain.
The color deconvolution method is mainly based on an orthogonal transformation of the original RGB image. According to the Beer-Lambert law, the relationship between the light intensity of the histological cell image and the staining matrix is expressed as follows:

I_C = I_{0,C} e^{−Qc},   (20)

where I_{0,C} is the intensity of incident light entering the tissue cell image, I_C is the intensity of light transmitted through the tissue cell image, the subscript C is the RGB three-channel identifier, Q is the dye color matrix, and c is the dye concentration (absorbance). It can be seen from Equation (20) that the relationship between the transmitted light intensity and the dye content is a relatively complex nonlinear one. In the RGB color model, the light intensities of each pixel at the camera are I_R, I_G, and I_B, respectively. The optical density (OD) expression of each channel is shown as follows:

OD_C = −log(I_C / I_{0,C}).   (21)

It can be seen from Equation (21) that the optical density of each channel has a linear relationship with the absorbance of the light-absorbing dye, so the optical density of each channel can be used to distinguish the color rendering effects of several dyes. The color effect of each pixel can be quantified by a 3 × 1 RGB three-channel optical density vector. For simple hematoxylin staining, the absorbance values of the R, G, and B channels are 0.18, 0.20, and 0.18, respectively. The size of the color matrix Q is related to the number of dyes, and each element of the matrix is proportional to the absorbance of the corresponding channel. For three dyes, the three-channel color system is defined by a matrix in which each row represents a dye and each column represents the absorbance values of the R, G, and B channels. In this data set, only two dyes are used for staining, so the corresponding color system of the R, G, and B channels reduces to a 2 × 3 matrix. In the staining experiment, a single dye was used at a time to obtain the absorbance values of the three RGB channels after staining with each dye.
The staining formula for hematoxylin and eosin multiple staining follows accordingly.
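The Beer-Lambert optical-density conversion above can be sketched in a few lines (the pixel intensities and function name are ours, for illustration):

```python
import numpy as np

def optical_density(I, I0=255.0):
    """Convert transmitted-light RGB intensities to per-channel OD.

    OD_C = -log10(I_C / I_{0,C}), following Equation (21).
    """
    I = np.clip(np.asarray(I, dtype=float), 1.0, I0)  # guard against log(0)
    return -np.log10(I / I0)

od = optical_density([200, 120, 60])  # R, G, B intensities of one pixel
# channels that absorb more light (lower transmitted intensity)
# have higher optical density
```

This is exactly the linearization that makes stain separation tractable: intensities combine multiplicatively, but optical densities combine additively across dyes.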

Color Deconvolution.
To make the color effect of each dye in a multicolor image stand out, the RGB information must be transformed orthogonally. The purpose of the orthogonal transformation is to make the color effects of the dyes independent of each other, so that the color effect of a single dye can be obtained. The transformed matrix must be normalized, with each dye's optical density vector scaled to unit length, giving the normalized optical density matrix A. The N × 2 matrix C is used to describe the color effect of the two dyes at each pixel, and the optical density matrix of the collected image is then Y = AC. The multicolor image is separated by color deconvolution theory, and the separated images can be used for density and texture analysis. The cell sample images were processed according to the H&E staining mode, and the experimental results are shown in Figures 4-6.
In the image of the isolated hematoxylin-stained component, the nuclei are blue, while in the image of the eosin-stained component, the cytoplasm is pink. After color inversion of the pathological picture of this material, the separation of nucleus and cytoplasm is very good. As shown in the above figures, the color deconvolution method can be used as an image preprocessing method in this study.
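The deconvolution step itself reduces to a least-squares unmixing with the pseudo-inverse of the normalized stain matrix. The sketch below is our illustration; the hematoxylin/eosin RGB absorbance vectors are commonly quoted reference values, not the ones measured in this study:

```python
import numpy as np

# Rows of A are unit-length RGB optical-density vectors, one per dye.
H = np.array([0.65, 0.70, 0.29])   # hematoxylin absorbance (reference values)
E = np.array([0.07, 0.99, 0.11])   # eosin absorbance (reference values)
A = np.stack([H / np.linalg.norm(H), E / np.linalg.norm(E)])   # 2 x 3

def unmix(od_pixel, A):
    """Recover per-dye densities c from one pixel's OD vector.

    Solves A^T c = od in the least-squares sense (color deconvolution).
    """
    return np.linalg.pinv(A.T) @ od_pixel

od = 0.8 * A[0] + 0.3 * A[1]   # synthetic pixel: known mixture of the dyes
conc = unmix(od, A)            # recovers the mixing coefficients
```

Applying `unmix` to every pixel yields one density image per dye, i.e., the separated hematoxylin and eosin component images described above.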

Comparative Experiment and Analysis of Cell Detection.
In this section, we designed control groups with the same settings as the experimental group, tested the SCNN, SR-CNN, and SSAE models, respectively, and evaluated them according to detection performance on the CRCHistoPhenotypes data set.
This section selects 100 cell images from the test data set and records the precision, recall, and F1 scores of the three experimental models on these images. Tables 2-4 compare in detail the differences between the three experimental systems on the three evaluation indexes. Table 2 shows that the maximum recall of SCNN is 0.9076, which is 0.0546 and 0.2146 higher than SR-CNN and SSAE, respectively; its minimum recall is essentially the same as SR-CNN's but 0.16 higher than SSAE's; on average, SCNN still leads SR-CNN and SSAE. The comparative analysis of the maximum, minimum, and average recall of SCNN shows that detection performance has been greatly improved. However, the mean square error of SCNN is larger than that of SR-CNN and SSAE, which shows that its stability is not as good; but the difference is very small, only 0.01, which is within an acceptable range.
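For reference, the three evaluation indexes compared in these tables can be computed from detection counts as follows; this is a standard-definition sketch, assuming the paper's "accuracy" and "recovery rate" correspond to the usual precision and recall (the counts below are illustrative, not from the experiments):

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp)                         # "accuracy" index
    recall = tp / (tp + fn)                            # "recovery rate" index
    f1 = 2 * precision * recall / (precision + recall) # harmonic mean
    return precision, recall, f1

p, r, f1 = detection_scores(tp=90, fp=10, fn=15)
```

F1 is the harmonic mean of the other two, so it penalizes a model that trades one index sharply against the other.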
It can be seen from Table 3 that, in terms of precision, the highest precision of SCNN is 0.8883, which is 0.0503 and 0.2163 higher than SR-CNN and SSAE, respectively; SCNN has a minimum precision of 0.7002, while the minimum values for SR-CNN and SSAE are 0.6801 and 0.5141, respectively. In terms of average precision, SCNN and SR-CNN are basically the same, SCNN being only 0.002 behind, which lies within the normal statistical range, and both are clearly ahead of SSAE. The above three groups of comparative data show that SCNN has excellent performance in precision. In terms of stability, the three experimental systems are basically the same, and all are relatively stable. It can be seen from Table 4 that the maximum F1 of SCNN is 0.836947, which is 0.009 and 0.173 higher than SR-CNN and SSAE, respectively. The minimum F1 of SCNN is 0.70065, 0.009 lower than SR-CNN's but 0.132 higher than SSAE's. In terms of average F1, SCNN improves on SR-CNN by 0.008 and significantly surpasses SSAE. The above three sets of comparative data show that SCNN performs very well on F1. In terms of stability, the variation of the F1 score of SCNN is smaller than SR-CNN's by less than 0.005 and smaller than SSAE's by 0.001, which shows that this system is more stable. The above analysis compares the detection performance differences on multiple cell images in detail across the three indexes. Table 5 analyzes and compares the overall indexes of the three experimental systems: SCNN performs better than SR-CNN and SSAE in terms of recall and F1 score. Although it lags behind SR-CNN in precision, the difference is very small, which indicates that the experimental performance can be relied on.
Summarizing the above experimental results and comparative analysis, this section shows that the SCNN cell recognition model proposed in this study has better detection accuracy and stronger generalization ability, which demonstrates the importance of adding spatial information to the designed convolutional neural network model.

Comparative Experiment and Analysis of Cell Classification.
This section contains three sets of comparative experiments with the same experimental settings, designed to test the classification ability of the nucleus classification model proposed in this paper against two reference methods on the CRCHistoPhenotypes data set. The parameters of the comparative test are the same as those of the classification test in Chapter 3. The F1 scores on the different nucleus classes are compared, and the reference methods are the CRImage method and the superpixel descriptor method. The exact F1 scores are shown in Figure 7.
As can be seen from Figure 7, the F1 score of the classification method based on adjacent set prediction proposed in this study is higher than that of the other two methods in all four categories, and its curve is more stable, indicating the best performance. See Table 6 for a detailed comparison.
It can be seen from Table 6 that, in terms of average F1, this method is clearly ahead of both the CRImage method and the superpixel descriptor method. These comparative data show that the classification model based on adjacent set prediction performs very well on F1 scores. In terms of stability, the variation of this method is 0.047 smaller than that of the CRImage method, which shows that this model is more stable. In the same experimental environment, we combine SoftmaxCNN with the adjacent set prediction method, use the CRImage method and the superpixel descriptor method to classify four different nucleus types, and obtain the AUC values for the different nuclei; see Figure 8 for details. Figure 8 analyzes and compares the AUC metrics of the present model, the CRImage model, and the superpixel descriptor model. Comparing these three curves, we can see that the model in this study has better AUC performance than the other two methods in the classification of the four types of nuclei. Table 7 compares the differences in the AUC statistics of the three experimental schemes in detail.
As can be seen from Table 7, the maximum AUC of the prediction-based adjacent set classification model over the four nucleus types is 0.059 and 0.217 higher than that of the superpixel descriptor method and the CRImage method, respectively; its minimum value is 0.099 and 0.295 higher; and its average value is more than 0.071 and 0.2435 higher. The performance of this model is better, and its mean square error is 0.0208 and 0.0346 smaller, which shows that the model in this study is more stable in classifying cell images. After comparing the F1 scores and AUC values obtained on the different types of nuclei, a weighted integration of the F1 scores and AUC values was carried out and compared in detail; the specific values are shown in Table 8. Table 8 shows that the combination of SoftmaxCNN and adjacent set prediction used to classify nuclei in this study is nearly 1 percentage point higher in F1 score than the other two nucleus classification methods, which demonstrates the superiority of the proposed adjacent-set-prediction model in the nuclear classification of cell histology images. The multiclass AUC is at least 0.6 percentage points higher than that of the superpixel descriptor method and 2 percentage points higher than that of the CRImage method, and the combination of SoftmaxCNN and adjacent set prediction exceeds 90% in the multiclass setting. The comparison of AUC values shows that the proposed method has better classification ability and stability in nuclear classification.
Based on the above experimental results and comparative analysis, this section demonstrates that the proposed nuclear classification model based on adjacent set prediction has better classification ability and stronger stability, and proves that a convolutional neural network combined with the adjacent set prediction model is effective.

Concluding Remarks
In this study, we propose a method to detect nuclei by incorporating spatial information. This method detects nuclei in histological cell images and constructs a spatial model of cell image detection, to solve the problem of missing topological input in current models. For the problem of classifying the nuclei in enlarged images of human cells, a prediction mechanism based on adjacent sets is proposed, and a classification model of human cell images is constructed by combining a convolutional neural network with linear regression. In recent years, deep learning methods have become widely used, which provides a theoretical basis for human cell image detection and classification combined with neural network models.

Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest regarding this work.