Building Up a Robust Risk Mathematical Platform to Predict Colorectal Cancer

1College of Computer Science, Sichuan University, Chengdu 610065, China 2College of Computer and Information Science, Southwest University, Chongqing 400715, China 3College of Mathematics and Statistics, Southwest University, Chongqing 400715, China 4School of Electronic and Information Engineering, Xi’an Jiaotong University, 28 Xianning West Road, Beilin District, Xi’an 710049, China 5Toxicology Institute, College of Preventive Medicine, Third Military Medical University, 30 Gaotanyan Street, Shapingba District, Chongqing 400038, China 6Department of Environment Health, College of Preventive Medicine, Third Military Medical University, 30 Gaotanyan Street, Shapingba District, Chongqing 400038, China


Introduction
Over the past decades, new strategies have been developed to decrease the incidence and improve the prognosis of colorectal cancer (CRC), from popularizing regular screening in individuals older than 50 years for prevention to adopting new technologies such as laparoscopic surgery, neoadjuvant chemotherapy, and bio-targeted therapy for more precise and individualized treatment. However, CRC remains one of the major contributors to the cancer burden worldwide [1][2][3][4][5][6][7]. CRC ranks fourth in cancer incidence and accounts for approximately 8-10% of cancer-related deaths [8], and the 5-year survival rate (40-50%) is still not as satisfactory as expected. CRC is now recognized as the result of a multistep process under very complicated gene-environment interactions; genetic variation, environmental factors, dietary pattern, and unfavorable lifestyle may jointly play important roles in colorectal neoplasia [9][10][11][12]. Accordingly, efficiently identifying CRC-risk factors is the first step toward prevention and early diagnosis, which is critical for decreasing CRC morbidity and mortality [13,14]. Based on this hypothesis, a consortium including institutions from South Korea, Japan, and China cooperatively performed a multicenter case-control study (the KOJACH study) during 2000-2004 to explore CRC-risk factors in East Asian populations [15][16][17][18]. In this cooperative study, information on family history, lifestyle, food and nutrition intake, and single nucleotide polymorphisms (SNPs) is collected for both CRC cases and cancer-free controls. The present study therefore aims to develop a CRC predictive model that can not only investigate which potential risk factors have a significant impact on the occurrence of CRC in the collected data but also efficiently and reliably predict the risk of CRC as early as possible before diagnosis.
Several mathematical models have already been developed to process different types of data for CRC occurrence prediction. For low-dimensional data, Wu et al. [19] and Huang et al. [20] propose logistic regression and a greedy Bayesian model, respectively. To process high-dimensional dichotomous data, Hahn and colleagues [21][22][23] propose the multifactor dimensionality reduction (MDR) method for mapping such data into a low-dimensional space, and Li et al. [24] propose a novel forward U test to estimate the risk of CRC. In addition, Andrew et al. [25], Meredith et al. [26], and Rutledge et al. [27] employ linear regression models to predict the occurrence of CRC. However, these previous models cannot simultaneously process our large high-dimensional CRC dataset, which contains both continuous and discrete data types, with sufficiently high predictive accuracy.
For this reason, to avoid the shortcomings of previous approaches when applied to the complicated data collected in the KOJACH study, we propose a robust CRC predictive model based on our latest study [28] with the following three innovations. Firstly, we use a common standard to collect clinical CRC data with information on genetic variations and environmental exposure [29]; the resulting high-dimensional dataset not only has a large volume, including 369 CRC patients and 929 cancer-free controls, but also covers 305 data types. Secondly, biological classification, dimensionality reduction, and regression analysis stages are integrated into the CRC predictive model to make it robust and reliable. Thirdly, both a heterogeneous ensemble learning model (HELM) and a generalized kernel recursive maximum correntropy (GKRMC) algorithm are developed to increase the predictive accuracy of the model.
The research results indicate that (1) both genetic and environment-related factors play significant roles in the occurrence of CRC; (2) CRC risk can be accurately and efficiently identified by this model using the explored biomarkers as classifiers; and (3) our HELM and GKRMC have higher predictive power than the classical regression algorithms.
Finally, we analyze why both HELM and GKRMC outperform the alternatives and discuss future work on the CRC predictive model.

Materials and Methods
The data used in this study are from a hospital-based case-control study of colorectal cancer in Chongqing, China, by the Department of Toxicology at the Third Military Medical University [18]. The clinical case data comprise 369 pathologically diagnosed colorectal cancer patients. The control data consist of 929 cancer-free patients frequency-matched by age, gender, and birthplace. All controls are selected from the orthopedics and general surgery departments of the same hospitals, and those with a cancer history or any cancer-related disease are excluded. All participants sign a written informed consent.
Food intake is evaluated by our previously developed Semi-Quantitative Food Frequency Questionnaire [30]. The SNP information of the full-length genes plus 2,000 bp upstream of each candidate gene is obtained from HapMap [31]. After setting the minor allele frequency threshold at 0.01 [32], the Haploview software [33] is used to screen the tag SNPs, and only one SNP is selected in each linkage disequilibrium block. As a result, a total of 46 tag SNPs are obtained from the 127 reported SNPs of the three key alcohol-metabolism genes (ADH1B, ALDH2, and CYP2E1) [34][35][36]. DNA is extracted from 2.5 mL whole blood according to the manufacturer's instructions of the Promega DNA Purification Wizard kit. The DNA purification and polymerase chain reactions (PCR) are done on an Eppendorf 5333 Mastercycler. Genotyping of the selected tag SNPs is done on an ABI 3130xl Gene Analyzer. This study protocol is approved by the Third Military Medical University Ethics Committee.
The items in the dataset include general information (such as gender and age), the polymorphism distribution of genes related to ethanol metabolism (the distribution of homozygotes and heterozygotes at gene loci), and demographic characteristics, food, and lifestyle habits (smoking and alcohol consumption). To avoid bias, a standard questionnaire is generated in which each survey item has a specific definition. The examination is carried out as a face-to-face interview. Several survey items, such as the amounts of alcohol and cigarettes consumed, are quantitatively estimated. Using age 60 as the demarcation point, the surveyed patients are divided into an elderly group and a young/middle-aged group. Alcohol consumption is divided into healthy drinking (including people who do not drink and people who drink no more than 15 g per day) and nonhealthy drinking (people who drink more than 15 g per day). Based on smoking habits, the participants are divided into nonsmokers and smokers (including those who had quit smoking).
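The dichotomization rules above can be written as a small preprocessing step. The sketch below is illustrative only: the field names are hypothetical, and the exact handling of the age-60 boundary is an assumption, since the study does not state on which side of the demarcation point age 60 itself falls.

```python
# Illustrative preprocessing using the cutoffs described above
# (age 60, 15 g of alcohol per day, any smoking history).
# Field names are hypothetical; the study's actual coding may differ.

def dichotomize(record):
    """Map raw survey values to the binary groups used in the study."""
    return {
        "elderly": record["age"] >= 60,                    # assumption: 60 counts as elderly
        "nonhealthy_drinking": record["alcohol_g_per_day"] > 15.0,
        "smoker": record["ever_smoked"],                   # includes those who quit smoking
    }

example = {"age": 55, "alcohol_g_per_day": 20.0, "ever_smoked": False}
groups = dichotomize(example)
```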
This study employs these data to build the predictive CRC model with biological classification, dimensionality reduction, and regression analysis stages, which will be illustrated in detail in the next section.

Biological Classification.
The biological classification is carried out from the perspective of medical science to divide the original dataset into four subclasses, as follows: (1) polymorphism distribution of genes related to ethanol metabolism: the data of the SNPs are listed in Supplementary S1 in the Supplementary Material available online at https://doi.org/10.1155/2017/8917258; (2) demographic characteristics information: the data of the demographic characteristics are listed in Supplementary S2; (3) lifestyle habits: the data of the lifestyles are listed in Supplementary S3; (4) food: the data of the foods are listed in Supplementary S4.

Dimensionality Reduction for the Original Data.

This study employs three broadly used dimensionality reduction methods, namely, sparse principal component analysis (SPCA), entropy of information, and the relief method, to obtain the mutually explored biomarkers for each subclass.
(1) Sparse Principal Component Analysis (SPCA) Method. Principal component analysis (PCA) [37][38][39] is a dimensionality reduction technique that eases the complexity of multivariate data analyses by replacing the original variables with a small group of principal components. SPCA uses the Lasso [40] to produce modified principal components with sparse loadings. The PCs are uncorrelated linear combinations of the original variables ranked by their variances in descending order:

PC_i = α_{i1}x_1 + α_{i2}x_2 + ⋯ + α_{ip}x_p,

where x_1, x_2, …, x_p are the original variables and α_{i1}, α_{i2}, …, α_{ip} are the coefficients of the principal component PC_i corresponding to the original variables, estimated by the R-system packages.
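A minimal numpy sketch of the idea follows. It performs ordinary PCA via the SVD and then soft-thresholds the loadings; the thresholding is only a crude stand-in for the Lasso penalty that SPCA actually solves (packages such as R's elasticnet or scikit-learn's SparsePCA implement the real estimator), and all names and parameter values are illustrative.

```python
import numpy as np

def sparse_pca_sketch(X, n_components=2, threshold=0.1):
    """PCA via SVD, then soft-threshold the loadings to make them sparse.
    This only illustrates sparsity; real SPCA solves a Lasso-penalised problem."""
    Xc = X - X.mean(axis=0)                              # centre the variables
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[:n_components].T                       # columns = principal axes
    sparse = np.sign(loadings) * np.maximum(np.abs(loadings) - threshold, 0.0)
    norms = np.linalg.norm(sparse, axis=0)
    sparse = sparse / np.where(norms > 0, norms, 1.0)    # re-normalise nonzero columns
    return sparse

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 0] += 3 * X[:, 1]                                   # make two variables strongly correlated
L = sparse_pca_sketch(X, n_components=2, threshold=0.2)
```

Variables that contribute little to a component get exactly zero loading, which is what makes the components easy to interpret as small sets of candidate biomarkers.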
(2) Entropy of Information Method. The entropy H(X) = −∑_x p(x) log p(x) of a random variable X quantifies its uncertainty: high entropy H(X) indicates high uncertainty about the random variable X.
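The entropy of a discrete feature can be computed directly from its empirical frequencies; a minimal stdlib sketch:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy H(X) = -sum p(x) * log2 p(x) of a discrete sample."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A balanced binary feature is maximally uncertain; a constant feature carries no information.
h_uniform = entropy([0, 1, 0, 1])   # 1.0 bit
h_constant = entropy([1, 1, 1, 1])  # 0.0 bits
```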
(3) Relief Method. The relief algorithm [44] is applied to the classification of two-class data. Relief is a feature-weighting algorithm that assigns different weights according to the relevance of features to the categories, where relevance is based on the ability of a feature to distinguish between close samples. The key idea of relief is to iteratively estimate feature weights according to their ability to discriminate between neighboring patterns. In each iteration, a pattern x is randomly selected, and then its two nearest neighbors are found: one from the same class (termed the nearest hit, NH) and the other from a different class (termed the nearest miss, NM). The weight W_i of the ith feature is then updated as W_i ← W_i − diff_i(x, NH)² + diff_i(x, NM)², so features that differ between classes but agree within a class gain weight.
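The iteration above can be sketched for a two-class problem as follows. This is a minimal numpy rendering of basic Relief (L1 distance, squared differences), assuming features are roughly comparable in scale; names and parameters are illustrative.

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Basic Relief for a two-class problem.
    Returns one weight per feature; larger weights mark more discriminative features."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        x = X[i]
        dist = np.abs(X - x).sum(axis=1)                 # L1 distance to every sample
        dist[i] = np.inf                                 # exclude the sample itself
        same = (y == y[i])
        nh = X[np.where(same, dist, np.inf).argmin()]    # nearest hit (same class)
        nm = X[np.where(~same, dist, np.inf).argmin()]   # nearest miss (other class)
        w += -(x - nh) ** 2 + (x - nm) ** 2              # the Relief weight update
    return w / n_iter

# Feature 0 separates the classes; feature 1 is pure noise.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.05 * rng.normal(size=100),
                     rng.uniform(size=100)])
weights = relief(X, y, n_iter=200)
```

On this toy data the discriminative feature receives a much larger weight than the noise feature, which is exactly the ranking used to screen candidate biomarkers.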

Regression Analysis.
After the biological classification and dimensionality reduction stages, we use logistic regression (LR), the support vector machine (SVM), the heterogeneous ensemble learning model (HELM), kernel recursive least squares (KRLS) [45], and our generalized kernel recursive maximum correntropy (GKRMC) algorithm to build the predictive regression model.
(1) Logistic Regression. The logistic regression model has the form

logit(p) = ln(p/(1 − p)) = β_0 + β_1x_1 + β_2x_2 + ⋯ + β_kx_k,

where x_1, x_2, …, x_k are covariates, β_0, β_1, …, β_k are the unknown coefficients for the covariates, and p is the probability that the dependent variable equals a "success" or "case."

(2) Support Vector Machine. The support vector machine (SVM) [48] is a machine learning method proposed by Vapnik in the early 1990s and successively extended by other researchers.
The separating hyperplane has the general form g(x) = (w ⋅ x) + b, where (w ⋅ x) is the inner product of the weight vector w and the sample vector x. If the linear discriminant function is normalized so that all samples satisfy |g(x)| ≥ 1, then the margin between the classification faces (w ⋅ x) + b = 1 and (w ⋅ x) + b = −1 is 2/‖w‖ (the classification interval). Maximizing the margin 2/‖w‖ is equivalent to minimizing ‖w‖²/2, which yields the optimal classification face. Thus, seeking the optimal classification face is transformed into the following optimization problem:

min_{w,b} (1/2)‖w‖² subject to y_i((w ⋅ x_i) + b) ≥ 1, i = 1, …, n.
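The margin-maximization problem above is usually solved in its dual form by a library; the sketch below instead minimizes the equivalent soft-margin hinge-loss objective by subgradient descent, which keeps the example self-contained. It is an illustration of the principle, not the solver used in the study, and all parameter values are illustrative.

```python
import numpy as np

def linear_svm(X, y, lam=0.01, lr=0.05, epochs=200):
    """Soft-margin linear SVM trained by subgradient descent on
    lam/2 * ||w||^2 + mean(max(0, 1 - y * (w.x + b))).
    Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                     # samples violating the margin
        if active.any():
            w -= lr * (lam * w - (y[active, None] * X[active]).mean(axis=0))
            b -= lr * (-y[active].mean())
        else:
            w -= lr * lam * w                    # only the regularizer remains
    return w, b

# Two linearly separable clusters.
X = np.array([[2.0, 2.0], [2.5, 1.5], [3.0, 2.5],
              [-2.0, -2.0], [-2.5, -1.5], [-3.0, -2.5]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = linear_svm(X, y)
pred = np.sign(X @ w + b)
```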

(3) Heterogeneous Ensemble Learning Model (HELM).
Ensemble learning [49] employs multiple learners to solve a problem, and the generalization ability of an ensemble is usually significantly better than that of a single learner [50]. The AdaBoost algorithm [51] is one type of ensemble learning. Based on previous studies, most ensemble learning algorithms integrate several weak classifiers of the same type (homogeneous ensembles) or of different types (heterogeneous ensembles). Here we propose a HELM algorithm, based on AdaBoost, that integrates the advantages of both homogeneous and heterogeneous ensembles. The HELM procedure is illustrated in Figure 1: at each boosting round, the weighted error rate is computed (see (7)) and the sample weights are updated (see (9)).
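The boosting loop behind such a heterogeneous ensemble can be sketched in a much-simplified form. The code below is not the paper's HELM (whose base learners are SVM, logistic regression, and KRLS); it only illustrates the heterogeneous-AdaBoost idea with two toy weak-learner families, a decision stump and a nearest-centroid linear rule, both hypothetical stand-ins.

```python
import numpy as np

def stump_learner(X, y, w):
    """Best one-feature threshold classifier under sample weights w."""
    best = (np.inf, None)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] >= t, s, -s)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, lambda X, j=j, t=t, s=s: np.where(X[:, j] >= t, s, -s))
    return best

def centroid_learner(X, y, w):
    """Weighted nearest-centroid linear classifier: a different model family."""
    mu_pos = (w[y == 1, None] * X[y == 1]).sum(0) / w[y == 1].sum()
    mu_neg = (w[y == -1, None] * X[y == -1]).sum(0) / w[y == -1].sum()
    d, c = mu_pos - mu_neg, (mu_pos + mu_neg) / 2
    clf = lambda X: np.where((X - c) @ d >= 0, 1, -1)
    return w[clf(X) != y].sum(), clf

def helm_sketch(X, y, learners, rounds=10):
    """AdaBoost loop over a heterogeneous pool: each round, fit every
    learner type and keep the one with the lowest weighted error."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, clf = min((fit(X, y, w) for fit in learners), key=lambda r: r[0])
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)          # AdaBoost learner weight
        w *= np.exp(-alpha * y * clf(X))               # re-weight the samples
        w /= w.sum()
        ensemble.append((alpha, clf))
    return lambda X: np.sign(sum(a * c(X) for a, c in ensemble))

X = np.array([[0.0, 1.0], [0.2, 0.8], [1.0, 0.1],
              [0.9, 0.0], [0.1, 0.9], [0.8, 0.2]])
y = np.array([1, 1, -1, -1, 1, -1])
model = helm_sketch(X, y, [stump_learner, centroid_learner])
pred = model(X)
```

Because the pool mixes model families, each round can pick whichever family best fits the currently hard samples, which is the source of the diversity benefit discussed below.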
Using the matrix inversion lemma [54] and substituting (17) into (15), the weight vector can be expressed explicitly as a linear combination of the transformed data; that is, Ω_i = Φ_i a_i, where a_i is the coefficient vector. Observing that h_i = Φ_{i−1}^T φ_i and applying the block matrix inversion identity with z_i = Q_{i−1} h_i, we obtain the GKRMC algorithm, in which the coefficient vector is updated according to (23) and Q_i is computed by (22). This study uses κ_{σ₂}(⋅) to denote the Gaussian kernel of the RKHS [57], with σ₂ being the kernel size. GKRMC thus produces an RBF-type [58] network, a linear combination of the kernel functions (Figure 2), where a_i denotes the coefficient vector of the network at iteration i and (a_i)_j denotes the jth scalar in a_i.
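GKRMC itself is a recursive, sample-by-sample algorithm whose exact updates follow (22) and (23). The sketch below illustrates only its central ingredient, the correntropy-induced weighting exp(−e²/2σ²) that shrinks the influence of large-error outliers, in a simpler batch fixed-point form; function names and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """kappa(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def correntropy_kernel_fit(X, y, sigma=0.2, sigma_c=0.5, lam=0.1, iters=20):
    """Fixed-point iteration for correntropy-weighted kernel regression:
    each sample gets weight exp(-e_i^2 / (2 sigma_c^2)), so large-error
    outliers are down-weighted instead of dominating a squared-error fit."""
    K = gaussian_kernel(X, X, sigma)
    n = len(y)
    a = np.linalg.solve(K + lam * np.eye(n), y)        # plain kernel ridge start
    for _ in range(iters):
        e = y - K @ a
        b = np.exp(-e ** 2 / (2 * sigma_c ** 2))       # correntropy sample weights
        B = np.diag(b)
        a = np.linalg.solve(B @ K + lam * np.eye(n), B @ y)
    return a, b

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 30)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=30)
y[10] += 5.0                                           # inject one gross outlier
a, b = correntropy_kernel_fit(X, y)
```

After convergence, the injected outlier receives a weight near zero while well-fit samples keep weights near one, which is the robustness mechanism that distinguishes the correntropy criterion from the squared-error criterion of KRLS.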

The Results of the Biological Classification.
In the past decades, a number of candidate factors implicated in CRC risk have been proposed by epidemiology studies; they can be divided into two groups: genetic factors and nongenetic factors. The genetic factors group consists of many SNPs, and the nongenetic factors group is comprised of several kinds of environmental factors. According to their biological characteristics and the manner in which human beings are exposed to environmental factors over the whole lifetime, the raw CRC-related genetic and environmental data can be classified into four biological categories: SNPs, demographic characteristics, lifestyles, and foods, as in Table 1.

Results of Original Data Dimensionality Reduction.
To process the datasets of SNPs, demographic characteristics, lifestyles, and foods, the SPCA, entropy, and relief methods are employed, respectively.
Table 2 shows the principal components for the SNPs, demographic characteristics, lifestyles, and foods obtained by the SPCA method; the result of the SPCA is listed in Supplementary S5. When the relief algorithm is applied to extract key features from the dataset, we consider that features with high weights are likely to contribute to colorectal cancer. The result of the relief algorithm is shown in Figure 3. In the upper parts of Figures 3(a), 3(b), 3(c), and 3(d), the horizontal axis shows the feature number and the vertical axis shows the feature weight. In the lower parts, the horizontal axis shows the feature weight and the vertical axis shows the feature value, while the bars in Figure 3 represent the numbers of features at each feature weight.
Table 3 shows the results of dimensionality reduction by the entropy method for the SNPs, demographic characteristics, lifestyles, and foods. The entropy H(X) in (2) is used for data dimensionality reduction.
Regarding the results of Figure 3, Table 4 shows the common factors for the SNPs, demographic characteristics, lifestyles, and foods obtained by the relief method.
Figure 4 shows the intersection results for the three dimensionality reduction methods. Figure 4(a) indicates that rs1256030 is the mutually explored biomarker by SPCA, entropy, and relief; rs10046, rs1152579, rs676387, rs6905370, rs928554, and rs6983267 are the mutually explored biomarkers by SPCA and entropy; and rs4939827, rs4767944, rs1801132, rs4767939, rs10505477, rs3798758, and rs2075633 are the mutually explored biomarkers by SPCA and relief. Figure 4(c) indicates that drinking and simultaneous drinking and smoking are the mutually explored biomarkers by SPCA, entropy, and relief, and tea consumption is the mutually explored biomarker by SPCA and relief.
Figure 4(d) indicates that vegetables are the mutually explored biomarker by SPCA, entropy, and relief; mushrooms, seasoning, pickles, and grains are the mutually explored biomarkers by SPCA and entropy; eggs and milk, meat, and seafood are the mutually explored biomarkers by SPCA and relief; and nuts are the mutually explored biomarker by entropy and relief.
In total, 36 features are mutually explored by at least two of the SPCA, entropy, and relief methods.
By the U test [59], Table 5 shows that 13 of these 36 features have small p values.
Table 6 shows that the 13 features with small p values are important biomarkers.
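The U test screening can be reproduced with any statistics library; for transparency, the sketch below implements the Mann-Whitney U test with a normal approximation from scratch (ties ignored for simplicity; real analyses should use a library routine with tie and continuity corrections, e.g. scipy.stats.mannwhitneyu). The case/control values are simulated, not the study's data.

```python
import math
import numpy as np

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie handling, so it suits continuous-valued features)."""
    nx, ny = len(x), len(y)
    combined = np.concatenate([x, y])
    ranks = combined.argsort().argsort() + 1          # ranks 1..nx+ny
    u = ranks[:nx].sum() - nx * (nx + 1) / 2          # U statistic for sample x
    mu = nx * ny / 2
    sd = math.sqrt(nx * ny * (nx + ny + 1) / 12)
    z = (u - mu) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-sided p-value
    return u, p

rng = np.random.default_rng(0)
cases = rng.normal(1.0, 1.0, 60)      # hypothetical feature values in cases
controls = rng.normal(0.0, 1.0, 60)   # ... and in controls
u, p = mann_whitney_u(cases, controls)
```

A feature whose distribution genuinely differs between cases and controls, as simulated here, yields a small p value and survives the screening step.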

Results of Regression.
According to the dimensionality reduction analysis, 13 biomarkers are selected as the classifiers for the four biological datasets. Next, we employ the LR, SVM, KRLS, HELM, and GKRMC algorithms to build the predictive cancer model based on these selected classifiers.
Table 7 presents four measures (accuracy, sensitivity, specificity, and precision) to assess how good or how "accurate" each classifier is. There are 1298 case-control samples, 369 of which are cases and 929 of which are controls. The cross-validation [60] method randomly chooses 75% of the samples (973 samples) as the training dataset, and the rest (325 samples) are used as the testing dataset. Since cross validation introduces a random effect, we repeat the experiment 10 times. Figure 5 shows that GKRMC always has the greatest sensitivity, precision, and accuracy values, as well as a greater specificity value than KRLS. Moreover, Table 8 lists the average value and standard deviation of the classification measurements for each algorithm.
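The four measures reduce to simple ratios of the 2x2 confusion-matrix counts; a minimal sketch with toy labels (not the study's predictions):

```python
def classification_measures(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and precision from a 2x2
    confusion matrix (1 = case, 0 = control)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate (recall on cases)
        "specificity": tn / (tn + fp),   # true-negative rate
        "precision": tp / (tp + fp),     # positive predictive value
    }

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
m = classification_measures(y_true, y_pred)
```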

Discussion and Conclusion
For CRC tumorigenesis, it is already the common view of most previous studies that both genetic and environmental factors, as well as their interactions, play important roles in CRC risk [61]; however, how to predict the occurrence of CRC from these risk factors is still a challenge today. In the present study, we use a large dataset of 1298 samples from a CRC case-control study in which relatively complete information on genetic and demographic characteristics, lifestyle, and food intake is simultaneously collected; furthermore, we aim to develop a CRC-risk predictive model that not only can explore which risk factors in the collected dataset have a significant impact on the occurrence of CRC but also can accurately predict the occurrence of CRC as early as possible.
These data are classified into four categories in the biological classification stage, and 13 potential biomarkers, consisting of 4 SNPs, 6 demographic characteristics, 1 lifestyle factor, and 2 foods, are screened out in the dimensionality reduction stage. Unlike pure mathematical formulae, the biological rationality of such a model depends on whether the selected biomarkers can be biologically explained as a validated etiology of colorectal cancer, supported by either population-based association studies or biological function-based mechanistic experiments. These explored biomarkers can then be used as the classifiers for the predictive model to assess the risk of colorectal cancer in the regression analysis stage.
In fact, results from substantial epidemiology studies focusing on CRC risk/protective factors provide evidence for the associations between each category and CRC risk. For the genetic variations, at least 2 (rs10046, rs6983267) of the 4 currently selected SNPs listed in Table 5 were reported to have a significant association with CRC risk in either genome-wide association studies or candidate-gene-based studies [59,62]. In particular, SNP rs6983267 is one of the most significant variations associated with increased CRC risk in Caucasians, Asians, and Africans [63]. Regarding the other two selected SNPs (rs1256030, rs676387), located, respectively, in the estrogen receptor beta gene (ESR2) and the 17β-hydroxysteroid dehydrogenase gene (HSD17B1) (both estrogen metabolism pathway genes), although there is no direct evidence supporting their association with CRC, both are found to be significantly associated with cancers such as liver and ovarian cancers [64,65]. Moreover, considerable evidence from epidemiological and metabolic studies supports that the estrogen metabolism pathway genes undoubtedly play an important role in CRC and other cancers [66], which implies that the two SNPs may affect the susceptibility to CRC.
For demographic factors, almost all of the 6 selected factors have been reported to be unfavorable factors for CRC risk in a number of previous studies [67,68]. For lifestyles, alcohol drinking and smoking are proven to be two significant risk factors for CRC [18,68]. Alcohol drinking, in a dose-response manner, evidently contributes to the increase of CRC risk. Meanwhile, obvious positive associations between CRC risk and cigarette smoking are observed in most measures [69].
For food, extensive epidemiologic and experimental studies confirm its important role in the development of CRC. For example, higher consumption of vegetables and seafood is always associated with relatively lower CRC risk due to their relatively high content of antioxidant nutrients such as dietary fiber, vitamins, and long-chain unsaturated fatty acids [70][71][72][73]. On the contrary, excessive consumption of smoked/salted/processed meat is linked to a higher risk of colorectal neoplasia [73].
In general, it is demonstrated that the 13 currently explored biomarkers can be used as the classifiers in the regression analysis stage, supported by the manually reviewed experimental evidence [59,63,67,[69][70][71].
Although LR and SVM may perform very well for linear systems, their performance degrades when applied to nonlinear and non-Gaussian situations [74], which are rather common in real-world applications. Therefore, we suggest using nonlinear regression algorithms to process our dataset, which is comprised of continuous and discrete data with multivariate data types. However, classical nonlinear algorithms such as KRLS suffer from outliers.
To overcome the shortcomings of both linear and conventional nonlinear regression algorithms, this study proposes the heterogeneous ensemble learning model (HELM) and the generalized kernel recursive maximum correntropy (GKRMC) algorithm to increase the predictive power of the model. Next, we analyze why HELM and GKRMC can outperform the LR, SVM, and KRLS algorithms.
HELM is an ensemble learning algorithm that integrates linear and nonlinear classifiers to classify the data points. Based on a previous study [75], the diversity of weak classifiers is one of the evaluation criteria for an ensemble algorithm. HELM includes both linear (SVM and logistic regression) and nonlinear (KRLS) classifiers, and its superior performance is shown in Figure 5.
The cost function of GKRMC (see (14)) is robust in that it is not as sensitive to large outliers as that of KRLS, since the exponential weighting mechanism in (14) assigns greater weight to samples with smaller errors rather than to samples with greater errors. Since big datasets usually contain outliers [29,76], GKRMC achieves higher predictive accuracy with a smaller standard deviation (Table 8) than KRLS. As mentioned before, the predictive power of GKRMC should be better than that of LR, SVM, and KRLS due to the nature of nonlinear regression (Figure 5).
In conclusion, this study proposes a robust CRC-risk predictive model to analyze big data with information on genetic variations and environmental exposure for CRC patients and cancer-free controls. The research results indicate that both the genetic and environment-related factors explored by our model play significant roles in the occurrence of CRC and that the innovated HELM and GKRMC can increase the predictive power of the model.
However, this novel predictive model is only the first step in predicting the risk of CRC tumor growth. Beyond the environmental factors and SNPs involved in the current model, if other factors such as pathway-pathway and pathway-environment interactions are included, there will be a higher chance of finding a set of variations that may serve as integrative biomarkers, as shown in other research [77,78]. A limitation of our study is that only a finite number of tag SNPs located in a relatively small number of genes are available, which prevents employing pathway interactions in the model construction. Also, how to improve GKRMC's specificity is an important topic for future study, which would further improve the whole system's performance. While extensions will be necessary to account in greater detail for the complexity of CRC, we believe that if properly combined with more experimental data, such as RNA sequencing analysis, and recent modeling techniques [79][80][81][82][83][84][85][86], advanced in silico platforms such as this one will evolve into powerful integrative research platforms that improve our understanding of CRC tumorigenesis.

Figure 4:
Figure 4: The biomarkers mutually explored by the SPCA, entropy, and relief methods for (a) SNPs, (b) demographic characteristics, (c) lifestyles, and (d) foods. Figure 4(b) indicates that age, depression, blood triglyceride, and BMI are the mutually explored biomarkers by SPCA, entropy, and relief; cholesterol, activity, emotion status, and physical activity are the mutually explored biomarkers by SPCA and relief; and mental stress is the mutually explored biomarker by entropy and relief.

Figure 3 :
Figure 3: Feature selection by the relief algorithm: (a) SNP features (note: the feature numbers in the upper figure correspond to Supplementary S1, columns B(1) to AU(46)), (b) demographic characteristics features (note: the feature numbers in the upper figure correspond to Supplementary S2, columns A(1) to U(21)), (c) lifestyle features (note: the feature numbers in the upper figure correspond to Supplementary S3, columns B(1) to I(8)), and (d) food features (note: the feature numbers in the upper figure correspond to Supplementary S4, columns B(1) to O(14)).

Table 1 :
Results of biological classification.

Table 2 :
The results by SPCA method.

Table 3 :
The results by entropy method.

Table 4 :
The results by relief method.

Table 5 :
p value of 13 important biomarkers.

Table 7 :
The definition of the classification measurement.