
By using prior information from planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of the 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed from the feature edges extracted from planning CT images, nonuniform tetrahedral meshes are automatically generated according to the density field to better characterize the image features; that is, finer meshes are generated around features. Displacement vector fields (DVFs) specified at the mesh vertices drive the deformation of the original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with the corresponding 2D projections. The DVFs are optimized to minimize an objective function that includes both the differences between DRRs and projections and a regularity term. To further accelerate this 3D-2D registration, a procedure has been developed that obtains good initial deformations by deforming the volume surface to match the 2D body boundaries on the projections. The complete method is evaluated quantitatively on several digital phantoms and on data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either a uniform orthogonal grid or uniform tetrahedral meshes.

Cone-Beam Computed Tomography (CBCT) has been widely used for accurate patient setup (initial positioning) and adaptive radiation therapy. Attention is still needed, however, to reduce the imaging radiation dose and improve image quality. Conventional CBCT image reconstruction in radiation therapy requires hundreds of projections, which deliver a high imaging dose to patients. To reduce the number of CBCT projections, researchers have recently proposed methods that reconstruct images using information from prior images, such as a planning CT [

The finite element method (FEM) is best understood through its practical application, for instance, the mesh discretization of a continuous domain into a set of discrete subdomains. It has already been used in image registration [

When a 3D-2D DIR algorithm is used iteratively to reconstruct 3D volumetric images, the number of sampling points is crucial for the computation. A large number of sampling points can lead to very slow computation, while a limited number of uniformly distributed points can miss important image features and make the registration less accurate. In our proposed method, a special FEM system is developed to automatically generate high-quality adaptive meshes conforming to the image features for the whole volume, without manual segmentation by the user. This system places more sampling points in important regions (at organ/tissue/body boundaries or regions with highly nonlinearly varying image intensity), while fewer sampling points are placed within homogeneous regions or regions with linearly varying intensity. In this way, deformations of boundaries and other important features can be characterized directly by the displacements of the sampling points lying on those boundaries or features, rather than interpolated from a uniform grid or a larger tetrahedron in the volume mesh. As a result, the deformation can be controlled more precisely. With approximately the same number of sampling points, the feature-based nonuniform meshing method produces better deformed volumetric images than methods using uniform meshes. High-quality digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated using a ray tracing method. These DRRs are then compared with the corresponding 2D projections from CBCT scans, and the DVF is optimized iteratively to obtain the final reconstructed volumetric images.

In order to provide a good initial DVF and accelerate the calculation, we propose a boundary-based 3D-2D DIR method that runs before the aforementioned 3D-2D DIR. Although research on boundary-guided (or contour-guided) image registration [

This paper makes the following contributions toward effectively computing 3D-2D DIR:

Compared with the traditional voxel-based method, mesh-based methods achieve faster computation and better DVFs, since voxel-based deformation must estimate a very large number of unknowns, which requires extremely long computation times and is prone to being trapped in localized deformations.

When equal numbers of sampling points are used, the nonuniform meshing method yields higher-quality reconstructed images and better DVFs than uniform meshes under the same number of optimization iterations.

Due to the large data sizes of the volume and projection images, the boundary-based DIR technique and a GPU-based parallel implementation have been applied, achieving high computational efficiency: the reconstruction of a 512 × 512 × 140 CBCT image can be completed within 3 minutes, which approaches the requirements of clinical applications.

The rest of the paper is organized as follows. Section

Figure

Flow chart of the proposed 3D-2D image registration method. The dashed-line box indicates the proposed boundary-based 3D-2D DIR method, which provides initial DVFs and accelerates the calculation; it may be skipped depending on the scenario. More details are given in Section

After the user specifies the total number of mesh vertices, a feature-based nonuniform tetrahedral mesh is generated automatically. Nonuniform meshes are important for improving the accuracy of numerical simulations as well as for better approximating shapes. Zhong et al. [

The basic idea of the mesh generation is similar to Zhong et al.’s work [

Regarding each mesh vertex as a particle, the potential energy between particles determines the interparticle forces. When the forces on each particle reach equilibrium, the particles arrive at an optimally balanced state, resulting in a uniform distribution; in this case, an isotropic mesh can be generated. To handle adaptive meshing, the concept of “embedding space” [

The density field is defined by using the following metric tensor as

Given

The total energy can be computed by summing up all pairs of interparticle energies:

The gradient of

Then the total force applied on particle

In the particle optimization algorithm, the user can specify a density field

Initialize particle locations

Update the

Obtain particle

Compute

Compute

Compute the total force

Compute the total energy

Run L-BFGS algorithm with

Project

After optimizing the particle positions, the final desired nonuniform tetrahedral mesh can be generated by using the Delaunay triangulation [
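As a 2D sketch of this particle-optimization-then-Delaunay pipeline (the Gaussian repulsion kernel, toy density field, and all parameters below are our illustrative assumptions, not the paper's exact formulation): particles repel each other with a strength whose range shrinks where the density is high, the total energy is minimized with L-BFGS, and the optimized particles are triangulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

def density(p):
    # Hypothetical density field: higher near the vertical line x = 0.5
    # (standing in for an image feature edge), so particles cluster there.
    return 1.0 + 4.0 * np.exp(-((p[..., 0] - 0.5) ** 2) / 0.01)

def energy(flat, n):
    """Sum of pairwise repulsive Gaussian energies; the kernel width shrinks
    where density is high, so dense regions tolerate closer particles."""
    p = flat.reshape(n, 2)
    d2 = np.sum((p[:, None, :] - p[None, :, :]) ** 2, axis=-1)
    h = 0.15 / np.sqrt(0.5 * (density(p)[:, None] + density(p)[None, :]))
    e = np.exp(-d2 / h ** 2)
    np.fill_diagonal(e, 0.0)     # no self-interaction
    return e.sum()

n = 60
p0 = rng.random((n, 2))          # random initial particle positions
res = minimize(energy, p0.ravel(), args=(n,), method="L-BFGS-B",
               bounds=[(0.0, 1.0)] * (2 * n), options={"maxiter": 50})
points = res.x.reshape(n, 2)
tri = Delaunay(points)           # final nonuniform triangulation
```

In 3D the same scheme yields tetrahedra instead of triangles; the bounds play the role of the binary body mask that constrains vertices.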

Figure

Demonstration of the feature-based nonuniform mesh generation on a digital XCAT phantom. (a) The original image; (b) extracted feature edges; (c) density field; (d) a 2D view of the interior meshes with color-mapping.

It is necessary to design a density field that matches the volume image features. The original images are preanalyzed using a Laplacian operator (searching for zero crossings in the second derivative of the image to find edges) to extract features including contour edges and boundaries between organs and tissues, which are regions with highly nonlinearly varying image intensities. Since the Laplacian is a high-pass operator and highlights noise as well as edges, it is desirable to first smooth the image to suppress noise. When the feature edges of the volume image are obtained as shown in Figure
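The smoothing-then-Laplacian step can be sketched as a Laplacian-of-Gaussian filter followed by zero-crossing detection (the sigma and the 1% magnitude threshold below are our illustrative choices):

```python
import numpy as np
from scipy import ndimage

def feature_edges(image, sigma=1.5):
    """Smooth with a Gaussian to suppress noise, then apply the Laplacian
    (together: Laplacian of Gaussian) and mark zero crossings as edges."""
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    signs = np.sign(log)
    edges = np.zeros(image.shape, dtype=bool)
    for axis in range(image.ndim):
        # A zero crossing: the LoG response changes sign between a
        # voxel and its neighbor along this axis.
        edges |= (signs * np.roll(signs, 1, axis=axis) < 0)
    # Suppress crossings of negligible magnitude (residual noise).
    edges &= np.abs(log) > 0.01 * np.abs(log).max()
    return edges

# Toy example: a bright disk on a dark background has edges on its rim.
yy, xx = np.mgrid[:64, :64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
rim = feature_edges(img)
```

The resulting binary edge map is then smoothed and inverted into the density field, so that mesh density is highest on and near the detected features.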

After designing the density field, a binary mask is computed from the original image by setting “one” inside the human anatomy and “zero” outside, to constrain the vertex positions inside or on the body during mesh vertex optimization. The mesh vertices are automatically computed by Algorithm

In order to control the entire volume image more effectively, the 8 bounding-box vertices of the human anatomy are added.

This section describes how our generated feature-based nonuniform meshes are used to reconstruct high-quality volumetric images via 3D-2D DIR.

The displacement vector of each voxel (
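A minimal sketch of one common realization of this step, interpolating a voxel's displacement from the vertex DVFs of its enclosing tetrahedron via barycentric coordinates (an assumed implementation detail, not quoted from the paper):

```python
import numpy as np

def barycentric_weights(tet, x):
    """Barycentric coordinates of point x inside tetrahedron `tet` (4x3):
    solve T @ w = x - v3, with the last weight being 1 - sum(w)."""
    v0, v1, v2, v3 = tet
    T = np.column_stack([v0 - v3, v1 - v3, v2 - v3])
    w = np.linalg.solve(T, x - v3)
    return np.append(w, 1.0 - w.sum())

def interpolate_dvf(tet, vertex_dvf, x):
    """Displacement of the voxel at x as the barycentric blend of the
    displacement vectors stored at the enclosing tetrahedron's vertices."""
    w = barycentric_weights(np.asarray(tet, float), np.asarray(x, float))
    return w @ np.asarray(vertex_dvf, float)

tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
dvf = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]  # toy vertex displacements
u = interpolate_dvf(tet, dvf, (0.25, 0.25, 0.25))   # -> [0.5, 0.5, 0.5]
```

Because the weights sum to one and vary linearly inside each tetrahedron, the interpolated DVF is continuous across shared faces of the mesh.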

Ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with objects. In the DRR generation, the Siddon ray tracing algorithm is applied [
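For illustration, a much-simplified fixed-step ray integral is sketched below; note that Siddon's actual algorithm traverses voxel boundaries exactly and accumulates true ray-voxel intersection lengths, which this nearest-neighbor sampling only approximates:

```python
import numpy as np

def drr_ray_integral(volume, src, det_point, step=0.5):
    """Line integral of voxel intensities along the ray from the source to
    a detector pixel, using fixed-step nearest-neighbor sampling (a crude
    stand-in for Siddon's exact intersection-length traversal)."""
    src, det = np.asarray(src, float), np.asarray(det_point, float)
    direction = det - src
    length = np.linalg.norm(direction)
    direction /= length
    total, t = 0.0, 0.0
    while t < length:
        idx = np.round(src + t * direction).astype(int)
        if np.all(idx >= 0) and np.all(idx < volume.shape):
            total += volume[tuple(idx)] * step   # intensity times path step
        t += step
    return total

vol = np.zeros((16, 16, 16))
vol[6:10, 6:10, 6:10] = 1.0      # a 4-voxel cube of unit density
# Ray through the cube along z: integral ~ cube thickness (4).
val = drr_ray_integral(vol, (8, 8, -20), (8, 8, 40))
```

A full DRR repeats this integral for every detector pixel, which is why the paper offloads it to the GPU.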

To better simulate realistic raw target CBCT projections from the XCAT phantom data and to test the sensitivity of our method to realistic complications, noise is added after the noise-free ray line integrals are computed.

The deformation is optimized by minimizing the total energy

The second term of the energy function in (
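The total energy referred to above takes the standard 3D-2D intensity-based form with a similarity term and a regularity term (the symbols below, including the weight \(\lambda\), are illustrative assumptions rather than the paper's exact definitions):

```latex
E(\mathbf{v}) \;=\;
\underbrace{\sum_{k=1}^{K}\bigl\|\,\mathrm{DRR}_k(I_0 \circ \mathbf{v}) - P_k \bigr\|_2^2}_{\text{similarity}}
\;+\;
\lambda \,\underbrace{R(\mathbf{v})}_{\text{regularity}}
```

where \(I_0\) is the planning CT, \(\mathbf{v}\) the DVF defined at the mesh vertices, \(P_k\) the \(k\)-th measured projection, and \(R\) a smoothness penalty on \(\mathbf{v}\).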

L-BFGS algorithm [

For each iteration of L-BFGS optimization, the energy

The DRRs

Using (

The entire process of this volumetric image reconstruction method was implemented on GPU. The GPU card used in our experiments is an NVIDIA GeForce GTX 780 Ti with 3 GB GDDR5 video memory. It has 2,880 CUDA cores with a clock speed of 1,006 MHz. Utilizing such a GPU card with tremendous parallel computing ability can significantly increase the computation efficiency. There are two time-consuming processes during the reconstruction. One is the DRR generation, and the other is the gradient computation of the similarity term in the total energy

Then the gradient of the similarity term, that is, (

Finally the total gradient for all projections with respect to displacement vector

Now it is clearly shown that there are two components in the gradient of the similarity term in (
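The two components named in (a) and (b) below combine by the chain rule over all voxels (our notation here is illustrative):

```latex
\frac{\partial E_{\text{sim}}}{\partial \mathbf{v}_j}
\;=\;
\sum_{i}
\underbrace{\frac{\partial E_{\text{sim}}}{\partial I(\mathbf{x}_i)}}_{\text{(a) w.r.t.\ voxel intensity}}
\;
\underbrace{\frac{\partial I(\mathbf{x}_i)}{\partial \mathbf{v}_j}}_{\text{(b) w.r.t.\ vertex displacement}}
```

Only voxels inside the tetrahedra incident to vertex \(j\) contribute nonzero terms, which is what makes the per-vertex gradient amenable to parallel computation.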

(a) Gradient computation of the similarity energy with respect to voxel intensity (

Subdividing the DRR image into small subgroups for GPU parallel computation.

To maximally utilize the GPU’s parallel computing power, the best subgroup size is 8 × 8 pixels for the XCAT data and 16 × 24 pixels for the head and neck (H&N) patient data, determined during the preprocessing step. The subgroup size is computed from the similar-triangle property, using the voxel scale, the pixel scale, the distance from the source to the volume, and the distance from the source to the DRR plane.

(b) Gradient of voxel intensity with respect to displacement vector of vertex

The CPU-based serial implementation of mesh-based 3D-2D registration method on XCAT data (256 × 256 × 132) with 60 projections (256 × 256) takes about 2.5 hours; after using the GPU-based parallel implementation, it takes about 3 minutes, which is about 50 times faster.

The size of H&N patient data used in this study is relatively large. CT volume data size is 512 × 512 × 140 and CBCT projection size is 1024 × 768. The reconstruction running time of CPU-based implementation on mesh-based 3D-2D registration method with 30 projections is about 12 hours, while the running time of GPU-based implementation is about 20 minutes, which is about 36 times faster than the CPU-based one. The multiresolution scheme is used to further improve the speed. In the experiment, both the CT volume image and CBCT images are downsampled into different resolution levels (three levels for experiments on H&N patient data), from the coarsest level (CT volume: 256 × 256 × 70, CBCT projection: 256 × 192, and time per iteration: 2.08 seconds) to the higher level (CT volume: 512 × 512 × 140, CBCT projection: 512 × 384, and time per iteration: 9.91 seconds) and finally to the full resolution level (CT volume: 512 × 512 × 140, CBCT projection: 1024 × 768, and time per iteration: 40.44 seconds). By using this strategy, the volumetric image reconstruction can be accomplished in 6.9 minutes, including 30 iterations of coarsest level, 15 iterations of higher level, and 5 iterations of full resolution level (about 60 times faster than CPU-based serial implementation). It is comparable to the fastest iterative CBCT techniques.
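The multiresolution schedule described above can be sketched as a simple image pyramid (the downsampling factors and smoothing sigma below are our assumptions):

```python
import numpy as np
from scipy import ndimage

def projection_pyramid(projection, factors=(0.5, 0.5)):
    """Coarse-to-fine schedule: Gaussian-smooth then downsample the CBCT
    projection by each factor, returning levels coarsest-first so the
    registration can run many cheap iterations before the expensive ones."""
    levels = [projection.astype(float)]
    for f in factors:
        smoothed = ndimage.gaussian_filter(levels[-1], sigma=1.0)
        levels.append(ndimage.zoom(smoothed, f, order=1))
    return levels[::-1]

proj = np.random.default_rng(1).random((1024, 768))
pyr = projection_pyramid(proj)   # shapes: (256,192), (512,384), (1024,768)
```

The optimized DVF from each coarse level initializes the next finer level, which is why only a handful of full-resolution iterations are needed.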

In order to further improve the computational speed of the proposed 3D-2D DIR method, in this section, we introduce a boundary-based 3D-2D DIR to obtain a good initial deformation; then the feature-based nonuniform meshing for 3D-2D DIR method, as mentioned in Section

After generating the feature-based nonuniform meshes (Section

(a) Original images; (b) the 0-1 binary images; (c) boundaries of an H&N patient data.
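A common way to obtain such surface voxels from the 0-1 binary image (an assumed implementation detail, using morphological erosion) is:

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boundary of a 0-1 binary mask: voxels that are inside the body but
    have at least one outside face neighbor, i.e. mask minus its erosion."""
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

# Toy example: the surface shell of a solid 6x6x6 cube in a 10^3 volume.
m = np.zeros((10, 10, 10), bool)
m[2:8, 2:8, 2:8] = True
shell = surface_voxels(m)        # 6^3 - 4^3 = 152 surface voxels
```

Only these shell voxels need to be projected and matched against the 2D projection boundaries, which is what makes the boundary-based registration cheap.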

In order to directly and conveniently control the updated positions of the deformed anatomy surface voxels, we prefer to use the splatting method [

Splatting projection.

For perspective projection with antialiasing consideration, the Gaussian kernel radius should have dynamic sizes, which can be calculated from similar triangles shown in Figure

Geometry of the perspective projection with antialiasing for splatting method.
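The dynamic kernel radius from the similar-triangle relation can be sketched as follows (the function name and the one-voxel base kernel are illustrative assumptions):

```python
def splat_radius(voxel_size, src_to_voxel, src_to_detector, kernel_voxels=1.0):
    """Footprint radius on the detector of a Gaussian splat kernel of
    `kernel_voxels` voxels, magnified by the perspective similar-triangle
    ratio: voxels closer to the source project to larger footprints."""
    magnification = src_to_detector / src_to_voxel
    return kernel_voxels * voxel_size * magnification

# A voxel halfway between source and detector is magnified 2x.
r = splat_radius(voxel_size=1.0, src_to_voxel=500.0, src_to_detector=1000.0)
```

Evaluating this per surface voxel gives the antialiased, depth-dependent footprints the figure illustrates.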

Another main advantage of the splatting method over ray tracing is its faster calculation speed: empty (nonsurface) voxels, which do not contribute to the final projection image, can easily be skipped, whereas this is difficult to achieve with ray tracing.

Note that if we directly project the volume surface voxels (without considering kernels) onto the image plane, some pixels that do not belong to the final 2D boundaries of the projection may be included, as shown at the top left of Figure

Computation of 2D projection boundaries of the deformed anatomy surface from an H&N patient data.

Demonstration of XCAT male phantom results. (a) Original image (Phase 1); (b) target image (Phase 4); (c) deformed image from Phase 1; (d) differences between deformed and target images.

The computed projections of 3D surface are compared with corresponding 2D projection boundaries from CBCT scans, and the primary DVF is iteratively optimized to obtain a good initial deformation for final volumetric image. The surface deformation is optimized by minimizing the total energy

L-BFGS algorithm is used to optimize the DVF

Because we do not need the exact DRRs of the deformed 3D anatomy image, resampling is not required; we focus instead on the updated voxel positions of the deformed 3D anatomy surface. We can then use the splatting method to compute the projections of the deformed volume surface.

After the above boundary-based registration, the primary DVF is obtained and then applied in further complete intensity-based DIR as the initial deformation. As a result, the final volumetric images are obtained by applying the optimized DVF to planning CT images.

The algorithms are implemented by using Microsoft Visual C++ 2010, MATLAB R2013a, and NVIDIA CUDA 5.5. For the hardware platform, the experiments are run on a desktop computer with Intel® Xeon E5645 CPU with 2.40 GHz, 34 GB DDR3 RAM, and NVIDIA GeForce GTX 780 Ti GPU with 3 GB GDDR5 video memory.

We evaluate and compare our proposed nonuniform tetrahedral meshing for 3D-2D DIR with other 3D-2D DIR methods, that is, voxel-based method [

This method is evaluated thoroughly using two sets of digital XCAT phantoms and H&N patient data. Taking the XCAT male phantom data as an example, two sets of 3D images, representing the same patient (phantom) at two different respiratory phases, are created. Both the beating heart and respiratory motions are considered, and to simulate a large deformation, the maximum diaphragm motion is set to 10 cm. Phase 1 and Phase 4 are shown in Figures

A conventional normalized cross correlation (NCC) is used to evaluate the similarity (i.e., the linear correlation) of 3D images and DVFs:

The normalized root mean square error (NRMSE) between the interpolated values

The range of the NRMSE is
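The two metrics can be sketched as follows (the normalization of the NRMSE by the reference's dynamic range is one common choice; the paper's exact normalization may differ):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation: linear correlation of two arrays,
    equal to 1.0 for images identical up to affine intensity scaling."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def nrmse(a, b):
    """Root mean square error normalized by the reference's dynamic range."""
    rmse = np.sqrt(np.mean((a - b) ** 2))
    return float(rmse / (b.max() - b.min()))

x = np.arange(100.0).reshape(10, 10)
val = ncc(x, 2 * x + 5)          # affine-invariant: ~1.0
```

Because NCC standardizes both inputs, it ignores global intensity shifts and scalings, while NRMSE penalizes them; reporting both therefore gives complementary views of registration quality.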

To demonstrate the advantage of using feature-based nonuniform mesh, results of three types of meshing methods are compared. These are (1) a bounding box uniform orthogonal grid in Figure

Three types of meshing for XCAT male phantom data. (a) A uniform orthogonal grid; (b) a uniform tetrahedron mesh; (c) a feature-based nonuniform tetrahedron mesh.

In order to apply the meshing-based method in the 3D-2D image registration framework, we have to compute the meshes first. The numbers of vertices of the tetrahedral meshes in the XCAT phantom data and the H&N patient data are all around 1,000; hence the execution time of the particle-based meshing method is the same for both. The H&N patient data (512 × 512 × 140) is larger than the XCAT data (256 × 256 × 132), so it takes more time in the preprocessing steps of image feature edge computation and density field computation for feature-based tetrahedral mesh generation. Compared with uniform orthogonal grid generation (5 seconds), isotropic tetrahedral mesh generation needs 10 seconds, and feature-based tetrahedral mesh generation needs additional time in preprocessing: (a) computing the image feature edges: 3.5 seconds (XCAT data) versus 15 seconds (H&N patient data); (b) computing the density field: 0.4 minutes (XCAT data) versus 1.5 minutes (H&N patient data); (c) running the particle-based meshing framework: 10 seconds. Both uniform and feature-based nonuniform tetrahedral meshes can be generated by our particle-based meshing approach as long as the desired density field is available (the density field of a uniform tetrahedral mesh is globally uniform). Once these meshes are generated, they are used by the 3D-2D DIR framework, and no additional mesh computation is required. It should be noted that the mesh generation can be done in advance, as soon as the planning CT is acquired, so the time for this preprocessing can be hidden from the image registration process.

Feature-based nonuniform tetrahedral meshes are created using approximately 1,000 vertices on the original CT image (Figure

To test the robustness and accuracy of the algorithm, another XCAT female phantom with more complicated motion is used, including a large deformation between respiratory Phase 1 and Phase 4 and a global translation of four voxels (about 4.68 mm) in the horizontal direction. Other configurations are the same as in the previous male phantom case. Figure

Demonstration of XCAT female phantom results. (a) Original image (Phase 1); (b) target image (Phase 4); (c) differences between deformed and target images at the beginning of optimization; (d) differences between deformed and target images at iteration 5; (e) differences between deformed and target images at iteration 100 (end of the optimization); (f) final deformed image from Phase 1.

While a conventional CBCT reconstruction requires hundreds of projections, the mesh-based algorithm uses far fewer projections since the information from the planning CT image is used. The number of projections may vary from case to case. Table

Digital XCAT male phantom study results using various numbers of projections.

Number of projections used | 10 | 20 | 30 | 60 | 90 | 120 |
---|---|---|---|---|---|---|

NCC of images | 0.9788 | 0.9812 | 0.9825 | 0.9855 | 0.9859 | 0.9860 |

NRMSE of images | 0.1952 | 0.1836 | 0.1769 | 0.1612 | 0.1587 | 0.1585 |

NCC of DVF | 0.7823 | 0.7946 | 0.8012 | 0.8150 | 0.8189 | 0.8201 |

NRMSE of DVF | 1.0055 | 0.9076 | 0.8525 | 0.8118 | 0.7959 | 0.7951 |

Comparisons of results from different meshing algorithms are shown in Tables

Evaluation of reconstruction accuracy based on a digital XCAT male phantom.

Uniform orthogonal grid | Uniform tetrahedron mesh | Nonuniform tetrahedron mesh | Nonuniform tetrahedron mesh | Voxel-based method | |
---|---|---|---|---|---|

Number of vertices | 1,050 | 987 | 1,005 | 10,004 | 8,650,762 |

NCC of images | 0.9829 | 0.9835 | 0.9855 | 0.9858 | 0.9872 |

NRMSE of images | 0.1749 | 0.1705 | 0.1612 | 0.1593 | 0.1514 |

NCC of DVF | 0.7690 | 0.7829 | 0.8150 | 0.8265 | 0.6061 |

NRMSE of DVF | 1.2366 | 1.0923 | 0.8118 | 0.8059 | 1.2484 |

Evaluation of reconstruction accuracy based on a digital XCAT female phantom.

Uniform orthogonal grid | Uniform tetrahedron mesh | Nonuniform tetrahedron mesh | Nonuniform tetrahedron mesh | Voxel-based method | |
---|---|---|---|---|---|

Number of vertices | 980 | 981 | 1,011 | 10,000 | 8,650,752 |

NCC of images | 0.9789 | 0.9792 | 0.9829 | 0.9846 | 0.9775 |

NRMSE of images | 0.1970 | 0.1954 | 0.1766 | 0.1682 | 0.2035 |

NCC of DVF | 0.7687 | 0.7680 | 0.7672 | 0.7649 | 0.6292 |

NRMSE of DVF | 1.3875 | 1.1358 | 0.9705 | 0.9777 | 1.0317 |

The similarity energy (

This feature-based nonuniform meshing image registration method has been tested on five clinical data sets from the head and neck cancer patients H&N01~H&N05. Figure

Results with different numbers of projections on H&N01 patient data.

Number of projections used | 10 | 20 | 30 | 60 | 90 |
---|---|---|---|---|---|

NCC of images | 0.8458 | 0.8459 | 0.8460 | 0.8460 | 0.8460 |

NRMSE of images | 0.4798 | 0.4794 | 0.4792 | 0.4792 | 0.4792 |

Comparison of three meshes on the data of five head and neck cancer patients.

Patients | Uniform orthogonal grid | Uniform tetrahedron mesh | Nonuniform tetrahedron mesh |
---|---|---|---|

H&N01 | |||

Number of vertices | 936 | 992 | 1,007 |

NCC | 0.8327 | 0.8358 | 0.8460 |

NRMSE | 0.4988 | 0.4938 | 0.4792 |

H&N02 | |||

Number of vertices | 1,040 | 990 | 1,000 |

NCC | 0.9036 | 0.9122 | 0.9134 |

NRMSE | 0.4182 | 0.4116 | 0.4084 |

H&N03 | |||

Number of vertices | 980 | 1,000 | 998 |

NCC | 0.8470 | 0.8471 | 0.8482 |

NRMSE | 0.4748 | 0.4743 | 0.4722 |

H&N04 | |||

Number of vertices | 1,001 | 1,000 | 1,000 |

NCC | 0.7756 | 0.7806 | 0.8111 |

NRMSE | 0.5841 | 0.5817 | 0.5567 |

H&N05 | |||

Number of vertices | 980 | 1,000 | 995 |

NCC | 0.7565 | 0.7690 | 0.7843 |

NRMSE | 0.6278 | 0.6092 | 0.5712 |

Demonstration of the feature-based mesh generation on the H&N01 patient data. (a) The original image; (b) extracted feature edges; (c) density field; (d) a 2D view of the interior meshes with color-mapping.

Demonstration of H&N01 patient results from axial view. (a) Original CT image; (b) deformed image; (c) target image (daily CBCT).

Three types of meshing for H&N01 cancer patient data. (a) A uniform orthogonal grid; (b) a uniform tetrahedron mesh; (c) a feature-based nonuniform tetrahedron mesh.

The H&N05 cancer patient data, which exhibits a large tumor deformation during treatment, is used to evaluate the effectiveness of the boundary-based 3D-2D DIR method. Figures

Demonstration of H&N05 patient data results. (a), (b) Differences between projection of the original/final surface voxel (white dots) and the target CBCT projection boundary (black dots); (c) original CT image; (d) reconstructed image from original CT image; (e) target image (daily CBCT).

The boundary-based 3D-2D DIR and the subsequent intensity-based 3D-2D DIR are evaluated for accuracy on the H&N05 cancer patient data. Both the NCC and NRMSE shown in Table

Evaluation of boundary-based DIR accuracy on an H&N cancer patient data.

Status | Initial | Boundary-based DIR | Full DIR |
---|---|---|---|

NCC of images | 0.7627 | 0.7875 | 0.7919 |

NRMSE of images | 0.5938 | 0.5569 | 0.5479 |

With the GPU-based implementation, taking this H&N cancer patient data as an example, our boundary-guided method takes 5.26 seconds per iteration, about 10 times faster than the non-boundary-guided method (59.75 seconds). The multiresolution scheme is used to further improve the speed of both the boundary-based 3D-2D DIR and the subsequent intensity-based 3D-2D DIR. In the experiment, only the CBCT images are downsampled into different resolution levels (three levels for the H&N05 patient data), from the coarsest level (CBCT projection: 256 × 192; boundary-based DIR: 1.37 seconds/iteration; intensity-based DIR: 3.15 seconds/iteration) to the intermediate level (CBCT projection: 512 × 384; boundary-based DIR: 2.11 seconds/iteration; intensity-based DIR: 15.01 seconds/iteration) and finally to the full resolution level (CBCT projection: 1024 × 768; boundary-based DIR: 5.26 seconds/iteration; intensity-based DIR: 59.75 seconds/iteration). At the same time, the proposed method needs fewer optimization iterations (35 compared with the original 50). The significant advantage of this method is that, instead of registering the whole image, we only need to register the surface voxels and projection boundaries, which involves far fewer voxels and pixels. In addition, in the H&N patient data case, we focus only on the 3D surface and 2D projection boundaries, so the effects of different image modalities can be ignored. By using both the GPU implementation and the multiresolution scheme, the volumetric image reconstruction of a 512 × 512 × 140 H&N cancer patient can be accomplished within 3 minutes (compared with 6.9 minutes for the original 3D-2D DIR in Section

The feature-based nonuniform meshing allows more sampling points to be placed in the important regions; thus the deformation can be controlled more precisely. With approximately the same number of sampling points (vertices), the feature-based nonuniform meshing method produces better registration results, with a larger NCC than the uniform orthogonal grid and the uniform tetrahedral meshes. While this improvement may seem small, it is important to note that the NCC is very close to 1 because only minor anatomic changes occur. The NRMSE measurement is also provided to represent the differences between deformed images and the ground truth images. On the H&N patient data, again, the feature-based nonuniform meshing method yields the highest registration accuracy among the various methods.

It is intuitive that more sampling points (10,000 versus 1,000) lead to better results. The voxel-based deformation provides the best image results on the XCAT male phantom but requires more than eight million sampling points. However, when the optimized DVF is compared with the ground truth DVF, the DVF from voxel-based deformation is significantly less similar to the ground truth than that of the feature-based nonuniform meshing method. Voxel-based deformation may yield a better image intensity result, but the resulting DVF represents an unrealistic anatomical mapping. This drawback of voxel-based deformation is due to its localized deformation. The feature-based meshing method overcomes this drawback and yields a more anatomically accurate DVF.

As for the mesh quality, before deformation, the nonuniform tetrahedral meshes are generated based on the smooth density field as introduced in Section

The repeated use of CBCT during a course of treatment can deliver a high extra imaging dose to patients. For example, if weekly CBCT pelvis scans are performed with the conventional fraction scheme, the total dose will be around 4.05 mSv/scan × 6 weeks = 24.3 mSv, and the total dose of head scans will be around 2.0 mSv/scan × 6 weeks = 12.0 mSv (Table

Moreover, the proposed boundary-based 3D-2D DIR method can substantially improve both the accuracy and the speed of reconstructing volumetric images by producing a good initial DVF. This will eventually lead to fast and safe daily volumetric imaging with a very small number of projections for image-guided radiation therapy or online adaptive radiation therapy. A limitation of this boundary-based method arises when the deformation happens mainly in the internal organs. In the case of the lung, its intensity is significantly different from that of the chest wall, so we may segment the lung and apply the proposed boundary-based 3D-2D DIR method focusing on the lung first.

In the future, our feature-based nonuniform meshing method may also be applied to 4D image registration. A CBCT scan acquires approximately 600 projections in a full rotation; if they are sorted into ten respiratory phases, the corresponding 4D simulation CT set can be used to generate a high-quality, full 4D CBCT image set without exposing the patient to additional imaging dose. Currently, the feature-based nonuniform meshing method has been applied to head and neck cancer patient data and achieved excellent results. In the future, we will investigate and determine the clinical accuracy of the method based on more patient data and statistical analysis in follow-up applications for other cancer sites, such as breast, lung, and prostate cancers.

The authors declare that there is no conflict of interest regarding the publication of this paper.

The authors would like to thank Drs. John Yordy, Susie Chen, and Lucien Nedzi in the Department of Radiation Oncology at UT Southwestern Medical Center for providing the head and neck cancer patients’ data. This project was partially supported by Cancer Prevention & Research Institute of Texas (CPRIT) under Grant nos. RP110329 and RP110562 and National Science Foundation (NSF) under Grant nos. IIS-1149737 and CNS-1012975.