A U-Net Approach to Apical Lesion Segmentation on Panoramic Radiographs

The purpose of this study was to assess the performance of an artificial intelligence (AI) algorithm based on a deep convolutional neural network (D-CNN) model for the segmentation of apical lesions on dental panoramic radiographs. A total of 470 anonymized panoramic radiographs were used to develop the D-CNN AI model, based on the U-Net algorithm (CranioCatch, Eskisehir, Turkey), for the segmentation of apical lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Eskisehir Osmangazi University. A U-Net implemented in PyTorch (version 1.4.0) was used for the segmentation of apical lesions. In the test data set, the AI model segmented 63 periapical lesions on 47 panoramic radiographs. The sensitivity, precision, and F1-score for the segmentation of periapical lesions at a 70% IoU threshold were 0.92, 0.84, and 0.88, respectively. AI systems have the potential to overcome clinical problems and may facilitate the assessment of periapical pathology on panoramic radiographs.


Introduction
Chronic apical periodontitis is an infection of the tissues surrounding the dental apex induced by pulpal disease, most often due to bacterial infection of the root canal system arising from untreated or inadequately treated dental caries [1][2][3]. Apical periodontitis is common, and its prevalence increases with age. Epidemiological studies have reported that apical periodontitis is present in 7% of teeth and 70% of the general population. The diagnosis of acute apical periodontitis is made clinically, but chronic apical periodontitis is detected radiographically [4]. In general, following root canal treatment, complete healing of periapical lesions is expected, or at least improvement in the form of a decrease in the size of the periapical lesion [1,5]. Radiographically, apical periodontitis manifests as a widened periodontal ligament space or visible lesions. Such radiolucencies, also called apical lesions, tend to be detected incidentally or during radiographic follow-up of endodontically treated teeth [6,7]. Radiolucency in radiographs is an important feature of apical periodontitis [2]. Apical periodontitis can be detected on periapical and panoramic radiographs and by cone-beam computed tomography (CBCT). CBCT has superior discriminatory power but is costly and exposes the patient to a higher radiation dose [6,8]. Periapical and panoramic radiographs are the techniques most frequently used in the diagnosis and treatment of apical lesions [2]. Panoramic radiography generates two-dimensional (2D) tomographic images of the entire maxillomandibular area [9], enabling the evaluation of all teeth simultaneously. It also requires a far lower radiation dose than CBCT imaging [6,10]. In addition, panoramic radiography is painless, unlike intraoral radiography, and is therefore well tolerated by patients [9,11].
Artificial intelligence (AI) is among the many recent technological advances whose applications are expanding rapidly, including in medical management and medical imaging [12]. AI uses computational networks (neural networks (NNs)) that mimic biological nervous systems [13]. NNs were among the first types of AI algorithms to be developed. The computing power of NNs varies depending on the character and amount of training data. Networks with many large layers are termed deep learning NNs [14]. Deep convolutional neural networks (D-CNNs) are used to process large and complex images [15]. Deep learning networks, including CNNs, have shown superior performance in object, face, and activity recognition [16]. Organ and lesion segmentation is an important application of medical imaging modalities [17,18]. The detection and classification performance of deep learning-based CNNs for diabetic retinopathy, skin cancer, and tuberculosis is very high [19,20]. CNNs have also been applied in dentistry for tooth detection and numbering, as well as for the assessment of periodontal bone loss and periapical pathology [21][22][23][24][25]. U-Net, a pixel-based image segmentation architecture built from CNN layers, is more successful than classical models even when few training images are available. The architecture was first demonstrated on biomedical images. The traditional U-Net architecture (which can also be extended to handle volumetric input) has two phases: an encoder path, in which the network learns representational features at multiple scales and aggregates contextual information, and a decoder path, in which the network combines this context with previously learned features to produce the segmentation. Skip connections between corresponding encoder and decoder layers allow the deep parts of the network to be trained efficiently and combine feature maps with different receptive fields [26].
This study aimed to assess the diagnostic performance of the U-Net approach for the segmentation of apical lesions on panoramic images.

Radiographic Data Preparation.
The panoramic radiographs used in the study were derived from the archives of the Faculty of Dentistry of Eskisehir Osmangazi University; 470 anonymized panoramic radiographs were included. The radiographs had been obtained between January 2018 and January 2019 for a variety of indications. Images with artifacts of any type were excluded. The study design was authorized by the Non-Interventional Clinical Research Ethics Committee of Eskisehir Osmangazi University (decision date and number: 06.08.2019/14). The study was conducted in accordance with the Declaration of Helsinki. The Planmeca Promax 2D (Planmeca, Helsinki, Finland) panoramic imaging system was used to obtain the radiographs with the following parameters: 68 kVp, 16 mA, and 13 s.

Image Annotation.
Three dental radiologists (I.S.B. and E.B., with 10 years of experience, and F.A.K., with 3 years of experience) annotated the ground truth on all images by consensus using CranioCatch Annotation software (Eskisehir, Turkey). Polygonal annotations were used to mark the locations of the apical lesions.
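For training a segmentation network, such polygonal annotations are typically rasterized into binary masks. The paper does not describe this step or the export format of the annotation tool, so the sketch below is purely illustrative: it assumes a polygon given as a list of (x, y) vertices and fills it with even-odd ray casting.

```python
# Hypothetical sketch: rasterizing a polygonal lesion annotation into a
# binary mask. The (x, y) vertex format is an assumption, not the
# CranioCatch export format.
def point_in_polygon(x, y, poly):
    # Even-odd rule ray casting: count edge crossings to the right of (x, y).
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_to_mask(poly, width, height):
    # Test each pixel centre against the polygon; 1 = lesion, 0 = background.
    return [[1 if point_in_polygon(c + 0.5, r + 0.5, poly) else 0
             for c in range(width)] for r in range(height)]
```

The resulting per-image masks would then serve as the ground-truth targets for the pixel-wise segmentation loss.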
2.3. Deep CNN Architecture. Deep learning was performed using a U-Net implemented in PyTorch (version 1.4.0). The U-Net architecture is used for semantic segmentation tasks (Figure 1).
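A minimal PyTorch sketch of a U-Net of the kind used in this study is given below. The block depth (four levels), filter counts (32 to 512), batch normalization, ReLU activations, max pooling, up-convolutions, and skip connections follow the description in this section; everything else (kernel sizes, padding, channel handling) is an assumption, not the authors' implementation.

```python
# Illustrative U-Net sketch; hyperparameters other than those stated in the
# paper (filter counts, block depth, BN+ReLU, max pool, up-convolutions,
# skip connections) are assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, each followed by batch normalization and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        # Encoder: four block levels with 32, 64, 128, and 256 filters.
        self.enc = nn.ModuleList([conv_block(a, b) for a, b in
                                  [(in_ch, 32), (32, 64), (64, 128), (128, 256)]])
        self.pool = nn.MaxPool2d(2)
        # Bottleneck with 512 filters.
        self.bottleneck = conv_block(256, 512)
        # Decoder: up-convolutions, then conv blocks over the concatenation
        # of the upsampled features and the matching skip connection.
        self.up = nn.ModuleList([nn.ConvTranspose2d(c, c // 2, 2, stride=2)
                                 for c in [512, 256, 128, 64]])
        self.dec = nn.ModuleList([conv_block(c, c // 2)
                                  for c in [512, 256, 128, 64]])
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)

    def forward(self, x):
        skips = []
        for block in self.enc:
            x = block(x)
            skips.append(x)  # saved for the skip connection
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([skip, x], dim=1))
        return self.head(x)  # per-pixel lesion logits
```

Such a model can be trained with the Adam optimizer, e.g. `torch.optim.Adam(model.parameters())`, and a pixel-wise loss against the binary lesion masks; input sides must be divisible by 16 for the four pooling steps.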
The U-Net architecture consists of four block levels, each comprising two convolutional layers with batch normalization and a rectified linear unit (ReLU) activation function. Each block in the encoding section is followed by a max-pooling layer, and each block in the decoding section is preceded by an up-convolution layer. The blocks contain 32, 64, 128, and 256 convolutional filters, respectively, and the bottleneck layer comprises 512 convolutional filters. Skip connections from the encoding layers to the corresponding decoding layers are present in the decoding part [26]. The Adam optimizer was used to train the U-Net. The PyTorch U-Net segmentation model was trained for 95 epochs; the model at epoch 43 showed the best performance and was thus used in the experiments. The model pipeline is summarized in Figure 2.

2.5. Statistical Analysis. The confusion matrix was used to assess the performance of the model. This matrix is a table that summarizes the predicted and actual outcomes; model performance is frequently assessed using the data in the confusion matrix [27]. The metrics used to evaluate the success of the model were as follows: sensitivity (recall) = TP / (TP + FN), precision = TP / (TP + FP), and F1-score = 2TP / (2TP + FN + FP), where TP, FP, and FN denote true positives, false positives, and false negatives, respectively.

Twelve apical lesions were not detected (false negatives). In 5 cases without apical lesions, lesions were nevertheless segmented by the AI model (false positives) (Table 1).
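As an illustration of this evaluation, the sketch below matches predicted lesion masks to ground-truth masks at an IoU threshold of 0.70 and computes sensitivity, precision, and F1 from the resulting counts. The greedy one-to-one matching strategy is an assumption; the paper states only the IoU threshold and the metric definitions.

```python
# Illustrative evaluation sketch; the greedy matching is an assumption.
def iou(mask_a, mask_b):
    # Intersection over union of two binary masks (nested lists, same shape).
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    return inter / union if union else 0.0

def evaluate(predictions, ground_truths, threshold=0.70):
    # Greedily match each prediction to an unmatched ground-truth lesion
    # whose IoU meets the threshold; unmatched predictions are false
    # positives, unmatched ground-truth lesions are false negatives.
    matched = set()
    tp = 0
    for pred in predictions:
        best, best_iou = None, threshold
        for i, gt in enumerate(ground_truths):
            if i not in matched:
                score = iou(pred, gt)
                if score >= best_iou:
                    best, best_iou = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(predictions) - tp
    fn = len(ground_truths) - tp
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return sensitivity, precision, f1
```

Applied to a test set, `evaluate` returns the sensitivity, precision, and F1-score in the same form as reported in the results.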
Tuzoff et al. presented a CNN algorithm for automatic tooth detection and numbering on panoramic radiographs. They reported sensitivity and specificity values for tooth numbering of 0.9893 and 0.9997, respectively. These findings demonstrated the ability of current CNN architectures to support automatic radiographic interpretation and diagnosis on panoramic radiographs [25]. Chen et al. detected and numbered teeth on dental periapical films using a faster region proposal CNN (faster R-CNN). The faster R-CNN performed remarkably well for tooth detection and localization, showing good precision and recall and overall performance comparable to that of a junior dentist [24]. Miki et al. assessed the utility of a deep CNN for classifying teeth on dental CBCT images; the accuracy was 91.0%. Such a system can rapidly and automatically produce dental charts for forensic identification [38]. Two previous studies investigated the utility of AI systems for detecting periapical lesions. Ekert et al. investigated the capability of a deep CNN algorithm to detect apical lesions on dental panoramic radiographs; the CNNs detected the lesions despite the small data set [6]. Orhan

Conclusions
Deep learning AI models enable the evaluation of periapical pathology based on panoramic radiographs. The application

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.