Show simple item record

dc.contributor.author  Sünnetçi, Kubilay Muhammed
dc.contributor.author  Kaba, Esat
dc.contributor.author  Çeliker, Fatma Beyazal
dc.contributor.author  Alkan, Ahmet
dc.date.accessioned  2023-09-26T07:45:03Z
dc.date.available  2023-09-26T07:45:03Z
dc.date.issued  2023  en_US
dc.identifier.citation  Sunnetci, K. M., Kaba, E., Celiker, F. B., & Alkan, A. (2023). Deep network-based comprehensive parotid gland tumor detection. Academic Radiology, S1076-6332(23)00226-X. Advance online publication. https://doi.org/10.1016/j.acra.2023.04.028  en_US
dc.identifier.issn  1076-6332
dc.identifier.uri  https://doi.org/10.1016/j.acra.2023.04.028
dc.identifier.uri  https://hdl.handle.net/11436/8384
dc.description.abstract  Rationale and Objectives: Salivary gland tumors constitute 2%-6% of all head and neck tumors and are most common in the parotid gland. Magnetic resonance (MR) imaging is the most sensitive imaging modality for their diagnosis. Tumor type, localization, and relationship with surrounding structures are important factors for treatment planning, so parotid gland tumor segmentation is important. Specialists widely use manual segmentation in diagnosis and treatment; however, given the current development of artificial intelligence-based models, automatic segmentation models can replace manual segmentation, which is time-consuming. In this paper, we therefore segment parotid gland tumors (PGTs) using deep learning-based architectures. Materials and Methods: The dataset used in the study includes 102 T1-w, 102 contrast-enhanced T1-w (T1C-w), and 102 T2-w MR images. After cropping the raw images and the images manually segmented by experts, we obtained the masks of these images. After standardizing the image sizes, we split the images into approximately 80% training and 20% test sets. We then trained six models on these images using ResNet18- and Xception-based DeepLab v3+, and we prepared a user-friendly Graphical User Interface application that includes each of these models. Results: The accuracy and weighted Intersection over Union (IoU) values of the ResNet18-based DeepLab v3+ architecture trained on T1C-w images, the most successful model in the study, are 0.96153 and 0.92601, respectively. Considering the results and the literature, the proposed system is competitive both in using MR images and in training the models independently for T1-w, T1C-w, and T2-w. Since PGTs are usually segmented manually in the literature, we expect our study to contribute significantly to it.
Conclusion: In this study, we prepared and presented a software application that users can easily apply for automatic PGT segmentation. Beyond the expected reduction of costs and workload, we developed models whose performance metrics are meaningful relative to the literature.  en_US
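The two metrics reported in the abstract (pixel accuracy and weighted IoU) can be computed from a predicted segmentation mask and its ground-truth mask. The sketch below is a minimal illustration, not the paper's code: it assumes integer label arrays with binary tumor/background classes, and `weighted_iou` follows the common definition that averages per-class IoU weighted by each class's share of ground-truth pixels.

```python
import numpy as np

def global_accuracy(pred: np.ndarray, target: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground-truth mask."""
    return float(np.mean(pred == target))

def weighted_iou(pred: np.ndarray, target: np.ndarray, num_classes: int = 2) -> float:
    """Per-class IoU averaged with weights proportional to each class's
    pixel count in the ground-truth mask (classes absent from both masks
    are skipped)."""
    ious, weights = [], []
    total = target.size
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:          # class absent everywhere: no IoU defined
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
        weights.append(t.sum() / total)
    return float(np.sum(np.array(ious) * np.array(weights)))

# Toy 2x2 example: three of four pixels are labeled correctly.
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
print(global_accuracy(pred, target))  # 0.75
print(weighted_iou(pred, target))     # 0.5*0.25 + (2/3)*0.75 = 0.625
```

On a real evaluation these functions would be applied per test image and averaged, which is one plausible way the study's 0.96153 accuracy and 0.92601 weighted IoU could be aggregated.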
dc.language.iso  eng  en_US
dc.publisher  Elsevier  en_US
dc.rights  info:eu-repo/semantics/closedAccess  en_US
dc.subject  Deep learning  en_US
dc.subject  Magnetic resonance imaging  en_US
dc.subject  Parotid gland tumor  en_US
dc.subject  Segmentation  en_US
dc.title  Deep network-based comprehensive parotid gland tumor detection  en_US
dc.type  article  en_US
dc.contributor.department  RTEÜ, Faculty of Medicine, Department of Internal Medical Sciences  en_US
dc.contributor.institutionauthor  Kaba, Esat
dc.contributor.institutionauthor  Çeliker, Fatma Beyazal
dc.identifier.doi  10.1016/j.acra.2023.04.028  en_US
dc.relation.journal  Academic Radiology  en_US
dc.relation.publicationcategory  Article - International Refereed Journal - Institutional Faculty Member  en_US


Files in this item:

Files  Size  Format  View

There are no files associated with this item.

This item appears in the following collection(s).
