Scientific Journal

Applied Aspects of Information Technology

A brain tumor is a severe human disease, and its timely diagnosis and the determination of the tumor type are pressing tasks in modern medicine. Recently, segmentation methods applied to 3D brain images (computed tomography and magnetic resonance imaging) have been used to determine the tumor type. Nevertheless, segmentation is usually performed manually, which takes considerable time and depends on the experience of the physician. This paper examines the possibility of creating a method for the automatic segmentation of images. A medical database of brain MRI scans with three tumor types (meningioma, glioma, and pituitary tumor) was taken as the training sample. Across the different slices, the database contained 708 examples of meningioma, 1426 examples of glioma, and 930 examples of pituitary tumor. The database authors marked the regions of interest on each image, and these annotations served as the supervision signal (supervised learning) for the automatic segmentation model. Before building the model, currently popular automatic segmentation models were analyzed, and the U-Net deep convolutional neural network architecture was chosen as the most suitable one. The resulting model segments the image correctly in 74 % of the 600 images in the testing sample. After obtaining the automatic segmentation model, Random Forest models for three "One versus All" tasks and one multiclass task were created for brain tumor classification. Before model creation, the total sample was divided into training (70 %), testing (20 %), and examining (10 %) subsets. The accuracy of the models on the examining sample ranges from 84 to 94 percent. The classification models used texture features obtained by a texture analysis method developed by co-authors at the Department of Biomedical Cybernetics for the task of liver ultrasound image classification; these were compared with the well-known Haralick texture features.
The comparison showed that the best way to achieve an accurate classification model is to combine all the features into one stack.
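To illustrate the kind of co-occurrence-based texture features the comparison refers to, the following is a minimal NumPy sketch of a gray-level co-occurrence matrix (GLCM) and a small subset of the classic Haralick statistics. The function names (`glcm`, `haralick_subset`), the single pixel offset (1, 0), the 8-level quantization, and the choice of three features are illustrative assumptions, not the authors' exact feature-extraction pipeline.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    # Quantize the image into `levels` gray levels.
    q = (image.astype(float) / (image.max() + 1e-9) * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    # Count co-occurrences of gray levels at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_subset(p):
    """Three classic Haralick features computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)          # local intensity variation
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))  # closeness to the diagonal
    energy = np.sum(p ** 2)                      # uniformity of the texture
    return contrast, homogeneity, energy

# Toy usage: features of a horizontal-gradient patch.
patch = np.tile(np.arange(16), (16, 1))
features = haralick_subset(glcm(patch))
```

Feature vectors of this kind (one set per region of interest) can then be stacked column-wise with other feature families before training a Random Forest classifier, which matches the "combine all the features into one stack" conclusion above.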
© Odessa National Polytechnic University, 2018.