Scientific Journal

Applied Aspects of Information Technology

INTELLIGENT SYSTEM BASED ON A CONVOLUTIONAL NEURAL NETWORK FOR IDENTIFYING PEOPLE WITHOUT BREATHING MASKS
Abstract:
The COVID-19 pandemic is having a huge impact on people and communities. Many organizations face significant disruptions that require an immediate response. In the absence of an effective antiviral vaccine, social distancing, breathing masks, and eye protection play an important role as preventive measures against the spread of COVID-19. A ban on unmasked shopping in supermarkets and shopping malls is mandatory in most countries. However, with a large number of shoppers, security staff cannot check every visitor for a breathing mask, so intelligent automation tools are needed to support their work. In this regard, the paper proposes a timely solution: an intelligent system for identifying people without breathing masks. The proposed system works in conjunction with a video surveillance system, whose structure includes video cameras, recorders (hard disk drives), and monitors. The cameras film the sales areas and transmit the video to the recording devices, which in turn record what is happening and display the feed from the cameras directly on the monitor. The main idea of the proposed solution is to use an intelligent system to classify images periodically received from the surveillance cameras. The developed classifier divides the image stream into two classes: the first is "a person in a breathing mask" and the second is "a person without a breathing mask". When an image of the second class appears, that is, when a person has removed a breathing mask or entered the supermarket without one, the security service immediately receives a message indicating the problem area. The image classification system is based on the VGG-16 convolutional neural network; in practice, this architecture performs well when classifying highly similar images.
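A classifier of the kind the abstract describes can be sketched with TensorFlow's Keras API. This is a minimal illustration, not the authors' exact configuration: the input size, the frozen backbone, and the dense-head dimensions are assumptions (`weights=None` keeps the snippet self-contained; practical transfer learning would pass `weights="imagenet"` instead).

```python
import tensorflow as tf

# Minimal sketch of a VGG-16-based binary classifier
# ("person in a mask" vs. "person without a mask").
base = tf.keras.applications.VGG16(
    weights=None,           # in practice: "imagenet" for transfer learning
    include_top=False,      # drop VGG-16's original 1000-class head
    input_shape=(224, 224, 3),
)
base.trainable = False      # freeze the convolutional backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(person without mask)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

A single sigmoid output is enough here because the task is binary; a two-unit softmax head would be an equivalent design choice.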
The neural network model was trained using the Google Colab cloud service, a free service based on Jupyter Notebook. The trained model is built on TensorFlow, an open-source machine learning platform. The effectiveness of the proposed solution is confirmed by the correct processing of a dataset obtained in practice; the classification accuracy reaches up to 90 %.
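The alerting step the abstract describes, a message to the security service whenever the "person without a breathing mask" class appears, can be sketched as a simple thresholding function. The function name, the 0.5 threshold, and the message format are illustrative assumptions, not details from the paper.

```python
def check_frame(camera_id: str, p_no_mask: float, threshold: float = 0.5):
    """Map one frame's classifier score to an optional alert message.

    p_no_mask is the model's predicted probability that the frame shows
    a person without a breathing mask; camera_id names the problem area.
    """
    if p_no_mask >= threshold:
        return f"ALERT: person without a breathing mask at camera {camera_id}"
    return None
```

In deployment this check would run on each image periodically sampled from the surveillance cameras, and non-None results would be forwarded to the security service's monitor.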
DOI: 10.15276/aait.03.2020.2


Received 02.08.2020
Received after revision 15.09.2020
Accepted 21.09.2020
