Real-Time Vehicle Detection for Surveillance of River Dredging Areas Using Convolutional Neural Networks

Full Text (PDF, 1142 KB), pp. 17-28

Author(s)

Mohammed Abduljabbar Zaid Al Bayati 1, Muhammet Cakmak 2,*

1. Karabuk University / Computer Engineering, Karabuk, Turkey

2. Karabuk University / Electrical and Electronics Engineering, Karabuk, Turkey

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2023.05.02

Received: 21 Apr. 2023 / Revised: 27 May 2023 / Accepted: 6 Jul. 2023 / Published: 8 Oct. 2023

Index Terms

River dredging, Automated surveillance, Vehicle detection, CNN, Scale invariant

Abstract

The presence of illegal activities such as illegitimate mining and sand theft in river dredging areas leads to economic losses. However, manual monitoring is expensive and time-consuming, so automated surveillance systems, which are accurate and available at all times, are preferred to mitigate such activities. Surveillance of river dredging areas involves two essential steps: vehicle detection and license plate recognition. Most current frameworks for vehicle detection employ plain feed-forward Convolutional Neural Networks (CNNs) as backbone architectures. However, these backbones are scale-sensitive and cannot handle variations in vehicle scale across consecutive video frames. To address this issue, a Scale-Invariant Hybrid Convolutional Neural Network (SIH-CNN) architecture is proposed for real-time vehicle detection in this study. The publicly available UA-DETRAC benchmark is used to validate the performance of the proposed architecture. Results show that the proposed SIH-CNN model achieves a mean average precision (mAP) of 77.76% on UA-DETRAC, 3.94% higher than the baseline detector, with real-time performance of 48.4 frames per second.
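
The abstract describes the scale-sensitivity problem only at a high level, and the paper's code is not reproduced on this page. As a rough illustration of how a hybrid block with several receptive-field sizes can reduce a plain feed-forward backbone's scale sensitivity, below is a minimal PyTorch-style sketch; the name MultiScaleBlock, the dilation rates, and the channel sizes are hypothetical stand-ins, not the authors' SIH-CNN.

    # Hypothetical sketch (not the authors' released code): parallel 3x3
    # convolution branches with different dilation rates see the same input
    # at several effective scales; their outputs are concatenated and mixed
    # by a 1x1 projection, so small and large vehicles can activate the
    # same output channels.
    import torch
    import torch.nn as nn

    class MultiScaleBlock(nn.Module):
        def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
            super().__init__()
            branch_ch = out_ch // len(dilations)
            self.branches = nn.ModuleList([
                nn.Sequential(
                    # padding=d with dilation=d keeps the spatial size fixed
                    nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d, bias=False),
                    nn.BatchNorm2d(branch_ch),
                    nn.ReLU(inplace=True),
                )
                for d in dilations
            ])
            # 1x1 projection mixes the concatenated multi-scale features
            self.project = nn.Conv2d(branch_ch * len(dilations), out_ch, 1)

        def forward(self, x):
            return self.project(torch.cat([b(x) for b in self.branches], dim=1))

    # Quick shape check on a dummy feature map from an early backbone stage
    feat = torch.randn(1, 64, 80, 80)
    block = MultiScaleBlock(64, 128)
    print(block(feat).shape)  # torch.Size([1, 128, 80, 80])

A note on the reported numbers: if the 3.94% gain is read as percentage points, the baseline detector's mAP works out to 77.76 - 3.94 = 73.82% under the same UA-DETRAC protocol.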

Cite This Paper

Mohammed Abduljabbar Zaid Al Bayati, Muhammet Çakmak, "Real-Time Vehicle Detection for Surveillance of River Dredging Areas Using Convolutional Neural Networks", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.15, No.5, pp. 17-28, 2023. DOI: 10.5815/ijigsp.2023.05.02

Reference

[1]L. He, S. Wen, L. Wang, and F. Li, “Vehicle theft recognition from surveillance video based on spatiotemporal attention,” Applied Intelligence, vol. 51, no. 4, pp. 2128–2143, Apr. 2021, doi: 10.1007/s10489-020-01933-8.
[2]J. D. Trivedi, S. D. Mandalapu, and D. H. Dave, “Vision-based real-time vehicle detection and vehicle speed measurement using morphology and binary logical operation,” J Ind Inf Integr, vol. 27, p. 100280, May 2022, doi: 10.1016/J.JII.2021.100280.
[3]P. J. Recky, “Total Solution For Smart Traffic and Toll Roads Management in Indonesia,” Devotion Journal of Community Service, vol. 3, no. 2, pp. 149–157, Dec. 2021, doi: 10.36418/DEV.V3I2.119.
[4]Z. Wang, J. Huang, N. N. Xiong, X. Zhou, X. Lin, and T. L. Ward, “A Robust Vehicle Detection Scheme for Intelligent Traffic Surveillance Systems in Smart Cities,” IEEE Access, vol. 8, pp. 139299–139312, 2020, doi: 10.1109/ACCESS.2020.3012995.
[5]J. S. Chou and C. H. Liu, “Automated Sensing System for Real-Time Recognition of Trucks in River Dredging Areas Using Computer Vision and Convolutional Deep Learning,” Sensors, vol. 21, no. 2, p. 555, Jan. 2021, doi: 10.3390/s21020555.
[6]H. A. Saad and E. H. Habib, “Assessment of Riverine Dredging Impact on Flooding in Low-Gradient Coastal Rivers Using a Hybrid 1D/2D Hydrodynamic Model,” Frontiers in Water, vol. 3, p. 628829, Mar. 2021, doi: 10.3389/frwa.2021.628829.
[7]A. Xu, L. E. Yang, W. Yang, and H. Chen, “Water conservancy projects enhanced local resilience to floods and droughts over the past 300 years at the Erhai Lake basin, Southwest China,” Environmental Research Letters, vol. 15, no. 12, p. 125009, Dec. 2020, doi: 10.1088/1748-9326/ABC588.
[8]D. Kusumaningrum, T. A. Hafsari, and L. Syam, “Sand and The City: The historical geography of sand mining in Jeneberang River and its relation to urban development in South Sulawesi,” ETNOSIA : Jurnal Etnografi Indonesia, vol. 6, no. 2, pp. 200–216, Nov. 2021, doi: 10.31947/ETNOSIA.V6I2.17918.
[9]J. S. Chou and Y. C. Chiu, “Identifying critical risk factors and responses of river dredging projects for knowledge management within organisation,” J Flood Risk Manag, vol. 14, no. 1, p. e12690, Mar. 2021, doi: 10.1111/JFR3.12690.
[10]Z. Yang and L. S. C. Pun-Cheng, “Vehicle detection in intelligent transportation systems and its applications under varying environments: A review,” Image Vis Comput, vol. 69, pp. 143–154, Jan. 2018, doi: 10.1016/J.IMAVIS.2017.09.008.
[11]S. Sivaraman and M. M. Trivedi, “Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 4, pp. 1773–1795, Dec. 2013, doi: 10.1109/TITS.2013.2266661.
[12]A. Alarbi, Z. Albayrak, and A. Forestiero, “Core Classifier Algorithm: A Hybrid Classification Algorithm Based on Class Core and Clustering,” Applied Sciences, vol. 12, no. 7, p. 3524, Mar. 2022, doi: 10.3390/app12073524.
[13]A. H. Ahmed, H. B. Alwan, and M. Çakmak, “Convolutional Neural Network-Based Lung Cancer Nodule Detection Based on Computer Tomography,” Lecture Notes in Networks and Systems, vol. 572, pp. 89–102, 2023, doi: 10.1007/978-981-19-7615-5_8.
[14]K. W. Al-Mansoori and M. Cakmak, “Automatic Speech Recognition (ASR) System using convolutional and Recurrent neural Network Approach,” HORA 2022 - 4th International Congress on Human-Computer Interaction, Optimization and Robotic Applications, Proceedings, 2022, doi: 10.1109/HORA55278.2022.9799877.
[15]H. C. Altunay and Z. Albayrak, “A hybrid CNN+LSTM-based intrusion detection system for industrial IoT networks,” Engineering Science and Technology, an International Journal, vol. 38, p. 101322, Feb. 2023, doi: 10.1016/J.JESTCH.2022.101322.
[16]R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580–587.
[17]R. Girshick, “Fast R-CNN,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448.
[18]S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” Advances in Neural Information Processing Systems, vol. 28, 2015.
[19]J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788. [Online]. Available: http://pjreddie.com/yolo/
[20]W. Liu et al., “SSD: Single Shot MultiBox Detector,” Lecture Notes in Computer Science, vol. 9905, pp. 21–37, 2016, doi: 10.1007/978-3-319-46448-0_2.
[21]J. Redmon and A. Farhadi, “YOLO9000: Better, Faster, Stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7263–7271. [Online]. Available: http://pjreddie.com/yolo9000/
[22]M. H. Ashraf, F. Jabeen, H. Alghamdi, M. S. Zia, and M. S. Almutairi, “HVD-Net: A Hybrid Vehicle Detection Network for Vision-Based Vehicle Tracking and Speed Estimation,” Journal of King Saud University - Computer and Information Sciences, vol. 35, no. 8, p. 101657, Sep. 2023, doi: 10.1016/J.JKSUCI.2023.101657.
[23]H. Alghamdi and T. Turki, “PDD-Net: Plant Disease Diagnoses Using Multilevel and Multiscale Convolutional Neural Network Features,” Agriculture, vol. 13, no. 5, p. 1072, May 2023, doi: 10.3390/agriculture13051072.
[24]C. Kyrkou, “YOLOpeds: efficient real-time single-shot pedestrian detection for smart camera applications,” IET Computer Vision, vol. 14, no. 7, pp. 417–425, Oct. 2020, doi: 10.1049/IET-CVI.2019.0897.
[25]T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal Loss for Dense Object Detection,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2980–2988.
[26]L. Wang, Y. Lu, H. Wang, Y. Zheng, H. Ye, and X. Xue, “Evolving boxes for fast vehicle detection,” Proc (IEEE Int Conf Multimed Expo), pp. 1135–1140, Aug. 2017, doi: 10.1109/ICME.2017.8019461.
[27]X. Hu et al., “SINet: A Scale-Insensitive Convolutional Neural Network for Fast Vehicle Detection,” IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 3, pp. 1010–1019, Mar. 2019, doi: 10.1109/TITS.2018.2838132.
[28]F. Zhang, F. Yang, C. Li, and G. Yuan, “CMNet: A connect-and-merge convolutional neural network for fast vehicle detection in urban traffic surveillance,” IEEE Access, vol. 7, pp. 72660–72671, 2019, doi: 10.1109/ACCESS.2019.2919103.
[29]L. Chen, F. Ye, Y. Ruan, H. Fan, and Q. Chen, “An algorithm for highway vehicle detection based on convolutional neural network,” EURASIP Journal on Image and Video Processing, vol. 2018, no. 1, pp. 1–7, Dec. 2018, doi: 10.1186/s13640-018-0350-2.
[30]Y. Xiang, W. Choi, Y. Lin, and S. Savarese, “Subcategory-Aware convolutional neural networks for object proposals & detection,” Proceedings - 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017, pp. 924–933, May 2017, doi: 10.1109/WACV.2017.108.
[31]H. Haritha and S. K. Thangavel, “A modified deep learning architecture for vehicle detection in traffic monitoring system,” International Journal of Computers and Applications, vol. 43, no. 9, pp. 968–977, 2019, doi: 10.1080/1206212X.2019.1662171.
[32]X. Wu, X. Chen, and J. Zhou, “C-CNN: Cascaded convolutional neural network for small deformable and low contrast object localization,” Communications in Computer and Information Science, vol. 771, pp. 14–24, 2017, doi: 10.1007/978-981-10-7299-4_2.
[33]Z. Xia and J. Kim, “Mixed spatial pyramid pooling for semantic segmentation,” Appl Soft Comput, vol. 91, p. 106209, Jun. 2020, doi: 10.1016/J.ASOC.2020.106209.
[34]J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object Detection via Region-based Fully Convolutional Networks,” Adv Neural Inf Process Syst, vol. 29, 2016, Accessed: Jul. 29, 2023. [Online]. Available: https://github.com/daijifeng001/r-fcn.
[35]L. Wen et al., “UA-DETRAC: A new benchmark and protocol for multi-object detection and tracking,” Computer Vision and Image Understanding, vol. 193, p. 102907, Apr. 2020, doi: 10.1016/J.CVIU.2020.102907.