A Hybrid Approach for Real-time Vehicle Monitoring System


Author(s)

Pankaj Pratap Singh 1, Shitala Prasad 2,*

1. Department of Computer Science & Engineering, Central Institute of Technology Kokrajhar, Kokrajhar, India

2. Institute for Infocomm Research, A*STAR, Singapore, Singapore

* Corresponding author.

DOI: https://doi.org/10.5815/ijem.2024.01.01

Received: 6 Aug. 2023 / Revised: 1 Oct. 2023 / Accepted: 20 Nov. 2023 / Published: 8 Feb. 2024

Index Terms

Vehicle Detection and Classification, Blob Analysis, Morphology, Segmentation, Object Detection

Abstract

With the rapid growth in the number of vehicles on the roads, there is a pressing need for an efficient system to monitor them. Such a system helps minimize the chance of faults and facilitates timely human intervention when required. The proposed method detects vehicles through background subtraction, combining the strengths of several techniques into a comprehensive vehicle monitoring solution. In surveillance of moving objects, the first step is to detect and track those objects. For vehicle segmentation, we employ background subtraction, which separates foreground objects from the background, and then apply a combination of morphological operations to isolate the most prominent regions in the video sequence. Advances in vision-related technologies have made object detection and image classification valuable tools for monitoring moving vehicles. Moving-object detection methods are central to the real-time extraction of vehicles from surveillance videos captured by street cameras; they also remove background information while filtering out noisy data. In our study, the background subtraction technique continuously updates the background image, which improves the overall performance of vehicle detection and monitoring.
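The pipeline the abstract describes (a continuously updated background model, foreground extraction by differencing, and blob analysis of the resulting mask) can be sketched in a few lines of plain Python. This is a minimal illustrative sketch, not the paper's implementation: the running-average rate `alpha`, the difference threshold, and the toy frames are assumptions chosen for clarity, and connected-component counting stands in for the full morphology-plus-blob-analysis stage.

```python
# Sketch: running-average background model, frame differencing, blob counting.
# Frames are 2D lists of grayscale intensities; all values here are illustrative.

def update_background(bg, frame, alpha=0.1):
    """Blend the new frame into the background model (running average)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels whose difference from the background exceeds the threshold."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def count_blobs(mask):
    """Count 4-connected foreground regions (a stand-in for blob analysis)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]  # flood-fill this region
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return blobs

if __name__ == "__main__":
    bg = [[0.0] * 8 for _ in range(4)]           # static background
    frame = [row[:] for row in bg]
    frame[1][1] = frame[1][2] = 200.0            # first "vehicle"
    frame[2][5] = frame[2][6] = 180.0            # second "vehicle"
    mask = foreground_mask(bg, frame, thresh=30)
    print(count_blobs(mask))                     # prints 2: two separate blobs
    bg = update_background(bg, frame, alpha=0.1) # adapt the model over time
```

In a real deployment each of these steps would be replaced by its OpenCV counterpart (e.g. a Gaussian-mixture background subtractor and morphological opening before blob extraction); the continuous `update_background` call is what keeps the model robust to gradual scene changes such as lighting.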

Cite This Paper

Pankaj Pratap Singh, Shitala Prasad, "A Hybrid Approach for Real-time Vehicle Monitoring System", International Journal of Engineering and Manufacturing (IJEM), Vol.14, No.1, pp. 1-15, 2024. DOI:10.5815/ijem.2024.01.01
