Object Tracking with a Novel Visual-Thermal Sensor Fusion Method in Template Matching

Full Text (PDF, 1079 KB), pp. 39-47

Author(s)

Satbir Singh 1,*, Arun Khosla 1, Rajiv Kapoor 2

1. Dr. B. R. Ambedkar National Institute of Technology, Jalandhar, India

2. Delhi Technological University, Delhi, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2019.07.03

Received: 15 Apr. 2019 / Revised: 3 May 2019 / Accepted: 22 May 2019 / Published: 8 Jul. 2019

Index Terms

Sensor Fusion, Object Tracking, Template Matching, Thermal Imaging

Abstract

Recently, the use of thermal-visible conjunction techniques in surveillance applications has increased, owing to the complementary advantages of the two modalities. Amalgamating them for tracking requires a reasonable scientific procedure that can make decisions efficiently, with sound accuracy and excellent precision. The proposed research presents a unique idea for obtaining a robust track estimate through thermo-visual fusion in the context of fundamental template matching. The method first introduces a haphazard transporting control mechanism for each individual modality tracker, which avoids unexpected estimates. It then brings in an efficient computation procedure that provides a weighted output using minimal information from the individual trackers. Experiments performed on publicly available datasets mark the usefulness of the proposed idea in terms of accuracy, precision, and processing time in comparison with state-of-the-art methods.
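The full text details the exact control and weighting formulas; as a rough illustration only, the following minimal Python sketch (using OpenCV, with all function names and the max_jump threshold being hypothetical choices, not the paper's) shows the general shape of such a pipeline: per-modality template matching, a simple distance gate that rejects implausible jumps in each tracker's estimate, and a match-score-weighted average of the two position estimates.

import cv2
import numpy as np

def track_modality(frame, template):
    """One-modality template matching; returns top-left corner and match score."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(result)          # best normalized correlation
    return np.array(loc, dtype=float), float(score)   # loc is (x, y)

def gated_estimate(new_pos, prev_pos, max_jump=30.0):
    """Crude stand-in for the paper's transport control: keep the previous
    estimate if the new one jumped farther than max_jump pixels."""
    return new_pos if np.linalg.norm(new_pos - prev_pos) <= max_jump else prev_pos

def fuse_estimates(pos_vis, s_vis, pos_th, s_th):
    """Confidence-weighted average of the visible and thermal estimates."""
    w_vis = s_vis / (s_vis + s_th + 1e-9)
    return w_vis * pos_vis + (1.0 - w_vis) * pos_th

if __name__ == "__main__":
    # Tiny synthetic demo: a bright square on a dark background in both bands.
    vis = np.zeros((120, 160), np.uint8); vis[40:60, 50:70] = 255
    th = np.zeros((120, 160), np.uint8); th[42:62, 52:72] = 200
    tmpl = vis[40:60, 50:70].copy()
    p_v, s_v = track_modality(vis, tmpl)
    p_t, s_t = track_modality(th, tmpl)
    prev = np.array([48.0, 38.0])                     # last frame's estimate
    p_v, p_t = gated_estimate(p_v, prev), gated_estimate(p_t, prev)
    print("fused top-left (x, y):", fuse_estimates(p_v, s_v, p_t, s_t))

Under this weighting, a modality that loses the target (for example, the visible tracker in darkness) contributes a low match score and is automatically outweighed by the other, which is the intuition behind score-driven fusion schemes of this kind.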

Cite This Paper

Satbir Singh, Arun Khosla, Rajiv Kapoor, "Object Tracking with a Novel Visual-Thermal Sensor Fusion Method in Template Matching", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 11, No. 7, pp. 39-47, 2019. DOI: 10.5815/ijigsp.2019.07.03
