Obstacle Detection Techniques in Outdoor Environment: Process, Study and Analysis

Full Text (PDF, 1878KB), PP.35-53

Author(s)

Yadwinder Singh 1,*, Lakhwinder Kaur 1

1. Punjabi University/Computer Engineering Department, Patiala, 147001, India

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2017.05.05

Received: 6 Jan. 2017 / Revised: 16 Feb. 2017 / Accepted: 23 Mar. 2017 / Published: 8 May 2017

Index Terms

Segmentation, Obstacle Detection, Feature Extraction, Thresholding, Image Processing

Abstract

Obstacle detection is the process of detecting objects that lie ahead in the path and avoiding collision with them by signalling the visually impaired person. In this review paper we present a comprehensive and critical survey of image processing techniques for obstacle detection, such as vision based methods, ground plane detection and feature extraction. Two types of vision based techniques, namely (a) monocular vision based approaches and (b) stereo vision based approaches, are discussed, together with their further sub-types. The survey also analyses the related work reported in the literature on SURF and SIFT features, monocular vision based approaches, texture features and ground plane obstacle detection.
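
As a rough illustration of the SURF/SIFT feature matching step that several of the surveyed monocular approaches build on, the Python sketch below matches SIFT keypoints between two consecutive frames with OpenCV and reports their displacement. It is not the method of the reviewed paper; the function name, frame file names and ratio threshold are illustrative assumptions.

# Illustrative sketch (not the paper's method): SIFT keypoint matching between
# two consecutive monocular frames, one common building block of feature-based
# obstacle detection. Requires opencv-python (>= 4.4) and NumPy; the frame
# file names below are placeholders.
import cv2
import numpy as np


def match_sift_features(prev_path, curr_path, ratio=0.75):
    """Detect SIFT keypoints in two frames and return ratio-test matches
    together with the per-match keypoint displacement between frames."""
    prev = cv2.imread(prev_path, cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread(curr_path, cv2.IMREAD_GRAYSCALE)
    if prev is None or curr is None:
        raise FileNotFoundError("could not read one of the input frames")

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev, None)
    kp2, des2 = sift.detectAndCompute(curr, None)
    if des1 is None or des2 is None:
        return [], np.empty((0, 2))

    # Brute-force matching with Lowe's ratio test to keep distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]

    # Displacement of each matched keypoint between the two frames; in a full
    # system, large or rapidly growing displacements are candidate obstacle cues.
    motion = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                       for m in good])
    return good, motion


if __name__ == "__main__":
    matches, motion = match_sift_features("frame_prev.png", "frame_curr.png")
    print(len(matches), "ratio-test matches")
    if len(motion):
        print("mean keypoint displacement:", motion.mean(axis=0))

In practice such keypoint displacements are only one cue; the surveyed methods combine them with ground plane estimation, texture descriptors or stereo disparity before declaring an obstacle.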

Cite This Paper

Yadwinder Singh, Lakhwinder Kaur, "Obstacle Detection Techniques in Outdoor Environment: Process, Study and Analysis", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol.9, No.5, pp.35-53, 2017. DOI: 10.5815/ijigsp.2017.05.05

References

[1]N. B. Romdhane, M. Hammami and H. B. Abdallah, “A generic obstacle detection method for collision avoidance,” IEEE Intelligent Vehicles Symposium (IV), pp. 491–496, 2011.

[2]I. Ulrich and I. Nourbakhsh, “Appearance-based obstacle detection with monocular color vision,” AAAI/IAAI, 2000.

[3]K. Yamaguchi, T. Kato and Y. Ninomiya, “Moving Obstacle Detection using Monocular Vision,” In IEEE Intelligent Vehicles Symposium, pp. 288–293, 2006.

[4]D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004. 

[5]H. Bay, T. Tuytelaars and L. Van Gool, “Surf: Speeded up robust features,” Computer Vision–ECCV, pp. 404-417, 2006.

[6]J. Michels, A. Saxena and A. Y. Ng, “High speed obstacle avoidance using monocular vision and reinforcement learning,” In Proceedings of the 22nd International Conference on Machine Learning, pp. 593-600, 2005.

[7]G. Y. Song, K. Y. Lee and J. W. Lee, “Vehicle detection by edge-based candidate generation and appearance-based classification,” In IEEE Intelligent Vehicles Symposium, pp. 428–433, 2008.

[8]Q. Zhan, S. Huang and J. Wu, “Automatic navigation for a mobile robot with monocular vision,” IEEE Conference on Robotics, Automation and Mechatronics, pp. 1005–1010, 2008. 

[9]G. Ma, D. Muller, S. B. Park, S. M. Schneiders and A. Kummert, “Pedestrian detection using a single-monochrome camera,” IET Intelligent Transport Systems, vol. 3, no. 1, pp. 42–56, 2009.

[10]C. Viet and I. Marshall, “An efficient obstacle detection algorithm using colour and texture,” World Academy of Science, Engineering and Technology, pp. 132–137, 2009.

[11]C.C. Lin and M. Wolf, “Detecting Moving Objects Using a Camera on a Moving Platform,” 20th International Conference on Pattern Recognition, pp. 460–463, 2010.

[12]A. Cherubini and F. Chaumette, “Visual navigation with obstacle avoidance,” IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1593–1598, 2011.

[13]J. Lim and W. Kim, “Detecting and tracking of multiple pedestrians using motion, color information and the AdaBoost algorithm,” Multimedia Tools and Applications, vol. 65, no. 1, pp. 161–179, 2012.

[14]P. Mishra, T. U. Vivek, G. Adithya and J. K. Kishore, “Vision based in-motion detection of dynamic obstacles for autonomous robot navigation,” In Annual IEEE India Conference (INDICON), pp. 149–154, Dec. 2012.

[15]J. Lalonde, R. Laganiere and L. Martel, “Single-view obstacle detection for smart back-up camera systems,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1–8, 2012.

[16]Y. Lin, C. Lin, W. Liu and L. Chen, “A vision-based obstacle detection system for parking assistance,” 8th IEEE Conference on Industrial Electronics and Applications (ICIEA), pp. 1627–1630, 2013.

[17]B. Jia, L. Rui and Z. Ming, "Real-time obstacle detection with motion features using monocular vision", The Visual Computer, pp. 1-13, 2014.

[18]Mun-Cheon Kang, Sung-Ho Chae, Jee-Young Sun, Jin-Woo Yoo and Sung-Jea Ko, “A novel Obstacle Detection Method based on Deformable Grid for the Visually Impaired,” IEEE Transactions on Consumer Electronics, vol. 61, no. 3, pp. 376-383, 2015.

[19]Y. Zhuang, Y. Rui, T. S. Huang and S. Mehrotra, “Adaptive key-frame extraction using unsupervised clustering”, Proceedings of International Conference on Image Processing, pp. 866-870, 1998.

[20]Li Zhao, Wei Qi, Stan Z. Li, Shi-Qiang Yang, H. J. Zhang, “Key-frame extraction and shot retrieval using nearest feature line”, Proceedings of ACM Workshop on Multimedia, pp. 217-220, 2000.

[21]A.D. Doulamis, N.D. Doulamis, S.D. Kollias, “A fuzzy video content representation for video summarization and content based retrieval”, Journal of signal processing, pp.1049-1060, 2000.

[22]Y Gong and X Liu, “Video summarization using singular value decomposition”, Proceedings of Computer Vision and Pattern Recognition, pp. 347-358, 2000.

[23]D.P. Mukherjee, S.K. Das, S. Saha, “Key-frame estimation in video using randomness measure of feature point pattern”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, no. 5, pp. 612-620, 2007.

[24]Li Liu, Ling Shao, Peter Rockett, “Boosted key-frame and correlated pyramidal motion feature representation for human action recognition”, Pattern Recognition, pp. 1810-1818, 2013.

[25]Suresh C Raikwar, Charul Bhatnagar and Anand Singh Jalal, “A framework for key-frame extraction from surveillance video”, 5th International Conference on Computer and Communication Technology, IEEE, pp. 297-300, 2014.

[26]Sheena C. V., N. K. Narayanan, “Key-frame extraction by analysis of histograms of video frames using statistical method”, 4th International Conference on Eco-Friendly Computing and Communication Systems, Procedia Computer Science, pp. 36-40, 2015.

[27]J. Zhou and B. Li, “Homography-based ground detection for a mobile robot platform using a single camera,” In Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 4100–4105, 2006.

[28]D. Conrad and G. DeSouza, “Homography-based ground plane detection for mobile robot navigation using a modified EM algorithm,” IEEE International Conference on Robotics and Automation (ICRA), pp. 910–915, 2010.

[29]A. Jamal, P. Mishra, S. Rakshit, A. K. Singh and M. Kumar, “Real-time ground plane segmentation and obstacle detection for mobile robot navigation,” IEEE International Conference on Emerging Trends in Robotics and Communication Technologies (INTERACT), pp. 314–317, 2010.

[30]C. Lin, S. Jiang, Y. Pu and K. Song, “Robust ground plane detection for obstacle avoidance of mobile robots using a monocular camera,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3706–3711, 2010.

[31]R. Benenson, M. Mathias, R. Timofte and L. V. Gool, “Fast stixel computation for fast pedestrian detection,” In Workshops and Demonstrations Computer Vision–ECCV, pp. 11–20, 2012.

[32]J. Molineros, S. Cheng, Y. Owechko, D. Levi and W. Zhang, “Monocular rear-view obstacle detection using residual flow,” In Workshops and Demonstrations Computer Vision–ECCV, pp. 504-514, 2012.

[33]G. Panahandeh, N. Mohammadiha and M. Jansson, “Ground plane feature detection in mobile vision-aided inertial navigation,” IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3607–3611, 2012.

[34]D. Koester, B. Schauerte and R. Stiefelhagen, “Accessible section detection for visual guidance,” IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pp. 1-6, 2013.

[35]M. Knorr, W. Niehsen and C. Stiller, "Robust ground plane induced homography estimation for wide angle fisheye cameras", In Proceedings of IEEE Intelligent Vehicles Symposium, pp. 1288-1293, 2014. 

[36]G. Zhao and M. Pietikainen, “Dynamic texture recognition using local binary patterns with an application to facial expressions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 915–928, 2007.

[37]R. Nosaka, Y. Ohkawa and K. Fukui, “Feature extraction based on co-occurrence of adjacent local binary patterns,” Advances in Image and Video Technology, pp. 82–91, 2012.

[38]G. Beliakov, S. James and L. Troiano, “Texture recognition by using GLCM and various aggregation functions,” IEEE International Conference on Fuzzy Systems, pp. 1472-1476, 2008.

[39]A. Datta, S. Dutta, S. K. Pal, R. Sen and S. Mukhopadhyay, “Texture Analysis of Turned Surface Images Using Grey Level Co-Occurrence Technique,” Advanced Materials Research, pp. 38–43, 2012.

[40]M. M. Ali, M. B. Fayek and E. E. Hemayed, “Human-inspired features for natural scene classification,” Pattern Recognition Letters, vol. 34, no. 13, pp. 1525–1530, 2013.

[41]G.-H. Liu and J. Y. Yang, “Content-based image retrieval using color difference histogram,” Pattern Recognition, vol. 46, no. 1, pp. 188–198, 2013. 

[42]J. Ren, X. Jiang and J. Yuan, “Dynamic texture recognition using enhanced LBP features,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 2400-2404, 2013.

[43]J. Zou, C.-C. Liu, Y. Zhang and G.-F. Lu, “Object recognition using Gabor co-occurrence similarity,” Pattern Recognition, vol. 46, no. 1, pp. 434–448, 2013.

[44]P. Gehler and S. Nowozin, “On feature combination for multiclass object classification,” IEEE 12th International Conference on Computer Vision, pp. 221–228, 2009. 

[45]B. Zitova, J. Flusser, J. Kautsky and G. Peters, “Feature point detection in multiframe images,” In Czech Pattern Recognition Workshop, no. 3, pp. 117–122, 2000. 

[46]P. Saisan, G. Doretto, Y. N. Wu and S. Soatto, “Dynamic Texture Recognition,” In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001. 

[47]F. Woolfe and A. Fitzgibbon, “Shift-invariant dynamic texture recognition,” In Computer Vision–ECCV, pp. 549–562, 2006. 

[48]A. Satpathy, X. Jiang and H. L. Eng, “LBP-Based Edge-Texture Features for Object Recognition,” IEEE Transactions on Image Processing, vol. 23, no. 5, pp. 1953-1964, 2014.

[49]Z. Zhang, Y. Huang, C. Li and Y. Kang, “Monocular vision simultaneous localization and mapping using SURF,” IEEE 7th World Conference on Intelligent Control and Automation (WCICA), pp. 1651–1656, 2008.

[50]C. H. Lampert, H. Nickisch and S. Harmeling, “Learning to detect unseen object classes by between-class attribute transfer,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 951–958, 2009.

[51]D. Ta, W. Chen, N. Gelfand and K. Pulli, “SURFTrac: Efficient tracking and continuous object recognition using local feature descriptors,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2937–2944, 2009.

[52]G. Yu and J. Morel, “A fully affine invariant image comparison method,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1597–1600, 2009.

[53]B. Besbes, A. Rogozan and A. Bensrhair, “Pedestrian recognition based on hierarchical codebook of SURF features in visible and infrared images,” In IEEE Intelligent Vehicles Symposium (IV), pp. 156–161, 2010.

[54]K. E. A. Sande, T. Gevers and C. G. M. Snoek, “Evaluating color descriptors for object and scene recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1582–1596, 2010. 

[55]Q. V. Le, W. Y. Zou, S. Y. Yeung and A. Y. Ng, “Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3361–3368, 2011.

[56]Y. Pang, W. Li, Y. Yuan and J. Pan, “Fully affine invariant SURF for image matching,” Neurocomputing, vol. 85, pp. 6–10, 2012.

[57]Y.-T. Wang, C.-H. Sun and M.-J. Chiou, “Detection of moving objects in image plane for robot navigation using monocular vision,” EURASIP Journal on Advances in Signal Processing, pp. 1-22, 2012.

[58]Y. Yu, K. Huang, W. Chen and T. Tan, “A novel algorithm for view and illumination invariant image matching,” IEEE Transactions on Image Processing, vol. 21, no. 1, pp. 229–240, 2012.