Body Gestures Recognition System to Control a Service Robot

Author(s)

Jose L. Medina-Catzin 1,*, Anabel Martin-Gonzalez 2, Carlos Brito-Loeza 2, Victor Uc-Cetina 2

1. Universidad Autónoma de Yucatán/Facultad de Matemáticas, Merida, 97205, Mexico

2. Universidad Autónoma de Yucatán/Computational Learning and Imaging Research, Merida, 97205, Mexico

* Corresponding author.

DOI: https://doi.org/10.5815/ijitcs.2017.09.07

Received: 27 Jun. 2017 / Revised: 10 Jul. 2017 / Accepted: 15 Jul. 2017 / Published: 8 Sep. 2017

Index Terms

Intelligent control, service robots, gesture recognition, Adaboost, Haar features

Abstract

Personal service robots will, in the near future, be part of our world, assisting humans in their daily chores. A highly efficient way to communicate with people is through basic gestures. In this work, we present an efficient body gesture interface that gives the user a practical way to communicate with and control a personal service robot. The robot can interpret two body gestures of the subject and perform the actions associated with them. The service robot's setup consists of a Pioneer P3-DX research robot, a Kinect sensor, and a portable workstation. The gesture recognition system is based on tracking the user's skeleton to obtain the relative 3D positions of the body parts. In addition, the system takes depth images from the sensor and extracts Haar features from them, which are used to train an Adaboost classifier that labels the gesture. The system was developed using the ROS framework and showed good performance during experimental evaluation with users. Our body gesture-based interface may serve as a baseline for developing practical and natural interfaces to communicate with service robots in the near future.

Cite This Paper

José L. Medina-Catzin, Anabel Martin-Gonzalez, Carlos Brito-Loeza, Victor Uc-Cetina, "Body Gestures Recognition System to Control a Service Robot", International Journal of Information Technology and Computer Science (IJITCS), Vol. 9, No. 9, pp. 69-76, 2017. DOI: 10.5815/ijitcs.2017.09.07
