Autonomous Multiple Gesture Recognition System for Disabled People

Full Text (PDF, 596KB), PP.39-45


Author(s)

Amarjot Singh 1,*, John Buonassisi 1, Sukriti Jain 2

1. School of Engineering Science, Simon Fraser University, Burnaby, Canada.

2. Dept. of Electronics and Communication Engineering, Ambedkar Institute of Advanced Communication Technologies and Research, GGSIPU, India.

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2014.02.05

Received: 19 Sep. 2013 / Revised: 5 Nov. 2013 / Accepted: 10 Dec. 2013 / Published: 8 Jan. 2014

Index Terms

Gesture Recognition, Motion Tracking, Robot, Disability

Abstract

The paper presents an intelligent multi-gesture spotting system that disabled people can use to communicate easily with machines, easing their day-to-day tasks. The system performs pose estimation for 10 signs used by hearing-impaired people to communicate. Pose is extracted from silhouettes using the timed Motion History Image (tMHI), followed by gesture recognition with Hu moments. Signs involving motion are recognized with the help of optical flow. Based on the recognized gesture, a particular instruction is sent to the robot connected to the system, causing the robot to perform an appropriate action or movement. The system is unique in that it can act as an assistive device and can communicate over a local as well as a wide area to assist the disabled person.
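The core of the pipeline described above can be sketched in pure Python: a timed Motion History Image keeps a per-pixel timestamp of recent silhouette activity, and Hu's seven moment invariants turn a binary silhouette into a translation-, scale-, and rotation-invariant pose descriptor. This is a minimal illustrative sketch, not the authors' implementation; the data layout (lists of pixels / nested lists) and helper names are assumptions made for clarity.

```python
def update_tmhi(mhi, silhouette, tau, delta):
    """Timed MHI update: pixels in the current silhouette are stamped with
    time tau; pixels whose last activity is older than tau - delta decay
    to zero. mhi and silhouette are nested lists of equal shape."""
    for y, row in enumerate(silhouette):
        for x, fg in enumerate(row):
            if fg:
                mhi[y][x] = tau
            elif mhi[y][x] < tau - delta:
                mhi[y][x] = 0.0
    return mhi


def hu_moments(pixels):
    """Hu's seven invariant moments of a binary shape, given as a list of
    (x, y) foreground pixel coordinates."""
    m00 = len(pixels)
    xbar = sum(x for x, _ in pixels) / m00
    ybar = sum(y for _, y in pixels) / m00

    def mu(p, q):  # central moment (translation invariant)
        return sum((x - xbar) ** p * (y - ybar) ** q for x, y in pixels)

    def eta(p, q):  # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)

    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return [h1, h2, h3, h4, h5, h6, h7]
```

Because the descriptor is built from central moments, a silhouette and its translated copy yield identical Hu vectors, which is why a simple nearest-neighbour match on this 7-vector can label a pose regardless of where the person stands in the frame.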

Cite This Paper

Amarjot Singh, John Buonassisi, Sukriti Jain, "Autonomous Multiple Gesture Recognition System for Disabled People", IJIGSP, vol. 6, no. 2, pp. 39-45, 2014. DOI: 10.5815/ijigsp.2014.02.05
