Human Distraction Detection from Video Stream Using Artificial Emotional Intelligence

Full Text (PDF, 973 KB), pp. 19-29


Author(s)

Rafflesia Khan 1,*, Rameswar Debnath 1

1. Computer Science and Engineering discipline, Khulna University, Khulna, Bangladesh

* Corresponding author.

DOI: https://doi.org/10.5815/ijigsp.2020.02.03

Received: 8 Oct. 2019 / Revised: 17 Oct. 2019 / Accepted: 28 Oct. 2019 / Published: 8 Apr. 2020

Index Terms

Distraction, E-learning, Facial Landmarks, Facial Alignment, Facial Movement, Artificial Emotional Intelligence.

Abstract

This paper addresses the problem of identifying a particular human behavior, distraction, and predicting its pattern. It proposes an artificial emotional intelligence (emotional AI) algorithm that detects changes in an individual's visual attention; in short, the algorithm identifies a person's attentive and distracted periods from a video stream. It uses deviation from normal facial alignment to identify changes in attention-related activities, e.g., looking in a different direction, speaking, yawning, sleeping, or attention deficit hyperactivity. Facial landmarks are used to detect facial deviation, but not all landmarks relate to a change in behavior. This paper therefore proposes an attribute model that selects the attributes best characterizing distraction from the relevant facial landmark deviations. Once a change in those attributes is identified, the deviations are evaluated against a threshold-based emotional AI model to detect the corresponding behavioral change. These changes are then evaluated under time constraints to determine attention levels, and a further threshold model over the attention level recognizes inattentiveness. The proposed algorithm is evaluated on video recordings of classroom learning activity to identify inattentive learners. Experimental results show that the algorithm successfully identifies changes in human attention and can serve as a learner or driver distraction detector. It can also be useful for general human distraction detection, adaptive learning, and human-computer interaction, as well as for early screening of attention deficit hyperactivity disorder (ADHD) or dyslexia among patients.
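The two thresholding stages described above, a per-frame deviation threshold followed by a time constraint over the resulting attention labels, can be sketched in a few lines. This is a minimal illustration only: the threshold values, frame rate, and function names below are assumptions for the example, not the authors' published parameters.

```python
def classify_frames(deviations, deviation_threshold=0.25):
    """Stage 1: mark a frame as distracted (True) when its
    facial-alignment deviation exceeds the threshold."""
    return [d > deviation_threshold for d in deviations]


def detect_inattention(distracted_flags, fps=30, min_seconds=2.0):
    """Stage 2 (time constraint): report runs of distracted frames
    lasting at least min_seconds as (start_frame, end_frame) spans."""
    min_frames = int(fps * min_seconds)
    spans, run_start = [], None
    for i, flag in enumerate(distracted_flags):
        if flag and run_start is None:
            run_start = i                      # a distracted run begins
        elif not flag and run_start is not None:
            if i - run_start >= min_frames:    # long enough to count
                spans.append((run_start, i))
            run_start = None
    # close a run that extends to the end of the stream
    if run_start is not None and len(distracted_flags) - run_start >= min_frames:
        spans.append((run_start, len(distracted_flags)))
    return spans
```

In a full system the `deviations` sequence would come from a facial-landmark detector comparing each frame's alignment against the subject's normal pose; here it is simply a list of numbers, which keeps the thresholding logic testable in isolation.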

Cite This Paper

Rafflesia Khan, Rameswar Debnath, "Human Distraction Detection from Video Stream Using Artificial Emotional Intelligence", International Journal of Image, Graphics and Signal Processing (IJIGSP), Vol. 12, No. 2, pp. 19-29, 2020. DOI: 10.5815/ijigsp.2020.02.03
