Boosted Gaze Gesture Recognition using Underlying Head Orientation Sequence

Journal article


Publication Details

Author list: Bature Z.A., Abdullahi S.B., Yeamkuan S., Chiracharit W., Chamnongthai K.

Publisher: Institute of Electrical and Electronics Engineers

Publication year: 2023

Journal: IEEE Access (2169-3536)

Volume number: 11

Start page: 43675

End page: 43689

Number of pages: 15

ISSN: 2169-3536

eISSN: 2169-3536

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85159699648&doi=10.1109%2fACCESS.2023.3270285&partnerID=40&md5=44b37a5c6eae3484979cfb7c11d50383

Languages: English-Great Britain (EN-GB)


Abstract

People find it challenging to control smart systems with complex gaze gestures because of the vulnerability of eye saccades. Existing works instead achieve good recognition accuracy on simple gaze gestures, for which sufficient eye gaze points are available, but simple gaze gestures have limited applications compared with complex ones. A complex gaze gesture is composed of multiple eye-fixation subunits, producing a sequence of gaze points that are clustered and rotated according to an underlying head orientation relationship. This paper proposes a new sequence of eye gaze points and head orientation angles for recognizing complex gaze gestures, since both features strongly influence gaze gesture formation. The new sequence is obtained by aligning clustered gaze points with head orientation angles and applying a simple moving average (SMA) to denoise the data and interpolate the gaps between successive eye fixations. The aligned sequences of complex gaze gestures are then used to train sequential machine learning (ML) algorithms. To evaluate the performance of the proposed method, we recruited ten participants and recorded their eye gaze and head orientation features with an eye tracker. The results show that a boosted hidden Markov model (HMM) using the random subspace method achieves the best accuracies, 94.72% for complex gestures and 98.1% for simple gestures, outperforming conventional methods.
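
The alignment step described in the abstract can be illustrated with a minimal sketch. The Python snippet below is not the authors' implementation: the function name align_and_smooth, the five-channel feature layout (gaze x, y plus head yaw, pitch, roll), the SMA window size, and the NaN-based gap handling are all assumptions made purely for illustration of how a simple moving average could denoise the combined sequence and bridge gaps between successive fixations before it is fed to a sequential ML model.

```python
# Minimal sketch (not the authors' code): align gaze points with head
# orientation angles and smooth the combined sequence with a simple
# moving average (SMA). Feature layout, window size, and gap handling
# are assumptions for illustration only.
import numpy as np

def align_and_smooth(gaze_xy, head_angles, window=5):
    """Stack gaze points (N, 2) and head angles (N, 3: yaw, pitch, roll)
    into one sequence, fill gaps between fixations by linear
    interpolation, then denoise each channel with an SMA."""
    seq = np.hstack([gaze_xy, head_angles]).astype(float)   # (N, 5)

    # Interpolate NaN gaps (e.g., dropped samples between fixations).
    idx = np.arange(len(seq))
    for c in range(seq.shape[1]):
        mask = np.isnan(seq[:, c])
        if mask.any():
            seq[mask, c] = np.interp(idx[mask], idx[~mask], seq[~mask, c])

    # Simple moving average along time for each channel.
    kernel = np.ones(window) / window
    smoothed = np.vstack([
        np.convolve(seq[:, c], kernel, mode="same")
        for c in range(seq.shape[1])
    ]).T
    return smoothed

# Example with synthetic data: 100 samples with a short gaze gap.
rng = np.random.default_rng(0)
gaze = rng.normal(size=(100, 2)).cumsum(axis=0)
gaze[40:43] = np.nan                      # gap between fixations
head = rng.normal(scale=0.1, size=(100, 3)).cumsum(axis=0)
features = align_and_smooth(gaze, head, window=5)
print(features.shape)                     # (100, 5) sequence for a sequential model
```

The resulting fixed-width feature sequence is the kind of input a sequential classifier such as an HMM would consume; the boosted HMM with random subspaces reported in the paper is not reproduced here.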


Keywords

Electrooculography; Eye-tracking; Gaze patterns; Head orientation angles; Magnetic heads; Radar tracking; Sequence recognition

