30th IEEE International Workshop on Signal Processing Systems (SiPS), Dallas, Texas, 26-28 October 2016
A versatile and intuitive way of interacting with smart devices is a virtual 3D touchscreen in the air above the device, combined with gesture recognition. This paper investigates the limits of hand tracking and gesture recognition using only the hardware already present in smartphones. To accomplish both, the two loudspeakers of the device emit ultrasonic tones together with a Maximum Length Sequence (MLS), and the microphones pick up the reflected, possibly altered signal. The Doppler response of the tones yields the speed of the hand above the device, while the acoustic channel estimate obtained from the MLS yields the distance to the hand. The position estimation module localizes the hand in a predefined grid of eight 3D regions and achieves an accuracy above 77% in all cases. For gesture recognition, features are extracted from both the Doppler response and the acoustic channel estimate, and a simple decision tree classifies the performed gesture. Experimental results show that the gesture recognition module achieves an accuracy between 87% and 100%.
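The core of the MLS-based distance estimation can be illustrated with a small sketch: cross-correlating the received signal with the transmitted MLS gives the impulse response of the acoustic channel, whose peak marks the echo delay. This is a minimal illustration, not the paper's implementation; the register length, feedback taps, and simulated echo delay are assumptions chosen for the example.

```python
# Illustrative sketch (assumed parameters, not the paper's code): locate an
# echo delay by circularly cross-correlating the received signal with an MLS.

def mls(nbits=7, taps=(7, 6)):
    """Generate a maximum length sequence of length 2**nbits - 1, mapped to +/-1.

    Fibonacci LFSR with feedback polynomial x^7 + x^6 + 1 (primitive), so the
    sequence has the ideal two-valued circular autocorrelation of an MLS.
    """
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(1 if state[-1] else -1)        # output the oldest bit
        fb = 0
        for t in taps:                            # XOR the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]                 # shift in the feedback bit
    return seq

def circular_xcorr(rx, tx):
    """Circular cross-correlation of the received signal with the probe MLS."""
    n = len(tx)
    return [sum(rx[(i + k) % n] * tx[i] for i in range(n)) for k in range(n)]

# Simulated channel: one reflection arriving 20 samples late, attenuated.
probe = mls()
delay = 20
received = [0.5 * probe[(i - delay) % len(probe)] for i in range(len(probe))]

corr = circular_xcorr(received, probe)
est_delay = max(range(len(corr)), key=lambda k: corr[k])
print(est_delay)  # → 20: the correlation peak sits at the echo delay
```

Because the circular autocorrelation of an MLS is sharply peaked at lag zero (value N) and flat elsewhere (value -1), the correlation peak directly reads off the round-trip delay; multiplying by the speed of sound over twice the sample rate would convert it to a distance.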