Improved optical flow for gesture-based human-robot interaction

Published in 2019 International Conference on Robotics and Automation (ICRA), IEEE

Recommended citation: Chang, J. Y., Tejero-de-Pablos, A., & Harada, T. (2019, May). Improved optical flow for gesture-based human-robot interaction. In 2019 International Conference on Robotics and Automation (ICRA) (pp. 7983-7989). IEEE.

Gesture interaction is a natural way of communicating with a robot as an alternative to speech. Gesture recognition methods leverage optical flow to understand human motion. However, while traditional optical flow estimation methods are accurate, they are costly in terms of runtime, and fast deep learning-based methods leave room for improvement in accuracy. In this paper, we present a pipeline for gesture-based human-robot interaction that uses a novel optical flow estimation method to achieve an improved speed-accuracy trade-off. Our optical flow estimation method introduces four improvements over previous deep learning-based methods: stronger feature extractors, attention to contours, midway features, and a combination of the three. This results in a better understanding of motion and a finer representation of silhouettes. To evaluate our pipeline, we generated our own dataset, MIBURI, which contains gestures for commanding a house service robot. Our experiments show that our method improves not only optical flow estimation but also gesture recognition, offering a speed-accuracy trade-off better suited to practical robot applications.
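To make one of the four improvements concrete, below is a minimal, hypothetical PyTorch sketch of contour attention: a learned single-channel mask gates encoder features so silhouette regions are emphasized before the flow is decoded. All module names, channel sizes, and the sigmoid gating scheme here are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of contour-gated flow features (not the paper's code).
# Assumes PyTorch; layer sizes and the sigmoid gate are illustrative choices.
import torch
import torch.nn as nn

class ContourAttention(nn.Module):
    """Reweights feature maps with a learned single-channel contour mask."""
    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 convolution predicts a soft contour mask from the features.
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Emphasize silhouette regions, suppress flat background areas.
        return feats * self.mask(feats)

class TinyFlowNet(nn.Module):
    """Toy encoder-decoder: two stacked RGB frames in, 2-channel flow out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.attend = ContourAttention(64)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1))

    def forward(self, frame_pair: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.attend(self.encoder(frame_pair)))

if __name__ == "__main__":
    # Two 128x128 RGB frames stacked along the channel axis.
    flow = TinyFlowNet()(torch.randn(1, 6, 128, 128))
    print(flow.shape)  # torch.Size([1, 2, 128, 128])

The gate is deliberately cheap (a single 1x1 convolution), in the spirit of the paper's goal: improving accuracy without sacrificing the runtime advantage of deep learning-based estimators.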

Download here

BibTeX:

@inproceedings{chang2019improved,
  title={Improved optical flow for gesture-based human-robot interaction},
  author={Chang, Jen-Yen and Tejero-de-Pablos, Antonio and Harada, Tatsuya},
  booktitle={2019 International Conference on Robotics and Automation (ICRA)},
  pages={7983--7989},
  year={2019},
  organization={IEEE}
}