Summarization of user-generated sports video by using deep action recognition features

Published in IEEE Transactions on Multimedia, 2018

Recommended citation: Tejero-de-Pablos, A., Nakashima, Y., Sato, T., Yokoya, N., Linna, M., & Rahtu, E. (2018). Summarization of user-generated sports video by using deep action recognition features. IEEE Transactions on Multimedia, 20(8), 2000-2011.

Automatically generating a summary of sports video poses the challenge of detecting interesting moments, or highlights, of a game. Traditional sports video summarization methods leverage the editing conventions of broadcast sports video, which facilitate the extraction of high-level semantics. However, user-generated videos are not edited, so traditional methods are not suitable for generating their summaries. To solve this problem, this work proposes a novel video summarization method that uses players’ actions as a cue to determine the highlights of the original video. A deep neural network-based approach is used to extract two types of action-related features and to classify video segments as interesting or uninteresting. The proposed method can be applied to any sport whose games consist of a succession of actions. In particular, this work considers Kendo (Japanese fencing) as an example sport for evaluating the proposed method. The method is trained on Kendo videos with ground-truth labels that indicate the video highlights. The labels are provided by annotators with different levels of Kendo experience, to demonstrate how the proposed method adapts to different needs. The performance of the proposed method is compared against several combinations of different features, and the results show that it outperforms previous summarization methods.
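At a high level, the pipeline described above scores each video segment from action-related features and keeps the interesting ones. The sketch below illustrates that flow only; the feature extraction and the classifier are deep networks in the paper, replaced here by simple stand-ins, and all function names, weights, and the threshold are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: per-segment action features -> interestingness
# score -> keep segments classified as highlights. In the paper, the two
# feature streams come from deep action-recognition networks and the
# classifier is learned; here both are placeholder computations.
from typing import List, Tuple


def score_segment(stream_a: List[float], stream_b: List[float],
                  w_a: float = 0.5, w_b: float = 0.5) -> float:
    """Fuse two action-feature streams into one interestingness score.

    Placeholder fusion: a weighted average of the mean activation of
    each stream (the actual method learns this mapping)."""
    mean = lambda v: sum(v) / len(v)
    return w_a * mean(stream_a) + w_b * mean(stream_b)


def summarize(segments: List[Tuple[List[float], List[float]]],
              threshold: float = 0.5) -> List[int]:
    """Return indices of segments whose score crosses the threshold."""
    return [i for i, (a, b) in enumerate(segments)
            if score_segment(a, b) >= threshold]


segments = [
    ([0.9, 0.8], [0.7, 0.9]),  # vigorous action -> likely highlight
    ([0.1, 0.2], [0.1, 0.0]),  # idle play -> uninteresting
    ([0.6, 0.7], [0.8, 0.6]),  # moderate action -> likely highlight
]
print(summarize(segments))  # -> [0, 2]
```

The summary video would then be assembled by concatenating the selected segments in temporal order.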

Download here


@article{tejero2018summarization,
  title={Summarization of user-generated sports video by using deep action recognition features},
  author={Tejero-de-Pablos, Antonio and Nakashima, Yuta and Sato, Tomokazu and Yokoya, Naokazu and Linna, Marko and Rahtu, Esa},
  journal={IEEE Transactions on Multimedia},
  volume={20},
  number={8},
  pages={2000--2011},
  year={2018},
  publisher={IEEE}
}