Video Ads Content Structuring by Combining Scene Confidence Prediction and Tagging
Published in arXiv preprint, 2021
Recommended citation: Suzuki, T., & Tejero-de-Pablos, A. (2021). Video Ads Content Structuring by Combining Scene Confidence Prediction and Tagging. arXiv preprint arXiv:2108.09215.
Video ads segmentation and tagging is a challenging task for two main reasons: (1) the video scene structure is complex and (2) it includes multiple modalities (e.g., visual, audio, text). While previous work focuses mostly on activity videos (e.g., “cooking”, “sports”), it is not clear how it can be leveraged to tackle the task of video ads content structuring. In this paper, we propose a two-stage method that first provides the boundaries of the scenes, and then combines a confidence score for each segmented scene with the tag classes predicted for that scene. We provide extensive experimental results on the network architectures and modalities used for the proposed method. Our combined method improves on the previous baselines on the challenging “Tencent Advertisement Video” dataset.
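As a rough illustration of the second stage, the sketch below fuses a per-scene confidence score with per-scene tag probabilities using a simple multiplicative rule. The function names, data layout, and fusion rule are assumptions for illustration only, not the paper's exact formulation.

```python
# Illustrative sketch only: combine a scene's confidence score with its
# predicted tag probabilities via multiplicative score fusion.
# All names and the fusion rule are assumptions, not the paper's method.
from typing import Dict, List, Tuple


def fuse_scene_scores(
    scenes: List[Tuple[float, float]],      # (start_sec, end_sec) per candidate scene
    scene_confidences: List[float],         # confidence that each segment is a true scene
    tag_probs: List[Dict[str, float]],      # per-scene tag class probabilities
    tag_threshold: float = 0.5,             # assumed cut-off for keeping a tag
) -> List[Dict]:
    """Return, for each candidate scene, its boundaries and the tags that survive fusion."""
    results = []
    for (start, end), conf, probs in zip(scenes, scene_confidences, tag_probs):
        # Weight each tag probability by the scene confidence, then threshold.
        fused = {tag: conf * p for tag, p in probs.items()}
        kept = {tag: score for tag, score in fused.items() if score >= tag_threshold}
        results.append({"start": start, "end": end, "tags": kept})
    return results


# Toy usage with made-up numbers.
print(fuse_scene_scores(
    scenes=[(0.0, 4.2), (4.2, 10.0)],
    scene_confidences=[0.9, 0.4],
    tag_probs=[{"product close-up": 0.8, "voice-over": 0.7},
               {"outdoor": 0.9}],
))
```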
Bibtex:
@article{suzuki2021video,
  title={Video Ads Content Structuring by Combining Scene Confidence Prediction and Tagging},
  author={Suzuki, Tomoyuki and Tejero-de-Pablos, Antonio},
  journal={arXiv preprint arXiv:2108.09215},
  year={2021}
}