Identifying Visible Actions in Lifestyle Vlogs
Annual Meeting of the Association for Computational Linguistics (ACL), July 2019
Abstract
We consider the task of identifying human actions visible in online videos.
We focus on the widespread genre of lifestyle vlogs, which consist of videos
of people performing actions while verbally describing them. Our goal is to
identify whether actions mentioned in the speech description of a video are visually
present. We construct a dataset with crowdsourced manual annotations of visible
actions, and introduce a multimodal algorithm that leverages information
derived from visual and linguistic clues to automatically infer which actions
are visible in a video. We demonstrate that our multimodal algorithm
outperforms algorithms based only on one modality at a time.
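As a rough illustration only (not the authors' published model), the sketch below shows one way a multimodal visibility classifier along these lines could be set up: hypothetical precomputed visual and textual feature vectors for each (video, action) pair are concatenated and passed to a standard classifier that predicts whether the mentioned action is visible. All names, dimensions, and data here are placeholders.

# Illustrative sketch, not the paper's implementation: simple fusion of
# visual and textual features for visible-action classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical precomputed features for N (video clip, action phrase) pairs,
# e.g. pooled frame embeddings and averaged word embeddings (placeholder data).
n_samples, d_visual, d_text = 200, 512, 300
visual_feats = rng.normal(size=(n_samples, d_visual))
text_feats = rng.normal(size=(n_samples, d_text))
labels = rng.integers(0, 2, size=n_samples)  # 1 = action is visible in the video

# Fuse the two modalities by concatenating their feature vectors.
fused = np.concatenate([visual_feats, text_feats], axis=1)

# Train a standard classifier on the fused representation.
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))

In practice, the single-modality baselines mentioned in the abstract would correspond to training the same classifier on visual_feats or text_feats alone and comparing against the fused model.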
Citation
Oana Ignat, Laura Burdick, Jia Deng, and Rada Mihalcea.
"Identifying Visible Actions in Lifestyle Vlogs."
Annual Meeting of the Association for Computational Linguistics (ACL), July 2019.
BibTeX
@inproceedings{Ignat:2019:IVA,
  author    = "Oana Ignat and Laura Burdick and Jia Deng and Rada Mihalcea",
  title     = "Identifying Visible Actions in Lifestyle Vlogs",
  booktitle = "Annual Meeting of the Association for Computational Linguistics (ACL)",
  year      = "2019",
  month     = jul
}