Text-based Editing of Talking-head Video
ACM Transactions on Graphics (Proc. SIGGRAPH), July 2019
Abstract
Editing talking-head video to change the speech content or to remove filler
words is challenging. We propose a novel method to edit talking-head video
based on its transcript to produce a realistic output video in which the
dialogue of the speaker has been modified, while maintaining a seamless
audio-visual flow (i.e., no jump cuts). Our method automatically annotates an
input talking-head video with phonemes, visemes, 3D face pose and geometry,
reflectance, expression, and scene illumination per frame. To edit a video, the
user only needs to edit the transcript; an optimization strategy then chooses
segments of the input corpus as base material. The annotated parameters
corresponding to the selected segments are seamlessly stitched together and
used to produce an intermediate video representation in which the lower half of
the face is rendered with a parametric face model. Finally, a recurrent video
generation network transforms this representation to a photorealistic video
that matches the edited transcript. We demonstrate a large variety of edits,
such as the addition, removal, and alteration of words, as well as convincing
language translation and full sentence synthesis.
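The segment-selection step can be pictured as a sequence-matching optimization: the edited transcript is converted to phonemes, and the method searches the annotated corpus for matching stretches while penalizing seams so that long contiguous segments are preferred. The sketch below is only an illustration of that idea, not the paper's implementation; the function name, penalty values, and the character-level toy data are hypothetical stand-ins for real phoneme labels and the paper's richer matching costs.

# Illustrative sketch only: a Viterbi-style dynamic program that picks corpus
# frames whose labels match an edited phoneme sequence, penalizing jumps so
# the selection favors long contiguous segments (fewer visible seams).
# All names, penalties, and toy data here are hypothetical.

JUMP_PENALTY = 2.0      # cost for starting a new corpus segment (a "seam")
MISMATCH_PENALTY = 5.0  # cost for pairing unlike labels

def select_segments(query, corpus):
    """Return one corpus index per query label, minimizing total cost."""
    n, m = len(query), len(corpus)
    INF = float("inf")
    cost = [[INF] * m for _ in range(n)]
    back = [[None] * m for _ in range(n)]

    for j in range(m):  # the first query label may start anywhere
        cost[0][j] = 0.0 if query[0] == corpus[j] else MISMATCH_PENALTY

    for i in range(1, n):
        best_prev = min(range(m), key=lambda k: cost[i - 1][k])
        for j in range(m):
            emit = 0.0 if query[i] == corpus[j] else MISMATCH_PENALTY
            # Either continue the current contiguous run from frame j-1 ...
            stay = cost[i - 1][j - 1] if j > 0 else INF
            # ... or pay a seam penalty and jump from the cheapest frame.
            jump = cost[i - 1][best_prev] + JUMP_PENALTY
            if stay <= jump:
                cost[i][j], back[i][j] = stay + emit, j - 1
            else:
                cost[i][j], back[i][j] = jump + emit, best_prev

    # Backtrack from the cheapest final frame to recover the index path.
    j = min(range(m), key=lambda k: cost[n - 1][k])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return path[::-1]

corpus = list("he said hello world today")  # stand-in for phoneme labels
query = list("hello today")                 # edited transcript, as labels
print(select_segments(query, corpus))       # two runs joined by one seam

On this toy input the program selects the contiguous "hello " frames, pays one jump penalty, and then selects the contiguous "today" frames, which mirrors how the stitched parameters would come from a small number of long corpus segments.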
Citation
Ohad Fried, Ayush Tewari, Michael Zollhöfer, Adam Finkelstein, Eli Shechtman, Dan B Goldman, Kyle Genova, Zeyu Jin, Christian Theobalt, and Maneesh Agrawala.
"Text-based Editing of Talking-head Video."
ACM Transactions on Graphics (Proc. SIGGRAPH) 38(4), Article 68, July 2019.
BibTeX
@article{Fried:2019:TEO,
  author    = "Ohad Fried and Ayush Tewari and Michael Zollh{\"o}fer and Adam Finkelstein and Eli Shechtman and Dan B Goldman and Kyle Genova and Zeyu Jin and Christian Theobalt and Maneesh Agrawala",
  title     = "Text-based Editing of Talking-head Video",
  journal   = "ACM Transactions on Graphics (Proc. SIGGRAPH)",
  year      = "2019",
  month     = jul,
  volume    = "38",
  number    = "4",
  articleno = "68"
}