Kim, Byung-Hak; Ganapathi, Varun
LumièreNet: Lecture Video Synthesis from Audio (Article)
In: CoRR, vol. abs/1907.02253, 2019.
Keywords: audio processing, computer vision, lectures, machine learning, pattern recognition, speech processing
@article{Kim2019,
title = {LumièreNet: Lecture Video Synthesis from Audio},
author = {Byung-Hak Kim and Varun Ganapathi},
url = {http://arxiv.org/abs/1907.02253
https://dblp.org/rec/bib/journals/corr/abs-1907-02253},
year = {2019},
date = {2019-07-08},
urldate = {2019-08-08},
journal = {CoRR},
volume = {abs/1907.02253},
abstract = {We present LumièreNet, a simple, modular, and completely deep-learning-based architecture that synthesizes high-quality, full-pose headshot lecture videos from an instructor's new audio narration of any length. Unlike prior works, LumièreNet is entirely composed of trainable neural network modules that learn mapping functions from audio to video through (intermediate) estimated pose-based compact and abstract latent codes. Our video demos are available at [22] and [23].},
keywords = {audio processing, computer vision, lectures, machine learning, pattern recognition, speech processing},
pubstate = {published},
tppubtype = {article}
}