End-to-end Learning of Driving Models from Large-scale Video Datasets
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), oral presentation, July 2017
Abstract
Robust perception-action models should be learned from training data with
diverse visual appearances and realistic behaviors, yet current approaches to
deep visuomotor policy learning have been generally limited to in-situ models
learned from a single vehicle or a simulation environment. We advocate learning
a generic vehicle motion model from large-scale crowd-sourced video data, and
develop an end-to-end trainable architecture for learning to predict a
distribution over future vehicle egomotion from instantaneous monocular camera
observations and previous vehicle state. Our model incorporates a novel
FCN-LSTM architecture, which can be learned from large-scale crowd-sourced
vehicle action data, and leverages available scene segmentation side tasks to
improve performance under a privileged learning paradigm.
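To make the architecture concrete, below is a minimal PyTorch sketch of an FCN-LSTM in the spirit described above: a convolutional encoder produces per-frame features, an LSTM fuses those features with the previous vehicle state to predict a distribution over discretized future egomotion, and a segmentation head on the shared features acts as the privileged side task. This is an illustrative sketch, not the authors' implementation; the class name FCNLSTM, the simplified three-layer backbone, the action count, and all dimensions are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNLSTM(nn.Module):
    # Hypothetical sketch: FCN-style encoder + LSTM over time, with a
    # segmentation side head (privileged learning) on shared features.
    def __init__(self, n_actions=4, n_classes=19, state_dim=2, hidden=64):
        super().__init__()
        # Simplified dilated-conv backbone standing in for a full FCN.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_classes, 1)   # side-task logits
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(64 + state_dim, hidden, batch_first=True)
        self.motion_head = nn.Linear(hidden, n_actions)

    def forward(self, frames, prev_state):
        # frames: (B, T, 3, H, W); prev_state: (B, T, state_dim)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))         # (B*T, 64, h, w)
        # Per-frame segmentation logits, upsampled to input resolution.
        seg_logits = F.interpolate(self.seg_head(feats),
                                   size=frames.shape[-2:],
                                   mode='bilinear', align_corners=False)
        vis = self.pool(feats).flatten(1).view(B, T, -1)   # (B, T, 64)
        out, _ = self.lstm(torch.cat([vis, prev_state], dim=-1))
        action_logits = self.motion_head(out)              # (B, T, n_actions)
        return action_logits, seg_logits

# Example: 4-frame clips at 90x160; previous state = (speed, turn angle).
model = FCNLSTM()
frames = torch.randn(2, 4, 3, 90, 160)
prev_state = torch.randn(2, 4, 2)
action_logits, seg_logits = model(frames, prev_state)

Under this sketch, training would combine a cross-entropy loss on the egomotion logits with a weighted cross-entropy loss on the segmentation logits; the segmentation labels play the role of privileged information, needed only at training time and discarded at inference.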
Citation
Huazhe Xu, Yang Gao, Fisher Yu, and Trevor Darrell.
"End-to-end Learning of Driving Models from Large-scale Video Datasets."
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), oral presentation, July 2017.
BibTeX
@inproceedings{Xu:2017:ELO,
  author    = "Huazhe Xu and Yang Gao and Fisher Yu and Trevor Darrell",
  title     = "End-to-end Learning of Driving Models from Large-scale Video Datasets",
  booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
  note      = "Oral presentation",
  year      = "2017",
  month     = jul
}