Learning to Generate Synthetic 3D Training Data through Hybrid Gradient
arXiv preprint, June 2019
Abstract
Synthetic images rendered by graphics engines are a promising source for
training deep networks. However, it is challenging to ensure that they can help
train a network to perform well on real images, because a graphics-based
generation pipeline requires numerous design decisions such as the selection of
3D shapes and the placement of the camera. In this work, we propose a new
method that optimizes the generation of 3D training data based on what we call
"hybrid gradient". We parametrize the design decisions as a real vector, and
combine the approximate gradient and the analytical gradient to obtain the
hybrid gradient of the network performance with respect to this vector. We
evaluate our approach on the task of estimating surface normals from a single
image. Experiments on standard benchmarks show that our approach can outperform
the prior state of the art on optimizing the generation of 3D training data,
particularly in terms of computational efficiency.
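The core idea of a "hybrid gradient" can be illustrated with a toy objective. In the sketch below, `analytic_part` and `blackbox_part` are hypothetical stand-ins for, respectively, the differentiable and non-differentiable pieces of a pipeline; the analytical gradient of the former is combined with a finite-difference approximation for the latter. This is a minimal illustration of the combination step, not the paper's actual data-generation pipeline.

```python
import numpy as np

def analytic_grad(theta):
    # Differentiable part g(theta) = 0.5 * ||theta||^2; its exact gradient is theta.
    return theta

def blackbox_part(theta):
    # Stand-in for the non-differentiable part h(theta) = sum(theta^4),
    # treated as a black box that we can only evaluate.
    return np.sum(theta ** 4)

def finite_diff_grad(f, theta, eps=1e-4):
    # Central finite differences approximate the gradient of the black box.
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        grad[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return grad

def hybrid_grad(theta):
    # Hybrid gradient: exact gradient of the differentiable part
    # plus the approximate gradient of the black-box part.
    return analytic_grad(theta) + finite_diff_grad(blackbox_part, theta)

# Gradient descent on the combined objective using the hybrid gradient.
theta = np.array([1.0, -2.0, 0.5])
for _ in range(200):
    theta -= 0.1 * hybrid_grad(theta)
```

In the paper's setting, the vector would encode generation decisions (shape selection, camera placement, etc.), and the objective would be the trained network's performance, so the black-box evaluation is far more expensive than in this toy.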
Citation
Dawei Yang and Jia Deng.
"Learning to Generate Synthetic 3D Training Data through Hybrid Gradient."
arXiv:1907.00267, June 2019.
BibTeX
@techreport{Yang:2019:LTG,
  author      = "Dawei Yang and Jia Deng",
  title       = "Learning to Generate Synthetic {3D} Training Data through Hybrid Gradient",
  institution = "arXiv preprint",
  year        = "2019",
  month       = jun,
  number      = "1907.00267"
}