Learning Where to Look: Data-Driven Viewpoint Set Selection for 3D Scenes
arXiv preprint, April 2017
Abstract
The use of rendered images, whether from completely synthetic datasets or
from 3D reconstructions, is increasingly prevalent in vision tasks. However,
little attention has been given to how the selection of viewpoints affects the
performance of rendered training sets. In this paper, we propose a data-driven
approach to view set selection. Given a set of example images, we extract
statistics describing their contents and generate a set of views matching the
distribution of those statistics. Motivated by semantic segmentation tasks, we
model the spatial distribution of each semantic object category within an image
view volume. We provide a search algorithm that generates a sampling of likely
candidate views according to the example distribution, and a set selection
algorithm that chooses a subset of the candidates that jointly cover the
example distribution. Results of experiments with these algorithms on SUNCG
indicate that they are indeed able to produce view distributions similar to an
example set from NYUDv2 according to the earth mover's distance. Furthermore,
the selected views improve performance on semantic segmentation compared to
alternative view selection algorithms.
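To make the pipeline described above concrete, below is a minimal sketch of the set selection step under simplifying assumptions: each view is summarized by a vector of per-category statistics (e.g., the fraction of pixels per semantic category), the distance to the example distribution is approximated by summing per-category 1D earth mover's distances, and candidates are chosen greedily. The function names, the greedy strategy, and the 1D EMD proxy are illustrative assumptions, not the paper's exact formulation, which models the spatial distribution of each category within the view volume.

```python
import numpy as np
from scipy.stats import wasserstein_distance


def distribution_distance(selected_stats, example_stats):
    """Sum of per-category 1D earth mover's distances between the
    statistics of the selected views and of the example images.
    (A simplified proxy for the paper's view-volume statistics.)"""
    num_categories = example_stats.shape[1]
    return sum(
        wasserstein_distance(selected_stats[:, c], example_stats[:, c])
        for c in range(num_categories)
    )


def greedy_view_set_selection(candidate_stats, example_stats, k):
    """Greedily pick k candidate views whose pooled statistics best
    match the example distribution.

    candidate_stats: (n_views, n_categories) array, one row of
    per-category statistics for each candidate view.
    example_stats: (n_examples, n_categories) array for the example set.
    """
    selected = []
    remaining = list(range(len(candidate_stats)))
    for _ in range(k):
        best_idx, best_cost = None, np.inf
        for i in remaining:
            trial = selected + [i]
            cost = distribution_distance(candidate_stats[trial], example_stats)
            if cost < best_cost:
                best_idx, best_cost = i, cost
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected
```

In this sketch, each greedy step adds the candidate view that most reduces the mismatch between the selected set's statistics and the example set's, which is one simple way to realize "jointly cover the example distribution."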
Citation
Kyle Genova, Manolis Savva, Angel X. Chang, and Thomas Funkhouser.
"Learning Where to Look: Data-Driven Viewpoint Set Selection for 3D Scenes."
arXiv:1704.02393, April 2017.
BibTeX
@techreport{Genova:2017:LWT,
  author      = "Kyle Genova and Manolis Savva and Angel X. Chang and Thomas Funkhouser",
  title       = "Learning Where to Look: Data-Driven Viewpoint Set Selection for {3D} Scenes",
  institution = "arXiv preprint",
  year        = "2017",
  month       = apr,
  number      = "1704.02393"
}