3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) oral presentation, July 2017
Abstract
Matching local geometric features on real-world depth images is a challenging
task due to the noisy, low-resolution, and incomplete nature of 3D scan data.
These difficulties limit the performance of current state-of-the-art methods,
which are typically based on histograms over geometric properties. In this paper, we
present 3DMatch, a data-driven model that learns a local volumetric patch
descriptor for establishing correspondences between partial 3D data. To amass
training data for our model, we propose a self-supervised feature learning
method that leverages the millions of correspondence labels found in existing
RGB-D reconstructions. Experiments show that our descriptor not only matches
local geometry in new scenes for reconstruction, but also generalizes to
different tasks and spatial scales (e.g., instance-level object model alignment
for the Amazon Picking Challenge, and mesh surface correspondence). Results
show that 3DMatch consistently outperforms other state-of-the-art approaches by
a significant margin. Code, data, benchmarks, and pre-trained models are
available online at http://3dmatch.cs.princeton.edu
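To make the learning setup concrete, below is a minimal PyTorch sketch of the kind of training the abstract describes: a small 3D ConvNet that maps a local volumetric patch to a descriptor, trained with a contrastive loss on matching/non-matching patch pairs mined from RGB-D reconstructions. The layer sizes, descriptor dimension, margin, and the names `PatchDescriptor` and `contrastive_loss` are illustrative assumptions, not the released architecture or code (which is available at the project page above).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Toy 3D ConvNet mapping a 30x30x30 voxel patch to a unit-norm descriptor.
    Layer sizes are illustrative, not the paper's exact architecture."""
    def __init__(self, dim=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=3), nn.ReLU(),    # 30 -> 28
            nn.MaxPool3d(2),                               # 28 -> 14
            nn.Conv3d(64, 128, kernel_size=3), nn.ReLU(),  # 14 -> 12
            nn.Conv3d(128, 256, kernel_size=3), nn.ReLU(), # 12 -> 10
            nn.MaxPool3d(2),                               # 10 -> 5
        )
        self.fc = nn.Linear(256 * 5 * 5 * 5, dim)

    def forward(self, x):  # x: (B, 1, 30, 30, 30)
        f = self.features(x).flatten(1)
        return F.normalize(self.fc(f), dim=1)

def contrastive_loss(d1, d2, match, margin=1.0):
    """Pull descriptors of matching patches together, push non-matches apart."""
    dist = (d1 - d2).norm(dim=1)
    return (match * dist.pow(2) +
            (1 - match) * F.relu(margin - dist).pow(2)).mean()

# One training step on random stand-in data. In the actual setup, patch pairs
# centered on corresponding 3D points in an RGB-D reconstruction are matches;
# patches from distant points are non-matches.
net = PatchDescriptor()
p1 = torch.randn(8, 1, 30, 30, 30)
p2 = torch.randn(8, 1, 30, 30, 30)
match = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(net(p1), net(p2), match)
loss.backward()
```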
Citation
Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, and Thomas Funkhouser.
"3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions."
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) oral presentation, July 2017.
BibTeX
@inproceedings{Zeng:2017:3LL,
  author    = "Andy Zeng and Shuran Song and Matthias Nie{\ss}ner and Matthew Fisher and Jianxiong Xiao and Thomas Funkhouser",
  title     = "{3DMatch}: Learning Local Geometric Descriptors from {RGB-D} Reconstructions",
  booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR) oral presentation",
  year      = "2017",
  month     = jul
}