Efficiently Combining Positions and Normals for Precise 3D Geometry
ACM Transactions on Graphics (Proc. of ACM SIGGRAPH 2005), August 2005
Rendering comparisons. Left: range image obtained from triangulation. Right: our hybrid surface reconstruction, which incorporates both position and normal information.
Abstract
Range scanning, manual 3D editing, and other modeling approaches can provide information about the geometry of surfaces in the form of either 3D positions (e.g., triangle meshes or range images) or orientations (normal maps or bump maps). We present an algorithm that combines these two kinds of estimates to produce a new surface that approximates both. Our formulation is linear, allowing it to operate efficiently on complex meshes commonly used in graphics. It also treats high- and low-frequency components separately, allowing it to optimally combine outputs from data sources such as stereo triangulation and photometric stereo, which have different error-vs.-frequency characteristics. We demonstrate the ability of our technique to both recover high-frequency details and avoid low-frequency bias, producing surfaces that are more widely applicable than position or orientation data alone.
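To give a feel for the idea of combining position and normal measurements in a single linear least-squares system, here is a minimal 1D sketch in Python. It is not the paper's formulation (which operates on full 3D meshes and handles the frequency separation explicitly); the function name combine_positions_and_slopes, the weight lam, and the finite-difference setup are illustrative assumptions.

import numpy as np

def combine_positions_and_slopes(p, s, lam=10.0, h=1.0):
    """Reconstruct a 1D height profile from noisy positions p (n,) and
    slopes s (n-1,) derived from normals, via linear least squares.
    lam weights the slope (normal) term relative to the position term."""
    n = len(p)
    # Position term: z should stay close to the measured heights p.
    A_pos = np.eye(n)
    # Slope term: forward differences of z should match the measured slopes s.
    D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h
    A = np.vstack([A_pos, np.sqrt(lam) * D])
    b = np.concatenate([p, np.sqrt(lam) * s])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Example: positions capture the smooth shape, slopes capture the fine detail.
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x) + 0.05 * np.sin(40 * np.pi * x)
h = x[1] - x[0]
p = truth + np.random.normal(0, 0.03, x.shape)                  # noisy positions
s = np.diff(truth) / h + np.random.normal(0, 0.1, len(x) - 1)   # noisy slopes
z = combine_positions_and_slopes(p, s, lam=10.0, h=h)

Because both measurement terms are linear in the unknown heights, the whole problem reduces to one sparse least-squares solve, which is what makes this style of combination efficient on large inputs.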
Paper
Talk
Citation
Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi.
"Efficiently Combining Positions and Normals for Precise 3D Geometry."
ACM Transactions on Graphics (Proc. of ACM SIGGRAPH 2005) 24(3), August 2005.
BibTeX
@article{Nehab:2005:ECP,
  author  = "Diego Nehab and Szymon Rusinkiewicz and James Davis and Ravi Ramamoorthi",
  title   = "Efficiently Combining Positions and Normals for Precise {3D} Geometry",
  journal = "ACM Transactions on Graphics (Proc. of ACM SIGGRAPH 2005)",
  year    = "2005",
  month   = aug,
  volume  = "24",
  number  = "3"
}