
Depth from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence
Computer Vision and Pattern Recognition (CVPR), June 2015

Michael Tao, Pratul Srinivasan, Jitendra Malik,
Szymon Rusinkiewicz, Ravi Ramamoorthi


Abstract

Light-field cameras are now used in consumer and industrial applications. Recent papers and products have demonstrated practical depth recovery algorithms from a passive single-shot capture. However, current light-field capture devices have narrow baselines and constrained spatial resolution; therefore, the accuracy of depth recovery is limited, requiring heavy regularization and producing planar depths that do not resemble the actual geometry. Using shading information is essential to improve the shape estimation. We develop an improved technique for local shape estimation from defocus and correspondence cues, and show how shading can be used to further refine the depth.

Light-field cameras are able to capture both spatial and angular data, suitable for refocusing. By locally refocusing each spatial pixel to its respective estimated depth, we produce an all-in-focus image where all viewpoints converge onto a point in the scene. Therefore, the angular pixels have angular coherence, which exhibits three properties: photo consistency, depth consistency, and shading consistency. We propose a new framework that uses angular coherence to optimize depth and shading. The optimization framework estimates both general lighting in natural scenes and shading to improve depth regularization. Our method outperforms current state-of-the-art light-field depth estimation algorithms in multiple scenarios, including real images.
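The refocusing step described above can be illustrated concretely. The sketch below (not code from the paper; a minimal numpy illustration under simplifying assumptions) uses shift-and-sum refocusing of a 4D light field and measures the photo-consistency part of angular coherence: after refocusing to the correct depth, all angular samples of a scene point agree, so their variance is minimal. Function names (`refocus`, `angular_variance`) and integer-pixel shifts via `np.roll` are illustrative choices, not the authors' implementation.

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-sum refocusing of a 4D light field lf[u, v, y, x].

    Each angular view (u, v) is shifted by slope * (u - center, v - center)
    and all views are averaged. Integer shifts via np.roll keep the sketch
    dependency-free; a real implementation would interpolate sub-pixel shifts.
    """
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            acc += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return acc / (U * V)

def angular_variance(lf, slope):
    """Photo-consistency cue: mean variance across the angular samples
    after refocusing; it is minimized at the correct depth slope."""
    U, V, H, W = lf.shape
    cu, cv = U // 2, V // 2
    stack = np.empty((U * V, H, W))
    for i, (u, v) in enumerate((u, v) for u in range(U) for v in range(V)):
        dy = int(round(slope * (u - cu)))
        dx = int(round(slope * (v - cv)))
        stack[i] = np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return stack.var(axis=0).mean()
```

On a synthetic fronto-parallel scene (each view a shifted copy of a central image), `angular_variance` drops to zero only at the true disparity slope, which is the local depth cue that the paper's defocus and correspondence measures refine.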

Citation (BibTeX)

Michael Tao, Pratul Srinivasan, Jitendra Malik, Szymon Rusinkiewicz, and Ravi Ramamoorthi. Depth from Shading, Defocus, and Correspondence Using Light-Field Angular Coherence. Computer Vision and Pattern Recognition (CVPR), June 2015.

Paper
  PDF file