Princeton Computer Science → PIXL Group → Publications
Accelerating Large-Kernel Convolution Using Summed-Area Tables

arXiv preprint, June 2019

Linguang Zhang, Maciej Halber, Szymon Rusinkiewicz
Abstract

Expanding the receptive field to capture large-scale context is key to obtaining good performance in dense prediction tasks, such as human pose estimation. While many state-of-the-art fully-convolutional architectures enlarge the receptive field by reducing resolution using strided convolution or pooling layers, the most straightforward strategy is adopting large filters. This, however, is costly because of the quadratic increase in the number of parameters and multiply-add operations. In this work, we explore using learnable box filters to allow for convolution with arbitrarily large kernel size, while keeping the number of parameters per filter constant. In addition, we use precomputed summed-area tables to make the computational cost of convolution independent of the filter size. We adapt and incorporate the box filter as a differentiable module in a fully-convolutional neural network, and demonstrate its competitive performance on popular benchmarks for the task of human pose estimation.
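The key mechanism described in the abstract is that, once a summed-area table (integral image) is precomputed, the sum over any axis-aligned box can be evaluated with four table lookups, so the cost per output pixel is constant regardless of the kernel size. The following minimal NumPy sketch illustrates that lookup pattern (the function names and the zero-padding convention are ours for illustration, not from the paper):

```python
import numpy as np

def summed_area_table(img):
    """Return S with S[i, j] = sum of img[:i, :j].

    S is padded with an extra zero row and column so that lookups
    touching the image border need no special-casing.
    """
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return S

def box_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four lookups into the table S.

    The cost is O(1) independent of the box dimensions, which is what
    makes SAT-based box filtering attractive for large kernels.
    """
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]
```

A learnable box filter as described in the abstract would then parameterize the box extents (and a per-box weight) rather than a dense grid of kernel weights, keeping the parameter count per filter constant while the receptive field grows.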
Citation

Linguang Zhang, Maciej Halber, and Szymon Rusinkiewicz.
"Accelerating Large-Kernel Convolution Using Summed-Area Tables."
arXiv:1906.11367, June 2019.

BibTeX

@techreport{Zhang:2019:ALC,
   author = "Linguang Zhang and Maciej Halber and Szymon Rusinkiewicz",
   title = "Accelerating Large-Kernel Convolution Using Summed-Area Tables",
   institution = "arXiv preprint",
   year = "2019",
   month = jun,
   number = "1906.11367"
}