Human uncertainty makes classification more robust

International Conference on Computer Vision (ICCV), October 2019

Joshua Peterson, Ruairidh Battleday, Thomas L. Griffiths, Olga Russakovsky
Abstract

The classification performance of deep neural networks has begun to asymptote at near-perfect levels. However, their ability to generalize outside the training set and their robustness to adversarial attacks have not. In this paper, we make progress on this problem by training with full label distributions that reflect human perceptual uncertainty. We first present a new benchmark dataset which we call CIFAR-10H, containing a full distribution of human labels for each image of the CIFAR-10 test set. We then show that, while contemporary classifiers fail to exhibit human-like uncertainty on their own, explicit training on our dataset closes this gap, supports improved generalization to increasingly out-of-training-distribution test datasets, and confers robustness to adversarial attacks.
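
For readers curious about the mechanics, below is a minimal sketch (not the authors' released code) of the core idea: replacing the usual one-hot cross-entropy target with a full label distribution, as in CIFAR-10H. The tensor names and random data are illustrative placeholders, not part of the paper.

import torch
import torch.nn.functional as F

def soft_label_loss(logits, label_dist):
    """Cross-entropy between predicted class probabilities and a full
    label distribution; reduces to standard CE when the distribution
    is one-hot."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(label_dist * log_probs).sum(dim=1).mean()

# Illustrative batch: 4 images over the 10 CIFAR-10 classes.
logits = torch.randn(4, 10, requires_grad=True)        # stand-in model outputs
label_dist = torch.softmax(torch.randn(4, 10), dim=1)  # stand-in human label distributions (rows sum to 1)

loss = soft_label_loss(logits, label_dist)
loss.backward()  # gradients flow exactly as with hard labels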
Citation

Joshua Peterson, Ruairidh Battleday, Thomas L. Griffiths, and Olga Russakovsky.
"Human uncertainty makes classification more robust."
International Conference on Computer Vision (ICCV), October 2019.

BibTeX

@inproceedings{Peterson:2019:HUM,
   author    = "Joshua Peterson and Ruairidh Battleday and Thomas L. Griffiths and Olga Russakovsky",
   title     = "Human uncertainty makes classification more robust",
   booktitle = "International Conference on Computer Vision (ICCV)",
   year      = "2019",
   month     = oct
}