Unbiased look at dataset bias
Ranked in the top 1% of 2011 papers by citations.
Abstract
Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as a source of large amounts of training data, but also as a means of measuring and comparing the performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets that started out as data capture efforts aimed at representing the visual world have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated on a number of criteria including relative data bias, cross-dataset generalization, effects of the closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve both dataset collection and algorithm evaluation protocols. More broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected, issue.
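The cross-dataset generalization criterion mentioned in the abstract can be illustrated with a minimal sketch: train a classifier on one dataset, then compare its accuracy on held data from the same dataset against its accuracy on a second dataset whose capture conditions differ. The datasets, the nearest-centroid classifier, and the `shift` parameter modelling dataset-specific bias below are all illustrative assumptions, not the paper's actual experimental setup.

```python
# Illustrative sketch (not the paper's protocol): synthetic 2-D "datasets"
# whose class means differ by a dataset-specific shift, standing in for
# the capture biases of real collections such as Caltech-101 or PASCAL VOC.
import random

random.seed(0)

def make_dataset(shift):
    """Two classes of 2-D points; `shift` models dataset-specific bias."""
    data = []
    for label, center in ((0, -1.0), (1, 1.0)):
        for _ in range(100):
            x = random.gauss(center + shift, 0.5)
            y = random.gauss(center, 0.5)
            data.append(((x, y), label))
    return data

def train_centroids(data):
    """Nearest-centroid classifier: average the points of each class."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x, y), label in data:
        sums[label][0] += x
        sums[label][1] += y
        sums[label][2] += 1
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items()}

def accuracy(centroids, data):
    """Fraction of points assigned to the class of the nearest centroid."""
    correct = 0
    for (x, y), label in data:
        pred = min(centroids,
                   key=lambda c: (x - centroids[c][0]) ** 2
                               + (y - centroids[c][1]) ** 2)
        correct += (pred == label)
    return correct / len(data)

dataset_a = make_dataset(shift=0.0)  # "source" dataset
dataset_b = make_dataset(shift=2.0)  # "target" dataset with different bias

model = train_centroids(dataset_a)
within = accuracy(model, dataset_a)  # train and test on the same dataset
cross = accuracy(model, dataset_b)   # test on the other dataset
print(f"within-dataset: {within:.2f}, cross-dataset: {cross:.2f}")
```

Under this toy bias, accuracy drops when the model is evaluated on the shifted dataset, mirroring the generalization gap the paper measures between real recognition datasets.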