A Fast and Accurate One-Stage Approach to Visual Grounding
Top 1% of 2019 papers by citations.
Abstract
We propose a simple, fast, and accurate one-stage approach to visual grounding, inspired by the following insight. The performance of existing propose-and-rank two-stage methods is capped by the quality of the region candidates proposed in the first stage: if none of the candidates covers the ground-truth region, the second stage has no hope of ranking the right region to the top. To avoid this bottleneck, we propose a one-stage model that enables end-to-end joint optimization. The main idea is as straightforward as fusing a text query's embedding into the YOLOv3 object detector, augmented by spatial features so as to account for spatial mentions in the query. Despite being simple, this one-stage approach shows great potential in terms of both accuracy and speed for both phrase localization and referring expression comprehension, according to our experiments. Given these results, along with careful investigations into some popular region proposals, we advocate for visual grounding a paradigm shift from the conventional two-stage methods to the one-stage framework.
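The fusion step the abstract describes can be sketched as follows: broadcast the query's sentence embedding to every cell of the detector's feature map, append per-cell normalized coordinate features for spatial mentions, and concatenate along the channel axis. This is a minimal NumPy sketch with hypothetical shapes and a simplified 8-dim coordinate encoding, not the authors' exact implementation.

```python
import numpy as np

def fuse_query(visual_feat, text_emb):
    """Fuse a text-query embedding into a detector feature map.

    visual_feat: (H, W, C) feature map from a YOLOv3-style backbone.
    text_emb:    (D,) embedding of the text query.
    Returns a (H, W, C + D + 8) map: visual + broadcast text + spatial coords.
    Shapes and the 8-dim coordinate layout here are illustrative assumptions.
    """
    H, W, C = visual_feat.shape
    D = text_emb.shape[0]

    # Broadcast the same query embedding to every spatial location.
    text_map = np.broadcast_to(text_emb, (H, W, D))

    # Per-cell spatial features (normalized): top-left, center, and
    # bottom-right of each grid cell, plus the cell's width and height.
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    coords = np.stack([
        xs / W, ys / H,                  # top-left corner
        (xs + 0.5) / W, (ys + 0.5) / H,  # cell center
        (xs + 1) / W, (ys + 1) / H,      # bottom-right corner
        np.full((H, W), 1.0 / W),        # cell width
        np.full((H, W), 1.0 / H),        # cell height
    ], axis=-1)

    # Channel-wise concatenation; a conv head on top would then predict
    # grounding boxes conditioned on the query.
    return np.concatenate([visual_feat, text_map, coords], axis=-1)

fused = fuse_query(np.zeros((13, 13, 256)), np.ones(512))
# fused.shape == (13, 13, 776)
```

In the full model this fused tensor would feed the detection head, so proposal generation and query ranking are optimized jointly rather than in two decoupled stages.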