Revisiting the “Video” in Video-Language Understanding
Abstract
What makes a video task uniquely suited for videos, beyond what can be understood from a single image? Building on recent progress in self-supervised image-language models, we revisit this question in the context of video and language tasks. We propose the atemporal probe (ATP), a new model for video-language analysis which provides a stronger bound on the baseline accuracy of multimodal models constrained by image-level understanding. By applying this model to standard discriminative video and language tasks, such as video question answering and text-to-video retrieval, we characterize the limitations and potential of current video-language benchmarks. We find that understanding of event temporality is often not necessary to achieve strong or state-of-the-art performance, even compared with recent large-scale video-language models and in contexts intended to benchmark deeper video-level understanding. We also demonstrate how ATP can improve both video-language dataset and model design. We describe a technique for leveraging ATP to better disentangle dataset subsets with a higher concentration of temporally challenging data, improving benchmarking efficacy for causal and temporal understanding. Further, we show that effectively integrating ATP into full video-level temporal models can improve efficiency and state-of-the-art accuracy. (Project website: https://stanfordvl.github.io/atp-revisit-video-lang/)
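To make the idea of an atemporal probe concrete, below is a minimal, hedged sketch of a frame selector in that spirit. It assumes frozen per-frame and text embeddings from a pretrained image-language encoder (e.g., CLIP) and a small scorer that picks a single frame embedding via Gumbel-softmax, deliberately ignoring temporal order. The class and parameter names (AtemporalProbeSketch, dim, hidden, tau) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an "atemporal probe"-style frame selector.
# Assumptions (not from the abstract): frozen per-frame and text embeddings
# (e.g., from CLIP), and Gumbel-softmax soft selection of a single frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtemporalProbeSketch(nn.Module):
    """Selects one frame embedding per video, without using temporal order."""

    def __init__(self, dim: int = 512, hidden: int = 256):
        super().__init__()
        # Scores each (frozen) frame embedding independently; no positional
        # encoding, so the selector cannot exploit event temporality.
        self.scorer = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, frame_emb: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # frame_emb: (batch, num_frames, dim), e.g. frozen CLIP image features.
        logits = self.scorer(frame_emb).squeeze(-1)              # (batch, num_frames)
        weights = F.gumbel_softmax(logits, tau=tau, hard=True)   # one-hot selection
        selected = torch.einsum("bn,bnd->bd", weights, frame_emb)
        return selected                                          # (batch, dim)

if __name__ == "__main__":
    probe = AtemporalProbeSketch(dim=512)
    frames = torch.randn(2, 8, 512)   # stand-in for frozen frame embeddings
    text = torch.randn(2, 512)        # stand-in for frozen text embeddings
    chosen = probe(frames)
    # Image-constrained score: similarity between the single selected
    # frame embedding and the text embedding.
    score = F.cosine_similarity(chosen, text, dim=-1)
    print(score.shape)  # torch.Size([2])
```

Because the selector sees frames as an unordered set, any accuracy it reaches serves as a bound on what image-level understanding alone can achieve on the benchmark, which is the diagnostic role ATP plays in the paper.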