Anticipating Visual Representations from Unlabeled Video
Abstract
Anticipating actions and objects before they start or appear is a difficult problem in computer vision with several real-world applications. This task is challenging partly because it requires leveraging extensive knowledge of the world that is difficult to write down. We believe that a promising resource for efficiently learning this knowledge is readily available unlabeled video. We present a framework that capitalizes on temporal structure in unlabeled video to learn to anticipate human actions and objects. The key idea behind our approach is that we can train deep networks to predict the visual representation of images in the future. Visual representations are a promising prediction target because they encode images at a higher semantic level than pixels, yet can be computed automatically. We then apply recognition algorithms on our predicted representation to anticipate objects and actions. We experimentally validate this idea on two datasets, anticipating actions one second in the future and objects five seconds in the future.
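A minimal PyTorch sketch of the abstract's key idea: train a network to regress from the current frame to the *feature representation* of a future frame, then run a recognition model on the predicted representation. The choice of ResNet-18 as the representation, all dimensions, the class count, and the single predictor network are illustrative assumptions, not the paper's exact setup (the paper regresses AlexNet fc7 features and trains a mixture of networks to handle multiple possible futures).

```python
import torch
import torch.nn as nn
import torchvision.models as models

FEAT_DIM = 512  # width of the target representation (ResNet-18 pooled features; assumed)

# Frozen "teacher" that defines the representation to be predicted.
# (The paper uses AlexNet fc7 features; ResNet-18 is a stand-in here.)
teacher = models.resnet18(weights=None)
teacher.fc = nn.Identity()              # expose the 512-d pooled features
teacher.eval()
for p in teacher.parameters():
    p.requires_grad = False

# Trainable predictor: maps the frame at time t to the representation at t + 1s.
# (The paper trains a mixture of K such networks; one is shown for brevity.)
predictor = models.resnet18(weights=None)
predictor.fc = nn.Linear(512, FEAT_DIM)

optimizer = torch.optim.SGD(predictor.parameters(), lr=1e-3)

# --- one self-supervised step on an unlabeled video pair (stand-in tensors) ---
current = torch.randn(4, 3, 224, 224)   # frames at time t
future = torch.randn(4, 3, 224, 224)    # frames one second later

with torch.no_grad():
    target = teacher(future)            # representation of the future frame

loss = nn.functional.mse_loss(predictor(current), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# --- anticipation: apply a recognition model to the *predicted* representation ---
classifier = nn.Linear(FEAT_DIM, 10)    # action classifier (10 classes assumed),
                                        # trained separately on labeled features
with torch.no_grad():
    anticipated = classifier(predictor(current)).argmax(dim=1)
```

The point of the design is that the regression target requires no labels: any video provides (current frame, future frame) pairs, and the teacher's features serve as free supervision at a higher semantic level than raw pixels.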