Cross-View Action Modeling, Learning, and Recognition
Abstract
Existing methods for video-based action recognition are generally view-dependent, i.e., recognition is performed from the same views seen in the training data. We present a novel multiview spatio-temporal and-or graph (MST-AOG) representation for cross-view action recognition, i.e., recognition performed on video from an unknown and unseen view. As a compositional model, MST-AOG compactly represents the hierarchical combinatorial structure of cross-view actions by explicitly modeling geometry, appearance, and motion variations. This paper proposes effective methods to learn the structure and parameters of MST-AOG. Inference based on MST-AOG enables action recognition from novel views. Training of MST-AOG takes advantage of 3D human skeleton data obtained from Kinect cameras, avoiding the error-prone and time-consuming annotation of enormous numbers of multi-view video frames; recognition, however, requires no 3D information and operates on 2D video input. A new Multiview Action3D dataset has been created and will be released. Extensive experiments demonstrate that this new action representation significantly improves the accuracy and robustness of cross-view action recognition on 2D videos.
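The core idea of an and-or graph representation can be illustrated with a minimal sketch (this is an illustrative toy, not the authors' MST-AOG; all node names and scores below are hypothetical): AND nodes compose an action from its parts by summing child scores, while OR nodes select the best alternative, e.g., among view- or pose-specific templates, by taking the maximum.

```python
# Illustrative sketch of and-or graph inference (hypothetical, not the
# authors' MST-AOG). Nodes are (kind, payload) tuples: "leaf" payloads
# are template ids; "and"/"or" payloads are child-node lists.

def score(node, leaf_scores):
    """Recursively score an and-or graph.

    AND nodes compose parts (sum over children); OR nodes select the
    best alternative (max over children), e.g. among views.
    """
    kind, payload = node
    if kind == "leaf":
        return leaf_scores[payload]  # payload is the leaf template id
    child_scores = [score(child, leaf_scores) for child in payload]
    return sum(child_scores) if kind == "and" else max(child_scores)

# Tiny example: an action is an AND of two parts; each part is an OR
# over two view-specific templates (leaves).
part1 = ("or", [("leaf", "p1_view_a"), ("leaf", "p1_view_b")])
part2 = ("or", [("leaf", "p2_view_a"), ("leaf", "p2_view_b")])
action = ("and", [part1, part2])

leaf_scores = {"p1_view_a": 9, "p1_view_b": 4,
               "p2_view_a": 2, "p2_view_b": 7}
print(score(action, leaf_scores))  # prints 16 (= 9 + 7)
```

Because each OR node picks its best view independently, the composed action can be matched even when its parts are observed from views never paired together during training, which is the intuition behind cross-view generalization in compositional models.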