Action Recognition and Localization by Hierarchical Space-Time Segments
Top 1% of 2013 papers
Abstract
We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments, and also preserves their hierarchical and temporal relationships. Using a simple linear SVM on the resulting bag of hierarchical space-time segments representation, we attain action recognition performance better than, or comparable to, the state of the art on two challenging benchmark datasets, and at the same time produce good action localization results.