ActionFlowNet: Learning Motion Representation for Action Recognition
Abstract
We present a data-efficient representation learning approach that learns video representations from a small amount of labeled data. We propose ActionFlowNet, a multitask learning model that trains a single-stream convolutional neural network directly from raw pixels to jointly estimate optical flow while recognizing actions, capturing both appearance and motion in a single model. Our model effectively learns video representations from motion information in unlabeled videos. It improves action recognition accuracy by a large margin (23.6%) over state-of-the-art CNN-based unsupervised representation learning methods trained without external large-scale data and without additional optical flow input. Without pretraining on large external labeled datasets, our model, by fully exploiting motion information, achieves recognition accuracy competitive with models trained on large labeled datasets such as ImageNet and Sports-1M.
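The joint training described above combines an action-classification objective with an optical-flow estimation objective on a shared network. A minimal sketch of such a multitask loss is shown below; the function names, the endpoint-error formulation, and the weighting parameter `lambda_flow` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for the action-recognition head.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def flow_epe(pred_flow, true_flow):
    # Average endpoint error for the optical-flow head:
    # mean Euclidean distance between predicted and true flow vectors.
    return np.mean(np.sqrt(((pred_flow - true_flow) ** 2).sum(axis=-1)))

def multitask_loss(logits, label, pred_flow, true_flow, lambda_flow=0.5):
    # Joint objective: action loss plus weighted flow loss.
    # lambda_flow is a hypothetical trade-off weight for illustration.
    return cross_entropy(logits, label) + lambda_flow * flow_epe(pred_flow, true_flow)

# Toy example: 3 action classes, a 4x4 flow field with (dx, dy) per pixel.
logits = np.array([2.0, 0.5, -1.0])
pred_flow = np.zeros((4, 4, 2))
true_flow = np.ones((4, 4, 2))
loss = multitask_loss(logits, label=0, pred_flow=pred_flow, true_flow=true_flow)
```

Because both heads share one backbone in a single-stream network, gradients from the flow loss shape the same features used for recognition, which is how motion supervision on unlabeled videos can improve the learned representation.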