Temporally consistent caption detection in videos using a spatiotemporal 3D method
Abstract
Captions are text or logos superimposed on videos during post-production. Caption detection in videos is useful for a variety of applications, and for many of them the temporal consistency and stability of the detections are critical. Most prior work applies post-processing procedures to smooth detected caption bounding boxes over time. Although these approaches mitigate the temporal inconsistency problem, they cannot eliminate it. In this paper, we present a new caption detection algorithm that detects the 3D bounding boxes of caption regions in a spatiotemporal volume. 2D bounding boxes are then created by slicing the 3D bounding boxes frame by frame. Since all the 2D bounding boxes corresponding to a caption area are sliced from one 3D bounding box, they are identical over time, which ensures the temporal consistency of the result. Experimental results show that our approach not only generates temporally consistent results but also achieves higher detection accuracy.
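The slicing step described in the abstract can be illustrated with a minimal sketch. The representation below (an axis-aligned box with a spatial rectangle and an inclusive frame range) is an assumption for illustration, not the paper's actual data structure; the point it demonstrates is that every frame sliced from the same 3D box receives an identical 2D rectangle, which is what makes the per-frame detections temporally consistent by construction.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Box3D:
    """Hypothetical axis-aligned spatiotemporal caption box:
    a spatial rectangle plus an inclusive frame range."""
    x_min: int
    y_min: int
    x_max: int
    y_max: int
    t_start: int  # first frame the caption appears in (inclusive)
    t_end: int    # last frame the caption appears in (inclusive)

def slice_to_2d(box: Box3D) -> List[Tuple[int, Tuple[int, int, int, int]]]:
    """Slice a 3D box into per-frame (frame_index, 2D rectangle) pairs.

    Every frame in [t_start, t_end] gets the SAME spatial rectangle,
    so the detections cannot jitter over time.
    """
    rect = (box.x_min, box.y_min, box.x_max, box.y_max)
    return [(t, rect) for t in range(box.t_start, box.t_end + 1)]

# Example: a caption visible from frame 10 through frame 12.
boxes = slice_to_2d(Box3D(100, 400, 540, 440, 10, 12))
```

In contrast, a per-frame detector followed by temporal smoothing can only reduce frame-to-frame variation of the rectangles; here the variation is zero by definition.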