IoU Loss for 2D/3D Object Detection
Top 1% of 2019 papers
Abstract
In 2D/3D object detection, Intersection-over-Union (IoU) is widely employed as the evaluation metric for comparing detectors at test time. During training, however, a common distance loss (e.g., L_1 or L_2) is typically adopted to minimize the discrepancy between the predicted and ground-truth Bounding Boxes (Bboxes). To eliminate this gap between training and testing, the IoU loss was introduced for 2D object detection in [1] and [2]. Unfortunately, these approaches only work for axis-aligned 2D boxes and cannot be applied to the more general detection task with rotated boxes. To resolve this issue, we first investigate the IoU computation for two rotated boxes and then implement a unified framework, an IoU loss layer, for both 2D and 3D object detection tasks. By integrating the implemented IoU loss into several state-of-the-art 3D object detectors, consistent improvements are achieved for both bird's-eye-view 2D detection and point cloud 3D detection on the public KITTI [3] benchmark.
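To make the distinction concrete, the sketch below (an illustration, not the paper's actual implementation) computes both the axis-aligned IoU used in [1] and [2], with the simplest loss form 1 − IoU, and a rotated-box IoU, where the intersection of the two boxes is a convex polygon obtained here via Sutherland-Hodgman clipping and measured with the shoelace formula. The box parameterizations (x1, y1, x2, y2) and (cx, cy, w, h, angle) are assumptions for illustration.

```python
import math

def axis_aligned_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, gt):
    """IoU loss in its simplest form, 1 - IoU (see [1], [2] for variants)."""
    return 1.0 - axis_aligned_iou(pred, gt)

def box_corners(cx, cy, w, h, angle):
    """Counter-clockwise corners of a rotated box (cx, cy, w, h, angle)."""
    c, s = math.cos(angle), math.sin(angle)
    local = [(w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2), (-w / 2, -h / 2)]
    return [(cx + c * x - s * y, cy + s * x + c * y) for x, y in local]

def _cross(a, b, p):
    """> 0 if p lies to the left of the directed edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def _line_intersection(p, q, a, b):
    """Intersection point of line p-q with line a-b."""
    d1 = (q[0] - p[0], q[1] - p[1])
    d2 = (b[0] - a[0], b[1] - a[1])
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((a[0] - p[0]) * d2[1] - (a[1] - p[1]) * d2[0]) / denom
    return (p[0] + t * d1[0], p[1] + t * d1[1])

def _clip(subject, clip_poly):
    """Sutherland-Hodgman: clip `subject` by a convex CCW polygon."""
    output = subject
    for i in range(len(clip_poly)):
        a, b = clip_poly[i], clip_poly[(i + 1) % len(clip_poly)]
        if not output:
            break
        input_list, output = output, []
        for j in range(len(input_list)):
            p = input_list[j]
            q = input_list[(j + 1) % len(input_list)]
            p_in, q_in = _cross(a, b, p) >= 0, _cross(a, b, q) >= 0
            if p_in:
                output.append(p)
            if p_in != q_in:  # edge p-q crosses the clip line
                output.append(_line_intersection(p, q, a, b))
    return output

def _polygon_area(pts):
    """Shoelace formula."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] -
                   pts[(i + 1) % n][0] * pts[i][1] for i in range(n))) / 2.0

def rotated_iou(box_a, box_b):
    """IoU of two rotated boxes given as (cx, cy, w, h, angle in radians)."""
    pa, pb = box_corners(*box_a), box_corners(*box_b)
    inter_poly = _clip(pa, pb)
    inter = _polygon_area(inter_poly) if len(inter_poly) >= 3 else 0.0
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```

Note that when both angles are zero, `rotated_iou` reduces to the axis-aligned case; a production loss layer would additionally need the gradient of the intersection area with respect to the box parameters, which the paper derives.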