OctNet: Learning Deep 3D Representations at High Resolutions
2017, pp. 6620–6629
Top 1% of 2017 papers by citations
Abstract
We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks that are both deep and high-resolution. Towards this goal, we exploit the sparsity of the input data to hierarchically partition the space using a set of unbalanced octrees in which each leaf node stores a pooled feature representation. This allows us to focus memory allocation and computation on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks, including 3D object classification, orientation estimation, and point cloud labeling.
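The hierarchical partitioning described in the abstract can be sketched as a toy recursion: dense regions keep subdividing, while sparse or empty cells terminate early as leaves that store a pooled feature. This is an illustrative sketch only, not the paper's actual OctNet implementation (which uses shallow unbalanced octrees embedded in a grid and learned features); the function names and the scalar average-pooled "feature" here are assumptions for demonstration.

```python
def build_octree(points, features, origin, size, max_depth):
    """Toy octree build: points is a list of (x, y, z) tuples inside the
    cube [origin, origin + size)^3; features is a parallel list of scalars.
    Empty cells cost nothing; sparse cells become leaves with a pooled
    feature; dense cells subdivide into eight children."""
    if not points:
        return None  # empty cell: no memory or computation spent here
    if max_depth == 0 or len(points) <= 1:
        # leaf node: average-pool all features falling into this cell
        return {"leaf": True, "feature": sum(features) / len(features)}
    half = size / 2.0
    children = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                co = (origin[0] + dx * half,
                      origin[1] + dy * half,
                      origin[2] + dz * half)
                # select the points that fall into this child octant
                idx = [i for i, p in enumerate(points)
                       if all(co[k] <= p[k] < co[k] + half for k in range(3))]
                children.append(build_octree([points[i] for i in idx],
                                             [features[i] for i in idx],
                                             co, half, max_depth - 1))
    return {"leaf": False, "children": children}

# Two points in opposite corners: only two octant chains are materialized.
tree = build_octree([(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)],
                    [1.0, 3.0], (0.0, 0.0, 0.0), 1.0, 3)
```

Because empty octants return `None` immediately, memory and recursion depth track the occupied regions rather than the full voxel grid, which is the core idea the abstract states.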
Related Papers
- → Denoising of 3D Point Clouds Constructed from Light Fields (2019), 10 citations
- → Construction of 3D Environment Models by Fusing Ground and Aerial Lidar Point Cloud Data (2015), 3 citations
- → Constructing a Mesh Model of the Construction for Finite Element Method (FEM) Simulation from the Point Cloud Data Collected by Terrestrial Laser Scanning (TLS) (2023), 1 citation
- → Plane Loop Closure Based Point Cloud Registration Using Structured Light Sensor (2019), 1 citation
- → Reconstruction Method for Complex Plant Leaves Based on Laser Point Cloud Data (2014)