With the growing demand for 3D applications, classifying 3D objects with neural networks has become an important research topic in recent years, and we propose a computation- and memory-efficient point-based solution to 3D classification. A 3D point cloud recognition paper, jointly authored by the Intelligence Vision Lab of the Samsung Electronics (China) R&D Center (SRC-Nanjing) and Nanjing University, was accepted by IEEE Transactions on Image Processing (TIP) in November 2023. TIP is a top international computer vision journal covering novel theory, algorithms, and architectures for the formation, capture, processing, communication, analysis, and display of images, video, and multidimensional signals in a wide variety of applications.
Intelligence Vision Lab of SRC-Nanjing The Intelligence Vision Lab (IVL) of SRC-Nanjing focuses on advanced computer vision technologies, including video frame interpolation, super resolution, style transfer, 2D/3D object detection, human pose understanding, etc. We have published several papers in top conferences and journals and have deployed a number of computer vision algorithms in Samsung’s products, and we will keep making contributions to Samsung.
(Structure comparisons of APP-Net with other existing Nets)
(Workflow of APP-Net)
Main Contributions of This Paper
This paper (“APP-Net: Auxiliary-point-based Push and Pull Operations for Efficient Point Cloud Classification”) makes the following main contributions.
1. It proposes to decouple feature aggregation from receptive field expansion to facilitate redundancy reduction.
2. It proposes an auxiliary-anchor-based operator that exchanges features among neighboring points with linear computational complexity and linear memory consumption.
3. It proposes to use online normal estimation to improve the classification task.
4. Experiments show that the proposed network achieves remarkable efficiency and low memory consumption while maintaining competitive accuracy.
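The auxiliary-anchor idea in contribution 2 can be illustrated with a minimal sketch: point features are "pushed" (averaged) onto a small, fixed set of auxiliary anchors, then "pulled" back to every member point, so both passes touch each point once and cost O(N) in time and memory. The anchor sampling, assignment rule, and feature fusion below are simplifying assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def push_pull(points, feats, num_anchors=4, rng=None):
    """Sketch of an auxiliary-anchor push/pull aggregation (assumed form).

    Each point is assigned to its nearest auxiliary anchor; features are
    pushed (averaged) onto the anchors, then pulled back to every point.
    With a fixed anchor count, both passes are linear in the point count.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Sample auxiliary anchors from the cloud (one simple choice; the
    # paper's anchor construction may differ).
    anchor_idx = rng.choice(len(points), size=num_anchors, replace=False)
    anchors = points[anchor_idx]

    # Assign every point to its nearest anchor: O(N * num_anchors).
    d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)
    assign = d.argmin(axis=1)                        # (N,)

    # Push: average the member point features onto each anchor.
    anchor_feat = np.zeros((num_anchors, feats.shape[1]))
    counts = np.zeros(num_anchors)
    np.add.at(anchor_feat, assign, feats)
    np.add.at(counts, assign, 1.0)
    anchor_feat /= np.maximum(counts, 1.0)[:, None]

    # Pull: broadcast each anchor's feature back to its member points,
    # fusing it with the original per-point feature (here by addition).
    return feats + anchor_feat[assign]

pts = np.random.default_rng(1).normal(size=(128, 3))
fts = np.random.default_rng(2).normal(size=(128, 8))
out = push_pull(pts, fts)
```

Because every point reads from and writes to exactly one anchor, there is no need to materialize a per-point k-nearest-neighbor table, which is where the memory savings over grouping-based aggregators come from.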
(The whole structure of the proposed APP-Net)
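The online normal estimation in contribution 3 can be sketched with the standard PCA-based formulation: for each point, take the eigenvector of its local neighborhood's covariance matrix with the smallest eigenvalue as the normal. The brute-force neighbor search and the estimator details below are assumptions for illustration; APP-Net's exact estimator may differ.

```python
import numpy as np

def estimate_normals(points, k=8):
    """PCA-based normal estimation sketch (assumed formulation).

    For each point, the normal is the eigenvector of the covariance of
    its k nearest neighbors with the smallest eigenvalue.
    """
    n = len(points)
    normals = np.empty_like(points)
    # Pairwise distances (fine for a sketch; a KD-tree scales better).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, :k]
    for i in range(n):
        nbrs = points[knn[i]]
        cov = np.cov(nbrs.T)                 # 3x3 local covariance
        w, v = np.linalg.eigh(cov)           # eigenvalues ascending
        normals[i] = v[:, 0]                 # smallest-variance direction
    return normals

# A flat patch in the z = 0 plane should yield normals along the z axis.
patch = np.random.default_rng(0).uniform(size=(32, 3))
patch[:, 2] = 0.0
nrm = estimate_normals(patch)
```

Computing normals online from the raw coordinates means the classifier does not need precomputed normals shipped with the dataset.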
Quantitative Results The proposed algorithm achieves excellent quantitative results (measured by speed, parameter size, and overall accuracy (OA)) on the ModelNet40 dataset. In particular, it attains state-of-the-art performance with a smaller model size and higher speed.
(Quantitative results on the ModelNet40 benchmark)
Qualitative Results The proposed algorithm produces better qualitative results on the ScanObjectNN benchmark than state-of-the-art methods while using far fewer computational resources.
(Qualitative results on the ScanObjectNN benchmark)