
How SnowflakeNet Revolutionizes Point Cloud Completion with Skip‑Transformer

SnowflakeNet introduces a novel Snowflake Point Deconvolution architecture combined with a Skip‑Transformer to explicitly split and refine points, enabling high‑quality reconstruction of fine local geometry in incomplete point clouds and outperforming prior methods on both dense and sparse benchmarks.

Kuaishou Large Model

Background

Point cloud shape completion is an active research topic that aims to predict complete, high‑quality shapes from incomplete point clouds. Existing methods often fail to recover fine local details because point clouds are inherently discrete and locally unstructured.

Innovation

We propose SnowflakeNet, a new network that captures and restores local geometric details through multi‑layer Snowflake Point Deconvolution (SPD) and a Skip‑Transformer.

SPD progressively increases point count by splitting each parent point into multiple child points, resembling snowflake growth in 3D space.

Skip‑Transformer captures local shape features and coordinates splitting patterns between adjacent SPDs, ensuring consistent and detailed generation.

Related Work

Deep‑learning‑based point cloud completion methods can be grouped into three categories: folding‑based, coarse‑to‑fine, and deformation‑based. Our approach draws on the first two categories but introduces an explicit, locally structured point generation process.

Method Description

SnowflakeNet consists of three modules: feature extraction, seed point generation, and point generation. The point generation module, the core of the network, contains three SPD layers. Each SPD takes the point cloud from the previous layer, splits every point into multiple child points, and uses an MLP to predict offset features and offset vectors for those children.
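To make the splitting step concrete, here is a minimal numpy sketch of one SPD layer. The function name, the branch-index encoding, and the tiny untrained MLP are illustrative assumptions, not the paper's implementation; the sketch only shows the data flow of duplicating each parent point and displacing its children by predicted offsets.

```python
import numpy as np

def snowflake_point_deconv(points, feats, up_ratio=2, rng=None):
    """Hypothetical SPD step: each parent point is duplicated `up_ratio`
    times, and a small (randomly initialised, untrained) MLP maps the
    per-point features plus a branch index to a 3-D offset for every
    child, so children "grow" away from their parent.

    points: (N, 3) parent coordinates
    feats:  (N, C) per-point features
    returns: (N * up_ratio, 3) child coordinates
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = feats.shape
    # Duplicate parents and their features once per child.
    parents = np.repeat(points, up_ratio, axis=0)       # (N*r, 3)
    child_feats = np.repeat(feats, up_ratio, axis=0)    # (N*r, C)
    # One-hot branch index so sibling children can diverge.
    branch = np.eye(up_ratio)[np.tile(np.arange(up_ratio), n)]
    h = np.concatenate([child_feats, branch], axis=1)   # (N*r, C+r)
    # Two-layer MLP standing in for the learned offset predictor.
    w1 = rng.normal(scale=0.1, size=(c + up_ratio, 16))
    w2 = rng.normal(scale=0.1, size=(16, 3))
    offsets = np.tanh(h @ w1) @ w2
    return parents + offsets

pts = np.zeros((4, 3))
ft = np.ones((4, 8))
children = snowflake_point_deconv(pts, ft, up_ratio=2)
print(children.shape)  # (8, 3): point count doubled
```

Stacking three such layers, each feeding its output to the next, gives the coarse‑to‑fine upsampling described above.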

The Skip‑Transformer integrates the current point features with offset features from the previous SPD via attention, producing enriched shape context for the next splitting step.
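The attention step can be sketched as a single-head cross-attention in numpy. The projections are random and untrained, and the function signature is an assumption; the point is only the wiring: current point features act as queries against the previous SPD's offset features, and a residual connection keeps the current features in the output.

```python
import numpy as np

def skip_transformer(cur_feats, prev_offset_feats, rng=None):
    """Hypothetical Skip-Transformer step: attend from the current
    SPD's point features (queries) to the previous SPD's offset
    features (keys/values), producing enriched shape context.
    Random, untrained projections; illustrates data flow only."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, c = cur_feats.shape
    wq, wk, wv = (rng.normal(scale=0.1, size=(c, c)) for _ in range(3))
    q = cur_feats @ wq                 # (N, C)
    k = prev_offset_feats @ wk         # (M, C)
    v = prev_offset_feats @ wv         # (M, C)
    scores = q @ k.T / np.sqrt(c)      # (N, M) scaled dot-product
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax
    # Residual connection: current features plus attended context.
    return cur_feats + attn @ v

cur = np.ones((6, 8))
prev = np.ones((6, 8))
out = skip_transformer(cur, prev)
print(out.shape)  # (6, 8): same shape as the query features
```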

Experiments

We evaluate SnowflakeNet on the PCN dataset (dense, 16384 points) and the Completion3D dataset (sparse, 2048 points), comparing against state‑of‑the‑art methods.

Quantitative Results

Using Chamfer Distance (CD) as the metric, SnowflakeNet achieves lower average CD than competing methods on both datasets, demonstrating superior overall performance and per‑category improvements.
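For reference, a common form of the Chamfer Distance can be computed as below. Conventions vary across papers (squared vs. unsquared distances, mean vs. sum), so this is one standard variant, not necessarily the exact one used in the evaluation above.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and
    b (M, 3): mean squared distance from each point to its nearest
    neighbour in the other set, summed over both directions."""
    # (N, M) matrix of pairwise squared Euclidean distances.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, a))  # 0.0 for identical sets
```

Lower values indicate that the predicted and ground-truth point sets lie closer to each other, which is why average CD is reported per category and overall.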

Qualitative Results

Visual comparisons show that SnowflakeNet better restores fine local structures such as sofa cushions, chair backs, aircraft wings, and engines.

Snowflake Deconvolution Visualization

Figure 7 illustrates point‑splitting paths on the PCN dataset: gray lines represent the first split, red lines the second. The method respects local geometry, producing coherent offset trajectories for both flat surfaces and complex corners.

Real‑World Data

Applying the model pretrained on Completion3D to ScanNet scenes demonstrates robust generalization to sparse, noisy real‑world inputs, still producing clean, complete shapes.

References

Y. Yang et al., “FoldingNet: Point cloud auto‑encoder via deep grid deformation,” CVPR 2018.

X. Wen et al., “Point cloud completion by skip‑attention network with hierarchical folding,” CVPR 2020.

W. Yuan et al., “PCN: Point completion network,” 3DV 2018.

X. Wang et al., “Cascaded refinement network for point cloud completion,” CVPR 2020.

W. Zhang et al., “Detail preserved point cloud completion via separated feature aggregation,” ECCV 2020.

Z. Huang et al., “PF‑Net: Point fractal network for 3D point cloud completion,” CVPR 2020.

X. Wen et al., “PMP‑Net: Point cloud completion by learning multi‑step point moving paths,” CVPR 2021.

Tags: Deep Learning, point cloud completion, SnowflakeNet, 3D vision, Skip‑Transformer
Written by Kuaishou Large Model (Official Kuaishou Account)