In this paper, we present SegDINO3D, a novel Transformer encoder-decoder framework for 3D instance segmentation. Because 3D training data is far less abundant than 2D training images, SegDINO3D is designed to fully leverage 2D representations from a pre-trained 2D detection model, including both image-level and object-level features, to improve its 3D representations. SegDINO3D takes both a point cloud and its associated 2D images as input. In the encoder stage, it first enriches each 3D point by retrieving 2D image features from the views in which that point appears, and then applies a 3D encoder for 3D context fusion. In the decoder stage, it formulates 3D object queries as 3D anchor boxes and performs cross-attention from the 3D queries to 2D object queries obtained from the 2D images via the 2D detection model. These 2D object queries serve as a compact object-level representation of the 2D images, avoiding the cost of keeping thousands of image feature maps in memory while faithfully preserving the knowledge of the pre-trained 2D model. Formulating the 3D queries as boxes also enables the model to modulate cross-attention with the predicted boxes for more precise querying. SegDINO3D achieves state-of-the-art performance on the ScanNetV2 and ScanNet200 3D instance segmentation benchmarks. Notably, on the challenging ScanNet200 dataset, SegDINO3D outperforms prior methods by +8.6 and +6.8 mAP on the validation and hidden test sets, respectively.
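To make the decoder design concrete, the following is a minimal PyTorch sketch of one cross-attention step from 3D box queries to 2D object queries, with an additive geometric bias in the spirit of the distance-aware attention described above. The function name, the single-head formulation, and the exact bias form (a distance penalty with temperature tau) are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def distance_aware_cross_attention(q3d, kv2d, q_centers, kv_centers, tau=1.0):
    # q3d:        (Nq, C) content features of the 3D object queries
    # kv2d:       (Nk, C) 2D object queries from the pre-trained 2D detector
    # q_centers:  (Nq, 3) centers of the 3D anchor boxes
    # kv_centers: (Nk, 3) medoid 3D centers assigned to the 2D object queries
    # tau:        temperature controlling how sharply distance down-weights attention
    C = q3d.shape[-1]
    logits = q3d @ kv2d.T / C ** 0.5               # (Nq, Nk) content similarity
    dist = torch.cdist(q_centers, kv_centers)      # (Nq, Nk) pairwise 3D distances
    attn = F.softmax(logits - dist / tau, dim=-1)  # nearby 2D queries attend more strongly
    return attn @ kv2d                             # (Nq, C) updated 3D query features

Biasing the logits with geometry rather than hard-masking keeps the operation differentiable and still lets a 3D query reach distant 2D evidence when content similarity is strong.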
(a) Overview of SegDINO3D. Given a point cloud and its corresponding posed multi-view RGB images, SegDINO3D extracts features for each 3D point via a Nearest View Sampling strategy, and then utilizes a 3D encoder to fuse global contextual information. In the decoder, we employ the proposed Box-Modulated Cross-Attention and Distance-Aware Cross-Attention to update the 3D object queries. (b) Visual illustration of the Nearest View Sampling strategy. Each 3D point computes its distance to every view in which it is visible and selects the top-k nearest views (red dashed lines). (c) Visual illustration of the 2D object query construction. Each 2D object query is assigned a 3D center computed as the medoid of its corresponding 3D points. The green points on the right side show the distribution of the 2D object queries' 3D centers in the scene.
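As a companion to panels (b) and (c), here is a minimal sketch of the two geometric steps they depict: selecting the top-k nearest visible views per point, and assigning each 2D object query a 3D center as the medoid of its lifted points. The function names and tensor layouts are our own; the sketch assumes per-view features have already been sampled for every point and that each point is visible in at least k views.

import torch

def nearest_view_sampling(points, cam_centers, visible, feats_per_view, k=3):
    # points:         (P, 3)    3D point coordinates
    # cam_centers:    (V, 3)    camera centers of the posed views
    # visible:        (P, V)    True where view v sees point p
    # feats_per_view: (P, V, C) 2D features pre-sampled from each view per point
    dist = torch.cdist(points, cam_centers)           # (P, V) point-to-camera distances
    dist = dist.masked_fill(~visible, float('inf'))   # never pick a view that misses the point
    idx = dist.topk(k, dim=1, largest=False).indices  # (P, k) indices of nearest visible views
    idx = idx[..., None].expand(-1, -1, feats_per_view.size(-1))
    return torch.gather(feats_per_view, 1, idx).mean(dim=1)  # (P, C) fused 2D feature

def medoid(points):
    # points: (N, 3) 3D points lifted from one 2D object query's mask
    total = torch.cdist(points, points).sum(dim=1)    # summed distance to all other points
    return points[total.argmin()]                     # the point minimizing that sum

Unlike the mean, the medoid is guaranteed to be an actual observed point, which keeps the assigned 3D centers on the scene geometry.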
Visualization of the predicted 3D instance bounding boxes and 3D instance masks.
@article{qu2025segdino3d,
  title={SegDINO3D: 3D Instance Segmentation Empowered by Both Image-Level and Object-Level 2D Features},
  author={Qu, Jinyuan and Li, Hongyang and Chen, Xingyu and Liu, Shilong and Shi, Yukai and Ren, Tianhe and Jing, Ruitao and Zhang, Lei},
  journal={arXiv preprint arXiv:2509.16098},
  year={2025}
}