We present radiance field propagation (RFP), a novel approach to segmenting objects in 3D during reconstruction, given only unlabeled multi-view images of a scene. To the best of our knowledge, RFP is among the first unsupervised approaches to 3D object segmentation of real scenes with neural radiance fields (NeRF), requiring no supervision, annotations, or other cues such as 3D bounding boxes or prior knowledge of object class.
RFP is derived from emerging neural radiance field-based techniques, which jointly encode semantics with appearance and geometry. The core of our method is a novel propagation strategy for the radiance fields of individual objects, coupled with a bidirectional photometric loss, enabling an unsupervised partitioning of a scene into salient or meaningful regions corresponding to different object instances.
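To make the per-object compositing concrete, the sketch below shows one way two radiance fields can be rendered into a single image: densities are summed at each sample along a ray, and colors are density-weighted before standard volume rendering. This is a minimal illustration under our own assumptions (the `composite` helper, tensor shapes, and toy inputs are hypothetical), not the authors' implementation:

```python
# Minimal sketch (assumed, not the authors' code) of compositing two
# per-object radiance fields along a single ray.
import torch

def composite(sigmas, rgbs, deltas):
    """Volume-render K object fields summed at each of N ray samples.

    sigmas: (K, N) per-object densities
    rgbs:   (K, N, 3) per-object colors
    deltas: (N,) distances between adjacent samples
    """
    sigma = sigmas.sum(dim=0)                       # total density per sample
    alpha = 1.0 - torch.exp(-sigma * deltas)        # opacity per sample
    trans = torch.cumprod(                          # transmittance T_i
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )
    weights = trans * alpha                         # (N,) render weights
    # Sample color: density-weighted mix of the K object fields.
    mix = (sigmas[..., None] * rgbs).sum(0) / (sigma[..., None] + 1e-10)
    return (weights[:, None] * mix).sum(0)          # (3,) rendered pixel

# Toy example with K=2 objects (object 0 = background, object 1 = foreground).
sigmas = torch.rand(2, 64)
rgbs = torch.rand(2, 64, 3)
deltas = torch.full((64,), 0.05)
pixel = composite(sigmas, rgbs, deltas)
```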
Here objects 0 and 1 denote the background and the foreground object, respectively: (a) correctly rendered image, (b) image rendered with object 1's radiance field, (c) image rendered with object 0's radiance field, and (d) image rendered with the assignments swapped, i.e., object 0 with object 1's radiance field and object 1 with object 0's.
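The swapped rendering in (d) suggests how a bidirectional photometric objective could be formed: the correctly assigned rendering should match the photograph, while the cross-assigned one should not. The margin formulation, the image placeholders, and the `bidirectional_loss` helper below are our assumptions for illustration; the paper's exact loss may differ:

```python
# Hedged illustration of a bidirectional photometric objective built from
# the renderings in the caption above; the margin form is an assumption.
import torch
import torch.nn.functional as F

def bidirectional_loss(img_correct, img_swapped, img_gt, margin=0.1):
    """Pull the correctly assigned rendering toward the photo while pushing
    the cross-assigned (swapped) rendering away, up to a margin."""
    pull = F.mse_loss(img_correct, img_gt)                   # (a) vs. photo
    push = F.relu(margin - F.mse_loss(img_swapped, img_gt))  # (d) vs. photo
    return pull + push

# Toy tensors standing in for rendered and ground-truth images.
img_gt = torch.rand(3, 64, 64)
img_correct = img_gt + 0.01 * torch.randn_like(img_gt)
img_swapped = torch.rand(3, 64, 64)
loss = bidirectional_loss(img_correct, img_swapped, img_gt)
```

A hinge on the swapped term keeps the push bounded, so the optimizer is not rewarded for making the swapped rendering arbitrarily bad.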
Qualitative comparison of 3D segmentation on scenes with a single foreground object from LLFF and CO3D. IEM and ReDO are unsupervised single-image-based methods. SemanticNeRF is a supervised approach for scene labeling and is therefore not applicable to LLFF, which lacks ground-truth labels. Images have been cropped for layout.
We would like to thank Yichen Liu and Shengnan Liang for fruitful discussions at the inception stage of the project. This research is supported in part by the Research Grants Council of the Hong Kong SAR under grant no. 16201420. This website is in part based on a template by Michaël Gharbi.