

Poster

SVDTree: Semantic Voxel Diffusion for Single Image Tree Reconstruction

Yuan Li · Zhihao Liu · Bedrich Benes · Xiaopeng Zhang · Jianwei Guo

Arch 4A-E Poster #441
Poster Session: Wed 19 Jun, 10:30 a.m. – noon PDT

Abstract:

Efficiently representing and reconstructing the 3D geometry of trees remains a challenging problem in computer vision and graphics. We propose a novel approach for generating realistic tree models from single-view photographs. We cast the 3D information inference problem as a semantic voxel diffusion process, which converts an input image of a tree into a novel Semantic Voxel Structure (SVS) in 3D space. The SVS encodes both geometric appearance and semantic structural information (e.g., classifying trunks, branches, and leaves), which has the distinct advantage of retaining the intricate internal features of trees. Tailored to the SVS, we also present a new hybrid tree-modeling approach that combines structure-oriented branch reconstruction with self-organization-based foliage reconstruction. We validate our approach using images of both synthetic and real trees. The comparisons show that our approach preserves tree details better and achieves more realistic and accurate reconstructions than previous methods. We will open-source the code and data to facilitate future research.
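To make the description of the SVS more concrete, below is a minimal, hypothetical sketch of what a semantic voxel structure could look like as a data structure: a dense voxel grid whose cells carry semantic class labels (empty, trunk, branch, leaf), which can then be split per class for separate branch and foliage reconstruction. The class codes, grid resolution, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative class codes for a semantic voxel grid (assumed, not from the paper).
EMPTY, TRUNK, BRANCH, LEAF = 0, 1, 2, 3

def make_empty_svs(resolution=64):
    """Allocate an empty semantic voxel grid of shape (R, R, R)."""
    return np.full((resolution, resolution, resolution), EMPTY, dtype=np.uint8)

def split_by_semantics(svs):
    """Group occupied voxel coordinates by semantic class, e.g. so branch and
    foliage reconstruction stages can consume them separately."""
    return {
        "trunk": np.argwhere(svs == TRUNK),
        "branch": np.argwhere(svs == BRANCH),
        "leaf": np.argwhere(svs == LEAF),
    }

if __name__ == "__main__":
    svs = make_empty_svs(resolution=32)
    svs[16, 16, 0:10] = TRUNK        # toy vertical trunk
    svs[16, 10:23, 10] = BRANCH      # toy horizontal branch
    svs[14:19, 8:25, 11:14] = LEAF   # toy leaf cluster
    parts = split_by_semantics(svs)
    print({name: len(coords) for name, coords in parts.items()})
```

In such a representation, the per-class voxel sets would correspond to the split the abstract describes: structure-oriented reconstruction consumes trunk and branch voxels, while foliage reconstruction consumes leaf voxels.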
