Our paper 'Multi-modal Relation Distillation for Unified 3D Representation Learning' was accepted by ECCV 2024.
Our team won the CVPR 2024 3D Open-Vocabulary Scene Understanding Challenge: 1st place in the 3D Functionality Grounding track and 2nd place in the 3D Object Instance Search track.
Our team won the CVPR 2024 Monocular Depth Estimation Challenge (GuangYuan Zhou, ZhengXin Li, Qiang Rao, YiPing Bao, Xiao Liu; PICO-MR).
Our paper 'Coin3D: Controllable and Interactive 3D Assets Generation with Proxy-Guided Conditioning' was accepted by SIGGRAPH 2024.
Our paper 'DreamSpace: Dreaming Your Room Space with Text-Driven Panoramic Texture Propagation' was accepted by IEEE VR 2024.
Our team won the ICCV 2023 3D Open-Vocabulary Scene Understanding (OpenSUN3D) Challenge (Hongbo Tian, Chunjie Wang, Xiaosheng Yan, Bingwen Wang, ...).
Our paper 'Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields' was selected as a best paper finalist (17/8260) for ICCV 2023!
Our paper 'Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields' was accepted by ICCV 2023 as an oral presentation.
Our paper 'MegBA: A High-Performance and Distributed Library for Large-Scale Bundle Adjustment' was accepted by ECCV 2022.
Our paper 'TransMVSNet: Global Context-aware Multi-view Stereo Network with Transformers' was accepted by CVPR 2022.