Our paper 'Multi-modal Relation Distillation for Unified 3D Representation Learning' has been accepted to ECCV 2024.

Multi-modal Relation Distillation for Unified 3D Representation Learning

Huiqun Wang, Yiping Bao, Panwang Pan, Zeming Li, Xiao Liu, Ruijie Yang, Di Huang

Recent advancements in multi-modal pre-training for 3D point clouds have demonstrated promising results by aligning heterogeneous features across 3D shapes and their corresponding 2D images and language descriptions. However, current straightforward solutions often overlook the intricate structural relations among samples, potentially limiting the full capabilities of multi-modal learning. To address this issue, we introduce Multi-modal Relation Distillation (MRD), a tri-modal pre-training framework designed to effectively distill reputable large Vision-Language Models (VLMs) into 3D backbones. MRD captures both the intra-relations within each modality and the cross-relations between different modalities, producing more discriminative 3D shape representations. Notably, MRD achieves significant improvements in downstream zero-shot classification and cross-modal retrieval tasks, setting new state-of-the-art performance.

PDF: https://arxiv.org/abs/2407.14007
Page: https://eccv.ecva.net/virtual/2024/poster/2074
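For intuition, the core idea is to match batch-level relational structure between a frozen VLM teacher and the 3D student, rather than aligning individual embeddings alone. Below is a minimal PyTorch sketch of one plausible form of such a relation distillation loss; the function names, the temperature value, and the KL-based formulation are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of relation distillation: intra-batch pairwise similarity
# distributions of the 3D student are matched to those of a frozen teacher
# (image or text encoder of a VLM). Names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def relation_matrix(features: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Row-wise log-softmax over pairwise similarities within a batch."""
    features = F.normalize(features, dim=-1)           # unit-norm embeddings
    logits = features @ features.t() / temperature     # (B, B) similarity logits
    return F.log_softmax(logits, dim=-1)

def relation_distillation_loss(
    feats_3d: torch.Tensor,       # (B, D) embeddings from the 3D backbone
    feats_teacher: torch.Tensor,  # (B, D) frozen VLM embeddings (image or text)
) -> torch.Tensor:
    """KL divergence between student and teacher intra-batch relation distributions."""
    log_p_student = relation_matrix(feats_3d)
    with torch.no_grad():                              # teacher stays frozen
        log_p_teacher = relation_matrix(feats_teacher)
    return F.kl_div(log_p_student, log_p_teacher, log_target=True, reduction="batchmean")
```

In a tri-modal setup, a loss of this kind would typically be computed for both the 3D-image and 3D-text pairs and combined with the standard contrastive alignment objective.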
