
July 25, 2025

HybridTM: Combining Transformer and Mamba for 3D Semantic Segmentation

Xinyu Wang*, Jinghua Hou*, Zhe Liu, Yingying Zhu
Huazhong University of Science and Technology
* Equal contribution, ✉ Corresponding author


Abstract

Transformer-based methods have demonstrated remarkable capabilities in 3D semantic segmentation through their powerful attention mechanisms, but their quadratic complexity limits the modeling of long-range dependencies in large-scale point clouds. Recent Mamba-based approaches offer efficient processing with linear complexity, yet they struggle with feature representation when extracting 3D features. Effectively combining these complementary strengths remains an open challenge in this field. In this paper, we propose HybridTM, the first hybrid architecture that integrates Transformer and Mamba for 3D semantic segmentation. In addition, we propose the Inner Layer Hybrid Strategy, which combines attention and Mamba at a finer granularity, enabling the simultaneous capture of long-range dependencies and fine-grained local features. Extensive experiments demonstrate the effectiveness and generalization of HybridTM on diverse indoor and outdoor datasets. Furthermore, HybridTM achieves state-of-the-art performance on the ScanNet, ScanNet200, and nuScenes benchmarks.
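As a rough illustration of the idea behind the Inner Layer Hybrid Strategy (the actual layer design is specified in the paper, not here), the sketch below interleaves a quadratic attention sub-block with a simple linear-time scan that stands in for Mamba's selective scan, inside a single layer. All function names, the toy scan, and the residual wiring are our own assumptions for illustration, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Standard scaled dot-product attention: O(N^2) in the number of points.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def linear_scan(x, decay=0.9):
    # Toy stand-in for Mamba's selective scan: an exponential moving
    # average over the serialized point sequence, O(N) in sequence length.
    h = np.zeros(x.shape[-1])
    out = np.empty_like(x)
    for t, xt in enumerate(x):
        h = decay * h + (1 - decay) * xt
        out[t] = h
    return out

def inner_hybrid_layer(x, Wq, Wk, Wv):
    # Hybridize at sub-layer granularity: an attention sub-block followed
    # by a linear-time scan sub-block, each with a residual connection.
    x = x + attention(x, Wq, Wk, Wv)
    x = x + linear_scan(x)
    return x
```

The point of the sketch is the granularity: attention and the linear-time operator live inside one layer rather than in separate, alternating blocks.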


News

  • 2025.06.30: HybridTM has been accepted by IROS 2025 as an Oral presentation.

Results

  • ScanNet Val

    | Method | Presented at | mIoU |
    |---|---|---|
    | MinkUNet | CVPR 2019 | 72.2 |
    | O-CNN | SIGGRAPH 2017 | 74.0 |
    | ST | CVPR 2022 | 74.3 |
    | Point Transformer V2 | NeurIPS 2022 | 75.4 |
    | OctFormer | SIGGRAPH 2023 | 74.5 |
    | Swin3D | arXiv 2023 | 75.5 |
    | Point Transformer V3 | CVPR 2024 | 77.5 |
    | Point Mamba | arXiv 2024 | 75.7 |
    | Serialized Point Mamba | arXiv 2024 | 76.8 |
    | Ours | - | 77.8 |
  • ScanNet200 Val

    | Method | Presented at | mIoU |
    |---|---|---|
    | MinkUNet | CVPR 2019 | 25.0 |
    | OctFormer | SIGGRAPH 2023 | 32.6 |
    | Point Transformer V2 | NeurIPS 2022 | 30.2 |
    | Point Transformer V3 | CVPR 2024 | 35.2 |
    | Ours | - | 36.5 |
  • S3DIS Area 5

    | Method | Presented at | Area 5 (mIoU) |
    |---|---|---|
    | MinkUNet | CVPR 2019 | 65.4 |
    | PointNeXt | NeurIPS 2022 | 70.5 |
    | Swin3D | arXiv 2023 | 72.5 |
    | Point Transformer V2 | NeurIPS 2022 | 71.6 |
    | Serialized Point Mamba | arXiv 2024 | 70.6 |
    | Ours | - | 72.1 |
  • nuScenes Val

    | Method | Presented at | mIoU |
    |---|---|---|
    | MinkUNet | CVPR 2019 | 73.3 |
    | SPVNAS | ECCV 2020 | 77.4 |
    | Cylinder3D | CVPR 2021 | 76.1 |
    | AF2S3Net | CVPR 2021 | 62.2 |
    | SphereFormer | CVPR 2023 | 79.5 |
    | Point Transformer V3 | CVPR 2024 | 80.2 |
    | Ours | - | 80.9 |
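All four tables report mean intersection-over-union (mIoU). For reference, here is a minimal, self-contained way to compute mIoU from predicted and ground-truth label arrays; it is a sketch of the standard metric, not the evaluation code used by any of these benchmarks.

```python
import numpy as np

def miou(pred, gt, num_classes):
    """Mean IoU over classes, ignoring classes absent from both pred and gt."""
    # Confusion matrix: rows = ground truth, cols = prediction.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (gt, pred), 1)
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    ious = inter / np.maximum(union, 1)
    valid = union > 0
    return ious[valid].mean()
```

For example, `miou(np.array([0, 1, 1, 1]), np.array([0, 0, 1, 1]), 2)` gives IoU 0.5 for class 0 and 2/3 for class 1, so the mean is 7/12.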

Installation

Please refer to INSTALL.md for installation of the HybridTM codebase.

Getting Started

Train with `scripts/train.sh`, where `-g` sets the number of GPUs, `-d` the dataset, `-c` the config name, and `-n` the experiment name:

```bash
# ScanNet
sh scripts/train.sh -g 4 -d scannet -c semseg-hybridTM-v1m1-0-base -n semseg-hybridTM-v1m1-0-base

# ScanNet200
sh scripts/train.sh -g 4 -d scannet200 -c semseg-hybridTM-v1m1-0-base -n semseg-hybridTM-v1m1-0-base

# S3DIS
sh scripts/train.sh -g 4 -d s3dis -c semseg-pt-v3m1-0-rpe -n semseg-pt-v3m1-0-rpe

# nuScenes (trained from scratch)
sh scripts/train.sh -g 4 -d nuscenes -c semseg-hybridTM-v1m1-0-base -n semseg-hybridTM-v1m1-0-base
```

TODO

  • Release the paper.
  • Release the HybridTM checkpoints on ScanNet.

Citation

```bibtex
@inproceedings{hybridTM,
  title={HybridTM: Combining Transformer and Mamba for 3D Semantic Segmentation},
  author={Wang, Xinyu and Hou, Jinghua and Liu, Zhe and Zhu, Yingying},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year={2025}
}
```

Acknowledgements

We thank these great works and open-source repositories: Pointcept, LION, Mamba, Spconv and flash-linear-attention.