SI-ViT

December 10, 2023 · View on GitHub

Pancreatic Cancer ROSE Image Classification Based on Multiple Instance Learning with Shuffle Instances

Tianyi Zhang, Youdan Feng, Yu Zhao, Yunlu Feng, Yanli Lei, Nan Ying, Fan Song, Zhiling Yan, Yufang He, Aiming Yang, and Guanglei Zhang, “Shuffle Instances-based Vision Transformer for Pancreatic Cancer ROSE Image Classification,” Computer Methods and Programs in Biomedicine, 244, p. 107969. (SCI, Q1, IF=6.1)

https://authors.elsevier.com/a/1iDRxcV4LHEkF

  • Results can be viewed in the Archive folder

  • Colab scripts are provided with a sample dataset (MICCAI 2015 challenge)

Abstract

Background and Objective: The rapid on-site evaluation (ROSE) technique improves pancreatic cancer diagnosis by enabling immediate analysis of fast-stained cytopathological images. Automating ROSE classification could not only reduce the burden on pathologists but also broaden the application of this increasingly popular technique. However, this approach faces substantial challenges due to complex perturbations in color distribution, brightness, and contrast, which are influenced by various staining environments and devices. Additionally, the pronounced variability in cancerous patterns across samples further complicates classification, underscoring the difficulty in precisely identifying local cells and establishing their global relationships.

Methods: To address these challenges, we propose an instance-aware approach that enhances the Vision Transformer with a novel shuffle instance strategy (SI-ViT). Our approach presents a shuffle step to generate bags of shuffled instances and corresponding bag-level soft-labels, allowing the model to understand relationships and distributions beyond the limited original distributions. Simultaneously, combined with an un-shuffle step, the traditional ViT can model the relationships corresponding to the sample labels. This dual-step approach helps the model to focus on inner-sample and cross-sample instance relationships, making it potent in extracting diverse image patterns and reducing complicated perturbations.
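The shuffle step described above can be sketched in a few lines: instances (patches) are permuted across all bags in a batch, and each recomposed bag receives a soft-label aggregated from the patch-level labels it now contains. The function name and toy shapes below are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def shuffle_instances(patches, patch_labels, rng):
    """Shuffle patch instances across a batch and build bag-level soft-labels.

    patches:      (B, N, H, W, C) -- B bags of N patch instances each
    patch_labels: (B, N) binary patch annotations (1 = cancerous)
    Returns shuffled bags and their soft-labels
    (the fraction of positive patches in each shuffled bag).
    """
    B, N = patch_labels.shape
    flat_patches = patches.reshape(B * N, *patches.shape[2:])
    flat_labels = patch_labels.reshape(B * N)

    perm = rng.permutation(B * N)                  # mix instances across all bags
    shuffled = flat_patches[perm].reshape(patches.shape)
    shuffled_labels = flat_labels[perm].reshape(B, N)

    # Bag-level soft-label aggregated from the (now mixed) patch labels.
    soft_labels = shuffled_labels.mean(axis=1)
    return shuffled, soft_labels

# Toy usage: 4 bags of 16 patches of 32x32 RGB.
rng = np.random.default_rng(0)
patches = rng.random((4, 16, 32, 32, 3))
patch_labels = (rng.random((4, 16)) > 0.5).astype(float)
bags, soft = shuffle_instances(patches, patch_labels, rng)
```

Because shuffling only permutes instances, the total count of positive patches in the batch is conserved; the soft-labels simply redistribute it across bags, exposing the model to label distributions beyond the original samples.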

Results: Compared to state-of-the-art methods, significant improvements in ROSE classification have been achieved. For interpretability, equipped with instance shuffling, SI-ViT yields precise attention regions that identify cancer and normal cells in various scenarios. Additionally, the approach shows excellent potential in pathological image analysis through generalization validation on other datasets.

Conclusions: By proposing instance relationship modeling through shuffling, we introduce a new insight in pathological image analysis. The significant improvements in ROSE classification lead to potential AI-on-site applications in pancreatic cancer diagnosis. The code and results are publicly available at https://github.com/sagizty/MIL-SI.

Method Overview

MIL-SI

Overview of our proposed approach MIL-SI, composed of two steps: a MIL step and a CLS step. In the data processing illustrated in (a), the images are transformed into patches, and the patch annotation labels are calculated from the corresponding masks. In the MIL step, the bags of patches within a batch are shuffled, while in the CLS step the bags of image patches remain unchanged. The bags are then composed into images with soft-labels aggregated from the patch-level labels. In the two-step training process in (b), after feature extraction by the backbone, the patch tokens are used to regress the bag-level soft-label in the MIL head. In the CLS step, an additional CLS head predicts the categories of the input images based on the class token.
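A minimal sketch of this two-head design, assuming a tiny transformer encoder as a stand-in for the actual ViT backbone (module names, sizes, and the plain MSE/cross-entropy losses are illustrative, not the repository's code):

```python
import torch
import torch.nn as nn

class TwoHeadViT(nn.Module):
    """Shared backbone with a CLS head (class token -> category) and a
    MIL head (patch tokens -> bag-level soft-label regression)."""

    def __init__(self, dim=64, num_classes=2):
        super().__init__()
        self.patch_embed = nn.Linear(32 * 32 * 3, dim)   # flattened patch -> token
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.cls_head = nn.Linear(dim, num_classes)      # CLS step: classify
        self.mil_head = nn.Linear(dim, 1)                # MIL step: regress soft-label

    def forward(self, patch_pixels):                     # (B, N, 3, 32, 32)
        B = patch_pixels.shape[0]
        tokens = self.patch_embed(patch_pixels.flatten(2))         # (B, N, dim)
        tokens = torch.cat([self.cls_token.expand(B, -1, -1), tokens], dim=1)
        feats = self.backbone(tokens)
        logits = self.cls_head(feats[:, 0])                        # class token
        soft_pred = self.mil_head(feats[:, 1:]).mean(dim=1)        # patch tokens
        return logits, soft_pred.squeeze(-1)

# Toy usage: 2 bags of 144 patches; both losses drive the shared backbone.
model = TwoHeadViT()
logits, soft_pred = model(torch.randn(2, 144, 3, 32, 32))
cls_loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1]))
mil_loss = nn.MSELoss()(soft_pred, torch.tensor([0.3, 0.7]))
loss = cls_loss + mil_loss
```

In the MIL step the input bags would be the shuffled ones and only the MIL loss applies; in the CLS step the original bags are used and the CLS loss (optionally alongside a MIL term) applies.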

Results on the test set

MIL-SI

| Model | Model info | MIL Info | Size | Acc (%) | Precision (%) | Recall (%) | Sensitivity (%) | Specificity (%) | NPV (%) | F1_score (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViT_384_401_PT_lf05_b4_p32_ROSE_MIL | MIL ViT | CLS+CLS_MIL+MIL | 384, P32 | 94.00 | 91.98 | 90.68 | 90.68 | 95.77 | 95.05 | 91.32 |
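The metrics reported in these tables follow the standard binary definitions and can be reproduced from confusion-matrix counts (treating the cancerous class as positive); the counts in the usage line are invented for illustration:

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute the reported metrics (as percentages) from binary
    confusion-matrix counts. Recall and sensitivity are the same quantity."""
    acc = 100 * (tp + tn) / (tp + fp + tn + fn)
    precision = 100 * tp / (tp + fp)
    recall = 100 * tp / (tp + fn)              # == sensitivity
    specificity = 100 * tn / (tn + fp)
    npv = 100 * tn / (tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"Acc": acc, "Precision": precision, "Recall": recall,
            "Sensitivity": recall, "Specificity": specificity,
            "NPV": npv, "F1_score": f1}

# Illustrative counts, not the paper's actual confusion matrix.
m = binary_metrics(tp=90, fp=10, tn=95, fn=5)
```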

SOTA models

| Model | Model info | MIL Info | Size | Acc (%) | Precision (%) | Recall (%) | Sensitivity (%) | Specificity (%) | NPV (%) | F1_score (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| vgg16_384_401_PT_lf05_b4_ROSE_CLS | VGG 16 | CLS | 384 | 90.65 | 86.27 | 87.01 | 87.01 | 92.60 | 93.02 | 86.64 |
| vgg19_384_401_PT_lf05_b4_ROSE_CLS | VGG 19 | CLS | 384 | 90.06 | 90.42 | 79.94 | 79.94 | 95.47 | 89.90 | 84.86 |
| mobilenetv3_384_401_PT_lf05_b4_ROSE_CLS | Mobilenet v3 | CLS | 384 | 89.57 | 91.06 | 77.68 | 77.68 | 95.92 | 88.94 | 83.84 |
| efficientnet_b3_384_401_PT_lf05_b4_ROSE_CLS | Efficientnet_b3 | CLS | 384 | 89.57 | 85.03 | 85.03 | 85.03 | 91.99 | 91.99 | 85.03 |
| ResNet50_384_401_PT_lf05_b4_ROSE_CLS | ResNet50 | CLS | 384 | 90.75 | 87.36 | 85.88 | 85.88 | 93.35 | 92.51 | 86.61 |
| inceptionv3_384_401_PT_lf05_b4_ROSE_CLS | Inception v3 | CLS | 384 | 90.75 | 86.72 | 86.72 | 86.72 | 92.90 | 92.90 | 86.72 |
| xception_384_401_PT_lf05_b4_ROSE_CLS | Xception | CLS | 384 | 90.94 | 91.46 | 81.64 | 81.64 | 95.92 | 90.71 | 86.27 |
| swin_b_384_401_PT_lf05_b4_ROSE_CLS | Swin Transformer | CLS | 384 | 89.17 | 86.75 | 81.36 | 81.36 | 93.35 | 90.35 | 83.97 |
| ViT_384_401_PT_lf05_b4_ROSE_CLS | ViT | CLS | 384 | 90.65 | 88.20 | 84.46 | 84.46 | 93.96 | 91.88 | 86.29 |
| conformer_384_401_PT_lf05_b4_ROSE_CLS | Conformer | CLS | 384 | 89.67 | 90.82 | 78.25 | 78.25 | 95.77 | 89.17 | 84.07 |
| cross_former_224_401_PT_lf05_b4_ROSE_CLS | Cross_former | CLS | 384 | 89.67 | 86.94 | 82.77 | 82.77 | 93.35 | 91.02 | 84.80 |
| PC_Hybrid2_384_401_PT_lf05_b4_ROSE_CLS | MSHT | CLS | 384 | 90.65 | 90.60 | 81.64 | 81.64 | 95.47 | 90.67 | 85.88 |

Counterpart augmentations

| Model | Model info | MIL Info | Size | Acc (%) | Precision (%) | Recall (%) | Sensitivity (%) | Specificity (%) | NPV (%) | F1_score (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViT_384_401_PT_lf05_b4_ROSE_CutMix_CLS | ViT | CLS | 384 | 92.72 | 89.55 | 89.55 | 89.55 | 94.41 | 94.41 | 89.55 |
| ViT_384_401_PT_lf05_b4_ROSE_Cutout_CLS | ViT | CLS | 384 | 92.32 | 91.07 | 86.44 | 86.44 | 95.47 | 92.94 | 88.70 |
| ViT_384_401_PT_lf05_b4_ROSE_Mixup_CLS | ViT | CLS | 384 | 92.52 | 88.83 | 89.83 | 89.83 | 93.96 | 94.53 | 89.33 |

Different head structure

| Model | Model info | MIL Info | Size | Acc (%) | Precision (%) | Recall (%) | Sensitivity (%) | Specificity (%) | NPV (%) | F1_score (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PC_ViT_384_401_PT_lf05_b4_ROSE_CLS | ViT | CLS | 384 | 90.65 | 88.20 | 84.46 | 84.46 | 93.96 | 91.88 | 86.29 |
| ViT_384_401_PT_lf05_b4_p32_NS_ROSE_MIL | MIL ViT (no shuffle MIL) | CLS+CLS_MIL | 384, P32 | 92.13 | 90.06 | 87.01 | 87.01 | 94.86 | 93.18 | 88.51 |
| ViT_384_401_PT_lf05_b4_p32_NCLSMIL_ROSE | MIL ViT | CLS+MIL, no CLS-step MIL regression | 384, P32 | 93.41 | 91.59 | 89.27 | 89.27 | 95.62 | 94.34 | 90.41 |
| ViT_384_401_PT_lf05_b4_p32_ROSE_MIL | MIL ViT | CLS+CLS_MIL+MIL | 384, P32 | 94.00 | 91.98 | 90.68 | 90.68 | 95.77 | 95.05 | 91.32 |

Different patch size

| Model | Model info | MIL Info | Size | Acc (%) | Precision (%) | Recall (%) | Sensitivity (%) | Specificity (%) | NPV (%) | F1_score (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViT_384_401_PT_lf05_b4_p16_ROSE_MIL | MIL ViT | CLS+CLS_MIL+MIL | 384, P16 | 93.60 | 92.88 | 88.42 | 88.42 | 96.37 | 93.96 | 90.59 |
| ViT_384_401_PT_lf05_b4_p32_ROSE_MIL | MIL ViT | CLS+CLS_MIL+MIL | 384, P32 | 94.00 | 91.98 | 90.68 | 90.68 | 95.77 | 95.05 | 91.32 |
| ViT_384_401_PT_lf05_b4_p64_ROSE_MIL | MIL ViT | CLS+CLS_MIL+MIL | 384, P64 | 93.11 | 91.04 | 88.98 | 88.98 | 95.32 | 94.18 | 90.00 |
| ViT_384_401_PT_lf05_b4_p128_ROSE_MIL | MIL ViT | CLS+CLS_MIL+MIL | 384, P128 | 92.62 | 92.92 | 85.31 | 85.31 | 96.53 | 92.47 | 88.95 |

Different head balance

| Model | Model info | MIL Info | Size | Acc (%) | Precision (%) | Recall (%) | Sensitivity (%) | Specificity (%) | NPV (%) | F1_score (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ViT_384_401_PT_lf05_b4_p32_MIL_05_ROSE | MIL ViT | CLS+0.5CLS_MIL+0.5MIL | 384, P32 | 91.73 | 84.62 | 93.22 | 93.22 | 90.94 | 96.17 | 88.71 |
| ViT_384_401_PT_lf05_b4_p32_MIL_12_ROSE | MIL ViT | CLS+1.2CLS_MIL+1.2MIL | 384, P32 | 92.52 | 87.77 | 91.24 | 91.24 | 93.20 | 95.22 | 89.47 |
| ViT_384_401_PT_lf05_b4_p32_MIL_15_ROSE | MIL ViT | CLS+1.5CLS_MIL+1.5MIL | 384, P32 | 93.31 | 91.81 | 88.70 | 88.70 | 95.77 | 94.07 | 90.23 |
| ViT_384_401_PT_lf05_b4_p32_MIL_18_ROSE | MIL ViT | CLS+1.8CLS_MIL+1.8MIL | 384, P32 | 93.41 | 91.12 | 89.83 | 89.83 | 95.32 | 94.60 | 90.47 |
| ViT_384_401_PT_lf05_b4_p32_MIL_25_ROSE | MIL ViT | CLS+2.5CLS_MIL+2.5MIL | 384, P32 | 93.50 | 92.35 | 88.70 | 88.70 | 96.07 | 94.08 | 90.49 |
| ViT_384_401_PT_lf05_b4_p32_MIL_30_ROSE | MIL ViT | CLS+3.0CLS_MIL+3.0MIL | 384, P32 | 93.60 | 91.40 | 90.11 | 90.11 | 95.47 | 94.75 | 90.75 |

Attention visualization by Grad-CAM
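For context, Grad-CAM weights a layer's activation maps by the spatially averaged gradient of the target class score and applies a ReLU. A generic sketch on a toy CNN (not the actual SI-ViT backbone; the function and toy network here are illustrative assumptions):

```python
import torch
import torch.nn as nn

def grad_cam(model, feature_layer, x, class_idx):
    """Generic Grad-CAM: weight the chosen layer's activation maps by the
    spatially averaged gradient of the target class score, then ReLU."""
    acts, grads = {}, {}
    h1 = feature_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = feature_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    try:
        score = model(x)[:, class_idx].sum()
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # (B, C, 1, 1)
    cam = torch.relu((weights * acts["a"]).sum(dim=1))    # (B, H, W)
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

# Toy CNN and random input; cam covers the 8x8 feature-map locations.
torch.manual_seed(0)
net = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 8, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(8 * 8 * 8, 2))
cam = grad_cam(net, net[2], torch.randn(1, 3, 32, 32), class_idx=1)
```

The resulting map is upsampled to the input size and overlaid on the image to produce heatmaps like those shown below.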

CAM results

CAM on shuffled instances

Bad cases

CAM of different patch settings

CAM of different settings on shuffled samples

More samples can be viewed in the Archive folder