

Beyond Cosine Similarity: Magnitude-Aware CLIP for No-Reference Image Quality Assessment

1South China Normal University, China  2Nanyang Technological University, Singapore  3City University of Hong Kong, China 
* denotes corresponding author
Accepted to AAAI 2026

[arXiv][Project Page]

The proposed Magnitude-Aware CLIP (MA-CLIP) is a training-free, dual-source IQA framework that integrates a statistically normalized magnitude score with semantic similarity through a confidence-guided fusion strategy.
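
In spirit, MA-CLIP derives two cues from one CLIP forward pass: a semantic score from prompt similarities and a magnitude score from the un-normalized feature norm. Below is a minimal sketch of that idea, assuming OpenAI's clip package; the prompt pair, the Box-Cox normalization, the fusion rule, and example.jpg are illustrative stand-ins rather than the repo's actual implementation (parameter names mirror the Quick Start below).

import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Antonym quality prompts, as commonly used in CLIP-based IQA.
text = clip.tokenize(["a good photo.", "a bad photo."]).to(device)
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)   # un-normalized image feature
    txt_feat = model.encode_text(text)

# Semantic cue: probability of "good" from cosine similarities.
cos = torch.nn.functional.cosine_similarity(img_feat, txt_feat)
sem_score = torch.softmax(cos, dim=0)[0]

# Magnitude cue: Box-Cox-normalized feature norm (lambda = box_lam).
box_lam = 0.5
mag_score = (img_feat.norm(dim=-1) ** box_lam - 1) / box_lam

# Confidence-guided fusion, here reduced to a simple weighted sum.
base_cos, base_norm, alpha = 1.0, 0.6, 1.0
quality = base_cos * sem_score + alpha * base_norm * mag_score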

If you find MA-CLIP useful for your projects, please consider ⭐ this repo. Thank you! 😉

:postbox: Updates

  • 2026.1.20: Looking forward to meeting you in Singapore. Have fun! :yum:
  • 2025.11.11: This repo is created.

:diamonds: Installation

Code and Environment

# git clone this repository
git clone https://github.com/zhix000/Maclip.git
cd Maclip

# create new anaconda env
conda create -n maclip python=3.8 -y
conda activate maclip

# install python dependencies
pip install -r requirements.txt

:circus_tent: Inference

Usage:

  1. Configure Dataset Paths
    Modify the dataset paths in inference_maclip.py (a sample configuration is sketched after this list):

    • image_paths_all: List of root directories for each dataset.
    • dataset_config: Mapping from dataset names to their corresponding JSON annotation files (containing image paths and ground-truth quality scores).
    • Supported Datasets: livec, AGIQA-3k, AGIQA-1k, SPAQ, CSIQ, TID2013, kadid, koniq, PIPAL
  2. Run Inference
    Execute the inference script to evaluate image quality on specified datasets:

    python inference_maclip.py
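
For orientation, a hypothetical configuration for step 1 might look like the following; the exact variable layout in inference_maclip.py may differ, and all paths and file names here are placeholders.

# Hypothetical values for the two settings in inference_maclip.py;
# adjust the root directories and JSON paths to your local layout.
image_paths_all = [
    "/data/IQA/LIVEC",        # one root directory per dataset
    "/data/IQA/AGIQA-3k",
    "/data/IQA/koniq",
]

dataset_config = {
    "livec":    "annotations/livec.json",     # image paths + ground-truth scores
    "AGIQA-3k": "annotations/agiqa3k.json",
    "koniq":    "annotations/koniq.json",
}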
    

:zap: Quick Start

# Install with pip
pip install Maclip

# test with default settings
import model  # Maclip's model module (name inferred from this snippet)

# `name` is a dataset name and `datasets` its annotation config (see Inference above)
scorer = model.Maclip(backbone='RN50')
pred = scorer(name, datasets, box_lam=0.5, base_cos=1.0, base_norm=0.6, alpha=1.0)

Key Parameters

The model supports customizing evaluation behavior through parameters in model.Maclip and its forward method:

  • backbone: CLIP backbone model (default: RN50; alternatives such as ViT-B/32 and RN101 are listed in clip_model.py).
  • box_lam: Lambda parameter for the Box-Cox transformation of feature magnitudes (default: 0.5; see the standalone sketch after this list).
  • base_cos/base_norm: Base weights for fusing the cosine-similarity and magnitude cues (default: 1.0/0.6).
  • alpha: Fusion coefficient (default: 1.0).
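
The Box-Cox transform behind box_lam is the standard statistical one; below is a generic standalone sketch (not code from this repo) showing how the default lambda reshapes raw feature magnitudes.

import numpy as np

def box_cox(x, lam):
    # Standard Box-Cox transform; lam plays the role of box_lam.
    x = np.asarray(x, dtype=float)
    if lam == 0.0:
        return np.log(x)  # limiting case as lambda -> 0
    return (x ** lam - 1.0) / lam

norms = np.array([4.0, 8.0, 16.0])  # example CLIP feature magnitudes
print(box_cox(norms, lam=0.5))      # default box_lam = 0.5 compresses large norms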

:love_you_gesture: Citation

If you find our work useful for your research, please consider citing the paper:

@article{liao2025beyond,
  title={Beyond Cosine Similarity: Magnitude-Aware CLIP for No-Reference Image Quality Assessment},
  author={Liao, Zhicheng and Wu, Dongxu and Shi, Zhenshan and Mai, Sijie and Zhu, Hanwei and Zhu, Lingyu and Jiang, Yuncheng and Chen, Baoliang},
  journal={arXiv preprint arXiv:2511.09948},
  year={2025}
}

Contact

If you have any questions, please feel free to reach out at zcliao@m.scnu.edu.cn or blchen@m.scnu.edu.cn.