Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs (ECCV 2024)
November 6, 2024
This repository provides the official PyTorch implementation of the following paper:
Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs
> Shi Liu¹, Kecheng Zheng¹·², Wei Chen²
> ¹State Key Lab of CAD&CG, Zhejiang University; ²Ant Group
Overview

PAI is a training-free method for alleviating hallucination in Large Vision-Language Models (LVLMs). It makes the model pay more attention to the image by perturbing the attention over image tokens during inference and by refining the output logits against an input that contains no image tokens.
Setup
conda env create -f environment.yml
conda activate pai
How to use PAI in LVLMs
Our method consists of two core components:
1. Inference intervention
This component is implemented in attention.py, which replaces the attention forward method of the transformers library. You also need to specify the range of layers to perturb, the positions of the image tokens in the input sequence, and the hyperparameters that control the perturbation.
llama_modify(model, start_layer, end_layer, use_attn, alpha, use_cfg,
img_start_idx, img_end_idx)
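As a rough intuition for what the intervention does (a minimal sketch, not the actual attention.py implementation; the function name and the exact scaling rule are assumptions), the attention logits that the current token pays to image-token positions are boosted by a factor controlled by alpha before the softmax:

```python
import numpy as np

def pai_attention_sketch(scores, img_start_idx, img_end_idx, alpha=0.2):
    """Toy sketch of the inference intervention: amplify the raw
    (pre-softmax) attention scores on image-token positions, then
    re-normalize. `scores` is a 1-D array of attention logits for one
    query position; the arguments mirror those of llama_modify."""
    boosted = scores.copy()
    # Boost attention logits on image tokens by alpha * |score|
    # (assumed form of the perturbation).
    img = boosted[img_start_idx:img_end_idx]
    boosted[img_start_idx:img_end_idx] = img + alpha * np.abs(img)
    # Softmax over the adjusted logits.
    e = np.exp(boosted - boosted.max())
    return e / e.sum()
```

In the real code this rewrite is applied inside the forward pass of the layers between start_layer and end_layer, so later layers attend more strongly to visual evidence.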
2. Logits refine
This component is implemented in CFG.py and is used by passing it through the logits_processor argument of model.generate(). You also need to construct the input without image tokens, along with a few related hyperparameters. We recommend using it with nucleus-sampling decoding.
CFGLogits(gamma, neg_promt, llm_model, start_layer, end_layer)
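The refinement follows the classifier-free-guidance pattern: contrast the logits conditioned on the full (image + text) input against logits from the same prompt with the image tokens removed. The sketch below shows the standard CFG combination rule as an assumption; the exact formula in CFG.py may differ:

```python
import numpy as np

def cfg_refine(logits_with_image, logits_text_only, gamma=1.1):
    """Toy sketch of the logits-refine step. With gamma > 1, tokens
    favored by the image-conditioned distribution are amplified and
    tokens favored only by the text prior are suppressed."""
    # Standard classifier-free-guidance extrapolation:
    # l_uncond + gamma * (l_cond - l_uncond)
    return gamma * logits_with_image + (1.0 - gamma) * logits_text_only
```

With gamma = 1 the image-conditioned logits are returned unchanged; values slightly above 1 (the README uses 1.1) push generation away from the language-prior-only distribution.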
Evaluation
POPE
- Generate the LVLM's responses and save them:
<!-- single round evaluation -->
python pope_eval.py --model MODEL_NAME --data-path /path/to/COCO --pope-type random --use-attn --alpha 0.2 --use-cfg --gamma 1.1 --start-layer 2 --end-layer 32
<!-- multi round evaluation -->
python pope_chat_eval.py --model MODEL_NAME --data-path /path/to/COCO --pope-type random --use-attn --alpha 0.2 --use-cfg --gamma 1.1 --start-layer 2 --end-layer 32
- Calculate POPE using the answer file:
python pope_ans.py --ans_file /path/to/answer.json
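For reference, POPE is a binary yes/no probing benchmark, so the scoring reduces to standard classification metrics over the saved answers. A minimal sketch of what pope_ans.py computes (function and field names here are illustrative, not the script's actual output format):

```python
def pope_metrics(labels, preds):
    """Toy sketch of POPE scoring: treat "yes" as the positive class
    and report accuracy, precision, recall, F1, and the ratio of
    "yes" answers produced by the model."""
    tp = sum(l == "yes" and p == "yes" for l, p in zip(labels, preds))
    tn = sum(l == "no" and p == "no" for l, p in zip(labels, preds))
    fp = sum(l == "no" and p == "yes" for l, p in zip(labels, preds))
    fn = sum(l == "yes" and p == "no" for l, p in zip(labels, preds))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {
        "accuracy": (tp + tn) / len(labels),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "yes_ratio": (tp + fp) / len(labels),  # how often the model says "yes"
    }
```

A high yes_ratio relative to the ground truth is itself a hallucination signal, since object-hallucinating LVLMs tend to over-answer "yes".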
CHAIR
- Generate the LVLM's responses and save them in a jsonl file:
python chair_eval.py --model MODEL_NAME --data-path /path/to/COCO --use-attn --alpha 0.2 --use-cfg --gamma 1.1 --start-layer 2 --end-layer 32
- Calculate CHAIR using the generated jsonl file:
python chair.py --cap_file /path/to/jsonl
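CHAIR measures object hallucination in free-form captions: CHAIR_i is the fraction of mentioned object instances not present in the image, and CHAIR_s is the fraction of captions containing at least one such object. A minimal sketch of the computation (a simplification of chair.py, which also handles object extraction and synonym mapping; inputs here are assumed to be already-extracted object words):

```python
def chair_scores(captions, gt_objects):
    """Toy sketch of the CHAIR metrics. `captions` maps image id to the
    list of object words mentioned in its caption; `gt_objects` maps
    image id to the set of objects actually present in the image."""
    mentioned = hallucinated = caps_with_h = 0
    for img_id, objects in captions.items():
        gt = gt_objects[img_id]
        halluc = [o for o in objects if o not in gt]  # objects not in the image
        mentioned += len(objects)
        hallucinated += len(halluc)
        caps_with_h += bool(halluc)
    return {
        "CHAIRi": hallucinated / mentioned if mentioned else 0.0,
        "CHAIRs": caps_with_h / len(captions) if captions else 0.0,
    }
```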
Acknowledgement
This paper is motivated by prompt-to-prompt. Our method implementation is based on Prompt Highlighter, and the evaluation code is based on OPERA. Thanks for their impressive work!
Citation
If you find this work useful for your research, please cite our paper:
@article{liu2024paying,
title={Paying more attention to image: A training-free method for alleviating hallucination in lvlms},
author={Liu, Shi and Zheng, Kecheng and Chen, Wei},
journal={arXiv preprint arXiv:2407.21771},
year={2024}
}