
January 14, 2024

๐Ÿชจ๏ธ Making Retrieval-Augmented Language Models Robust to Irrelevant Context

RetRobust Overview

By training RALMs on only 1K examples, we can make them robust to irrelevant context and improve their QA performance [Paper].


🤗 Data and Models

Our models and data are available at the RetRobust HuggingFace Collection.

๐Ÿง—๐Ÿฝ Experiments framework

Llama-2 inference servers were set up using lm-sys/FastChat. Experiments were run using the framework from reasoning-on-cots. To run these experiments, see here.
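As a rough sketch of what the server setup looks like, FastChat's standard three-process launch is shown below. The model path and ports are assumptions for illustration, not taken from this repo; substitute the checkpoint and addresses used in your experiments.

```shell
# Start the FastChat controller (coordinates model workers).
python3 -m fastchat.serve.controller &

# Start a model worker serving a Llama-2 checkpoint.
# The model path is an assumption; use the checkpoint from your setup.
python3 -m fastchat.serve.model_worker \
    --model-path meta-llama/Llama-2-13b-hf &

# Expose an OpenAI-compatible API endpoint for the experiments
# framework to query.
python3 -m fastchat.serve.openai_api_server --host localhost --port 8000
```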

๐Ÿƒโ€ Training

See here.

โš”๏ธ๏ธ NLI filtering

See here.

โœ Citation

```bibtex
@misc{yoran2023making,
      title={Making Retrieval-Augmented Language Models Robust to Irrelevant Context},
      author={Ori Yoran and Tomer Wolfson and Ori Ram and Jonathan Berant},
      year={2023},
      eprint={2310.01558},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```