Created November 13, 2025 11:26
# Reranking with Large Language Models (LLMs)

## Short Description of Research Question

How can hypotheses or retrieved passages be reranked efficiently with Large Language Models, optimizing for quality metrics beyond raw model probability while keeping computational cost manageable?
## Summary of Work

1. **EEL: Efficiently Encoding Lattices for Reranking (2023)**
   - Studies reranking hypotheses for conditional text generation by encoding lattices of candidate outputs efficiently with Transformers.
   - Introduces EEL, which computes contextualized token representations for an entire lattice in a single Transformer pass.
   - Pairs EEL with token-factored rerankers, whose hypothesis scores decompose over tokens, so high-scoring hypotheses can be extracted cheaply.
   - Reports significant speedups with better or comparable performance versus encoding each hypothesis individually.
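The key property exploited above is that a token-factored reranker scores a hypothesis as a sum of per-token scores, so each lattice token needs scoring only once and the best hypothesis is a max-score path through the lattice DAG. A minimal sketch of that extraction step (the per-token scores stand in for what EEL would produce in its single Transformer pass; the lattice and scores here are invented toy data):

```python
# Token-factored extraction of the best hypothesis from a lattice.
# nodes: {id: (token, score)} where the score is assumed to come from
# one shared encoding pass over the lattice; edges: {id: [next ids]} (DAG).

def best_hypothesis(nodes, edges, start, end):
    # Topological order via DFS from the start node.
    order, seen = [], set()
    def dfs(u):
        if u in seen:
            return
        seen.add(u)
        for v in edges.get(u, []):
            dfs(v)
        order.append(u)
    dfs(start)
    order.reverse()

    # Dynamic program: best additive score (and path) reaching each node.
    best = {start: (nodes[start][1], [start])}
    for u in order:
        if u not in best:
            continue
        s, path = best[u]
        for v in edges.get(u, []):
            cand = s + nodes[v][1]
            if v not in best or cand > best[v][0]:
                best[v] = (cand, path + [v])
    score, path = best[end]
    return score, [nodes[n][0] for n in path]

# Toy lattice encoding two hypotheses: "<s> a cat </s>" and "<s> the cat </s>".
nodes = {0: ("<s>", 0.0), 1: ("a", 0.25), 2: ("the", 0.75),
         3: ("cat", 0.5), 4: ("</s>", 0.0)}
edges = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(best_hypothesis(nodes, edges, 0, 4))
# -> (1.25, ['<s>', 'the', 'cat', '</s>'])
```

Because shared tokens are scored once regardless of how many hypotheses pass through them, this avoids the redundant work of encoding each hypothesis separately.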
2. **RIDER: Reader-Guided Passage Reranking for Open-Domain QA (2021)**
   - Proposes RIDER, a simple passage reranking method that uses the reader's top predictions, with no retraining of the retriever or reader.
   - Achieves large gains in retrieval accuracy and exact-match scores.
   - Outperforms state-of-the-art supervised rerankers without any training.
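The core training-free idea can be sketched as follows: passages containing one of the reader's top predicted answer strings are promoted ahead of the rest, with the original retrieval order otherwise preserved. (RIDER's exact matching and tie-breaking details may differ; this is an illustrative sketch, and the passages and answers are invented examples.)

```python
# Reader-guided reranking sketch: promote passages that contain any of
# the reader's top-k predicted answers. No model training is involved.

def rider_rerank(passages, top_answers):
    answers = [a.lower() for a in top_answers]
    def contains_answer(p):
        return any(a in p.lower() for a in answers)
    # Python's sort is stable, so retrieval order is kept within each group;
    # passages containing an answer (key False < True) come first.
    return sorted(passages, key=lambda p: not contains_answer(p))

passages = [
    "The Eiffel Tower is in Paris.",
    "Gustave Eiffel designed the tower, completed in 1889.",
    "The Statue of Liberty stands in New York.",
]
print(rider_rerank(passages, ["1889"]))
# The passage mentioning "1889" moves to the front.
```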
3. **LLM4Rerank: LLM-based Auto-Reranking Framework (2024)**
   - Introduces a reranking framework for recommendations that integrates multiple criteria, such as accuracy, diversity, and fairness.
   - Models the criteria as nodes in a fully connected graph and traverses it with Chain-of-Thought prompting of an LLM.
   - Demonstrates superior performance over previous reranking models on multiple public datasets.
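The control flow implied by the graph traversal can be sketched as a loop in which the LLM repeatedly chooses the next aspect node (or stops) and refines the ranking at each step. Everything below is hypothetical scaffolding: `call_llm` is a stand-in for any chat-completion API, and the node names and prompts only illustrate the shape of the framework, not its actual implementation.

```python
# Sketch of an LLM4Rerank-style traversal over aspect nodes.
# The aspect nodes form a fully connected graph; the LLM picks which
# node to visit next until it selects the terminal "stop" node.

ASPECT_NODES = {"accuracy", "diversity", "fairness", "stop"}

def call_llm(prompt):
    # Placeholder: a real implementation would query an LLM here.
    # This stub always answers "stop" so the loop terminates immediately.
    return "stop"

def llm4rerank_loop(items, goal, max_steps=5):
    ranking, history = list(items), []
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\n"
                  f"Current ranking: {ranking}\n"
                  f"Nodes visited so far: {history}\n"
                  f"Choose the next node from {sorted(ASPECT_NODES)}.")
        node = call_llm(prompt)
        if node == "stop" or node not in ASPECT_NODES:
            break
        history.append(node)
        # In the real framework the LLM would also return a refined
        # ranking for this aspect; the stub leaves it unchanged.
    return ranking

print(llm4rerank_loop(["item_a", "item_b"], "balance accuracy and diversity"))
# -> ['item_a', 'item_b']
```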
## Papers

- [EEL: Efficiently Encoding Lattices for Reranking](https://arxiv.org/pdf/2306.00947v1) (2023) by Singhal et al.
- [RIDER: Reader-Guided Passage Reranking for Open-Domain Question Answering](https://arxiv.org/pdf/2101.00294v3) (2021) by Mao et al.
- [LLM4Rerank: LLM-based Auto-Reranking Framework for Recommendations](https://arxiv.org/pdf/2406.12433v4) (2024) by Gao et al.

This overview highlights recent advances in LLM-based reranking, addressing efficiency, accuracy, and multi-criteria optimization.