Question Generation for Retrieval Evaluation
This notebook is a step-by-step tutorial on generating a question dataset with LLMs for retrieval evaluation in retrieval-augmented generation (RAG). It guides you through obtaining a document dataset, generating diverse and relevant questions through prompt engineering on LLMs, and analyzing the resulting question dataset. The question dataset can then be used to evaluate the retriever model, the component of RAG that retrieves and ranks relevant document chunks based on the user's question.
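To make the workflow concrete, below is a minimal sketch of the question-generation step. The OpenAI client, the model name, and the prompt wording are illustrative assumptions, not the notebook's exact code; any chat-capable LLM API could be substituted.

```python
# A minimal sketch of question generation from document chunks.
# Assumptions (not from the notebook): the OpenAI chat API, the
# "gpt-4o-mini" model name, and the prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_questions(chunk: str, n_questions: int = 3) -> list[str]:
    """Ask an LLM for questions answerable from a single document chunk."""
    prompt = (
        f"Write {n_questions} diverse, standalone questions that can be "
        f"answered using only the passage below. "
        f"Return one question per line.\n\nPassage:\n{chunk}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # some randomness encourages question diversity
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]
```

Pairing each generated question with its source chunk yields the (question, relevant-document) labels needed to score how well a retriever ranks that chunk for that question.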
Question Generation for RAG Notebook
If you would like a copy of this notebook to execute in your environment, download it here:
Download the notebook
To follow along with the sections of the notebook guide, click below:
View the Notebook