
RAGAS

RAG evaluation framework — measure faithfulness, relevance, and recall of RAG pipelines

LLM Frameworks · Free

RAGAS (Retrieval Augmented Generation Assessment) is an open-source framework for evaluating RAG pipelines. It measures key quality dimensions: faithfulness (is the answer grounded in the retrieved context?), answer relevancy (does the answer address the question?), context recall (did retrieval find all the needed information?), and context precision (is the retrieved context free of irrelevant material?). It integrates with LangChain, LlamaIndex, and any RAG system.
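To make the metric dimensions above concrete, here is a minimal sketch of the sample layout Ragas-style evaluation expects: for each question, the generated answer, the retrieved context chunks, and a ground-truth reference. The actual scoring call (`ragas.evaluate`) requires an LLM backend and API credentials, so it is only shown in comments here; the question, answer, and context strings are illustrative assumptions, not from the source.

```python
# One evaluation sample per question; each metric reads a subset of these
# columns (e.g. faithfulness compares "answer" against "contexts",
# context recall compares "contexts" against "ground_truth").
samples = {
    "question": ["What is the capital of France?"],
    "answer": ["Paris is the capital of France."],
    "contexts": [["France's capital city is Paris."]],  # retrieved chunks per question
    "ground_truth": ["Paris"],
}

# Every column must have exactly one entry per question.
n = len(samples["question"])
assert all(len(col) == n for col in samples.values())

# With ragas and an LLM key configured, scoring would look roughly like
# (not executed here; requires network access and an API key):
#   from datasets import Dataset
#   from ragas import evaluate
#   from ragas.metrics import (faithfulness, answer_relevancy,
#                              context_precision, context_recall)
#   result = evaluate(Dataset.from_dict(samples),
#                     metrics=[faithfulness, answer_relevancy,
#                              context_precision, context_recall])
```

Each metric returns a score between 0 and 1, so a result can be read as a per-dimension quality report for the pipeline.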

Key specs
7,000 GitHub stars (as of 2026-03-27)
