Build-a-RAG tutorial companion files https://furst.group/rag

RAG Demo

Retrieval-Augmented Generation (RAG) using LlamaIndex with local models.

This demo builds a semantic search system over a collection of text documents using a HuggingFace embedding model and Ollama for generation.
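The retrieval half of RAG comes down to nearest-neighbor search over embedding vectors: documents and queries are embedded into the same vector space, and retrieval ranks documents by similarity to the query. A toy illustration with hand-made 3-dimensional vectors (the real demo uses 1024-dimensional bge-large embeddings managed by LlamaIndex; these numbers are made up for the example):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy "embeddings": docs 0 and 2 point roughly the same direction.
docs = [[0.9, 0.1, 0.0],
        [0.1, 0.8, 0.2],
        [0.85, 0.2, 0.1]]
query = [1.0, 0.0, 0.0]

print(top_k(query, docs))  # -> [0, 2]
```

The retrieved chunks are then passed to the generation model as context, which is the "augmented" part of RAG.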

Tutorial

See the full walkthrough at: https://lem.che.udel.edu/wiki/index.php?n=Main.RAG

Quick Start

# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Pull the generation model
ollama pull command-r7b

# Place your .txt documents in ./data, then build the vector store
python build.py

# Run interactive queries
python query.py
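query.py itself isn't reproduced here, but the shape of an interactive query loop is simple enough to sketch. The helpers below are hypothetical stand-ins, not the tutorial's actual code: `make_answer_fn` composes a retrieval step with a generation step, and `repl` reads questions until an empty line or EOF.

```python
def make_answer_fn(retrieve, generate):
    """Compose retrieval and generation into one callable.

    `retrieve` maps a question to a list of context strings;
    `generate` maps (question, contexts) to an answer string.
    """
    def answer(question):
        contexts = retrieve(question)
        return generate(question, contexts)
    return answer

def repl(answer, prompt_fn=input, print_fn=print):
    """Minimal interactive loop: an empty line or EOF exits."""
    while True:
        try:
            question = prompt_fn("query> ").strip()
        except EOFError:
            break
        if not question:
            break
        print_fn(answer(question))
```

In the actual demo, the retrieval and generation steps would be backed by the LlamaIndex query engine built in build.py rather than plain functions.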

Models

  • Embedding: BAAI/bge-large-en-v1.5 (downloaded automatically on first run)
  • Generation: command-r7b via Ollama
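Both models are typically wired into LlamaIndex through its global Settings object. A configuration sketch, assuming the split LlamaIndex integration packages (llama-index-embeddings-huggingface and llama-index-llms-ollama) are installed; exact module paths can vary by LlamaIndex version:

```python
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# The embedding model is fetched from the HuggingFace Hub on first use.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-en-v1.5")

# Generation goes through the local Ollama server
# (run `ollama pull command-r7b` beforehand).
Settings.llm = Ollama(model="command-r7b", request_timeout=120.0)
```

With Settings configured this way, index building and querying pick up both models without passing them explicitly at each call site.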