Companion files for the Build-a-RAG tutorial: https://furst.group/rag
# RAG Demo
Retrieval-Augmented Generation using LlamaIndex with local models.

This demo builds a semantic search system over a collection of plain-text documents, using a HuggingFace embedding model for retrieval and Ollama for generation.
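The retrieval half of the pipeline boils down to embedding the query and ranking stored document chunks by cosine similarity. A minimal, library-free sketch of that ranking step (toy 3-dimensional vectors stand in for real bge-large-en-v1.5 embeddings, which are 1024-dimensional; the chunk texts and vectors below are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|): the score a vector store ranks chunks by
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, store, k=2):
    # store: list of (chunk_text, embedding) pairs, as a vector store holds them
    ranked = sorted(store, key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy entries; a real system gets these vectors from the embedding model
store = [
    ("doc about cats", [0.9, 0.1, 0.0]),
    ("doc about dogs", [0.1, 0.9, 0.0]),
    ("doc about fish", [0.0, 0.1, 0.9]),
]
print(top_k([0.8, 0.2, 0.1], store, k=1))  # → ['doc about cats']
```

The retrieved chunks are then pasted into the generation model's prompt as context, which is the "augmented" part of RAG.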
## Tutorial

See the full walkthrough at: https://lem.che.udel.edu/wiki/index.php?n=Main.RAG
## Quick Start

```bash
# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Pull the generation model
ollama pull command-r7b

# Place your .txt documents in ./data, then build the vector store
python build.py

# Run interactive queries
python query.py
```
## Models
- Embedding: BAAI/bge-large-en-v1.5 (downloaded automatically on first run)
- Generation: command-r7b via Ollama
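In LlamaIndex, both models are typically wired up once through the global `Settings` object before the index is built. A hedged sketch of what that configuration might look like in `build.py` (module paths assume the post-0.10 llama-index package split; the `./data` directory and model names come from this README, while the `storage` persist directory is an assumed choice):

```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Embedding model: fetched from HuggingFace on first run, cached after that
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-en-v1.5")
# Generation model: served by a local Ollama instance (pull it first)
Settings.llm = Ollama(model="command-r7b", request_timeout=120.0)

# Read .txt documents from ./data, embed them, and persist the vector store
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="storage")  # assumed output location
```

`query.py` would then reload the persisted index and answer questions against it; see the repository files for the actual implementation.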