# RAG Demo
Retrieval-Augmented Generation (RAG) using LlamaIndex with local models.
This demo builds a semantic search system over a collection of text documents
using a HuggingFace embedding model and Ollama for generation.
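At its core, semantic search ranks documents by how close their embedding vectors are to the query's embedding, typically by cosine similarity. A rough illustration of the retrieval step, using hand-made 3-dimensional vectors in place of real bge-large-en-v1.5 embeddings (which have 1024 dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings"; a real embedding model maps each text chunk
# to a high-dimensional vector instead.
docs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.8, 0.3],
}
query = [0.8, 0.2, 0.1]

# Rank documents by similarity to the query -- the retrieval half of RAG.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # doc_a points in nearly the same direction as the query
```

The vector store built by build.py does this at scale: it precomputes and persists the document embeddings so each query only needs one new embedding plus a similarity search.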
## Tutorial
See the full walkthrough at:
https://lem.che.udel.edu/wiki/index.php?n=Main.RAG
## Quick Start
```bash
# Create and activate a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Pull the generation model
ollama pull command-r7b

# Place your .txt documents in ./data, then build the vector store
python build.py

# Run interactive queries
python query.py
```
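The generation half of the pipeline follows the usual RAG shape: retrieved chunks are inserted into the prompt sent to the LLM, which answers grounded in that context. A minimal sketch of that prompt assembly (the template text below is illustrative, not taken from this repo's query.py):

```python
def build_rag_prompt(question, retrieved_chunks):
    # Join retrieved chunks into a numbered context block, then instruct
    # the model to answer only from that context.
    context = "\n\n".join(
        f"[{i + 1}] {chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example: two chunks returned by the retriever for a user question.
chunks = [
    "Polymers are long-chain molecules built from repeating monomer units.",
    "The glass transition temperature marks the onset of chain mobility.",
]
prompt = build_rag_prompt("What are polymers?", chunks)
print(prompt)
```

LlamaIndex's query engine performs this assembly internally; the sketch just shows why retrieval quality directly bounds answer quality.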
## Models
- **Embedding:** BAAI/bge-large-en-v1.5 (downloaded automatically on first run)
- **Generation:** command-r7b via Ollama