Initial commit: RAG demo with build and query scripts
commit 39f1f73e2a
6 changed files with 214 additions and 0 deletions
README.md (new file, 36 lines)
@@ -0,0 +1,36 @@
# RAG Demo
Retrieval-Augmented Generation (RAG) using LlamaIndex with local models.
This demo builds a semantic search system over a collection of text documents
using a HuggingFace embedding model and Ollama for generation.
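
The core retrieval step can be illustrated without the real models: embed each document as a vector, embed the query the same way, and return the closest documents by cosine similarity. The sketch below is not code from this repo — it substitutes toy term-frequency vectors for the actual BAAI/bge-large-en-v1.5 embeddings, purely to show the ranking idea:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a term-frequency vector.
    return Counter(t.strip(".,?!") for t in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query, keep the best k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Ollama serves local language models over HTTP.",
    "Embedding models map text to dense vectors.",
    "The cafeteria menu changes every Tuesday.",
]
# The "Embedding models..." document ranks first for this query.
print(top_k("which models embed text as vectors?", docs, k=1))
```

A real RAG pipeline then passes the top-ranked chunks to the generation model as context; the toy `embed` here is the only piece the demo replaces with a learned model.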
## Tutorial
See the full walkthrough at:
https://lem.che.udel.edu/wiki/index.php?n=Main.RAG
## Quick Start
```bash
# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Pull the generation model
ollama pull command-r7b
# Place your .txt documents in ./data, then build the vector store
python build.py
# Run interactive queries
python query.py
```
## Models
- **Embedding:** BAAI/bge-large-en-v1.5 (downloaded automatically on first run)
- **Generation:** command-r7b via Ollama
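
These two models are typically wired together through LlamaIndex's global `Settings` object. The following is a configuration sketch under assumptions, not code taken from `build.py`: the `request_timeout` value and the `./storage` persist directory are guesses, and it requires `llama-index` with its HuggingFace and Ollama integrations installed plus a running Ollama server.

```python
# Configuration sketch (assumed, not copied from build.py).
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Register the embedding and generation models globally.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-en-v1.5")
Settings.llm = Ollama(model="command-r7b", request_timeout=120.0)  # assumed timeout

# Index every document under ./data and persist the vector store to disk.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="./storage")  # assumed path
```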