# LLMs for Engineers
CHEG 667-013 — Chemical Engineering with Computers
Department of Chemical and Biomolecular Engineering, University of Delaware
A hands-on workshop on large language models and machine learning for engineers. Learn how to train a GPT from scratch, run local models, and build retrieval-augmented generation systems, then tie it all back to the underlying machine-learning methods by implementing a simple neural network.
## Sections
| # | Topic | Description |
|---|---|---|
| 01 | nanoGPT | Train a small transformer on Shakespeare. Explore model parameters, temperature, and text generation. |
| 02 | Local models with Ollama | Run pre-trained LLMs locally. Summarize documents, query arXiv, generate code, build custom models. |
| 03 | Retrieval-Augmented Generation | Build a RAG system: chunk documents, embed them, and query with an LLM grounded in your own data. |
| 04 | Advanced retrieval | Combine hybrid BM25 + vector search with cross-encoder re-ranking. Compare summarization against raw retrieval. |
| 05 | Building a neural network | Implement a one-hidden-layer network from scratch in numpy, then in PyTorch. Fit C_p(T) data for N₂. |
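As a taste of what section 05 builds up to, here is a minimal numpy sketch of a one-hidden-layer network trained by hand-written backpropagation. This is not the workshop's code: the data below are synthetic, heat-capacity-shaped values, not the N₂ C_p(T) table used in the exercises.

```python
# Minimal one-hidden-layer network fit with plain gradient descent.
# Synthetic C_p-like data (an illustrative assumption, not the workshop dataset).
import numpy as np

rng = np.random.default_rng(0)

T = np.linspace(300.0, 1500.0, 50).reshape(-1, 1)
Cp = 29.0 + 3e-3 * T + 50.0 * np.exp(-((T - 900.0) / 300.0) ** 2)

# Normalize inputs and outputs so gradient descent behaves well.
x = (T - T.mean()) / T.std()
y = (Cp - Cp.mean()) / Cp.std()

# One hidden layer of tanh units.
n_hidden = 10
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(10_000):
    h = np.tanh(x @ W1 + b1)   # hidden activations, shape (50, n_hidden)
    y_hat = h @ W2 + b2        # network output, shape (50, 1)
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backpropagation by hand (mean-squared-error gradients).
    g_out = 2 * err / len(x)
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ g_h
    gb1 = g_h.sum(axis=0)

    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2

print(f"final MSE (normalized units): {loss:.4f}")
```

Section 05 then repeats the same idea in PyTorch, where autograd replaces the hand-derived gradients.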
## Prerequisites
- A terminal (macOS/Linux, or WSL on Windows)
- Python 3.10+
- Basic comfort with the command line
- Ollama (sections 02–04)
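If you want to sanity-check your setup before the first session, a short Python snippet (a convenience sketch, not part of the workshop materials) can confirm the interpreter version and whether the `ollama` CLI is on your PATH:

```python
# Quick environment self-check: Python version and ollama availability.
import shutil
import sys

ok_python = sys.version_info >= (3, 10)
print(f"Python {sys.version.split()[0]}: {'OK' if ok_python else 'needs 3.10+'}")

# shutil.which returns the executable's path, or None if it is not installed.
ollama_path = shutil.which("ollama")
print(f"ollama: {ollama_path or 'not found (needed for sections 02-04)'}")
```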
## Getting started
Clone this repository and work through each section in order:
```shell
git clone https://lem.che.udel.edu/git/furst/llm-workshop.git
cd llm-workshop
```
Each section has its own `README.md` with a full walkthrough, exercises, and any code or data needed.
## Python environment
Install uv (a fast Python package manager), then:
```shell
uv sync
```
This creates a `.venv/` virtual environment and installs all dependencies from the lock file. To run scripts:
```shell
uv run python 05-neural-networks/nn_torch.py
```
Or activate the environment directly:
```shell
source .venv/bin/activate
python 05-neural-networks/nn_torch.py
```
## License
MIT
## Author
Eric M. Furst, University of Delaware