Sync changes from che-computing

- Fix checkpoint directory name in 01-nanogpt
- Add generative text references (OUTPUT, Love Letters)
- Add PYTORCH.md troubleshooting (MPS, CUDA, WSL)
- Minor spacing fix in 02-ollama

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Eric Furst 2026-04-10 09:50:42 -04:00
commit 794cdaea0d
3 changed files with 119 additions and 4 deletions


@@ -188,7 +188,7 @@ At any time during a chat, you can reset the model with `/clear`, and you can le
We can see that the `gemma3` model has nearly one billion parameters and a context length of 32,768! The *embedding length* is 1152. This is equivalent to `n_embd` in `nanoGPT`: the size of the embedding vector space.
-Above, we also see that the quantization is only four bits, but it is a little more complicated than representing numbers with just sixteen values. The `K` and `M` refer to optimizations — first is the "K-block" quantization method, which refers to a groupwise quantization scheme where weights are grouped into blocks (e.g., 32 or 64 values), and each group gets its own scale and offset for better accuracy. `M` refers to a variant of `Q4_K` that applies an alternate encoding or layout for better memory access patterns or inference performance on certain hardware. `Q4_K` is a common choice for quantization when running 7B–70B models on laptop or desktop computers. (That's $10^6$ – $10^7$ times more parameters than our first `nanoGPT` model!)
+Above, we also see that the quantization is only four bits, but it is a little more complicated than representing numbers with just sixteen values. The `K` and `M` refer to optimizations — first is the "K-block" quantization method, which refers to a groupwise quantization scheme where weights are grouped into blocks (e.g., 32 or 64 values), and each group gets its own scale and offset for better accuracy. `M` refers to a variant of `Q4_K` that applies an alternate encoding or layout for better memory access patterns or inference performance on certain hardware. `Q4_K` is a common choice for quantization when running 7B–70B models on laptop or desktop computers. (That's $10^6$–$10^7$ times more parameters than our first `nanoGPT` model!)
With the `/set verbose` command, you can monitor the model performance:
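The groupwise scheme described in the diff above — blocks of weights, each with its own scale and offset — can be sketched in a few lines. This is an illustrative toy, not the actual `Q4_K` encoding used by GGUF (which packs bits and nests super-blocks); the function names and block size are assumptions for the example.

```python
import numpy as np

def quantize_4bit_blocks(weights, block_size=32):
    """Toy groupwise 4-bit quantization: each block of `block_size`
    weights gets its own scale and offset (zero point), so a 4-bit
    integer 0..15 spans only that block's value range."""
    blocks = weights.reshape(-1, block_size)
    lo = blocks.min(axis=1, keepdims=True)          # per-block offset
    hi = blocks.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0                        # 4 bits -> 16 levels
    scale[scale == 0] = 1.0                         # guard flat blocks
    q = np.round((blocks - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, offset):
    """Reconstruct approximate weights from codes, scales, offsets."""
    return q * scale + offset

w = np.random.randn(1024).astype(np.float32)
q, scale, offset = quantize_4bit_blocks(w)
w_hat = dequantize(q, scale, offset).reshape(-1)
max_err = np.abs(w - w_hat).max()                  # bounded by scale/2 per block
```

Because each block carries its own scale and offset, the worst-case rounding error is half a quantization step *of that block*, which is why grouped 4-bit formats like `Q4_K` stay accurate despite storing only sixteen distinct levels.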