From 564e75b82408a65a0a409d61eb36d5a9fa91e650 Mon Sep 17 00:00:00 2001
From: Eric
Date: Tue, 7 Apr 2026 07:46:42 -0400
Subject: [PATCH] Minor edits

---
 02-ollama/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/02-ollama/README.md b/02-ollama/README.md
index 2f553d4..4066f75 100644
--- a/02-ollama/README.md
+++ b/02-ollama/README.md
@@ -188,7 +188,7 @@ At any time during a chat, you can reset the model with `/clear`, and you can le
 
 We can see that the `gemma3` model has nearly one billion parameters and a context length of 32,768! The *embedding length* is 1152. This is the equivalent to `n_embd` in `nanoGPT`. It is the size of the embedding vector space.
 
-Above, we also see that the quantization is only four bits, but it is a little more complicated than representing numbers with just sixteen values. The `K` and `M` refer to optimizations — first is the "K-block" quantization method, which refers to a groupwise quantization scheme where weights are grouped into blocks (e.g., 32 or 64 values), and each group gets its own scale and offset for better accuracy. `M` refers to a variant of `Q4_K` that applies an alternate encoding or layout for better memory access patterns or inference performance on certain hardware. `Q4_K` is a common choice for quantization when running 7B–70B models on laptop or desktop computers. (That's $10^6$–$10^7$ times more parameters than our first `nanoGPT` model!)
+Above, we also see that the quantization is only four bits, but it is a little more complicated than representing numbers with just sixteen values. The `K` and `M` refer to optimizations — first is the "K-block" quantization method, which refers to a groupwise quantization scheme where weights are grouped into blocks (e.g., 32 or 64 values), and each group gets its own scale and offset for better accuracy. `M` refers to a variant of `Q4_K` that applies an alternate encoding or layout for better memory access patterns or inference performance on certain hardware. `Q4_K` is a common choice for quantization when running 7B–70B models on laptop or desktop computers. (That's $10^6$ – $10^7$ times more parameters than our first `nanoGPT` model!)
 
 With the `/set verbose` command, you can monitor the model performance:
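The groupwise scheme described in the patched paragraph can be sketched in a few lines of Python. This is only an illustration of the *idea* behind `Q4_K` (fixed-size blocks, each with its own scale and offset, so sixteen 4-bit levels cover just that block's range); the function names, block layout, and rounding choices here are assumptions for clarity, not the actual `llama.cpp` `Q4_K` bit format.

```python
# Illustrative sketch of groupwise ("K-block") 4-bit quantization.
# Not the real llama.cpp Q4_K layout: block size, tuple layout, and
# rounding are simplified assumptions for demonstration only.

def quantize_blocks(weights, block_size=32):
    """Quantize floats to 4-bit codes, one (scale, offset) per block."""
    blocks = []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        lo, hi = min(block), max(block)
        scale = (hi - lo) / 15 or 1.0            # 4 bits -> 16 levels (0..15)
        codes = [round((w - lo) / scale) for w in block]
        blocks.append((scale, lo, codes))         # offset = block minimum
    return blocks

def dequantize_blocks(blocks):
    """Reconstruct approximate floats from (scale, offset, codes) blocks."""
    out = []
    for scale, offset, codes in blocks:
        out.extend(code * scale + offset for code in codes)
    return out
```

Because each block carries its own scale and offset, a block of small weights is not forced to share a quantization grid with a block of large ones, which is where the accuracy gain over naive whole-tensor 4-bit quantization comes from.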