Sync changes from che-computing
- Fix checkpoint directory name in 01-nanogpt
- Add generative text references (OUTPUT, Love Letters)
- Add PYTORCH.md troubleshooting (MPS, CUDA, WSL)
- Minor spacing fix in 02-ollama

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
parent
564e75b824
commit
794cdaea0d
3 changed files with 119 additions and 4 deletions
@@ -195,7 +195,7 @@ Every 250th iteration, the training script does a validation step. If the valida
```
step 250: train loss 2.4293, val loss 2.4447
-saving checkpoint to out-shakespeare-char-cpu
+saving checkpoint to out-shakespeare-char
...
```
@@ -205,7 +205,7 @@ When we train nanoGPT, it starts with randomly assigned weights and biases. This
> **Exercise 3:** As the model trains, it reports the training and validation losses. In a Jupyter notebook, plot these values with the number of iterations. *Hint:* To capture the output when you perform a training run, you could run the process in the background while redirecting its output to a file: `python train.py config/train_shakespeare_char.py [options] > output.txt &`. (Remember, the ampersand at the end runs the process in the background.) You can still monitor the run by typing `tail -f output.txt`. This command will "follow" the end of the file as it is written.
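One way to approach the exercise is to parse the redirected `output.txt` with a regular expression before plotting. This is a minimal sketch, not part of nanoGPT itself; the log line matches the format shown earlier, and the plotting calls are left as comments for a notebook:

```python
import re

def parse_losses(text):
    """Extract (step, train_loss, val_loss) tuples from nanoGPT-style log lines."""
    pattern = re.compile(r"step (\d+): train loss ([\d.]+), val loss ([\d.]+)")
    return [(int(s), float(t), float(v)) for s, t, v in pattern.findall(text)]

# In a notebook you would read the captured file: text = open("output.txt").read()
log = "step 250: train loss 2.4293, val loss 2.4447\n"
steps, train, val = zip(*parse_losses(log))

# Then plot against the iteration count with matplotlib:
# import matplotlib.pyplot as plt
# plt.plot(steps, train, label="train loss")
# plt.plot(steps, val, label="val loss")
# plt.xlabel("iteration"); plt.ylabel("loss"); plt.legend(); plt.show()
```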
-After the training finishes, we should have the model in `/out-shakespeare-char-cpu`:
+After the training finishes, we should have the model in `/out-shakespeare-char`:

```
$ ls -l
@@ -221,7 +221,7 @@ In this case, the model is about 9.3 MB. That's not great! Our *training* text w
The script `sample.py` runs inference on the model we just trained. We're using the CPU here, too.
```bash
-python sample.py --out_dir=out-shakespeare-char-cpu --device=cpu
+python sample.py --out_dir=out-shakespeare-char --device=cpu
```
After a short time, the model will begin generating text.
@@ -376,3 +376,7 @@ These books are informative and accessible resources for understanding the under
Including the sections:
- Attention and LLMs - https://d2l.ai/chapter_attention-mechanisms-and-transformers/index.html
- Softmax - https://d2l.ai/chapter_linear-classification/softmax-regression.html
+If generating text with computers tickles your fancy, I recommend checking out the book *OUTPUT: An Anthology of Computer-Generated Text* by Lillian-Yvonne Bertram and Nick Montfort. It is a timely book covering a wide range of texts, "from research systems, natural-language generation products and services, and artistic and literary programs." (Bertram, Lillian-Yvonne, and Nick Montfort, editors. Output: An Anthology of Computer-Generated Text, 1953–2023. The MIT Press, 2024.)
+While it still feels novel to many of us, interest in machine or "generative" text dates almost to the beginning of the modern computer era. Many experiments, spanning contexts from AI research to artistic and literary practice, have been shared over the intervening decades. Christopher Strachey's program, often referred to as *Love Letters*, was written in 1952 for the Manchester Mark I computer. It is considered by many to be the first example of generative computer literature. In 2009, David Link ran Strachey's original code on an emulated Mark I, and Nick Montfort, professor of digital media at MIT, coded a modern recreation of it in 2014. The text output follows the pattern "you are my [adjective] [noun]. my [adjective] [noun] [adverb] [verbs] your [adjective] [noun]," signed by "M.U.C." for the Manchester University Computer. With the vocabulary in the program, there are over 300 billion possible combinations.
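The template above can be sketched in a few lines of Python. The tiny word lists here are illustrative placeholders, not Strachey's actual vocabulary, and the function name is ours:

```python
import random

# Placeholder word lists; Strachey's program drew on much larger vocabularies.
ADJECTIVES = ["darling", "tender", "precious"]
NOUNS = ["heart", "desire", "affection"]
ADVERBS = ["fondly", "eagerly", "sweetly"]
VERBS = ["treasures", "cherishes", "adores"]

def love_letter_line(rng=random):
    """Fill the Love Letters template with randomly chosen words."""
    return ("you are my {a1} {n1}. my {a2} {n2} {adv} {v} your {a3} {n3}."
            .format(a1=rng.choice(ADJECTIVES), n1=rng.choice(NOUNS),
                    a2=rng.choice(ADJECTIVES), n2=rng.choice(NOUNS),
                    adv=rng.choice(ADVERBS), v=rng.choice(VERBS),
                    a3=rng.choice(ADJECTIVES), n3=rng.choice(NOUNS)))

print(love_letter_line(), "-- M.U.C.")
```

With three words per slot and eight slots, even this toy version can produce thousands of distinct sentences; the scale of the original vocabulary is what pushes the count past 300 billion.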