Basic usage completed!
Great job following the course up to here! To recap, in this chapter you:
- Learned the basic building blocks of a Transformer model.
- Learned what makes up a tokenization pipeline.
- Saw how to use a Transformer model in practice.
- Learned how to leverage a tokenizer to convert text into tensors the model can understand.
- Set up a tokenizer and a model together to go from text to predictions (recapped in the sketch after this list).
- Learned the limitations of input IDs, and why attention masks are needed.
- Played around with versatile and configurable tokenizer methods.
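As a quick refresher, here is a minimal sketch tying those pieces together. It assumes the sentiment-analysis checkpoint used in this chapter (`distilbert-base-uncased-finetuned-sst-2-english`); any sequence classification checkpoint from the Hub would work the same way:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

sequences = [
    "I've been waiting for a HuggingFace course my whole life.",
    "So have I!",
]

# padding=True inserts pad tokens and builds the attention mask that
# tells the model to ignore them; truncation=True caps overlong inputs.
tokens = tokenizer(sequences, padding=True, truncation=True, return_tensors="pt")

# ** unpacks input_ids and attention_mask as keyword arguments
output = model(**tokens)
predictions = torch.nn.functional.softmax(output.logits, dim=-1)
print(predictions)
```

The same `tokenizer(...)` call accepts other options too (for example `padding="max_length"` or `return_tensors="np"`), which is what makes it flexible enough to cover most preprocessing needs.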
From now on, you should be able to freely navigate the 🤗 Transformers docs: the vocabulary will sound familiar, and you’ve already seen the methods that you’ll use the majority of the time.