BERTology
There is a growing field of study concerned with investigating the inner workings of large-scale transformers like BERT (which some call “BERTology”). Some good examples of this field are:
- BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950
- Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650
- What Does BERT Look At? An Analysis of BERT’s Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341
In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to help people access the inner representations, mainly adapted from the great work of Paul Michel (https://arxiv.org/abs/1905.10650); a short usage sketch follows the list:
- accessing all the hidden states of BERT/GPT/GPT-2,
- accessing all the attention weights for each head of BERT/GPT/GPT-2,
- retrieving the output values and gradients of the heads in order to compute head importance scores and prune heads, as explained in https://arxiv.org/abs/1905.10650.
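As a minimal sketch of the first two features (not taken from the example script, and using an arbitrary checkpoint and illustrative head indices), you can request hidden states and attention weights at load time and prune heads with prune_heads:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained(
    "bert-base-uncased",
    output_hidden_states=True,  # return the hidden states of every layer
    output_attentions=True,     # return the attention weights of every head
)

inputs = tokenizer("BERTology studies the inner workings of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Tuple of (num_layers + 1) tensors, each of shape (batch, seq_len, hidden_size)
print(len(outputs.hidden_states), outputs.hidden_states[0].shape)
# Tuple of num_layers tensors, each of shape (batch, num_heads, seq_len, seq_len)
print(len(outputs.attentions), outputs.attentions[0].shape)

# Prune heads 0 and 2 of layer 0 and head 5 of layer 3 (indices chosen for illustration)
model.prune_heads({0: [0, 2], 3: [5]})
```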
To help you understand and use these features, we have added a specific example script: bertology.py, which extracts information from and prunes a model pre-trained on GLUE.
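The following is a simplified sketch of the head-importance idea from https://arxiv.org/abs/1905.10650, not the example script itself: a differentiable head mask is passed through the model, and the gradient of the loss with respect to that mask serves as an importance score per head (the checkpoint, toy batch, and label below are placeholders).

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

n_layers = model.config.num_hidden_layers
n_heads = model.config.num_attention_heads
head_mask = torch.ones(n_layers, n_heads, requires_grad=True)

# A single toy batch; the real script iterates over a GLUE evaluation set.
inputs = tokenizer("BERTology studies the inner workings of BERT.", return_tensors="pt")
labels = torch.tensor([1])

outputs = model(**inputs, labels=labels, head_mask=head_mask)
outputs.loss.backward()

# Heads whose mask gradient is large matter more for the loss; the example script
# accumulates and normalizes such scores over a dataset before pruning heads.
head_importance = head_mask.grad.abs()
print(head_importance)
```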