---
library_name: transformers
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
model-index:
- name: codet5-small-code-corrector
  results: []
---

# codet5-small-code-corrector

This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0127

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0866        | 0.1182 | 1000 | 0.0282          |
| 0.0455        | 0.2364 | 2000 | 0.0202          |
| 0.0338        | 0.3545 | 3000 | 0.0169          |
| 0.0287        | 0.4727 | 4000 | 0.0153          |
| 0.0241        | 0.5909 | 5000 | 0.0142          |
| 0.023         | 0.7091 | 6000 | 0.0138          |
| 0.0202        | 0.8272 | 7000 | 0.0132          |
| 0.0206        | 0.9454 | 8000 | 0.0127          |

### Framework versions

- Transformers 4.57.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1
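
## Example usage

Since the card does not yet document how to call the model, below is a minimal inference sketch that loads the checkpoint as a standard sequence-to-sequence model via `transformers`. The repository id, the input format (a raw buggy snippet with no task prefix), and the generation settings are assumptions, not documented behavior.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repository id; substitute the actual path of this checkpoint.
model_id = "codet5-small-code-corrector"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: the buggy code is passed directly, without a task prefix.
buggy_code = "def add(a, b)\n    return a + b"

inputs = tokenizer(buggy_code, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=512, num_beams=4)

# The decoded sequence is expected to be the corrected snippet.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```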
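
## Training configuration sketch

The hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as follows. This is a reconstruction for illustration only, not the original training script; dataset loading and preprocessing are omitted because the training data is not documented.

```python
# Sketch of training arguments matching the hyperparameters in this card.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base = "Salesforce/codet5-small"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

args = Seq2SeqTrainingArguments(
    output_dir="codet5-small-code-corrector",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch_fused",   # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                   # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=1000,             # matches the evaluation interval in the results table
    logging_steps=1000,
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=args,
#     train_dataset=train_dataset,   # training data not specified in this card
#     eval_dataset=eval_dataset,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
# )
# trainer.train()
```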