repo string | github_id int64 | github_node_id string | number int64 | html_url string | api_url string | title string | body string | state string | state_reason string | locked bool | comments_count int64 | labels list | assignees list | created_at string | updated_at string | closed_at string | author_association string | milestone_title string | snapshot_id string | extracted_at string | author_login string | author_id int64 | author_node_id string | author_type string | author_site_admin bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 836,684,366 | MDU6SXNzdWU4MzY2ODQzNjY= | 10,816 | https://github.com/huggingface/transformers/issues/10816 | https://api.github.com/repos/huggingface/transformers/issues/10816 | [trainer] figuring out why eval with `--fp16_full_eval` is 25% slower | Recently HF trainer was extended to support full fp16 eval via `--fp16_full_eval`. I'd have expected it to be either equal or faster than eval with fp32 model, but surprisingly I have noticed a 25% slowdown when using it.<br>This may or may not impact deepspeed as well, which also runs eval in fp16, but we can't compar... | closed | completed | false | 11 | ["Good First Issue", "Good Second Issue"] | [] | 2021-03-20T04:30:07Z | 2026-03-02T13:59:47Z | 2026-03-02T13:59:47Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | stas00 | 10,676,103 | MDQ6VXNlcjEwNjc2MTAz | User | false |
huggingface/transformers | 860,870,722 | MDU6SXNzdWU4NjA4NzA3MjI= | 11,307 | https://github.com/huggingface/transformers/issues/11307 | https://api.github.com/repos/huggingface/transformers/issues/11307 | Getting time offsets of beginning and end of each word in Wav2Vec2 | # 🚀 Feature request<br>Hello I was thinking it would be of great help if I can get the time offsets of start and end of each word .<br>## Motivation<br>I was going through Google Speech to text documentation and found this [feature](https://cloud.google.com/speech-to-text/docs/async-time-offsets) and thought will be re... | closed | completed | false | 27 | ["Good First Issue", "Good Second Issue"] | [] | 2021-04-19T03:57:57Z | 2026-02-26T14:14:43Z | 2026-02-26T14:14:43Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | theainerd | 15,798,640 | MDQ6VXNlcjE1Nzk4NjQw | User | false |
huggingface/transformers | 919,408,065 | MDU6SXNzdWU5MTk0MDgwNjU= | 12,126 | https://github.com/huggingface/transformers/issues/12126 | https://api.github.com/repos/huggingface/transformers/issues/12126 | [Performance] Tracking open Issues and PRs (pytorch transformers) | Let's use this Issue to track performance issues and enhancement requests, so it's easier to prioritize the work.<br>**This is for pytorch `transformers`**<br>Also I will label it as a `Good Difficult Issue` in case someone is ready for a challenging but rewarding experience of figuring things out. If you do want to ta... | closed | completed | false | 3 | ["Good First Issue", "Performance", "Good Difficult Issue"] | ["stas00", "patil-suraj"] | 2021-06-12T03:45:57Z | 2026-03-02T14:18:44Z | 2026-03-02T14:18:44Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | stas00 | 10,676,103 | MDQ6VXNlcjEwNjc2MTAz | User | false |
huggingface/transformers | 921,433,978 | MDU6SXNzdWU5MjE0MzM5Nzg= | 12,177 | https://github.com/huggingface/transformers/issues/12177 | https://api.github.com/repos/huggingface/transformers/issues/12177 | Exception during hyperparameter search with Ray and transformers library starting from version 4.5.0 | I currently face the problem that with recent versions of the transformers library (issue starting at version 4.5.0) the hyperparameter search with ray tune runs into a serialization issue described below.<br>## Environment info<br>- `transformers` version: 4.5.0<br>- Platform: Linux-4.19.0-16-amd64-x86_64-with-glibc2.1... | closed | completed | false | 4 | [] | [] | 2021-06-15T14:02:20Z | 2026-02-26T12:32:52Z | 2021-06-15T18:53:20Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | sven-h | 8,777,506 | MDQ6VXNlcjg3Nzc1MDY= | User | false |
huggingface/transformers | 978,451,864 | MDU6SXNzdWU5Nzg0NTE4NjQ= | 13,244 | https://github.com/huggingface/transformers/issues/13244 | https://api.github.com/repos/huggingface/transformers/issues/13244 | Tapas tokenization Different from Tensorflow Code | ## Environment info<br><!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --><br>- `transformers` version: 4.9.1<br>### Who can help<br>@LysandreJik @sgugger @NielsRogge<br>## Information<br>Model I am using (Bert, XLNet .... | closed | completed | false | 12 | ["Good First Issue"] | [] | 2021-08-24T20:19:40Z | 2026-01-26T12:57:44Z | 2026-01-26T12:57:26Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Doreenruirui | 8,978,500 | MDQ6VXNlcjg5Nzg1MDA= | User | false |
huggingface/transformers | 1,050,733,132 | I_kwDOCUB6oc4-oOpM | 14,368 | https://github.com/huggingface/transformers/issues/14368 | https://api.github.com/repos/huggingface/transformers/issues/14368 | Export LayoutLMv2 to onnx | I am trying to export LayoutLMv2 model to onnx but there is no support for that available in transformers library.<br>I have tried to follow the method available for layoutLM but that is not working.<br>Here is config class for LayoutLMv2<br>```<br>class LayoutLMv2OnnxConfig(OnnxConfig):<br>    def __init__(<br>        self,<br>... | closed | completed | false | 28 | ["Good First Issue"] | [] | 2021-11-11T08:54:39Z | 2026-03-20T08:32:38Z | 2026-03-20T08:32:16Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | fadi212 | 37,739,280 | MDQ6VXNlcjM3NzM5Mjgw | User | false |
huggingface/transformers | 1,115,366,508 | I_kwDOCUB6oc5CeyRs | 15,354 | https://github.com/huggingface/transformers/issues/15354 | https://api.github.com/repos/huggingface/transformers/issues/15354 | GeneratorExp aren't supported by torch.jit.script when I try to export a previously trained model 'google/vit-base-patch16-224-in21k'. | ## Environment info<br>- `transformers` version: 4.15.0<br>- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic<br>- Python version: 3.7.12<br>- PyTorch version (GPU?): 1.10.0+cu111 (True)<br>- Tensorflow version (GPU?): 2.7.0 (True)<br>- Flax version (CPU?/GPU?/TPU?): not installed (NA)<br>- Jax version: not installed<br>- JaxL... | closed | completed | false | 5 | ["Good First Issue"] | [] | 2022-01-26T18:47:55Z | 2026-03-09T13:09:33Z | 2026-03-09T13:09:33Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ssriram1978 | 12,517,415 | MDQ6VXNlcjEyNTE3NDE1 | User | false |
huggingface/transformers | 1,162,459,652 | I_kwDOCUB6oc5FSboE | 15,980 | https://github.com/huggingface/transformers/issues/15980 | https://api.github.com/repos/huggingface/transformers/issues/15980 | Bad error message when downloading private model without being logged in. | Let's say an organization creates a private model and wants to share it with other team members which are less savy of `huggingface_hub` and `transformers`.<br>So e.g. I create: https://huggingface.co/NewT5/dummy_model and want to share it with others.<br>Now if I run:<br>```python<br>from transformers import BertMod... | closed | completed | false | 8 | [] | ["julien-c", "LysandreJik", "SBrandeis", "sgugger"] | 2022-03-08T10:06:05Z | 2026-02-22T17:04:53Z | 2022-06-21T15:07:36Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | patrickvonplaten | 23,423,619 | MDQ6VXNlcjIzNDIzNjE5 | User | false |
huggingface/transformers | 1,219,113,876 | I_kwDOCUB6oc5IqjOU | 16,998 | https://github.com/huggingface/transformers/issues/16998 | https://api.github.com/repos/huggingface/transformers/issues/16998 | Question on model_max_length (DeBERTa-V3) | ### System Info<br>```shell<br>- `transformers` version: 4.18.0<br>- Platform: macOS-10.16-x86_64-i386-64bit<br>- Python version: 3.8.3<br>- Huggingface_hub version: 0.5.1<br>- PyTorch version (GPU?): 1.5.1 (False)<br>- Tensorflow version (GPU?): 2.4.0 (False)<br>- Flax version (CPU?/GPU?/TPU?): not installed (NA)<br>- Jax version: not i... | closed | completed | false | 19 | ["bug"] | [] | 2022-04-28T18:29:57Z | 2026-01-27T18:00:48Z | 2022-08-15T15:02:40Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ioana-blue | 17,202,292 | MDQ6VXNlcjE3MjAyMjky | User | false |
huggingface/transformers | 1,223,112,039 | I_kwDOCUB6oc5I5zVn | 17,051 | https://github.com/huggingface/transformers/issues/17051 | https://api.github.com/repos/huggingface/transformers/issues/17051 | Collection of Tokenizer issues | ### System Info<br>```shell<br>Transformers + Tokenizers<br>```<br>### Who can help?<br>This Issue is a summary of multiple problems that we are currently encountering with Tokenizers. To solve them we'll need a more profound discussion of:<br>- To what extend fast and slow tokenizers should be aligned<br>- Whether all slow tokenizer... | closed | completed | false | 8 | ["Discussion", "WIP", "bug"] | [] | 2022-05-02T16:53:59Z | 2026-03-18T13:10:46Z | 2026-03-18T13:10:46Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | patrickvonplaten | 23,423,619 | MDQ6VXNlcjIzNDIzNjE5 | User | false |
huggingface/transformers | 1,264,955,622 | I_kwDOCUB6oc5LZbDm | 17,611 | https://github.com/huggingface/transformers/issues/17611 | https://api.github.com/repos/huggingface/transformers/issues/17611 | SSLError: HTTPSConnectionPool(host='huggingface.co', port=443) | I'm trying in python:<br>from sentence_transformers import SentenceTransformer<br>sbert_model = SentenceTransformer('all-MiniLM-L6-v2')<br>and I get this error:<br>SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/all-MiniLM-L6-v2 (Caused by S... | closed | completed | false | 121 | [] | [] | 2022-06-08T15:46:00Z | 2026-03-01T21:51:12Z | 2022-08-15T15:02:26Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | alexsomoza | 8,261,170 | MDQ6VXNlcjgyNjExNzA= | User | false |
huggingface/transformers | 1,364,946,168 | I_kwDOCUB6oc5RW2z4 | 18,926 | https://github.com/huggingface/transformers/issues/18926 | https://api.github.com/repos/huggingface/transformers/issues/18926 | Follow ups to DocumentQuestionAnswering Pipeline | ### Feature request<br>PR https://github.com/huggingface/transformers/pull/18414 has a number of TODOs left over which we'd like to track as follow up tasks.<br>## Pipeline<br>- [x] Add support for documents which have more than the tokenizer span (e.g. 512) words<br>- [ ] Add support for multi-page documents (e.g. for Don... | closed | completed | false | 22 | ["Good First Issue"] | [] | 2022-09-07T16:55:54Z | 2026-03-03T18:25:07Z | 2026-03-02T08:52:21Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ankrgyl | 565,363 | MDQ6VXNlcjU2NTM2Mw== | User | false |
huggingface/transformers | 1,532,447,654 | I_kwDOCUB6oc5bV0um | 21,110 | https://github.com/huggingface/transformers/issues/21110 | https://api.github.com/repos/huggingface/transformers/issues/21110 | Add support for BLIP and GIT in image-to-text and VQA pipelines | ### Feature request<br>BLIP and GIT are 2 recent additions in the library, providing state-of-the-art performance for tasks like image captioning and visual question answering (VQA). GIT is even capable of video captioning and video QA.<br>Hence it makes sense to support them in our image-to-text and VQA pipelines.<br>### ... | closed | completed | false | 27 | ["Good First Issue"] | [] | 2023-01-13T15:08:12Z | 2026-03-02T08:56:33Z | 2026-03-02T08:56:33Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | NielsRogge | 48,327,001 | MDQ6VXNlcjQ4MzI3MDAx | User | false |
huggingface/transformers | 1,638,876,459 | I_kwDOCUB6oc5hr0Ur | 22,355 | https://github.com/huggingface/transformers/issues/22355 | https://api.github.com/repos/huggingface/transformers/issues/22355 | No module named transformers.onnx | ### System Info<br>Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.<br>- `transformers` version: 4.5.1<br>- Platform: Linux-5.19.0-35-generic-x86_64-with-debian-bookworm-sid<br>- Python version: 3.6.13<br>- PyTorch version (GPU?): 1.5.0 (False)<br>- Tensorflow version (GPU?): not installed (NA... | closed | completed | false | 5 | [] | [] | 2023-03-24T07:33:05Z | 2026-02-22T19:10:13Z | 2023-03-27T07:30:27Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | co-develop-drv | 50,092,251 | MDQ6VXNlcjUwMDkyMjUx | User | false |
huggingface/transformers | 1,688,042,727 | I_kwDOCUB6oc5knXzn | 23,042 | https://github.com/huggingface/transformers/issues/23042 | https://api.github.com/repos/huggingface/transformers/issues/23042 | Using `inputs_embeds` for generation gives an incorrect warning | I'm trying to use the `inputs_embeds` parameter to run the LLaMA model. This is part of my code.<br>```python<br># INPUT = ...embedding of a sequence, ensuring that there are no pad tokens<br>output_sequences = LLaMA.generate(<br>    inputs_embeds=INPUT.to(device)<br>    pad_token_id=tokenizer.pad_token_id,<br>... | closed | completed | false | 17 | [] | [] | 2023-04-28T07:24:25Z | 2026-03-16T14:08:05Z | 2023-05-12T16:06:17Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | zrthxn | 35,369,637 | MDQ6VXNlcjM1MzY5NjM3 | User | false |
huggingface/transformers | 1,778,270,143 | I_kwDOCUB6oc5p_j-_ | 24,540 | https://github.com/huggingface/transformers/issues/24540 | https://api.github.com/repos/huggingface/transformers/issues/24540 | Issue Loading 4-bit and 8-bit language models: ValueError: `.to` is not supported for `4-bit` or `8-bit` models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`. | ### System Info<br>### System Info<br>I'm running into an issue where I'm not able to load a 4-bit or 8-bit quantized version of Falcon or LLaMa models. This was working a couple of weeks ago. This is running on Colab. I'm wondering if anyone knows of a fix, or why this is no longer working when it was 2-3 weeks ago arou... | closed | completed | false | 45 | [] | ["younesbelkada"] | 2023-06-28T06:07:36Z | 2026-02-01T03:55:15Z | 2024-10-11T16:21:33Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | DJT777 | 47,899,472 | MDQ6VXNlcjQ3ODk5NDcy | User | false |
huggingface/transformers | 1,787,616,386 | I_kwDOCUB6oc5qjNyC | 24,643 | https://github.com/huggingface/transformers/issues/24643 | https://api.github.com/repos/huggingface/transformers/issues/24643 | "RuntimeError: 'weight' must be 2-D" training with DeepSpeed | ### System Info<br>- `transformers` version: 4.30.2<br>- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35<br>- Python version: 3.10.11<br>- Huggingface_hub version: 0.15.1<br>- Safetensors version: 0.3.1<br>- PyTorch version (GPU?): 2.0.1+cu117 (True)<br>- Tensorflow version (GPU?): not installed (NA)<br>- Flax version (CPU?/... | closed | completed | false | 21 | ["solved"] | [] | 2023-07-04T10:08:50Z | 2026-03-25T04:08:17Z | 2023-10-20T08:07:02Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ZizoAdam | 124,168,668 | U_kgDOB2ap3A | User | false |
huggingface/transformers | 1,812,635,816 | I_kwDOCUB6oc5sCqCo | 24,934 | https://github.com/huggingface/transformers/issues/24934 | https://api.github.com/repos/huggingface/transformers/issues/24934 | Change package name from "transformers" to something less generic | ### Feature request<br>I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have n... | closed | completed | false | 9 | [] | [] | 2023-07-19T19:53:24Z | 2026-02-17T14:15:44Z | 2023-08-30T08:02:47Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jack-jjm | 2,124,157 | MDQ6VXNlcjIxMjQxNTc= | User | false |
huggingface/transformers | 1,832,446,081 | I_kwDOCUB6oc5tOOiB | 25,251 | https://github.com/huggingface/transformers/issues/25251 | https://api.github.com/repos/huggingface/transformers/issues/25251 | Defining top_k within pipeline changes output from list to nested list | ### System Info<br>```<br>- `transformers` version: 4.30.2<br>- Platform: Linux-5.14.0-162.22.2.el9_1.x86_64-x86_64-with-glibc2.34<br>- Python version: 3.9.17<br>- Huggingface_hub version: 0.16.4<br>- Safetensors version: 0.3.1<br>- PyTorch version (GPU?): 1.11.0+cu102 (True)<br>- Tensorflow version (GPU?): not installed (NA)<br>- Fla... | closed | completed | false | 7 | [] | ["ydshieh"] | 2023-08-02T05:12:29Z | 2026-02-03T15:54:21Z | 2023-08-04T07:46:53Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Harjas123 | 107,530,287 | U_kgDOBmjILw | User | false |
huggingface/transformers | 1,909,152,925 | I_kwDOCUB6oc5xy1yd | 26,350 | https://github.com/huggingface/transformers/issues/26350 | https://api.github.com/repos/huggingface/transformers/issues/26350 | Community contribution: Adding Flash Attention 2 support for more architectures | ### Feature request<br>Flash Attention 2 is a library that provides attention operation kernels for faster and more memory efficient inference and training: https://github.com/Dao-AILab/flash-attention. Here is a list of the files ready for translation. Let us know in this issue if you'd like to... | open | null | false | 6 | ["WIP"] | [] | 2023-10-26T18:06:15Z | 2026-03-15T11:34:51Z | null | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | mertyyanik | 32,648,818 | MDQ6VXNlcjMyNjQ4ODE4 | User | false |
huggingface/transformers | 2,045,776,155 | I_kwDOCUB6oc558BEb | 28,103 | https://github.com/huggingface/transformers/issues/28103 | https://api.github.com/repos/huggingface/transformers/issues/28103 | OWL-VIT Vision Foundation Model deployment in the edge cases - Need SDPA support for OWL-ViT Model optimization for Edge Deployment | ### Feature request<br>Hi Team,<br>I am working with OWL-ViT Size model which has around 611 MB size ( https://huggingface.co/google/owlvit-base-patch16).<br>I want to optimize this model and like to deploy in the edge device for object detection.<br>Come to know from the group torch.scaled_dot_product_attention can be used... | closed | completed | false | 4 | ["Good First Issue"] | [] | 2023-12-18T05:34:53Z | 2026-02-20T14:01:20Z | 2026-02-20T14:01:20Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | solomonmanuelraj | 25,194,971 | MDQ6VXNlcjI1MTk0OTcx | User | false |
huggingface/transformers | 2,060,276,201 | I_kwDOCUB6oc56zVHp | 28,282 | https://github.com/huggingface/transformers/issues/28282 | https://api.github.com/repos/huggingface/transformers/issues/28282 | ImportError: AutoModel requires the PyTorch library but it was not found in your environment | ### System Info<br>I'm trying to load a AutoModel pre-trained model. However, I receiving the following error :<br>```<br>ImportError:<br>AutoModel requires the PyTorch library but it was not found in your environment.<br>However, we were able to find a TensorFlow installation. TensorFlow classes begin<br>with "TF", but are ot... | closed | completed | false | 9 | [] | [] | 2023-12-29T17:24:50Z | 2026-02-24T08:21:12Z | 2024-02-11T08:03:47Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Marwen94 | 36,446,303 | MDQ6VXNlcjM2NDQ2MzAz | User | false |
huggingface/transformers | 2,143,620,996 | I_kwDOCUB6oc5_xQ-E | 29,127 | https://github.com/huggingface/transformers/issues/29127 | https://api.github.com/repos/huggingface/transformers/issues/29127 | err_handle(layoutlmv3): Error message doesn't give much clarity when boxes not containing enough information | ### System Info<br>- `transformers` version: 4.37.2<br>- Platform: Windows-10-10.0.22000-SP0<br>- Python version: 3.11.5<br>- Huggingface_hub version: 0.20.3<br>- Safetensors version: 0.4.2<br>- Accelerate version: not installed<br>- Accelerate config: not found<br>- PyTorch version (GPU?): 2.2.0+cpu (False)<br>- Tensorflow version (G... | closed | completed | false | 7 | ["Good Second Issue"] | [] | 2024-02-20T06:18:05Z | 2026-02-27T14:42:58Z | 2026-02-27T14:42:50Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Sushaanth-Suresh-Kumar | 123,300,765 | U_kgDOB1lrnQ | User | false |
huggingface/transformers | 2,144,914,235 | I_kwDOCUB6oc5_2Ms7 | 29,149 | https://github.com/huggingface/transformers/issues/29149 | https://api.github.com/repos/huggingface/transformers/issues/29149 | Generate: support passing position_ids | Thank you @tengomucho, for uncovering this bug.<br>### The problem<br>In a nutshell, passing the correct `position_ids` to `generate` should result in exactly the same results as not passing them. In other words, the following test should pass on all models, if added to `GenerationTesterMixin`. We can see that it is fa... | closed | completed | false | 1 | ["WIP", "bug", "Generation"] | ["gante"] | 2024-02-20T17:34:00Z | 2026-02-12T09:57:21Z | 2026-02-12T09:57:21Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | gante | 12,240,844 | MDQ6VXNlcjEyMjQwODQ0 | User | false |
huggingface/transformers | 2,178,032,113 | I_kwDOCUB6oc6B0iHx | 29,576 | https://github.com/huggingface/transformers/issues/29576 | https://api.github.com/repos/huggingface/transformers/issues/29576 | error: casting `&T` to `&mut T` is undefined behavior, even if the reference is unused, consider instead using an `UnsafeCell` --> tokenizers-lib/src/models/bpe/trainer.rs:517:47 | ### System Info<br>```<br>2024-03-11 01:14:30.782590: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered<br>2024-03-11 01:14:30.782649: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:... | closed | completed | false | 25 | [] | [] | 2024-03-11T01:23:29Z | 2026-02-12T09:45:57Z | 2024-05-22T12:35:18Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | dbl001 | 3,105,499 | MDQ6VXNlcjMxMDU0OTk= | User | false |
huggingface/transformers | 2,199,099,680 | I_kwDOCUB6oc6DE5kg | 29,769 | https://github.com/huggingface/transformers/issues/29769 | https://api.github.com/repos/huggingface/transformers/issues/29769 | Support batch_size > 1 in assisted decoding | ### Feature request<br>Support batch_size > 1 in assisted decoding<br>### Motivation<br>With this support, we can provide more capability for assisted decoding, including beam search.<br>### Your contribution<br>I would like to submit a PR to enable this; I mainly need to cut and pad the past_key_values because each sequence may... | closed | completed | false | 3 | ["Feature request", "Generation"] | [] | 2024-03-21T04:37:45Z | 2026-02-26T07:31:11Z | 2024-04-01T08:22:48Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jiqing-feng | 107,918,818 | U_kgDOBm614g | User | false |
huggingface/transformers | 2,211,192,891 | I_kwDOCUB6oc6DzCA7 | 29,911 | https://github.com/huggingface/transformers/issues/29911 | https://api.github.com/repos/huggingface/transformers/issues/29911 | Support DBRX Model | ### Feature request<br>Support the DBRX model (only correct pronunciation: DB-Rex) [blog post](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm).<br>Code is from the open source [databricks/dbrx](https://github.com/databricks/dbrx) repository.<br>### Motivation<br>> Across a range of standard benchmark... | open | null | false | 9 | ["New model"] | [] | 2024-03-27T16:03:01Z | 2026-02-27T15:40:50Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | milocress | 19,612,401 | MDQ6VXNlcjE5NjEyNDAx | User | false |
huggingface/transformers | 570,865,148 | MDU6SXNzdWU1NzA4NjUxNDg= | 3,021 | https://github.com/huggingface/transformers/issues/3021 | https://api.github.com/repos/huggingface/transformers/issues/3021 | Can GPT2LMHeadModel do batch inference with variable sentence lengths? | Given GPT2 tokenizer do not have an internal pad_token_id, how do I pad sentences and do batch inference using GPT2LMHeadModel?<br>Specifically my code as:<br>```<br>prompt_text = [<br>    'in this paper we',<br>    'we are trying to',<br>    'The purpose of this workshop is to check whether we can', ]<br>tokens = [tokenizer.conve... | closed | completed | false | 58 | [] | ["patrickvonplaten"] | 2020-02-25T22:05:02Z | 2026-03-19T07:15:26Z | 2020-02-26T13:11:23Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | schizism | 3,358,940 | MDQ6VXNlcjMzNTg5NDA= | User | false |
huggingface/transformers | 2,246,614,757 | I_kwDOCUB6oc6F6J7l | 30,277 | https://github.com/huggingface/transformers/issues/30277 | https://api.github.com/repos/huggingface/transformers/issues/30277 | Jamba-v01 Model + Deepspeed Zero3 lead to "RuntimeError: Detected mismatch between collectives on ranks." | ### System Info<br>- `transformers` version: 4.39.0<br>- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35<br>- Python version: 3.10.12<br>- Huggingface_hub version: 0.19.4<br>- Safetensors version: 0.4.1<br>- Accelerate version: 0.29.2<br>- Accelerate config: not found<br>- Deepspeed version: 0.14.1<br>- PyTorch version (GPU?)... | closed | completed | false | 9 | ["DeepSpeed", "bug"] | [] | 2024-04-16T18:04:07Z | 2026-02-21T17:10:24Z | 2024-11-23T08:13:49Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | KaiQiangSong | 9,112,038 | MDQ6VXNlcjkxMTIwMzg= | User | false |
huggingface/transformers | 2,258,877,823 | I_kwDOCUB6oc6Go71_ | 30,430 | https://github.com/huggingface/transformers/issues/30430 | https://api.github.com/repos/huggingface/transformers/issues/30430 | Remove `mps` workaround for `isin()` | ### Feature request<br>Remove `mps` workaround for `isin()`<br>### Motivation<br>#30376 introduced a workaround for `isin()` on `mps` devices, because PyTorch does not support that op yet: https://github.com/pytorch/pytorch/issues/77764#issuecomment-2067838075.<br>Going forward, it'd be desirable to use the much more readabl... | closed | completed | false | 10 | ["Should Fix", "WIP"] | [] | 2024-04-23T13:23:10Z | 2026-02-24T12:56:26Z | 2026-02-24T11:11:32Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | pcuenca | 1,177,582 | MDQ6VXNlcjExNzc1ODI= | User | false |
huggingface/transformers | 2,267,456,217 | I_kwDOCUB6oc6HJqLZ | 30,525 | https://github.com/huggingface/transformers/issues/30525 | https://api.github.com/repos/huggingface/transformers/issues/30525 | Support align_corners=True in image_transforms module | ### Feature request<br>For a new model I'm working on #30136 I'd need to resize images in the image processor using `align_corners=True`, as the original code uses `torch.nn.functional(..., align_corners=True)` for resizing images during pre-processing.<br>### Motivation<br>Would be great to have this option available so tha... | closed | completed | false | 4 | ["Feature request", "Vision"] | [] | 2024-04-28T09:36:32Z | 2026-03-18T09:49:12Z | 2026-03-18T09:49:12Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | NielsRogge | 48,327,001 | MDQ6VXNlcjQ4MzI3MDAx | User | false |
huggingface/transformers | 2,272,057,528 | I_kwDOCUB6oc6HbNi4 | 30,579 | https://github.com/huggingface/transformers/issues/30579 | https://api.github.com/repos/huggingface/transformers/issues/30579 | Community contribution: enable dynamic resolution input for more vision models. | ### Feature request<br>Some of our models interpolate its positional embeddings, enabling pretrained checkpoints to be used on different input resolutions. For example, [here in ViT](https://github.com/huggingface/transformers/blob/75bbfd5b2237b7e35a9265731ecf63022579e7e2/src/transformers/models/vit/modeling_vit.py#L79... | closed | completed | false | 40 | ["Good First Issue", "Vision"] | [] | 2024-04-30T17:00:10Z | 2026-02-10T12:47:55Z | 2026-02-10T12:47:55Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | amyeroberts | 22,614,925 | MDQ6VXNlcjIyNjE0OTI1 | User | false |
huggingface/transformers | 2,287,419,716 | I_kwDOCUB6oc6IV0FE | 30,725 | https://github.com/huggingface/transformers/issues/30725 | https://api.github.com/repos/huggingface/transformers/issues/30725 | Support for Multiple Datasets and Domain-Specific Loss Calculation in Trainer | ### Feature request<br>I am currently working on a project that involves sequence level distillation across multiple domains, requiring the handling of separate datasets for each domain within a single training loop. Specifically, the challenge involves integrating data from four distinct domains, computing loss individu... | open | null | false | 19 | ["trainer", "Feature request"] | [] | 2024-05-09T10:45:25Z | 2026-02-18T19:39:36Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ghost | 10,137 | MDQ6VXNlcjEwMTM3 | User | false |
huggingface/transformers | 2,313,264,506 | I_kwDOCUB6oc6J4Z16 | 30,990 | https://github.com/huggingface/transformers/issues/30990 | https://api.github.com/repos/huggingface/transformers/issues/30990 | Sentence Transformers Gets Stuck loading | ### System Info<br>Ubuntu 20.04<br>Python 3.8.10<br>Updating Nvidia Driver is not possible, have to do with Cuda 11.6 (Torch 1.13.0)<br>torch 1.13.0<br>transformers 4.38.1<br>nvidia-cublas-cu11 11.10.3.66<br>nvidia-cuda-nvrtc-cu11 11.7.99<br>nvidia-cuda-runtime-cu11 11.7.99<br>nvidia-cudnn-cu11 ... | closed | completed | false | 8 | [] | [] | 2024-05-23T15:49:26Z | 2026-03-11T19:02:03Z | 2024-07-28T08:04:54Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Jaswir | 15,957,528 | MDQ6VXNlcjE1OTU3NTI4 | User | false |
huggingface/transformers | 2,359,684,102 | I_kwDOCUB6oc6MpewG | 31,474 | https://github.com/huggingface/transformers/issues/31474 | https://api.github.com/repos/huggingface/transformers/issues/31474 | Quantization support for heads and embeddings | ### Feature request<br>Hi! I’ve been researching LLM quantization recently ([this paper](https://arxiv.org/abs/2405.14852)), and noticed a potentially improtant issue that arises when using LLMs with 1-2 bit quantization.<br>### Problem description :mag:<br>Transformers supports several great ways for quantizing transfor... | closed | completed | false | 15 | ["Feature request", "Quantization"] | [] | 2024-06-18T11:56:34Z | 2026-02-18T14:33:03Z | 2026-02-18T14:21:10Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | galqiwi | 17,232,054 | MDQ6VXNlcjE3MjMyMDU0 | User | false |
huggingface/transformers | 2,363,874,975 | I_kwDOCUB6oc6M5d6f | 31,515 | https://github.com/huggingface/transformers/issues/31515 | https://api.github.com/repos/huggingface/transformers/issues/31515 | from_pretrained loads checkpoints too slowly | ### System Info<br>latest python3.9.8<br>### Who can help?<br>_No response_<br>### Information<br>- [ ] The official example scripts<br>- [ ] My own modified scripts<br>### Tasks<br>- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)<br>- [ ] My own task or dataset (give details below)<br>### Reproduction<br>... | closed | completed | false | 3 | [] | [] | 2024-06-20T08:41:06Z | 2026-03-20T03:44:30Z | 2024-07-29T08:04:21Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | zhaoyuchen1128 | 167,266,669 | U_kgDOCfhJbQ | User | false |
huggingface/transformers | 2,391,073,573 | I_kwDOCUB6oc6OhOMl | 31,795 | https://github.com/huggingface/transformers/issues/31795 | https://api.github.com/repos/huggingface/transformers/issues/31795 | Confusing documentation of input_ids and past_key_values in model.forward | ### System Info<br>Current documentation<br>### Who can help?<br>_No response_<br>### Information<br>- [ ] The official example scripts<br>- [ ] My own modified scripts<br>### Tasks<br>- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)<br>- [ ] My own task or dataset (give details below)<br>... | closed | completed | false | 6 | ["WIP"] | ["zucchini-nlp"] | 2024-07-04T15:08:39Z | 2026-02-19T11:43:05Z | 2026-02-19T11:43:05Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | alex-hh | 5,719,745 | MDQ6VXNlcjU3MTk3NDU= | User | false |
huggingface/transformers | 2,418,835,728 | I_kwDOCUB6oc6QLIEQ | 32,090 | https://github.com/huggingface/transformers/issues/32090 | https://api.github.com/repos/huggingface/transformers/issues/32090 | [Error] with Trainer: TypeError: Unsupported types (<class 'NoneType'>) passed to `_gpu_broadcast_one`. | ### System Info<br>- `transformers` version: 4.42.4<br>- Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35<br>- Python version: 3.10.13<br>- Huggingface_hub version: 0.24.0<br>- Safetensors version: 0.4.2<br>- Accelerate version: 0.32.0<br>- Accelerate config: not found<br>- PyTorch version (GPU?): 2.3.1+cu121 (False)<br>- Ten... | closed | completed | false | 3 | ["trainer", "bug"] | [] | 2024-07-19T13:01:37Z | 2026-03-20T06:22:10Z | 2024-09-22T08:06:59Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | halixness | 20,798,848 | MDQ6VXNlcjIwNzk4ODQ4 | User | false |
huggingface/transformers | 2,480,528,547 | I_kwDOCUB6oc6T2dyj | 32,937 | https://github.com/huggingface/transformers/issues/32937 | https://api.github.com/repos/huggingface/transformers/issues/32937 | Some causal LM models don't get position_ids in their forward pass. | ### Feature request
There are some models whose forward pass doesn't take position_ids. E.g. OPTModel doesn't take position_ids, while GPTJModel does. Most newer models do have position_ids.
### Motivation
There are two main reasons we would like for all LM models to get ... | closed | completed | false | 6 | [
"Good Second Issue",
"Feature request"
] | [] | 2024-08-22T11:18:43Z | 2026-03-18T13:04:19Z | 2026-03-18T13:04:19Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | avishaiElmakies | 36,810,152 | MDQ6VXNlcjM2ODEwMTUy | User | false |
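For the feature request above, a plain-Python sketch of how position_ids are typically derived from the attention mask when they are not passed explicitly. This illustrates the idea only — actual implementations differ per model (many use a cumulative sum over the mask), so treat the details as assumptions:

```python
# Sketch: deriving position_ids from an attention mask, roughly the way
# decoder-only models do internally when position_ids is not provided.
# Padding positions are clamped to 0 so they get a valid (ignored) index.

def build_position_ids(attention_mask):
    """attention_mask: list of lists of 0/1.
    Returns per-token positions counted over non-padding tokens only."""
    position_ids = []
    for row in attention_mask:
        running = 0
        positions = []
        for m in row:
            if m == 1:
                positions.append(running)
                running += 1
            else:
                positions.append(0)  # padding: clamp to 0
        position_ids.append(positions)
    return position_ids

# Left-padded batch: the first sequence has two pad tokens.
mask = [[0, 0, 1, 1, 1],
        [1, 1, 1, 1, 1]]
print(build_position_ids(mask))
# → [[0, 0, 0, 1, 2], [0, 1, 2, 3, 4]]
```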
huggingface/transformers | 2,481,187,393 | I_kwDOCUB6oc6T4-pB | 32,944 | https://github.com/huggingface/transformers/issues/32944 | https://api.github.com/repos/huggingface/transformers/issues/32944 | clarify the label shifting behavior of llama models when `labels` is given. | ### Feature request
I believe `labels` in the training of causal LMs means the value to predict at time `n`, i.e., the next token. In other words, I'd assume that if `labels` is given, it should already be shifted by one in the data loader w.r.t. the `input_ids`.
However, in `LlamaForCausalLM.forward()`, I found the ... | closed | completed | false | 10 | [
"Feature request"
] | [] | 2024-08-22T15:54:02Z | 2026-03-13T13:32:44Z | 2026-03-13T13:32:44Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | keunwoochoi | 16,153,797 | MDQ6VXNlcjE2MTUzNzk3 | User | false |
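The question above is about where the shift happens. In `transformers`, causal LMs shift internally: you pass `labels` identical to `input_ids`, and the loss aligns the logit at position t with the token at position t+1. A toy illustration of that alignment:

```python
# Sketch of the label shifting a causal LM applies internally:
# labels are NOT pre-shifted in the data loader; the model drops the
# logit after the last token and the label before the first prediction.

def shift_for_loss(logit_positions, labels):
    """Return (kept logit positions, shifted labels) as aligned
    before computing cross-entropy."""
    shift_logits = logit_positions[:-1]  # no target for the final logit
    shift_labels = labels[1:]            # nothing predicts the first token
    return shift_logits, shift_labels

input_ids = [101, 7, 8, 9]
labels = list(input_ids)  # identical to input_ids
kept, targets = shift_for_loss(input_ids, labels)
print(kept, targets)
# → [101, 7, 8] [7, 8, 9]
```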
huggingface/transformers | 2,504,129,656 | I_kwDOCUB6oc6VQfx4 | 33,290 | https://github.com/huggingface/transformers/issues/33290 | https://api.github.com/repos/huggingface/transformers/issues/33290 | oom when using adafactor optimizer in deepspeed | ### System Info
```python
- `transformers` version: 4.44.2
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.2
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cu118 (T... | closed | completed | false | 10 | [
"Usage",
"Good First Issue",
"bug"
] | [] | 2024-09-04T01:56:08Z | 2026-03-02T15:37:38Z | 2026-03-02T15:37:38Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | zhangvia | 38,352,569 | MDQ6VXNlcjM4MzUyNTY5 | User | false |
huggingface/transformers | 2,510,650,907 | I_kwDOCUB6oc6VpX4b | 33,357 | https://github.com/huggingface/transformers/issues/33357 | https://api.github.com/repos/huggingface/transformers/issues/33357 | bus error on version 4.43.0 with pretrained community CLIP model - MacOS | ### System Info
- `transformers` version: 4.43.0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.10.9
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1 (False)
- Tensorflow version (GPU?):... | closed | completed | false | 21 | [
"PyTorch",
"bug"
] | [] | 2024-09-06T15:08:19Z | 2026-02-13T15:28:22Z | 2025-03-17T08:11:29Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | pezafar | 48,569,151 | MDQ6VXNlcjQ4NTY5MTUx | User | false |
huggingface/transformers | 2,522,954,925 | I_kwDOCUB6oc6WYTyt | 33,453 | https://github.com/huggingface/transformers/issues/33453 | https://api.github.com/repos/huggingface/transformers/issues/33453 | Regression in tokenizer loading | ### System Info
There was a regression in commit b4727a1216bb21df2795e973063ed07202235d7e that prevents loading of some tokenizers.
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in th... | closed | completed | false | 19 | [
"Core: Tokenization",
"Fast Tokenizers",
"bug"
] | [] | 2024-09-12T17:24:24Z | 2026-01-27T08:28:32Z | 2024-12-27T08:09:37Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | JCRPaquin | 1,820,796 | MDQ6VXNlcjE4MjA3OTY= | User | false |
huggingface/transformers | 2,543,131,188 | I_kwDOCUB6oc6XlRo0 | 33,666 | https://github.com/huggingface/transformers/issues/33666 | https://api.github.com/repos/huggingface/transformers/issues/33666 | Qwen2-VL: Multi-GPU training | ### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.5
- Huggingface_hub version: 0.24.0
- Safetensors version: 0.4.3
- Accelerate version: 0.34.2
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distribu... | open | null | false | 10 | [
"Distributed Training / Models",
"trainer",
"Feature request",
"bug",
"Vision",
"Multimodal"
] | [] | 2024-09-23T16:27:55Z | 2026-02-06T08:47:55Z | null | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ManuelFay | 43,467,008 | MDQ6VXNlcjQzNDY3MDA4 | User | false |
huggingface/transformers | 2,576,436,738 | I_kwDOCUB6oc6ZkU4C | 34,046 | https://github.com/huggingface/transformers/issues/34046 | https://api.github.com/repos/huggingface/transformers/issues/34046 | Support for torch._dynamo.export for Phi3 | ### Feature request
Compared to `symbolic_trace`, the new (but I assume, experimental) entrypoint in `torch._dynamo.export` seems to provide a more robust way to extract modular FX graphs, that can't have any graph breaks.
I have been experimenting with some networks (Pythia, OPT, Llama, Mistral), and they all go thr... | open | null | false | 3 | [
"Feature request",
"Deployment"
] | [] | 2024-10-09T16:42:52Z | 2026-03-12T14:04:03Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Giuseppe5 | 18,719,316 | MDQ6VXNlcjE4NzE5MzE2 | User | false |
huggingface/transformers | 2,613,625,535 | I_kwDOCUB6oc6byMK_ | 34,406 | https://github.com/huggingface/transformers/issues/34406 | https://api.github.com/repos/huggingface/transformers/issues/34406 | Support dynamic batch size | ### Feature request
Hi thanks for the library! When training, I realize that, if a micro batch contains too few tokens, the throughput will be quite bad (i.e. average time per token is large). However, I cannot increase the batch size, because there are long (e.g. 2000 tokens) and short (e.g. 500 tokens) sequences i... | closed | completed | false | 4 | [
"trainer",
"Feature request"
] | [] | 2024-10-25T09:48:47Z | 2026-03-19T13:16:20Z | 2026-03-18T13:14:33Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | fzyzcjy | 5,236,035 | MDQ6VXNlcjUyMzYwMzU= | User | false |
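The dynamic-batch-size request above can be approximated outside the Trainer with a token-budget sampler. A hypothetical greedy sketch (not a `transformers` API) that packs variable-length sequences so padded size stays under a budget:

```python
# Sketch: token-budget batching — group variable-length sequences so
# that (batch size) x (longest sequence in the batch) stays under a
# padding-aware token budget, instead of using a fixed batch size.

def token_budget_batches(lengths, max_tokens):
    """lengths: per-sample sequence lengths, assumed sorted descending
    so padding waste is minimized. Returns lists of sample indices."""
    batches, current = [], []
    longest = 0
    for idx, n in enumerate(lengths):
        new_longest = max(longest, n)
        if current and new_longest * (len(current) + 1) > max_tokens:
            batches.append(current)       # flush: adding idx would overflow
            current, longest = [], 0
            new_longest = n
        current.append(idx)
        longest = new_longest
    if current:
        batches.append(current)
    return batches

lengths = [2000, 1900, 600, 550, 500]  # sorted descending
print(token_budget_batches(lengths, max_tokens=4000))
# → [[0, 1], [2, 3, 4]]  (two long sequences alone, three short together)
```

This mirrors the issue's observation: long sequences get small batches, short sequences get large ones, keeping per-step token counts roughly constant.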
huggingface/transformers | 2,629,405,187 | I_kwDOCUB6oc6cuYoD | 34,567 | https://github.com/huggingface/transformers/issues/34567 | https://api.github.com/repos/huggingface/transformers/issues/34567 | TrainerState's property `num_input_tokens_seen` is not updating | ### System Info
```
- `transformers` version: 4.46.0
- Python version: 3.10.15
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax versio... | closed | completed | false | 5 | [
"bug"
] | [] | 2024-11-01T16:30:39Z | 2026-03-06T07:15:35Z | 2024-11-04T07:47:03Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | SwayamInSync | 74,960,567 | MDQ6VXNlcjc0OTYwNTY3 | User | false |
huggingface/transformers | 2,639,805,010 | I_kwDOCUB6oc6dWDpS | 34,634 | https://github.com/huggingface/transformers/issues/34634 | https://api.github.com/repos/huggingface/transformers/issues/34634 | BarkProcessor voice_preset doesn't work | ### System Info
- `transformers` version: 4.47.0.dev0
- Platform: Windows-11-10.0.22631-SP0
- Python version: 3.12.7
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflow version (GPU?): n... | closed | completed | false | 11 | [
"bug",
"Audio"
] | [] | 2024-11-07T04:01:37Z | 2026-03-06T07:24:28Z | 2025-07-18T13:14:47Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | etheryee | 24,294,721 | MDQ6VXNlcjI0Mjk0NzIx | User | false |
huggingface/transformers | 2,650,135,779 | I_kwDOCUB6oc6d9dzj | 34,689 | https://github.com/huggingface/transformers/issues/34689 | https://api.github.com/repos/huggingface/transformers/issues/34689 | Transformers 4.46.2 breaks model loading for Llama 3.2 90B Vision Instruct | ### System Info
- `transformers` version: 4.46.2
- Platform: Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.14
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.2 (True)
- Te... | closed | completed | false | 7 | [
"bug"
] | [] | 2024-11-11T18:58:37Z | 2026-03-06T08:09:09Z | 2024-11-25T10:20:21Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | iprivit | 41,305,661 | MDQ6VXNlcjQxMzA1NjYx | User | false |
huggingface/transformers | 2,658,784,395 | I_kwDOCUB6oc6eedSL | 34,733 | https://github.com/huggingface/transformers/issues/34733 | https://api.github.com/repos/huggingface/transformers/issues/34733 | Better error message when loading adapter models with peft dependency missing | ### Feature request
Loading adapter models (such as https://huggingface.co/lightonai/MonoQwen2-VL-v0.1/tree/main) fails with an error message when peft isn't installed. The error message
`OSError: lightonai/MonoQwen2-VL-v0.1 does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model... | closed | completed | false | 2 | [
"Feature request",
"PEFT"
] | [] | 2024-11-14T13:15:48Z | 2026-02-03T13:26:06Z | 2026-02-03T13:26:06Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | maxjeblick | 24,281,881 | MDQ6VXNlcjI0MjgxODgx | User | false |
huggingface/transformers | 2,691,898,113 | I_kwDOCUB6oc6gcxsB | 34,928 | https://github.com/huggingface/transformers/issues/34928 | https://api.github.com/repos/huggingface/transformers/issues/34928 | Recomputed tensor size does not match when using activation checkpointing when using FSDP and accelerate | ### System Info
```
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.35
- Python version: 3.12.6
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Ten... | open | reopened | false | 41 | [
"bug"
] | [] | 2024-11-25T19:02:12Z | 2026-03-03T18:28:52Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jjbuck | 12,192,842 | MDQ6VXNlcjEyMTkyODQy | User | false |
huggingface/transformers | 2,770,869,698 | I_kwDOCUB6oc6lKB3C | 35,532 | https://github.com/huggingface/transformers/issues/35532 | https://api.github.com/repos/huggingface/transformers/issues/35532 | RagTokenizer Missing patch_token_id, patch_token, and encode Functionality | ### Feature request
I propose adding the following functionalities to the RagTokenizer in the Hugging Face Transformers library:
Support for patch_token_id and patch_token attributes: These attributes are essential for specifying special tokens that can be used during tokenization, particularly for Retrieval-Augm... | open | null | false | 4 | [
"Good Second Issue",
"Feature request"
] | [] | 2025-01-06T15:13:29Z | 2026-01-24T21:37:07Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | hanshengzhu0001 | 74,083,194 | MDQ6VXNlcjc0MDgzMTk0 | User | false |
huggingface/transformers | 2,772,466,699 | I_kwDOCUB6oc6lQHwL | 35,545 | https://github.com/huggingface/transformers/issues/35545 | https://api.github.com/repos/huggingface/transformers/issues/35545 | ModernBERT export to onnx error | ### System Info
- `transformers` version: 4.48.0.dev0
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.27.0
- Safetensors version: 0.4.5
- Accelerate version: 1.2.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.5.1 (True)
- Tensorflo... | closed | completed | false | 10 | [
"bug",
"ONNX"
] | [] | 2025-01-07T10:26:26Z | 2026-02-12T22:04:20Z | 2025-01-14T10:22:09Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | wakaka6 | 48,764,488 | MDQ6VXNlcjQ4NzY0NDg4 | User | false |
huggingface/transformers | 2,789,302,377 | I_kwDOCUB6oc6mQWBp | 35,707 | https://github.com/huggingface/transformers/issues/35707 | https://api.github.com/repos/huggingface/transformers/issues/35707 | Issue with Progressive Generation Using inputs_embeds and past_key_values | ### System Info
- `transformers` version: 4.46.3
- Platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.17
- Python version: 3.8.20
- Huggingface_hub version: 0.26.1
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow versi... | closed | completed | false | 18 | [
"bug",
"Generation"
] | [] | 2025-01-15T09:39:18Z | 2026-02-18T22:48:47Z | 2025-03-26T08:06:29Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Superbooming | 45,536,749 | MDQ6VXNlcjQ1NTM2NzQ5 | User | false |
huggingface/transformers | 2,825,179,039 | I_kwDOCUB6oc6oZM-f | 36,002 | https://github.com/huggingface/transformers/issues/36002 | https://api.github.com/repos/huggingface/transformers/issues/36002 | Mismatch Between Image Tokens and Features in LLaVA Model Fine-Tuning | **Model: llava-hf/llava-1.5-7b-hf**
**Issue Description**
When I try to generate a response using the fine-tuned model, I encounter the following error:
ValueError: Image features and image tokens do not match: tokens: 575, features: 576
This error occurs during the generate() call, indicating a mismatch between the n... | closed | completed | false | 12 | [] | [] | 2025-02-01T12:15:54Z | 2026-01-31T13:45:00Z | 2025-02-02T08:36:57Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Md-Nasif03 | 164,668,292 | U_kgDOCdCjhA | User | false |
huggingface/transformers | 2,826,780,037 | I_kwDOCUB6oc6ofT2F | 36,010 | https://github.com/huggingface/transformers/issues/36010 | https://api.github.com/repos/huggingface/transformers/issues/36010 | ImportError: cannot import name 'GenerationMixin' from 'transformers.generation' | ### System Info
- `transformers` version: 4.47.1
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
- Python version: 3.11.11
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflo... | closed | completed | false | 3 | [
"bug"
] | [] | 2025-02-03T08:40:41Z | 2026-03-05T04:04:29Z | 2025-03-15T08:04:24Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | sij411 | 77,471,503 | MDQ6VXNlcjc3NDcxNTAz | User | false |
huggingface/transformers | 2,830,956,433 | I_kwDOCUB6oc6ovPeR | 36,032 | https://github.com/huggingface/transformers/issues/36032 | https://api.github.com/repos/huggingface/transformers/issues/36032 | T5 Tokenizer does not load with `AttributeError: add_special_tokens conflicts with the method add_special_tokens in T5Tokenizer` | ### System Info
- `transformers` version: 4.48.2
- Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.12.0
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu118 (True)
- Tensor... | closed | completed | false | 10 | [
"bug"
] | [] | 2025-02-04T18:06:41Z | 2026-03-02T08:12:23Z | 2026-03-02T08:12:23Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | utkarsh-fileread | 171,386,284 | U_kgDOCjclrA | User | false |
huggingface/transformers | 2,859,319,686 | I_kwDOCUB6oc6qbcGG | 36,246 | https://github.com/huggingface/transformers/issues/36246 | https://api.github.com/repos/huggingface/transformers/issues/36246 | ImportError: cannot import name 'Qwen2_5_VLImageProcessor' from 'transformers.models.qwen2_5_vl' (/usr/local/lib/python3.10/dist-packages/transformers/models/qwen2_5_vl/__init__.py) | ### System Info
vllm==0.7.2
transformers==4.49.0
### Who can help?
ImportError: cannot import name 'Qwen2_5_VLImageProcessor' from 'transformers.models.qwen2_5_vl' (/usr/local/lib/python3.10/dist-packages/transformers/models/qwen2_5_vl/__init__.py)
### Information
- [ ] The official example scripts
- [x] My own mo... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-02-18T04:33:33Z | 2026-01-25T23:28:19Z | 2025-02-18T17:02:07Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | LaoWangGB | 144,193,886 | U_kgDOCJg5Xg | User | false |
huggingface/transformers | 2,865,407,905 | I_kwDOCUB6oc6qyqeh | 36,296 | https://github.com/huggingface/transformers/issues/36296 | https://api.github.com/repos/huggingface/transformers/issues/36296 | tensor parallel training bug | ### System Info
transformers:4.45.dev0
python:3.11
linux
### Who can help?
#34194
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
##... | closed | completed | false | 7 | [
"bug"
] | [] | 2025-02-20T08:15:10Z | 2026-03-02T15:39:51Z | 2025-03-31T08:04:28Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | iMountTai | 35,353,688 | MDQ6VXNlcjM1MzUzNjg4 | User | false |
huggingface/transformers | 2,869,102,621 | I_kwDOCUB6oc6rAwgd | 36,331 | https://github.com/huggingface/transformers/issues/36331 | https://api.github.com/repos/huggingface/transformers/issues/36331 | TypeError: CustomTrainer.compute_loss() got an unexpected keyword argument 'num_items_in_batch' | ### System Info
- `transformers` version: 4.50.0.dev0
- Platform: Linux-5.15.0-210.163.7.el8uek.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: 0.16.3
- PyTorch version... | closed | completed | false | 13 | [
"bug"
] | [] | 2025-02-21T13:58:12Z | 2026-03-06T10:49:50Z | 2025-06-01T08:04:12Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ruidazeng | 31,152,346 | MDQ6VXNlcjMxMTUyMzQ2 | User | false |
huggingface/transformers | 2,914,781,972 | I_kwDOCUB6oc6tvAsU | 36,683 | https://github.com/huggingface/transformers/issues/36683 | https://api.github.com/repos/huggingface/transformers/issues/36683 | AttributeError: 'Gemma3Config' object has no attribute 'vocab_size' | ### System Info
v4.50.0.dev0
### Who can help?
@ArthurZucker
@LysandreJik
@xenova
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
... | open | reopened | false | 39 | [
"bug"
] | [] | 2025-03-12T18:11:39Z | 2026-03-23T13:36:49Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jumelet | 9,407,977 | MDQ6VXNlcjk0MDc5Nzc= | User | false |
huggingface/transformers | 2,931,161,181 | I_kwDOCUB6oc6utfhd | 36,817 | https://github.com/huggingface/transformers/issues/36817 | https://api.github.com/repos/huggingface/transformers/issues/36817 | Add EuroBert Model To Config | ### Model description
I would like to have the EuroBert model added to the config (configuration_auto.py) :)
Especially the 210M version:
https://huggingface.co/EuroBERT
This would probably solve an issue in Flair:
https://github.com/flairNLP/flair/issues/3630
```
File "C:\Users\nick\PycharmProjects\flair\.venv\Lib... | open | null | false | 2 | [
"New model"
] | [] | 2025-03-19T09:56:20Z | 2026-02-24T09:37:27Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | zynos | 8,973,150 | MDQ6VXNlcjg5NzMxNTA= | User | false |
huggingface/transformers | 2,947,704,577 | I_kwDOCUB6oc6vsmcB | 36,979 | https://github.com/huggingface/transformers/issues/36979 | https://api.github.com/repos/huggingface/transformers/issues/36979 | [Community contributions] Model cards | Hey friends! 👋
We are currently in the process of improving the Transformers model cards by making them more directly useful for everyone. The main goal is to:
1. Standardize all model cards with a consistent format so users know what to expect when moving between different model cards or trying to learn how to use... | closed | completed | false | 195 | [
"Good First Issue",
"Good First Documentation Issue",
"contributions-welcome"
] | [] | 2025-03-25T20:39:10Z | 2026-03-10T09:38:29Z | 2025-09-17T15:03:55Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | stevhliu | 59,462,357 | MDQ6VXNlcjU5NDYyMzU3 | User | false |
huggingface/transformers | 2,986,242,010 | I_kwDOCUB6oc6x_m_a | 37,428 | https://github.com/huggingface/transformers/issues/37428 | https://api.github.com/repos/huggingface/transformers/issues/37428 | ImportError: cannot import name '_flash_supports_window_size' from 'transformers.modeling_flash_attention_utils' | ### System Info
Hi there,
I'm using tridao's flash attention and I'm running into an import error with the transformers library:
```
File "/g/g14/venkatraman2/glm/glm/train/training.py", line 34, in <module>
from glm.train.train_wrapper_registry import train_wrapper_registry
File "/g/g14/venkatraman2/glm/glm/t... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-04-10T16:30:59Z | 2026-02-09T14:37:12Z | 2025-05-20T08:02:52Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | mv2731 | 60,421,398 | MDQ6VXNlcjYwNDIxMzk4 | User | false |
huggingface/transformers | 3,036,862,351 | I_kwDOCUB6oc61AteP | 37,934 | https://github.com/huggingface/transformers/issues/37934 | https://api.github.com/repos/huggingface/transformers/issues/37934 | Is Llama4TextL2Norm meant to be RMS norm? | https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama4/modeling_llama4.py#L118
```
x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
```
This is just the rms norm? | closed | completed | false | 2 | [] | [] | 2025-05-02T21:28:05Z | 2026-03-10T06:05:09Z | 2025-06-11T08:02:45Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | 0x6b64 | 105,229,704 | U_kgDOBkWtiA | User | false |
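Yes — the quoted line is the scale-free part of RMSNorm: x / sqrt(mean(x²) + eps), without the learned gain that full RMSNorm layers usually multiply in afterwards. A plain-Python check of the same formula:

```python
# Plain-Python equivalent of x * rsqrt(mean(x^2) + eps), i.e. RMSNorm
# without a learned weight: after normalization, mean(x^2) is ~1.
import math

def l2_norm(x, eps=1e-6):
    mean_sq = sum(v * v for v in x) / len(x)
    inv_rms = 1.0 / math.sqrt(mean_sq + eps)  # rsqrt of mean square
    return [v * inv_rms for v in x]

out = l2_norm([3.0, 4.0])
print(out)  # mean(x^2) = 12.5, so each value is divided by ~3.5355
```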
huggingface/transformers | 3,054,509,355 | I_kwDOCUB6oc62EB0r | 38,066 | https://github.com/huggingface/transformers/issues/38066 | https://api.github.com/repos/huggingface/transformers/issues/38066 | `AutoModel.from_pretrained(...)` (with explicit `device_map` unset) fails under `with torch.device("meta")` with PyTorch 2.6.0 and 2.7.0 | ```python
# from torch.nn.attention.flex_attention import BlockMask, flex_attention
from transformers import AutoModel
import torch
with torch.device('meta'):
    AutoModel.from_pretrained('Qwen/Qwen2.5-0.5B', trust_remote_code=True)
```
I found this code in the wild in https://github.com/Open-Reasoner-Zero/Open-Rea... | closed | completed | false | 10 | [] | [] | 2025-05-10T20:35:19Z | 2026-02-03T16:32:34Z | 2025-07-12T08:03:15Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | vadimkantorov | 1,041,752 | MDQ6VXNlcjEwNDE3NTI= | User | false |
huggingface/transformers | 3,068,593,888 | I_kwDOCUB6oc625wbg | 38,175 | https://github.com/huggingface/transformers/issues/38175 | https://api.github.com/repos/huggingface/transformers/issues/38175 | Unexpected Zero Probabilities with siglip2-base-patch16-224 Model | ### System Info
```
transformers version: 4.51.3
Platform: Linux
Python version: 3.10.14
PyTorch version (GPU?): 2.2.2 (CUDA available: True)
Huggingface Hub version: 0.31.2
Safetensors version: 0.5.3
Accelerate version: 1.7.0
Accelerate config: Not configured
TensorFlow version (GPU?): Not installed
Flax version (CPU... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-05-16T10:18:19Z | 2026-02-12T08:50:16Z | 2025-05-30T13:57:13Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Magician6174 | 86,114,922 | MDQ6VXNlcjg2MTE0OTIy | User | false |
huggingface/transformers | 3,113,022,900 | I_kwDOCUB6oc65jPW0 | 38,549 | https://github.com/huggingface/transformers/issues/38549 | https://api.github.com/repos/huggingface/transformers/issues/38549 | Clarification on default top_k sampling parameter | Hi 🤗 team,
I'm writing to inquire about the design choice to set the default top_k sampling parameter to 50 in the transformers library.
https://github.com/huggingface/transformers/blob/f4fc42216cd56ab6b68270bf80d811614d8d59e4/src/transformers/generation/configuration_utils.py#L431
It appears top_k is the only samp... | closed | completed | false | 3 | [] | [] | 2025-06-03T08:42:36Z | 2026-02-13T18:39:29Z | 2025-07-12T08:02:49Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | MostHumble | 56,939,432 | MDQ6VXNlcjU2OTM5NDMy | User | false |
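For context on what that `top_k=50` default does at each decoding step, a minimal sketch of top-k filtering: keep only the k highest logits and mask the rest to -inf before sampling, so low-probability tokens can never be drawn.

```python
# Top-k filtering over a single step's logits: everything below the
# k-th largest logit is set to -inf, removing it from the sample space.
import math

def top_k_filter(logits, k):
    threshold = sorted(logits, reverse=True)[k - 1]  # k-th largest value
    return [v if v >= threshold else -math.inf for v in logits]

logits = [2.0, 0.5, -1.0, 3.0, 0.0]
print(top_k_filter(logits, k=2))
# → [2.0, -inf, -inf, 3.0, -inf]
```

Note that top-k (and the other sampling parameters) only take effect when `do_sample=True`.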
huggingface/transformers | 3,121,797,099 | I_kwDOCUB6oc66Etfr | 38,617 | https://github.com/huggingface/transformers/issues/38617 | https://api.github.com/repos/huggingface/transformers/issues/38617 | ImportError: cannot import name 'layer_type_validation' from 'transformers.configuration_utils' | ### System Info
env:
Name: transformers
Version: 4.53.0.dev0
When I called the code below:
`model = AutoModelForImageTextToText.from_pretrained(model_id, local_files_only=True, **model_kwargs)`
the model_id is a MedGemma model from https://huggingface.co/models?other=medgemma.
the ImportError: cannot import nam... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-06-05T16:09:03Z | 2026-02-12T04:42:18Z | 2025-06-15T07:56:37Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Jacoobr | 25,550,204 | MDQ6VXNlcjI1NTUwMjA0 | User | false |
huggingface/transformers | 3,135,466,321 | I_kwDOCUB6oc6642tR | 38,740 | https://github.com/huggingface/transformers/issues/38740 | https://api.github.com/repos/huggingface/transformers/issues/38740 | [DOCS] Add `pruna` as optimization framework | ### Feature request
Have a section on Pruna AI within the documentation. We did [a similar PR for diffusers](https://github.com/huggingface/diffusers/pull/11688) and thought it would be nice to show how to optimize transformers models too.
.
### Motivation
Have a section on Pruna AI within the documentation to show... | closed | completed | false | 9 | [
"Feature request"
] | [] | 2025-06-11T04:52:33Z | 2026-02-27T14:04:57Z | 2026-02-27T14:04:50Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | davidberenstein1957 | 25,269,220 | MDQ6VXNlcjI1MjY5MjIw | User | false |
huggingface/transformers | 3,202,815,590 | I_kwDOCUB6oc6-5xZm | 39,224 | https://github.com/huggingface/transformers/issues/39224 | https://api.github.com/repos/huggingface/transformers/issues/39224 | transformers: FlaubertTokenizer: do_lowercase_and_remove_accent: make the logger warning actionable (don't only tell what's wrong, rather suggest what could be done about that) | Please, make the logger warning below *actionable* (**don't only tell what's wrong, rather suggest what could be done about that**):
https://github.com/huggingface/transformers/blob/e6a8063ef1af16df964b644b07e1d17e96555d23/src/transformers/models/flaubert/tokenization_flaubert.py#L208-L209
Here's more context:
https... | open | null | false | 21 | [] | [] | 2025-07-04T13:48:52Z | 2026-03-18T08:41:18Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | kirisakow | 11,773,604 | MDQ6VXNlcjExNzczNjA0 | User | false |
huggingface/transformers | 3,214,087,656 | I_kwDOCUB6oc6_kxXo | 39,290 | https://github.com/huggingface/transformers/issues/39290 | https://api.github.com/repos/huggingface/transformers/issues/39290 | v4.53.0+ starts erroring with 'Gemma3TextConfig' object has no attribute 'sliding_window_pattern' with vLLM | ### System Info
- `transformers` version: 4.53.1
- Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.11.13
- Huggingface_hub version: 0.33.2
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch... | closed | completed | false | 6 | [
"bug"
] | [] | 2025-07-09T00:28:57Z | 2026-03-09T07:04:55Z | 2025-07-09T14:10:40Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | es94129 | 12,763,339 | MDQ6VXNlcjEyNzYzMzM5 | User | false |
huggingface/transformers | 3,228,950,168 | I_kwDOCUB6oc7Add6Y | 39,401 | https://github.com/huggingface/transformers/issues/39401 | https://api.github.com/repos/huggingface/transformers/issues/39401 | Qwen3 tokenizer wrong offset_mapping | ### System Info
transformers 4.53.2, Ubuntu 22.04.4, python 3.11.13
### Who can help?
@ArthurZucker and @itazap There must be a problem with the `offset_mapping` of Qwen3 `tokenizer`. The starting point in the text for each token, except the first and the last, is one position behind. I compared it with the BERT's `... | closed | completed | false | 6 | [
"bug"
] | [] | 2025-07-14T14:21:08Z | 2026-01-26T07:39:40Z | 2025-07-16T09:59:35Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | contribcode | 24,355,946 | MDQ6VXNlcjI0MzU1OTQ2 | User | false |
huggingface/transformers | 3,229,815,847 | I_kwDOCUB6oc7AgxQn | 39,404 | https://github.com/huggingface/transformers/issues/39404 | https://api.github.com/repos/huggingface/transformers/issues/39404 | Whisper `return_language` with pipeline no longer working | ### System Info
Platform: Initially discovered on Nvidia. Can be reproduced on CPU and in Google Colab (see attached gist).
- `transformers` version: 4.53.2
- Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
... | open | reopened | false | 12 | [
"bug",
"Audio"
] | [
"eustlb"
] | 2025-07-14T19:36:46Z | 2026-03-24T13:00:45Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Metric-Void | 21,335,640 | MDQ6VXNlcjIxMzM1NjQw | User | false |
huggingface/transformers | 3,265,628,633 | I_kwDOCUB6oc7CpYnZ | 39,692 | https://github.com/huggingface/transformers/issues/39692 | https://api.github.com/repos/huggingface/transformers/issues/39692 | SigLIP2 documentation example has multiple errors (model/processor mismatch + quantization failure) | ### System Info
- `transformers` version: 4.54.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-07-26T13:25:19Z | 2026-02-03T13:37:21Z | 2026-02-03T13:37:21Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | david-littlefield | 30,560,737 | MDQ6VXNlcjMwNTYwNzM3 | User | false |
huggingface/transformers | 3,307,563,945 | I_kwDOCUB6oc7FJWup | 40,070 | https://github.com/huggingface/transformers/issues/40070 | https://api.github.com/repos/huggingface/transformers/issues/40070 | Transformer GGUF support philosophy / naive question | Hey there, I am a huge user of both transformers and diffusers and really love the work of the teams at HF. However something is not entirely clear to me regarding the GGUF support by transformers.
GGUF's main idea is to be a format that allows running big models on machines with limited capabilities.
With this in mind... | open | reopened | false | 6 | [
"Feature request",
"GGUF"
] | [] | 2025-08-10T13:14:42Z | 2026-02-26T05:45:46Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | luke14free | 166,602 | MDQ6VXNlcjE2NjYwMg== | User | false |
huggingface/transformers | 3,353,286,554 | I_kwDOCUB6oc7H3xea | 40,444 | https://github.com/huggingface/transformers/issues/40444 | https://api.github.com/repos/huggingface/transformers/issues/40444 | Finetuning Qwen2.5-VL with an IterableDataset with multiple images per prompt fails | ### System Info
- `transformers` version: 4.55.3
- Platform: Linux-5.4.0-1113-oracle-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.4
- Safetensors version: 0.5.3
- Accelerate version: 1.10.0
- Accelerate config: not found
- DeepSpeed version: 0.16.9
- PyTorch version (accelerator?)... | closed | completed | false | 14 | [
"bug"
] | [] | 2025-08-25T21:38:56Z | 2026-02-22T10:05:33Z | 2025-10-04T08:02:15Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Infernaught | 72,055,086 | MDQ6VXNlcjcyMDU1MDg2 | User | false |
huggingface/transformers | 3,406,911,750 | I_kwDOCUB6oc7LEVkG | 40,822 | https://github.com/huggingface/transformers/issues/40822 | https://api.github.com/repos/huggingface/transformers/issues/40822 | Welcome v5 | In this issue we share our plan for the upcoming version 5 of transformers. We've talked about version 5 for years and it's finally around the corner! We'll release a blog post announcing the focus of this release shortly, and wanted to share what we believe the process will look like over the coming weeks.
- Soon, a ... | closed | completed | false | 33 | [
"for_v5?"
] | [
"LysandreJik",
"ArthurZucker",
"Cyrilvallez"
] | 2025-09-11T14:49:29Z | 2026-03-05T08:08:25Z | 2026-03-05T08:08:25Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | LysandreJik | 30,755,778 | MDQ6VXNlcjMwNzU1Nzc4 | User | false |
huggingface/transformers | 3,432,292,570 | I_kwDOCUB6oc7MlKDa | 40,990 | https://github.com/huggingface/transformers/issues/40990 | https://api.github.com/repos/huggingface/transformers/issues/40990 | Extremely high perplexity on openai/gpt-oss-20b with WikiText-2 (raw) | ### System Info
- `transformers` version: 4.56.1
- Platform: Linux-6.5.0-1025-gcp-x86_64-with-glibc2.35
- Python version: 3.11.10
- Huggingface_hub version: 0.35.0
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: 0.17.3+cu126.pt27.v0.17.3.recogni2
- PyTor... | closed | completed | false | 6 | [
"bug"
] | [] | 2025-09-19T00:40:14Z | 2026-03-01T11:07:19Z | 2025-09-22T10:07:32Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | kuantuna | 66,808,459 | MDQ6VXNlcjY2ODA4NDU5 | User | false |
huggingface/transformers | 3,443,922,628 | I_kwDOCUB6oc7NRhbE | 41,084 | https://github.com/huggingface/transformers/issues/41084 | https://api.github.com/repos/huggingface/transformers/issues/41084 | Set Block Decoding | ### Feature request
Adding Set Block Decoding for Training and inference.
https://huggingface.co/papers/2509.04185
### Motivation
Speeding up generation time with minimal additional fine-tuning.
### Your contribution
Could implement a first draft. | open | null | false | 7 | [
"Feature request"
] | [] | 2025-09-23T06:42:35Z | 2026-02-16T14:07:24Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | davidmrau | 20,661,461 | MDQ6VXNlcjIwNjYxNDYx | User | false |
huggingface/transformers | 3,444,381,708 | I_kwDOCUB6oc7NTRgM | 41,093 | https://github.com/huggingface/transformers/issues/41093 | https://api.github.com/repos/huggingface/transformers/issues/41093 | IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0 | ### System Info
transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", in get_rope_index [rank3]: input_ids = input_ids[attention_mask[i] == 1] IndexError: The shape of the mask [1406] at index 0 does not match the shape of the indexed tensor [1405] at index 0
transformers==4.49.0 transformers==4.51.2
### Who can he... | closed | completed | false | 14 | [
"bug",
"Vision"
] | [] | 2025-09-23T09:11:35Z | 2026-03-08T14:58:20Z | 2025-10-06T08:56:31Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | wyn1015 | 201,194,623 | U_kgDOC_38fw | User | false |
huggingface/transformers | 3,468,498,317 | I_kwDOCUB6oc7OvRWN | 41,211 | https://github.com/huggingface/transformers/issues/41211 | https://api.github.com/repos/huggingface/transformers/issues/41211 | Add DEIMv2 | ### Model description
It would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0.
Related thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20
### Open source status
- [x] The model implementation is availab... | open | null | false | 6 | [
"New model"
] | [] | 2025-09-30T09:43:07Z | 2026-03-01T08:52:35Z | null | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | NielsRogge | 48,327,001 | MDQ6VXNlcjQ4MzI3MDAx | User | false |
huggingface/transformers | 3,511,681,654 | I_kwDOCUB6oc7RUAJ2 | 41,553 | https://github.com/huggingface/transformers/issues/41553 | https://api.github.com/repos/huggingface/transformers/issues/41553 | Bad error message for AutoTokenizer loading Voxtral | ### System Info
Getting the following unhelpful error when trying to load Voxtral's tokenizer with `AutoTokenizer` without `mistral-common` installed.
```
../../.conda/envs/et_new/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:1144: in from_pretrained
return tokenizer_class_fast.from_p... | closed | completed | false | 21 | [
"Good First Issue",
"bug"
] | [] | 2025-10-13T22:37:26Z | 2026-02-13T17:30:05Z | 2025-11-24T12:16:54Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jackzhxng | 32,371,937 | MDQ6VXNlcjMyMzcxOTM3 | User | false |
huggingface/transformers | 3,518,780,612 | I_kwDOCUB6oc7RvFTE | 41,628 | https://github.com/huggingface/transformers/issues/41628 | https://api.github.com/repos/huggingface/transformers/issues/41628 | Cannot import name 'AutoImageProcessor' from 'transformers' | ### System Info
Intel CPU
Nvidia 3090
ubuntu 22.04
python 3.10.12
transformers=5.0.0.dev0 (installed from the official git repo)
### PS:
It was also tested with transformers==4.57.1 installed using "pip install"; the same error persisted while executing "from transformers import AutoImageProcessor, AutoModel".... | closed | completed | false | 6 | [
"bug"
] | [] | 2025-10-15T16:29:20Z | 2026-02-26T18:36:13Z | 2025-10-16T12:37:07Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Pittmann-XIE | 103,981,664 | U_kgDOBjKiYA | User | false |
huggingface/transformers | 3,528,715,552 | I_kwDOCUB6oc7SU-0g | 41,720 | https://github.com/huggingface/transformers/issues/41720 | https://api.github.com/repos/huggingface/transformers/issues/41720 | Qwen3 with auto device mapping fails due to cudaErrorAssert on A800 | ### System Info
- `transformers` version: 4.57.1
- Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 0.35.3
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
... | closed | completed | false | 7 | [
"bug"
] | [] | 2025-10-18T11:50:43Z | 2026-03-12T05:43:23Z | 2026-01-05T08:03:26Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | guosyjlu | 69,756,483 | MDQ6VXNlcjY5NzU2NDgz | User | false |
huggingface/transformers | 3,532,707,392 | I_kwDOCUB6oc7SkNZA | 41,749 | https://github.com/huggingface/transformers/issues/41749 | https://api.github.com/repos/huggingface/transformers/issues/41749 | `_get_num_multimodal_tokens` is not implemented for model `mllama` | vLLM 0.11’s Transformers-backend expects the HF processor to implement a method called `_get_num_multimodal_tokens` which is [not implemented for mllama](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mllama/processing_mllama.py) in `transformers 4.57.1`.
Because of this, `vllm serve met... | closed | completed | false | 4 | [
"bug"
] | [] | 2025-10-20T14:38:22Z | 2026-01-26T10:05:20Z | 2025-10-21T09:58:49Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | mrtpk | 8,076,245 | MDQ6VXNlcjgwNzYyNDU= | User | false |
huggingface/transformers | 3,535,832,788 | I_kwDOCUB6oc7SwIbU | 41,762 | https://github.com/huggingface/transformers/issues/41762 | https://api.github.com/repos/huggingface/transformers/issues/41762 | `IndexError: index 0 is out of bounds for dimension 0 with size 0` when loading Gemma3ForConditionalGeneration with DeepSpeed ZeRO-3 | ### System Info
transformers=4.57.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction... | closed | completed | false | 8 | [
"bug"
] | [] | 2025-10-21T09:58:58Z | 2026-02-20T15:36:18Z | 2025-10-22T15:10:46Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Asunatan | 105,210,894 | U_kgDOBkVkDg | User | false |
huggingface/transformers | 3,548,058,215 | I_kwDOCUB6oc7TexJn | 41,842 | https://github.com/huggingface/transformers/issues/41842 | https://api.github.com/repos/huggingface/transformers/issues/41842 | Incorrect usage of `num_items_in_batch`? | It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430).
However, when loss is computed in the `training_step`, it is computed for each input in the batch one by one. Do... | closed | completed | false | 3 | [] | [] | 2025-10-24T07:36:00Z | 2026-03-09T14:02:44Z | 2025-12-01T08:02:48Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | gohar94 | 6,470,801 | MDQ6VXNlcjY0NzA4MDE= | User | false |
huggingface/transformers | 3,570,611,821 | I_kwDOCUB6oc7U0zZt | 41,950 | https://github.com/huggingface/transformers/issues/41950 | https://api.github.com/repos/huggingface/transformers/issues/41950 | video-classification pipeline looks for image processors | ### System Info
4.57.1
### Who can help?
@zucchini-nlp I can take a stab at this sometime
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details bel... | open | null | false | 6 | [
"WIP",
"bug"
] | [] | 2025-10-30T12:45:06Z | 2026-02-19T10:56:02Z | null | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | merveenoyan | 53,175,384 | MDQ6VXNlcjUzMTc1Mzg0 | User | false |
huggingface/transformers | 3,590,608,152 | I_kwDOCUB6oc7WBFUY | 42,032 | https://github.com/huggingface/transformers/issues/42032 | https://api.github.com/repos/huggingface/transformers/issues/42032 | ValueError: Unrecognized configuration class <class 'transformers.models.qwen3_omni_moe.configuration_qwen3_omni_moe.Qwen3OmniMoeConfig'> for this kind of AutoModel: AutoModel. | ### System Info
I started testing the Qwen3-Omni model when transformers version 4.56.0 was available, which had issues with the model. The bugs were fixed in commits for transformers version 4.57.0, but that fix was only available on git. Since there is a transformers update on the ... | closed | completed | false | 5 | [
"bug"
] | [] | 2025-11-05T11:39:39Z | 2026-02-11T23:54:10Z | 2025-12-27T08:03:07Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Tortoise17 | 36,593,708 | MDQ6VXNlcjM2NTkzNzA4 | User | false |
huggingface/transformers | 3,604,732,641 | I_kwDOCUB6oc7W29rh | 42,111 | https://github.com/huggingface/transformers/issues/42111 | https://api.github.com/repos/huggingface/transformers/issues/42111 | Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models | ### Feature request
A built-in way to cap how many tokens a reasoning model spends inside its ``<think> … </think>`` block. Today, we can only control the total response length via ``max_new_tokens``. No parameter limits the internal reasoning segment when ``enable_thinking=True``.
### Motivation
- Reasoning models ... | open | null | false | 1 | [
"Feature request"
] | [] | 2025-11-09T10:09:11Z | 2026-02-14T05:37:15Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | AndresAlgaba | 35,764,158 | MDQ6VXNlcjM1NzY0MTU4 | User | false |
huggingface/transformers | 3,607,099,901 | I_kwDOCUB6oc7W__n9 | 42,116 | https://github.com/huggingface/transformers/issues/42116 | https://api.github.com/repos/huggingface/transformers/issues/42116 | Integration of the SINQ quantization strategy | ### Feature request
Adding support for **SINQ** quantization for Hugging Face compatible models, enabling users to apply it directly through the configuration settings. The **SINQ** quantization method, recently introduced in the paper [SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weig... | closed | completed | false | 8 | [
"Feature request"
] | [] | 2025-11-10T09:44:32Z | 2026-02-16T15:08:43Z | 2026-02-16T15:08:43Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ChiaraBoretti | 83,216,540 | MDQ6VXNlcjgzMjE2NTQw | User | false |
huggingface/transformers | 3,619,868,194 | I_kwDOCUB6oc7Xws4i | 42,175 | https://github.com/huggingface/transformers/issues/42175 | https://api.github.com/repos/huggingface/transformers/issues/42175 | Tensorflow not include in the backend when using pip install '.[torch]' | ### System Info
I installed the program successfully using `pip install -e .[torch]`.
However, I encounter the issue below when using ``pip install '.[torch]'``:
```
(omni) pqyin@proj54:/data2/pqyin/transformers$ python
Python 3.13.9 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 19:16:10) [GCC 11.2.0] on linu... | closed | completed | false | 2 | [
"bug"
] | [] | 2025-11-13T07:21:50Z | 2026-02-13T22:47:40Z | 2025-11-18T14:49:34Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | yinpeiqi | 60,515,999 | MDQ6VXNlcjYwNTE1OTk5 | User | false |
huggingface/transformers | 3,623,280,797 | I_kwDOCUB6oc7X9uCd | 42,199 | https://github.com/huggingface/transformers/issues/42199 | https://api.github.com/repos/huggingface/transformers/issues/42199 | Cardinality error is incorrect for models derived from DETR that do not have an explicit background class | ## Issue
For DETR variants, the cardinality errors that are reported during training are incorrect. This was reported in the DeformableDETR repository, and was acknowledged but not resolved:
https://github.com/fundamentalvision/Deformable-DETR/issues/24
Since all the derived models no longer include an explicit bac... | closed | completed | false | 10 | [
"bug"
] | [] | 2025-11-13T23:46:02Z | 2026-02-09T17:30:44Z | 2026-02-09T17:30:44Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jveitchmichaelis | 3,159,591 | MDQ6VXNlcjMxNTk1OTE= | User | false |
huggingface/transformers | 3,623,324,953 | I_kwDOCUB6oc7X940Z | 42,200 | https://github.com/huggingface/transformers/issues/42200 | https://api.github.com/repos/huggingface/transformers/issues/42200 | Request of rewriting implementation of prediction_step in trainer.py | ### System Info
Any system, because the problem comes from the source code.
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (gi... | open | null | false | 4 | [
"Good Second Issue",
"bug"
] | [] | 2025-11-14T00:13:40Z | 2026-02-24T22:09:56Z | null | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Yacklin | 139,425,274 | U_kgDOCE91-g | User | false |
huggingface/transformers | 3,624,126,333 | I_kwDOCUB6oc7YA8d9 | 42,202 | https://github.com/huggingface/transformers/issues/42202 | https://api.github.com/repos/huggingface/transformers/issues/42202 | Deformable DETR Finetuning breaks for any dataset | ### System Info
- GPU: V100
- torch2.6.0+cu126
- transformers 4.57.1
### Who can help?
Hi @yonigozlan @molbap @NielsRogge
Thanks for the awesome work on vision models!
I've been trying to finetune the Deformable DETR models (SenseTime/deformable-detr-with-box-refine-two-stage) for the past few days on a custom ... | closed | completed | false | 6 | [
"bug"
] | [] | 2025-11-14T06:29:52Z | 2026-02-08T16:56:36Z | 2026-02-08T16:56:36Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | iamsashank09 | 26,921,144 | MDQ6VXNlcjI2OTIxMTQ0 | User | false |
huggingface/transformers | 3,628,809,478 | I_kwDOCUB6oc7YSz0G | 42,222 | https://github.com/huggingface/transformers/issues/42222 | https://api.github.com/repos/huggingface/transformers/issues/42222 | All vitpose models were broken | ### System Info
transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 304, in forward
raise ValueError(
ValueError: dataset_index must be provided when using multiple experts (num_expert... | closed | completed | false | 11 | [
"bug"
] | [] | 2025-11-15T14:56:04Z | 2026-02-09T08:11:37Z | 2026-02-09T08:11:37Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | lucasjinreal | 21,303,438 | MDQ6VXNlcjIxMzAzNDM4 | User | false |
huggingface/transformers | 3,634,466,348 | I_kwDOCUB6oc7YoY4s | 42,249 | https://github.com/huggingface/transformers/issues/42249 | https://api.github.com/repos/huggingface/transformers/issues/42249 | `parse_response` should drop EOS | When using `parse_response`, I noticed it includes the EOS token in the `content`. However, the EOS token should be excluded, as it adds an unwanted EOS before tool calls during subsequent formatting.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
# Why ... | closed | completed | false | 7 | [] | [] | 2025-11-17T18:14:56Z | 2026-02-15T08:04:47Z | 2026-02-15T08:04:47Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |