

This model was released on 2021-06-01 and added to Hugging Face Transformers on 2022-05-02.

PyTorch FlashAttention SDPA

YOLOS

YOLOS uses a plain Vision Transformer (ViT) for object detection, with minimal modifications and minimal region priors. It can achieve performance comparable to specialized object detection models and frameworks that build in knowledge of 2D spatial structure.

You can find all the original YOLOS checkpoints under the HUST Vision Lab organization.

YOLOS architecture. Taken from the original paper.

This model was contributed by nielsr. Click on the YOLOS models in the right sidebar for more examples of how to apply YOLOS to different object detection tasks.

The example below demonstrates how to detect objects with Pipeline or the AutoModel class.

Pipeline
import torch
from transformers import pipeline

detector = pipeline(
    task="object-detection",
    model="hustvl/yolos-base",
    dtype=torch.float16,
    device=0
)
detector("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
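
AutoModel

The AutoModel path below is a minimal sketch: the checkpoint, detection threshold, and image URL mirror the Pipeline example above and are illustrative choices rather than required ones.

import torch
from io import BytesIO

import httpx
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
with httpx.stream("GET", url) as response:
    image = Image.open(BytesIO(response.read())).convert("RGB")

image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-base")
model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-base")

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to (xmin, ymin, xmax, ymax) boxes in the original image size.
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(
    outputs, threshold=0.9, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 2) for c in box.tolist()])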

Notes

  • Use YolosImageProcessor for preparing images (and optional targets) for the model. Contrary to DETR, YOLOS doesn’t require a pixel_mask, as the sketch below illustrates.
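
A minimal sketch of this point (the dummy image and the hustvl/yolos-base checkpoint are only for illustration); the encoding is expected to contain pixel_values but no pixel_mask:

import numpy as np
from PIL import Image
from transformers import YolosImageProcessor

image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-base")
image = Image.fromarray(np.zeros((480, 640, 3), dtype=np.uint8))  # dummy RGB image

encoding = image_processor(images=image, return_tensors="pt")
print(encoding.keys())                 # expected: dict_keys(['pixel_values']) — no pixel_mask
print(encoding["pixel_values"].shape)  # (1, 3, resized_height, resized_width)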

Resources

YolosConfig

class transformers.YolosConfig

( output_hidden_states: bool | None = False return_dict: bool | None = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False id2label: dict[int, str] | dict[str, str] | None = None label2id: dict[str, int] | dict[str, str] | None = None problem_type: typing.Optional[typing.Literal['regression', 'single_label_classification', 'multi_label_classification']] = None tokenizer_class: str | None = None hidden_size: int = 768 num_hidden_layers: int = 12 num_attention_heads: int = 12 intermediate_size: int = 3072 hidden_act: str = 'gelu' hidden_dropout_prob: float = 0.0 attention_probs_dropout_prob: float = 0.0 initializer_range: float = 0.02 layer_norm_eps: float = 1e-12 image_size: list[int] | tuple[int, ...] = (512, 864) patch_size: int | list[int] | tuple[int, int] = 16 num_channels: int = 3 qkv_bias: bool = True num_detection_tokens: int = 100 use_mid_position_embeddings: bool = True auxiliary_loss: bool = False class_cost: int = 1 bbox_cost: int = 5 giou_cost: int = 2 bbox_loss_coefficient: int = 5 giou_loss_coefficient: int = 2 eos_coefficient: float = 0.1 )

Parameters

  • output_hidden_states (bool, optional, defaults to False) — Whether or not the model should return all hidden-states.
  • return_dict (bool, optional, defaults to True) — Whether to return a ModelOutput (dataclass) instead of a plain tuple.
  • dtype (Union[str, torch.dtype], optional) — The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights.
  • chunk_size_feed_forward (int, optional, defaults to 0) — The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?.
  • is_encoder_decoder (bool, optional, defaults to False) — Whether the model is used as an encoder/decoder or not.
  • id2label (Union[dict[int, str], dict[str, str]], optional) — A map from index (for instance prediction index, or target index) to label.
  • label2id (Union[dict[str, int], dict[str, str]], optional) — A map from label to index for the model.
  • problem_type (Literal[regression, single_label_classification, multi_label_classification], optional) — Problem type for XxxForSequenceClassification models. Can be one of "regression", "single_label_classification" or "multi_label_classification".
  • tokenizer_class (str, optional) — The class name of model’s tokenizer.
  • hidden_size (int, optional, defaults to 768) — Dimension of the hidden representations.
  • num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
  • num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
  • intermediate_size (int, optional, defaults to 3072) — Dimension of the MLP representations.
  • hidden_act (str, optional, defaults to gelu) — The non-linear activation function (function or string) in the encoder. For example, "gelu", "relu", "silu", etc.
  • hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
  • attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities.
  • initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
  • layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers.
  • image_size (Union[list[int], tuple[int, ...]], optional, defaults to (512, 864)) — The size (resolution) of each image.
  • patch_size (Union[int, list[int], tuple[int, int]], optional, defaults to 16) — The size (resolution) of each patch.
  • num_channels (int, optional, defaults to 3) — The number of input channels.
  • qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values.
  • num_detection_tokens (int, optional, defaults to 100) — The number of detection tokens.
  • use_mid_position_embeddings (bool, optional, defaults to True) — Whether to use the mid-layer position encodings.
  • auxiliary_loss (bool, optional, defaults to False) — Whether auxiliary decoding losses (losses at each decoder layer) are to be used.
  • class_cost (int, optional, defaults to 1) — Relative weight of the classification error in the Hungarian matching cost.
  • bbox_cost (int, optional, defaults to 5) — Relative weight of the L1 bounding box error in the Hungarian matching cost.
  • giou_cost (int, optional, defaults to 2) — Relative weight of the generalized IoU loss in the Hungarian matching cost.
  • bbox_loss_coefficient (int, optional, defaults to 5) — Relative weight of the L1 bounding box loss in the object detection loss.
  • giou_loss_coefficient (int, optional, defaults to 2) — Relative weight of the generalized IoU loss in the object detection loss.
  • eos_coefficient (float, optional, defaults to 0.1) — Relative classification weight of the ‘no-object’ class in the object detection loss.

This is the configuration class to store the configuration of a YolosModel. It is used to instantiate a YOLOS model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the YOLOS hustvl/yolos-base architecture.

Configuration objects inherit from PreTrainedConfig and can be used to control the model outputs. Read the documentation from PreTrainedConfig for more information.

Example:

>>> from transformers import YolosConfig, YolosModel

>>> # Initializing a YOLOS hustvl/yolos-base style configuration
>>> configuration = YolosConfig()

>>> # Initializing a model (with random weights) from the hustvl/yolos-base style configuration
>>> model = YolosModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

YolosImageProcessor

class transformers.YolosImageProcessor

( **kwargs: typing_extensions.Unpack[transformers.models.yolos.image_processing_yolos.YolosImageProcessorKwargs] )

Parameters

  • format (str, kwargs, optional, defaults to AnnotationFormat.COCO_DETECTION) — Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
  • do_convert_annotations (bool, kwargs, optional, defaults to True) — Controls whether to convert the annotations to the format expected by the YOLOS model. Converts the bounding boxes to the format (center_x, center_y, width, height) and in the range [0, 1]. Can be overridden by the do_convert_annotations parameter in the preprocess method.
  • **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.

Constructs a YolosImageProcessor image processor.

preprocess

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] annotations: dict[str, int | str | list[dict]] | list[dict[str, int | str | list[dict]]] | None = None return_segmentation_masks: bool | None = None masks_path: str | pathlib.Path | None = None **kwargs: typing_extensions.Unpack[transformers.models.yolos.image_processing_yolos.YolosImageProcessorKwargs] ) ~image_processing_base.BatchFeature

Parameters

  • images (Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, list[PIL.Image.Image], list[numpy.ndarray], list[torch.Tensor]]) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
  • annotations (AnnotationType or list[AnnotationType], optional) — Annotations to transform according to the padding that is applied to the images.
  • return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) — Whether to return segmentation masks.
  • masks_path (str or pathlib.Path, optional) — Path to the directory containing the segmentation masks.
  • format (str, kwargs, optional, defaults to AnnotationFormat.COCO_DETECTION) — Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
  • do_convert_annotations (bool, kwargs, optional, defaults to True) — Controls whether to convert the annotations to the format expected by the YOLOS model. Converts the bounding boxes to the format (center_x, center_y, width, height) and in the range [0, 1]. Can be overridden by the do_convert_annotations parameter in the preprocess method.
  • return_tensors (str or TensorType, optional) — Returns stacked tensors if set to 'pt', otherwise returns a list of tensors.
  • **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.

Returns

~image_processing_base.BatchFeature

  • data (dict) — Dictionary of lists/arrays/tensors returned by the call method (‘pixel_values’, etc.).
  • tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers into PyTorch/NumPy tensors at initialization.
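
For training, annotations in COCO detection format can be passed alongside the images and are returned as labels. The sketch below uses a made-up image, box, and category id purely for illustration:

>>> import numpy as np
>>> from PIL import Image
>>> from transformers import YolosImageProcessor

>>> image_processor = YolosImageProcessor.from_pretrained("hustvl/yolos-base")
>>> image = Image.fromarray(np.zeros((480, 640, 3), dtype=np.uint8))

>>> # COCO detection format: one dict per image, boxes as [x_min, y_min, width, height] in pixels.
>>> annotation = {
...     "image_id": 0,
...     "annotations": [
...         {"bbox": [100.0, 150.0, 80.0, 60.0], "category_id": 1, "area": 4800.0, "iscrowd": 0}
...     ],
... }

>>> encoding = image_processor(images=image, annotations=annotation, return_tensors="pt")
>>> print(encoding.keys())  # expected: pixel_values plus labels (per-image dicts with class_labels and normalized boxes)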

YolosImageProcessorPil

class transformers.YolosImageProcessorPil

( **kwargs: typing_extensions.Unpack[transformers.models.yolos.image_processing_yolos.YolosImageProcessorKwargs] )

Parameters

  • format (str, kwargs, optional, defaults to AnnotationFormat.COCO_DETECTION) — Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
  • do_convert_annotations (bool, kwargs, optional, defaults to True) — Controls whether to convert the annotations to the format expected by the YOLOS model. Converts the bounding boxes to the format (center_x, center_y, width, height) and in the range [0, 1]. Can be overridden by the do_convert_annotations parameter in the preprocess method.
  • **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.

Constructs a YolosImageProcessor image processor.

preprocess

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] annotations: dict[str, int | str | list[dict]] | list[dict[str, int | str | list[dict]]] | None = None return_segmentation_masks: bool | None = None masks_path: str | pathlib.Path | None = None **kwargs: typing_extensions.Unpack[transformers.models.yolos.image_processing_yolos.YolosImageProcessorKwargs] ) ~image_processing_base.BatchFeature

Parameters

  • images (Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, list[PIL.Image.Image], list[numpy.ndarray], list[torch.Tensor]]) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False.
  • annotations (AnnotationType or list[AnnotationType], optional) — Annotations to transform according to the padding that is applied to the images.
  • return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) — Whether to return segmentation masks.
  • masks_path (str or pathlib.Path, optional) — Path to the directory containing the segmentation masks.
  • format (str, kwargs, optional, defaults to AnnotationFormat.COCO_DETECTION) — Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
  • do_convert_annotations (bool, kwargs, optional, defaults to True) — Controls whether to convert the annotations to the format expected by the YOLOS model. Converts the bounding boxes to the format (center_x, center_y, width, height) and in the range [0, 1]. Can be overridden by the do_convert_annotations parameter in the preprocess method.
  • return_tensors (str or TensorType, optional) — Returns stacked tensors if set to 'pt', otherwise returns a list of tensors.
  • **kwargs (ImagesKwargs, optional) — Additional image preprocessing options. Model-specific kwargs are listed above; see the TypedDict class for the complete list of supported arguments.

Returns

~image_processing_base.BatchFeature

  • data (dict) — Dictionary of lists/arrays/tensors returned by the call method (‘pixel_values’, etc.).
  • tensor_type (Union[None, str, TensorType], optional) — You can give a tensor_type here to convert the lists of integers into PyTorch/NumPy tensors at initialization.

pad

( image: ndarray padded_size: tuple annotation: dict[str, typing.Any] | None = None update_bboxes: bool = True fill: int = 0 )

post_process_object_detection

( outputs threshold: float = 0.5 target_sizes: transformers.utils.generic.TensorType | list[tuple] = None ) list[Dict]

Parameters

  • outputs (YolosObjectDetectionOutput) — Raw outputs of the model.
  • threshold (float, optional) — Score threshold to keep object detection predictions.
  • target_sizes (torch.Tensor or list[tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (tuple[int, int]) containing the target size (height, width) of each image in the batch. If unset, predictions will not be resized.

Returns

list[Dict]

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of YolosForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.

YolosModel

class transformers.YolosModel

( config: YolosConfig add_pooling_layer: bool = True )

Parameters

  • config (YolosConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • add_pooling_layer (bool, optional, defaults to True) — Whether to add a pooling layer

The bare Yolos Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values: torch.Tensor | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) BaseModelOutputWithPooling or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.Tensor of shape (batch_size, num_channels, image_size, image_size), optional) — The tensors corresponding to the input images. Pixel values can be obtained using YolosImageProcessor. See YolosImageProcessor.__call__() for details.

Returns

BaseModelOutputWithPooling or tuple(torch.FloatTensor)

A BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (YolosConfig) and inputs.

The YolosModel forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model.

  • pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.

  • hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Example:
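
A minimal sketch of extracting hidden states with the bare model (the hustvl/yolos-tiny checkpoint and COCO image are the same ones used in the detection example further below; loading a detection checkpoint into the bare model simply leaves the detection head unused):

>>> from transformers import AutoImageProcessor, YolosModel
>>> import torch
>>> from PIL import Image
>>> import httpx
>>> from io import BytesIO

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> with httpx.stream("GET", url) as response:
...     image = Image.open(BytesIO(response.read()))

>>> image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
>>> model = YolosModel.from_pretrained("hustvl/yolos-tiny")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # sequence = patch tokens + 1 [CLS] token + config.num_detection_tokens detection tokens
>>> outputs.last_hidden_state.shape[-1] == model.config.hidden_size
True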

YolosForObjectDetection

class transformers.YolosForObjectDetection

( config: YolosConfig )

Parameters

  • config (YolosConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

YOLOS Model (consisting of a ViT encoder) with object detection heads on top, for tasks such as COCO detection.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( pixel_values: FloatTensor labels: list[dict] | None = None **kwargs: typing_extensions.Unpack[transformers.utils.generic.TransformersKwargs] ) YolosObjectDetectionOutput or tuple(torch.FloatTensor)

Parameters

  • pixel_values (torch.FloatTensor of shape (batch_size, num_channels, image_size, image_size)) — The tensors corresponding to the input images. Pixel values can be obtained using YolosImageProcessor. See YolosImageProcessor.__call__() for details.
  • labels (list[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: 'class_labels' and 'boxes' (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4). See the training sketch after the example below.

Returns

YolosObjectDetectionOutput or tuple(torch.FloatTensor)

A YolosObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (YolosConfig) and inputs.

The YolosForObjectDetection forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

  • loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss.

  • loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging.

  • logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries.

  • pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use ~YolosImageProcessor.post_process_object_detection to retrieve the unnormalized bounding boxes.

  • auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer.

  • last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model.

  • hidden_states (tuple[torch.FloatTensor], optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

    Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.

  • attentions (tuple[torch.FloatTensor], optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

    Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

>>> from transformers import AutoImageProcessor, AutoModelForObjectDetection
>>> import torch
>>> from PIL import Image
>>> import httpx
>>> from io import BytesIO

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> with httpx.stream("GET", url) as response:
...     image = Image.open(BytesIO(response.read()))

>>> image_processor = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
>>> model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> outputs = model(**inputs)

>>> # convert outputs (bounding boxes and class logits) to Pascal VOC format (xmin, ymin, xmax, ymax)
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[
...     0
... ]

>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     box = [round(i, 2) for i in box.tolist()]
...     print(
...         f"Detected {model.config.id2label[label.item()]} with confidence "
...         f"{round(score.item(), 3)} at location {box}"
...     )
Detected remote with confidence 0.991 at location [46.48, 72.78, 178.98, 119.3]
Detected remote with confidence 0.908 at location [336.48, 79.27, 368.23, 192.36]
Detected cat with confidence 0.934 at location [337.18, 18.06, 638.14, 373.09]
Detected cat with confidence 0.979 at location [10.93, 53.74, 313.41, 470.67]
Detected remote with confidence 0.974 at location [41.63, 72.23, 178.09, 119.99]
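
A minimal training-style sketch, continuing from the example above with a single made-up target (the class id and box values are illustrative; boxes are expected as normalized (center_x, center_y, width, height)):

>>> labels = [
...     {
...         "class_labels": torch.tensor([17], dtype=torch.long),  # made-up class id
...         "boxes": torch.tensor([[0.55, 0.25, 0.30, 0.40]]),     # normalized (cx, cy, w, h)
...     }
... ]
>>> outputs = model(**inputs, labels=labels)
>>> outputs.loss  # scalar bipartite-matching loss; outputs.loss_dict breaks it down into its components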