---
dataset_info:
  features:
    - name: pair_id
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: image_1
      dtype: string
    - name: image_2
      dtype: string
    - name: idx
      dtype: string
    - name: supercategory
      dtype: string
    - name: category
      dtype: string
    - name: type
      dtype: string
    - name: source_json
      dtype: string
  splits:
    - name: train
      num_bytes: 365770574
      num_examples: 561569
  download_size: 14528564
  dataset_size: 365770574
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - finegrained
  - finegrained-vqa
pretty_name: TWIN
size_categories:
  - 100K<n<1M
---

# TWIN

This repository contains the TWIN dataset introduced in the paper [Same or Not? Enhancing Visual Perception in Vision-Language Models](https://arxiv.org/abs/2512.23592). TWIN contains 561K challenging (image, question, answer) tuples emphasizing fine-grained image understanding.
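To get started, here is a minimal loading sketch using the Hugging Face `datasets` library. The repo id `dmarsili/TWIN` is an assumption inferred from this card; substitute the actual dataset id if it differs.

```python
from datasets import load_dataset

# Load the train split (the dataset's only split).
# NOTE: "dmarsili/TWIN" is an assumed repo id inferred from this card;
# replace it with the actual Hugging Face dataset id if it differs.
ds = load_dataset("dmarsili/TWIN", split="train")

print(ds)  # features: pair_id, question, answer, image_1, image_2, ...

example = ds[0]
print(example["question"], "->", example["answer"])
# image_1 / image_2 are stored as strings (per the schema above),
# not decoded Image features.
print(example["image_1"], example["image_2"])
```

The `pair_id` field presumably ties together questions that refer to the same image pair, so grouping rows by `pair_id` recovers the paired structure.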

To evaluate on this dataset with LMMS-eval, please refer to this repo.

## Citation

If you use the TWIN dataset in your research, please use the following BibTeX entry.

```bibtex
@misc{marsili2025notenhancingvisualperception,
      title={Same or Not? Enhancing Visual Perception in Vision-Language Models},
      author={Damiano Marsili and Aditya Mehta and Ryan Y. Lin and Georgia Gkioxari},
      year={2025},
      eprint={2512.23592},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.23592},
}
```