---
annotations_creators:
- crowdsourced
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
paperswithcode_id: imagenet
pretty_name: ImageNet
size_categories:
- 1M<n<10M
source_datasets:
- https://huggingface.co/datasets/imagenet-1k
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
---
## General Information
**Title**: ImageNet-AB
**Description**: ImageNet-AB is an extended version of the ImageNet-1K training set, enriched with annotation byproducts (AB).
In addition to the images and their class labels, the dataset provides a rich record of the annotators' interactions with each front-end component during the annotation process.
These include mouse traces, click locations, annotation times, and anonymised worker IDs.
**Links**:
- [ICCV'23 Paper](https://arxiv.org/abs/2303.17595)
- [Main Repository](https://github.com/naver-ai/NeglectedFreeLunch)
- [ImageNet Annotation Interface](https://github.com/naver-ai/imagenet-annotation-tool)
## Collection Process
**Collection Details**:
The additional annotations for the ImageNet-AB dataset were collected from Amazon Mechanical Turk (MTurk) workers in the US region, since the task was described in English.
The task was designed as a human intelligence task (HIT), and workers were required to have an approval rate of at least 90% to ensure annotation quality.
Each HIT contained 10 pages of annotation tasks, with 48 candidate images per page.
We follow the original annotation interface of ImageNet as closely as possible.
See the [GitHub repository](https://github.com/naver-ai/imagenet-annotation-tool) and the [paper](https://arxiv.org/abs/2303.17595) for further information.
Annotators interact with the components of the annotation interface using input devices.
These interactions yield time-series data for mouse movements (mouseTracking) and mouse clicks (selectedRecord) for every image.
The dataset also records whether the image was ultimately selected by the annotator in the `selected` field.
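To make the structure of these signals concrete, here is a minimal sketch of how a single per-image record (see the Data Schema section below) could be summarised. The helper name and the derived quantities are illustrative assumptions, not part of the released tooling.

```python
# A minimal sketch (not dataset tooling): summarising the interaction
# byproducts of one image record. Field names follow the schema shown
# in the "Data Schema" section below.
import math

def summarise_record(record: dict) -> dict:
    """Compute hover dwell time (seconds) and mouse path length for one image."""
    hovered = record.get("hoveredRecord", [])
    enters = [e["time"] for e in hovered if e["action"] == "enter"]
    leaves = [e["time"] for e in hovered if e["action"] == "leave"]
    dwell = sum(leave - enter for enter, leave in zip(enters, leaves))

    # Path length in normalised image coordinates.
    trace = record.get("mouseTracking", [])
    path = sum(
        math.hypot(b["x"] - a["x"], b["y"] - a["y"])
        for a, b in zip(trace, trace[1:])
    )
    return {"selected": record["selected"], "dwell_s": dwell, "path": path}
```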
**Annotator Compensation**:
Annotators were paid 1.5 USD per HIT.
The median time to complete a HIT was 9.0 minutes, yielding an approximate hourly wage of 10.0 USD (1.5 USD × 60 / 9).
This wage is above the US federal minimum hourly wage.
A total of 20,304 USD was paid to the MTurk annotators, with an additional 20% fee paid to Amazon.
**Annotation Rejection**:
We rejected a HIT under any of the following circumstances (expressed as code in the sketch after this list):
- The recall rate was lower than 0.333.
- The total number of selections among the 480 candidates was lower than 30.
- The annotator did not complete at least 9 of the 10 pages of tasks.
- The annotation was not found in our database, and the secret hash code for confirming completion was incorrect.

In total, 1,145 of the 14,681 completed HITs (7.8%) were rejected.
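For concreteness, the rejection criteria can be written as a simple predicate. This is a hedged sketch with illustrative argument names, not the exact script used during collection.

```python
# Hedged sketch of the HIT-level rejection rules listed above; argument
# names are illustrative, not the script used during collection.
def should_reject(recall: float, n_selected: int, pages_completed: int,
                  found_in_db: bool, hash_ok: bool) -> bool:
    """Return True if a completed HIT matches any rejection criterion."""
    return (
        recall < 0.333                        # recall rate too low
        or n_selected < 30                    # fewer than 30 of 480 candidates selected
        or pages_completed < 9                # fewer than 9 of 10 pages completed
        or (not found_in_db and not hash_ok)  # no record and wrong secret hash code
    )
```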
**Collection Time**:
The entire annotation collection process took place between December 18, 2021, and December 31, 2021.
## Data Schema
```json
{
  "imageID": "n01440764/n01440764_105",
  "originalImageHeight": 375,
  "originalImageWidth": 500,
  "selected": true,
  "imageHeight": 243,
  "imageWidth": 243,
  "imagePosition": {"x": 857, "y": 1976},
  "hoveredRecord": [
    {"action": "enter", "time": 1641425051},
    {"action": "leave", "time": 1641425319}
  ],
  "selectedRecord": [
    {"x": 0.540, "y": 0.473, "time": 1641425052}
  ],
  "mouseTracking": [
    {"x": 0.003, "y": 0.629, "time": 1641425051},
    {"x": 0.441, "y": 0.600, "time": 1641425052}
  ],
  "worker_id": "47DBDD543E",
  "assignment_id": "36DSNE9QZFQKOCZGAHS6R63J6E1OJL",
  "page_idx": 3
}
```
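For a quick look at the records, the following is a minimal loading sketch with the Hugging Face `datasets` library. The repository id `naver-ai/imagenet-ab` and the available splits are assumptions based on this card, not verified values.

```python
# Minimal loading sketch; the repository id below is an assumption.
from datasets import load_dataset

ds = load_dataset("naver-ai/imagenet-ab", split="train", streaming=True)
record = next(iter(ds))  # one record following the schema above
print(record["imageID"], record["selected"], len(record["mouseTracking"]))
```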
## Usage
The annotation byproducts can be used to improve model generalisability and robustness.
This is appealing, as the annotation byproducts incur no extra annotation cost for the annotators.
For more information, refer to our [ICCV'23 Paper](https://arxiv.org/abs/2303.17595).
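As one hedged illustration of the idea, loosely following the multi-task setup described in the paper, a classifier can be trained to predict the class label together with a byproduct such as the normalised click location from `selectedRecord`. The module names, dimensions, and loss weighting below are illustrative assumptions.

```python
import torch.nn as nn

class ClassifierWithByproductHead(nn.Module):
    """Predict the class label plus a byproduct (here: a click location)."""
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int = 1000):
        super().__init__()
        self.backbone = backbone                  # any feature extractor
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.click_head = nn.Linear(feat_dim, 2)  # (x, y) in [0, 1]

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.click_head(feats).sigmoid()

# Training combines the usual cross-entropy with a small auxiliary term, e.g.
#   loss = ce(logits, label) + 0.1 * mse(pred_click, click_from_selectedRecord)
```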
## Dataset Statistics
There were two annotation rounds covering the 1,281,167 ImageNet-1K training images.
In the first round, annotators re-selected 71.8% of these images.
The remaining 28.2% were re-packaged into a second batch of HITs, from which an additional 14.9% were selected.
In total, 1,110,786 (86.7%) of the ImageNet-1K training images were re-selected, with annotation byproducts available for 1,272,225 (99.3%) of the images.
Other dataset statistics are inherited from the parent dataset, ImageNet-1K.
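The quoted fractions follow directly from the counts:

```python
# Sanity check of the fractions quoted above.
total = 1_281_167            # ImageNet-1K training images
reselected = 1_110_786       # re-selected across both rounds
with_byproducts = 1_272_225  # images with annotation byproducts
print(f"{reselected / total:.1%}")       # -> 86.7%
print(f"{with_byproducts / total:.1%}")  # -> 99.3%
```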
## Ethics and Legalities
The crowdsourced annotators were fairly compensated for their time, at a rate well above the US federal minimum wage.
In terms of data privacy, the dataset maintains the same ethical standards as the original ImageNet-1K dataset.
Worker identifiers were anonymised with a non-reversible hashing function to preserve privacy.
Our data collection obtained IRB approval from an author's institute.
For future collections of annotation byproducts, we note the potential risk that byproducts may contain private information about annotators.
Data collectors may even attempt to harvest more private information as byproducts.
We urge data collectors not to collect or exploit private information from annotators.
Whenever appropriate, annotators' consent must be sought.
## Citation Information
If you use this dataset, please cite:
```
@inproceedings{han2023iccv,
  title = {Neglected Free Lunch -- Learning Image Classifiers Using Annotation Byproducts},
  author = {Han, Dongyoon and Choe, Junsuk and Chun, Seonghyeok and Chung, John Joon Young and Chang, Minsuk and Yun, Sangdoo and Song, Jean Y. and Oh, Seong Joon},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
```
## Maintenance and Updates
This section will be updated as and when there are changes or updates to the dataset.
## Known Limitations
We have not been able to acquire annotation byproducts for all of the original ImageNet-1K samples.
This is because not all ImageNet-1K samples were re-selected by the annotators, possibly because of errors in the original ImageNet-1K dataset.
Given the budget constraints, we have not been able to acquire 10+ annotations per sample, as was done in the original work.