task_ids:
- multi-class-image-classification
---

## General Information

**Title**: ImageNet-AB

**Description**: ImageNet-AB is an extended version of the ImageNet-1K training set, enriched with annotation byproducts (AB).
In addition to the images and their class labels, the dataset provides a rich history of the interactions recorded for each input signal and each front-end component during the annotation process.
These include mouse traces, click locations, annotation times, and anonymised worker IDs.

**Links**:

- [ICCV'23 Paper](https://arxiv.org/abs/2303.17595)
- [Main Repository](https://github.com/naver-ai/NeglectedFreeLunch)
- [ImageNet Annotation Interface](https://github.com/naver-ai/imagenet-annotation-tool)
## Collection Process

**Collection Details**:
The additional annotations for ImageNet-AB were collected from Amazon Mechanical Turk (MTurk) workers in the US region, since the task was described in English.
The task was designed as a human intelligence task (HIT), and a qualification approval rate of 90% was required to ensure annotation quality.
Each HIT contained 10 pages of annotation tasks, with 48 candidate images per page.
We followed the original ImageNet annotation interface as closely as possible.
See the [GitHub repository](https://github.com/naver-ai/imagenet-annotation-tool) and the [paper](https://arxiv.org/abs/2303.17595) for further information.

Annotators interact with the components of the annotation interface using their input devices.
These interactions yield time-series data of mouse movements (`mouseTracking`) and mouse clicks (`selectedRecord`) for every image.
The dataset also records whether the image was ultimately selected by the annotator in the `selected` field.
**Annotator Compensation**:
Annotators were paid 1.5 USD per HIT.
The median time taken to complete each HIT was 9.0 minutes, yielding an approximate hourly wage of 10.0 USD.
This wage is above the US federal minimum hourly wage.
A total of 20,304 USD was paid to the MTurk annotators, with an additional 20% fee paid to Amazon.
**Annotation Rejection**:
We rejected a HIT under any of the following circumstances (a short Python sketch of these checks follows the list).

- The recall rate was lower than 0.333.
- The total number of selections among the 480 candidate images was lower than 30.
- The annotator did not complete at least 9 out of the 10 pages of tasks.
- The annotation was not found in our database, and the secret hash code submitted to confirm completion was incorrect.

In total, 1,145 out of 14,681 completed HITs (7.8%) were rejected.
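As a rough illustration of how these rules combine, here is a minimal sketch; the `HitRecord` type and its field names are hypothetical and only mirror the criteria above, not the original review tooling.

```python
# Hypothetical sketch of the HIT acceptance rules described above.
# The record type and its fields are illustrative, not part of the released data.
from dataclasses import dataclass


@dataclass
class HitRecord:
    recall: float             # recall of ground-truth images re-selected by the annotator
    num_selections: int       # number of selections among the 480 candidate images
    num_completed_pages: int  # pages completed, out of 10
    found_in_db: bool         # annotation was found in the collection database
    hash_ok: bool             # secret completion hash code was correct


def accept_hit(hit: HitRecord) -> bool:
    """Return True if the HIT passes every rejection check listed above."""
    if hit.recall < 0.333:
        return False
    if hit.num_selections < 30:
        return False
    if hit.num_completed_pages < 9:
        return False
    if not hit.found_in_db and not hit.hash_ok:
        return False
    return True


print(accept_hit(HitRecord(0.5, 120, 10, True, True)))  # True
```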
**Collection Time**:
The entire annotation collection process took place between December 18, 2021, and December 31, 2021.
## Data Schema

Each image is accompanied by an annotation-byproduct record of the following form:

```json
{
  "imageID": "n01440764/n01440764_105",
  "originalImageHeight": 375,
  "originalImageWidth": 500,
  "selected": true,
  "imageHeight": 243,
  "imageWidth": 243,
  "imagePosition": {"x": 857, "y": 1976},
  "hoveredRecord": [
    {"action": "enter", "time": 1641425051},
    {"action": "leave", "time": 1641425319}
  ],
  "selectedRecord": [
    {"x": 0.540, "y": 0.473, "time": 1641425052}
  ],
  "mouseTracking": [
    {"x": 0.003, "y": 0.629, "time": 1641425051},
    {"x": 0.441, "y": 0.600, "time": 1641425052}
  ],
  "worker_id": "47DBDD543E",
  "assignment_id": "3AMYWKA6YLE80HK9QYYHI2YEL2YO6L",
  "page_idx": 3
}
```
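To make the schema concrete, here is a minimal sketch of inspecting one such record in Python; the function name and the `example_record.json` path are hypothetical, and only the field names come from the schema above.

```python
# Minimal sketch: summarising the byproducts of a single image.
# `record` is assumed to be a dict with the fields shown in the schema above.
import json


def summarise_record(record: dict) -> dict:
    hovered = record.get("hoveredRecord", [])
    enters = [e["time"] for e in hovered if e["action"] == "enter"]
    leaves = [e["time"] for e in hovered if e["action"] == "leave"]
    # Total time (in the record's time units) that the cursor spent over the image.
    hover_time = sum(leave - enter for enter, leave in zip(enters, leaves))
    return {
        "imageID": record["imageID"],
        "selected": record["selected"],
        "num_clicks": len(record.get("selectedRecord", [])),
        "num_mouse_points": len(record.get("mouseTracking", [])),
        "hover_time": hover_time,
    }


if __name__ == "__main__":
    with open("example_record.json") as f:  # hypothetical path to one record
        print(summarise_record(json.load(f)))
```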
## Usage

One can use the annotation byproducts to improve model generalisability and robustness.
This is appealing, as the annotation byproducts do not incur any extra annotation cost.
For more information, refer to our [ICCV'23 Paper](https://arxiv.org/abs/2303.17595).
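As a starting point, the sketch below streams a few samples with the Hugging Face `datasets` library; the repository id is a placeholder for this dataset's actual repo id, and the per-sample field names depend on how the data are packaged.

```python
# Hedged sketch: streaming a few samples with the Hugging Face `datasets` library.
# "naver-ai/imagenet-ab" is a placeholder repo id; access may also require
# accepting the ImageNet terms and being logged in to the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("naver-ai/imagenet-ab", split="train", streaming=True)

for sample in ds.take(3):
    # Each sample is expected to pair the image and label with its annotation byproducts.
    print(sorted(sample.keys()))
```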
## Dataset Statistics

There were two annotation rounds covering the 1,281,167 ImageNet-1K training images.
In the first round, annotators re-selected 71.8% of these images.
The remaining 28.2% were re-packaged into a second batch of HITs, from which an additional 14.9% of all training images were selected.
In total, 1,110,786 (86.7%) of the ImageNet-1K training images were re-selected, and annotation byproducts are available for 1,272,225 (99.3%) of the images.

Other dataset statistics are inherited from the parent dataset, ImageNet-1K.
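As a quick sanity check, the round-wise percentages above compose to the overall selection rate (all numbers taken from this section):

```python
# Sanity check of the reported statistics (numbers from the paragraph above).
total_images = 1_281_167
round1 = 0.718 * total_images    # ~919,878 images re-selected in round 1
round2 = 0.149 * total_images    # ~190,894 additional images in round 2
print(round1 + round2)           # ~1,110,772, matching the reported 1,110,786 up to rounding
print(1_110_786 / total_images)  # ~0.867, i.e. the reported 86.7%
```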
## Ethics and Legalities

The crowdsourced annotators were fairly compensated for their time, at a rate well above the US federal minimum wage.
In terms of data privacy, the dataset maintains the same ethical standards as the original ImageNet-1K dataset.
Worker identifiers were anonymised with a non-reversible hashing function to preserve privacy.

Our data collection obtained IRB approval from an author's institution.
For future collections of annotation byproducts, we note that there is a risk that byproducts may contain private information about the annotators.
Data collectors may even attempt to harvest more private information as byproducts.
We urge data collectors not to collect or exploit private information from annotators and, whenever appropriate, to ask for the annotators' consent.
## Citation Information

If you use this dataset, please cite the following paper.

```bibtex
@inproceedings{han2023iccv,
  title = {Neglected Free Lunch – Learning Image Classifiers Using Annotation Byproducts},
  author = {Han, Dongyoon and Choe, Junsuk and Chun, Seonghyeok and Chung, John Joon Young and Chang, Minsuk and Yun, Sangdoo and Song, Jean Y. and Oh, Seong Joon},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = {2023}
}
```
## Maintenance and Updates

This section will be updated whenever there are changes or updates to the dataset.
## Known Limitations

We have not been able to acquire annotation byproducts for all samples in the original ImageNet-1K dataset.
This is because not all ImageNet-1K samples were re-selected by the annotators, possibly because of errors in the original ImageNet-1K dataset.
Given the budget constraints, we have also not been able to acquire 10+ annotations per sample, as was done in the original work.
|