Dataset Card for ARU Speech Corpus
The ARU Speech Corpus is a high-quality collection of IEEE (Harvard) sentences recorded in anechoic conditions by twelve native British English speakers. This dataset was created at the University of Liverpool's Acoustics Research Unit for speech intelligibility research.
Dataset Details
Dataset Description
The ARU speech corpus comprises single-channel recordings of 720 IEEE sentences spoken by twelve adult native British English speakers (6 male, 6 female) in controlled anechoic conditions. All recordings were made in October and November 2017 using professional-grade audio equipment in the Acoustics Research Unit's anechoic chamber. The corpus features a high sampling rate (65,536 Hz), 24-bit depth, and careful signal processing to ensure consistent speech levels across all recordings. Speakers were selected for near Received Pronunciation accents and underwent audiometric screening to ensure normal hearing ability.
- Curated by: Dr. Simone Graetzer, Dr. Gary Seiffert, and Professor Carl Hopkins (Acoustics Research Unit, University of Liverpool)
- Funded by: HM Government
- Shared by: University of Liverpool, Acoustics Research Unit
- Language(s) (NLP): English (en-GB, British English)
- License: CC-BY-4.0 (verify based on original repository terms)
Dataset Sources
- Repository: https://datacat.liverpool.ac.uk/681/
- Paper: Hopkins, C., Graetzer, S., Seiffert, G. (2019). ARU adult British English speaker corpus of IEEE sentences (ARU speech corpus) version 1.0
Uses
Direct Use
This dataset is suitable for:
- Automatic Speech Recognition (ASR) training and evaluation, particularly for British English
- Speech intelligibility research in noise and reverberant conditions
- Speaker recognition and verification systems
- Accent classification and dialect studies
- Speech quality assessment benchmarking
- Audio signal processing algorithm development
- Text-to-speech (TTS) evaluation using reference speech
- Acoustic model training for British English variants
The high sampling rate (65,536 Hz) makes it particularly valuable for wideband and super-wideband speech processing research.
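For applications that do not need super-wideband audio, the recordings can be downsampled. A minimal sketch (the use of `scipy` and synthetic audio in place of a corpus file are illustrative assumptions):

```python
# Sketch: downsampling from the corpus rate of 65,536 Hz to 16 kHz (typical for ASR).
from math import gcd

import numpy as np
from scipy.signal import resample_poly


def downsample(audio: np.ndarray, orig_sr: int = 65536, target_sr: int = 16000) -> np.ndarray:
    """Polyphase resampling with a built-in anti-aliasing filter."""
    g = gcd(orig_sr, target_sr)
    return resample_poly(audio, target_sr // g, orig_sr // g)


# One second of synthetic audio stands in for a corpus recording.
one_second = np.random.randn(65536)
resampled = downsample(one_second)
print(len(resampled))  # 16000
```

`resample_poly` is preferred over naive decimation here because it applies an anti-aliasing low-pass filter before reducing the rate.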
Out-of-Scope Use
This dataset should not be used for:
- Speaker identification for surveillance purposes - violates participant consent terms
- Biometric authentication systems - participants did not consent to such use
- Training models for strongly regional British accents - speakers were specifically selected for near Received Pronunciation
- Emotional speech recognition - recordings were made with neutral, conversational delivery
- Spontaneous speech modeling - content is read speech from standardized sentence lists
- Multi-speaker or overlapping speech scenarios - all recordings are single-speaker
- Noisy or reverberant speech modeling - recordings made in anechoic conditions
Dataset Structure
Data Instances
Each instance contains:
- audio: Audio file at 65,536 Hz sampling rate, 24-bit depth
- speaker_id: Two-digit identifier (01-12)
- sex: Speaker gender (M/F)
- age: Speaker age in years (21-47)
- accent: Geographic origin (county where primary/secondary education completed)
- list_number: IEEE word list number (1-72)
- sentence_number: Sentence number within list (1-10)
- text: IEEE sentence transcription (if available)
Data Splits
Because the corpus has only 12 speakers, the suggested splits are speaker-disjoint and maintain a 50/50 gender balance in each split:
| Split | Speakers | Percentage | Files |
|---|---|---|---|
| Train | 8 (4M, 4F) | 66.7% | 5,760 |
| Test | 2 (1M, 1F) | 16.7% | 1,440 |
| Validation | 2 (1M, 1F) | 16.7% | 1,440 |
Total: 8,640 utterances (12 speakers × 720 sentences)
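A speaker-disjoint, gender-balanced split can be built directly from the demographics table. The particular speaker-to-split assignment below is an illustrative assumption, not one prescribed by the corpus:

```python
# Speaker IDs and sex, taken from the demographics table in this card.
SPEAKERS = {
    "01": "M", "02": "M", "03": "F", "04": "F", "05": "M", "06": "M",
    "07": "F", "08": "F", "09": "F", "10": "M", "11": "F", "12": "M",
}


def make_splits(speakers: dict) -> dict:
    """Assign whole speakers to splits, keeping each split 50/50 by sex."""
    males = sorted(s for s, sex in speakers.items() if sex == "M")
    females = sorted(s for s, sex in speakers.items() if sex == "F")
    return {
        "train": males[:4] + females[:4],        # 8 speakers (4M, 4F)
        "test": males[4:5] + females[4:5],       # 2 speakers (1M, 1F)
        "validation": males[5:] + females[5:],   # 2 speakers (1M, 1F)
    }


splits = make_splits(SPEAKERS)
print({k: len(v) for k, v in splits.items()})  # {'train': 8, 'test': 2, 'validation': 2}
```

Keeping splits speaker-disjoint matters: with only 12 speakers, letting the same voice appear in both train and test would inflate evaluation scores.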
File Naming Convention
Files follow the pattern: `ID{speaker}_ARU_Fs=65536Hz_Standard speech - List {list_num} - Sentence {sent_num} - Version 1_0.wav`
Example: `ID01_ARU_Fs=65536Hz_Standard speech - List 1 - Sentence 1 - Version 1_0.wav`
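The naming convention above can be parsed mechanically to recover per-file metadata. A sketch using a regular expression (the field names in the returned dictionary are this card's, not part of the corpus itself):

```python
import re

# Regex mirroring the documented filename pattern.
FILENAME_RE = re.compile(
    r"ID(?P<speaker>\d{2})_ARU_Fs=(?P<fs>\d+)Hz_Standard speech"
    r" - List (?P<list>\d+) - Sentence (?P<sentence>\d+) - Version 1_0\.wav"
)


def parse_filename(name: str) -> dict:
    """Extract speaker, sampling rate, list, and sentence numbers from a filename."""
    m = FILENAME_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unexpected filename: {name!r}")
    return {
        "speaker_id": m["speaker"],
        "sampling_rate": int(m["fs"]),
        "list_number": int(m["list"]),
        "sentence_number": int(m["sentence"]),
    }


meta = parse_filename("ID01_ARU_Fs=65536Hz_Standard speech - List 1 - Sentence 1 - Version 1_0.wav")
print(meta)  # {'speaker_id': '01', 'sampling_rate': 65536, 'list_number': 1, 'sentence_number': 1}
```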
Dataset Creation
Curation Rationale
This corpus was created as part of a larger research project investigating speech intelligibility in noise. The goal was to obtain high-quality reference recordings of standardized speech materials (IEEE sentences) from native British English speakers in controlled acoustic conditions. The anechoic recording environment eliminates room reflections, making the recordings suitable for adding controlled acoustic conditions in post-processing for intelligibility studies.
Source Data
Data Collection and Processing
Recording Setup:
- Environment: ARU anechoic chamber (internal dimensions 5m × 4m × 2.6m)
- Microphone: Brüel & Kjær Type 4190 free-field half-inch microphone
- Preamplifier: Brüel & Kjær Type 2669 (No. 3004348)
- Conditioning amplifier: Brüel & Kjær Nexus (Serial 2301697)
- Generator module: Brüel & Kjær LAN-XI Type 3160-A 4/2
- Recording software: Brüel & Kjær Pulse Time Data Recorder v20
- Microphone distance: 1m on-axis from speaker
- Sampling rate: 65,536 Hz
- Bit depth: 24 bits per sample
Signal Processing:
- High-pass filtering to remove energy below 60 Hz (Finite Impulse Response filter with Kaiser window method)
- Low-pass filtering to attenuate energy above 9 kHz (removes electrical background noise)
- Normalization using the activlev function from VOICEBOX (Brookes, 2014-2016) to achieve consistent active speech levels according to ITU-T P.56 (2011)
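The filtering stage described above can be sketched with Kaiser-window FIR filters. The filter length and window beta below are illustrative assumptions (the corpus documentation does not specify them here), and zero-phase `filtfilt` is a design choice of this sketch rather than the documented processing chain:

```python
import numpy as np
from scipy.signal import filtfilt, firwin

FS = 65536      # corpus sampling rate
NUMTAPS = 4097  # assumed filter length; odd, so a high-pass FIR is realizable

# Kaiser-window FIR designs; beta = 8.6 gives roughly 90 dB side-lobe attenuation.
highpass = firwin(NUMTAPS, 60.0, window=("kaiser", 8.6), pass_zero=False, fs=FS)
lowpass = firwin(NUMTAPS, 9000.0, window=("kaiser", 8.6), pass_zero=True, fs=FS)

signal = np.random.randn(FS)  # one second of noise as a stand-in recording
# High-pass (remove energy below 60 Hz), then low-pass (attenuate above 9 kHz).
filtered = filtfilt(lowpass, [1.0], filtfilt(highpass, [1.0], signal))
print(filtered.shape)  # (65536,)
```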
Recording Procedure:
- Speakers seated comfortably in anechoic chamber
- Instructed to speak with "normal vocal effort, as you would in everyday conversation"
- Sentences presented in randomized order
- Video monitoring to ensure speakers faced microphone
- Repetition allowed for hesitations or errors
Who are the source data producers?
Speaker Demographics:
| ID | Gender | Age | Geographic Origin (County, Country) |
|---|---|---|---|
| 01 | M | 47 | Avon, England |
| 02 | M | 21 | Ceredigion, Wales |
| 03 | F | 23 | Berkshire, England |
| 04 | F | 35 | Surrey and Middlesex, England |
| 05 | M | 35 | Denbighshire and Conwy, Wales |
| 06 | M | 47 | Kent, England |
| 07 | F | 24 | Norfolk, England |
| 08 | F | 32 | Merseyside, England |
| 09 | F | 44 | Wirral, England |
| 10 | M | 29 | Cheshire, England |
| 11 | F | 45 | East Sussex, England |
| 12 | M | 32 | Leicestershire, England |
Selection Criteria:
- Native British English speakers (first language)
- Age range: 20-60 years (actual: 21-47)
- Completed all primary and secondary schooling in the UK
- Accents not strongly regional (preference for near Received Pronunciation)
- Non-smokers with no recent smoking history
- No history of speech disorders or treatment by speech pathologist
- No medical conditions affecting vocal apparatus (vocal folds, larynx, trachea, pharynx, esophagus, respiratory system)
- Not taking medications affecting speech-related anatomy
- Self-reported normal hearing ability
- Passed audiometric screening: thresholds of 20 dB HL or better (age-adjusted) at frequencies from 125 Hz to 8 kHz (per BS EN ISO 8253-1:2010)
Annotations
Annotation process
The dataset uses IEEE (Harvard) sentences, which are standardized phonetically-balanced sentences commonly used in speech research. Transcriptions are available from the IEEE standard (IEEE, 1969). Speaker metadata (ID, gender, age, accent) was collected during participant screening and is encoded in filenames and metadata fields.
Who are the annotators?
Not applicable - the dataset uses pre-existing IEEE sentence materials. Metadata was collected by the research team during participant recruitment and screening.
Personal and Sensitive Information
Privacy Protections:
- Participants are identified only by anonymous ID numbers (01-12)
- No names, contact details, or uniquely identifiable information is included
- Only aggregate demographic information is shared: age (in years) and county of education
- All participants provided informed consent specifically for public distribution of recordings
- Participants were explicitly informed that age and educational county would be associated with their recordings
Consent Process:
- Two-part screening procedure with full informed consent
- Participants received written information sheets explaining data usage
- Explicit consent obtained for public distribution via the ARU website
- Participants were paid £15 per recording session (as Tesco vouchers)
- Right to withdraw data until November 30, 2018 (before public release)
Bias, Risks, and Limitations
Demographic Limitations:
- Limited speaker diversity: Only 12 speakers total
- Age range: 21-47 years (excludes children, older adults)
- Geographic bias: Primarily England (10 speakers), limited Welsh representation (2 speakers)
- Accent bias: Selected for near Received Pronunciation, not representative of regional British English varieties
- Health bias: Excludes speakers with any hearing loss, speech disorders, or smoking history
- Socioeconomic bias: Likely skewed toward university-affiliated individuals
Technical Limitations:
- Anechoic conditions: Not representative of real-world acoustic environments
- Read speech only: Does not capture spontaneous speech characteristics
- Limited phonetic content: Restricted to IEEE sentence set
- Single-channel: No multi-microphone or spatial audio data
- High sampling rate: 65,536 Hz may require downsampling for many applications
Ethical Considerations:
- Voice biometrics could potentially identify speakers despite anonymization
- Dataset should not be used for surveillance or unauthorized speaker identification
- Limited diversity may lead to biased model performance on underrepresented demographics
Recommendations
Users should:
- Acknowledge limitations when reporting results, particularly regarding accent and demographic diversity
- Downsample appropriately for applications not requiring super-wideband audio (most ASR systems use 16 kHz)
- Combine with other datasets for more demographically diverse training data
- Respect participant consent by not using recordings for biometric identification or surveillance
- Consider acoustic mismatch when applying models trained on anechoic speech to real-world conditions
- Evaluate fairness across age, gender, and accent when using for model development
- Cite properly using the provided citation information
- Verify license compliance for commercial applications
Citation
BibTeX:
@misc{hopkins2019aru,
author = {Hopkins, Carl and Graetzer, Simone and Seiffert, Gary},
title = {ARU adult British English speaker corpus of IEEE sentences (ARU speech corpus) version 1.0},
year = {2019},
publisher = {University of Liverpool},
howpublished = {Acoustics Research Unit, School of Architecture, University of Liverpool},
doi = {10.17638/datacat.liverpool.ac.uk/681},
url = {https://datacat.liverpool.ac.uk/681/}
}
APA:
Hopkins, C., Graetzer, S., & Seiffert, G. (2019). ARU adult British English speaker corpus of IEEE sentences (ARU speech corpus) version 1.0 [Data set]. Acoustics Research Unit, School of Architecture, University of Liverpool. https://doi.org/10.17638/datacat.liverpool.ac.uk/681
Glossary
- IEEE sentences: Phonetically balanced sentences from the IEEE Recommended Practice for Speech Quality Measurements (1969), also known as Harvard sentences
- Anechoic chamber: A room designed to absorb sound reflections, creating a reflection-free environment
- Received Pronunciation (RP): The accent traditionally considered standard British English, historically associated with educated speakers in southern England
- Active speech level: A speech level measurement that excludes pauses, per ITU-T P.56
- dB HL: Decibels Hearing Level, an audiometric measurement relative to normal hearing thresholds
- Sampling rate 65,536 Hz: A super-wideband sampling rate (2^16 Hz) capturing frequencies up to ~32 kHz
More Information
Related References:
- IEEE (1969). Recommended practice for speech quality measurements. IEEE Transactions on Audio and Electroacoustics, 17(3), 227-246.
- ITU-T P.56 (2011). Objective measurement of active speech level. International Telecommunication Union.
- Brookes, M. (2014-2016). VOICEBOX: Speech Processing Toolbox for MATLAB. http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html
- BS EN ISO 8253-1:2010. Acoustics: Audiometric test methods. Part 1: Basic pure tone air and bone conduction threshold audiometry.
Contact Information:
Acoustics Research Unit, School of Architecture, University of Liverpool, Abercromby Square, Liverpool L69 7ZN, United Kingdom
Dataset Card Authors
Chris Weaver, Logitech Inc.
Dataset Card Contact
For questions about this dataset card or the Hugging Face repository, contact cweaver@logitech.com
For questions about the original dataset, contact the Acoustics Research Unit at the University of Liverpool or email: carl.hopkins@liv.ac.uk