General information

| Label | Value | Description |
| --- | --- | --- |
| Name | ETH Acoustic Event Dataset | Full dataset name |
| ID | sounds/eth_aed | Datalist id for external indexing |
| Abbreviation | ETH-AED | Official dataset abbreviation, e.g. the one used in the original paper |
| Provider | ETH | |
| Year | 2016 | Dataset release year |
| Modalities | Audio | Data modalities included in the dataset |
| Collection name | ETH | Common name for all related datasets, used to group datasets coming from the same source |
| Research domain | Tagging, Weak annotation | Related domains, e.g., Scenes, Mobile devices, Audio-visual, Open set, Ambient noise, Unlabelled, Multiple sensors, SED, SELD, Tagging, FL, Strong annotation, Weak annotation, Multi-annotator |
| License | Creative Commons | |
| Download | Download (1.2 GB) | |
| Companion site | Site | Link to the companion site for the dataset |
| Citation | [Takahashi2016] Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Recognition | |
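The entry above can be kept as a small machine-readable record for indexing or filtering. The snippet below is a sketch only: the field names are hypothetical and do not follow any official datalist schema; the values are copied from the table.

```python
# Illustrative only: field names are hypothetical, values come from the
# General information table above.
dataset_info = {
    "name": "ETH Acoustic Event Dataset",
    "id": "sounds/eth_aed",
    "abbreviation": "ETH-AED",
    "provider": "ETH",
    "year": 2016,
    "modalities": ["Audio"],
    "collection": "ETH",
    "research_domains": ["Tagging", "Weak annotation"],
    "license": "Creative Commons",
    "download_size_gb": 1.2,
    "citation": "[Takahashi2016] Deep Convolutional Neural Networks and "
                "Data Augmentation for Acoustic Event Recognition",
}
```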
Audio

| Label | Value | Description |
| --- | --- | --- |
| Data | | |
| Data type | Audio | Possible values: Audio, Features |
| File format | | |
| File format type | Constant | Possible values: Constant, Variable |
| File format | wav | Possible values: wav, aiff, flac, mp3, aac, ogg |
| Lossy compression | No | Is the audio compressed in a lossy manner |
| Bit depth | 16 | Bit depth of audio, possible values: 8, 16, 24, 32 |
| Sampling rate (kHz) | 16 | Sampling rate in kHz, possible values: 8, 16, 22.05, 32, 44.1, 48 |
| Channels | | |
| Material | | |
| Source | Freesound | Possible values: Original, Youtube, Freesound, Online, Crowdsourced, [Dataset name] |
| Content | | |
| Recording | | |
| Setup | Unknown | Possible values: Near-field, Far-field, Mixed, Uncontrolled, Unknown |
| Files | | |
| Count | 5223 | Total number of files |
| Total duration (minutes) | 768.4 | Total duration of the dataset in minutes |
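Since the audio format is fully specified (uncompressed 16-bit wav at 16 kHz, 5223 files, 768.4 minutes in total), a downloaded copy can be sanity-checked with the standard library alone. The sketch below assumes an extraction root of `eth_aed/audio`, which is not a documented layout.

```python
import wave
from pathlib import Path

# Sketch only: the extraction root is an assumption, not a documented layout.
# Verifies the format stated above: 16-bit wav, 16 kHz, 5223 files, ~768.4 min.
AUDIO_ROOT = Path("eth_aed/audio")

def summarise(root: Path):
    n_files = 0
    total_seconds = 0.0
    for wav_path in sorted(root.rglob("*.wav")):
        with wave.open(str(wav_path), "rb") as wf:
            assert wf.getsampwidth() == 2       # 16-bit samples
            assert wf.getframerate() == 16000   # 16 kHz sampling rate
            total_seconds += wf.getnframes() / wf.getframerate()
        n_files += 1
    return n_files, total_seconds / 60.0

if __name__ == "__main__":
    count, minutes = summarise(AUDIO_ROOT)
    print(f"{count} files, {minutes:.1f} min")  # expected: 5223 files, ~768.4 min
```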
Meta

| Label | Value | Description |
| --- | --- | --- |
| Types | Tag | List of meta data types provided for the data, possible values: Event, Tag, Scene, Caption, Geolocation, Spatial location, Annotator, Timestamp, Presence, Proximity, etc. |
| Scene | | |
| Event | | |
| Classes | 28 | Number of event classes |
| Classes | False | Possible values: True, False, Almost |
| Classes | acoustic guitar, airplane, applause, bird, car, cat, child, church bell, crowd, dog barking, engine, fireworks, footsteps, glass breaking, hammer, helicopter, knock, laughter, mouse click, ocean surf, rustle, scream, speech, squeak, tone, violin, water tap, whistle | |
| Annotation | | |
| Type | Weak | Possible values: Strong, Weak, Location, None |
| Source | Experts | Possible values: Experts, Crowdsourced, Synthetic, Metadata, Automatic |
| Annotations per item | 1 | How many annotations are available per item (possible multi-annotator setup) |
| Labelled amount (%) | 100 | Percentage of all data which is labelled |
| Strong annotations amount (%) | 0 | Percentage of all data which has strong annotations |
| Overlapping event instances | No | |
| Labeling | | |
| Hierarchical | No | |
| Instance | | |
| Count | 5223 | Count of all event instances in the dataset |
| Average instances per class | 186.5 | Average per-class instance count |
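With weak (clip-level) tags, a single annotation per item, and no overlapping events, each clip maps to a multi-hot target over the 28 classes. The sketch below copies the class list from the table; the encoding helper itself is illustrative, not part of the dataset tooling. It also reproduces the reported average of 186.5 instances per class (5223 / 28).

```python
# Class list copied from the Meta table above; the tag-encoding helper is
# an illustrative assumption, not dataset tooling.
CLASSES = [
    "acoustic guitar", "airplane", "applause", "bird", "car", "cat", "child",
    "church bell", "crowd", "dog barking", "engine", "fireworks", "footsteps",
    "glass breaking", "hammer", "helicopter", "knock", "laughter", "mouse click",
    "ocean surf", "rustle", "scream", "speech", "squeak", "tone", "violin",
    "water tap", "whistle",
]
CLASS_TO_IDX = {name: i for i, name in enumerate(CLASSES)}

def encode_tags(tags):
    """Multi-hot encode clip-level tags (here effectively one tag per clip,
    since there is one annotation per item and no overlapping events)."""
    target = [0.0] * len(CLASSES)
    for tag in tags:
        target[CLASS_TO_IDX[tag]] = 1.0
    return target

assert len(CLASSES) == 28
# Average instances per class reported above: 5223 clips / 28 classes
print(round(5223 / 28, 1))  # 186.5
```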
Cross-validation setup

| Label | Value | Description |
| --- | --- | --- |
| Provided | Yes | |
| Folds | 1 | |
| Sets | Train, Test | Set types provided in the split, possible values: Train, Test, Val, Dev, Eval |
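The dataset ships a single train/test fold, so loading the split amounts to reading two file lists. The sketch below assumes split files named `train.csv` and `test.csv` with `filename,tag` rows; the actual file names and columns are assumptions, not documented here.

```python
import csv
from pathlib import Path

# Sketch only: split-file names and their two-column layout (filename, tag)
# are assumptions; there is a single fold, so no fold index to iterate over.
SPLIT_DIR = Path("eth_aed/evaluation_setup")

def load_split(name: str):
    items = []
    with open(SPLIT_DIR / f"{name}.csv", newline="") as f:
        for filename, tag in csv.reader(f):
            items.append((filename, tag))
    return items

train_items = load_split("train")
test_items = load_split("test")
print(len(train_items), len(test_items))
```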
Baseline

| Label | Value | Description |
| --- | --- | --- |
| Download | Download | Link to baseline system source code |
| Citation | [Takahashi2016] | Paper to cite for the baseline |
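For orientation, the sketch below shows a generic tagging model of the same family as the cited baseline (a small CNN over log-mel spectrogram patches with 28 outputs). It is not the [Takahashi2016] system; the input shape and layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Minimal illustrative classifier, NOT the [Takahashi2016] baseline: a small
# CNN over log-mel patches producing clip-level logits for the 28 tags.
# Input shape (1 channel, 64 mel bands, 128 frames) is an arbitrary assumption.
class SmallTagger(nn.Module):
    def __init__(self, n_classes: int = 28):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):             # x: (batch, 1, mels, frames)
        h = self.features(x).flatten(1)
        return self.classifier(h)     # (batch, n_classes) logits

logits = SmallTagger()(torch.randn(4, 1, 64, 128))
print(logits.shape)  # torch.Size([4, 28])
```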