General information

| Label | Value | Description |
|---|---|---|
| Name | AudioCaps: Generating Captions for Audios in The Wild | Full dataset name |
| ID | captions/audiocaps | Datalist ID for external indexing (see the sketch after this table) |
| Abbreviation | AudioCaps | Official dataset abbreviation, e.g. the one used in the original paper |
| Provider | SNU | |
| Year | 2019 | Dataset release year |
| Modalities | Audio, Video | Data modalities included in the dataset |
| Collection name | AudioCaps | Common name for all related datasets, used to group datasets coming from the same source |
| Research domain | Captioning, Tagging, Multi-annotator | Related domains, e.g., Scenes, Mobile devices, Audio-visual, Open set, Ambient noise, Unlabelled, Multiple sensors, SED, SELD, Tagging, FL, Strong annotation, Weak annotation, Multi-annotator |
| Related datasets | | |
| Download | Not available | |
| Citation | [Kim2019] AudioCaps: Generating Captions for Audios in The Wild | |
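
Since the datalist ID above is intended for external indexing, the following is a minimal sketch of how this entry could be represented programmatically. The `DatasetEntry` class and its field names are illustrative assumptions, not part of any official datalist API; only the values are taken from the table above.

```python
from dataclasses import dataclass, field


@dataclass
class DatasetEntry:
    """Illustrative record mirroring the "General information" table (assumed structure)."""
    datalist_id: str
    name: str
    abbreviation: str
    provider: str
    year: int
    modalities: list = field(default_factory=list)
    collection: str = ""
    research_domains: list = field(default_factory=list)


# Values copied from the table above.
AUDIOCAPS = DatasetEntry(
    datalist_id="captions/audiocaps",
    name="AudioCaps: Generating Captions for Audios in The Wild",
    abbreviation="AudioCaps",
    provider="SNU",
    year=2019,
    modalities=["Audio", "Video"],
    collection="AudioCaps",
    research_domains=["Captioning", "Tagging", "Multi-annotator"],
)
```
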

Audio

| Label | Value | Description |
|---|---|---|
| Data | | |
| Data type | Audio | Possible values: Audio \| Features |
| File format | | |
| Channels | | |
| Setup | Mono | Possible values: Mono \| Stereo \| Binaural \| Ambisonic \| Array \| Multi-Channel \| Variable |
| Number of channels | 1 | |
| Material | | |
| Source | AudioSet | Possible values: Original \| Youtube \| Freesound \| Online \| Crowdsourced \| [Dataset name] |
| Content | | |
| Content type | Freefield | Possible values: Freefield \| Synthetic \| Isolated |
| Recording | | |
| Setup | Unknown | Possible values: Near-field \| Far-field \| Mixed \| Uncontrolled \| Unknown |
| Spot type | Unknown | Possible values: Fixed \| Moving \| Unknown |
| Files | | |
| Count | 51308 files | Total number of files |
| Total duration (minutes) | 8551.3 min | Total duration of the dataset in minutes (see the sanity check after this table) |
| File length | Constant | Characterization of the file lengths, possible values: Constant \| Quasi-constant \| Variable |
| File length (seconds) | 10 sec | Approximate length of files |
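
The total duration above follows directly from the file count and the constant 10-second clip length; a quick sanity check:

```python
# Sanity check: total duration should follow from file count x clip length.
num_files = 51_308      # "Count" row above
clip_length_s = 10      # "File length (seconds)" row above

total_minutes = num_files * clip_length_s / 60
print(f"{total_minutes:.1f} min (~{total_minutes / 60:.1f} h)")
# -> 8551.3 min (~142.5 h)
```
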

Meta

| Label | Value | Description |
|---|---|---|
| Types | Caption | List of metadata types provided for the data, possible values: Event, Tag, Scene, Caption, Geolocation, Spatial location, Annotator, Timestamp, Presence, Proximity, etc. |
| Scene | | |
| Event | | |
| Caption | | |
| Annotation | | |
| Languages | English | Languages used for annotation |
| Source | Crowdsourced | Possible values: Experts \| Crowdsourced \| Synthetic \| Metadata \| Automatic |
| Captions per item | 1-5 | Number of annotations available per item (possible multi-annotator setup); see the sketch after this table |
| Validated amount (%) | 100 % | Percentage of the data that has been validated by humans |
| Guidance | Word hints | Type of guidance given to annotators during annotation, e.g. Video, Image, Tags |
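
A minimal sketch of inspecting the caption annotations, including the 1-5 captions per item noted above. It assumes the caption split files are available as train.csv, val.csv and test.csv with columns audiocap_id, youtube_id, start_time and caption; verify the file names and columns against the release you downloaded.

```python
import pandas as pd

# Assumed file names and column layout of the AudioCaps caption CSVs.
splits = {name: pd.read_csv(f"{name}.csv") for name in ("train", "val", "test")}

for name, df in splits.items():
    # Group by source clip (youtube_id used as a proxy for the audio item).
    captions_per_clip = df.groupby("youtube_id")["caption"].count()
    print(
        f"{name}: {captions_per_clip.size} clips, {len(df)} captions, "
        f"{captions_per_clip.min()}-{captions_per_clip.max()} captions per clip"
    )
```
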

Cross-validation setup

| Label | Value | Description |
|---|---|---|
| Provided | Yes | |
| Folds | 1 | |
| Sets | Train, Val, Test | Set types provided in the split, possible values: Train \| Test \| Val \| Dev \| Eval (see the check after this table) |
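
Since only a single fold with Train/Val/Test sets is provided, one sanity check worth running is that the sets do not share source clips. The sketch below assumes the same CSV file names and columns as the annotation example above:

```python
from itertools import combinations

import pandas as pd

# Clip identifiers per set; file names and columns are assumptions, as above.
clips = {name: set(pd.read_csv(f"{name}.csv")["youtube_id"]) for name in ("train", "val", "test")}

for a, b in combinations(clips, 2):
    print(f"{a} / {b}: {len(clips[a] & clips[b])} shared clips")  # expect 0 for a clean split
```
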

Baseline

| Label | Value | Description |
|---|---|---|
| Download | Download | Link to baseline system source code | |||