DEyeAdicContact

We created our own dataset of natural dyadic interactions with fine-grained eye contact annotations using videos of dyadic interviews published on YouTube. Compared to lab-based recordings, these YouTube interviews allow us to analyse behaviour in a natural setting. All interviews were conducted via video conferencing and provide frontal views of interviewer and interviewee side by side. Specifically, we downloaded videos from the YouTube channels “Wisdom From North” and “The Spa Dr.”, both of which provide a large number of interviews, often in high video quality. Each channel features a single host interviewing a different guest in each session. We manually selected videos with high video quality, resulting in 60 videos for “The Spa Dr.” and 61 videos for “Wisdom From North”. All videos are recorded at a frame rate between 24 and 30 fps and vary in length from 17 to 58 minutes (average: 37 minutes). In total, the videos contain 74 hours of conversation, amounting to 7,817,821 video frames.
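As a quick consistency check of these aggregate figures, the sketch below derives the average video length and the implied average frame rate from the numbers reported above (only the totals stated in the text are used; per-video durations and frame rates are not listed here):

    # Sanity check of the aggregate statistics reported above.
    total_videos = 60 + 61          # The Spa Dr. + Wisdom From North
    total_hours = 74                # reported total conversation time
    total_frames = 7_817_821        # reported total number of video frames

    avg_minutes = total_hours * 60 / total_videos
    implied_fps = total_frames / (total_hours * 3600)

    print(f"average video length: {avg_minutes:.1f} min")       # ~36.7 min, matching the reported 37 min
    print(f"implied average frame rate: {implied_fps:.1f} fps")  # ~29.3 fps, within the 24-30 fps range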

We instructed five human annotators to classify the gaze of interviewer and interviewee (in the following referred to as “subjects”). Even though in this study we were only interested in a binary classification of averted gaze versus eye contact, a more fine-grained distinction of averted gaze might prove beneficial for future research. To this end, we used a total of 11 mutually exclusive classes during annotation. Annotators were asked to select the class “eye contact” if the subject was looking at the location of the other person on her screen or at the camera from which she was recorded. We found that annotators were able to reliably determine the placement of camera and screen by skimming through the video prior to starting the annotation. If there was no eye contact, annotators classified whether the subject gazed “up”, “down”, “left”, “right”, or to the “upper left”, “lower left”, “upper right”, or “lower right”. In the following, we refer to the union of these classes as the “no eye contact” class. A separate class was dedicated to blinks, while yet another class indicated instances in which annotators were unsure how to decide, e.g. as a result of low image quality. As annotators worked on disjoint sets of videos, one of the authors was present throughout the first sessions to ensure consistency. To strike a good balance between sufficient coverage and annotation effort, we collected these annotations on a frame-by-frame basis every 30 seconds for the Wisdom From North interviews and every 15 seconds for The Spa Dr. interviews. We collected annotations for The Spa Dr. on a finer timescale because the host of that channel almost always keeps eye contact with her interviewees; a coarser timescale would have increased the risk of missing the no eye contact classes in the annotation. In total, we collected 23,131 annotated video frames, of which 83% were labelled as “eye contact”.
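For illustration, a minimal sketch of how the 11 fine-grained annotation classes could be collapsed into the binary eye contact versus no eye contact labels used in this study. The class names and the tab-separated (frame, label) layout below are assumptions for readability; the actual files in this dataset may use different identifiers and columns.

    import csv

    # Hypothetical class names; the dataset files may use different identifiers.
    EYE_CONTACT = "eye contact"
    AVERTED = {"up", "down", "left", "right",
               "upper left", "lower left", "upper right", "lower right"}

    def to_binary(label: str):
        """Map a fine-grained gaze label to the binary task (None = excluded)."""
        if label == EYE_CONTACT:
            return 1
        if label in AVERTED:
            return 0
        return None  # blink and unsure frames are not part of the binary classification

    def load_annotations(path: str):
        """Read a tab-separated annotation file with (frame, label) rows -- assumed layout."""
        with open(path, newline="") as f:
            for frame, label in csv.reader(f, delimiter="\t"):
                binary = to_binary(label)
                if binary is not None:
                    yield int(frame), binary

Note that with roughly 83% of the annotated frames labelled as eye contact, the binary task is strongly imbalanced, so class weighting or balanced sampling may be advisable when training on these labels.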

OpenFace 2.0

The data is only to be used for non-commercial scientific purposes.

Identifier
DOI https://doi.org/10.18419/darus-3289
Related Identifier IsCitedBy https://doi.org/10.1145/3379155.3391332
Metadata Access https://darus.uni-stuttgart.de/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=doi:10.18419/darus-3289
Provenance
Creator Bulling, Andreas
Publisher DaRUS
Contributor Bulling, Andreas
Publication Year 2022
Funding Reference European Research Council ERC 801708 ; JST CREST research grant, Japan JPMJCR14E1 ; DFG EXC 2075 - 390740016
Rights CC BY-NC-SA 4.0; info:eu-repo/semantics/openAccess; http://creativecommons.org/licenses/by-nc-sa/4.0
OpenAccess true
Contact Bulling, Andreas (Universität Stuttgart)
Representation
Resource Type Dataset
Format text/tab-separated-values; text/plain
Size 2340; 1417; 3066; 1015; 1091; 974; 1267; 916; 996; 2516; 2476; 886; 726; 2165; 1597; 1868; 3317; 1672; 2111; 915; 1728; 3440; 2163; 1827; 1085; 1248; 3860; 2135; 1217; 1117; 3019; 134; 1691; 1789; 1441; 270; 1182; 2368; 1003; 2609; 1855; 1046; 1353; 2509; 1580; 1181; 647; 2282; 3084; 1002; 940; 1068; 608; 1384; 825; 1866; 686; 1530; 1903; 1104; 2078; 1986; 991; 831; 1664; 1713; 2182; 1861; 1077; 2285; 2463; 1685; 1048; 3551; 2500; 2116; 1776; 1560; 921; 454; 4701; 1428; 2745; 1765; 971; 2321; 2189; 2139; 4092; 1660; 3756; 1737; 1799; 2551; 2053; 1185; 2763; 4369; 1409; 1341; 499; 982; 1400; 802; 3259; 2210; 1287; 804; 1382; 962; 1088; 1064; 1352; 1723; 4232; 1820; 2923; 1427; 1535; 1227; 3219
Version 1.0
Discipline Other