Data for "Prediction of Search Targets From Fixations in Open-World Settings"

We designed a human study to collect fixation data during visual search. We opted for a task that involved searching for a single image (the target) within a synthesised collage of images (the search set). Each collage is a random permutation of a finite set of images.
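The collage synthesis described above can be sketched as follows. This is a hypothetical illustration, not the authors' actual code: the grid shape (6 x 13 for 78 images) and file names are assumptions for the example.

```python
import random

def make_collage_order(images, n_rows, n_cols, seed=None):
    """Arrange a finite image set into a collage grid as a random
    permutation: every image appears exactly once, in a shuffled position.
    (Sketch only; the dataset's actual layout procedure may differ.)"""
    if len(images) != n_rows * n_cols:
        raise ValueError("a collage must use every image exactly once")
    rng = random.Random(seed)  # seeded for reproducible collages
    order = list(images)
    rng.shuffle(order)
    # Split the shuffled sequence into grid rows.
    return [order[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]

# Example: 78 covers on an assumed 6 x 13 grid, as in the O'Reilly task.
covers = [f"cover_{i:02d}.png" for i in range(78)]
grid = make_collage_order(covers, n_rows=6, n_cols=13, seed=0)
```

Seeding the shuffle makes each collage reproducible, which is useful when the same permutation must be shown to multiple participants.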

To explore the impact of the similarity in appearance between target and search set on both fixation behaviour and automatic inference, we created three different search tasks covering a range of similarities. Prior work found colour to be a particularly important cue for guiding search to targets and target-similar objects. For the first task we therefore selected 78 coloured O'Reilly book covers to compose the collages. These covers show a woodcut of an animal at the top and the book title in a characteristic font underneath. Given that the overall cover appearance is very similar, this task allows us to analyse fixation behaviour when colour is the most discriminative feature.

For the second task we used a set of 84 book covers from Amazon. In contrast to the first task, the appearance of these covers is more diverse. This makes it possible to analyse fixation behaviour when participants could use both structure and colour information to find the target. Finally, for the third task we used a set of 78 mugshots from a public database of suspects. In contrast to the other tasks, we converted the mugshots to grey-scale so that they did not contain any colour information. This allows analysis of fixation behaviour when colour information is not available at all. We found faces particularly interesting given the relevance of searching for faces in many practical applications.
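A grey-scale conversion like the one applied to the mugshots can be sketched with standard luma weighting. The exact conversion used for the dataset is not documented here; the ITU-R BT.601 coefficients below are an assumption chosen for illustration.

```python
def to_grayscale(pixels):
    """Strip colour information from a sequence of (R, G, B) pixels by
    reducing each to a single luminance value, using the ITU-R BT.601
    luma weights. (Sketch only; the dataset's exact conversion may differ.)"""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in pixels]

# A pure white and a pure black pixel map to the extremes of the grey scale.
greys = to_grayscale([(255, 255, 255), (0, 0, 0), (255, 0, 0)])
```

After this step each pixel carries only intensity, so colour can no longer serve as a search cue.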

18 participants (9 male), aged 18-30.
Gaze data was recorded with a stationary Tobii TX300 eye tracker.

More information about the dataset can be found in the README file.

The data is only to be used for non-commercial scientific purposes.

Identifier
DOI https://doi.org/10.18419/darus-3226
Related Identifier IsCitedBy https://doi.org/10.1109/CVPR.2015.7298700
Metadata Access https://darus.uni-stuttgart.de/oai?verb=GetRecord&metadataPrefix=oai_datacite&identifier=doi:10.18419/darus-3226
Provenance
Creator Bulling, Andreas
Publisher DaRUS
Contributor Bulling, Andreas
Publication Year 2022
Funding Reference Cluster of Excellence on Multimodal Computing and Interaction (MMCI) at Saarland University; DFG EXC 284 - 39134088
Rights CC BY-NC-SA 4.0; info:eu-repo/semantics/openAccess; http://creativecommons.org/licenses/by-nc-sa/4.0
OpenAccess true
Contact Bulling, Andreas (Universität Stuttgart)
Representation
Resource Type Eye gaze fixations; Dataset
Format application/zip; text/plain
Size 162874710; 143; 87739990; 125243756; 9024
Version 1.0
Discipline Other