Visual search is a complex behavior influenced by many factors. To control for these factors, many studies use highly simplified stimuli. However, the statistics of such stimuli differ considerably from the statistics of the natural images that the human visual system has been tuned to through evolution and experience. Could this difference influence search behavior? If so, simplified stimuli may produce effects that are commonly attributed to cognitive processes such as selective attention.

The researchers used deep neural networks to investigate how optimizing a model for the statistics of one image distribution constrains its performance on a task that uses images from another distribution. Four deep neural network architectures were each trained on one of three source datasets (natural images, faces, or x-ray images) and then transferred to a visual search task with simplified stimuli. Models adapted in this way showed human-like performance limitations, whereas models trained only on the search task did not.
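A minimal PyTorch sketch of this pretrain-then-transfer setup, for illustration only: ResNet-18 with ImageNet weights stands in for an architecture optimized on natural images (the paper's four architectures and training details differ), and the synthetic tensors below are a hypothetical stand-in for simplified search displays labeled target-present or target-absent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18, ResNet18_Weights

# A backbone optimized for one image distribution
# (here, ImageNet natural photos as a stand-in).
backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained features so any performance limits on the search task
# reflect the source-distribution optimization, not newly learned features.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the classification head with a binary target-present/absent readout
# for the search task; the new head's parameters are trainable by default.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Hypothetical simplified search stimuli: random images plus binary labels.
images = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train only the readout on the search task.
backbone.train()
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(backbone(x), y)
        loss.backward()
        optimizer.step()
```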

They also found that deep neural networks trained to classify natural images show comparable limitations when transferred to a search task that uses a different set of natural images; the effect therefore cannot be explained by a mismatch between data distributions alone. Finally, they discussed how future work might incorporate an optimization-based approach into existing models of visual search behavior.
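The summary above does not specify which human-like limitation was measured; a standard candidate in visual search is the set-size effect, where performance drops as the number of distractors grows. Below is a hedged sketch of how such a limitation could be quantified for a transferred model; make_display is hypothetical and is assumed to generate a display tensor plus a target-present flag.

```python
import torch

@torch.no_grad()
def accuracy_by_set_size(model, make_display,
                         set_sizes=(1, 2, 4, 8, 16), n_trials=100):
    """Estimate search accuracy at each set size; a human-like set-size
    effect would show up as accuracy declining with more distractors."""
    model.eval()
    results = {}
    for k in set_sizes:
        correct = 0
        for _ in range(n_trials):
            # Hypothetical generator: returns (C, H, W) image and a bool label.
            image, target_present = make_display(num_distractors=k)
            pred = model(image.unsqueeze(0)).argmax(dim=1).item()
            correct += int(pred == int(target_present))
        results[k] = correct / n_trials
    return results
```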

Reference: jov.arvojournals.org/article.aspx?articleid=2778890
