On two-dimensional displays, central and peripheral vision during visual tasks have been widely investigated, revealing perceptual and functional differences. In this study, the researchers sought to recreate on-screen gaze-contingent studies in virtual reality by masking either the central or the peripheral visual field, and to identify visuomotor biases in the exploration of 360-degree scenes with a large field of view. The findings are applicable to vision modeling and gaze-position prediction (e.g., content compression and streaming). They wanted to know how past on-screen findings transfer to settings where observers can explore stimuli with their heads. Using a gaze-contingent paradigm to simulate vision loss in virtual reality, they let participants freely observe omnidirectional natural scenes. The procedure enabled the modeling of vision loss with a wide field of view (>80°) and the investigation of the head's contribution to visual attention. In contrast to previous research on instruction-driven visual tasks, the time course of visuomotor variables in their purely free-viewing task showed extended fixations and brief saccades during the initial seconds of exploration. They showed that the effect of vision loss is predominantly reflected in eye movements, consistent with research on two-dimensional displays.
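The summary does not give the masking procedure at code level, but the core idea of a gaze-contingent mask can be sketched as a per-frame visibility test around the current gaze point. The function name, the pixel-radius parameter, and the two mode labels below are hypothetical illustrations, not the authors' implementation:

```python
import numpy as np

def gaze_contingent_mask(height, width, gaze_xy, radius_px, mode="central"):
    """Boolean visibility mask for one frame: True where the image stays visible.

    mode="central"    hides a disc around the gaze point (simulated central loss);
    mode="peripheral" hides everything outside that disc (simulated tunnel vision).
    gaze_xy is (x, y) in pixel coordinates; radius_px is a hypothetical mask size.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])  # distance of each pixel to gaze
    inside = dist <= radius_px
    return ~inside if mode == "central" else inside

# Example: 90x120 frame, gaze at (60, 45), 20 px radius, peripheral loss
visible = gaze_contingent_mask(90, 120, (60, 45), 20, mode="peripheral")
```

In a real headset pipeline this test would run every frame against the latest eye-tracker sample, so the mask follows the gaze; here it only shows the geometry of the central-vs-peripheral conditions.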

They anticipated that head movements would mostly be used to explore the scenes during free viewing and that the addition of masks would have little effect on head-scanning behavior. They reported new fixation and saccade visuomotor tendencies in a 360° environment, with the aim that these will aid the development of gaze-prediction models for virtual reality.
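Fixation and saccade metrics like those summarized above are commonly derived from raw gaze samples with a velocity-threshold classifier (I-VT). The sketch below is a minimal, generic version of that idea, not the authors' analysis; the threshold value and helper names are assumptions for illustration:

```python
import numpy as np

def classify_ivt(gaze_deg, sample_rate_hz, vel_thresh_deg_s=100.0):
    """Label each gaze sample as saccade (True) or fixation (False).

    gaze_deg: sequence of (x, y) gaze positions in degrees of visual angle.
    A sample is a saccade when its instantaneous speed exceeds the
    (hypothetical) velocity threshold, here 100 deg/s.
    """
    gaze = np.asarray(gaze_deg, dtype=float)                     # shape (n, 2)
    speed = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * sample_rate_hz
    return np.concatenate([[False], speed > vel_thresh_deg_s])   # first sample: fixation

def fixation_durations(is_saccade, sample_rate_hz):
    """Durations (seconds) of consecutive fixation runs between saccades."""
    durations, run = [], 0
    for saccadic in is_saccade:
        if not saccadic:
            run += 1
        elif run:
            durations.append(run / sample_rate_hz)
            run = 0
    if run:
        durations.append(run / sample_rate_hz)
    return durations
```

With labels and durations in hand, the time-course effects described above (longer fixations, shorter saccades early in exploration) amount to aggregating these per-event values in time bins from stimulus onset.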