The following is a summary of “Gaze dynamics are sensitive to target orienting for working memory encoding in virtual reality,” published in the January 2022 issue of Ophthalmology by Peacock et al.

Researchers investigated whether gaze dynamics bear a consistent relationship to visuospatial attention during working memory encoding in naturalistic environments. The study used eye tracking to record participants’ eye movements as they searched for and encoded objects in a virtual apartment (Experiment 1) and a cluttered virtual kitchen (Experiment 2).

They decomposed gaze into 61 features capturing gaze dynamics and used a sliding-window logistic regression model to predict when participants found target objects for working memory encoding. The model was trained on group data and successfully predicted when people were oriented to a target for encoding in both the trained task (Experiment 1) and a novel task (Experiment 2).
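To make the modeling approach concrete, the sketch below illustrates the general idea of sliding-window classification of gaze features. It is a minimal, hypothetical toy example, not the authors’ pipeline: the feature values, window size, step, labels, and the choice of 6 synthetic features (rather than the paper’s 61) are all illustrative assumptions.

```python
# Hypothetical sketch: sliding-window logistic regression over a gaze
# feature time series. All parameters and data here are invented for
# illustration only.
import math
import random

random.seed(0)

T, F = 200, 6  # time samples, synthetic gaze features
# Synthetic features: baseline noise, with a mean shift after the
# (pretend) moment the target is found at t = T // 2.
gaze = [[random.gauss(1.0 if t >= T // 2 else 0.0, 1.0) for _ in range(F)]
        for t in range(T)]

def window_mean(series, start, width):
    """Average each feature over one sliding window."""
    cols = zip(*series[start:start + width])
    return [sum(c) / width for c in cols]

width, step = 20, 10
starts = range(0, T - width + 1, step)
X = [window_mean(gaze, s, width) for s in starts]
# Label a window positive if its center falls after the target is found.
y = [1.0 if s + width // 2 >= T // 2 else 0.0 for s in starts]

# Logistic regression fit by plain stochastic gradient descent.
w, b = [0.0] * F, 0.0
for _ in range(2000):
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        err = 1.0 / (1.0 + math.exp(-z)) - yi
        w = [wj - 0.1 * err * xj for wj, xj in zip(w, xi)]
        b -= 0.1 * err

# Predict "target found for encoding" per window and score accuracy.
pred = []
for xi in X:
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    pred.append(1.0 / (1.0 + math.exp(-z)) > 0.5)
accuracy = sum(p == (yi > 0.5) for p, yi in zip(pred, y)) / len(y)
```

Each window summarizes the feature time series over a short span, so the classifier outputs a moment-by-moment estimate of whether the participant is oriented to a target, which is the spirit of the paper’s approach.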

Six features were identified as predictive of target orienting for encoding, including decreased distances between subsequent fixation/saccade events, increased fixation probabilities, and slower saccade decelerations before encoding. In addition, the findings suggested that as people orient toward a target to encode new information, they reduce task-irrelevant, exploratory sampling behaviors.

Overall, the research demonstrated that gaze dynamics can capture target orienting for working memory encoding, with implications for real-world applications in technology and in special populations.