Comparisons of Target Localization Abilities during Physical and Virtual Rotating Scenes by Cognitively-Intact and Cognitively Impaired Older Adults
Omid Ranjbar Pouya 1, *, Ahmad Byagowi 2, Debbie M. Kelly 3, Zahra Moussavi 1
1. Graduate Program in Biomedical Engineering, University of Manitoba, 75 Chancellor's Circle, Winnipeg, Canada
2. Electrical and Computer Engineering in Faculty of Engineering, University of Manitoba, 75 Chancellor's Circle, Winnipeg, Canada
3. Department of Psychology, University of Manitoba, 190 Dysart Rd, Winnipeg, Canada
* Correspondence: Omid Ranjbar Pouya
Academic Editor: Michael Fossel
Received: August 20, 2018 | Accepted: June 21, 2019 | Published: June 27, 2019
OBM Geriatrics 2019, Volume 3, Issue 2, doi:10.21926/obm.geriatr.1902059
Recommended citation: Pouya OR, Byagowi A, Kelly DM, Moussavi Z. Comparisons of Target Localization Abilities during Physical and Virtual Rotating Scenes by Cognitively-Intact and Cognitively Impaired Older Adults. OBM Geriatrics 2019; 3(2): 059; doi:10.21926/obm.geriatr.1902059.
© 2019 by the authors. This is an open access article distributed under the conditions of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is correctly cited.
Abstract
Background: Previous studies have reported that coordinate information (i.e. the distance between any two objects in a specific direction) is encoded differently from Virtual Reality (VR) and physical scenes. However, the accuracy of encoding categorical information (i.e. the relative positions of objects) from VR scenes has not been adequately investigated. In this study, we used a novel rotating visual scene to study the effects of aging, prior experience with VR, and dementia on the accuracy of encoding categorical information from physical and virtual environments.
Methods: We recruited a cohort of 60 cognitively-healthy older adults, with and without previous VR experience (Experiment 1), as well as 18 older adults with mild to moderate Alzheimer's disease (AD) (Experiment 2). In both experiments, participants were asked to attend to a target window in either a virtual building or a physical small-scale model building (depending on group assignment) as the building rotated around its vertical axis in the depth of the scene. After the full rotation had stopped, participants were required to verbally judge the final position of the target in terms of direction (e.g., left, right, back, or front) with respect to the entrance of the building. A score was calculated for each participant based on his or her accuracy in locating the target window.
Results: Healthy older adults succeeded in accurately localizing the target's position from both environments, whereas individuals with AD were only able to encode the target’s position from the physical environment.
Conclusions: Our results suggest that the inability to encode spatial information from a rotating VR scene may be a symptom of dementia.
Keywords
Dementia; aging; virtual reality; categorical spatial encoding; rotating visual scene
1. Introduction
Successful navigation in virtual reality (VR) environments primarily relies on having visuospatial abilities, particularly spatial encoding. To develop a functional spatial representation of a VR environment, the observer must encode objects, their locations and the spatial relations among these objects. These spatial relationships can be encoded either by categorical information (i.e. relative positions of objects such as left/right or front/behind) or coordinate information (i.e. distance between any two objects) [1,2]. Several studies have shown distinct neuronal networks are engaged with these two types of spatial encoding [3,4]. Distances based on the coordinate system have been widely reported to be underestimated when encoded from a VR scene compared to an identical physical replica [5,6]. However, the accuracy of encoding categorical relationships from VR scenes has not been investigated adequately; this is the focus of our study.
The ability to encode spatial relationships is usually investigated by either providing a survey (map) view of an environment to participants or allowing them to have a direct navigation experience within an environment. These two encoding methods are shown to engage different neuronal substrates [7], involve different cognitive processes [8], and lead to different types of spatial knowledge [9]. To compare the encoding of categorical information from a physical environment and its virtual replica, an aerial (survey) view of the environment is not traditionally used. This is because survey views typically do not provide sufficient differences in the visual properties of these two types of environments to provide meaningful comparisons. Therefore, previous studies have used direct navigation of the environments to compare participants’ spatial encoding abilities [10,11,12]. Generally, studies using this approach have only reported inferior spatial encoding in VR environments in terms of general acquired spatial knowledge. However, spatial categorical relationships have not specifically been examined.
Using a navigational paradigm to assess spatial encoding may introduce potential confounding factors. First, navigating within an environment usually involves higher-order cognitive processes in addition to spatial encoding, such as proficiency with the VR interface, which has been found to contribute to substantial individual differences in the ability to acquire spatial information from a VR environment [13,14]. In particular, when comparing VR and physical environments, differences in vestibular and proprioceptive feedback between the two environments have been identified as a confounding source of differences [15,16,17]. However, without navigation, providing a ground-level static view of a VR environment may not always be sufficient for examining the encoding of spatial relationships among objects, as some objects or structural entities may occlude others. Therefore, in this study, we used a novel method to examine the accuracy of encoding spatial categorical relationships, which allowed us to address these remaining issues. Specifically, we developed a paradigm permitting us to examine whether participants encode spatial categorical relationships among objects in a similar manner when encoding them from a rotating physical building (model sized) or a VR replica of the building.
Rotating visual scenes or objects have been widely used for investigations of object-recognition [18] and visual change-detection [19,20]. However, their application for studies of spatial encoding has not received much attention [21,22]. The rotating scene, which changes continuously in a regular and predictable manner, presents the observer with a different view each moment in time. The visual system has been suggested to integrate these different views into a coherent 3D mental representation (i.e. scene integration) [21,23].
For our proposed paradigm, participants were asked to attend to a target window in a virtual building (a reference object) as the building rotated around its vertical axis in the depth of the scene (passive visual exposure). This visual target was initially not visible but entered the observer's visual field and subsequently disappeared as the building continued to rotate. The participant was required to judge the final position of the target after the 360° rotation had stopped. The position of the target could only be encoded in terms of direction (e.g., left, right, back, or front) with respect to the entrance of the building (e.g., “the target was on the left side of the building”). Therefore, the paradigm we designed may provide a complementary and novel approach for investigating spatial categorical encoding from virtual and physical rotating scenes. Furthermore, this approach allowed us to examine the possible effects of participant age and prior VR experience, as well as the possible differences between healthy aging individuals and those with dementia, on spatial encoding abilities.
Aging is recognized as one of the important factors influencing spatial abilities [24]. However, many previous studies comparing virtual and real-world paradigms have not evaluated the possible effects of aging. We have found only four studies to date [25,26,27,28] that have directly compared age-related differences in navigational processing between physical and virtual environments. Nevertheless, these studies focused on navigational abilities without including a direct comparison of age-related effects on spatial encoding in VR and physical environments. Interestingly, aging has been shown to have a more detrimental effect on spatial encoding during a VR navigation task than during a VR map-reading (aerial view) task [29]. Furthermore, older adults also show slower processing speeds [30] and reduced activation in related brain areas when making categorical relational judgments on a 2D screen [31]. To the best of our knowledge, the effect of normal aging on categorical spatial encoding from 3D rotating scenes has not yet been investigated. Following our previous study on younger adults [32], one of the main goals of the current study was to investigate spatial encoding from VR and physical rotating scenes by healthy older adults and those with cognitive decline.
During our previous study, younger adults (24-36 years) showed a similar ability to acquire categorical spatial knowledge when presented in the form of either a physical or a virtual building, and they were able to transfer information when required to navigate through the VR building [32]. However, compared to older adults, younger adults usually have much more experience with VR through media, such as gaming, prior to their participation in VR studies. Considering the reported positive effect of having VR-game experience on learning from VR environments by both younger [33,34,35] and older adults [14,36], we were interested in examining whether VR experience would influence older adults’ ability to categorically encode spatial information from our rotational scenes. Therefore, we included two groups of older adults, one with and one without previous VR experience.
The last factor investigated in this study was the influence of cognitive impairment on spatial encoding. Impairment in visuospatial abilities has been suggested as an early symptom of Alzheimer's disease (AD) [37]. In particular, categorical spatial memory function has been shown to discriminate between individuals with AD and those with Mild Cognitive Impairment (MCI) [38]. However, the effect of cognitive impairment on the ability to categorically encode spatial information from a 3D rotating scene (either VR or physical) has not yet been examined. Inspired by our previous study on individuals with AD (60–83 years), who were only able to accurately locate a target when viewing a physical, but not a VR, building [39], we compared the performance of a cohort of individuals with mild to moderate AD on our current task, with both the VR and physical buildings. The objective of this aspect of the study was to determine whether the ability to encode a target location from our rotating VR scene is reduced in normal age-related cognitive decline, or whether its impairment may be a symptom of diseased aging.
2. Materials and Methods
2.1 Design of Virtual and Physical Environment
To build the virtual environment, a three-story virtual building was designed using VC++ .NET and the OpenGL library (Figure 1a). On each wall of the building, and on each floor, there were three windows: a large central window and two identical smaller windows to its left and right. The building looked identical from every side, except for the front side, which had an entrance door located on the first floor. For each trial, one of the 16 left/right corner windows was randomly illuminated, and the building was rotated clockwise around its vertical axis, in the depth of the scene, through 360° (16 seconds for one full rotation).
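For illustration, the timing of the rotation can be expressed as a simple mapping from elapsed time to rotation angle. The following minimal sketch (not the code used in the study; the function and variable names are hypothetical, and an existing fixed-function OpenGL context is assumed) shows one way such a 16-second, 360° rotation about the vertical axis could be rendered:

```cpp
// Minimal sketch of the 360-degree, 16-second rotation described above.
// Assumes a legacy fixed-function OpenGL context; names are illustrative only.
#include <GL/gl.h>

const float FULL_ROTATION_DEG = 360.0f;
const float ROTATION_PERIOD_S = 16.0f;   // one full rotation, as in the paper

// elapsedSeconds: time since the rotation of the current trial started
void drawRotatingBuilding(float elapsedSeconds)
{
    float angle = FULL_ROTATION_DEG * (elapsedSeconds / ROTATION_PERIOD_S);
    if (angle > FULL_ROTATION_DEG) angle = FULL_ROTATION_DEG;  // stop after one turn

    glPushMatrix();
    glRotatef(angle, 0.0f, 1.0f, 0.0f);  // rotate the scene about the vertical (y) axis
    // ... draw the three-story building and the illuminated target window here ...
    glPopMatrix();
}
```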
The physical environment was designed to replicate the virtual environment at a 10-fold reduction in scale (Figure 1b). There were 24 rooms in the building, of which 16 could be selected randomly as target rooms. Each of these rooms had a designated LED light in its window that was turned on if the room was selected as the target. The 16 LEDs were connected in a charlieplexing configuration, so only 5 microcontroller pins were needed to control them. The building was rotated by a 12-volt, battery-powered, permanent-magnet DC motor through a spiral gear. To control the rotation and keep the angular velocity of the building the same for each trial, a touch-less magnetic encoder (AS5134, austriamicrosystems) was employed. The magnetic encoder provided the absolute position as well as the angular velocity.
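The charlieplexing arrangement exploits the tri-state (high-impedance) capability of microcontroller pins: with n pins, up to n(n-1) LEDs can be addressed individually, so 5 pins comfortably cover the 16 target LEDs. The following Arduino-style sketch illustrates the idea only; it is not the authors' firmware, and the pin numbers and anode/cathode assignments are hypothetical.

```cpp
// Illustrative charlieplexing sketch (not the study firmware): 5 tri-state pins
// can address up to 5*(5-1) = 20 LEDs, of which 16 are used for the target windows.
#include <Arduino.h>

const uint8_t PLEX_PINS[5] = {2, 3, 4, 5, 6};   // hypothetical pin numbers

// Each LED is identified by which pin drives its anode and which its cathode.
struct LedPair { uint8_t anode; uint8_t cathode; };
const LedPair LED_MAP[16] = {
    {0,1},{1,0},{0,2},{2,0},{0,3},{3,0},{0,4},{4,0},
    {1,2},{2,1},{1,3},{3,1},{1,4},{4,1},{2,3},{3,2}
};

void allPinsHighZ() {
    for (uint8_t i = 0; i < 5; ++i) pinMode(PLEX_PINS[i], INPUT);  // tri-state (off)
}

// Light exactly one of the 16 target LEDs (index 0..15).
void lightTargetLed(uint8_t ledIndex) {
    allPinsHighZ();
    uint8_t a = PLEX_PINS[LED_MAP[ledIndex].anode];
    uint8_t c = PLEX_PINS[LED_MAP[ledIndex].cathode];
    pinMode(a, OUTPUT); digitalWrite(a, HIGH);   // source current into the LED
    pinMode(c, OUTPUT); digitalWrite(c, LOW);    // sink current from the LED
}
```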
Figure 1 (a) Outdoor view of the virtual building. (b) Outdoor view of the physical building (scale 10:1). For both environments, the target window is shown illuminated with a green light. The target window is located on the right-hand side of the second floor for illustrative purposes only (as the position was randomized during the study).
The physical building was controlled by Arduino-based firmware running on an ATMEGA328P microcontroller. First, a velocity control loop regulated the torque applied to the actuation motor; this loop was based on a proportional-differential (PD) controller. Cascaded with the velocity loop, a position controller executed a trapezoidal velocity profile (acceleration, constant speed, and deceleration), which ensured a smooth rotational motion of the building that mimicked the rotation in the VR. In addition, the building's controller received commands from the host computer running the VR engine; these were transmitted over a virtual serial port implemented on a Bluetooth radio link. Each command from the host computer specified the target room (window), the required motion, and the required velocity.
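The cascaded control scheme can be sketched as an outer trapezoidal velocity profile feeding an inner PD velocity loop. The code below is a hedged illustration of this structure only; the gains, timing constants, and helper functions (readEncoderVelocity, setMotorPwm) are hypothetical placeholders rather than the values or routines used in the actual ATMEGA328P firmware.

```cpp
// Hedged sketch of the cascaded rotation control described above (illustrative only).
float readEncoderVelocity();      // angular velocity from the magnetic encoder (stub)
void  setMotorPwm(float command); // drive signal for the DC motor (stub)

const float KP = 0.8f, KD = 0.05f;   // PD gains (placeholders)
const float DT = 0.01f;              // control period in seconds (placeholder)

// Trapezoidal velocity profile: the area under the profile equals angleTotal,
// so the cruise speed is angleTotal / (tTotal - tAccel).
float trapezoidalTarget(float t, float tAccel, float tTotal, float angleTotal) {
    float vMax = angleTotal / (tTotal - tAccel);
    if (t < tAccel)          return vMax * (t / tAccel);            // accelerate
    if (t < tTotal - tAccel) return vMax;                           // constant speed
    if (t < tTotal)          return vMax * ((tTotal - t) / tAccel); // decelerate
    return 0.0f;                                                    // rotation finished
}

// Called once every DT seconds while a trial's rotation is in progress.
void controlStep(float t) {
    static float prevError = 0.0f;
    float target   = trapezoidalTarget(t, 2.0f, 16.0f, 360.0f);  // deg/s for a 16 s turn
    float measured = readEncoderVelocity();
    float error    = target - measured;
    float command  = KP * error + KD * (error - prevError) / DT; // PD control law
    prevError = error;
    setMotorPwm(command);
}
```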
2.2 Experiment Design
Participants first performed a Target Mapping phase, during which they were required to encode the location of a target item. Each participant was seated on a chair facing the entrance of a three-story building, which was presented either in a VR medium (i.e. on a laptop screen; group Virtual) or as a scaled-down (1:10) physical replica of the VR version (group Physical; see Figure 1).
If participants made correct object-centered relational judgments during the Target Mapping phase, the acquired spatial knowledge was further assessed by evaluating how well the older adults could transfer this knowledge to VR navigation. During the Navigation phase, each participant was asked to virtually navigate through the building, using a VR navigation task previously designed and evaluated by our team [40], in search of the target window observed during the Target Mapping phase. Figure 2 shows an illustration of the experimental design.
Figure 2 Schematic model of the experimental design for Experiment 1. During the Target Mapping phase, the participants observed the location of the target window in either the virtual building or the physical model building. During the Navigation phase, all participants navigated through the virtual building in search of the target window.
2.3 Experiment Procedure
Prior to starting the experiments, participants' cognitive function was assessed using the Montreal Cognitive Assessment (MoCA) [40,41]. The MoCA is a brief measure of global cognitive function developed to detect cognitive impairment. It includes seven subtests examining cognitive components such as visuospatial ability, executive function, attention, language, abstraction, short-term memory, and awareness of present time and location, with a maximum total score of 30; a MoCA score below 26 is normally associated with some cognitive impairment.
During each trial, in either the virtual or the physical condition, one of the windows of the building was randomly chosen as the target window and indicated by its illumination. The target window (herein referred to as the target) was always one of the 16 left/right corner windows (never a central window) on either the 2nd or 3rd floor. To show the target to the participant, the building was virtually or physically rotated clockwise through 360° (16 seconds for one full rotation), allowing the participant to view the location of the target window from outside the building (Figure 1).
To determine whether participants learned the location of the target, after each rotation the participant was asked to verbally state the location of the target. The participant was assigned 2 points for correctly identifying the wall on which the target was located (herein referred to as the side), or a score of 0 if incorrect. If the participant correctly identified the side, they were then asked to identify the position of the target relative to the central window (left or right); a correct response earned an additional 1 point. Each participant completed six trials; the target was shown on each hidden side of the building (left, right, and back) twice, in a pseudo-random order. For each side, the points from the two trials were averaged, and the three per-side averages were summed. Thus, the total score of a participant could range from 0 to 9.
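To make the scoring rule concrete, the short sketch below (our reading of the rule above, not the authors' analysis code; the example outcomes are hypothetical) computes the 0-9 Target Mapping score for one participant:

```cpp
// Minimal sketch of the Target Mapping scoring rule described above.
#include <iostream>

// A trial earns 2 points for the correct wall ("side") plus 1 point for the
// correct left/right window; a wrong wall earns 0.
float trialPoints(bool sideCorrect, bool relativeCorrect) {
    if (!sideCorrect) return 0.0f;
    return relativeCorrect ? 3.0f : 2.0f;
}

int main() {
    // Hypothetical participant: two trials per hidden side (left, right, back),
    // each recorded as {sideCorrect, relativeCorrect}.
    bool results[3][2][2] = {
        {{true,  true }, {true,  false}},   // left:  3 and 2 points
        {{true,  true }, {true,  true }},   // right: 3 and 3 points
        {{false, false}, {true,  true }}    // back:  0 and 3 points
    };

    float total = 0.0f;
    for (int side = 0; side < 3; ++side) {
        float sum = 0.0f;
        for (int trial = 0; trial < 2; ++trial)
            sum += trialPoints(results[side][trial][0], results[side][trial][1]);
        total += sum / 2.0f;                // average the two trials per side
    }
    std::cout << "Target Mapping score: " << total << " / 9\n";  // 2.5 + 3 + 1.5 = 7
    return 0;
}
```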
2.3.1 Experiment 1
During Experiment 1, the acquired spatial knowledge of older adults was further assessed by evaluating how well they could transfer this knowledge to VR navigation. To move through the virtual building, participants physically pushed a specialized wheelchair (VRNChair; Figure 3a) designed by our team [42]. Within the virtual building, there were no landmarks other than a set of centrally positioned stairs (Figure 3b). When the participant started navigating, the target window was not illuminated; upon entering the room with the target window, the light in the window was illuminated, and a recorded voice announced “Good Job”, providing positive feedback to the participant. Once the participant successfully located the target window, a new trial was started from the same starting position in front of the building. This procedure was repeated for six trials. Before beginning the Navigation phase, each participant was given two practice trials navigating within the VR environment.
Figure 3 Movement within the virtual environment. (a) Navigation in the virtual building required pushing the wheelchair through a large open room, and (b) Indoor view of one floor within the virtual building.
The optical flow of translational movement was calibrated such that the distance traversed in the physical environment was reflected as a scaled-down (logarithmic) distance in the VR environment, to limit the physical exploration space of the participant and to prevent collisions with the walls of the experiment room. In contrast, rotational movement was calibrated such that a rotation of 360° by the VRNChair in the physical environment produced exactly 360° of rotation in the virtual environment. In summary, the VRNChair merges the visual sense of motion (optical flow) with the participant's inertial sense of motion. Our experimental observations indicate that this design eliminates the motion sickness effect that is well known in virtual reality studies of older adults.
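For illustration, the two movement mappings can be sketched as follows. Only the 1:1 rotation mapping is stated explicitly above; the exact translational calibration used with the VRNChair is not specified in the text, so the logarithmic form and the gain below are hypothetical placeholders.

```cpp
// Illustrative sketch of the VRNChair movement mappings (not the actual implementation).
#include <cmath>

// Rotation is passed through 1:1: 360 degrees of wheelchair rotation produce
// exactly 360 degrees of rotation in the virtual building.
float virtualRotationDeg(float physicalRotationDeg) {
    return physicalRotationDeg;
}

// Translation is not mapped 1:1; the text describes a scaled-down (logarithmic)
// calibration so that the participant's physical excursion stays within the
// experiment room. The functional form and gain here are placeholders only.
float virtualDistanceM(float physicalDistanceM, float gain = 5.0f) {
    return gain * std::log(1.0f + physicalDistanceM);
}
```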
2.3.2 Experiment 2
An initial assessment showed that none of the participants with AD (n=18) were able to successfully navigate in the VR environment. This failure was primarily due to participants being unable to recognize the location of the target window, even after a second rotation of the building. In instances where the experimenter explicitly stated the location of the target and the participant was encouraged to continue searching for it, the participants showed some or all of the following behaviours: 1) they were not able to find the target window within the specified time limit, 2) they were unable to remember or report the previously stated target location, or 3) they adopted a trial-and-error strategy for locating the target window. Therefore, in Experiment 2, older adults with AD were assessed only in the Target Mapping phase of the experiment.
2.4 Participants
All participants signed an informed consent form approved by the Health Research Ethics Board of the University of Manitoba (HS11295 (H2009:033), approved in May 2012) prior to participation.
2.4.1 Experiment 1
Sixty older adults (32 women), comprising 30 VR-novice and 30 VR-experienced individuals with an age range of 55 to 81 years (66.0 ± 5.7 years), were recruited for this study. As the focus of the experiment was to assess cognitively-intact older adults, three participants (2 Novice and 1 Experienced) were excluded because they scored lower than 26 on the MoCA. The VR-novice and VR-experienced cohorts were matched in terms of age and MoCA score. The participants in the VR-experienced cohort (n=29, 18 women) had participated in our previous Virtual Reality Navigation (VRN) study [40] approximately 8 months prior to the current study. The participants in the VR-novice cohort (n=28, 13 women) had not participated in any previous VR-based experiment, nor had they any other VR experience (self-reported). All participants were right-handed with normal or corrected-to-normal vision and were free from any known neurological or psychiatric disorders.
The participants in the VR-novice and VR-experienced cohorts were further subdivided into two groups, with one group completing the physical building task (i.e. physical group) and the other completing the VR task (i.e. virtual group). The participants of the subgroups (approximately 15 in each subgroup) were matched for age and MoCA score. For a summary of the participant details, see Table 1.
Table 1 Specifications of participants’ groups (mean ± standard deviations).
Our previous research [40], using the same navigational test on a larger group of participants, revealed no significant effect of gender on spatial updating across different ages or MoCA groups. Therefore, although we attempted to maintain a similar proportion of men and women, we did not completely balance gender in each of our experimental groups.
2.4.2 Experiment 2
Eighteen volunteers (8 women) with varying degrees of AD, with an age range of 57–86 years (71.4 ± 8.8 years) and MoCA scores ranging from 7 to 25 (17.6 ± 6.2), participated in this study. All participants and, in cases where a participant was deemed not competent to give consent, their primary caregiver signed the informed consent form prior to the experiments. The inclusion criteria for our study were: 1) a diagnosis of AD by the participant's treating physician, and 2) a MoCA score lower than 26.
2.5 Data Analysis
In Experiment 1, the outcome measures were divided according to the Target Mapping phase and the Navigation phase. For the Target Mapping phase, participants received a score (out of a possible 9) for their accuracy in reporting the location of the target. During each trial of the Navigation phase, the participant's trajectory, the rooms visited, the total traversed distance, and the time taken to locate the target room were logged. For each participant, three dependent variables were examined: the average time (in seconds) spent navigating (navigation duration), the total traversed distance (in virtual meters) until entering the target room (navigation distance), and the “Error Score”, a weighted sum of the plausible errors made by a participant when searching for the target room [32,39,40]. This score incorporated the number of unsuccessful trials, in which a participant “gave up” without reaching the target room; the number of successful trials achieved by using a trial-and-error strategy; the number of errors made when determining the cardinal direction of the target (e.g., left, right, back, or front side of the building); the number of errors made when determining the relative position of the target (e.g., front/back end of the left side); and the number of errors made when determining the floor on which the target was located. In a recent study [40], we showed that the Error Score is a reliable and sensitive measure for examining age-related spatial decline.
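The structure of this composite measure can be sketched as follows; the error categories mirror the description above, but the numerical weights are placeholders only, since the actual weighting is defined in our earlier work [40], and the function name is hypothetical.

```cpp
// Hedged sketch of the Error Score structure (illustrative weights only).
struct NavigationErrors {
    int unsuccessfulTrials;      // participant "gave up" before reaching the target room
    int trialAndErrorTrials;     // target found only through trial and error
    int cardinalDirectionErrors; // wrong side of the building (left/right/back/front)
    int relativePositionErrors;  // wrong end of the correct side (front/back)
    int floorErrors;             // wrong floor
};

float errorScore(const NavigationErrors& e) {
    // Placeholder weights; the weighting actually used is given in [40].
    return 3.0f * e.unsuccessfulTrials
         + 2.0f * e.trialAndErrorTrials
         + 1.5f * e.cardinalDirectionErrors
         + 1.0f * e.relativePositionErrors
         + 1.0f * e.floorErrors;
}
```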
We performed a Multivariate Analysis of Variance (MANOVA) with cohort (VR-Novice and VR-Experienced) and encoding condition (Physical and Virtual) as fixed factors. To test the hypothesis that experience and/or the type of environment does not affect spatial updating (the null hypothesis), a Bayesian ANOVA was performed on each dependent variable using JASP 0.8.0.0 software [43]. The primary outcome of the Bayesian ANOVA was the Bayes factor [44] of the null hypothesis over the alternative hypothesis (BF01).
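For reference, the Bayes factor used here follows its standard definition: BF01 is the ratio of the marginal likelihood of the data under the null hypothesis to that under the alternative, BF01 = p(data | H0) / p(data | H1) = 1/BF10, so that values greater than 1 favour the null hypothesis [44].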
For Experiment 2, statistical analyses were as described for the Target Mapping phase of Experiment 1, with the exception that VR experience was not considered as a factor. A Wilcoxon signed-rank test was employed to test for significant differences between the results of the virtual and physical rotation conditions.
3. Results
3.1 Experiment 1
All groups of older adults succeeded in accurately localizing the target's position in both the VR and physical buildings (Mean ± Standard Deviation = 8.6 ± 0.7). Accordingly, there were no significant differences between encoding from the virtual and physical conditions for either the VR-Experienced or the VR-Novice cohort during the Target Mapping phase.
For the Navigation phase, the results are summarized in terms of the average time spent navigating, the average traversed distance until the participant entered the target room, and the Error Score (see Table 2). A MANOVA revealed no significant interactions between gender and the factors of interest, including the type of encoding environment (F(3, 50) = 0.61, p > 0.05; Wilks' Λ = .96), VR experience (F(3, 50) = 0.71, p > 0.05; Wilks' Λ = .95), and the encoding-by-experience interaction (F(3, 50) = 0.51, p > 0.05; Wilks' Λ = .97).
Table 2 Results of the VR-Novice and VR-Experienced cohorts on the three main measurements (mean ± standard deviations).
The analysis showed no significant main effect of the type of encoding environment [Physical, Virtual; F(3, 51) = 0.46, p > 0.05; Wilks' Λ = .97], no significant effect of VR experience [VR-Novice, VR-Experienced; F(3, 51) = 0.64, p > 0.05; Wilks' Λ = .96], and no significant interaction of encoding environment and VR experience (F(3, 51) = 0.83, p > 0.05; Wilks' Λ = .95). Consequently, the tests of between-subjects effects revealed no significant differences for any of the dependent variables.
A Bayesian ANOVA was calculated to determine the strength of our null results. A Bayes factor greater than 3 is typically considered evidence supporting a null hypothesis [44]. This analysis showed that, for the effect of encoding environment during the Navigation phase, the null hypothesis was almost four times more likely than the alternative hypothesis for Error Score (BF01 = 3.78) and Average Duration (BF01 = 3.63), and about three times more likely for Average Distance (BF01 = 2.88). For the effect of experience, the null hypothesis was also more likely for Average Distance (BF01 = 3.25); however, for Error Score (BF01 = 2.22) and Average Duration (BF01 = 1.28) the null hypothesis was less strongly supported. Thus, the encoding environment provided the strongest support for a null hypothesis. To summarize, in Experiment 1 we found no significant difference between the spatial knowledge acquired from virtual and physical rotations for either the VR-Experienced or the VR-Novice older adults, and the Navigation phase revealed no significant differences between the groups.
3.2 Experiment 2
The AD participants, on average, obtained considerably higher scores in the physical condition (Mean ± Standard Deviation = 5.8 ± 4.3) than in the virtual condition (1.7 ± 3.1). A Wilcoxon signed-rank test confirmed the significance of this difference (Z = -2.87, p = 0.004). Considering the wide range of MoCA scores among the AD participants, it was of interest to investigate the potential relationship between MoCA score and the difference between the Physical and Virtual scores. As the variable representing the difference between the Physical and Virtual scores was ordinal, a Kendall's tau-b correlation was conducted to assess its relationship with MoCA score amongst the participants with AD. Although the overall correlation was not significant (τb = 0.375, p = .08), after removing an outlier participant (female, age: 83 years, MoCA score: 9, Physical rotation score: 9, Virtual rotation score: 0), the correlation between MoCA score and the Physical-Virtual score difference became statistically significant (τb = .52, p = .04). This finding indicates that a higher MoCA score was associated with a larger advantage of the physical over the virtual rotation condition.
4. Discussion
We report on the use of a VR rotating scene as a new paradigm for studying spatial encoding, alongside map-reading and direct-navigation methods. This paradigm permitted an investigation into possible differences between VR and physical categorical encoding of 3D spatial relationships. The lack of significant differences between the VR and physical encoding conditions in our previous research with younger adults [32], together with the similar performance of the VR-Novice and VR-Experienced cognitively-healthy older adults in the current study, suggests that the ability to encode categorical information from a rotating virtual or physical scene may not be influenced by healthy aging or by inexperience with VR environments. Our results on the navigational performance of VR-Novice and VR-Experienced cognitively-intact older adults are consistent with the physical-virtual similarity of aging effects reported by previous studies [25,26,27,28].
Our findings from individuals with AD are promising in terms of revealing a selective impairment in the processing of virtual information due to the effects of dementia. This result is consistent with previous studies suggesting impairment in visuospatial abilities as a sign of dementia [37]. In particular, this finding may extend the reported ability of categorical spatial memory function in 2D scenes to discriminate between individuals with AD and MCI [38] to 3D scenes. In line with the results of our previous study of individuals with dementia (60–83 years) [39], which showed a selective inability to encode spatial information from a rotating VR task, the results of the current experiment suggest that the inability to acquire categorical information from a rotating VR scene is likely a symptom of dementia.
The current study may be criticised for the non-immersive methodology used for the VR environment. However, studies comparing desktop virtual reality and head-mounted displays (HMDs) usually report similar performance across the two paradigms, but with a greater incidence of motion sickness for HMDs [45,46,47,48]. Furthermore, some studies have shown that participants may extract more information from a non-immersive desktop VR environment than participants who used an HMD [49]. Moreover, using an HMD is known to lead to less accurate estimations of coordinate information [5,6] compared to real-world tasks.
It may be speculated that the underlying cognitive mechanism responsible for extracting categorical information from a rotating scene is spared even in diseased aging. However, it seems that VR rotation may not sufficiently trigger this mechanism in individuals with AD, possibly due to their well-documented low-level deterioration in motion perception and depth perception [50,51]. In particular, individuals with AD are reported to be impaired in their ability to construct shape-from-motion and in other visuospatial production abilities [52]. This notion, however, requires further investigation. Interestingly, one of our previous studies showed that individuals with MoCA scores lower than 25, and those with AD, perceive the rotational duration of the VR building differently than cognitively-intact older adults, as they had a significant tendency to overestimate the duration of the rotation [53]. This suggests important dementia-related differences in the processing of rotation from VR, probably due to differences in low-level visual processing.
The selective impairment found in the current study for the encoding of rotation during the virtual task might be explained by the primary role of the parahippocampus (PHC) in encoding the spatial locations of landmarks and their topographic relationships from visual scenes [54]. This area has been shown to be activated when retrieving information acquired from both virtual and real-world environments [55]. Although the PHC has been suggested to be preserved during normal aging [56,57,58], it has been shown to undergo abnormal atrophy as a result of dementia [59,60,61].
One limitation of the current study is that, despite efforts to achieve maximum similarity between the physical and virtual buildings, some differences were apparent (see Figure 1). For instance, the dimensions of the windows, and the distance between the outside windows and the edges of the building, were not the same in the two buildings. Although we recognize these differences, and suggest that future research should minimize such inconsistencies, these coordinate-based differences are not expected to substantially affect the outcome of experiments aimed at evaluating the encoding of categorical spatial information.
As a future step, adding standard depth-perception and motion-perception tests to the assessment of AD patients would aid in understanding the differences observed in the current study between the encoding of spatial information from the virtual and physical scenes. Another potentially informative approach would be to examine the eye movements of individuals with AD and healthy controls (sex- and age-matched) during encoding of the target under virtual and physical rotation. Possible differences between these groups, as a function of the type of environment, as well as in the latency and duration of gaze fixations, might shed light on the cognitive findings reported here; this is one of the next steps for our program of research.
5. Conclusions
Our findings contribute to the current knowledge regarding the effects of aging and Alzheimer's disease on cognition and perception. The results support previous research showing a deterioration in motion perception, depth perception, and the ability to construct shape-from-motion in individuals with Alzheimer's disease, while extending this knowledge to three-dimensional Virtual Reality environments. This research shows, for the first time, that the ability to encode categorical information from a rotating virtual scene is likely not affected by healthy aging or experience with VR, but that its loss might be a symptom of dementia.
Author Contributions
O. R. P. contributed to the design of the experiment, data collection, statistical analysis of the data, and writing the manuscript. A. B. contributed to developing the hardware and software tools required for the experiment. D. M. K. contributed to the analysis of the data and revising the manuscript. Z. M. contributed to the design of the experiment and revising the manuscript.
Funding
This study was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Competing Interests
The authors have declared that no competing interests exist.
References
- Jager G, Postma A. On the hemispheric specialization for categorical and coordinate spatial relations: A review of the current evidence. Neuropsychologia. 2003; 41: 504-515. [CrossRef]
- Ruotolo F, Iachini T, Postma A, van der Ham I. Frames of reference and categorical and coordinate spatial relations: A hierarchical organisation. Exp Brain Res. 2011; 214: 587-595. [CrossRef]
- Palermo L, Bureca I, Matano A, Guariglia C. Hemispheric contribution to categorical and coordinate representational processes: A study on brain-damaged patients. Neuropsychologia. 2008; 46: 2802-2807. [CrossRef]
- Trojano L, Grossi D, Linden D, Formisano E, Goebel R, Cirillo S, et al. Coordinate and categorical judgments in spatial imagery. An fMRI study. Neuropsychologia. 2002; 40: 1666-1674. [CrossRef]
- Kimura K, Reichert J, Olson A, Ranjbar Pouya O, Wang X, Moussavi Z, et al. Orientation in virtual reality does not fully measure up to the real-world. Sci Rep. 2017; 7: 18109. [CrossRef]
- Renner R, Velichkovsky B, Helmert J. The perception of egocentric distances in virtual environments: A review. ACM Comput Surv. 2013; 46: 23. [CrossRef]
- Zhang H, Copara M, Ekstrom AD. Differential recruitment of brain networks following route and cartographic map learning of spatial environments. PLoS One. 2012; 7: e44886. [CrossRef]
- Coluccia E, Bosco A, Brandimont MA. The role of visuo-spatial working memory in map learning: new findings from a map drawing paradigm. Psychol Res. 2007; 71: 359-372. [CrossRef]
- Thorndyke PW, Hayes-Roth B. Differences in spatial knowledge acquired from maps and navigation. Cogn Psychol. 1982; 14: 560-589. [CrossRef]
- Richardson AE, Montello DR, Hegarty M. Spatial knowledge acquisition from maps and from navigation in real and virtual environments. Mem Cognit. 1999; 27: 741-750. [CrossRef]
- Schmelter A, Jansen P, Heil M. Empirical evaluation of virtual environment technology as an experimental tool in developmental spatial cognition research. Eur J Cogn Psychol. 2009; 21: 724-739. [CrossRef]
- Sorita E, N'kaoua B, Larrue F, Criquillon J, Simion A, Sauzéon H, et al. Do patients with traumatic brain injury learn a route in the same way in real and virtual environments? Disabil Rehabil. 2013; 35: 1371-1379. [CrossRef]
- Waller D. Individual differences in spatial learning from computer-simulated environments. J Exp Psychol Appl. 2000; 6: 307-321. [CrossRef]
- Carelli L, Rusconi ML, Scarabelli C, Stampatori C, Mattioli F, Riva G. The transfer from survey (map-like) to route representations into Virtual Reality Mazes: Effect of age and cerebral lesion. J Neuroeng Rehabil. 2011; 8: 6. [CrossRef]
- Ruddle RA, Lessels S. The benefits of using a walking interface to navigate virtual environments. ACM Trans Comput Hum Interact. 2009; 16: 1-18. [CrossRef]
- Moffat S. Aging and spatial navigation: What do we know and where do we go?. Neuropsychol Rev. 2009; 19: 478-489. [CrossRef]
- Ham IJ, Faber AM, Venselaar M, Kreveld MJ, Löffler M. Ecological validity of virtual environments to assess human navigation ability. Front Psychol. 2015; 6: 637. [CrossRef]
- Vuong QC, Tarr MJ. Rotation direction affects object recognition. Vision Res. 2004; 44: 1717-1730. [CrossRef]
- Hollingworth A, Henderson JM. Sustained change blindness to incremental scene rotation: A dissociation between explicit change detection and visual memory. Percept Psychophys. 2004; 66: 800-807. [CrossRef]
- Finlay CA, Motes MA, Kozhevnikov M. Updating representations of learned scenes. Psychol Res. 2007; 71: 265-276. [CrossRef]
- Lehmann A, Vidal M, Bülthoff HH. A high-end virtual reality setup for the study of mental rotations. Presence. 2008; 17: 365-375. [CrossRef]
- Wraga M, Creem-Regehr SH, Proffitt DR. Spatial updating of virtual displays during self- and display rotation. Mem Cognit. 2004; 32: 399-415. [CrossRef]
- Kourtzi Z, Shiffrar M. The visual representation of three-dimensional, rotating objects. Acta Psychol (Amst). 1999; 102: 265-292. [CrossRef]
- Techentin C, Voyer D, Voyer S. Spatial abilities and aging: A meta-analysis. Exp Aging Res. 2014; 40: 395-425. [CrossRef]
- Kalová E, Vlček K, Jarolímová E, Bureš J. Allothetic orientation and sequential ordering of places is impaired in early stages of Alzheimer's disease: Corresponding results in real space tests and computer tests. Behav Brain Res. 2005; 159: 175-186. [CrossRef]
- Cushman LA, Stein K, Duffy CJ. Detecting navigational deficits in cognitive aging and Alzheimer disease using virtual reality. Neurology. 2008; 71: 888–895. [CrossRef]
- Kalia AA, Legge GE, Giudice NA. Learning building layouts with non-geometric visual information: The effects of visual impairment and age. Perception. 2008; 37: 1677-1699. [CrossRef]
- Taillade M, N'Kaoua B, Sauzéon H. Age-related differences and cognitive correlates of self-reported and direct navigation performance: The effect of real and virtual test conditions manipulation. Front Psychol. 2015; 6: 2034. [CrossRef]
- Yamamoto N, Degirolamo GJ. Differential effects of aging on spatial learning through exploratory navigation and map reading. Front Aging Neurosci. 2012; 4: 12. [CrossRef]
- Meadmore KL, Dror IE, Bucks RS. Lateralisation of spatial processing and age. Laterality. 2009; 14: 17-29. [CrossRef]
- Lai CY. Visuo-spatial processing in ageing: Neuropsychological and neuroimaging correlates. Newcastle: Newcastle University; 2016.
- Ranjbar Pouya O, Byagowi A, Kelly D, Moussavi Z. The effect of physical and virtual rotations of a 3D object on spatial perception. International IEEE/EMBS Conference on Neural Engineering (NER). San Diego, CA: IEEE; 2013. pp. 1362-1365. [CrossRef]
- Murias K, Kwok K, Castillejo AG, Liu I, Iaria G. The effects of video game use on performance in a virtual navigation task. Comput Hum Behav. 2016; 58: 398-406. [CrossRef]
- Geslin E, Bouchard S, Richir S. You better control for video gaming experience because video gamers are more difficult to scare in virtual reality. J CyberTher Rehabilitat. 2011; 4: 167.
- Smith SP, Du'Mont S. Measuring the effect of gaming experience on virtual environment navigation tasks. IEEE Symposium on 3D User Interfaces. Lafayette, LA: IEEE; 2009. pp. 3-10. [CrossRef]
- Anguera JA, Boccanfuso J, Rintoul JL, Al-Hashimi O, Faraji F, Janowich J, et al. Video game training enhances cognitive control in older adults. Nature. 2013; 501: 97-101. [CrossRef]
- Iachini T, Iavarone A, Senese VP, Ruotolo F, Ruggiero G. Visuospatial memory in healthy elderly, AD and MCI: A review. Curr Aging Sci. 2009; 2: 43-59. [CrossRef]
- Kessels RP, et al. Categorical spatial memory in patients with mild cognitive impairment and Alzheimer dementia: Positional versus object-location recall. J Int Neuropsychol Soc. 2010; 16: 200-204. [CrossRef]
- Zen D, Byagowi A, Garcia M, Kelly D, Lithgow B, Moussavi Z, et al. The perceived orientation in people with and without Alzheimer's disease. 6th International IEEE/EMBS Conference on Neural Engineering (NER). San Diego, USA: IEEE; 2013. pp. 460-463. [CrossRef]
- Ranjbar Pouya O, Byagowi A, Kelly DM, Moussavi Z. Introducing a new age-and-cognition-sensitive measurement for assessing spatial orientation using a landmark-less virtual reality navigational task. Q J Exp Psychol. 2016; 1-14.
- Nasreddine ZS, et al. The Montreal Cognitive Assessment (MoCA©): A brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005; 53: 695-699. [CrossRef]
- Byagowi A, Mohaddes D, Moussavi Z. Design and application of a novel virtual reality navigational technology (VRNChair). J Exp Neurosci. 2014; 8: 7-14. [CrossRef]
- JASP Team. JASP (Version 0.8.0.0) [Computer software]; 2016.
- Rouder JN, Speckman PL, Sun D, Morey RD, Iverson G. Bayesian t tests for accepting and rejecting the null hypothesis. Psychon Bull Rev. 2009; 16: 225-237. [CrossRef]
- Dahmani L, Ledoux AA, Boyer P, Bohbot VD. Wayfinding: The effects of large displays and 3-D perception. Behav Res Methods. 2012; 44: 447-454. [CrossRef]
- Kesztyues TI, Mehlitz M, Schilken E, Weniger G, Wolf S, Piccolo U, et al. Preclinical evaluation of a virtual reality neuropsychological test system: Occurrence of side effects. CyberPsychol Behav. 2000; 3: 343-349. [CrossRef]
- Hastings BL. The Influence of Shading, Display Size and Individual Differences on Navigation Performance in Virtual Reality in an Applied Industry Setting. Vancouver: University of British Columbia; 2013. [CrossRef]
- Richardson AE, Collaer ML. Virtual navigation performance: The relationship to field of view and prior video gaming experience. Percept Mot Skills. 2011; 112: 477-498. [CrossRef]
- Spatuzzi A. Transfer of spatial knowledge in a virtual environment: Comparing the acquisition of spatial knowledge between head mounted displays and desktop displays. 2015.
- Cronin-Golomb A. Vision in Alzheimer's disease. Gerontologist. 1995; 35: 370-376. [CrossRef]
- Thiyagesh SN, Farrow TF, Parks RW, Accosta-Mesa H, Young C, Wilkinson ID, et al. The neural basis of visuospatial perception in Alzheimer's disease and healthy elderly comparison subjects: An fMRI study. Psychiatry Res. 2009; 172: 109-116. [CrossRef]
- Rizzo M, Anderson SW,Dawson J,Nawrot M. Vision and cognition in Alzheimer’s disease. Neuropsychologia. 2000; 38: 1157-1169. [CrossRef]
- Ranjbar Pouya O, Kelly DM, Moussavi Z. Tendency to overestimate the explicit time interval in relation to aging and cognitive decline. Conf Proc IEEE Eng Med Biol Soc. 2015; 2015: 4692-4695. [CrossRef]
- Iaria G, Chen JK, Guariglia C, Ptito A, Petrides M. Retrosplenial and hippocampal brain regions in human navigation: complementary functional contributions to the formation and use of cognitive maps. Eur J Neurosci. 2007; 25: 890-899. [CrossRef]
- Epstein RA. Parahippocampal and retrosplenial contributions to human spatial navigation. Trends Cogn Sci. 2008; 12: 388-396. [CrossRef]
- Mellet E, Laou L, Petit L, Zago L, Mazoyer B, Tzourio‐Mazoyer N. Impact of the virtual reality on the neural representation of an environment. Hum Brain Mapp. 2010; 31: 1065-1075. [CrossRef]
- Rapp PR, Deroche PS, Mao Y, Burwell RD. Neuron number in the parahippocampal region is preserved in aged rats with spatial learning deficits. Cereb Cortex. 2002; 12: 1171-1179. [CrossRef]
- Lithfous S, Dufour A, Després O. Spatial navigation in normal aging and the prodromal stage of Alzheimer's disease: Insights from imaging and behavioral studies. Ageing Res Rev. 2013; 12: 201-213. [CrossRef]
- Mitchell TW, Mufson EJ, Schneider JA, Cochran EJ, Nissanov J, Han LY, et al. Parahippocampal tau pathology in healthy aging, mild cognitive impairment, and early Alzheimer's disease. Ann Neurol. 2002; 51: 182-189. [CrossRef]
- Pantel J, Kratz B, Essig M, Schröder J. Parahippocampal volume deficits in subjects with aging-associated cognitive decline. Am J Psychiatry. 2003; 160: 379-382. [CrossRef]
- Echavarri C, Aalten P, Uylings HB, Jacobs HI, Visser PJ, Gronenschild EH, et al. Atrophy in the parahippocampal gyrus as an early biomarker of Alzheimer’s disease. Brain Struct Funct. 2011; 215: 265-271. [CrossRef]