Wednesday, June 28, 2017: 12:45 PM
Wednesday, June 28 & Thursday, June 29
Andreas Schindler (1,2,3), Andreas Bartels (1,2,3)
(1) Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany; (2) Max Planck Institute for Biological Cybernetics, Tübingen, Germany; (3) Department of Psychology, University of Tübingen, Tübingen, Germany
Spatial representations in distinct reference frames are essential for human behavior. Visual input is received in retinotopic coordinates; our conscious experience of the environment, however, is non-retinotopic, as eye and head movements are typically subtracted from retinal input to provide world-centered perceptual stability. Our immediate interactions with the external world occur in self-centered reference frames: limb actions are inherently body-centered and need to be invariant to eye or head movements. In turn, head or body movements shift different parts of the environment in and out of the field of view. Egocentric maps of our full surroundings are thus essential not only for integration across senses such as audition, vision, and touch, but also for a stable phenomenal experience of our surroundings, based on continual updating of internal spatial maps in the face of changing sensory input. In the present study, we used a novel virtual reality paradigm that involved tilted head positions during fMRI, together with multivariate analyses, to identify head- and body-centered representations as well as their modulation by attention.
Classically, the participant's fixed head and body orientation inside the scanner prevents disentangling head- from body-centered reference systems. To circumvent this problem, we used a modified version of a virtual reality paradigm we introduced previously (Schindler & Bartels 2013). Participants had to imagine the locations of six distinct objects arranged in a hexagon surrounding them, including in front of and behind them. Importantly, participants' heads were rotated by +60° or -60° in different conditions, such that head and body axes were misaligned. This paradigm allowed us to systematically disentangle head- from body-centered neural representations. In addition, participants performed the task under two distinct attentional sets, involving imagery in either head- or body-centered coordinates. This allowed us to probe the modulation of head- and body-centered spatial representations by attention to either reference frame.
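The logic of the ±60° head-rotation manipulation can be illustrated with a simple coordinate rotation. This is a sketch with illustrative numbers, not the analysis code of the study: a direction that is fixed in body-centered coordinates shifts in head-centered coordinates by exactly the head rotation, so the two reference frames make dissociable predictions about which imagined direction a pattern should encode.

```python
import numpy as np

def rotate(v, deg):
    """Rotate a 2-D direction vector by deg degrees (counter-clockwise)."""
    t = np.radians(deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return R @ v

# A target lying 60 deg to the left in body-centered coordinates...
body_dir = rotate(np.array([1.0, 0.0]), 60)

# ...lies straight ahead in head-centered coordinates once the head
# itself is rotated by +60 deg relative to the body (expressing a
# vector in a frame rotated by +60 deg applies the -60 deg rotation):
head_dir = rotate(body_dir, -60)
print(np.allclose(head_dir, [1.0, 0.0]))  # True
```

With aligned head and body axes the two frames coincide and no analysis could tell them apart; the misalignment is what makes the decoding comparison possible.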
Participants underwent several days of extensive training in which they reached ceiling performance in learning the object locations within the surrounding hexagon. Training was performed outside the scanner using virtual reality goggles. Participants were placed in the center of a virtual hexagonal room that contained a unique object in each corner. Every few trials the participants' viewpoint rotated such that they faced a different corner, i.e., a different allocentric location. This allowed us to isolate six abstract egocentric directions, regardless of the identity of reference objects or of allocentric representations.
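The dissociation of egocentric direction from allocentric location in the hexagonal room amounts to simple modular arithmetic; the corner indexing below is our own illustration, not taken from the study:

```python
# Hexagonal room: corners indexed 0-5 (allocentric). When the participant
# faces corner `facing`, the egocentric direction of the object at corner
# `corner` is the offset (corner - facing) mod 6: direction 0 is "straight
# ahead" and 3 is "behind", regardless of which corner (and hence which
# object) is currently faced.
def egocentric_direction(corner, facing):
    return (corner - facing) % 6

print(egocentric_direction(2, 2))  # 0: object straight ahead
print(egocentric_direction(5, 2))  # 3: object behind
```

Because each of the six egocentric directions is paired with every corner object across viewpoint rotations, decoding the direction cannot rely on object identity or allocentric position.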
When performance reached criterion, participants were invited for fMRI scanning and performed a modified task inside the scanner. Using multivariate voxel analysis, we identified egocentric representations beyond the visual field according to body and head coordinates.
We found significant decoding of egocentric directions in head- as well as body-centered reference frames in a network of brain areas associated with spatial processing, attention, and lesion sites of spatial neglect patients, among them the precuneus, prefrontal cortex, and parietal cortex. Whereas egocentric codes for body- and head-centered representations overlapped in most regions, some regions were biased towards one reference frame. Attention to body- or head-centered coordinates tended to modulate decoding accuracy in favor of the respective representation.
Our results provide evidence for the presence of both head- and body-centered neural spatial representations in the human brain. While most of these representations appear to be co-localized, a subset of brain areas was tuned to head or body coordinates. In addition, our results show that these distinct representations can be modulated by attention.
Higher Cognitive Functions: Space, Time and Number Coding (1)
BOLD fMRI (2)
Learning and Memory: Long-Term Memory (Episodic and Semantic)
Poster Session - Wednesday
(1), (2): indicates the priority used for review
References:
Schindler, A., Bartels, A. (2013). Parietal cortex codes for egocentric space beyond the field of view. Current Biology, 23(2).