Attention to Action Categories Shifts Semantic Tuning Toward Targets Across the Brain

Poster No:

T661 

Submission Type:

Abstract Submission 

Authors:

Mohammad Shahdloo1, Burcu Ürgen2,3, Tolga Çukur1,3

Institutions:

1Electrical and Electronics Engineering Dept., Bilkent University, Ankara, Turkey, 2Psychology Dept., Bilkent University, Ankara, Turkey, 3Neuroscience Program, Bilkent University, Ankara, Turkey

Introduction:

Humans are remarkably adept at social interaction in the real world, an ability that requires reliable detection of visual objects together with their various actions [4]. A previous study from our laboratory showed that object-based attention causes broad shifts in voxel-wise semantic tuning toward the target [2]. This finding is consistent with the view that the visual system implements a "matched filter" that optimizes the processing of behaviorally relevant objects during natural vision [3]. Recent reports suggest that not only objects but also hundreds of action categories are represented within a continuous semantic space [5]. However, little is known about how these representations are modulated during action-based attention. Here, we sought to assess semantic representations during natural visual search for action categories.

Methods:

Five human subjects viewed 30 min of natural movies while performing two separate attention tasks: search for animate targets performing "communication", or search for animate targets performing "locomotion". The two tasks were interleaved and performed in separate 10-min runs. Whole-brain BOLD responses were recorded using fMRI. Responses were detrended using a Savitzky-Golay filter. To remove baseline and gain modulations, voxel-wise responses were z-scored to zero mean and unit variance. A stimulus matrix was constructed by labeling the presence of 813 object and 109 action categories in the movies. Object-driven responses were regressed out of the measured BOLD responses in single voxels, and separate voxel-wise action-category models were then fit under each attention task [5]. To estimate the semantic space underlying action-category representation, principal component analysis was performed on voxel-wise category models fit during a separate passive-viewing task. Two template tuning profiles were constructed by identifying the set of actions belonging to each of the two target categories. Voxel-wise tuning profiles and template tuning profiles were projected onto the semantic space, and tuning strength for each target was quantified as the Pearson correlation between the projections of the voxel tuning profile and the target template. Finally, a tuning shift index (TSI) was computed for each voxel from the measured tuning strengths; shifts toward/away from the attended category yield positive/negative TSIs in the range [-1, 1].
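For concreteness, the pipeline can be sketched in a few lines of Python (NumPy, SciPy, scikit-learn). This is a minimal illustration rather than the analysis code: the ridge regularization, the Savitzky-Golay window settings, the semantic-space dimensionality K, the placeholder passive-viewing weights, and the exact normalized-difference form of the TSI are all assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

def preprocess(bold):
    """Detrend each voxel with a Savitzky-Golay filter, then z-score.
    bold: (time, voxels); window/order values here are placeholders."""
    trend = savgol_filter(bold, window_length=121, polyorder=3, axis=0)
    resp = bold - trend
    return (resp - resp.mean(axis=0)) / resp.std(axis=0)

def fit_action_model(resp, S_obj, S_act, alpha=1.0):
    """Regress object-driven responses out of one voxel's time course,
    then fit action-category weights to the residual.
    S_obj: (time, 813) object labels; S_act: (time, 109) action labels."""
    resid = resp - Ridge(alpha=alpha).fit(S_obj, resp).predict(S_obj)
    return Ridge(alpha=alpha).fit(S_act, resid).coef_  # (109,) tuning profile

# Semantic space: principal components of voxel-wise action models fit
# during passive viewing (placeholder data; K is an assumed dimensionality).
rng = np.random.default_rng(0)
W_passive = rng.standard_normal((5000, 109))
K = 20
pca = PCA(n_components=K).fit(W_passive)

def tuning_strength(w_voxel, w_template):
    """Pearson correlation between semantic-space projections.
    w_template: binary indicator over the 109 actions marking the members
    of one target category ("communication" or "locomotion")."""
    p_v, p_t = pca.transform(np.vstack([w_voxel, w_template]))
    return np.corrcoef(p_v, p_t)[0, 1]

def tsi(r_att, r_unatt):
    """Tuning shift index; this normalized-difference form is an assumed
    definition, consistent with the stated [-1, 1] range and sign."""
    return (r_att - r_unatt) / (abs(r_att) + abs(r_unatt))
```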

Results:

We find that attention to action categories causes tuning shifts toward the target category in many cortical voxels. Tuning shifts are moderate in temporal cortex, but substantial in angular gyrus, frontal cortex (SFG, MFG, and IFG), and cingulate cortex (ACC, PCC; Fig. 1). The average TSI is 0.064±0.011 in angular gyrus, 0.026±0.006 in frontal cortex, and 0.015±0.005 in cingulate cortex (mean±s.e.m. across all five subjects). TSI is not significant in temporal cortex (STG, MTG, pSTS) or in supramarginal gyrus (bootstrap test, p>0.05; Fig. 2; the test is sketched below). Overall, TSI is markedly elevated in higher-level areas of the action observation network (AON) [1] and in more anterior cortical areas of the attention network (p<0.05).
Supporting Image: TSI_embedded_g.png
Supporting Image: tsi_mean.png
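The bootstrap test reported above can be sketched as follows; this assumes a one-sided test on the mean TSI with voxels within an ROI as the resampling unit, which is an assumption about the exact procedure.

```python
import numpy as np

def bootstrap_p(tsi_values, n_boot=10000, seed=0):
    """One-sided bootstrap test that the mean TSI exceeds zero.
    tsi_values: 1-D array of per-voxel TSIs within an ROI."""
    rng = np.random.default_rng(seed)
    n = len(tsi_values)
    # Resample voxels with replacement and recompute the mean TSI
    idx = rng.integers(0, n, size=(n_boot, n))
    boot_means = np.asarray(tsi_values)[idx].mean(axis=1)
    # p-value: fraction of bootstrap means at or below zero
    return float((boot_means <= 0).mean())
```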
 

Conclusions:

Our results demonstrate that the brain optimizes search for action categories by shifting the semantic tuning of single voxels in higher-level areas of the AON and the attention network toward the targets. This finding implies that action perception in the real world is facilitated by a matched-filter mechanism that dynamically modulates representations according to task demands.

Higher Cognitive Functions:

Higher Cognitive Functions Other

Imaging Methods:

BOLD fMRI

Perception and Attention:

Attention: Visual 1
Perception: Visual 2

Keywords:

Computational Neuroscience
FUNCTIONAL MRI
Modeling
Perception
Other - semantic representation; action-based attention; action-observation network

1|2 indicates the priority used for review

My abstract is being submitted as a Software Demonstration.

No

Please indicate below if your study was a "resting state" or "task-activation" study.

Task-activation

Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.

Not applicable

Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI

For human MRI, what field strength scanner do you use?

3.0T

Which processing packages did you use for your study?

Other, Please list - Python

Provide references using author date format

[1] Caspers, Svenja, et al. (2010), "ALE meta-analysis of action observation and imitation in the human brain", NeuroImage, 50(3), 1148-1167.
[2] Çukur, Tolga, et al. (2013), "Attention during natural vision warps semantic representation across the human brain", Nature Neuroscience, 16(6), 763-770.
[3] David, Stephen, et al. (2008), "Attention to Stimulus Features Shifts Spectral Tuning of V4 Neurons during Natural Vision", Neuron, 59(3), 509-521.
[4] Haxby, James V., et al. (2011), "A Common, High-Dimensional Model of the Representational Space in Human Ventral Temporal Cortex", Neuron, 72(2), 404-416.
[5] Huth, Alexander G., et al. (2012), "A continuous semantic space describes the representation of thousands of object and action categories across the human brain", Neuron, 76(6), 1210-1224.