Information

What neural mechanism is responsible for our identification with a group?


Why do we need to identify ourselves with a group? Is there a natural hormonal or neurotransmitter reward? Or is it based on more complex mechanisms, such as finding the "deeper" meaning of our actions?


Miguel Eckstein

Miguel Eckstein earned a bachelor's degree in Physics and Psychology at UC Berkeley and a doctoral degree in Cognitive Psychology at UCLA. He then worked at the Department of Medical Physics and Imaging, Cedars-Sinai Medical Center, and at NASA Ames Research Center before moving to UC Santa Barbara. He is the recipient of the Optical Society of America Young Investigator Award, the Society for Optical Engineering (SPIE) Image Perception Cum Laude Award, the Cedars-Sinai Young Investigator Award, the National Science Foundation CAREER Award, the National Academy of Sciences Troland Award, and a Guggenheim Fellowship. He has served as chair of the Vision Technical Group of the Optical Society of America, chair of the Human Performance, Image Perception and Technology Assessment conference of the SPIE Medical Imaging Annual Meeting, Vision Editor of the Journal of the Optical Society of America A, a member of the board of directors of the Vision Sciences Society and of the board of editors of the Journal of Vision, and a member of National Institutes of Health study section panels on Mechanisms of Sensory, Perceptual and Cognitive Processes and on Biomedical Imaging Technology.

He has published over 170 articles on computational human vision, visual attention, search, perceptual learning, and the perception of medical images. His work has appeared in journals and conferences spanning a wide range of disciplines: Proceedings of the National Academy of Sciences, Nature Human Behaviour, Current Biology, Journal of Neuroscience, Psychological Science, PLOS Computational Biology, Annual Review of Vision Science, Neural Information Processing Systems (NIPS), Computer Vision and Pattern Recognition (CVPR), IEEE Transactions on Medical Imaging, International Conference on Learning Representations (ICLR), NeuroImage, Academic Radiology, Journal of the Optical Society of America A, Medical Physics, Journal of Vision, Journal of Experimental Psychology: Human Perception and Performance, Vision Research, and SPIE Medical Imaging.

Research

Research Interests: Finding your toothbrush, recognizing a face, or identifying an object might all seem effortless, but behind the scenes the brain devotes over a quarter of its neural machinery to making these complex tasks seem easy. How does the brain do it? My research uses a wide variety of tools, including behavioral psychophysics, eye tracking, electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and computational modeling, to understand how the brain successfully achieves these everyday perceptual tasks. The investigations involve basic visual perception, eye movements, visual attention, perceptual learning, and decision making. I combine this knowledge of how the brain accomplishes everyday vision with engineering tools to advance several applied problems: 1) understanding the visual, cognitive, and decision processes by which doctors detect and classify abnormalities in medical images, and developing computer models that improve how medical images are displayed so that doctors make fewer errors in clinical diagnosis; 2) developing, with engineers, bio-inspired computer vision systems; and 3) improving interactions between robots/computer systems and humans.

Selected Publications

Rosedahl, L.A., Eckstein, M.P., Ashby, F.G., Retinal-specific category learning, Nature Human Behaviour 2 (7), 500, (2018)

Eckstein, M.P., Koehler, K., Akbas, E., Humans but not Deep Neural Networks miss giant targets in scenes, Current Biology, 27 (18), 2827-2832, (2017)

Akbas, E., Eckstein, M.P., Object Detection with a Foveated Search Model, PLOS Computational Biology, 13 (10), e1005743, (2017)

Juni, M., Eckstein M.P., The wisdom of the crowds for visual search, Proceedings of the National Academy of Sciences, 201610732, (2017)

Eckstein, M.P., Probabilistic computations for attention, eye movements, and search, Annual Review of Vision Science, 3, 319-342, (2017)

Tsank, Y., Eckstein, M.P., Domain specificity of oculomotor learning after changes in sensory processing, Journal of Neuroscience, 1208-17, (2017)

Deza, A., Eckstein, M.P., Can Peripheral Representations Improve Clutter Metrics on Complex Scenes?, Neural Information Processing Systems (NIPS), 29, (2016)

Peters, J.R., Srivastava, V., Taylor, G.S., Surana, A., Eckstein, M.P., Bullo, F., Human Supervisory Control of Robotic Teams: Integrating Cognitive Modeling with Engineering Design, IEEE Control Systems, 35, 57-80, (2015)

Ludwig, C., Eckstein, M.P., Foveal analysis and peripheral selection during active visual sampling, Proceedings of the National Academy of Sciences, E291-9, (2014)

Preston, T. J., Guo, F., Das, K., Giesbrecht, B., & Eckstein, M. P. Neural Representations of Contextual Guidance in Visual Search of Real-World Scenes, The Journal of Neuroscience, 33(18), 7846-7855 (2013)

Peterson, M. F., & Eckstein, M. P., Individual Differences in Eye Movements During Face Identification Reflect Observer-Specific Optimal Points of Fixation, Psychological Science, 24(7), 1216-25 (2013)

Peterson, M. F., & Eckstein, M. P. Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences, 109(48), E3314-E3323 (2012)

Eckstein M.P., Das K., Pham, B.T., Peterson M., Abbey, C.K., Sy J., Giesbrecht, B., Neural decoding of collective wisdom with multi-brain computing, Neuroimage, 59. 94-108, (2012)

Guo F., Preston T.J., Das K., Giesbrecht B., Eckstein M.P., Feature-independent neural coding of target detection during search of natural scenes, Journal of Neuroscience, 32, 9499-510, (2012)

Eckstein M.P, Visual Search: a retrospective, Journal of Vision, 11, (2011)

Das K., Giesbrecht B., Eckstein M.P., Predicting variations of perceptual performance across individuals from neural activity using pattern classifiers, Neuroimage, 51(4):1425-37 (2010)

Zhang S., Eckstein M.P., Evolution and Optimality of similar neural mechanisms for perception and action during search, PLoS Computational Biology, 9, 6, (2010)

Eckstein, M.P., Drescher B., Shimozaki, S.S., Attentional cues in real scenes, saccadic targeting and Bayesian priors, Psychological Science, 17, 973-80 (2006)

Zhang, Y, Pham, B.T., Eckstein, M. P., The Effect of Nonlinear Human Visual System Components on Performance of a Channelized Hotelling Observer Model in Structured Backgrounds, IEEE Transactions on Medical Imaging, 25, 1348-1362 (2006)

Caspi, A., Beutter, B.R., Eckstein, M.P., The time course of visual information accrual guiding eye movement decisions, Proceedings of the National Academy of Sciences, 101, 13086-13090, (2004)


Associations between physiological and neural measures of sensory reactivity in youth with autism

Shulamite A. Green, Ahmanson-Lovelace Brain Mapping Center, Semel Institute of Neuroscience and Human Behavior, University of California, Los Angeles, 660 Charles E. Young Drive South, Los Angeles, CA 90095, USA Email: [email protected]

Department of Psychiatry and Biobehavioral Sciences, Jane & Terry Semel Institute of Neuroscience and Human Behavior, University of California, Los Angeles, Los Angeles, CA, USA

Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA

Division of Clinical Psychology and Psychopathology, Department of Psychology, University of Salzburg, Salzburg, Austria

Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA

Conflict of interest statement: No conflicts declared.

Abstract

Background

Individuals with Autism Spectrum Disorder (ASD) commonly show sensory over-responsivity (SOR), an impairing condition related to over-reactive brain and behavioral responses to aversive stimuli. While individuals with ASD often show atypically high physiological arousal, it is unclear how this relates to sensory reactivity. We therefore investigated how physiological arousal relates to brain and behavioral indices of SOR, to inform understanding of the biological mechanisms underlying SOR and to determine whether physiological measures are associated with SOR-related brain responses.

Methods

Youth aged 8–18 (49 ASD; 30 age- and performance-IQ-matched typically developing (TD)) experienced mildly aversive tactile and auditory stimuli, first during functional magnetic resonance imaging (N = 41 ASD, 26 TD) and then during skin conductance response (SCR) (N = 48 ASD, 28 TD) and heart rate (HR) measurements (N = 48 ASD, 30 TD). Parents reported on their children’s SOR severity.

Results

Autism Spectrum Disorder youth overall displayed greater SCR to aversive sensory stimulation than TD youth and greater baseline HR. Within ASD, higher SOR was associated with higher mean HR across all stimuli after controlling for baseline HR. Furthermore, the ASD group overall, and the ASD-high-SOR group in particular, showed reduced HR deceleration/greater acceleration to sensory stimulation compared to the TD group. Both SCR and HR were associated with brain responses to sensory stimulation in regions previously associated with SOR and sensory regulation.

Conclusions

Autism Spectrum Disorder youth displayed heightened physiological arousal to mildly aversive sensory stimulation, with HR responses in particular showing associations with brain and behavioral measures of SOR. These results have implications for using psychophysiological measures to assess SOR, particularly in individuals with ASD who cannot undergo MRI.

Appendix S1. Supplemental methods.

Appendix S2. Supplemental results.

Figure S1. (a) Skin conductance level during the 2-minute baseline phase did not show any significant between-group differences. (b) Mean heart rate during the 2-minute baseline phase was significantly higher in the ASD compared to the TD group.

Table S1. Montreal Neurological Institute (MNI) Coordinates for Associations between Skin Conductance Responses and Brain Responses to Aversive Sensory Stimulation within the ASD Group.

Table S2. Montreal Neurological Institute (MNI) coordinates for associations between mean heart rate responses and brain responses to aversive sensory stimulation.

Table S3. Montreal Neurological Institute (MNI) coordinates for associations between inter-beat-interval orienting phase slopes and brain responses to aversive sensory stimulation.

Table S4. Montreal Neurological Institute (MNI) coordinates for associations between inter-beat-interval acceleration phase slopes and brain responses to aversive sensory stimulation.

Table S5. Correlation matrix of sensory over-responsivity composite score and parent-reported sensory measures within ASD participants.

Table S6. Descriptive table for stimuli pilot testing.



Human functional imaging

PET studies initially showed activation of the fusiform gyrus in a variety of face perception tasks (Haxby et al 1991, Sergent et al 1992), and subsequently fMRI revealed more of the specificity of these cortical regions for faces, with demonstrations of fusiform regions that responded more strongly to faces than to letter strings and textures (Puce et al 1996), flowers (McCarthy et al 1997), everyday objects, houses, and hands (Kanwisher et al 1997). Although face-specific fMRI activation can also be seen in the superior temporal sulcus (fSTS) and in part of the occipital lobe (the “occipital face area”, OFA), the most robust face-selective activation is consistently found on the lateral side of the right mid-fusiform gyrus, the “fusiform face area” or FFA (Kanwisher et al 1997) (Figure 6). The fact that this part of the brain is activated selectively in response to faces indicates that activity in this region must arise at or subsequent to a detection stage.

Face-selective regions in one representative subject. Face-selective regions (yellow) were defined as regions that respond more strongly to faces than to houses, cars, and novel objects (P < 10⁻⁴). From Grill-Spector (2003).

A number of studies support the idea that the FFA is activated specifically by faces, and not by the low-level stimulus features usually present in faces, that is, activity in the FFA indicates that stimuli have been detected as faces: The FFA shows increased blood flow in response to a wide variety of face stimuli: front and profile photographs of faces (Tong et al 2000), line drawings of faces (Spiridon & Kanwisher 2002), and animal faces (Tong et al 2000). Furthermore, the FFA BOLD signal to upright “Mooney faces” is almost twice as strong as to inverted Mooney stimuli (which have similar low-level features but do not look like faces) (Kanwisher et al 1998). Finally, for bistable stimuli such as the illusory face-vase, or for binocularly rivalrous stimuli in which a face is presented to one eye and a nonface is presented to the other eye, the FFA responds more strongly when subjects perceive a face than when they do not, even though the retinal stimulation is unchanged (Andrews et al 2002, Hasson et al 2001).

Although the FFA shows the strongest increase in blood flow in response to faces, it does also respond to non-face objects. Two alternative hypotheses have therefore been proposed to the idea that activity in the FFA represents face-specific processing. The expertise hypothesis: according to this hypothesis, the FFA is engaged not in processing faces per se, but rather in processing any set of stimuli that share a common shape and for which the subject has gained substantial expertise (Tarr & Gauthier 2000). Distributed coding: in an important challenge to a more modular view of face and object processing, Haxby et al. (2001) argued that objects and faces are coded via the distributed profile of neuronal activity across much of the ventral visual pathway. Central to this view is the suggestion that “nonpreferred” responses, for example to objects in the FFA, may form an important part of the neural code for those objects. The functional significance of the smaller but still significant response of the FFA to non-face objects will hopefully be unraveled by the combined assaults of higher-resolution imaging in humans and single-unit recordings in non-human primates.

Measurement & Categorization

Does the human brain use separate systems for face measurement and face classification? Some fMRI evidence suggests that it does. For example, in a study of morphing between Marilyn Monroe and Margaret Thatcher, adaptation strength in the OFA followed the amount of physical similarity along the morph line, while in the FFA it followed the perceived identity (Rotshtein et al 2005), suggesting that the OFA performs measurement and the FFA performs classification. However, another study indicates that release from adaptation occurs in the FFA when there are physical differences unaccompanied by changes in perceived identity (Yue et al 2006).

According to Bruce and Young (1986) the processing of facial expression (one form of categorization) and facial identity (another form of categorization) take separate routes. A possible neural basis for this model has been proposed by Haxby and colleagues (2000). According to this proposal, the inferior occipital gyri are involved in early perception of facial features (i.e., measurement). The pathway then diverges, with one branch going to the superior temporal sulcus, which is proposed to be responsible for processing changeable aspects of faces including direction of eye gaze, view angle, emotional expression, and lip movement. The other projection is to the lateral fusiform gyrus, which is responsible for processing identity. A recent review has challenged the Bruce and Young model, arguing that changeable aspects and invariant identity may instead be processed together and rely on partially overlapping visual representations (Calder & Young 2005).

Invariance

Several studies have used fMRI-adaptation for face identity in the FFA, and found invariance to position (Grill-Spector et al 1999), image size (Andrews 2004, Grill-Spector et al 1999), and spatial scale (Eger et al 2004). Thus representations in the FFA are not tied to low-level image properties, but instead show at least some invariance to simple image transformations, though not to viewpoint (Pourtois et al 2005).

Summary

Behavioral studies complement computational approaches in indicating that specialized machinery may be used to process faces and that a face-detection stage gates the flow of information into this domain-specific module. Also reminiscent of successful computational approaches, the gating or detection step may use coarse, simple filters to screen out non-face images. These filters, or templates, require an upright, positive contrast face, with the usual arrangement of features, and images that do not fit the template are analyzed only by the general object recognition system. Even images that pass into the face-specific module are probably processed in parallel by the general system. The face module appears to process images differently from the general object system: Face processing is holistic, in the sense that we cannot process individual face parts without being influenced by the whole face. We suggest that this difference arises early in the face module. The face-detection stage may, in addition to gating access, obligatorily segment faces as a whole for further processing by the face module. Finally, substantial recent evidence suggests that face identity is coded in an adaptive norm-based fashion.

Human imaging studies converge on the conclusion that faces are processed in specific locations in the temporal lobe, but the degree of specialization for faces within these locations is debated. The modular interpretation is consistent with neurological findings, and, as described below, with single-unit recordings in macaques. The role of experience in determining both the localization of face processing and its holistic characteristics is also debated. And the relationship, if any, between modular organization and holistic processing is completely unexplored. Only a few visual object categories show functional localization in fMRI: faces, body parts, places, and words (for review see Cohen & Dehaene 2004, Grill-Spector & Malach 2004). Faces, bodies, and places are all biologically significant, and their neural machinery could be genetically determined, but the use of writing arose too recently in human history for word processing to be genetically determined; therefore at least one kind of anatomical compartmentalization must be due to extensive experience. We have suggested that the existence of discrete regions of the brain dedicated to face processing implies an obligatory detection stage and that an obligatory detection stage then causes holistic processing. What we know about word processing suggests that it too displays holistic properties, and it is localized, interestingly, in the left hemisphere in an almost mirror-symmetric location to the position of the FFA in the right hemisphere (Cohen & Dehaene 2004, Hasson et al 2002).


Reduced Empathy for Outgroup Suffering

Empathy refers to the ability to share and understand the subjective states and feelings of others. Several types of empathy are typically distinguished within the literature, such as affective empathy (i.e., the ability to feel and share the emotions of others), cognitive empathy (i.e., the ability to rationally understand the emotions of others), and emotional regulation (i.e., the ability to regulate one’s emotions), with separate brain circuits associated with each type of empathy (Bernhardt and Singer, 2012; Decety, 2015). Here we focus on one particular type of affective empathy: empathy for the pain of others. The dorsal anterior cingulate cortex (dACC) and anterior insula (AI) are consistently detected across studies of this type of affective empathy and respond to both the first-hand experience of pain and its perception in others (Lamm et al., 2011). One of the first fMRI studies to investigate how the neural regions involved in empathy are influenced by group membership is a study by Xu et al. (2009). They presented Chinese and Caucasian participants with video clips of Chinese and Caucasian people receiving either painful (i.e., needle prick) or non-painful (i.e., cotton swab) stimulation to the face. Observing painful stimulation of ingroup faces led to more activation in the dACC and AI, but when participants viewed outgroup faces in pain, no increased activation was observed in the dACC.

Another fMRI study that used similar stimuli examined whether a general social group category, other than race, could similarly modulate neural empathic responses and perhaps account for the apparent racial bias reported in previous studies (Contreras-Huerta et al., 2013). Using a minimal group paradigm, the authors assigned participants to one of two mixed-race teams (Chinese or Caucasian), and then measured neural empathic responses as participants observed members of their own group or other group, and members of their own race or other race, receiving either painful or non-painful stimuli. Participants showed clear group biases, with no significant effect of race, on behavioral measures of implicit and explicit group identification. Hemodynamic responses to perceived pain in dACC and AI showed significantly greater activation when observing pain in own-race compared with other-race individuals, with no significant effect of minimal groups. These results suggest that racial bias in empathic responses is not easily influenced by minimal forms of group categorization, despite the fact that participants indicated a clear increased association with ingroup versus outgroup members, as measured both by implicit and explicit measures of group identification.

Another fMRI study examined empathic responses in soccer fans (Hein et al., 2010). Participants in this study received high or low painful shocks to the hand, and observed ingroup (i.e., fans of the same soccer team) and outgroup (i.e., fans of the opposing soccer team) members receive the same type of shocks. Activation in the AI was stronger for ingroup members in the high minus low painful condition compared to the same contrast for outgroup members, reflecting an ingroup bias in empathy responses. In a second session, the authors measured how willing the participant was to help the ingroup and outgroup member by asking whether they would receive half of the person’s painful stimulation (and thus reduce the pain for the other person). Increased response in the AI in high versus low painful trials, pooled across ingroup and outgroup conditions, was associated with increased helping overall. Moreover, individual differences in the AI response in high versus low painful trials in the ingroup compared to outgroup conditions predicted how much more participants were willing to help an ingroup versus outgroup member reduce their pain.

Reduced activation in the AI when watching outgroup compared with ingroup members in pain was also reported in an fMRI study in which White and Black participants watched video clips of White and Black hands receiving either painful stimulation by a syringe or non-painful stimulation by a Q-tip (Azevedo et al., 2013). Participants also completed a racial implicit association test (IAT) to measure their implicit racial bias. Watching painful stimulation of a hand from the same race resulted in increased activation in the AI compared to the other race. Participants who showed a greater ingroup bias in AI activation also showed a larger ingroup bias as measured by the IAT. Finally, a recent fMRI study suggests that perceived threat of the outgroup to the status of the ingroup can modulate ingroup bias (Richins et al., 2018). Students who watched students from their own and another university in pain showed less activation in the AI and dACC for outgroup members only if the student was from a competing university, and not if the student was from a university that was not considered a threat to the status of the ingroup.

The fMRI studies reviewed in this section suggest that people typically show less activation in brain regions associated with watching others in pain, such as the dACC and AI, when observing outgroup (vs. ingroup) members in pain. Individual differences in ingroup bias in neural responses in these regions are also associated with reduced helping behavior (Hein et al., 2010) and increased implicit negative bias toward outgroup members (Azevedo et al., 2013). However, as the study by Richins et al. (2018) suggests, this reduced neural response when confronted with outgroup members in pain depends on the type of outgroup people are dealing with.


Mammals

3.07.1 Introduction

Understanding neural development can inform us about how a brain structure evolves. By examining the development of a cortical region, one can elucidate the role that molecular cues and afferent inputs play in determining the evolution of a cortical structure, its function, and associated behaviors. Cross-modal experiments provide insight through a gain-of-function approach, whereby inputs of one sensory system are redirected to a different sensory modality. This allows the role of intrinsic and extrinsic factors to be distinguished and their relative contribution to cortical structure, function, and behavior to be determined. Here we will discuss how molecular cues in early development may influence the evolution of a cortical structure, as well as how rewiring visual inputs to innervate the auditory pathway provides insight into the role of patterned electrical activity as a key extrinsic factor determining the ultimate organization and function of a structure (see A History of Ideas in Evolutionary Neuroscience, Relevance of Understanding Brain Evolution).


Neural Mechanisms of Social and Nonsocial Reward Prediction Errors in Adolescents with Autism Spectrum Disorder

Address for correspondence and reprints: Jessica L. Kinard, Carolina Institute for Developmental Disabilities, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, NC 27510. E-mail: [email protected]

Department of Psychology and Neuroscience, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina

Department of Psychiatry, University of Arkansas for Medical Science, Little Rock, Arkansas

Duke-UNC Brain Imaging and Analysis Center, Duke University Medical Center, Durham, North Carolina

Department of Psychiatry, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina

Department of Psychiatry, University of Illinois at Chicago, Neuropsychiatric Institute, Chicago, Illinois

Department of Psychology and Neuroscience, University of Colorado Boulder, Boulder, Colorado

Carolina Institute for Developmental Disabilities, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina

Division of Speech and Hearing Sciences, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina

Abstract

Autism spectrum disorder (ASD) is characterized by impaired predictive abilities; however, the neural mechanisms subsuming reward prediction errors in ASD are poorly understood. In the current study, we investigated neural responses during social and nonsocial reward prediction errors in 22 adolescents with ASD (ages 12–17) and 20 typically developing (TD) control adolescents (ages 12–18). Participants performed a reward prediction error task using both social (i.e., faces) and nonsocial (i.e., objects) rewards during a functional magnetic resonance imaging scan. Reward prediction errors were defined in two ways: (a) the signed prediction error, the difference between the experienced and expected reward, and (b) the thresholded unsigned prediction error, the difference between expected and unexpected outcomes regardless of magnitude. During social reward prediction errors, the ASD group demonstrated the following differences relative to the TD group: (a) signed prediction error: decreased activation in the right precentral gyrus and increased activation in the right frontal pole; and (b) thresholded unsigned prediction error: increased activation in the right anterior cingulate gyrus and bilateral precentral gyrus. Groups did not differ in brain activation during nonsocial reward prediction errors. Within the ASD group, exploratory analyses revealed that reaction times and social-communication impairments were related to precentral gyrus activation during social prediction errors. These findings elucidate the neural mechanisms of social reward prediction errors in ASD and suggest that ASD is characterized by greater neural atypicalities during social, relative to nonsocial, reward prediction errors. Autism Res 2020, 13: 715–728. © 2020 International Society for Autism Research, Wiley Periodicals, Inc.
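As an illustrative sketch only (not code from the study), the two prediction-error definitions in the abstract can be expressed as follows; the function names and the 0/1 convention for the thresholded unsigned error are assumptions made here for clarity:

```python
# Hypothetical illustration of the two reward prediction error definitions
# described in the abstract; names and conventions are assumed, not the
# authors' implementation.

def signed_prediction_error(experienced_reward, expected_reward):
    """Signed prediction error: experienced minus expected reward."""
    return experienced_reward - expected_reward

def thresholded_unsigned_prediction_error(outcome_was_expected):
    """Thresholded unsigned prediction error: distinguishes expected from
    unexpected outcomes regardless of magnitude (assumed coded as 0/1)."""
    return 0 if outcome_was_expected else 1
```

For example, a trial where a reward of 1.0 was experienced but 0.5 was expected yields a signed prediction error of +0.5, while the thresholded unsigned measure would simply flag the trial as unexpected.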

Lay Summary

We used brain imaging to evaluate differences in brain activation in adolescents with autism while they performed tasks that involved learning about social and nonsocial information. We found no differences in brain responses during the nonsocial condition, but differences during the social condition of the learning task. This study provides evidence that autism may involve different patterns of brain activation when learning about social information.


In Face Perception and Memory

Our memory for faces from other age and ethnic groups is substantially reduced. The figure shows results from an experiment in which we examined such own-race and own-age biases.

Mission Statement

The efficient analysis and representation of person-related information is one of the most important challenges for human social perception. Faces, for instance, inform us about a large variety of socially relevant information, including a person's identity, emotions, gender, age, ethnic background, and focus of attention. Cognitive models of face perception acknowledge a degree of functional independence between these different aspects of perception, each of which may be mediated by different types of "diagnostic" information in the stimulus. Cognitive neuroscience is beginning to reveal the neural mechanisms that underlie face perception, but there has been little work on integrating those data with models from cognitive and social psychology. The present applicants have begun to collaborate successfully on this integrative view. In this Research Unit, we will adopt a multilevel methodological approach to promote a unified theory of the psychological and neural bases of person perception. In a closely coordinated research programme, we will investigate i) basic perceptual processes, ii) the processing of social and emotional information about people, and iii) person perception in specific populations.


DNA methylation analysis of the autistic brain reveals multiple dysregulated biological pathways

Autism spectrum disorders (ASD) are a group of neurodevelopmental conditions characterized by dysfunction in social interaction, communication, and stereotypic behavior. Genetic and environmental factors have been implicated in the development of ASD, but the molecular mechanisms underlying their interaction are not clear. Epigenetic modifications have been suggested as a molecular mechanism that can mediate the interaction between the environment and the genome to produce adaptive or maladaptive behaviors. Here, using the Illumina 450K methylation array, we have determined the existence of many dysregulated CpGs in two cortical regions, Brodmann area 10 (BA10) and Brodmann area 24 (BA24), of individuals who had ASD. In BA10 we found a very significant enrichment for genomic areas responsible for immune functions among the hypomethylated CpGs, whereas genes related to the synaptic membrane were enriched among hypermethylated CpGs. By comparing our methylome data with previously published transcriptome data, and by performing real-time PCR on selected genes that were dysregulated in our study, we show that hypomethylated genes are often overexpressed, and that there is an inverse correlation between gene expression and DNA methylation within the individuals. Among these genes were C1Q, C3, ITGB2 (C3R), TNF-α, IRF8 and SPI1, which have recently been implicated in synaptic pruning and microglial cell specification. Finally, we determined the epigenetic dysregulation of the gene HDAC4, and we confirm that the locus encompassing C11orf21/TSPAN32 has multiple hypomethylated CpGs in the autistic brain, as previously demonstrated. Our data suggest a possible role for epigenetic processes in the etiology of ASD.

Figures

- DNA methylation changes in autistic cerebral cortex regions.
- Gene Ontology analysis of differentially methylated CpGs in autistic prefrontal cortex.
- Directional association between gene expression and methylation in prefrontal cortex.
- Dysregulated DNA methylation in the genes HDAC4 and TSPAN32.


Neural Circuits and Behavior

Our brain is responsible for all our perceptions, thoughts, and actions. Despite the incredible array of processes the brain performs, from memory to emotion, its elementary units of function are the nerve cell and the synaptic junction. How is it that a collection of neurons and their synapses gives rise to all of animal and human behavior?

In mammals, and especially in humans, the cerebral cortex is an area of the brain that is crucially involved in nearly all cognitive functions. Individual neurons in the cortex can make over 10,000 connections with other brain cells. The precise pattern of connections between a local group of neurons in the cortex gives rise to its elementary unit of computation: the cortical microcircuit.

The goal of our laboratory is to reveal the neural basis of perception. More specifically, we want to understand exactly how cortical microcircuits process sensory information to drive behavior. While decades of research have carefully outlined how individual neurons extract specific features from the sensory environment, the cellular and synaptic mechanisms that permit ensembles of cortical neurons to actually process sensory information and generate perceptions are largely unknown.

Addressing this fundamental question of modern neuroscience requires working at both the cellular and system-wide level to assess how populations of neurons cooperate to encode information, generate perceptions, and execute behavioral decisions. Towards this end, we monitor and then manipulate specific subsets of genetically identified neurons in awake, behaving mice to quantitatively determine their contribution to sensory processing and behavior. By turning neurons 'on' and 'off' using optogenetic approaches, we can identify groups of cortical neurons that are both necessary and sufficient for specific neural computations. By complementing our in vivo work with detailed analysis of synaptic connectivity and network dynamics in vitro, we hope to arrive at a more complete understanding of how neural circuits in our brain support sensation, cognition, and action. Our lab is also developing a suite of novel optical and genetic approaches to manipulate neural circuits in the intact brain at far greater resolution than is possible with current techniques. These new techniques will allow us to address fundamental questions about sensory computation and perception that have as yet eluded investigation.

There are three major aims of our research. Our first goal is to understand the role of horizontal and vertical connections across each of the six major cortical layers in computing the size, shape, and texture of sensory objects. Horizontal connections that link cortical columns are likely to play a critical role in these processes by providing a context for sensory stimuli in space. We take advantage of the mouse 'barrel' cortex, in which each cortical column represents one whisker on the face. Using multi-electrode array recordings, two-photon imaging, and in vivo whole-cell voltage clamp recordings in the barrel cortex of awake, actively whisking mice, we are dissecting the cortical circuits responsible for encoding and decoding tactile stimuli.

The second goal of the lab is to develop new high-speed and spatially precise optical approaches to manipulate neural activity at the level of single neurons in the intact brains of awake, behaving animals to decipher the neural code that underlies sensory perception. By combining non-linear optics with optogenetics we aim to control the activity of hundreds to thousands of neurons at single cell resolution, providing the key tool to understand how patterns of action potentials in space and time represent the external world.

Our third goal is to understand how global activity across multiple sensory, motor, and associational areas synthesizes specific perceptions and selects the appropriate behavioral response to a given set of environmental conditions. We combine large-scale electrophysiology with optogenetic techniques to reveal how brain areas cooperate to generate specific percepts.

Our hope is that by understanding how the brain generates perceptions at the level of synapses and circuits, we will not only come to a much deeper appreciation of the biological mechanisms underlying brain function, but also reveal new avenues to treat neurological diseases such as autism, schizophrenia, epilepsy, and movement disorders.
