
Research in the Sensorimotor Group

 

In the Sensorimotor Group we are interested in the neural processes underlying goal-directed behavior. We investigate the basis of movement planning and decision making in different areas of the cerebral cortex of primates. We have a particular focus on the interplay between frontal and parietal lobe areas in the context of rule-guided behavior. Our research in fundamental neuroscience goes hand in hand with research towards modern neuroprostheses and the development of neurotechnology tools. Additionally, we advance methods with the goal of improving animal welfare.

Neuroscience of goal-directed behavior

Movements are more than reflexive responses to environmental changes. Goal-directed behavior is the consequence of planning and deciding and is continuously shaped by adaptation and associative learning. In the Sensorimotor Group we investigate the cortical neural mechanisms underlying the planning and selection of goal-directed action. What do we encode about future movements, where in the fronto-parietal sensorimotor system, and when do movement planning and selection take place?

We employ different research methods towards our neuroscientific research goals. We mainly focus on neurophysiological recordings in the posterior parietal and premotor cortices of rhesus monkeys. Particularly, we investigate the parietal reach region (PRR) and the dorsal premotor cortex (PMd) during reaching. We interpret our neurophysiological findings with the help of computational modelling and by relating them to the sensorimotor abilities in humans and monkeys that we measure with psychophysical methods.

What? – Motor-goals in different spatial frames of reference

Motor-goals in different spatial frames of reference
Movement goals can be described in different spatial frames of reference, for example the position of a glass of milk relative to the direction of gaze or relative to the hand.

Movements are changes in body configuration in time and space. Hence, one can describe movements in different spatial frames of reference. For example, the position of the hand relative to one's own body, as determined by the shoulder, elbow or wrist angles, might matter; or the desired endpoint of a reach relative to the direction of gaze; or the position of a reach target relative to other objects in the visual field. To what extent do reference frames change dynamically or depend on the behavioral context? To what extent are different reference frames used in neural processing? We want to understand how sensorimotor transformations are achieved in the brain and adapted to new situations.
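
As an illustration, the following minimal Python sketch (with made-up positions, not data from our experiments) expresses one and the same reach goal in different frames of reference:

```python
# Illustrative sketch: the same reach goal expressed in different spatial
# frames of reference. All positions are made-up example values given in a
# common body-centered coordinate system (in meters).
import numpy as np

target_body = np.array([0.30, 0.10, 0.00])   # reach target, e.g. a glass of milk
gaze_body   = np.array([0.25, 0.20, 0.15])   # current gaze fixation point
hand_body   = np.array([0.10, -0.05, 0.00])  # current hand position

# The identical physical goal, re-expressed relative to different anchors:
target_gaze_centered = target_body - gaze_body   # glass relative to the direction of gaze
target_hand_centered = target_body - hand_body   # glass relative to the hand

print(target_gaze_centered, target_hand_centered)
```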

Sensory versus movement-related encoding
At different points in time, the parietal reach region can encode both information about a visual cue (light blue line) and the relevance of the cue for the subsequent movement (magenta).

Sensory versus motor-related encoding Depending on the behavioral context, the location of a target object can be associated with different motor-goals, e.g., reaching towards or away from the object. Correspondingly, sensorimotor transformations need to be performed in a context-specific manner. Individual neurons in PRR and PMd, two parts of the brain known to be involved in sensorimotor transformations, reflect this ability in their activation patterns. On the one hand, the experimental dissociation of sensory input and associated movement allows us to isolate the neural encoding of motor-goals from the encoding of sensory stimuli. On the other hand, such context-dependent sensorimotor transformations allow flexible responses to sensory input, a cornerstone of goal-directed behavior which we can investigate this way (Gail & Andersen 2006; Brozovich et al. 2007; Gail et al. 2009).
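
For illustration, a minimal Python sketch (a simplified toy rule, not our experimental code) of how the same visual cue can be mapped onto different motor-goals depending on the behavioral context:

```python
# Illustrative toy rule: the context determines whether a cue location is mapped
# onto a reach towards it ("pro") or away from it ("anti").
def motor_goal(cue_direction_deg: float, rule: str) -> float:
    """Reach direction implied by a visual cue under a given behavioral rule."""
    if rule == "pro":     # reach towards the cued location
        return cue_direction_deg % 360.0
    if rule == "anti":    # reach to the location opposite the cue
        return (cue_direction_deg + 180.0) % 360.0
    raise ValueError(f"unknown rule: {rule}")

# The identical sensory input (cue at 90 degrees) yields different motor goals:
print(motor_goal(90.0, "pro"), motor_goal(90.0, "anti"))   # 90.0, 270.0
```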

Planning in visual and physical space
Special glasses allow the hand to be seen displaced from its actual position. With this setup we can investigate the roles of visual versus physical movement planning.

Planning in visual and physical space We have a remarkable lack of awareness regarding the nature of our own motor commands, in sharp contrast to our detailed awareness of sensory inputs. So what are motor-goals at the cognitive level? A long-standing idea posits that movements are planned, selected and initiated based on the sensory state that is associated with a certain movement. When we dissociate the sensory feedback associated with a movement from the physical movement, neurons in the posterior parietal and also the dorsal premotor cortex reflect the motor goal in both physical and visual space, which can also be interpreted as the associated proprioceptive and visual feedback about a pending movement (Kuang & Gail 2014; Kuang et al. 2015; Kuang et al., in preparation).
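
The dissociation can be illustrated with a minimal Python sketch (made-up numbers, not our setup's software): when visual feedback of the hand is displaced, the motor goal in visual space no longer coincides with the motor goal in physical space:

```python
# Illustrative sketch: displaced visual feedback dissociates the motor goal in
# visual space from the motor goal in physical space (example values in meters).
import numpy as np

physical_target = np.array([0.30, 0.10])   # where the seen hand should end up
feedback_shift  = np.array([0.00, 0.05])   # displacement introduced by the glasses

visual_goal   = physical_target                   # goal defined in visual space
physical_goal = physical_target - feedback_shift  # endpoint the hand must physically reach
print(visual_goal, physical_goal)
```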

Object-centered reaching

Object-centered reaching When the task demands it, we can plan movements not only relative to our own body (egocentric reference), but also relative to other objects (allocentric reference). Neurons in the fronto-parietal sensorimotor network reflect this ability by mixed encoding of egocentric and allocentric motor-goal information (Taghizadeh & Gail 2014; Taghizadeh & Gail, in preparation).

The movement towards a target object like a cup of coffee can be planned either relative to the subject's own body (egocentric, blue arrow) or relative to another object (allocentric, red arrow).
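
The difference between the two reference frames can be captured in a minimal Python sketch (illustrative numbers, not our analysis code):

```python
# Illustrative sketch: the same reach goal expressed egocentrically (relative to
# the body) and allocentrically (relative to another object), in meters.
import numpy as np

cup_position_body    = np.array([0.35, 0.10])  # target object, body-centered
saucer_position_body = np.array([0.30, 0.05])  # reference object, body-centered

egocentric_goal  = cup_position_body                         # goal relative to one's own body
allocentric_goal = cup_position_body - saucer_position_body  # goal relative to the reference object

# If the reference object is displaced, an allocentric plan moves the reach goal
# along with it, whereas an egocentric plan does not:
saucer_shifted = saucer_position_body + np.array([0.05, 0.00])
print(egocentric_goal, saucer_shifted + allocentric_goal)
```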

Sensorimotor adaptation

Sensorimotor adaptation Sensorimotor transformations for goal-directed behavior are highly adaptable. This is necessary to compensate for changes in our environment or our body. Experimentally, one can induce adaptation by perturbing movements or sensory feedback. From the pattern of how adaptation generalizes to motor-goals in unperturbed parts of the workspace, one can infer the spatial encoding underlying adaptation. With neural network simulations we can additionally make predictions about the underlying neural changes (Westendorff et al. 2015).

Systematic discrepancies between an intended and an actually achieved movement goal are compensated by adaptation in the sensorimotor system. Generalization of adaptation to motor goals that do not need correction can lead to unwanted motor errors.
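
A common way to think about such generalization is sketched below in Python (a generic Gaussian generalization function with made-up parameters, not the model from Westendorff et al. 2015):

```python
# Illustrative sketch: adaptation trained at one target direction generalizes to
# neighbouring directions with an assumed Gaussian profile, predicting unwanted
# errors at motor goals that never required correction.
import numpy as np

trained_direction    = 0.0    # degrees; direction at which the perturbation was compensated
learned_compensation = 30.0   # degrees of adaptation acquired at the trained direction
generalization_width = 40.0   # degrees; width of the assumed Gaussian generalization

def angular_difference(a, b):
    """Smallest signed angular difference in degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

def adaptation_at(target_direction):
    """Predicted compensation carried over to an arbitrary target direction."""
    d = angular_difference(target_direction, trained_direction)
    return learned_compensation * np.exp(-0.5 * (d / generalization_width) ** 2)

for direction in [0, 45, 90, 180]:
    # At unperturbed directions, any carried-over compensation appears as a motor error.
    print(direction, round(adaptation_at(direction), 1))
```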

Where? – Frontal and parietal contributions to rule-guided behavior

Selecting the right behavior in any given context is an important executive function. Conditional motor behavior refers to our ability to easily learn arbitrary abstract associations between a behavioral cue and an appropriate response. Overcoming stimulus-driven “impulsive” behavior with contextually controlled behavior is an important frontal lobe function. We investigate the mutual contributions and interactions of parietal and frontal lobe motor planning areas in context-dependent sensorimotor transformations.

Similarities and differences in parietal and frontal cortex.
In situations without spatial stimulus-response congruency, neurons in the premotor cortex (PMd) encode the motor goal earlier than neurons in the parietal cortex (PRR).

Similarities and differences in parietal and premotor encoding We analyze motor-goal encoding and its precise timing during rule-guided action selection to understand the functional structure and potential mutual contingencies in the cortical sensorimotor network. Basic neural encoding properties can be rather similar in reach-related frontal and parietal lobe areas. Yet, differences in spatial selectivity and timing become particularly obvious when comparing situations with and without spatial stimulus-response congruency, i.e. situations which do or do not require rule-based stimulus-response remapping (Gail et al. 2009; Westendorff et al. 2010).
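
One simple way to quantify such timing differences is sketched below in Python (toy selectivity time courses, not our recorded data or published analysis):

```python
# Illustrative sketch: estimate the latency at which an area starts to encode the
# motor goal as the first time a directional-selectivity time course exceeds a
# criterion. Both time courses are toy examples.
import numpy as np

def encoding_latency(selectivity, times, criterion=0.5):
    """First time point at which selectivity exceeds the criterion, or None."""
    above = np.where(selectivity > criterion)[0]
    return times[above[0]] if above.size else None

times = np.arange(-0.2, 0.8, 0.01)                            # seconds relative to cue onset
pmd_selectivity = 1 / (1 + np.exp(-(times - 0.15) / 0.03))    # toy: earlier rise in PMd
prr_selectivity = 1 / (1 + np.exp(-(times - 0.25) / 0.03))    # toy: later rise in PRR

print(encoding_latency(pmd_selectivity, times), encoding_latency(prr_selectivity, times))
```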

Fronto-parietal functional interactions
Dynamic functional interactions between frontal (marked in yellow) and parietal areas (marked in red) are expected to serve important executive control functions.

Fronto-parietal functional interactions In addition to characterizing individual motor planning areas, functional interaction between areas is key to understanding the structure of the sensorimotor system. While local signal correlations between the spiking patterns of neurons suggest functional organization within areas, especially in the parietal cortex (Chakrabarti et al. 2014), across-area interactions can better be analyzed with local field potentials. The latter suggest phasic fronto-parietal interactions specific to working-memory processes, an important executive function (Martinez-Vazquez & Gail 2018).
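
A standard spectral measure of such across-area interaction is coherence between field potentials; a minimal Python sketch with simulated signals (not our recordings or published pipeline) is shown below:

```python
# Illustrative sketch: coherence between two simulated LFP channels, one
# "frontal" and one "parietal", as a simple measure of across-area interaction.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                       # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)      # 10 s of simulated data
rng = np.random.default_rng(0)

shared_beta = np.sin(2 * np.pi * 20 * t)                  # shared 20 Hz component
lfp_frontal  = shared_beta + rng.standard_normal(t.size)
lfp_parietal = 0.8 * shared_beta + rng.standard_normal(t.size)

freqs, coh = coherence(lfp_frontal, lfp_parietal, fs=fs, nperseg=1024)
print(f"peak coherence at ~{freqs[np.argmax(coh)]:.1f} Hz")   # expected near 20 Hz
```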

When? – The interdependence of action planning and selection

When (under which circumstances) does the sensorimotor system become engaged in cognitive processes that are not immediately associated with motor planning or even motor control? Does motor planning come into play before we commit to a certain choice, or only once we have selected a specific action? To what extent is preliminary action planning an inherent part of action selection? We investigate the influence of action-outcome contingencies on behavioral choice and on motor planning activity in the sensorimotor system.

Rule-based action selection
The sensorimotor system can encode two potential motor goals in parallel (two hills of activity in the center of the figure). As soon as one of the motor goals is chosen and the movement is performed, this goal is predominantly encoded (dominant hill on the right side of the figure).

Rule-based action selection Goal-directed behavior implies that we choose among multiple alternative actions, which might have to be inferred from the sensory input by means of learned behavioral rules. For example, we might have to decide whether to aim at a target (our tennis partner in a friendly match) or away from the target (our opponent in a competitive match). In such rule-based selection tasks the sensorimotor system can encode two potential motor-goals in parallel, provided both goals are equally preferred. Choice behavior in this case can depend on previous learning experience and on immediately preceding action planning (Klaes et al. 2011; Klaes et al. 2012; Suriya-Arunroj & Gail 2015; Suriya-Arunroj & Gail, in preparation).
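
The idea of parallel motor-goal encoding can be illustrated with a toy population model in Python (made-up tuning curves, not our neural data):

```python
# Illustrative toy model: a population of direction-tuned neurons represents two
# potential motor goals as two "hills" of activity; after the decision a single
# hill dominates.
import numpy as np

preferred_directions = np.arange(0, 360, 10)   # one model neuron per 10 degrees

def tuning(goal_deg, width=30.0):
    d = (preferred_directions - goal_deg + 180.0) % 360.0 - 180.0
    return np.exp(-0.5 * (d / width) ** 2)

# Before the decision: both potential goals are encoded with equal strength.
population_two_goals = 0.5 * tuning(90.0) + 0.5 * tuning(270.0)
# After the decision: one goal is predominantly encoded.
population_chosen = 1.0 * tuning(270.0) + 0.1 * tuning(90.0)

i90, i270 = 9, 27   # neurons preferring 90 and 270 degrees
print(population_two_goals[i90], population_two_goals[i270])   # two comparable hills
print(population_chosen[i90], population_chosen[i270])         # one dominant hill
```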

Video about rule-based action selection research at the Sensorimotor Group
 

Economics and ergonomics

Economics and ergonomics Choice of action depends on multiple factors, not just economic criteria. To investigate action planning and selection in more complex motor behaviors, we run experiments on arm movements in stereoscopic 3D virtual reality settings in which humans and monkeys interact with haptic robotic interfaces (Morel et al. 2017).

 

Schematic illustration of a monkey interacting with a haptic robotic interface (top) in a stereoscopic 3D virtual reality.

Neurotechnology and Neuroprosthetics

Neural signals from the central and from the peripheral nervous system can be used to control advanced motor prostheses. Based on our improved understanding of the neural basis of motor planning, we can identify suitable neuroprosthetic control signals. Additionally, we work towards improved adaptive neural recordings and wireless myoelectric recordings for prosthetic control.

Adaptive multi-electrode positioning system (BFNT-AMEP)
With the help of pattern recognition algorithms, action potentials (top right) can be extracted from complex brain signals and assigned to individual neurons. The signal from one time point (τk-1) to the next (τk) can be recorded and analyzed with consistent quality (bottom right) by adjusting microelectrode positions in an automated fashion.

Adaptive multi-electrode positioning system (BFNT-AMEP) Chronically implanted intracortical multi-electrode arrays give scientists the opportunity to record from many neurons simultaneously and over prolonged periods of time in non-human primates, and in rare cases also in human patients. For signal optimization it is desirable to combine chronic systems with the possibility to adapt electrode positions repeatedly, especially if repositioning can be automated. A computerized and adaptive implantable electrode positioning system was developed by our industrial partner Thomas RECORDING, Giessen (Chakrabarti et al. 2012; Ferrea et al. 2018).
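
The kind of pattern recognition referred to above can be illustrated with a generic spike-sorting sketch in Python (standard threshold detection, PCA and clustering; not the algorithms used in the BFNT-AMEP system):

```python
# Illustrative sketch of generic spike sorting: detect action potentials by
# threshold crossing, extract waveform snippets, and assign them to putative
# single neurons by clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(signal, threshold=-4.0, snippet=32, n_units=2):
    """Very simplified spike sorting on one extracellular channel."""
    z = (signal - signal.mean()) / signal.std()
    crossings = np.where((z[1:] < threshold) & (z[:-1] >= threshold))[0]  # downward threshold crossings
    crossings = crossings[crossings + snippet <= signal.size]
    waveforms = np.array([signal[i:i + snippet] for i in crossings])      # waveform snippets
    if len(waveforms) < max(n_units, 4):
        return crossings, np.zeros(len(waveforms), dtype=int)
    features = PCA(n_components=3).fit_transform(waveforms)               # low-dimensional features
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(features)  # putative single units
    return crossings, labels
```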

  

Myoplant
Myo-electric signals from arm muscles during goal-directed movements are recorded and wirelessly transmitted to a prosthetic device.

Myoelectric prosthetics (MYOPLANT) Modern motor prostheses can be controlled with electrical signals from intact muscles. To date, such myoelectric control signals, rather than signals from the brain, are clinically the most relevant ones in motor prosthetics. We help to improve this approach by developing advanced wireless implantable myo-electric recording techniques. The Myoplant project is a collaborative effort between the Otto Bock Healthcare Company, a leading industrial partner in the field of prosthetics, and academic and clinical partners. The goal is to develop a new generation of bionic hand prostheses controlled via fully implantable wireless myo-electric recording systems (Ruff et al. 2010; Lewis et al. 2013; Morel, Ferrea et al. 2016).
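
As a rough illustration of such a signal chain, the following Python sketch (textbook EMG processing, not the MYOPLANT implementation) turns a raw myoelectric signal into a smooth control signal:

```python
# Illustrative sketch: rectify and low-pass filter raw EMG to obtain an
# activation envelope in [0, 1] that could drive, e.g., the grip of a prosthesis.
import numpy as np
from scipy.signal import butter, filtfilt

def emg_to_control(emg, fs=1000.0, cutoff_hz=5.0):
    rectified = np.abs(emg - np.mean(emg))               # remove offset, full-wave rectify
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")  # 2nd-order low-pass filter
    envelope = filtfilt(b, a, rectified)                 # zero-phase smoothing
    return np.clip(envelope / (envelope.max() + 1e-12), 0.0, 1.0)

# Example: a simulated burst of muscle activity yields a ramp-like control signal.
t = np.arange(0, 2, 1 / 1000.0)
emg = np.random.randn(t.size) * (0.2 + (t > 1.0))        # stronger activity in the second half
control = emg_to_control(emg)
```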

        

Animal Welfare

Severity Assessment

Working with non-human primates, especially with awake and behaving animals, puts high demands on the care and wellbeing of the animals. More reliable evidence is needed on the determinants of the welfare of non-human primates used in research. This includes the development of novel experimental approaches and their comparison with existing ones, together with a more systematic assessment of potential welfare-related biomarkers (Pfefferle et al. 2018). Activities along those lines should not be confined to individual labs. To achieve these far-reaching goals we collaborate with multiple labs locally (DPZ-WeCo; see DPZ Aktuell 4/2014, p. 9), nationally (DFG-FOR 1847, DFG-FOR 2591), and internationally (EUPRIM-Net).

XBI
A monkey is performing a cognitive test on a touch screen to which the animal has free and self-paced access within its cage environment.

Cage-based testing systems As one central element of such alternative approaches, we developed a technical system (XBI) with which rhesus monkeys can perform cognitive and sensorimotor tasks in their cage environment. The system allows self-paced interaction via a touch screen and can be flexibly programmed to run behavioral experiments largely equivalent to conventional neuroscientific settings (Calapai et al. 2016; Berger et al. 2018).