Research

Research Areas

Our lab combines behavioral tasks, large-scale electrophysiology, calcium imaging, and computational modeling to understand the neural mechanisms of learning and decision making.

Decision Making

Our lab studies how the brain evaluates options, weighs evidence, and commits to actions under uncertainty. We combine behavioral paradigms with large-scale neural recordings to understand how value representations emerge across cortical and subcortical circuits during decision making. By tracking neural activity across multiple brain regions simultaneously, we aim to reveal how distributed circuits coordinate to produce adaptive choices.

Key Questions

  • How does the brain represent and compare values of different options?
  • What neural mechanisms underlie the speed-accuracy tradeoff in decisions?
  • How do neuromodulators shape the decision process in real time?
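One standard formalization of the speed-accuracy tradeoff is the drift-diffusion model, in which noisy evidence accumulates until it crosses a decision bound. The sketch below is illustrative only, not the lab's model; the function name and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def simulate_ddm(drift, threshold, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """One trial of a drift-diffusion model: accumulate noisy evidence
    until it crosses +threshold (choice 1) or -threshold (choice 0),
    or until max_t seconds elapse. Returns (choice, reaction_time)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else 0), t
```

With positive drift, choice 1 is the "correct" boundary; raising the threshold lengthens reaction times but raises accuracy, which is the speed-accuracy tradeoff in its simplest form.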

Approaches

  • Multi-region Neuropixels recordings
  • Behavioral modeling
  • Optogenetic circuit manipulation

Neuromodulation

Neuromodulators like dopamine play critical roles in learning, motivation, and cognitive flexibility. Our lab investigates how rapid dopamine dynamics in the striatum and prefrontal cortex modulate local circuit computations, and how disruptions in these signals contribute to neuropsychiatric conditions. We use fiber photometry and calcium imaging to measure neuromodulator dynamics at subsecond timescales during complex behaviors.
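As a concrete example of the kind of preprocessing photometry data typically requires, one common step is to correct the fluorescence trace for photobleaching and motion by least-squares fitting a simultaneously recorded isosbestic control channel to the signal channel and using the fit as the baseline F0. This is a generic sketch of that widely used approach, not the lab's actual pipeline; `signal` and `control` are hypothetical 1-D arrays sampled at the same rate.

```python
import numpy as np

def dff_from_isosbestic(signal, control):
    """Motion/bleaching-corrected dF/F: regress the isosbestic control
    channel onto the activity-dependent signal channel, then treat the
    fitted trace as the baseline F0."""
    # Least-squares fit: signal ~ a * control + b
    A = np.vstack([control, np.ones_like(control)]).T
    coef, _, _, _ = np.linalg.lstsq(A, signal, rcond=None)
    f0 = A @ coef
    return (signal - f0) / f0
```

Because F0 tracks the control channel, slow shared artifacts cancel while fast signal-specific transients (e.g. dopamine release events) survive as positive dF/F deflections.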

Key Questions

  • What information do rapid dopamine fluctuations encode beyond reward prediction errors?
  • How do dopamine signals differ across brain regions and behavioral contexts?
  • What is the relationship between tonic and phasic dopamine signaling?

Approaches

  • Fiber photometry
  • Calcium imaging of dopamine neurons
  • Pharmacological manipulation

Reinforcement Learning

We develop and test computational models of reinforcement learning to understand how the brain updates predictions, learns action values, and adapts behavior based on reward history. By comparing model predictions with neural data recorded during learning, we reveal the algorithms implemented by biological circuits and identify where they diverge from classical theoretical frameworks.
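As a minimal illustration of this model class, a model-free Q-learner on a two-armed bandit updates the chosen action's value with a reward prediction error, delta = r − Q(a). The task, parameter values, and function name below are illustrative assumptions for the example, not the lab's behavioral paradigm.

```python
import numpy as np

def run_bandit(p_reward=(0.8, 0.2), n_trials=500, alpha=0.1, beta=3.0, seed=0):
    """Model-free Q-learning on a two-armed bandit: choose by softmax
    over action values, then update the chosen value with the reward
    prediction error delta = r - Q[choice]."""
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    choices = np.empty(n_trials, dtype=int)
    for trial in range(n_trials):
        # Softmax action selection with inverse temperature beta
        logits = beta * q
        p = np.exp(logits - logits.max())
        p /= p.sum()
        c = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[c])  # Bernoulli reward
        q[c] += alpha * (r - q[c])             # RPE-driven update
        choices[trial] = c
    return q, choices
```

Fitting the learning rate and inverse temperature to an animal's trial-by-trial choices yields a per-trial prediction-error regressor that can be compared directly against recorded neural activity.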

Key Questions

  • How does the brain compute and update reward prediction errors?
  • What neural substrates implement different components of RL algorithms?
  • How do model-based and model-free learning strategies interact in the brain?

Approaches

  • Computational modeling
  • Neural data analysis
  • Neuropixels recordings during learning

Interested in our research?

Explore our publications for detailed findings, or learn about opportunities to join our team.