
"I raised to my lips a spoonful of the tea in which I had soaked a morsel of the cake. [...]

And suddenly the memory returns. The taste was that of the little crumb of madeleine which on Sunday mornings at Combray, when I went to say good day to her in her bedroom, my aunt Léonie used to give me, dipping it first in her own cup of real or of lime-flower tea." 

Marcel Proust, In Search of Lost Time

what happened here?

How are episodic memories stored in neural circuits? What happens when they are recalled?

How do we forget, and why? These are some of the questions we address, using a combination of top-down circuit models and data analysis (rodent hippocampus).


Ambiguity and noise plague virtually all biologically relevant computation. Hence, neural circuits need to represent and appropriately deal with uncertainty.

We use theoretical models to identify general principles for representing uncertainty in neural circuits and develop novel data analysis tools to validate these models experimentally (ferret and monkey V1).

We are particularly interested in metamemory: making judgements about the reliability of information retrieved from memory.



The properties of single neurons and their interactions change all the time, on a multitude of time scales. This plasticity underlies the remarkable flexibility and robustness of the brain, be it when cramming for the next exam or when recovering function after injury. The challenge is to understand how the vast array of different plasticity mechanisms interact to give rise to stable circuit function.

To address this question, we build probabilistic models that describe biologically relevant computations and then use techniques borrowed from machine learning to work out how neural circuits could approximate the optimal solution for these tasks.

This work provides a normative account for the richness and diversity of synaptic and neural plasticity. In particular, it shows that homeostatic plasticity plays critical roles in circuit function, both when learning efficient representations of sensory inputs and during efficient memory recall.

building efficient representations of sensory inputs

Theories are nice, but they are truly useful only when they link back to experimental data. Our theoretical results motivate new ways of looking at neural activity and suggest novel experiments. We collaborate extensively with experimental labs and use state-of-the-art machine learning to build new analysis tools for multiunit activity from (typically awake, behaving) animals.

Current projects here involve using maximum entropy models to characterize the joint statistics of neural activity in the hippocampus and how they change with learning. We are particularly interested in time-varying data: extracting low-dimensional representations of population dynamics and quantifying how these change with uncertainty.
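To make the maximum entropy approach concrete, here is a minimal sketch of fitting a pairwise maximum entropy (Ising) model to binarized population activity. The data, the number of neurons, and all learning parameters are hypothetical stand-ins (real recordings would replace the simulated spike matrix); for small populations the model distribution can be computed exactly by enumerating all binary patterns, and the fit reduces to matching the empirical first- and second-order moments by gradient ascent on the log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binarized spike data: T time bins x N neurons
# (a stand-in for real multiunit recordings).
T, N = 5000, 4
data = (rng.random((T, N)) < 0.2).astype(float)

# Enumerate all 2^N binary patterns (exact inference, feasible for small N only).
states = np.array([[(s >> i) & 1 for i in range(N)]
                   for s in range(2 ** N)], dtype=float)

def model_moments(h, J):
    """Mean and pairwise moments under P(x) ~ exp(h.x + x'Jx)."""
    E = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    m1 = p @ states                        # <x_i>
    m2 = states.T @ (p[:, None] * states)  # <x_i x_j>
    return m1, m2

# Empirical moments to match.
d1 = data.mean(0)
d2 = data.T @ data / T

# Gradient ascent on the (concave) log-likelihood = moment matching.
h = np.zeros(N)
J = np.zeros((N, N))
for _ in range(2000):
    m1, m2 = model_moments(h, J)
    h += 0.5 * (d1 - m1)
    dJ = 0.5 * np.triu(d2 - m2, 1)
    J += dJ + dJ.T  # keep J symmetric with zero diagonal

m1, m2 = model_moments(h, J)
# After fitting, model moments should closely match the data moments.
```

Because the log-likelihood of an exponential-family model is concave in its parameters, this simple gradient scheme converges to the unique moment-matching solution; for realistic population sizes one would replace the exact enumeration with Monte Carlo or mean-field approximations.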

how to make the most of time-limited neural recordings?