Clémentine Dominé


I am a PhD candidate at the Gatsby Computational Neuroscience Unit, supervised by Andrew Saxe and Caswell Barry. My work sits at the intersection of theoretical neuroscience and theoretical machine learning. Previously, I completed a degree in Theoretical Physics at the University of Manchester, with an exchange at the University of California, Los Angeles.

At a high level, my research aims to understand how the brain learns and forms representations to perform complex tasks, such as continual, curriculum, and reversal learning, as well as the acquisition of structured knowledge. I develop mathematical frameworks grounded in deep learning theory to describe complex and adaptive learning mechanisms, addressing questions from both machine learning and cognitive science/neuroscience perspectives.

Beyond my research, I am deeply involved in the academic community. I co-organize the UniReps: Unifying Representations in Neural Models workshop at NeurIPS, an event dedicated to fostering collaboration and dialogue between researchers working to unify our understanding of representations in biological and artificial neural networks.

* denotes equal contribution (co-first authors)

Published

  • Jarvis, D.*, Lee, S.*, Dominé, C.*, Saxe, A.M., and Sarao Mannelli, S. (2025). A Theory of Initialisation’s Impact on Specialisation. Proceedings of the International Conference on Learning Representations (ICLR 2025). [paper]
  • Dominé, C.*, Anguita, N.*, Proca, A.M., Braun, L., Kunin, D., Mediano, P.A.M., and Saxe, A.M. (2025). From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks. Proceedings of the International Conference on Learning Representations (ICLR 2025). [paper]
  • Kunin, D.*, Raventós, A.*, Dominé, C., Chen, F., Klindt, D., Saxe, A.M., and Ganguli, S. (2024). Get Rich Quick: Exact Solutions Reveal How Unbalanced Initializations Promote Rapid Feature Learning. Advances in Neural Information Processing Systems (NeurIPS), 38 (Spotlight). [paper]
  • Fumero, M., Rodolà, E., Dominé, C., Locatello, F., Dziugaite, G.K., and Caron, M. (2024). Preface of UniReps: the First Workshop on Unifying Representations in Neural Models. In Proceedings of UniReps: the First Workshop on Unifying Representations in Neural Models (pp. 1-10). PMLR. [paper]
  • Dominé, C.*, Braun, L.*, Fitzgerald, J.E., and Saxe, A.M. (2023). Exact Learning Dynamics of Deep Linear Networks with Prior Knowledge. Journal of Statistical Mechanics: Theory and Experiment, 2023(11), p.114004. [paper]
  • Braun, L.*, Dominé, C.*, Fitzgerald, J.E., and Saxe, A.M. (2022). Exact Learning Dynamics of Deep Linear Networks with Prior Knowledge. Advances in Neural Information Processing Systems, 35, pp.6615-6629. [paper]
  • Pegoraro, M.*, Dominé, C.*, Rodolà, E., Veličković, P., and Deac, A. (2024). Geometric Epitope and Paratope Prediction. Bioinformatics, 40(7), p.btae405. [paper]

Pre-Print/Under-Review

  • Thompson, E., Rollik, L., Waked, B., Mills, G., Kaur, J., Geva, B., Carrasco-Davis, R., George, T., Dominé, C., Dorrell, W., and Stephenson-Jones, M. (2024). Replay of Procedural Experience is Independent of the Hippocampus. bioRxiv. [paper]
  • Dominé, C.*, Carrasco Davis, R.A.*, Hollingsworth, L., Sirmpilatze, N., Tyson, A.L., Jarvis, D., Barry, C., and Saxe, A.M. (2024). Neural Playground: A Standardised Environment for Evaluating Models of Hippocampus and Entorhinal Cortex. bioRxiv. [paper]

News

Jan 20, 2025 Two papers accepted at ICLR 2025! 🎉 🍊
  • A Theory of Initialisation’s Impact on Specialisation [paper]
  • From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks [paper]
See you in Singapore! 🎉🍊
Dec 11, 2024 I will be at NeurIPS 2024, presenting our Spotlight paper Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning. Make sure to also come along to the UniReps workshop!🔵🔴
Dec 1, 2024 In Santa Barbara for the Follow-on Program at the Kavli Institute for Theoretical Physics! 🌴🍊
Jun 16, 2024 I am excited to announce that I will be visiting the Zuckerman Institute at Columbia University in New York this autumn to collaborate with Dr. Kimberly Stachenfeld! 🎉 🍊