Clémentine Dominé


I am a PhD candidate at the Gatsby Computational Neuroscience Unit, supervised by Andrew Saxe and Caswell Barry. My work sits at the intersection of theoretical neuroscience and theoretical machine learning. Previously, I completed a degree in Theoretical Physics at the University of Manchester, with an exchange at the University of California, Los Angeles.

At a high level, my research aims to understand how the brain learns and forms representations to perform complex tasks, such as continual, curriculum, and reversal learning, as well as the acquisition of structured knowledge. I develop mathematical frameworks grounded in deep learning theory to describe complex and adaptive learning mechanisms, addressing questions from both machine learning and cognitive science/neuroscience perspectives.

Beyond my research, I am deeply involved in the academic community. I co-organize the UniReps: Unifying Representations in Neural Models workshop at NeurIPS, an event dedicated to fostering collaboration and dialogue between researchers working to unify our understanding of representations in biological and artificial neural networks. 🔵🔴

News

Aug 1, 2025 I’m excited to share that I’ll be joining the labs of Marco Mondelli and Francesco Locatello as a postdoc at ISTA Vienna starting next year. Looking forward to exploring exciting questions at the intersection of machine learning theory, representation learning, and beyond!
May 20, 2025 Two papers accepted at ICML 2025! 🎉 🍊
  • Proca, A.M., Dominé, C., Shanahan, M., and Mediano, P.A.M. (2025). Learning Dynamics in Linear Recurrent Neural Networks. Proceedings of the 42nd International Conference on Machine Learning (ICML 2025). (Oral)
  • Nam, Y., Lee, S.H., Dominé, C., Park, Y.C., London, C., Choi, W., Goring, N., and Lee, S. (2025). Position: Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena (Neural Collapse, Emergence, Lazy/Rich Regime, and Grokking). ICML 2025 Position Paper Track. [paper]
May 11, 2025 🚀 We’re excited to launch the ELLIS UniReps speaker series, a monthly event exploring how neural models, both biological and artificial, develop similar internal representations, and what this means for learning, alignment, and reuse. Each session features a keynote by a senior researcher and a flash talk by an early-career scientist, fostering cross-disciplinary dialogue at the intersection of AI, neuroscience, and cognitive science! 🔵🔴
Jan 20, 2025 Two papers accepted at ICLR 2025! 🎉 🍊
  • Jarvis, D.*, Lee, S.*, Dominé, C.*, Saxe, A.M., and Sarao Mannelli, S. (2025). A Theory of Initialisation’s Impact on Specialisation. Proceedings of the International Conference on Learning Representations (ICLR 2025). [paper]
  • Dominé, C.*, Anguita, N.*, Proca, A.M., Braun, L., Kunin, D., Mediano, P.A.M. and Saxe, A.M. (2025). From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks. Proceedings of the International Conference on Learning Representations (ICLR 2025). [paper]
See you in Singapore! 🎉🍊