I am a postdoctoral fellow at École Polytechnique (Paris), working in the ML team with El Mahdi El Mhamdi's group on robust learning theory and preference learning methods.
From January 2023 to September 2025, I was a postdoctoral fellow at CTU (Prague) in the optimization group, working with Jakub Mareček on the optimization of tame functions.
Prior to this position, I did my PhD at the Laboratoire Jean Kuntzmann (Grenoble), in the DAO team under the supervision of Franck Iutzeler and Jérôme Malick. I focused on structured nonsmooth problems, which include many learning problems as well as more general nonsmooth ones. I developed structure identification procedures and fast (Newton-type) methods for such nonsmooth nonconvex problems. More in my PhD thesis!
Here is a resume and a list of publications.
Interests: Application-wise, I'm focusing on Preference Learning and Safety of ML systems. Math-wise, I enjoy using techniques from Nonsmooth nonconvex optimization; Second-order methods; Riemannian optimization; and Stratification theory.
April 2026: I will attend ICLR 2026! With contributions on stochastic constrained optimization (main track), outliers-dropout interplay in LLMs (Sci4DL workshop), and Byzantine Robustness (Trustworthy AI workshop). Feel free to reach out if you want to discuss!
March 2026: Byzantine Machine Learning: MultiKrum and an optimal notion of robustness, Journées MODE, Nice. Slides.
Dec. 2025: Generalizing while preserving monotonicity in comparison-based preference learning models, EurIPS (poster), Copenhagen.
Dec. 2025: Benchmarking Algorithms for Fairness-Constrained Training of Deep Neural Networks, EurIPS (poster), Copenhagen.
Sept. 2025: Our benchmark of solvers for learning with constraints, with Andrii, Jana, and Jakub, was accepted at the NeurIPS Constrained Optim for ML workshop!
Sept. 2025: Our work on generalization and monotonicity in preference learning, with Julien Fageot, Peva Blanchard, and Lê-Nguyên Hoang, was accepted at NeurIPS!
June 2023: Hybrid Methods for global optimization of large-scale polynomials, FOCM conference (poster), Paris. Poster.
Dec. 2022: Newton methods for structured nonsmooth optimization, Inria Mind seminar, Saclay (online).
Dec. 2022: I defended my PhD on December 2nd, 2022. I was very happy to have in the jury: Jalal Fadili and Claudia Sagastizábal as rapporteurs, Jean-Charles Gilbert and Mathurin Massias as examiners, and Nadia Brauner as president. Claude Lemaréchal also attended! Manuscript & Slides
Nov. 2022: Newton methods for structured nonsmooth optimization, Rutgers Optim & ML seminar (online).
Oct. 2022: Newton methods for structured nonsmooth optimization, Inria MLSP seminar, Lyon.
Oct. 2022: Newton methods for nonsmooth composite optimization, Journées MOA, Nice, Slides.
June 2022: Conjuguer Newton et gradient proximal pour l'optimisation non lisse (Combining Newton and the proximal gradient for nonsmooth optimization), CANUM, Evian, Abstract & Slides.
June 2022: Newton methods for nonsmooth composite optimization, Journées MODE, Limoges. Received the "prix Dodu" awarding the three best talks among young researchers. Abstract & Slides
Dec. 2020: Harnessing Structure in Optimization for Large-scale Learning, LJK PhD day, Grenoble.
Sept. 2020: On the Interplay between Acceleration and Identification for the Proximal Gradient algorithm, Journées MODE (virtual). Abstract & Slides.
Feb. 2020: Randomized Progressive Hedging methods for Multi-stage Stochastic Programming, ROADEF, Montpellier, Abstract & Slides.
E-mail: firstname . lastname |a|_ polytechnique.edu