about
I am a Doctoral Student in Machine Learning and a Fellow at the ETH AI Center, advised by Fanny Yang.
My research focuses on alignment and safety for language models, privacy-preserving machine learning, and trustworthy machine learning more broadly. During my PhD, I spent time at Apple, where I worked on pre-training visual generation models, and at Google DeepMind, where I worked on language model post-training and distillation. Before my PhD, I was a Research Scientist at Featurespace. I also conducted research at the University of Cambridge, on conformal prediction with Adrian Weller MBE and on interpretability methods for causal inference with Mihaela van der Schaar.
research
Full list on Google Scholar.
* equal contribution
- Efficient randomized experiments using foundation models
- Copyright-protected language generation via adaptive model fusion (oral)
- Detecting critical treatment effect bias in small subgroups
- Privacy-preserving data release leveraging optimal transport and particle gradient descent
- Hidden yet quantifiable: a lower bound for confounding strength using randomized trials
- Approximating full conformal prediction at scale via influence functions (oral)
news
- Sep 2025 Joined Google DeepMind as a PhD Student Researcher working with the Gemma post-training team.
- Apr 2025 Presented Copyright-Protected Language Generation via Adaptive Model Fusion as an oral at ICLR 2025 in Singapore.
- Mar 2025 Started working at Apple on image and video generation for Apple Intelligence.
- Oct 2024 Visited the Simons Institute at UC Berkeley.
- Feb 2023 Presented Approximating full conformal prediction at scale via influence functions as an oral at AAAI 2023 in Washington, DC.
- Dec 2022 Started my PhD at the ETH AI Center.