I am a PhD student on the Foundational Artificial Intelligence CDT (2020-2024) at University College London, supervised by Benjamin Guedj at the UCL Centre for Artificial Intelligence and Arthur Gretton at the Gatsby Computational Neuroscience Unit.
My research lies at the intersection of machine learning, statistics and computer science, and currently focuses on designing kernel-based hypothesis tests for the two-sample, independence and goodness-of-fit problems. A distinctive aspect of my work is that equal emphasis is placed on theory and on practicality, which is essential for the real-world use of these tests. On the theory side, the proposed tests come with strong guarantees in terms of minimax optimality; on the practical side, much effort has gone into user-friendly, parameter-free implementations of all proposed tests, with the ability to leverage GPU architectures for significant computational speedups.
This practical aim of constructing parameter-free tests has led to a common theme throughout my research: tackling the fundamental problem of kernel selection for kernel-based hypothesis tests, either via Aggregation or via Fusion. Privacy of sensitive data is nowadays a very common concern that could hinder the adoption of these tests in real-world applications; we have therefore recently proposed tests with differential privacy guarantees, proved their minimax optimality, and released publicly available code.
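For readers unfamiliar with kernel-based testing, the sketch below shows the basic idea behind a kernel two-sample test: an MMD (maximum mean discrepancy) statistic with a Gaussian kernel, calibrated by a permutation test. This is a minimal illustration with an arbitrary fixed bandwidth, not the aggregation- or fusion-based kernel selection procedures developed in my work; all function names and parameter choices here are illustrative.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Gaussian (RBF) kernel matrix between the rows of X and Y
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd_stat(X, Y, bandwidth):
    # Biased (V-statistic) estimate of the squared MMD between X and Y
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

def mmd_permutation_test(X, Y, bandwidth=1.0, n_perms=200, alpha=0.05, seed=0):
    # Calibrate the MMD statistic under the null by permuting sample labels
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n = len(X)
    observed = mmd_stat(X, Y, bandwidth)
    null_stats = np.empty(n_perms)
    for i in range(n_perms):
        perm = rng.permutation(len(Z))
        null_stats[i] = mmd_stat(Z[perm[:n]], Z[perm[n:]], bandwidth)
    # Permutation p-value (with the +1 correction for exact validity)
    p_value = (1 + np.sum(null_stats >= observed)) / (1 + n_perms)
    return p_value <= alpha, p_value
```

On two clearly separated Gaussian samples, the test rejects the null; on two samples from the same distribution, it retains it at level alpha. The fixed-bandwidth choice here is precisely what kernel-selection methods such as aggregation aim to avoid.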
I have also always had a keen interest in generative modelling and have kept close to that community.
I am affiliated with Inria through the Inria-London Programme and the Modal research team. I also organise the DeepMind/ELLIS CSML Seminar Series on Computational Statistics and Machine Learning for the UCL ELLIS Unit (European Lab for Learning & Intelligent Systems).
A short biography can be found here.