Interests
My research lies at the intersection of the machine learning, statistics, and computer science communities, and currently focuses on designing kernel-based hypothesis tests for the two-sample, independence, and goodness-of-fit problems. A distinctive aspect of my work is the equal emphasis placed on theory and practicality, which is essential for the real-world use of these tests: the proposed tests come with strong theoretical guarantees in the form of minimax optimality, and much effort has gone into providing user-friendly, parameter-free implementations of all of them, with the ability to leverage GPU architectures for significant computational speedups.
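To illustrate the kind of test involved, here is a minimal sketch of a kernel two-sample test in the style of an MMD permutation test, using NumPy with a fixed Gaussian bandwidth. This is an assumed, simplified illustration, not the optimized parameter-free GPU implementations mentioned above; the function names and the bandwidth choice are mine.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth):
    # Pairwise squared Euclidean distances, mapped through a Gaussian kernel.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd_permutation_test(X, Y, bandwidth=1.0, num_perms=200, alpha=0.05, seed=0):
    """Two-sample test: reject H0 (X and Y drawn from the same distribution)
    when the observed MMD statistic is rarely matched under permutations."""
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    n, m = len(X), len(Z)
    K = gaussian_kernel(Z, Z, bandwidth)  # kernel matrix on the pooled sample

    def mmd(idx):
        # Biased (V-statistic) MMD estimate for a given split of the pooled sample.
        a, b = idx[:n], idx[n:]
        return (K[np.ix_(a, a)].mean() + K[np.ix_(b, b)].mean()
                - 2 * K[np.ix_(a, b)].mean())

    observed = mmd(np.arange(m))
    perms = np.array([mmd(rng.permutation(m)) for _ in range(num_perms)])
    # Permutation p-value with the +1 correction for exact level control.
    p_value = (1 + np.sum(perms >= observed)) / (1 + num_perms)
    return p_value, p_value <= alpha
```

For example, on samples from N(0, 1) and N(5, 1) the test rejects, since the observed MMD far exceeds its permuted values. The fixed bandwidth here is exactly the kernel-selection issue that aggregation and fusion approaches address.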
This practical aim of constructing parameter-free tests has led to a common theme throughout my research: tackling the fundamental problem of kernel selection for kernel-based hypothesis tests, either via Aggregation or via Fusion. Another increasingly common concern that could hinder the adoption of these tests in real-world applications is the privacy of sensitive data; we have therefore recently proposed tests with differential privacy guarantees, proved their minimax optimality, and released publicly available code.