Blog: DeepMind/ELLIS CSML Seminar Series 2021/2022
I have been delighted to be in charge of organising the DeepMind/ELLIS CSML Seminar Series 2021/2022 for the second year in a row. The aim of this seminar series is to foster collaboration across the different UCL departments forming the UCL ELLIS Unit (previously Computational Statistics and Machine Learning, CSML), which include the Gatsby Computational Neuroscience Unit, the Centre for Artificial Intelligence, the Department of Computer Science, the Department of Statistical Science, and the Department of Electronic and Electrical Engineering. Talk topics cover some of the latest research in Machine Learning and Statistics. All information about the seminar series can be found online; recordings are available on YouTube, and talks are advertised on Twitter, on a mailing list, and on a calendar.
Due to the public health situation, we held our seminars online for the first half of the academic year, which allowed us to host international speakers from across the globe. For the second half of the academic year, we were able to resume in-person seminars and to host speakers at UCL. We are immensely grateful to DeepMind for sponsoring our seminar series, allowing us to also host speakers from outside of London. I’d also like to thank all the speakers for presenting their latest work at our seminar series during this 2021/2022 academic year; all talks are listed below. Finally, I’d like to thank Jean Kaddour, Oscar Key, Pierre Glaser and Azhir Mahmood for their help in hosting the seminars, and I am very excited to welcome Kai Teh from the UCL Department of Statistical Science and Mathieu Alain from the UCL Centre for Artificial Intelligence, who will join me in co-organising the seminar series for the 2022/2023 academic year!
- Online Multitask Learning with Long-Term Memory
- Contrastive Self-Supervised Learning and Potential Limitations
- Selective Inference with Kernels
- Optimal Subgroup Selection
- Data Augmentation in High Dimensional Low Sample Size Setting using a Geometry-Based Variational Autoencoder
- Sharpness-Aware Minimization (SAM): Current Method and Future Directions
- The Bayesian Learning Rule for Adaptive AI
- Machine Learning without Human Supervision on Neuroscience Signals
- Utilitarian Information Theory
- Lucas-Kanade Reloaded: End-to-End Super-Resolution from Raw Image Bursts
- The Blessing and the Curse of the Multiplicative Updates - Discusses Connections between Updates in Evolution and the Multiplicative Updates of Online Learning
- Explaining Kernel Methods With RKHS-SHAP
- Non-Euclidean Matérn Gaussian Processes
- Robust Bayesian Inference for Simulator-Based Models via the MMD Posterior Bootstrap
- Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting
- Robust Estimation Via Maximum Mean Discrepancy
- Equivariant and Coordinate Independent Convolutional Networks
- Inference in High-Dimensional Logistic Regression Models with Separated Data
- Causal Foundations for Safe AI