Felix Biggs

Find me on Twitter, GitHub and LinkedIn, or contact me at [email protected].


I am currently finishing my PhD in Machine Learning (FAI CDT) at University College London, advised by Benjamin Guedj and John Shawe-Taylor. Previously I completed the UCL Machine Learning MSc and the Physics Tripos at Cambridge. I am broadly interested in learning and decision making under uncertainty, particularly in real-world settings. If you would like to collaborate or discuss further, don’t hesitate to get in touch!

I am actively seeking my next role, ideally one that emphasises a robust research component and offers opportunities to work on larger-scale models beyond the typical academic scope.


Papers and Preprints [Google Scholar]

MMD-FUSE: Learning and Combining Kernels for Two-Sample Testing Without Data Splitting.
Felix Biggs†, Antonin Schrab†, Arthur Gretton.
To appear, Neural Information Processing Systems (NeurIPS) 2023. Spotlight * [arXiv:2306.08777]

Tighter PAC-Bayes Generalisation Bounds by Leveraging Example Difficulty.
Felix Biggs, Benjamin Guedj.
Artificial Intelligence and Statistics (AISTATS) 2023. [arXiv:2210.11289]

On Margins and Generalisation for Voting Classifiers.
Felix Biggs, Valentina Zantedeschi, Benjamin Guedj.
Neural Information Processing Systems (NeurIPS) 2022. [arXiv:2206.04607]

Non-Vacuous Generalisation Bounds for Shallow Networks.
Felix Biggs, Benjamin Guedj.
International Conference on Machine Learning (ICML) 2022. [arXiv:2202.01627]

On Margins and Derandomisation in PAC-Bayes.
Felix Biggs, Benjamin Guedj.
Artificial Intelligence and Statistics (AISTATS) 2022. [arXiv:2107.03955]

A Note on the Efficient Evaluation of PAC-Bayes Bounds.
Felix Biggs.
Preprint. [arXiv:2209.05188]

Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks.
Felix Biggs, Benjamin Guedj.
Entropy 2021, 23, 1280. [doi:10.3390/e23101280]

† = equal contribution. * = about 3% of NeurIPS submissions receive a spotlight.