Hi, I’m Tom! I am a second-year PhD candidate in machine learning supervised by Prof. Philip Torr (University of Oxford) and Dr. Tim G. J. Rudner (University of Toronto & Vector Institute). My research focuses on AI safety, robustness, and reliable machine learning, with an emphasis on uncertainty quantification and generalisation, particularly in the context of LLMs.
My work spans semantic calibration and uncertainty quantification for LLMs; steering LLMs adaptively at inference time; biases in VLMs; and formal results on universal in-context approximation by prompting fully recurrent models, including SSMs.
I used to be a little better at mathematics. You can find some related material and work in the resources section.
Feel free to reach out to chat or discuss anything ML- or mathematics-related!
Updates
| Date | Update |
|---|---|
| Mar 01, 2026 | Improving Semantic Uncertainty Quantification in Language Model Question-Answering via Token-Level Temperature Scaling accepted at AISTATS 2026. |
| May 15, 2025 | Detecting LLM Hallucination Through Layer-wise Information Deficiency accepted at EMNLP 2025. |
| Jan 20, 2025 | Focus On This, Not That! Steering LLMs with Adaptive Feature Specification accepted at ICML 2025. |
| Sep 26, 2024 | Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models accepted at NeurIPS 2024 Datasets and Benchmarks. |
| Sep 26, 2024 | Universal In-Context Approximation by Prompting Fully Recurrent Models accepted at NeurIPS 2024. |