
Hi, I’m Tom! I’m a second-year PhD candidate in machine learning, supervised by Prof. Philip Torr (University of Oxford) and Dr. Tim G. J. Rudner (University of Toronto & Vector Institute). My research focuses on AI safety, robustness, and reliable machine learning, with an emphasis on uncertainty quantification and generalisation, particularly for LLMs.

My work spans semantic calibration and uncertainty quantification for LLMs; adaptive inference-time steering of LLMs; biases in VLMs; and formal results on universal in-context approximation by fully recurrent models, including SSMs.

I used to be a little better at mathematics. You can find some related material and work in the resources section.

Feel free to reach out to chat or discuss anything ML- or mathematics-related!

Updates

Mar 01, 2026 Improving Semantic Uncertainty Quantification in Language Model Question-Answering via Token-Level Temperature Scaling accepted at AISTATS 2026.
May 15, 2025 Detecting LLM Hallucination Through Layer-wise Information Deficiency accepted at EMNLP 2025.
Jan 20, 2025 Focus On This, Not That! Steering LLMs with Adaptive Feature Specification accepted at ICML 2025.
Sep 26, 2024 Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models accepted at NeurIPS 2024 Datasets and Benchmarks.
Sep 26, 2024 Universal In-Context Approximation by Prompting Fully Recurrent Models accepted at NeurIPS 2024.

Selected Publications

  1. Improving Semantic Uncertainty Quantification in Language Model Question-Answering via Token-Level Temperature Scaling
    Tom A Lamb, Desi R Ivanova, Philip H S Torr, and Tim G J Rudner
    In The 29th International Conference on Artificial Intelligence and Statistics, 2026
  2. Detecting LLM Hallucination Through Layer-wise Information Deficiency: Analysis of Ambiguous Prompts and Unanswerable Questions
    Hazel Kim, Tom A Lamb, Adel Bibi, Philip Torr, and Yarin Gal
    In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, 2025
  3. Focus On This, Not That! Steering LLMs with Adaptive Feature Specification
    Tom A Lamb, Adam Davies, Alasdair Paren, Philip Torr, and Francesco Pinto
    In Forty-second International Conference on Machine Learning, 2025
  4. Universal In-Context Approximation by Prompting Fully Recurrent Models
    Aleksandar Petrov, Tom Lamb, Alasdair Paren, Philip Torr, and Adel Bibi
    Advances in Neural Information Processing Systems, 2024
  5. Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models
    Arshia Hemmat, Adam Davies, Tom A Lamb, Jianhao Yuan, Philip Torr, and 2 more authors
    In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024