The Cancer Research UK Cambridge Centre and the Department of Radiology at Addenbrooke's are pleased to announce a seminar series on Artificial Intelligence (AI) in Medicine, which aims to provide a comprehensive overview of the latest developments in this rapidly evolving field. As AI continues to revolutionize healthcare, we believe it is essential to explore its potential and discuss the challenges and opportunities it presents.
The lunchtime seminar series will feature prominent experts in the field who will share their research and insights on a range of topics, including AI applications in disease diagnosis, drug discovery, and patient care. Each seminar will involve two talks, followed by an interactive discussion with a light lunch.
We hope that this seminar series will be a valuable platform for researchers, practitioners and students to learn about the latest trends and explore collaborations in the exciting field of AI in Medicine.
Join us for talks, discussions, and networking over lunch once a month, from 12:00 – 13:00, as we welcome scientists from across Cambridge. These events will be held at the Lecture Theatre, Jeffrey Cheah Biomedical Centre, Puddicombe Way, Cambridge CB2 0AW.
Pascal Wodtke and Alexander Nicholas Shepherd present at the next Cambridge MedAI Seminar on Wednesday 25 June 2025, 12:00–13:00 at the Jeffrey Cheah Biomedical Centre, Main Lecture Theatre.
This month’s seminar will be held on Wednesday 25 June 2025, 12-1pm at the Jeffrey Cheah Biomedical Centre (Main Lecture Theatre), University of Cambridge and streamed online via Zoom.
A light lunch from Aromi will be served from 11:45.
Eventbrite/sign-up link: https://medai_june2025.eventbrite.co.uk
The event will feature the following talks:
Phenotyping a rare case of GIST: How to image metabolism using MRI – Pascal Wodtke, PhD student, Department of Radiology, University of Cambridge
Pascal is a biomedical physicist in the Department of Radiology and a Gates Cambridge Scholar, focussing on the development and application of novel non-invasive MRI techniques for imaging metabolism. For his development of the pH sensor Z-OMPD, he was selected as a runner-up for the Young Investigator Award of the European Society for Molecular Imaging, and most recently he won a Magna Cum Laude award from the International Society for Magnetic Resonance in Medicine.
Abstract: Succinate dehydrogenase-deficient (SDHd) gastrointestinal stromal tumors (GIST) are a rare type of cancer affecting mostly young adults. The genetic defect causes altered metabolism. Hyperpolarized magnetic resonance spectroscopic imaging (MRSI) is a novel imaging technique that enables quick, non-invasive imaging of metabolic features. Since conventional GIST treatments don’t work for this type of cancer, accurate diagnosis is crucial and more research is needed in these patients. This talk presents a way to image the metabolic hallmarks of SDHd GIST using a tailored MRSI protocol.
Shaping Responsible AI in Health Technologies – Alexander Nicholas Shepherd, AI Client Manager, British Standards Institution
Alex Shepherd is an AI auditor at the British Standards Institution, working on evaluating the effectiveness of Responsible AI management in organisations. He previously worked as a Data Scientist in the Healthcare and Life Science sectors and is working towards being an “enlightened data scientist”: one who understands the benefits and risks of the AI technologies that are developed, provisioned and used. He is currently writing a book to guide AI professionals on implementing Responsible AI in their organisations.
Abstract: The objective of this presentation is to provide a concise overview of the Responsible AI framework, conceptualized as a “pyramid” comprising legal, ethical, technical, and standards-based foundations. This layered structure serves to support an ecosystem of innovation, regulatory compliance, and trust within organizations deploying AI technologies. The talk will also explore key AI ethics principles, examining their role and alignment within each layer of the Responsible AI pyramid, with particular emphasis on their implications for the development and application of AI in medical and healthcare contexts.
Dr Timothy Rittman presents at the next Cambridge MedAI Seminar on Wednesday 28 May 2025, 12:00–13:00 at the Jeffrey Cheah Biomedical Centre, Main Lecture Theatre.
This month’s seminar will be held on Wednesday 28 May 2025, 12-1pm at the Jeffrey Cheah Biomedical Centre (Main Lecture Theatre), University of Cambridge and streamed online via Zoom.
A light lunch from Aromi will be served from 11:45.
The event will feature the following talk:
AI for neuroimaging in dementia – Dr Timothy Rittman, Senior Clinical Research Associate, Department of Clinical Neurosciences, University of Cambridge
Timothy Rittman is an Alzheimer’s Research UK Senior Fellow, Senior Clinical Research Associate at the University of Cambridge and Honorary Consultant Neurologist at Addenbrooke’s Hospital. His research centres on neurodegenerative tauopathies, combining neuroimaging, cognitive assessments and neuropathology to understand how these diseases affect the whole brain. He also has an interest in translating methods from artificial intelligence and big data for use in memory clinics, and leads the Quantitative MRI in NHS Memory Clinics (QMIN-MC) study collecting real-world data for validation of AI models. Tim co-leads the DEMON dementia network’s Imaging Working Group and is an adviser to the World Young Leaders in Dementia. He is a consultant in the Addenbrooke’s Memory Clinic and leads a clinic for people with Progressive Supranuclear Palsy and Corticobasal Degeneration, in addition to co-leading a dementia genetics clinic.
Abstract: Novel biomarkers for early detection and prognosis in dementia are urgently needed, particularly with the advent of potentially disease modifying treatments. AI approaches to neuroimaging are promising, but require real world validation. This talk will cover the Quantitative MRI in NHS Memory Clinics (QMIN-MC) study, and how it is collecting real world data to bridge the gap between research and clinical application.
Mr Daniel Kreuter and Dr Ander Biguri will present at the next Cambridge MedAI Seminar on Friday 25 April 2025, 12:00–13:00 at the Jeffrey Cheah Biomedical Centre, Main Lecture Theatre.
This month’s seminar will be held on Friday 25 April 2025, 12-1pm at the Jeffrey Cheah Biomedical Centre (Main Lecture Theatre), University of Cambridge and streamed online via Zoom.
Click here to register.
A light lunch from Aromi will be served from 11:45.
The event will feature the following talks:
Unlocking Hidden Potential: Federated Machine Learning on Blood Count Data Enables Accurate Iron Deficiency Detection in Blood Donors – Daniel Kreuter, PhD Student, Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Daniel is a PhD student in the BloodCounts! project, focusing on building algorithms for advanced full blood count analysis to extract additional clinical information from the world’s most common medical test. His research aims to improve healthcare decision-making through more efficient use of existing data. He is in his final year and is supervised by Prof Carola-Bibiane Schönlieb from the Department of Applied Mathematics and Theoretical Physics and Prof Willem Ouwehand from the Department of Haematology. Before coming to Cambridge, Daniel studied physics at the Technische Universität Darmstadt in Germany. His Master’s thesis project focused on replacing costly laser-plasma interaction simulations with a much faster neural network model, reducing computation time from 4 hours to a few milliseconds.
Abstract: The full blood count is the world’s most common medical laboratory test, with 3.6 billion tests performed annually worldwide. Despite this ubiquity, the rich single-cell flow cytometry data generated by haematology analysers to calculate standard parameters like haemoglobin and cell counts is routinely discarded. Our research demonstrates how AI models can extract this hidden value, transforming a routine test into a powerful screening tool for iron deficiency in blood donors—with no additional testing required. Iron deficiency remains a major challenge in blood donation programs, affecting donor health and donation efficiency. By applying advanced machine learning to previously unused data dimensions within standard blood counts, we achieve significantly improved detection accuracy compared to conventional parameters. Furthermore, we show that federated learning enables this approach to scale and generalise across multiple centres while preserving data privacy. This work exemplifies how AI can enhance existing medical infrastructure, extracting new clinical value from already-collected data to improve donor health.
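The federated set-up described above can be sketched with a toy example (all data and the model are hypothetical stand-ins; the actual BloodCounts! pipeline is far richer): each centre trains a model locally on its own blood-count features, and only model weights, never patient-level data, are shared and averaged by a central server.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probability of deficiency
        w -= lr * X.T @ (p - y) / len(y)     # full-batch gradient step
    return w

def federated_average(weight_list, sizes):
    """Server step: average client weights, weighted by local dataset size."""
    return np.average(np.stack(weight_list), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
n_features = 4
global_w = np.zeros(n_features)

# Two hypothetical "centres", each holding private blood-count-like features
clients = []
for _ in range(2):
    X = rng.normal(size=(200, n_features))
    true_w = np.array([1.5, -2.0, 0.5, 0.0])
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    clients.append((X, y))

for _ in range(20):  # communication rounds: train locally, then average
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(local_ws, [len(y) for _, y in clients])
```

The key property is that the raw arrays `X` and `y` never leave their client; only `local_ws` crosses the network.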
Reconstructing extremely low dose CT images using machine learning – Dr Ander Biguri, Senior Research Associate, Department of Applied Mathematics and Theoretical Physics, University of Cambridge
Ander Biguri received his Ph.D. in Electrical Engineering from the University of Bath in 2018, for his work on 4D Computed Tomography for radiotherapy. Since, he has held research positions at University of Southampton, University College London and lastly University of Cambridge. His research lies in the intersection of inverse problems and their applications in real-case scenarios, such as Positron Emission Tomography or various computed tomography modalities. He is best known for the development of the TIGRE toolbox for applied tomography applications.
Abstract: ML models can be used to denoise medical images; however, post-hoc denoising ignores information from the measurements themselves. Machine learning can instead be added to the image formation/reconstruction process, ensuring high-quality images that still match the data measured by the scanner. In this talk we will briefly survey different ways of adding machine learning to these mathematical processes and discuss the challenges that still need to be tackled to make such methods a clinical reality.
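The distinction between denoising alone and reconstruction that respects the measured data can be illustrated with a toy inverse problem (everything here is a sketch: a random matrix stands in for the CT projection operator, and a hand-rolled smoothing filter stands in for a learned denoiser):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: y = A x + noise, with A standing in for a CT projector
n, m = 64, 120
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[20:30] = 1.0        # piecewise-constant "image"
y = A @ x_true + 0.01 * rng.normal(size=m)       # low dose -> noisy measurements

def denoise(x):
    """Stand-in for a learned denoiser: a simple moving-average filter."""
    return np.convolve(x, np.ones(3) / 3, mode="same")

# Image-domain denoising alone never looks at the measurements:
x = denoise(A.T @ y)
residual_before = np.linalg.norm(A @ x - y)

# Data-consistency (gradient) steps tie the estimate back to the scanner data:
step = 0.1
for _ in range(100):
    x -= step * A.T @ (A @ x - y)
residual_after = np.linalg.norm(A @ x - y)
```

Learned reconstruction methods interleave steps like these, replacing the hand-crafted `denoise` with a trained network, so the output both looks clean and agrees with what was actually measured.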
Dr Michail Mamalakis and Mr Joshua Rothwell will present at the next Cambridge MedAI Seminar on Thursday 27 March 2025, 12:00–13:00 at the Jeffrey Cheah Biomedical Centre, Main Lecture Theatre.
This month’s seminar will be held on Thursday 27 March 2025, 12-1pm at the Jeffrey Cheah Biomedical Centre (Main Lecture Theatre), University of Cambridge and streamed online via Zoom.
A light lunch from Aromi will be served from 11:45.
The event will feature the following talks:
Explainable and Interpretable AI: Building Trust and Uncovering Patterns in Healthcare and Neuroscience – Dr Michail Mamalakis, Research Associate, Department of Psychiatry, University of Cambridge
Dr Michail Mamalakis is a research scientist at the University of Cambridge, specialising in AI, machine learning, explainable AI and computer vision for biomedical applications. His work focuses on explainable AI (XAI) for integrating imaging, genomics and phenotyping data in neuroscience and clinical decision-making. He has collaborated with leading institutions, including Oxford, Sheffield and Cambridge, on projects in brain tumours, Alzheimer’s, cardiac arrhythmias and pulmonary hypertension. His research spans AI-driven biomarker discovery, uncertainty estimation, attributional interpretability in functional and structural imaging, and mechanistic interpretability in protein language models and large language models. Currently, he develops multi-modal AI frameworks for Alzheimer’s prediction and glioblastoma analysis, contributing to high-impact projects such as EBRAINS 2.0.
Abstract: Explainability is a critical factor in enhancing the trustworthiness and acceptance of artificial intelligence (AI) in healthcare, where decisions have a direct impact on patient outcomes. Despite significant advancements in AI interpretability, clear guidelines on when and to what extent explanations are required in medical applications remain insufficient. In this talk, I will provide guidance on the need for explanations in AI applications within healthcare. I will discuss possible explainable AI frameworks that can be used to identify new patterns and offer insights through explainable AI methods. These approaches have the potential to uncover new biomarkers and novel patterns relevant to the applications of interest. Finally, I will present some basic examples from neuroscience research to illustrate these concepts.
Retrospective evaluation and comparison of state-of-the-art deep learning breast cancer risk prediction algorithms – Joshua Rothwell, PhD Student, Department of Radiology, University of Cambridge School of Clinical Medicine
Josh is an MBBS/PhD student, researching and evaluating commercial mammography AI tools for the detection and prediction of breast cancer.
Abstract: Breast ‘interval’ cancers present between screening examinations and have poorer prognoses compared to screen-detected cancers. Risk prediction tools can identify women who are at increased risk of developing cancer, and who may therefore benefit from supplemental imaging or increased-frequency screening, to detect cancers earlier and improve patient outcomes. This talk focuses on the retrospective evaluation of two state-of-the-art deep learning risk prediction algorithms, attempting to quantify potential cancer detection rates if implemented into the NHS Breast Screening Programme and to discern the characteristics of misclassified cancers.
Ms Greta Markert and Dr Karen Sayal will present at the next Cambridge MedAI Seminar on Wednesday 29 January 2025, 12:00–13:00 at the Jeffrey Cheah Biomedical Centre, Main Lecture Theatre.
This month’s seminar will be held on Wednesday 29 January 2025, 12-1pm at the Jeffrey Cheah Biomedical Centre (Main Lecture Theatre), University of Cambridge and streamed online via Zoom.
Click here to register.
A light lunch from Aromi will be served from 11:45.
The event will feature the following talks:
AI in Histopathology: Practical Lessons from an Unconventional Case Study – Greta Markert, PhD student, Cancer Research UK Cambridge Institute/University of Cambridge
Greta studied both Chemistry and Pharmaceutical Sciences at ETH Zurich with a strong focus on computational approaches. During her master’s thesis at IBM Research, she explored AI for drug discovery, which sparked her passion for artificial intelligence. She is now in the final year of her PhD at the Cancer Research UK Cambridge Institute, working on AI in histopathology under Prof. Florian Markowetz and Prof. Rebecca Fitzgerald of the Early Cancer Institute. Before starting her PhD, she worked in management consulting and in the patents department of Roche.
Abstract: Artificial intelligence in histopathology has predominantly focused on traditional biopsy samples. My research, however, applies AI to whole slide images derived from the capsule sponge, a minimally invasive alternative to endoscopy. The capsule sponge collects random cellular material along the esophagus, presenting distinct analytical challenges. Our work involves three stains—H&E for quality control and atypical features, TFF3 for detecting Barrett’s esophagus, and TP53 for tumor progression assessment—each addressing specific diagnostic questions. By integrating these stains and analyzing corresponding cellular structures, we enhance risk stratification and advance early detection of esophageal cancer. This presentation will outline the novel computational strategies developed to tackle this unique and complex application.
Next generation technology for next generation trials – Dr Karen Sayal, Senior Director in AI-driven Clinical Development and Clinical Translation at Recursion Pharmaceuticals & Honorary Consultant in Clinical Oncology, Cambridge University Hospitals NHS Foundation Trust
Dr Karen Sayal is a Senior Director in AI-driven Clinical Development and Clinical Translation at Recursion Pharmaceuticals, where she is focused on implementing high-throughput, industrial-scale clinical trials. She is also an honorary consultant in Clinical Oncology at Cambridge University Hospitals NHS Foundation Trust.
Dr Sayal completed medical school at the University of Cambridge (Gonville and Caius College). Her specialist clinical training spanned Cambridge and Oxford, and included being the first NIHR-funded Academic Clinical Fellow in Clinical Oncology at Oxford. She completed a CRUK-funded DPhil in advanced sequencing technologies and machine learning at the University of Oxford. Prior to joining Recursion, Dr Sayal was a Fellow in Deep Learning in the AI/ML division of GSK. She is the first and only clinician to have embarked on the GSK AI fellowship scheme, where she worked across technical AI research, clinical trial design, clinical data networks and AI regulation.
Abstract: Clinical trials are being transformed through an evolving suite of innovative AI-driven technologies combined with data-driven insights on the clinical and biological profile of disease. Such transformation is a reflection of a more fundamental shift where technology, drug development and patient care are coming together to redefine how we view and manage perturbed states of human physiology. In the next Cambridge MedAI seminar, Dr Karen Sayal will give an overview of the current landscape of trial-ready AI tools. She will also spotlight emerging growth areas for clinical AI technologies, and offer critical insights into the challenges we must collectively address to ensure AI innovation is deployed in a safe and meaningful way for patients.
This month’s seminar will be a joint event supported by SAS, the Maxwell Centre and the Office for Translational Research. It follows the recent announcement that Cambridge and SAS have launched a partnership in AI and advanced analytics to accelerate innovation in the healthcare sector.
This will be followed by a networking session over drinks and snacks, with a demo on the SAS Viya platform: Applying Machine Learning and Artificial Intelligence in Real World Data in Personalized Medicine for Non-Small Cell Lung Cancer Patients.
Generative AI in Radiotherapy – Gimmick or Game Changer? – Prof. Raj Jena, Professor of AI in Radiation Oncology and Honorary Consultant in Clinical Oncology, University of Cambridge Department of Oncology and Cambridge University Hospitals
Raj Jena is a Professor of AI in Radiation Oncology based at the University of Cambridge Department of Oncology and Cambridge University Hospitals. His research interests focus on clinical image processing, data science and machine learning applications. Raj is the Chief Investigator for Hamlet.rt, a multi-centre radiomics study in radiation therapy open at over 12 sites across the UK and at the Tata Medical Center in Kolkata. He is a member of the Royal College of Radiologists’ AI in Clinical Oncology (AICO) committee and is Director of the Oncology Translational Research Collaboration (O-TRC) at the National Institute for Health and Care Research.
Raj enjoys working at the interface between the clinical, academic and commercial sectors. Following successful collaborations with Siemens and other imaging companies, he worked as a clinical consultant to the InnerEye team at Microsoft Research. There Raj had the opportunity to work with thought leaders in medical image analysis, and he subsequently led the NHS AI Lab-funded OSAIRIS project, which developed the first cloud-based, open-source imaging AI solution to be deployed at Addenbrooke’s Hospital.
Raj is now applying his knowledge of machine learning and image processing to the STELLA project, an international collaboration developing a novel smart radiotherapy unit for low- and middle-income countries.
Abstract: Generative AI models have demonstrated amazing capabilities in the creation of synthetic data, but how useful can they be in discovery science? I will discuss the application of a generative machine learning model to a problem relating to modelling late effects of radiotherapy in the brain.
Utilising AI for outcome prediction in glioma surgery: challenges and opportunities – Yizhou Wan, Clinical Research Fellow and Honorary Speciality Registrar, Brain Tumour Imaging Lab, Division of Neurosurgery, Department of Clinical Neurosciences, University of Cambridge
Yizhou Wan is a Neurosurgery Resident with an interest in how we can use advanced imaging methods to detect tumour effects on the brain and optimise surgical and non-surgical treatments for brain cancer. His PhD studies the impact of surgery on cognition and tumour networks using advanced neuroimaging. Prior to starting doctoral training at the University of Cambridge, Yizhou studied Medicine at Imperial College London, followed by Academic Foundation Training in London and Neurosurgery training in Oxford. His research is funded by a Cancer Research UK Clinical Research Training Fellowship and a Royal College of Surgeons of England Research Fellowship.
Abstract: High-grade glioma is the most common primary brain cancer. Cognitive symptoms are the commonest neurological deficits reported by patients; they have a significant impact on quality of life and reduce survival. Cognition depends on brain network functioning. Gliomas have been shown to integrate into neuronal circuits and disrupt whole-brain connectivity, interacting with brain networks to disrupt cognition and promote tumour growth. My PhD studies the impact of surgery on cognition and tumour networks. Using diffusion MRI, we can identify imaging markers associated with tumour-related injury and computationally model the effects of surgery on brain function. By treating glioma as a whole-brain disease, I hope to predict preoperatively the effect of surgery on postoperative cognition and survival. This will facilitate shared decision-making between surgeons and patients to formulate personalised resections which preserve cognitive function and maximise survival. I will also discuss the challenges involved in applying AI to clinical imaging datasets, and areas where we may currently be able to make progress in translating AI to the clinic and operating room.
Cascaded Transformer plus U-net in Medical Image Segmentation – Dr Xin Du, Postdoctoral Research Associate, Department of Physics, University of Cambridge
Xin Du is a postdoctoral researcher in the RadNet data science team at the Cavendish Laboratory. She was a Ph.D. student at the University of Southampton, with research interests in information theory, Cascade Learning, and transfer learning with applications to problems in computer vision, biology, and human activity monitoring from wearable sensors. Xin’s work is aimed at developing new learning algorithms and architectures, and deeper understanding of them in the context of these applied problems. Currently, she is focusing on auto-segmentation of 3D medical images with deep learning and trying to develop a way to combine the information from both text descriptions and medical image contexts. Outside of research, she enjoys baking, travelling, meeting new people, and exploring new activities.
Abstract: Radiotherapy plays a crucial role in modern medicine but requires considerable time for manually contouring radio-sensitive organs at risk, which can delay treatment. With the significant success of deep convolutional neural networks, auto-segmentation in medical image analysis has shown substantial improvements in saving time and reducing inter-operator variability. While convolutional neural networks exploit the locality of convolution operations, they lose global and long-range semantic information. To address this, we propose a cascaded transformer U-net for medical image segmentation that compensates for long-range dependencies and mitigates computational requirements without compromising performance.
Machine learning for treatment stratification in kidney cancer – Rebecca Wray, PhD Student, Early Cancer Institute, University of Cambridge & Dr Hania Paverd, Clinical Research Training Fellow, Early Cancer Institute, University of Cambridge
Rebecca completed her undergraduate degree in Biosciences from Durham University, where she specialised in Biochemistry and Molecular Biology, before moving to Cambridge to join CS Genetics, a biotechnology start-up investigating novel single-cell RNA-sequencing methods. She then joined Dr Annie Speak’s group at the Cambridge Institute for Therapeutic Immunology and Infectious Disease (CITIID). Currently, Rebecca is in her second year of the prestigious Cancer Research UK (CRUK) Cambridge Centre MRes + PhD programme. Under the mentorship of Dr Mireia Crispin-Ortuzar and Dr James Jones, she is employing data-driven approaches to uncover novel biomarkers and mechanisms related to treatment failure and resistance in kidney cancer.
Hania is a medical doctor specialising in Radiology, with a research interest in machine learning for medical image analysis. She studied Medicine at Newnham College, University of Cambridge, before moving onto Specialty Training in Radiology at Addenbrooke’s Hospital. She is currently in her first year of PhD as a Clinical Research Training Fellow at the Early Cancer Institute in Cambridge, working under the supervision of Dr Mireia Crispin-Ortuzar and Dr Matthew Hoare. Her PhD research focuses on computational analysis of CT and MRI scans, integrated with other data modalities such as genomic data, to enhance risk stratification for patients with liver disease and improve early detection of liver cancer.
Abstract: Clear cell renal cell carcinoma (ccRCC) is the most lethal urological malignancy. The cancer is highly heterogeneous, and therapy response varies between patients. In a subset of cases, the tumour extends into the renal vein and inferior vena cava (venous tumour thrombus, VTT), which complicates surgical intervention. While response signatures have been developed for metastatic RCC, there is a notable gap for patients with VTT. Here we present molecular analysis of data from NAXIVA, a single-arm Phase II study, in which 35% of patients showed a reduction in VTT length in response to axitinib, a tyrosine kinase inhibitor.
We develop a machine learning model that uses baseline and dynamic data from blood samples taken early in treatment and demonstrates good patient stratification. We report novel biological markers of positive response to anti-angiogenic agents, including CCL17, IL-12p70, PlGF and Tie-2. This research paves the way for better patient stratification and response prediction, offering promising avenues for personalised therapy in ccRCC.
How Low Can We Go? – Investigating the interaction between cancer-detecting AI and low-dose quantum noise in CT images – Jack Dixon, Master’s student, Department of Physics, University of Cambridge
Jack is an undergraduate student at the University of Cambridge currently studying for a Master’s Degree in Natural Sciences. He specialises in physics, and is particularly interested in statistical and computational physics. As part of his degree, he undertook a research project within the Early Cancer Institute under the supervision of William McGough, Dr Mireia Crispin-Ortuzar and Dr Ander Biguri, focused on low-dose CT scan simulation and deep-learning based segmentation.
Abstract: Renal cancers (RC) are associated with more than 140,000 deaths annually. Mortality rates for RC could be reduced if a suitable screening program allowing early diagnosis were constructed. Trials into screening, such as the Yorkshire Kidney Screening Trial, use non-contrast-enhanced CT scans and ideally seek to lower the dose of ionising radiation as much as is feasible, since an effective screening program must be both cost-effective and (relatively) safe. To this end, my Master’s project focused on assessing the performance of automatic renal segmentation models as the incident radiation dose of the input CT scans is decreased. This involved first constructing and validating a low-dose CT scan simulation technique that can be applied retroactively, and then assessing segmentation performance as the dose is decreased. Both the renal and the cancer segmentation models produced displayed a strong positive rank correlation between Dice similarity coefficient and incident dose, significant at the 2.5% level. We conclude that renal segmentation performance in non-contrast-enhanced CT scans is correlated with the incident dose.
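For readers unfamiliar with the Dice similarity coefficient used as the performance metric above, it measures the overlap between a predicted and a reference segmentation mask (1 = perfect agreement). A minimal sketch, with toy masks in place of real CT segmentations:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0  # two empty masks agree

# Toy 2-D masks: a predicted segmentation shifted one pixel down from the truth
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True
score = dice_coefficient(pred, truth)   # 2*12 / (16+16) = 0.75
```

As dose drops and noise grows, segmentations drift further from the reference and scores like this one fall, which is the correlation the project quantifies.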
Deformable registration to assess tumour progression in ovarian cancer patients – Gift Mungmeeprued – Master’s student, Department of Physics, University of Cambridge
Gift Mungmeeprued is a master’s student in the Department of Physics at the University of Cambridge. She is interested in machine learning to make healthcare more accessible and affordable.
Abstract: High-grade serous ovarian carcinoma (HGSOC) is the most common and deadliest subtype of ovarian cancer, often characterised by multi-site and heterogeneous tumours. The standard line of treatment for HGSOC in the UK is neoadjuvant chemotherapy (NACT) followed by delayed primary surgery. Response Evaluation Criteria in Solid Tumours (RECIST 1.1) is the current standardised criterion for assessing tumour response to NACT, based on measurements of tumour diameters in pre- and post-NACT CT scans. While RECIST is designed to be relatively quick for radiologists to evaluate, it captures only a one-dimensional, global change in tumour size. In this talk, we explore the use of deformable image registration as an automated tool to assess tumour response to NACT. Registration between pre- and post-NACT CT scans reveals spatial heterogeneity of changes within the tumour and across multiple disease sites.
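The core operation in deformable registration is resampling one image through a dense displacement field that maps each voxel to its corresponding location in the other image. A minimal 2-D sketch (real pipelines estimate the field by optimisation and use higher-order interpolation; here the field is prescribed and nearest-neighbour sampling is used):

```python
import numpy as np

def warp_nearest(image, disp_y, disp_x):
    """Warp a 2-D image by a dense displacement field (nearest-neighbour sampling)."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(yy + disp_y).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + disp_x).astype(int), 0, w - 1)
    return image[src_y, src_x]  # pull each output pixel from its displaced source

# Toy "tumour" mask on a pre-treatment scan, warped by a uniform 2-pixel shift
pre = np.zeros((16, 16)); pre[4:12, 4:12] = 1.0
disp_y = np.full((16, 16), 2.0)   # trivial field: shift content up by 2 pixels
disp_x = np.zeros((16, 16))
moved = warp_nearest(pre, disp_y, disp_x)
```

In registration-based response assessment, the estimated field itself is the readout: where it contracts, tissue has shrunk between scans, revealing spatially heterogeneous change that a single diameter cannot.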
Automating Segmentation and Chemotherapy Response Measurement in Ovarian Cancer with Multitask Deep Learning – Bevis Drury, Master’s student, Department of Physics, University of Cambridge
Bevis is a Part III student studying physics at the University of Cambridge. He is interested in applying machine learning to all areas of research, from physics to medicine.
Abstract: High Grade Serous Ovarian Cancer (HGSOC) is the most common type of ovarian cancer. Often diagnosed at advanced stages, HGSOC presents significant challenges due to its heterogeneity and metastatic nature. Treatment of HGSOC begins with either immediate primary surgery, or neoadjuvant chemotherapy prior to delayed primary surgery. To track disease progression, radiologists routinely use abdominopelvic CT imaging. The patient’s radiological response to treatment can be measured using the Response Evaluation Criteria in Solid Tumours (RECIST), which compares CT scans taken before and after treatment. Manual calculation of RECIST is time-consuming and often inconsistent between radiologists, impacting the accuracy and reliability of treatment assessments. This paper develops a multitask deep learning architecture for automating the segmentation and chemotherapy response prediction of HGSOC patients. The model combines features from two identical U-Net architectures, which are then used to predict binarised RECIST labels. We use a training cohort of 99 HGSOC cases with pre- and post-treatment CT scans, and an external validation cohort of 49 cases. For the validation cohort, we predict binarised RECIST labels with an AUC of 0.78. We are the first to predict RECIST labels for HGSOC patients using multitask deep learning, establishing this research as a benchmark for future work. RECIST measurements are not currently used in clinical practice, so this framework aims to provide radiologists with real-time segmentations and RECIST labels leading to more informed decisions.
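The RECIST labels mentioned above derive from comparing sums of target-lesion diameters between scans. A simplified sketch of the categorisation and one plausible binarisation (the talk does not specify its exact binarisation rule; responder = complete/partial response is an assumption here, and nadir tracking and non-target/new-lesion rules are omitted):

```python
def recist_category(baseline_sum_mm, followup_sum_mm, all_lesions_resolved=False):
    """Simplified RECIST 1.1 response from sums of target-lesion diameters (mm)."""
    if all_lesions_resolved:
        return "CR"                                   # complete response
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"                                   # >=30% decrease
    if change >= 0.20 and followup_sum_mm - baseline_sum_mm >= 5:
        return "PD"                                   # >=20% and >=5 mm increase
    return "SD"                                       # stable disease

def binarise(category):
    """Assumed binary label for response prediction: responder vs non-responder."""
    return 1 if category in ("CR", "PR") else 0

label = binarise(recist_category(100, 65))   # 35% shrinkage -> responder
```

A model that predicts this label directly from the pre- and post-treatment scans is what the multitask architecture above learns.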
Big Data and AI in Cardiac Imaging – Making a difference? – Jonathan Weir-McCall, Assistant Professor, Department of Radiology, University of Cambridge
Jonathan is a University Lecturer at the University of Cambridge and an Honorary Consultant Cardiothoracic Radiologist at the Royal Papworth Hospital. His research interests lie in using cardiovascular CT and MRI to improve patient treatment and outcomes in structural and coronary artery disease. He has authored more than 130 peer-reviewed publications, and co-authored the SCCT guidelines on the role of CT in the assessment for transcatheter aortic valve insertion and the BSCI/BSTI guidelines on the reporting of calcification on routine chest CT. He sits on the executive committee of the BSCI, the guideline committee of the SCCT, and the Diagnostic Advisory Committee of NICE.
Abstract: AI and advanced analytics are reaching clinical practice, bringing significant opportunities but also challenges in determining their real-world impact and efficacy. In cardiac imaging, advanced analytics using AI and computational fluid dynamics are already routinely used in clinical care. While small-scale randomised controlled trials offer promising insights into their potential benefits, real-world data are lacking. Leveraging national datasets, we analyse the impact of these technologies in the UK, examining the effect of one AI-augmented CT tool on healthcare behaviours and patient outcomes.
Learning structures in multimodal pathology – Konstantin Hemker, PhD Candidate, Computer Laboratory, University of Cambridge
Konstantin is a PhD student in the Computer Lab at the University of Cambridge focussing on multimodal representation learning for biomedical data modalities. In particular, he is looking at how fusion models can provide multi-scale context in computational pathology. Before starting his PhD, Konstantin worked as a Senior Data Scientist in the Healthcare and Pharmaceuticals practice at the Boston Consulting Group, focussing on drug yield optimisation of active ingredients in antibody treatments and radiocontrast agents. He holds Master’s degrees in Computer Science from Imperial and Cambridge and an undergraduate degree from the London School of Economics.
Abstract: Integrative modelling of multiple data structures (such as images, graphs, sequences, or tabular data) in the same model is a common challenge for machine learning approaches in biomedical domains. This challenge arises from a lack of shared semantics between modalities, one-to-many relationships, missing modalities, and data sparsity. Meanwhile, multi-scale context can provide important information about the tumour microenvironment in fields such as computational pathology and consequently help train better predictive models. This talk will cover state-of-the-art multimodal representation learning methods that can learn from multiple data distributions, capture cross-modal relationships, and handle missing modalities whilst maintaining structural information from each modality for predictive tasks in pathology.
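One simple way fusion models tolerate missing modalities is by pooling over whatever modality embeddings are present for a given sample. The sketch below is a generic mean-pooling illustration of that idea (names and shapes are ours), not the specific methods covered in the talk.

```python
def fuse(embeddings):
    """Late fusion robust to missing modalities.

    embeddings: dict mapping modality name (e.g. "wsi", "omics",
    "text") to an embedding vector, or None if that modality is
    missing for this sample.  Averages the available vectors;
    assumes at least one modality is present and that all vectors
    share the same dimensionality.
    """
    present = [e for e in embeddings.values() if e is not None]
    dim = len(present[0])
    return [sum(e[i] for e in present) / len(present) for i in range(dim)]
```

More sophisticated approaches replace the mean with learned attention over modalities, but the principle of operating only on the available inputs is the same.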
Speaker: Stephanie Hyland, Principal Researcher, Microsoft Cambridge
Robust and interpretable AI-guided marker for early dementia prediction in real-world clinical settings – Dr Delshad Vaghari, Research Associate at Department of Psychology, University of Cambridge
Predicting dementia early has major implications for clinical management and patient outcomes. Yet, we still lack sensitive tools for stratifying patients early, resulting in patients being undiagnosed or wrongly diagnosed. Despite rapid expansion in machine learning models for dementia prediction, limited model interpretability and generalizability impede translation to the clinic. We build a robust and interpretable predictive prognostic model (PPM) and validate its clinical utility using real-world, routinely-collected, non-invasive, and low-cost (structural MRI scan, cognitive scales) patient data. To enhance scalability and generalizability to the clinic, we: 1) train the PPM with clinically-relevant predictors (grey matter atrophy, clinical scales) that are common across research and clinical cohorts, 2) test PPM predictions with independent multicenter real-world data from memory clinics across countries (UK, Singapore). PPM robustly predicts whether patients at early disease stages (MCI) will remain stable or progress to Alzheimer’s Disease (AD). PPM generalizes from research to real-world patient data across memory clinics and its predictions are validated against longitudinal clinical outcomes. PPM allows us to derive an individualized AI-guided multimodal marker (i.e. predictive prognostic index) that predicts progression to AD more precisely than standard clinical markers (grey matter atrophy, cognitive scores) or clinical diagnosis, reducing misdiagnosis. Our results demonstrate a robust and explainable clinical AI-guided marker for early dementia prediction that is validated against longitudinal, multicenter patient data across countries, and has strong potential for translation to clinical settings.
Leveraging real-world histopathology datasets to inform clinical research – Irina Zhang, Data Scientist at AstraZeneca, Cambridge
Recent advances in Computational Pathology have demonstrated how greatly we can benefit from applying machine learning and AI to decipher giga-pixel whole-slide histopathology images. However, it remains very difficult to generalise models developed on high-quality datasets to heterogeneous tissue samples collected in clinical settings. We have investigated various real-world evidence cohorts to address the inherent challenges of real-world histopathology images and to develop interpretable and generalisable AI pipelines that inform our clinical research, with the aim of applying advanced digital pathology in clinical settings to benefit patients across therapeutic areas.
Speaker: Professor Nasir Rajpoot, Professor of Computational Pathology & Director, Tissue Image Analytics (TIA) Centre, University of Warwick, UK
Multimodal, data-efficient, and robust AI for real-world biosignals and the role of generative models – Dr Dimitris Spathis, Senior Research Scientist at Nokia Bell Labs and Visiting Researcher at University of Cambridge
The limited availability of labels for machine learning on multimodal data hampers progress in the field. In this talk, I will discuss our recent efforts to address this problem, building on the paradigms of self-supervised and multimodal learning. With models such as CroSSL, Step2Heart, and SelfHAR, we put forward principled ways to learn generalizable representations from high-resolution data through masking, knowledge distillation, and physiology-inspired pre-training. We show that these models can be applied to various clinically relevant applications to improve mental health, fitness, sleep, and voice-based diagnostics. At the same time, due to data size limitations, these models are limited in size and generalization capabilities compared to popular generative models such as GPT. What if we could use Large Language Models (LLMs) as data-agnostic pre-trained models? I will close the talk by highlighting where LLMs fail in processing sequential data as text tokens and some ideas on how to address the critical “modality gap”.
Using machine learning methods to improve classification and prediction of psychiatric conditions – Dr Katharina Zühlsdorff, Visiting Postdoctoral Fellow at Department of Psychology
Cognitive flexibility can be investigated using tests such as probabilistic reversal learning (PRL). In various neuropsychiatric conditions, including substance use disorders, gambling disorder, major depressive disorder and schizophrenia, overall impairments in PRL flexibility are observed. Using reinforcement learning (RL) models, a deeper mechanistic explanation of the latent processes underlying flexibility can be gained. I will present results from an analysis of PRL data from individuals with different psychiatric diagnoses using a hierarchical Bayesian RL approach and relate behavioural findings to the underlying neural substrates. Furthermore, I will discuss how graph neural network models can be used to incorporate cognitive and neuroimaging data to improve prediction of psychiatric conditions.
“Turn and face the strange: Out-of-distribution generalisation in machine learning” – Dr. Agnieszka Słowik, Microsoft Research Cambridge
When applied to a new data distribution, machine learning models have been shown to deteriorate in performance. Distribution shifts are caused by spurious correlations that hold at training time but not at test time, changes to the domain, and under- or over-representation of certain populations in the training data. In this talk, I present two studies in the setting of learning from multiple data sources. In the first, On Distributionally Robust Optimization and Data Rebalancing, multiple data sources are used to minimise the error on the most challenging data source. In the second, Linear unit-tests for invariance discovery, I present a set of ‘unit tests’ that validate whether a given algorithm ignores spurious, unstable features that are unlikely to hold in the future, while learning the features that hold across all sources of training data. I conclude with a discussion of potential applications of this research to AI in medicine.
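The idea of minimising error on the most challenging data source can be sketched in its simplest group form: instead of minimising the pooled average loss, distributionally robust optimisation targets the worst per-source loss. The sketch below (names ours, not the paper's code) shows the quantity being minimised.

```python
def worst_group_loss(losses_by_source):
    """Return the data source with the highest mean loss, and that loss.

    losses_by_source: dict mapping source name to a list of per-example
    losses.  Group-level distributionally robust optimisation minimises
    this worst-case mean rather than the pooled average, so that
    under-represented sources (e.g. a small hospital cohort) are not
    ignored during training.
    """
    means = {src: sum(l) / len(l) for src, l in losses_by_source.items()}
    worst = max(means, key=means.get)
    return worst, means[worst]
```

In training, the model's parameters would be updated to reduce this worst-group quantity at each step, rather than the overall mean.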
“Development of a Natural Language Processing Multilingual Model for Summarizing Radiology Reports” – Mariana Lindo, Critical Techworks
The impression section of a radiology report summarizes important radiology findings and plays a critical role in communicating these findings to physicians. However, preparing these summaries is time-consuming and error-prone for radiologists. Numerous models for radiology report summarization have recently been developed, yet none can summarize these reports in multiple languages. Such a model could greatly improve future research and the development of deep learning models that incorporate data from patients with different ethnic backgrounds. In this study, the generation of radiology impressions in different languages was automated by fine-tuning a publicly available model based on a multilingual text-to-text Transformer to summarize findings in English, Portuguese, and German radiology reports. In a blind test, two board-certified radiologists indicated that for at least 70% of the system-generated summaries, the quality matched or exceeded that of the corresponding human-written summaries, suggesting substantial clinical reliability. Furthermore, this study showed that the multilingual model outperformed models specialized in summarizing radiology reports in only one language, as well as models not specifically designed for summarizing radiology reports, such as ChatGPT.
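Alongside blind expert review like that described above, summarization systems are commonly scored with ROUGE-style n-gram overlap against reference summaries. The abstract does not say which automatic metric was used, so the sketch below is purely illustrative: a minimal ROUGE-1 recall (unigram overlap) implementation with our own naming.

```python
from collections import Counter

def rouge1_recall(reference, candidate):
    """Fraction of reference-summary words also present in the
    candidate summary, with repeated words capped by their count in
    the candidate (the clipped-count convention used by ROUGE-1).
    """
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(n, cand[w]) for w, n in ref.items())
    return overlap / sum(ref.values())
```

Automatic overlap scores are cheap to compute across languages, but as the blind-test design here acknowledges, they are no substitute for radiologist judgement of clinical quality.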
“ScanXm – A new software tool for the visualization, annotation, and segmentation of biomedical images.” – Dr. Tristan Whitmarsh, Institute of Astronomy, University of Cambridge
ScanXm is a newly released software tool developed by Tristan Whitmarsh at the University of Cambridge. It provides a simple, user-friendly interface for the annotation of 2D and 3D medical and biomedical images, and includes several deep learning modules for image processing and the automatic segmentation of various organs and tissue types. Notably, the software requires no Python coding, runs without an expensive GPU, and does not use cloud computing, keeping confidential patient data safe. Through a recent collaboration with NVIDIA, ScanXm now seamlessly integrates with MONAI Label. This integration enables ScanXm to run powerful AI models such as vision transformers, both locally and on a cloud platform, and grants access to all publicly released AI models in the MONAI Model Zoo. In this talk, Tristan will provide an overview of ScanXm's features and give a live demonstration.
“Deep Learning Applications for Histological Image Analysis” – Matej Halinkovic, Vision and Graphics Group, Institute of Computer Engineering and Applied Informatics, Slovak University of Technology.
Analysis of structures contained in tissue samples and the relevant contextual information is of utmost importance to histopathologists during diagnosis. Our work primarily focuses on histological tissue samples and helping pathologists with the analysis of cardiac biopsies. We propose a method that provides supporting information in the form of structure segmentation to histopathologists while simulating their workflows. The proposed method utilizes semantic nuclei maps in addition to hierarchical image input for the semantic segmentation of blood vessels, inflammation, and endocardium in heart tissue. We demonstrate that the decision process of the deep learning model utilizes the supporting information correctly through custom-designed attention modules.
Distributional and relational inductive biases for graph representation learning in biomedicine – Paul Scherer, Department of Computer Science and Technology, University of Cambridge
The immense complexity with which biomolecular entities interact amongst themselves, with one another, and with the environment to bring about life processes motivates the mass collection of biomolecular data and data-driven modelling to gain insights into physiological phenomena. Grand initiatives and continuing efforts have been coordinated to structure our growing knowledge and understanding of biology (and beyond) as graph-structured data. The (re-)emerging field of representation learning on graph-structured data opens opportunities to combine these streams of research, leveraging prior knowledge about the structure of the data to construct models with improved performance or interpretability. This talk will discuss, at a high level, how we may leverage the relational structures in biomedical knowledge and data to incorporate biologically relevant inductive biases into neural machine learning methods. This will be accompanied by considerations to make when designing relational inductive biases, drawing on applications I have worked on that explore different scenarios under which graph structure arises in the data.
Deep learning for segmentation of the Venous Tumour Thrombus in MRI – Robin Haljak, Department of Physics, University of Cambridge
An unusual hallmark of kidney cancer is the biological predisposition for vascular invasion, with the extension of the venous tumour thrombus (VTT) into the inferior vena cava occurring in 4-15% of cases. Automated segmentation of the VTT would be beneficial for the diagnostic evaluation of kidney cancer. However, the location, size and shape of the VTT are highly variable, making the automatic segmentation task difficult. Deep learning-based automatic segmentations of the VTT were created for the first time, using the nnU-Net segmentation framework. A two-stage localization-refinement-based 3D nnU-Net model is proposed to significantly increase the segmentation accuracy of the VTT in kidney cancer MRI scans. The proposed model involves two main steps. In the first step, the VTT is localised, and an initial segmentation is created. In the second step, the segmentation is expanded and refined to more accurately segment the VTT. Training and comparative experiments were conducted on the NAXIVA clinical trial data set.
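The two-stage design can be illustrated schematically: the first-stage model's coarse segmentation is reduced to a padded bounding box, and the cropped region becomes the input to the second, refinement network. The sketch below shows only this cropping step, with our own made-up names; the actual pipeline is built on nnU-Net.

```python
def roi_from_coarse_mask(foreground_voxels, volume_shape, margin=10):
    """Compute a padded bounding box around a coarse segmentation.

    foreground_voxels: iterable of (z, y, x) indices predicted as VTT
    by the first-stage model.  Returns per-axis (lo, hi) crop bounds,
    enlarged by `margin` voxels and clipped to the volume, defining
    the sub-volume fed to the refinement stage.
    """
    bounds = []
    for axis in range(3):
        coords = [v[axis] for v in foreground_voxels]
        lo = max(min(coords) - margin, 0)
        hi = min(max(coords) + margin + 1, volume_shape[axis])
        bounds.append((lo, hi))
    return bounds
```

Cropping in this way lets the second network work at higher effective resolution around the thrombus, which is what drives the accuracy gain of localisation-refinement designs.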
Using latent shape information to detect renal cancers automatically from CT scans – William McGough, PhD Student, Cancer Research UK Cambridge Institute
The 5-year survival rate of renal cell carcinoma (RCC) drops from 88% when detected at stage 1 to 15% at stage 4. Detecting RCC at the earliest possible stage is therefore preferable, but RCC early-detection programmes face significant challenges: high staffing costs and the lack of a high-risk target population. Consequently, early-detection screening has been considered too costly to be viable for RCC. However, recent developments in medical imaging suggest that fast RCC screening may be possible with a low-dose CT imaging technique. Given the emerging opportunity presented by this modality, my work exploits modern artificial intelligence (AI) technologies to enable automated early RCC detection. We show that renal cancers can be detected using the kidney's shape information latent within CT scans.
Machine-Assisted Triage of Histopathology Slides for Detecting Precursors of Oesophageal Adenocarcinoma – Dr. William Prew, Postdoctoral Research Associate, Cancer Research UK Cambridge Institute
The prognosis for Oesophageal Adenocarcinoma (EAC) is relatively poor, with a five-year survival rate of less than 20%. This high mortality is commonly caused by late presentation of symptoms, at which point the disease becomes difficult to treat effectively. Barrett's Oesophagus (BE) is a clinically recognised precursor to EAC, and monitoring these patients for signs of progression can help pathologists detect cancers at an earlier, more treatable stage. The recently developed Cytosponge-TFF3 test, a minimally invasive and cheaper alternative to endoscopy, screens these individuals by sampling cells from across the length of the oesophagus. This sampling method produces histopathology slides that pathologists may find unfamiliar because, unlike regular biopsy data, the spatial context between cells is not retained. We therefore train and present a machine learning model capable of performing quality control and triage of Cytosponge slides. Our approach mimics the decision patterns of gastrointestinal pathologists to classify and present regions of interest for manual expert review. By substituting manual review with automated review in low-priority classes, we can reduce pathologist workload by 57% while matching the diagnostic performance of experienced pathologists.
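The triage principle described above, automated sign-out for low-priority classes and manual review for everything else, can be sketched generically. The class names and routing rule below are illustrative assumptions of ours, not the published model.

```python
def route_slides(predictions, auto_classes):
    """Split slide-level predictions into automated and manual review.

    predictions: list of (slide_id, predicted_class) pairs from the
    triage model.  auto_classes: set of low-priority classes judged
    safe to sign out without expert review.  Returns the two review
    queues plus the fraction of workload removed from pathologists.
    """
    auto = [s for s, c in predictions if c in auto_classes]
    manual = [s for s, c in predictions if c not in auto_classes]
    saved = len(auto) / len(predictions)
    return auto, manual, saved
```

In practice, which classes are safe to automate is the critical clinical decision: the reported 57% workload reduction depends on automated review matching expert performance on those classes.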
Mateo Espinosa Zarlengo – “Mind the explainability gap: harnessing the power of high-level concepts for interpretable AI in medicine”
Mateo is a PhD student and Gates Scholar in the Department of Computer Science and Technology at the University of Cambridge.
Francesco Prinzi – “Deep Learning and Radiomics in Breast Masses Detection and Classification”
Francesco is a PhD Student in the Department of Biomedicine, Neuroscience and Advanced Diagnostics at the University of Palermo.
Isaac Sebenius – “Linking neuroimaging, cortical gene expression and cognition”.
Tong Xia – “Sounds of COVID-19: exploring realistic performance of audio-based digital testing”.
Dr Ian Selby – “Short cuts make long delays: Machine Learning for Covid-19 and Medical Imaging”.
Postdoctoral Research Associate, Department of Oncology, University of Cambridge
Ines is a Postdoctoral Research Scientist at the University of Cambridge developing artificial intelligence and deep learning techniques for the analysis and interpretation of biomedical data. She is particularly interested in the application of medical imaging and computing technology to improve the diagnosis and stratification of patients with cancer. Her research interests lie at the intersection of big data, machine learning, data science, imaging informatics, and healthcare.
Postdoctoral Research Associate, Department of Radiology, University of Cambridge
Nicholas is a Medical Physicist working across breast cancer screening trials covering supplemental imaging and personalised screening based on risk. His work also covers the training of AI models in the detection of cancer on breast MRI and evaluating the performance of commercial mammography AI tools using a database of over 250,000 mammograms.
Hannah is a PhD student focused on developing multi-modal AI models to predict treatment response in ovarian cancer.
Josh is an MBBS/PhD student, researching and evaluating commercial mammography AI tools for the detection and prediction of breast cancer.
If you are interested in getting involved and presenting your work, please email Dr. Ines Prata Machado at im549@cam.ac.uk.
Feedback – we welcome suggestions, ideas and feedback on the seminar series. Please get in touch via this form.
Join our mailing list – click here to join our mailing list and receive announcements about upcoming seminars.
Venue – the Lecture Theatre can be found on the ground floor of the Jeffrey Cheah Biomedical Centre, adjacent to the reception desk. There is wheelchair-accessible entry from the building entrance to the Lecture Theatre. Additionally, step-free access is available to the ground floor restroom facilities.
The Mark Foundation Institute for Integrated Cancer Medicine (MFICM) at the University of Cambridge aims to revolutionise cancer care by delivering benefits to patients along their entire treatment pathway.
© 2025 The Mark Foundation Institute for Integrated Cancer Medicine (MFICM). All Rights Reserved.
Department of Oncology, Hutchison-MRC Research Centre, Box 197, Cambridge Biomedical Campus, Cambridge CB2 0XZ