This Week in Westminster: The Global AI Healthcare Conference
- Sam McInerney
- Feb 9
- 6 min read
This week in Westminster marked the inaugural Global AI Healthcare Conference, hosted by the Royal College of Radiologists.
Set against the backdrop of Big Ben, the setting made it hard to imagine a more fitting metaphor for being at the pinnacle of AI in healthcare - an obelisk, a point, the very top.
Well, unless of course it had been held in The Shard itself. But that was probably more expensive.

Backdrop
Being in Westminster felt like a return to my roots. I went to medical school in London. Lived here, loved it, and worked in it for well over a decade. For much of the last three years of that time, I spent countless days walking along the banks of the Thames, with Big Ben and the South Bank as my backdrop.
My now-wife and I would walk and talk for hours, testing each other on massive amounts of medical knowledge from our trusty cheese and onion handbook—a widely used nickname for the Oxford Handbook of Clinical Medicine, named at the time for its distinctive colour scheme.
What a strange juxtaposition, then, to return to the same place 10 years later to talk about the future of medicine. How AI might make that whole process of knowledge ingestion redundant, or even replace the need for it entirely. A machine that already knows everything you could possibly read about medicine in 10 lifetimes.
Should we be worried? Are we going to be replaced? Or is this the dawn of a bright and promising future where AI and clinicians work together?
Then again, are the barriers to implementation so high in a cash and clinician-strapped NHS system that it’ll be a lifetime before anything gets properly implemented?
This is why I had to be at this conference. I needed to know: where are we all with AI in healthcare?
Inspired
Following the keynote speeches from the leaders of the AI world, it was hard not to be moved, in awe—but also proud to be part of the journey.
From Dominic Cushnan, who leads the AI movement at NHS England, delivering an impassioned personal story about disrupting the NHS with this technology, to Pranav Rajpurkar—a man with so many accolades it makes you question how you’ve managed to waste so much of your own life.
(I think Pranav was designing randomized controlled trials before I even knew what one was... and I’m at least 10 years older than him. Sigh.)
To say it was the who’s who of AI in healthcare would be an understatement. It felt like being at a Gatsby party—secretly knowing I came from new money (or more accurately, no money) and hoping no one else knew.
The technology
It's no secret that AI is incredible. Anyone who uses a modern GPT clone is aware of the awesome power these tools hold. The technology is evolving faster than we can keep up with, and the incredible work being done by heavyweights like Microsoft, Google, and even Rolls-Royce shows that it's not slowing down anytime soon.
Listening to Javier Alvarez-Valle, Senior Director of Biomedical Imaging at Microsoft, I took away two key points:
- AI is the fifth industrial revolution and will cause a seismic shift in industry as we know it.
- Javier is much, much smarter than I am.
Hearing Javier describe Microsoft’s multimodal work—the integration of large language models with imaging—gave me a glimpse into just how far companies like Microsoft are pushing the boundaries with massive datasets and cutting-edge technology. It’s a testament to how fast things are moving and how transformative this field is becoming.
One challenge I face in my own work—and I’m sure it’s the same across AI research—is the sheer volume of new research being published. Just last month, 24,000 AI-related papers were uploaded to arXiv alone.
(Shit. I better draw a line on my scoping review collection date soon.)
The main focus of the conference was on AI in radiology. Understandably so, given who was hosting it. But AI as an ecosystem doesn’t exist in isolation. The same challenges—evidence, regulation, integration—apply no matter what problem you’re trying to solve.
The key takeaway? There are a lot of companies developing this technology and working with clinicians to bring it to life in the NHS. But so far, it’s been a slow march through a maze of unknown unknowns.

What do patients think? And what about the children!?
One session I couldn’t miss was the patient perspective.
Understanding what patients think about AI should be at the forefront of our efforts—after all, they’re at the center of our work. And if they’re not on board?
Well… we’re all screwed.
A fascinating talk by Nell Thornton from the Health Foundation, followed by Prof. Susan Shelmerdine, distilled how the public views AI and the use of their data. One of the key takeaways for me was that 25% of people don’t want their data used to build AI.
So what do we do about that? How do we identify those who want to opt out? That stimulated a conversation around a bigger question: who actually owns the data? And how do we build models with millions of records without consent?
Without trying to reduce two excellent 20-minute talks into a single phrase… which, of course, I’m now immediately going to do…
Children want a human in the loop. They want to be seen before a decision is made. And if a physician uses AI and something goes wrong? The physician is still to blame.
Which, to be honest, seems fair enough.
Reflections
There have been some bold claims about AI's future in healthcare. Personally, I think we're much further away from the AI-generated, talking clinician I saw explaining an imaging finding than some companies outside the NHS might like to think. Maybe in the States they're closer? I'm not sure how they'll handle all their chatbots being sued.
Pranav Rajpurkar highlighted the failure of a general GPT doctor in some of his work, but he believes the models will get there eventually: trained on all the medical knowledge we have as a human race, then gradually integrated and refined within clinical systems until they work seamlessly as part of the team.
But I don't think that will work. What's written down does not equate to the sum of a clinician's knowledge. There is a reason that after the first couple of years of being a medical student, the focus shifts from reading books to being in front of patients.
Being a clinician is about understanding the person in front of you—the gestalt, the reading between the lines, and the filtering of what is important and what is not.
Body language makes up a huge proportion of what a person says to you. "It's not what they said, it's the way they said it."
My wife and I talk about this all the time when we talk about medicine.
I was always humbled by her during med school because she was a straight-distinction student throughout. She just "got" the multiple-choice questions—she got them all right, give or take—even without knowing that much. She understood the format, how they were structured, and what they were testing. But that doesn't translate to the real world. It's a skill to dive into the details. Patients don't give you an MCQ-style vignette, and they don't give you multiple-choice answers. And that's the kicker: the AI isn't there right now. This is something I might actually disagree with Pranav on—I'm not sure we'll ever get there.
But I think that is a good thing. We need humans in the loop. Patients need humans in the loop. The message should be augmentation, not replacement.
The message
So what is the message then? This was an inspiring event. The whole conference, the feel of it, the pulse, the fun. The like-minded enthusiasm to create a better, AI-powered NHS. Something we all believe in.
What was missing, for me at least, is the delivery of these messages. Or the capacity to receive them. At the centre of the AI world I'm hearing a powerful and compelling narrative of AI coming together, but outside of London I'm not getting the same traction or excitement.
From my own short time trying to stir up engagement, or chasing leads with those responsible for management, e-health, or innovation, I can tell you it's hard graft. People on the ground are fire-fighting; they don't have the time or capacity to listen to the dreams of a fanciful AI system. There is no capacity to test it or pilot it when their clinics are already running to 7pm. Especially when e-health has a backlog of issues so deep that they have effectively put the closed sign on the front door of innovation.
Final Thoughts
One of my great mentors in Brighton, Prof. Rob Galloway, used to give human factors workshops. In one of them, he talked about the ability to receive information.
Rob is a great A&E consultant and communicator (he almost convinced me to switch specialities). He talked about the need for great communication in emergency situations. When you're on the same team and have information to transmit to those running the emergency, you need to ask one question before speaking: "Are you ready to receive this information?"
Because otherwise you're just adding to the noise of an already busy emergency.
Right now, is the NHS as a whole ready to receive this information? I'm not sure it is. It doesn't have the capacity.
So what do we do? We keep trying, keep walking the path, keep asking. When the time is right, we can deliver the message.
Thanks for reading!
Sam
