Will Nvidia kill the radiology stars?

For years, experts predicted that radiologists would be among the first to be replaced by AI. Why hasn't that happened?

Hi Quartz members!

Seven years ago, Geoffrey Hinton, an artificial intelligence pioneer, made a bold prediction. “People should stop training radiologists now. It’s just completely obvious that, within five years, deep learning is going to do better than radiologists… It might be 10 years, but we’ve got plenty of radiologists already,” Hinton said at a machine learning conference in Toronto. Hinton, who received the Turing Award in 2018, had pioneered research on the neural networks that underlie the recent progress of AI. So, naturally, people listened.

Fast forward to 2023. The world went through a global radiologist shortage during covid, as professionals burned out or aged out. For anyone training in the field now, the future looks bright: In the US, employment of radiologists is projected to grow 4% between 2021 and 2031, faster than employment of physicians and surgeons overall, according to the US Bureau of Labor Statistics. And pay remains high: Average salaries hover around $300,000 a year.

So what happened? Was Hinton wrong, or are we missing something fundamental about the big AI boom, fueled by OpenAI, DeepMind, Nvidia, and dozens of other companies?


WHY RADIOLOGISTS?

Typically, we expect the machines to come for lower-skilled jobs first. Radiologists, though, train for an average of 13 years, and they’re among the best-compensated professionals in the healthcare industry. So why did Hinton and others single out their jobs as particularly vulnerable to extinction by AI?

For one thing, it’s a data-rich field. Radiology has generated decades and decades of good digitized images, said Curtis Langlotz, a professor at Stanford whose work focuses on the intersection of AI and radiology. That kind of data is the bread and butter of machine learning. “These neural networks happen to be incredibly capable of processing images,” Langlotz said.

The radiologist’s job is also highly structured—humans study an image and look for abnormalities—making it conducive to automation. There’s also a strong financial incentive at play to improve the productivity of radiologists: They’re expensive professionals, and their mistakes can be costly as well.

In a number of more limited ways, AI is already transforming radiology: computer-aided detection for cancer, for instance, or an FDA-approved AI tool to detect eye disease in diabetes patients. In the latter case, in fact, the AI is semi-autonomous, so the diagnostician has to review only a subset of cases. A third of all US radiologists use AI in their practice, according to a 2020 survey published in the Journal of the American College of Radiology.

The reality is, though, that deep learning algorithms haven’t made their way into widespread clinical use. And they certainly haven’t started replacing radiologists wholesale.


QUOTABLE

“Early on, people in general, including computer scientists, were a bit simplistic in understanding—including myself, by the way, but I’m talking two years back—we were a bit simplistic in our understanding about AI adoption and AI deployment throughout the economy.”

Luis Videgaray, director of the Massachusetts Institute of Technology AI Policy for the World Project, in an interview with Quartz


HUMAN V MACHINE

It isn’t that the machines have failed to become smarter. Studies show that AI predictions are more accurate than those of almost two-thirds of radiologists. Langlotz told Quartz that algorithms today can be built in a matter of weeks or months and are more accurate than their predecessors at detecting a range of illnesses, such as pneumonia.

But when he examines a chest X-ray, Langlotz said, he’s looking for many more things than just pneumonia markers. He’s searching for hundreds of different abnormalities that may be present, so that he can use those details to create a full picture of what’s happening with the patient. “We’re taught to think not just about the most common thing that might be true, but some of the uncommon things as well,” he said. “And that’s why we train for as long as we do. And building algorithms that detect large numbers of abnormalities and integrate them—that has yet to be realized.” Additionally, issues like data privacy hinder the creation of the large datasets needed to train such algorithms, Langlotz said.
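To make that gap concrete, here is a minimal, hypothetical sketch of the difference between a single-finding classifier and the kind of multi-label model that would have to score hundreds of abnormalities per image. The backbone, label list, and threshold below are illustrative assumptions, not anything Langlotz described or any vendor actually ships.

```python
# Hypothetical sketch: a single-finding "pneumonia" classifier vs. a
# multi-label model that scores many abnormalities per chest X-ray.
# The backbone, label list, and threshold are illustrative assumptions.
import torch
import torch.nn as nn

ABNORMALITIES = ["pneumonia", "pneumothorax", "cardiomegaly", "nodule",
                 "effusion"]  # a real system would need hundreds of labels

class ChestXrayModel(nn.Module):
    def __init__(self, num_labels: int):
        super().__init__()
        # Stand-in feature extractor; real systems use large pretrained networks.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_labels)  # one logit per finding

    def forward(self, x):
        return self.head(self.backbone(x))

# Single-finding setup: num_labels=1 answers only "pneumonia or not".
# Multi-label setup: one sigmoid per abnormality, so findings can co-occur
# in the same image.
model = ChestXrayModel(num_labels=len(ABNORMALITIES))
image = torch.randn(1, 1, 224, 224)          # fake grayscale X-ray
probs = torch.sigmoid(model(image))          # independent probability per finding
flagged = [name for name, p in zip(ABNORMALITIES, probs[0]) if p > 0.5]
print(flagged)
```

The point of the sketch is scale: every additional finding needs its own labeled examples, which is exactly where the data-privacy and dataset-size hurdles Langlotz describes start to bite.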

The lesson from radiology, then, is that human expertise based on years of training and experience is not as easy to replace as experts initially predicted. That’s worth keeping in mind at a time when AI tools such as ChatGPT have caused turmoil in white-collar workplaces. The world is a messy place, and plugging AI into it everywhere is hard—and not just because of hurdles such as data privacy concerns. People and organizations all have to be schooled in fitting these technologies into their lives. The great AI replacement will happen only as fast as workplaces can move.


ONE 💻 THING

The fallibility of machines is always, in part, the fallibility of humans. And so it is with AI and radiologists.

A recent working paper (pdf), published by the National Bureau of Economic Research, found that radiologists, on average, did not benefit from the assistance of AI. To find out why, researchers conducted an experiment in which radiologists were provided with AI assistance to help them make decisions.

The study found that the radiologists failed to use the AI’s inputs optimally because of two main biases: They gave too little weight to the information supplied by the AI, and they didn’t account for how much the AI’s conclusions overlapped with their own. Radiologists also took significantly more time to make a decision when AI information was provided.

The solution? In certain cases, the researchers concluded, AI and humans should not be involved in the same task. Humans benefited most from AI assistance when the AI was very confident, and in those cases it’s best to rely on the AI’s signal alone rather than spend radiologists’ costly time confirming or rejecting it. When the AI was uncertain, humans did worse with its inputs, too; the radiologists would have done better handling those cases on their own.
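As a rough illustration of that conclusion, here is a minimal sketch of a confidence-based triage rule that routes each case either to the AI alone or to a radiologist alone. The thresholds, case data, and field names are made up for the example; this is not the paper’s actual protocol.

```python
# Illustrative triage rule (not the paper's actual protocol): send a case to
# the AI alone when its prediction is confident, otherwise to a radiologist
# alone. Thresholds and cases below are made up for the example.
from dataclasses import dataclass

HIGH_CONFIDENCE = 0.90   # assumed cutoff for trusting a confident "positive"
LOW_CONFIDENCE = 0.10    # assumed cutoff for trusting a confident "negative"

@dataclass
class Case:
    case_id: str
    ai_probability: float  # model's estimated probability of disease

def route(case: Case) -> str:
    """Decide who handles the case, keeping AI and human tasks separate."""
    if case.ai_probability >= HIGH_CONFIDENCE:
        return "AI alone: flag as positive, no radiologist review"
    if case.ai_probability <= LOW_CONFIDENCE:
        return "AI alone: flag as negative, no radiologist review"
    return "Radiologist alone: AI is uncertain, so withhold its score"

for case in [Case("A-101", 0.97), Case("A-102", 0.55), Case("A-103", 0.03)]:
    print(case.case_id, "->", route(case))
```

The design choice mirrors the paper’s broader point: The useful question isn’t whether the AI or the radiologist is more accurate overall, but which cases each should handle without the other.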

In these early days of AI, then, perhaps the immediate priority for companies is not how to swap their people for algorithms outright but how to best use their employees in tandem with their AI.


Thanks for reading! And don’t hesitate to reach out with comments, questions, or topics you want to know more about.

Have a naturally intelligent weekend,

—Michelle Cheng, AI and technology reporter