‘Oppenheimer’ and the risks of AI in health care

By Junaid Nabi, Aug. 2, 2023

All great stories have complicated endings. But that doesn’t mean they can’t offer simple and instructive lessons. Christopher Nolan’s magnum opus, “Oppenheimer,” tells the tragic story of the “father of the atomic bomb.” But it is also a story about how the United States missed an opportunity to be a global leader in the development of an innovation that would define the 20th century. This century will be defined by transformational technologies — especially artificial intelligence — and J. Robert Oppenheimer’s story offers particularly salient lessons for health care leaders, entrepreneurs, and policymakers.

Oppenheimer’s era was defined by the nuclear arms race, while ours is being defined by competition in artificial intelligence. Both technologies are powerful tools that will change the trajectory of humanity. In another disconcerting similarity with Oppenheimer’s era, the political actors who frame national policies around innovation seem either disengaged from the nuances of science or intent on politicizing them. This politicking has made it increasingly difficult to have constructive conversations about the future of health care innovation. We already see this happening with the misinformation around mRNA technology and the ongoing harassment of scientists.


Meanwhile, more than a dozen health care companies are already using ChatGPT, an AI-powered chatbot developed by OpenAI, for a variety of functions. But we do not fully understand the consequences of integrating these large language models into health care functions. Surprisingly, many of the ongoing conversations on AI policymaking and regulation have centered on technology industry leaders and have not fully drawn on the expertise of leaders from the health care industry, who have unique insights into the application of AI to health and medicine. This reminded me of a powerful scene in “Oppenheimer” in which the titular scientist tries to persuade President Truman of the dangers of nuclear power, only to have his concerns dismissed.

Media attention on these rapid developments in technology, along with concerns about how consumer data is used, is also adding pressure on legislators to act. Last week, Republican Sen. Lindsey Graham and Democratic Sen. Elizabeth Warren acknowledged that “Congress is too slow, it lacks the tech expertise” and underscored the importance of creating a bipartisan regulatory agency through the Digital Consumer Protection Commission Act. The proposal, however, seems focused mainly on large technology companies, and misses why such protections are also necessary for AI applications and endeavors in health care. Research studies continue to highlight how machine learning-based tools can lead to inappropriate medical care.


Oppenheimer’s story also illustrates the disconnect between the goals of science — such as transparency, collaboration, and truth-seeking — and political goals, which are always in flux. Understanding this is particularly urgent now, as the public is continuously learning about how AI will change industries, including health care.


However, a lack of clarity about how these changes will affect patient care can create an environment of fear and mistrust that ultimately stifles innovation — because health is far more personal than the economy. Health care leaders who use these technologies to build digital applications can play an important role as science communicators, for example by including a dedicated section in their communication materials on what patients should expect their technology to deliver.

Finally, we must reorient our digital innovation efforts toward closing health care disparities and take care not to further marginalize vulnerable populations. Oppenheimer’s story is also a reminder of the suffering inflicted on the Hispanic and Native American communities in New Mexico at the time of the Trinity test. Recent studies are already demonstrating how digital algorithms in health care decision-making can exacerbate inequities. Health care technology entrepreneurs and policymakers must ensure that all necessary bias mitigation strategies — including the use of representative datasets for building digital applications — are implemented to avoid harming underserved patients. This is also one of the most effective ways for digital health enterprises to build trust.

We are at an inflection point in the application of these technologies to health care — not unlike the one Oppenheimer’s generation faced in the application of atomic power to weaponry. Political and scientific leaders at that time did not fully understand the far-reaching implications of nuclear innovations. We are having a similar conversation now. A few leaders from the tech industry have argued for a pause in further research and development of generative AI. The major problems with that approach are that it puts the United States at a competitive disadvantage globally and does not address the fundamental issues that arise from integrating this technology into various applications. A more effective strategy to forestall potential harms would be to assemble the right stakeholders, develop frameworks to guide the direction of this incredible technology, and create global data partnerships that set standards for the future use of these tools.

Learning from historical misjudgments can provide the guidance needed for designing the next steps. Anyone who wants to innovate in AI for health care should watch “Oppenheimer” and take notes.

Junaid Nabi is a physician and health care strategist and serves on the Working Group on Regulatory Considerations for Digital Health and Innovation at the World Health Organization. He is a New Voices senior fellow at the Aspen Institute and a Millennium fellow at the Atlantic Council.
