Published Date: 11/07/2025
We are living in a disorienting time. At the quarter mark of the 21st century, several forces have conspired to transform a range of industries, how we go about our daily lives, and, of particular interest here, the conditions and structures that determine the health of individuals and populations. Perhaps nothing has been as disruptive as the arrival of widely available artificial intelligence (AI). Although research into AI has been under way for decades and specialized AI systems have been in use for years, the introduction of ChatGPT (OpenAI) in 2022, the first widely available large language model chatbot, dramatically changed global awareness of the technology's potential and opened the floodgates for adoption of AI approaches throughout all sectors, including those related to health and health care.
The speed with which AI has entered the national and global consciousness and, more practically, been incorporated into algorithms that inform a broad range of systems and structures means, not surprisingly, that policies and oversight have lagged behind its implementation and adaptation in several industries. This is true for health and health care. Although some steps have been taken to regulate AI algorithms in health care, including the US Centers for Medicare & Medicaid Services (CMS) issuing a Final Rule in 2024 requiring human engagement in final medical determinations in Medicare Advantage plans, the evolution of the technology has outstripped regulatory and governance oversight, with rulemaking and standard setting well behind the pace of technological development. Although the potential harms of AI, in both health writ large and medical practice more specifically, have now been discussed in a range of articles and reports, it will take some time to align the range of policies and rules needed to ensure that we can maximize the potential benefits of AI while minimizing the harms.
Compounding this challenge of sparse regulation has been the second-term presidency of Donald J. Trump. Since the start of this term, a flurry of executive orders and efforts to change regulatory and governance structures has been unleashed, accompanied by frequent reversals and countermanding of recently implemented rules that have confused—and, in many ways, paralyzed—the regulatory landscape across a range of sectors. This disorienting change has been coupled with a concerted federal effort to cut costs, in no small part through trimming staffing in federal agencies, leaving many agencies—already underequipped to deal with the pace of change across sectors—even less well positioned to tackle the breadth of policymaking needed to help guide emerging technologies like AI. It has been amply noted that this moment holds enormous threat for the country and the world, with a range of potential consequences. However, much of our thinking thus far has focused, reasonably, on the consequences of action. It is also worth contemplating the consequences of inaction—in particular, inaction brought about by the understaffing of federal agencies throughout the US Department of Health and Human Services and its various units—the Food and Drug Administration, CMS, the Centers for Disease Control and Prevention—that have a role in creating the guardrails that can ensure that the uses of AI are positive.
Two observations are worth reflecting on. First, technology will continue to evolve in ways that have the potential to improve many aspects of the human experience. This has long been the case and will, we hope, continue to be so, with AI as a core part of that evolution. Second, the development of technology also poses potential harm, and regulation and governance cannot be left exclusively in the hands of a private sector that is animated, at its core, by a profit motive. This observation suggests the need for an active, engaged public sector, one functioning at a level that can match the speed of change and adoption of new technologies like AI. Several other parts of the world, notably Europe with its Artificial Intelligence Act, have been ahead of the US in this regulatory space, potentially offering an example for how the US can similarly engage. We can only hope that the US federal government can resume this function in a timely manner and engage around AI to optimize the potential of this still new technology and mitigate its potential harms.
Q: What is the primary concern with the rapid adoption of AI in healthcare?
A: The primary concern is that the rapid adoption of AI has outpaced regulatory oversight, leading to potential risks and harms that may not be adequately addressed.
Q: What steps have been taken to regulate AI in healthcare?
A: Steps include the US Centers for Medicare & Medicaid Services (CMS) issuing a Final Rule in 2024 requiring human engagement in final medical determinations in Medicare Advantage plans.
Q: How has the second-term presidency of Donald J. Trump impacted AI regulation?
A: The second-term presidency has seen a flurry of executive orders and frequent reversals of rules, which have confused and paralyzed the regulatory landscape, further complicating the oversight of AI.
Q: What are the potential consequences of inaction in AI regulation?
A: Inaction can leave risks and harms unmitigated, because the private sector, driven by profit motives, may not implement the necessary safeguards and ethical standards on its own.
Q: What can the US learn from other regions in terms of AI regulation?
A: The US can learn from regions like Europe, which have been ahead in regulatory oversight, potentially offering a model for how to engage and regulate AI effectively.