THE VBP Blog
AI in Healthcare, One Year Later: Policy, Regulation, and Governance
Revisits AI in healthcare, exploring new federal policies, regulatory frameworks, and governance models shaping safer and more equitable adoption in 2025.
January 29, 2025 – When we published the initial “AI Revolution” series in 2024, we explored how Artificial Intelligence (AI) could transform documentation, diagnosis, patient communication, and value-based care, as well as the risks that came with it. At that time, many health systems were testing AI solutions, and regulators were largely observing. Now, the conversation has shifted.
Health systems and payers are moving from pilots to wider adoption, and federal agencies have responded with more concrete guidance, clearer definitions, and governance frameworks. This first blog of our two-part AI in Healthcare update explores how the policy landscape around AI in healthcare has matured, and what that means for organizations operating in value-based payment models.
CMS and Medicare Advantage Put Up Guardrails for Algorithmic Decision-Making
In February 2024, the Centers for Medicare & Medicaid Services (CMS) issued FAQs clarifying how AI and algorithms may be used by Medicare Advantage (MA) organizations in coverage determinations. The main takeaway is that an algorithm may assist with, but may not replace, an individualized medical necessity evaluation.
In April 2024, further guidance confirmed that while MA plans may deploy predictive tools, such as models that estimate length of stay, they cannot rely on a model alone to terminate post-acute services or deny benefits without assessing the specific individual’s condition. For value-based organizations and long-term services & supports (LTSS) providers, this is significant. It signals that algorithm-based triage or utilization management must include human-centered oversight, especially in home-based or post-acute settings where individual variation is high. The risk of algorithms replacing human oversight was one of the issues we highlighted in our AI Revolution series, so it’s good to see CMS and others taking it seriously.
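In workflow terms, the principle is simple: a model score can inform a determination but can never finalize one on its own. Below is a minimal, hypothetical sketch of that human-in-the-loop gate; the names, fields, and logic are illustrative assumptions, not CMS requirements or any plan’s actual system.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CoverageCase:
    """Illustrative record for an MA utilization-management review (hypothetical)."""
    member_id: str
    predicted_length_of_stay: int               # model output, advisory only
    clinician_assessment: Optional[str] = None  # individualized review of the member's condition


def can_issue_determination(case: CoverageCase) -> bool:
    """A determination requires an individualized clinical review.

    The model's prediction may inform the review, but it is never
    sufficient on its own to terminate or deny services.
    """
    return case.clinician_assessment is not None


# The prediction alone does not support a determination...
case = CoverageCase(member_id="A123", predicted_length_of_stay=12)
print(can_issue_determination(case))   # False

# ...until a clinician has assessed the specific individual's condition.
case.clinician_assessment = "Continued skilled therapy medically necessary per individualized review"
print(can_issue_determination(case))   # True
```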
The FDA’s Framework for Machine Learning Medical Devices
The Food and Drug Administration (FDA) has also advanced its approach to AI/ML-enabled medical devices over the past year. In June 2024, the agency published Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles, emphasizing that systems must clearly communicate intended use, logic, limitations, and risk so that clinicians and patients understand the basis for AI outputs.
Later in 2024, the FDA finalized guidance on Predetermined Change Control Plans (PCCPs) for AI/ML-enabled software. This created a structured pathway that allows devices to evolve post-clearance under an approved change-control plan rather than treating each update as an entirely new device submission. For health systems, technology vendors, and procurement teams, this is a clear signal that adaptive AI is on regulators’ radar and must be built and managed accordingly.
These changes close a major gap we identified in spring 2024 about how to govern learning systems. With lifecycle oversight, transparency, and risk monitoring now more visible, providers have a firmer base for trusting and deploying AI tools cleared for clinical use.
Operationalizing Transparency in the EHR
On the health IT side, the Office of the National Coordinator for Health Information Technology (ONC) issued the HTI-1 Final Rule (Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing). That rule includes a new certification criterion, “Decision Support Interventions” (DSI), which replaces the older “clinical decision support” criterion and explicitly targets AI and predictive tools embedded in certified health IT.
By December 31, 2024, developers of health IT certified under the prior criterion had to update to DSI. As of January 1, 2025, the DSI criterion is required to meet the “Base EHR” definition for certain provider programs. This means hospitals and health systems deploying AI-driven EHR or decision-support modules must receive consistent metadata about the model (e.g., training data, intended use, performance, maintenance) from the vendor. That transparency matters not just for adoption, but for auditability, fairness, and safety.
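To make that concrete, here is a minimal, hypothetical sketch of how a health system might record vendor-supplied model metadata for internal governance review. The field names are illustrative assumptions drawn from the categories above; they are not the ONC’s official source-attribute list.

```python
from dataclasses import dataclass, field


@dataclass
class ModelMetadata:
    """Illustrative record of vendor-supplied facts about a predictive decision-support tool.

    Field names are hypothetical examples, not ONC-defined source attributes.
    """
    model_name: str
    intended_use: str                  # what decisions the tool is meant to support
    training_data_summary: str         # population, time period, known gaps
    performance_metrics: dict          # e.g., {"AUROC": 0.81, "sensitivity": 0.74}
    known_limitations: list = field(default_factory=list)
    last_validated: str = ""           # date of most recent performance review
    maintenance_plan: str = ""         # how and when the vendor updates the model

    def is_review_ready(self) -> bool:
        """Basic completeness check before an internal governance review."""
        return bool(self.intended_use and self.training_data_summary and self.performance_metrics)


# Example: metadata for a hypothetical readmission-risk tool
readmit_model = ModelMetadata(
    model_name="ReadmitRisk v2 (hypothetical)",
    intended_use="Flag adults at elevated 30-day readmission risk for care-management outreach",
    training_data_summary="2019-2023 discharges from the vendor's client network; limited rural representation",
    performance_metrics={"AUROC": 0.81, "sensitivity": 0.74},
    known_limitations=["Not validated for pediatric populations"],
    last_validated="2024-11-15",
    maintenance_plan="Quarterly recalibration under the vendor's change-control process",
)
print(readmit_model.is_review_ready())  # True
```

Keeping this kind of record alongside procurement documents is one way a governance committee could check that the transparency information exists before a tool goes live, and revisit it as the model changes.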
Building Safety and Trust: National Governance Efforts in 2025
Beyond individual agencies, a broader pattern is emerging toward governance. For example, the Agency for Healthcare Research and Quality (AHRQ) has launched an AI in Healthcare Safety Program under its Patient Safety Organization auspices, soliciting public input in late 2024. The program is designed to build frameworks for identifying and tracking AI-related patient safety events, including bias or harm, and publishing lessons learned.
All of this reflects a shift from moving fast to moving wisely, with monitoring in place to reduce the chance of harm to consumers. AI adoption in healthcare increasingly depends on safety, trust, equity, and governance, especially for tools that make or influence clinical and coverage decisions.
A New Phase: From Experimentation to Accountability
What’s the bigger takeaway from all of this? In 2024, the story was primarily about potential and what AI could do in healthcare. In 2025, the story has shifted to what AI must do to deliver value safely, transparently, and equitably. Health systems, payers, and vendors must now treat AI governance as every bit as important as algorithmic accuracy. Procurement, deployment, monitoring, audit trails, bias checks, and human-in-the-loop workflows are no longer optional. They are entering the regulatory terrain, and that is an important guardrail to protect consumers.
As these policy and regulatory frameworks solidify, they set the tone for Part 2 of this series, where we will look at what is happening in real-world practice, including how organizations are deploying AI, what results they are achieving and where the gaps remain.
Advocate’s Perspective
From a value-based care and LTSS vantage point, the changes in policy and governance are important. Systems that serve consumers through home-based care, LTSS, or complex case management depend on tools that coordinate care, assess risk, manage transitions, and support decision making. For these populations, transparency ensures that consumers and families can understand how an algorithm influences a care or authorization decision, and governance guarantees that models are audited for bias, particularly for older adults, individuals with disabilities, and those affected by social determinants of health. There are also safeguards to ensure that machine-generated predictions never replace clinician judgment, especially in complex or high-variation settings. As AI expands into home health, skilled nursing, LTSS, and value-based risk arrangements, the frameworks outlined above represent essential guardrails. Without them, AI risks amplifying disparities rather than narrowing them. This shift toward regulation and governance lays the foundation for responsible innovation that, done properly, can benefit rather than harm consumers.
Onward!
Free E-Book
Paying for Outcomes: The Value-Based Revolution
Written by Fady Sahhar
A practical guide for payers, providers, and policymakers shaping the next generation of healthcare delivery.
Now Available for Free
DOWNLOAD NOW
About the Author
Fady Sahhar brings over 30 years of senior management experience working with major multinational companies including Sara Lee, Mobil Oil, Tenneco Packaging, Pactiv, Progressive Insurance, Transitions Optical, PPG Industries and Essilor (France).
His corporate responsibilities included new product development, strategic planning, marketing management, and global sales. He has developed a number of global communications networks, launched products in over 45 countries, and managed a number of branded patented products.
About the Co-Author
Mandy Sahhar provides experience in digital marketing, event management, and business development. Her background has allowed her to get in on the ground floor of marketing efforts including website design, content marketing, and trade show planning. Through her modern approach, she focuses on bringing businesses into the new digital age of marketing through unique approaches and focused content creation. With a passion for communications, she can bring a fresh perspective to an ever-changing industry. Mandy has an MBA with a marketing concentration from Canisius College.