AI Dominates, But Attention to Manufacturing, Supply Chain and Funding Remains Paramount to Success in Medtech Innovation in 2024 and 2025

 

AI’s potential and momentum in healthcare and medtech are massive. But the need for safety assurance remains paramount as regulators work to mitigate the risks of a technology that still has many milestones to reach in the clinical care setting, especially in diagnosis.

 

From note taking, scheduling, and hybrid-meeting moderation and reporting to virtual assistants and pinpoint wayfinding, AI has become ubiquitous in our daily lives, professional and personal, and its accuracy and maturity have advanced rapidly.

 

Though it may feel that way when you are juggling multiple tasks at once, lives are not at stake in those activities. AI has shown high potential across many healthcare endeavors, including drug discovery, clinical trial design and enrollment, diagnostics, imaging systems and patient monitoring devices, early disease detection, predictive analytics, and personalized medicine. Yet even relatively simple tasks assigned to AI, like writing medical summaries, remain prone to errors or “hallucinations.”


This month, MedCity News reported, “In the 50 summaries produced by GPT-4o, the researchers identified 327 instances of medical event inconsistencies, 114 instances of incorrect reasoning and three instances of chronological inconsistencies.

 

“The 50 summaries generated by Llama-3 were shorter and less comprehensive than those produced by GPT-4o, [Prathiksha Rumale, one of the study’s authors] noted. In these summaries, the research team found 271 instances of medical event inconsistencies, 53 instances of incorrect reasoning and one chronological inconsistency.

 

“‘The most frequent hallucinations were related to symptoms, diagnosis and medicinal instructions, highlighting the fact that medical domain knowledge remains challenging to the state-of-the-art language models,’ Rumale explained.”

 

However, regulators like the US FDA are charged first and foremost with limiting risk and ensuring patient safety. AI for diagnostic purposes, especially generative AI, has some distance to go before regulators are confident that its risks have been sufficiently mitigated and that it is safe for the substantial clinical utility its potential suggests.

 

GCMI Medical Director Emily Blum, MD

The nature of AI and ML will require more substantial investments by innovators in quality systems and post-market surveillance

“This field will continue to evolve quickly, and constantly,” says GCMI Medical Director Emily Blum, MD. “Innovators integrating AI technologies into their new devices will need to be nimble. They will need to partner with software engineers with experience managing AI and machine learning (ML) systems long-term, keeping the associated data clean and manageable as regulatory guidance documents change. They will also need a robust quality management system in place to make compliance documents more easily accessible and available to submit to regulatory bodies rather than having to stop what they are doing and scramble to comply.”


“Because the technology is evolving so quickly, and it will continue to do so, innovators need to pay close attention to regulatory news on the topic,” says GCMI Interim Executive Director Saylan Lukas. “Pre-sub calls with the FDA will be a must for new devices with an AI component. They will need to be regulated differently from other medical devices and technologies. Regulatory approvals will not be locked in place, approved and frozen once these devices enter the market and clinical use. They will require significantly more post-market surveillance, which will also impact business models and the team investment required to support those post-market activities and the continuous technological evolution inherent to AI.”


“If AI by its very nature is constantly evolving, how do we ensure it is not learning the wrong lessons potentially leading to adverse events?” Emily asks.


“The only way to know is to test it,” Saylan says. “But what will be the appropriate post-market interval required by regulators? Is an annual ‘check up’ too burdensome? Is five years too long? Those are questions the industry as a whole, especially regulatory bodies, has to answer.”


“Just like physicians have to take qualification tests and accrue continuing medical education (CME) credits, the FDA is recognizing that AI is trained continuously and needs to be tested at regular intervals just as physicians are,” Emily says. “They are establishing standards to do so.”
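What might such a recurring “check up” look like in practice? The Python sketch below is a minimal, hypothetical illustration of a scheduled post-market performance check: the deployed model is re-scored on a fresh batch of adjudicated cases and flagged for human review if its accuracy drifts below the baseline locked at clearance. The function names, tolerance, and data assumptions are illustrative, not regulatory requirements.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class SurveillanceResult:
    accuracy: float      # accuracy measured on the fresh post-market sample
    baseline: float      # accuracy locked at the time of clearance
    needs_review: bool   # True if drift exceeds the allowed tolerance

def post_market_check(
    model: Callable[[object], int],   # deployed model: input -> predicted label
    cases: Sequence[object],          # fresh real-world cases
    labels: Sequence[int],            # adjudicated ground-truth labels
    baseline_accuracy: float,         # performance established at clearance
    tolerance: float = 0.02,          # hypothetical allowed drift of 2 points
) -> SurveillanceResult:
    """Re-score a deployed model on new labeled data and flag drift for review."""
    correct = sum(model(x) == y for x, y in zip(cases, labels))
    accuracy = correct / len(cases)
    drifted = accuracy < baseline_accuracy - tolerance
    return SurveillanceResult(accuracy, baseline_accuracy, drifted)
```

A real surveillance program would go further, tracking subgroup performance and shifts in the input data distribution rather than a single aggregate metric, at whatever interval regulators ultimately settle on.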

 

FDA’s AI/ML Regulatory Framework

The FDA is actively working to create a regulatory framework that addresses the unique challenges, including ethics concerns, posed by AI and ML in medical devices. In January 2021, the FDA released an AI/ML Action Plan, which outlines a pathway for the regulation of software as a medical device (SaMD) that utilizes AI and ML. The plan emphasizes the importance of a “total product lifecycle” approach, where AI/ML-based devices are monitored and updated throughout their use to ensure they remain safe and effective.


According to McKinsey & Company, innovators should develop protocols for implementing updates to AI/ML algorithms, which will need to be reviewed and potentially re-approved by the FDA depending on the risk level of the changes.
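By way of hypothetical illustration, such an update protocol can start as simply as a structured change record that classifies each algorithm modification by risk and derives whether it should trigger a new regulatory interaction. In this Python sketch, the field names, risk tiers, and review rule are all assumptions; the real threshold depends on the device’s risk profile, its change control plan, and current FDA guidance.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"            # e.g., retraining within the approved envelope
    MODERATE = "moderate"  # e.g., a new data source for an existing claim
    HIGH = "high"          # e.g., a new intended use or patient population

@dataclass
class AlgorithmChangeRecord:
    """One entry in a change-control log for an AI/ML-based device (illustrative)."""
    model_version: str       # version shipped after the change
    description: str         # what changed and why
    risk: RiskLevel          # the team's risk classification of the change
    validation_report: str   # identifier of the verification evidence

    def requires_regulatory_review(self) -> bool:
        # Assumption: anything above LOW risk triggers a new FDA interaction.
        return self.risk is not RiskLevel.LOW
```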

 

GCMI Interim Executive Director Saylan Lukas

In the pre-market stage, innovators need to engage with the agency early in the development process to discuss the regulatory pathway along with the data and other evidence required for market approval. “Pre-sub meetings are always of high value for innovators,” Saylan says. “They will be paramount for innovators seeking to successfully commercialize technologies with a critical AI component.”


The FDA is collaborating with international bodies to develop standards and best practices for AI/ML development (Good Machine Learning Practices, or GMLP), ensuring consistency in how these technologies are validated and implemented. The agency’s 10 “Guiding Principles of Good Machine Learning Practice for Medical Device Development” can be found here.


As with Good Manufacturing Practices (GMP) and Good Laboratory Practices (GLP), regulators will look closely at adherence to GMLP guidelines. How are you controlling the code? Who can make changes? What does your post-market analysis look like?
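One simple, hypothetical way to make those controls auditable is to gate every release behind a recorded approval and an integrity hash of the exact artifact being shipped, as in the Python sketch below. The roles and checks here are assumptions for illustration, not GMLP requirements.

```python
import hashlib

# Hypothetical roles permitted to approve a release under the quality system.
AUTHORIZED_APPROVERS = {"qa_lead", "regulatory_lead"}

def release_gate(artifact: bytes, expected_sha256: str, approver: str) -> bool:
    """Permit a model or code artifact to ship only if the approver is
    authorized and the artifact is byte-identical to what was reviewed.

    This answers, in miniature, "how is the code controlled" and "who can
    make changes"; post-market analysis would live in a separate record.
    """
    if approver not in AUTHORIZED_APPROVERS:
        return False
    return hashlib.sha256(artifact).hexdigest() == expected_sha256
```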

 

EU Regulatory Landscape

In Europe, the regulatory environment for AI and ML in medical devices is shaped by the Medical Device Regulation (MDR), which took effect in May 2021. The MDR places a strong emphasis on the clinical evaluation of AI-based devices, requiring robust evidence of safety and performance before they can be marketed. Additionally, the European Union’s Artificial Intelligence Act imposes further requirements on high-risk AI applications, including medical devices, with strict compliance obligations related to transparency, risk management, and human oversight of AI systems (AlphaSense).


In fact, CNBC reported, “The European Union’s landmark artificial intelligence law officially enters into force [August 1st, 2024] — and it means tough changes for American technology giants.


“The AI Act, a landmark rule that aims to govern the way companies develop, use and apply AI, was given final approval by EU member states, lawmakers, and the European Commission — the executive body of the EU — in May.”

 

Will the US FDA follow suit? Time will tell, but some level of agreement and alignment is likely.

“The FDA is compelled to do everything in its power to ensure patient safety first and foremost,” Emily says. “Imagine this summer’s CrowdStrike event affecting the healthcare system at many times the magnitude it did.”


AI and ML advancements in healthcare and medtech, along with new and nascent regulatory guidelines, reflect a growing recognition of both the potential and the risks of AI and ML in healthcare. As these technologies continue to evolve, regulatory bodies are working to strike a balance between fostering innovation and ensuring patient safety.


Medtech innovators whose new technologies have critical AI or ML components will need to engage proactively with regulatory bodies early and often to make the approval process as smooth as possible, regardless of the disease state or therapeutic area of interest. Gathering clean, quality data to support product development (and improvement), along with strong quality and post-market monitoring systems, is paramount throughout the product’s design, development and commercialization lifecycle.

 

Cybersecurity Regulations

As cybersecurity threats have increased, so have regulatory requirements. As previously mentioned, the CrowdStrike episode wreaked havoc on hospitals and health systems. Imagine what it could do to medical technologies reliant upon AI and ML.


Section 524B has been added to the Federal Food, Drug, and Cosmetic Act, mandating that new applications for connected medical devices include a cybersecurity plan. This plan must detail how manufacturers will monitor, identify, and address potential cybersecurity risks.


Devices, especially those connected to the internet, must have robust security features, and innovators should integrate these considerations from the design phase. Plante Moran succinctly stated, “Detailed cybersecurity documentation must now be a part of regulatory submissions, requiring collaboration between product developers and cybersecurity experts.”
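As one concrete, hypothetical example of building security in from the design phase, the Python sketch below verifies that an update package carries a valid Ed25519 signature from the manufacturer before it is applied, using the widely used `cryptography` library. The key-handling scheme is an assumption for illustration only.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update(package: bytes, signature: bytes, public_key_bytes: bytes) -> bool:
    """Accept an update only if it was signed by the manufacturer's key.

    `public_key_bytes` is the 32-byte Ed25519 public key that, in a real
    device, would be provisioned at manufacture time (an assumption here).
    Rejecting unsigned or tampered updates is one basic control a
    cybersecurity plan might describe, alongside secure boot, key
    rotation, and vulnerability monitoring.
    """
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, package)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False
```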


In conclusion, the latest advancements in AI, ML, and other medical technologies offer significant opportunities for innovators in specific therapeutic areas. However, navigating the regulatory landscape and addressing the unique challenges of each application will be crucial for bringing these innovations to market successfully.


Thanks for reading! Stay tuned for part 2, in which we unpack how US/EU regulatory alignment, wearable tech, 3D printing, funding and the supply chain should lead medtech innovators’ awareness for the balance of 2024 and 2025.