In January 2025, Republican Representative David Schweikert introduced the Healthy Technology Act of 2025. If passed, the bill would reshape the use of artificial intelligence (AI) in medical prescribing.
The proposed legislation seeks to amend Section 503(b) of the Federal Food, Drug, and Cosmetic Act, granting AI and machine learning technologies the status of authorized practitioners with the ability to prescribe medications without human involvement.
By recognizing AI’s expanding role in healthcare, the bill aims to integrate advanced technologies into the prescription process.
Currently, only healthcare professionals can prescribe medications, but AI-driven systems like Google’s prescription AI, Oxford’s DrugGPT, and PharmacyGPT are emerging to assist. As these technologies advance, the prospect of removing human clinicians from the process raises important questions.
The measure is in the early stages of the legislative process.
Key Takeaways
- The Healthy Technology Act of 2025 proposes giving AI the authority to prescribe medications, sparking regulatory and ethical concerns.
- The bill aims to update FDA regulations to allow AI to prescribe medications without human oversight.
- The FDA is facing internal challenges that complicate the immediate rollout of AI-based prescription systems.
- Regulatory frameworks in the U.S. and Europe differ, with the EU adopting a more cautious stance on AI in healthcare.
Challenges in FDA oversight
The bill’s introduction comes amid significant internal challenges at the Food and Drug Administration (FDA), the federal agency responsible for regulating drugs, medical devices, and other healthcare technologies.
Weeks after the bill's introduction, the agency experienced abrupt cuts and staff resignations, particularly affecting probationary employees and AI regulatory initiatives.
This weakened oversight capacity raises concerns about the near-term implementation of autonomous AI-based prescription systems. Critics argue that such systems must undergo rigorous testing and validation before they can be considered safe and effective.
The current unrest at the FDA complicates the future regulation of AI in the United States, making it harder to establish a clear and effective framework for overseeing these sophisticated technologies and holding them accountable.
AI supervision in the U.S. healthcare sector
The Healthy Technology Act emerged from a U.S. policy environment that encourages technological innovation.
While the U.S. adopts a flexible approach to technological progress, Europe takes a more cautious stance, especially in high-risk sectors like healthcare. This contrast reflects a broader cultural divide on balancing advancement with individual rights protection.
AI regulation in U.S. healthcare has evolved within a fragmented system, adapting existing rules rather than overhauling them. The FDA has played a central role, integrating AI into its frameworks as a specific type of medical device software for over a decade.
Since the approval of the first autonomous diagnostic algorithm for diabetic retinopathy in 2018, the FDA has adopted a more structured regulatory approach.
To ensure safe AI integration in healthcare, the FDA introduced frameworks such as the Predetermined Change Control Plan and the Good Machine Learning Practice guiding principles for medical device development, establishing quality standards and continuous oversight to safeguard patient safety and technology efficacy.
Digital health and global regulations
The European Union enforces stricter regulations for AI systems used in healthcare compared to other regions.
Under the European Artificial Intelligence Act (Regulation (EU) 2024/1689), many AI systems used in healthcare are classified as "high-risk," subjecting them to stringent requirements that go well beyond the baseline obligations applying to all AI systems.
These high-risk systems must undergo extensive quality and human-oversight evaluations and obtain CE marking, the certification required for products sold in the European Union, which involves engaging a Notified Body to verify compliance with strict safety and performance criteria.
The integration of AI into healthcare is closely aligned with the Medical Device Regulation, which enforces rigorous quality and safety requirements for all medical devices, including AI technologies.
Furthermore, the General Data Protection Regulation (GDPR) imposes strict limitations on health data usage and automated decision-making without human oversight, providing a robust safeguard for patient privacy and data security.
Digital health regulations and patient protection
The launch of autonomous AI-driven prescription systems raises significant regulatory obstacles that must be carefully navigated.
The existing legal framework in the United States does not include specific provisions for algorithmic liability, complicating the process of determining responsibility when intelligent systems make incorrect drug prescriptions or provide inaccurate diagnoses.
In the European Union, the legal landscape is more defined, ensuring that manufacturers are strictly accountable for any issues arising from high-risk AI systems.
In the United States, the legal framework for AI in healthcare is less clear, and healthcare professionals are often held responsible for errors made by automated systems, even when they are not directly at fault.
With the advancement of AI towards more autonomous decision-making processes, the existing regulatory framework might not be robust enough to adequately manage the intricacies and challenges posed by these sophisticated technologies.
This highlights the need for adjustments in the existing regulatory structure to ensure a fair distribution of responsibilities among those who develop, provide, and utilize health technology.
The role of healthcare professionals in AI integration
To promote the safe and effective use of AI in healthcare, the European Union mandates that all providers and users guarantee that individuals working with these technologies have the necessary knowledge and training.
Ensuring that healthcare professionals receive thorough training is essential for reducing potential risks and maximizing the benefits of these advanced technologies.
Professionals must be able to determine when to rely on AI recommendations and when to critically assess their outputs, acknowledging their own indispensable role in the decision-making process.
This underscores the necessity for health experts to continuously refine their knowledge and skills to effectively integrate and utilize intelligent systems.
The deployment of autonomous algorithms within the healthcare sector also triggers a range of ethical dilemmas.
AI has the potential to improve diagnostics and prognostics, but its ability to make autonomous therapeutic decisions raises ethical concerns, including patient privacy, accountability, and equitable healthcare access.
Regulatory uncertainty and the path forward
The push toward autonomous AI prescribing systems in the United States is occurring amid regulatory ambiguity and staff reductions at the FDA.
This underscores the pressing need for comprehensive regulatory oversight to ensure that intelligent technologies are deployed safely and effectively in healthcare.
Equipping the FDA with the necessary authority and resources to effectively manage and regulate the introduction of advanced machine-driven systems into the healthcare sector is critical.
The ongoing instability at the FDA highlights the importance of a robust and well-funded regulatory framework to ensure the safe implementation of intelligent systems in healthcare, safeguarding public health and minimizing potential risks.
It is imperative for Congress to carefully evaluate any proposed legislation that would permit AI-enabled autonomous prescribing, ensuring that such measures are accompanied by stringent regulatory safeguards to protect public health and safety.
While artificial intelligence holds immense potential to revolutionize medical care, the absence of adequate protections could introduce considerable risks and uncertainties.