
Why Regulatory Sandboxes Offer a Safe Space for AI Innovation in Healthcare
In the second part of her Pharma Commerce video interview, Linda Malek, JD, partner at Crowell & Moring, notes that by bringing regulators and innovators together, regulatory sandboxes could help US healthcare organizations test and refine AI tools responsibly while allowing oversight agencies to keep pace with rapid technological change.
The White House AI Action Plan is designed to position the United States as a global leader in artificial intelligence by accelerating development, adoption, and innovation across industries, including healthcare. According to Linda Malek, JD, partner at Crowell & Moring, the plan’s intent is twofold: to maintain US competitiveness against other nations advancing rapidly in AI, and to create an environment that fosters safe, efficient, and scalable AI implementation.
For hospitals, health systems, and health-tech companies, the plan could significantly shape the pace and scope of AI adoption through several key mechanisms. One of the most notable is the introduction of regulatory sandboxes—controlled environments where new AI tools can be tested and validated with reduced bureaucratic friction. These sandboxes are intended to streamline approval processes for AI-driven medical devices, clinical software, and digital tools, effectively “cutting red tape” and allowing innovations to reach clinical settings faster.
Another major component involves public investment in infrastructure. The plan calls on agencies such as the National Institute of Standards and Technology (NIST) and the U.S. Food and Drug Administration (FDA) to expand national data and computing infrastructure. This includes developing large-scale data centers and improving access to high-quality, diverse datasets—resources essential for training and validating AI models.
By emphasizing both regulatory flexibility and technical capacity-building, the Action Plan aims to lower barriers to entry for healthcare innovators while ensuring oversight and accountability remain in place. Ultimately, Malek suggests, this initiative could enable faster clinical integration of AI tools, strengthen domestic innovation pipelines, and help US healthcare systems leverage AI more effectively for patient care, operational efficiency, and research advancement.
Malek also dives into the role sandboxes could play in helping healthcare organizations test and implement AI safely while protecting patients; how providers and health systems can prepare for potentially conflicting AI-related regulatory requirements; the steps healthcare organizations can take now to align with the Action Plan and position themselves for regulatory changes in the next few years; and much more.
A transcript of her conversation with PC can be found below.
PC: What role could regulatory sandboxes play in helping healthcare organizations test and implement AI safely, while protecting patients?
Malek: In Europe, the regulatory sandbox is already a more well-developed concept, and so here in the US, through the AI Action Plan, I would imagine it will develop in its own unique way. I think, at least at a high level for regulatory authorities, the sandbox allows legislators and other oversight agencies to really understand how the technology is developing, because very often, the technology outpaces the regulation that we have in place. By creating a regulatory sandbox, you give regulators an opportunity to work closely with private companies and observe the development of AI, and because they understand those developments and how industry is using them, they're better able to develop rules around what best practices should look like in the development of AI.
I think that's very important when it comes to patients and other consumers who may utilize these technologies, because what occurs in a regulatory sandbox is really a partnership. You have the regulators on one side, and then you have organizations and businesses on the other side who are able to have some regulatory certainty as they develop these technologies, so that they can position themselves to push forward with innovation while the regulators are looking at where they may need to push back.
There's a check, there's a balance, and yet there's an opportunity to develop in this sort of protected environment. I think that from the standpoint of patients and other consumers, that should give some assurance that the AI is being developed in a way that replicates how it's going to happen in the real world and how it may apply to them, and that it's not AI run amok, because there's a lot of fear around AI as well. Developing AI in a regulatory sandbox framework like this should provide, I think, some comfort that it's being developed in a way that reflects best practices in real time.