Every month, thousands of drug-promo packages, ranging from TV spots to banner ads and updated labels, land in the FDA’s queue for 2253 review. The agency tracks “the number of required submissions of promotional communications to FDA on Form 2253 and the total number of submitted materials, as a submission may contain more than one promotional communication.”
Yet the agency is heading into fiscal 2026 with a proposed $6.8 billion budget, representing an overall decrease of nearly 4% from fiscal year 2025 amidst an ongoing reduction-in-force and early retirements. These pressures may be contributing to recent challenges, as the FDA has missed several PDUFA action dates this spring, including for GSK’s Nucala and Novavax’s updated COVID-19 vaccine.
Switching on the Elsa LLM
Against that backdrop, the agency this week switched on Elsa, a locked-down generative-AI assistant that Commissioner Marty Makary says can slash certain document checks “from two or three days to six minutes.” The agency says Elsa is already being used to “summarize adverse events to support safety profile assessments, perform faster label comparisons, and generate code,” as well as to “accelerate clinical protocol reviews” and “identify high-priority inspection targets.” For industry veterans, the move feels less like a moonshot than a foregone conclusion. With LLMs now embedded in many workplaces, regulators had a choice: deploy a secure, in-house model or risk staff copying confidential text into public chatbots, says Ahmed Elsayyad, president of AI-platform maker Ostro, which brings consumer-grade AI experiences to pharma marketing.
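The agency hasn’t described how these tasks are implemented, but a label comparison, for instance, reduces to surfacing the deltas between two document versions for a reviewer. A minimal sketch in Python, using the standard-library difflib and invented label text (nothing here reflects FDA’s actual tooling):

```python
# Illustrative sketch of automated label comparison; not FDA's implementation.
# Uses Python's standard difflib to surface changed passages between two
# hypothetical label versions so a human reviewer can focus on the deltas.
import difflib

old_label = """Indications: Treatment of moderate asthma in adults.
Dosage: 100 mg once daily.
Warnings: May cause drowsiness."""

new_label = """Indications: Treatment of moderate to severe asthma in adults.
Dosage: 100 mg once daily.
Warnings: May cause drowsiness and dizziness."""

# unified_diff yields only the changed lines plus context, rather than
# requiring a reviewer to reread the entire label side by side.
diff = difflib.unified_diff(
    old_label.splitlines(),
    new_label.splitlines(),
    fromfile="label_v1",
    tofile="label_v2",
    lineterm="",
)
print("\n".join(diff))
```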
Many pharma companies have already explored similar tools, and Elsa’s initial capabilities are “well-aligned with the kind of operational efficiencies that many of us hoped to see from the FDA’s AI push,” said Jon Walsh, founder and chief scientific officer at Unlearn. As the industry and the FDA both gain experience with genAI tools, Walsh expects to see them used “on both sides for writing and review.”
The inevitability thesis
In any event, genAI has clearly gone mainstream. Over 500 million people now use ChatGPT weekly, and Gartner notes that about 60% of organizations report using genAI beyond ChatGPT for specific applications.
Pharma is no outlier: a March survey from SAS and Coleman Parkes found that 95% of life sciences and pharmaceutical organizations are either using or planning to adopt genAI within the next two years. Given such dynamics, and the possibility that at least some FDA employees were already experimenting with genAI tools, the agency made the right move by developing a bespoke LLM for its needs, Elsayyad said.
That ubiquity creates a risk inside government, Elsayyad argues: if the FDA hadn’t built its own walled-garden model, staff would have ended up using unconstrained public LLMs. The release of a system like Elsa may have been “inevitable,” Elsayyad added, and “it’s probably the right choice for the FDA to do that.”
Walsh sees the FDA’s internal AI adoption as a pivotal moment. “This move signals that the FDA is not only open to AI but is actively leading its thoughtful adoption,” Walsh stated. He adds that he views it as “a positive development for our company and others building AI tools to support clinical trials.” Unlearn creates AI-generated “digital twins” of trial participants to reduce control arm sizes by up to 35%, with its PROCOVA methodology qualified by the EMA in 2022 and endorsed by the FDA as aligning with current guidance.
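Unlearn’s models are proprietary, but the statistical idea behind prognostic covariate adjustment can be sketched: predict each participant’s likely outcome from baseline data, then adjust the treatment-effect regression for that prognostic score, which absorbs outcome variance and supports smaller control arms. A minimal illustration with simulated data (all numbers invented, not Unlearn’s method):

```python
# Minimal sketch of the general idea behind prognostic covariate adjustment
# (methods like PROCOVA); simulated data only, not Unlearn's implementation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400

# Baseline prognostic score: in practice, an AI model's prediction of each
# participant's outcome under control; here it is simulated directly.
prognostic_score = rng.normal(size=n)
treatment = rng.integers(0, 2, size=n)

# Simulated outcome: driven largely by prognosis, plus a 0.5 treatment effect.
outcome = prognostic_score + 0.5 * treatment + rng.normal(scale=0.5, size=n)

# Adjusting for the prognostic score absorbs outcome variance, tightening the
# confidence interval on the treatment effect versus an unadjusted comparison.
X = sm.add_constant(np.column_stack([treatment, prognostic_score]))
fit = sm.OLS(outcome, X).fit()
print(fit.summary(xname=["const", "treatment", "prognostic_score"]))
```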
Walsh believes the immediate applications will bring clear benefits: “These are areas where generative AI can provide real value by saving time, standardizing workflows, and allowing human writers and reviewers to focus more on higher-order decision-making.” For Walsh, this progress is particularly noteworthy given the agency’s historical pace. “In a space where processes have historically moved slowly, seeing this kind of concrete progress is a strong signal that the agency is serious about deploying AI to improve the speed and cost-effectiveness of drug development,” he added.
The assembly line vision
Chase Feiger, MD, CEO and co-founder of Ostro, compares AI’s expanded role at the FDA to a sophisticated manufacturing process. “Picture an assembly line,” Feiger suggests, where different AI agents act as “individual machines…serving a critical, key, dedicated function.” AI agents, autonomous systems that can perceive their environment, make decisions, and take actions toward specific goals, are quickly becoming an operational reality, gaining ground in software engineering and, in some cases, in automating compliance checks in regulated industries. In a regulatory context, a hypothetical “research agent” could scan clinical literature, trial data, and regulatory history. An “operations agent” might perform structured data extraction and check compliance with submission standards, Feiger said. “Human in the loop” reviewer agents would ensure “final validation before acceptance.”
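No such pipeline exists in public code, but the division of labor Feiger describes maps naturally onto a staged pipeline in which each agent is a narrow function and a human gate controls final acceptance. A minimal, entirely hypothetical sketch (agent names and logic are placeholders, not any real FDA system):

```python
# Hypothetical sketch of Feiger's "assembly line" of AI agents: each stage has
# one dedicated job, and nothing is accepted without human sign-off.
from dataclasses import dataclass, field

@dataclass
class Submission:
    text: str
    findings: dict = field(default_factory=dict)
    accepted: bool = False

def research_agent(sub: Submission) -> Submission:
    # Stand-in for scanning clinical literature, trial data, and regulatory
    # history; a real system would call a retrieval-augmented LLM here.
    sub.findings["prior_art"] = ["related trial (placeholder reference)"]
    return sub

def operations_agent(sub: Submission) -> Submission:
    # Stand-in for structured data extraction and submission-standards checks.
    sub.findings["complete"] = len(sub.text.strip()) > 0
    return sub

def human_reviewer(sub: Submission) -> Submission:
    # The human-in-the-loop gate: final validation before acceptance.
    print("Findings for review:", sub.findings)
    sub.accepted = input("Approve? [y/N] ").strip().lower() == "y"
    return sub

def pipeline(sub: Submission) -> Submission:
    for stage in (research_agent, operations_agent, human_reviewer):
        sub = stage(sub)
    return sub

if __name__ == "__main__":
    result = pipeline(Submission(text="Example protocol summary..."))
    print("Accepted:", result.accepted)
```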
Yet building such an assembly line, or a regulatory genAI workflow, isn’t without its complexities. Elsayyad of Ostro highlighted the practical challenges, especially around data security with confidential information. “Would you want the FDA to be utilizing browser use to actually download files to make actions on your behalf…?” he asks. He emphasized that establishing the necessary “guardrails” for these AI systems “is going to take time.”
The human-in-the-loop imperative
Feiger emphasizes the critical importance of human oversight in AI-driven regulatory processes. “Having ‘human in the loop’ is going to be very important in verifying citations and ensuring information comes from reliable sources,” he said. Beyond simple validation, human reviewers must examine potential biases within cited sources. “Just because something is cited on a reliable source, what are the intrinsic biases associated with that information?” Feiger asks.
He advocates for robust quality controls. “I still think that it’s very important to have expert human review and even have things like confidence scoring thresholds in place for any types of AI-generated content, so that it’s properly able to flag and even potentially discard potential outputs that might fall below whatever that predetermined reliability threshold is,” Feiger said.
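Feiger doesn’t specify a mechanism, but the gating he describes is simple to express: score each generated output and route anything below a predetermined reliability threshold to expert human review instead of letting it pass automatically. A hypothetical sketch (threshold and scores invented):

```python
# Hypothetical sketch of confidence-score gating for AI-generated content:
# outputs below a predetermined reliability threshold are flagged for expert
# human review rather than passed through automatically.
from dataclasses import dataclass

RELIABILITY_THRESHOLD = 0.85  # hypothetical predetermined cutoff

@dataclass
class GeneratedOutput:
    text: str
    confidence: float  # e.g., a calibrated model score in [0, 1]

def triage(outputs: list[GeneratedOutput]) -> tuple[list, list]:
    auto_pass, needs_review = [], []
    for out in outputs:
        # Below-threshold outputs never ship without expert human review.
        if out.confidence >= RELIABILITY_THRESHOLD:
            auto_pass.append(out)
        else:
            needs_review.append(out)
    return auto_pass, needs_review

outputs = [
    GeneratedOutput("Adverse-event summary A", confidence=0.93),
    GeneratedOutput("Label-comparison note B", confidence=0.61),
]
passed, flagged = triage(outputs)
print(f"{len(passed)} auto-passed, {len(flagged)} flagged for human review")
```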
Early adoption signals
Generative AI is no longer a fringe experiment. McKinsey’s 2025 “State of AI” survey found 78% of companies were using AI in at least one business function, up from 72% in early 2024.
Walsh notes that “AI adoption in large organizations is rarely uniform, so it’s natural to expect some variability in how Elsa is used across the agency.” Key indicators of meaningful traction, he says, would include consistent use across multiple departments, ongoing feedback mechanisms for refinement, and collaboration with sponsors developing their own AI tools around the drug review process.
“Transparency will also matter here,” Walsh adds. “The FDA doesn’t typically comment so openly on internal tools, so their willingness to engage publicly on Elsa is a reassuring sign, which could help foster more widespread and thoughtful adoption across the agency.”
Beyond document processing
The FDA published its first draft guidance on AI in drug development in January 2025, establishing a risk-based framework for AI model credibility. Given the continued momentum of the agency’s AI expansion, Walsh views Elsa as “an important first step that could set the foundation for bigger changes ahead.” If successful, it could create the internal momentum and regulatory comfort needed to explore more advanced AI applications in clinical trial design, data review, and trial analysis.
“When applied responsibly, AI can help regulators, scientists, and clinicians navigate complex processes more efficiently, saving time without sacrificing scientific rigor,” Walsh notes. “That’s not just good for the agency, it’s a big win for patients, too.” This shift could foster more dialogue and innovation, ultimately creating a regulatory environment that encourages responsible AI use to accelerate clinical research and bring effective treatments to patients faster.