The FDA is betting big on genAI to transform tedious workflows, setting a June 30 deadline for agency-wide deployment. While details of its plans are scarce at present, using genAI to reduce the FDA’s paperwork burden seems inevitable. That goal doesn’t qualify as “a high-order AI problem today,” said Vik Bajaj, founder and CEO of Foresite Labs, who envisions a low-risk, relatively easy scenario in which genAI finds widespread use in “the bulk of paperwork” agency-wide.
In any event, regulators wade through mountains of documents daily. The agency’s drug and biologics centers, CDER and CBER, processed nearly 468,000 filings last year, roughly 40,000 a month. A single new drug application (NDA) can top 100,000 pages. The agency itself notes: “Each year, CDER receives more than 300,000 submissions, amounting to millions of pieces of data.”
An idea with precedent

The goal of using AI to manage such paperwork burdens efficiently and accurately, with continued human oversight, is gaining ground across multiple industries. In fact, many enterprise organizations, from law firms to Wall Street shops, have implemented large language models with approaches like Retrieval-Augmented Generation (RAG) to, say, synthesize case law research, analyze contracts, or automate compliance monitoring.
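The RAG pattern named above can be sketched in a few lines. This is a toy illustration only: the bag-of-words “embedding,” the sample documents, and the stubbed generation step are all assumptions for demonstration, where real deployments use learned vector embeddings, a vector database, and a hosted LLM.

```python
"""Minimal sketch of Retrieval-Augmented Generation (RAG):
retrieve the documents most relevant to a query, then inject
them as grounding context into the prompt sent to an LLM."""
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a learned embedding: bag-of-words counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # In a real system this prompt would go to an LLM; here we stop
    # at prompt construction to show the retrieve-then-generate flow.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Form 483 lists inspection observations.",
    "An NDA dossier can exceed 100,000 pages.",
    "RAG grounds model output in retrieved passages.",
]
print(build_prompt("How long is an NDA dossier?", docs))
```

The point of the pattern is the last step: the model answers from retrieved source text rather than from its parametric memory, which is why enterprises favor it for compliance-sensitive document work.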
Early adopters in areas such as financial services and law have already demonstrated significant efficiency gains and market traction, from OpenAI Startup Fund–backed Harvey’s legal AI platform handling complex due diligence work, to Deutsche Bank using generative AI to process thousands of documents daily with 97% accuracy.¹
While private sector adoption of AI for such administrative tasks is accelerating, translating this momentum to regulatory bodies like the FDA involves navigating a more complex and traditionally cautious landscape. The FDA itself highlighted the success of a pilot, which, as FDA Commissioner Martin Makary indicated, showed promise in freeing its scientists from tedious work while reducing “the amount of non-productive busywork that has historically consumed much of the review process.”
In a certain sense, the regulator’s efforts are “piggybacking on global needs,” Bajaj said, as pharmaceutical companies and other regulated entities are already deploying similar AI tools for document processing and administrative tasks.
The regulatory staffing and culture angle
But there could be something of an adjustment process. Recent U.S. Government Accountability Office (GAO) audits have highlighted internal technological hurdles. For instance, a September 2024 GAO report (GAO-24-106638) found that HHS, the FDA’s parent agency, lacked a comprehensive inventory of its pandemic IT systems, failed to eliminate duplicative systems, and had 15 systems handling personally identifiable information without required privacy impact assessments.
In addition, recent GAO audits of FDA (GAO-25-106775 and GAO-24-107359) have noted that the agency has “struggled to retain staff,” and thus has had trouble overseeing foreign drug manufacturing. The agency is also facing newer staffing questions. Earlier proposed cuts would have terminated about 3,500 employees; those plans appear to have been at least partly reversed, but the agency has seen a string of early retirements and leadership departures. Before the recent reductions-in-force, the agency had about 19,700 employees.
Eventually, for rote tasks like inspecting records at the FDA, widespread AI deployment “could reduce the number of humans required,” Bajaj said. “Human judgment would still be enlisted for oversight,” he noted.
“A broad rollout” of genAI technology in a matter of months is “possible in a nimble private industry,” Bajaj said. The timeline for a regulator, however, faces its own short-term hurdles: larger, more established organizations, whether pharmaceutical companies or regulatory agencies, have complex organizational dynamics and processes to navigate.
To oversee the transition, the agency has tapped Jeremy Walsh, its newly appointed Chief AI Officer, and Sridhar Mantha, director of the Office of Strategic Programs.
Walsh and Mantha will navigate cultural as well as technological themes in their work. One of the largest barriers to adoption may in fact be organizational rather than technical. Rolling out genAI across the agency is more of “a sociological and implementation task” than a technical one, Bajaj said.
Despite these organizational hurdles, Bajaj believes comprehensive genAI adoption is inevitable. “It’s going to happen naturally, up to the limits of organizational dynamics to absorb new ways of working, where government historically is not the most nimble. But those transitions do happen.” The opportunities are abundant: “I can think of 20 low-hanging fruit applications not predicated on new guidance; you’re just satisfying old guidance more efficiently.”
‘Blue sky’ applications
The FDA’s genAI ambitions extend beyond administrative efficiency. On April 10, 2025, nearly a month before its genAI announcement, the agency released a plan to phase out animal testing requirements for monoclonal antibodies and other drugs, explicitly citing “AI-based computational models of toxicity” as a replacement method. Commissioner Martin Makary called it a “paradigm shift in drug evaluation.”
Bajaj sees this timing as intentional. First the messaging around moving away from animal testing, and then the news on the genAI pilot with plans to scale it across the agency. “I think there’s a sequence of topics: animal testing, then generative AI, and then more blue-sky stuff. It’s all part of the same idea,” he said. “People have long asked how the regulatory apparatus can adapt. You’ll see that sequence emerge.”
The April announcement specifically mentioned using “computer modeling and artificial intelligence to predict a drug’s behavior” and “software models [that] could simulate how a monoclonal antibody distributes through the human body and reliably predict side effects.” This aligns with what Bajaj described as the broader vision: “The idea is to collapse drug development time and improve model certainty using AI trained on human phenotype directly, rather than imperfect model systems.”
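The kind of “software model” the announcement alludes to can be as simple, at its core, as a compartmental pharmacokinetic model of how drug concentration decays over time. The sketch below is a one-compartment model with purely illustrative parameter values (real antibody models are multi-compartment and account for target-mediated clearance); the function name and numbers are assumptions for demonstration, not anything the FDA has specified.

```python
"""One-compartment pharmacokinetic sketch: concentration after an
IV bolus dose follows C(t) = (dose / V) * exp(-k * t), where the
elimination rate k is derived from the drug's half-life."""
import math

def concentration(dose_mg: float, volume_l: float,
                  half_life_h: float, t_h: float) -> float:
    # k = ln(2) / half-life, so C halves once per half-life.
    k = math.log(2) / half_life_h
    return (dose_mg / volume_l) * math.exp(-k * t_h)

# Illustrative values: 100 mg dose, 5 L distribution volume, and a
# 21-day half-life (a typical order of magnitude for IgG antibodies).
for day in (0, 21, 42):
    c = concentration(100, 5, half_life_h=21 * 24, t_h=day * 24)
    print(f"day {day:2d}: {c:.1f} mg/L")
```

Simulating distribution and clearance this way is the first rung of what the announcement describes; predicting side effects reliably is the far harder, AI-dependent step.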
Companies like Xaira, where Bajaj serves as director and interim president, “are producing foundation models of biology that may have these properties, but they need to be demonstrated as reliable proxies for human biology.”
The goals of using AI to enable “early disease interception” and keep “people healthy” are examples of blue sky opportunities that Bajaj highlighted. “Understanding the molecular and genetic drivers of disease, individual risk, and intervening early—AI will be a core component of making that available to the masses,” he said. Achieving such aims would represent “a fundamental transformation.”
Footnote: ¹ Harvey AI has a $5 billion valuation. Other prominent general genAI implementation examples include Thomson Reuters CoCounsel’s AI-powered document analysis tools, AlphaSense’s $4+ billion market intelligence platform, Stanford’s RAG systems connecting to UpToDate for clinical decision support, Mayo Clinic Platform’s exploration of RAG for medical guideline repositories, and enterprise tools from Atlassian, Glean and AWS that provide RAG-backed documentation search across internal runbooks and API specs.
Filed Under: Data science, Regulatory affairs