
[Image generated via Sora]
Not RPA 2.0
According to Kailash Swarna, a managing director at Accenture, AI agents represent a fundamental shift, distinct from prior automation efforts, not just “RPA 2.0.” While earlier technologies like robotic process automation (RPA) meant users “almost needed to hard-code those specific… automation capabilities,” the current wave is different. “What was missing [before] was the glue that helped human beings to be able to interact with the model using natural language,” Swarna explained. “That seamless way of working never existed before.” This “glue” of natural language understanding, he suggests, allows agents to be “much more adaptable… [and] therefore actually more meaningful,” poised to redefine work rather than just execute pre-programmed rote tasks.
The multi-billion dollar opportunity for biopharma

Kailash Swarna
The Accenture-Wharton study puts a hard number on the promise of AI agents: $180 billion to $240 billion in annual U.S. value. Swarna noted that a core aim of their research was “to really think about what would be the impact of these agents on the workforce.” The report projects this value will come from improving research productivity and reducing costs across operations: lower COGS, faster R&D cycles, and quicker launches, as agents take on the work behind the 55% of total workforce hours the report identifies as impacted (a figure based on an analysis of 300 day-to-day tasks mapped across 90 job roles). At that scale, adoption is less a nice-to-have than a balance-sheet mandate.
The report breaks down how the pieces fit together:
- Utility agents – the Lego bricks (about 40% of digital agents). “[The] base layer is a set of utility agents that do tasks, individual tasks that can be pulled together like Lego bricks,” he said.
- Orchestrator agent – the project manager (about 10%). This agent “understands what the complex task is” and delegates work. When something looks off, it “[escalates] when there’s a conflict or… an anomaly detected… to [a] human for final adjudication.”
- Super agents – the specialists (roughly 50%). They stitch multiple utility agents together for tougher jobs.
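The pyramid above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the report: the function names, the dose-checking task, and the escalation message are all invented to show the delegation-and-escalation pattern Swarna describes.

```python
# Utility agents: narrow, single-task "Lego bricks" (names are illustrative).
def extract_dose(record):
    return record.get("dose_mg")

def check_range(dose, lo=10, hi=500):
    return dose is not None and lo <= dose <= hi

# Super agent: stitches utility agents together for a bigger job.
def super_agent_review(record):
    dose = extract_dose(record)
    return {"dose": dose, "in_range": check_range(dose)}

# Orchestrator: delegates work and kicks anomalies up to a human.
def orchestrator(records, escalate):
    results = []
    for rec in records:
        outcome = super_agent_review(rec)
        if not outcome["in_range"]:            # conflict/anomaly detected
            outcome["decision"] = escalate(rec)  # human final adjudication
        results.append(outcome)
    return results

decisions = orchestrator(
    [{"dose_mg": 50}, {"dose_mg": 9000}],
    escalate=lambda rec: "flag for human review",  # stand-in for a person
)
```

The key design point is that the orchestrator never resolves the anomaly itself; the out-of-range record is routed to the human callback for final adjudication, mirroring the escalation path described above.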
Three immediate wins with AI agents
Data retrieval that doesn’t drain a day. “Information retrieval … is a monotonous, boring, error-prone task,” Swarna concedes. Agents, by contrast, “can do very well… pull data in a manner that is very reproducible, very efficient … [and] agents are never going to get tired.” A U.S. health insurer put that claim to work, tripling document throughput and cutting processing time 90%, with humans stepping in just 2.7% of the time.
Pattern crunching at discovery scale. Humans spot a few patterns; agents juggle thousands. “Humans are very good at recognizing some patterns, but we’re very bad at being able to do large-scale, multi-pattern optimizations in our head… That’s where agents can shine,” Swarna notes. Pairing tools such as an Assistant Agent or Analytics Agent with human intuition helps teams rank targets faster, design smarter trials, and trim dead ends before they hit the wet lab.
Lab robots that rewrite their own playbook. Old lab automation ran on fixed scripts. “We had to program those robots to carry out specific, discrete, predetermined tasks. The robots could not dynamically, on the fly, readjust,” Swarna says. With agents in the loop, “[now] the robots can be dynamically reconfigured with the use of the right kind of agents,” allowing equipment-handling systems and vision inspectors to tweak protocols mid-assay, shifting plates, volumes or imaging parameters without a coffee break or a code push. Precision goes up; re-runs go down.
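The shift from fixed scripts to mid-assay readjustment can be sketched as a simple feedback loop. Everything here is an assumption for illustration: the parameter name (`exposure_ms`), the quality score, and the tuning rule are invented stand-ins for whatever an agent-driven instrument would actually adjust.

```python
def run_plate(protocol):
    """Stand-in for an instrument run; returns a mock image-quality score.

    Pretends that longer exposure improves quality, capped at 1.0.
    """
    return min(1.0, protocol["exposure_ms"] / 100)

def adaptive_imaging(protocol, target=0.9, max_rounds=5):
    """Agent loop: re-tune exposure mid-assay until quality meets target.

    A fixed script would run every plate with the starting parameters;
    here the protocol is adjusted on the fly, with no code push.
    """
    score = run_plate(protocol)
    for _ in range(max_rounds):
        if score >= target:
            break
        protocol["exposure_ms"] += 20   # dynamic readjustment
        score = run_plate(protocol)
    return protocol, score

final, score = adaptive_imaging({"exposure_ms": 30})
```

Starting at 30 ms, the loop steps exposure upward until the mock quality score clears the target, which is the "rewrite their own playbook" behavior in miniature: the protocol that finishes the run is not the one that started it.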
The human + agent split
As AI agents handle more operational tasks, the human role transforms rather than diminishes. The system architecture explicitly includes human oversight: the orchestrator agent “[promotes conflicts or anomalies] up the pyramid to the human being who’s going to make the decision,” Swarna emphasizes. This is the evolving Human+ role, centered on strategic oversight and collaboration. The Accenture/Wharton report champions a “Human+ workforce,” advising organizations to “Match tasks to strengths” (Page 3, Individuals) and “Build future-ready organizations” with adaptive infrastructures to “reskill at scale and continuously rebalance human-agent collaboration.” The report also highlights focusing on how “AI can improve not only the effectiveness of people but also how individuals get enjoyment from their work and collaborate.”
Even as AI agents chew through 55% of routine work, they still bump edge cases to people. “If there’s a conflict or an anomaly, the orchestrator kicks it upstairs for final adjudication,” Swarna says. The study urges companies to sort tasks by comparative advantage. That is, let agents grind through classification and data checks; keep humans on strategy, ethics, and anything that smells weird. That split only works if leaders budget for retraining and keep org charts flexible enough to shuffle jobs as the tech improves.
Extra-agentic tech
Outside of agents, and not an explicit focus of the Accenture/Wharton report, other computational advances tied to physics-based modeling also promise significant shifts in biopharma. In April, FDA announced a plan to phase out the animal testing requirement for monoclonal antibodies and other drugs. Realizing that ambition may be something of a journey. “That translatability is still not at a place where it needs to be,” Swarna said. The core challenge often lies in the fundamental “physics-based modeling that goes behind it.” He points to specific complexities: “if you’re looking at ADME, PK/PD, we still simply don’t know how some of these things work.” Even where we understand the biology to some extent, we might not have “the fidelity in terms of the underlying descriptors in the math to be able to build an effective trend in that space just yet,” he said. Still, the goal of replacing animal testing is “laudable” and progress is steady. “The first step really is perhaps eliminating non-human primates.” To get there, digital-twin models must “demonstrate some degree of equivalency [between] the digital twin model versus… the mouse model,” then climb the ladder species by species.
Filed Under: machine learning and AI