When F. Hoffmann-La Roche AG, Basel, Switzerland, received approval from European authorities last year for Pegasys, its combination drug for the treatment of hepatitis C, the company also won approval in a subpopulation of patients for a dose its researchers had never studied. “We were able to simulate this patient population based on a model, and the health authority agreed with our approach and we got approval for that, which is quite a big value added,” says Karin Jorga, PhD, global head of clinical pharmacology for the company.
When Pegasys [peginterferon alfa-2a (40KD)] was being considered by the US Food and Drug Administration (FDA), the company did “extensive” computer modeling to determine the optimum dose for different patient populations, taking into account a variety of factors, including the genotype of the virus and the weight of the patient, but the agency still had questions. “We could basically overcome these concerns from the FDA and convince them with these modeling and simulation techniques that the proposed dosing regimen was approvable, and in the end, we got that approval,” she says.
Wanted: Mathematician/Biologist

Among the factors slowing the adoption of in silico tools in the pharmaceutical industry is a shortage of researchers with the appropriate skills, says Daniel Weiner, PhD, vice president of the software products business unit at Pharsight Corp., Mountain View, Calif. “There are a limited number of trained scientists that can effectively do this kind of work, and that’s been a rate-limiting step.”
Weiner says there are two aspects to the problem. One is training people to use the tools; the other is mentoring them so they know when to apply those tools. “You can send somebody to training classes and they know all the buttons to push on the tools, but knowing how to apply that tool in the context of an efficient drug development program is another matter.”
Finding people with the appropriate skill set, those who are as comfortable with mathematics as they are with pharmacology, is difficult, says Karin Jorga, PhD, global head of clinical pharmacology for Hoffmann-La Roche, Basel, Switzerland. “This combination is very hard to get, so we sometimes, for example, train chemical engineers in this. But they have a complete lack of understanding of biology and variability and these things are very strange to them.” It is equally challenging to get a biologist or pharmacologist comfortable working with differential equations and statistics, she adds. “It’s a very rare breed. I have one guy in my group who is a physician and a mathematician, but they are very hard to get.”
Drug giant Pfizer has been able to recruit a number of scientists with the specialized skills it needs, but in other cases, it hires recently minted PhDs and gives them specialized training in the use of modeling and simulation, says Richard Lalonde, PharmD, executive director, clinical pharmacokinetics and pharmacodynamics, Pfizer Global Research and Development, Ann Arbor, Mich. He adds that Pfizer has provided funding to academic centers in an effort to increase the number of scientists with these skills. Several universities have such programs, including the University of California at San Francisco, the University of Uppsala in Sweden, the University of Leiden in the Netherlands, the University of Florida, and the State University of New York at Buffalo. “This is basically more or less the big schools. Then we have to share these people across the pharmaceutical industry, so we’re head hunting,” Jorga says.
Jorga’s anecdotes underscore the growing use of in silico tools at both the preclinical and clinical ends of the drug discovery, development, and delivery process. While other industries have used in silico design for decades, pharmaceutical and biotech companies have only begun to embrace the technology in earnest over the last several years, says Richard Lalonde, PharmD, executive director, department of clinical pharmacokinetics and pharmacodynamics, Pfizer Global Research and Development, Ann Arbor, Mich. “Modeling and simulation are used extensively in other businesses like the aerospace industry, the automotive industry, meteorology. People don’t build planes and do it by trial and error to see which one is going to fly. You learn a lot from your previous experience and you know that the first time that plane takes off, it’s going to fly. That is done through computer models, and we are trying to apply this to the biological world of drug development.”
Cure for Industry Malaise?
Modeling and simulation could play a key role in alleviating the industry malaise outlined in an FDA report released last year, Innovation and Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products. It noted that while spending on biomedical research has increased greatly over the last decade, the submission of new molecular entities has remained flat and the number of biologics license applications has plummeted. The report also pointed out that a drug entering phase I trials in 2000 was no more likely to reach the market than one entering phase I trials 15 years earlier.
Outdated technologies may be one reason for those discouraging numbers, the report states: “Often, developers are forced to rely on the tools of the last century to evaluate this century’s advances.” But the agency believes there are steps industry can take. “As biomedical knowledge increases and bioinformatics capability likewise grows,” the report states, “there is hope that greater predictive power may be obtained from in silico (computer modeling) analyses such as predictive modeling.” The report, citing data from PricewaterhouseCoopers, states that “extensive use of in silico technologies could reduce the overall cost of drug development by as much as 50%.”
Donald Stanski, MD, scientific advisor to the director of the FDA’s Center for Drug Evaluation and Research, says the agency sees model-based drug development as a critical path concept that can help companies increase their efficiency and help the FDA improve its review quality. “We see modeling and quantitation as fundamental for the future,” he says. “Certain companies are making major investments and finding that it is changing how they develop drugs. Other companies are sitting on the sidelines waiting. When it’s clear that it works, they’ll use it. Instead of being innovators, they want to be fast followers.”
One of the early adopters was Roche. The field began taking shape nearly 20 years ago with the emergence of pharmacokinetic modeling, Jorga says, and was followed by pharmacokinetic/pharmacodynamic modeling to determine mechanism of action.
In the 1990s, new software allowed researchers to use more sophisticated techniques that enabled population modeling, and over the last few years, simulation tools began to come on the market. “This is where it becomes interesting, because based on the predictions that you’re trying to make, you’re trying to simulate the future based on results from the past. . . . These allow us, in theory, to simulate the drug development process.”
Jorga says that while Roche has been using various aspects of in silico technology for the last 10 to 12 years, it wasn’t until 2002 that the company fully invested in it and made it a routine part of the drug discovery process. The company built a strong network across its sites and invested in teaching scientists the specialized skills needed to do the work. In silico modeling is playing an increasing role at Roche, with the goal of using it routinely on all projects. At present, however, the company only has the resources to apply it to 50% of its projects, particularly those in phase II and III trials.
“We have a very strong network of modeling experts in the field, so the preclinical people start with their in silico predictions very early, and then at about nine months before entry into humans, the clinical group takes over, and they carry on the work. We’re trying to split the workload between the different functions so that we optimize the support to the most important projects,” Jorga says.
Compounds are prescreened, synthesized, and tested to predict how they will be absorbed and distributed in humans. The first dose is selected based on these predictions, an experiment is done in human volunteers to see how the drug behaves, and the new information is collected. “We update our in silico model, and from this we make a prediction of how this drug would behave in patients. . . . You learn something from an experiment, then update your model and make a prediction for the next phase,” says Jorga. “It’s an iterative process.”
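The learn-update-predict loop Jorga describes can be sketched in a few lines. The one-compartment model, parameter values, and noise level below are illustrative assumptions for this article, not Roche's actual models:

```python
import math
import random

def predict_conc(dose_mg, v_liters, k_per_hr, t_hr):
    """One-compartment IV bolus model: C(t) = (Dose/V) * exp(-k*t)."""
    return (dose_mg / v_liters) * math.exp(-k_per_hr * t_hr)

def fit_k(times, concs, dose_mg, v_liters):
    """Estimate the elimination rate constant k by log-linear regression:
    ln C = ln(Dose/V) - k*t, so k is the negative of the fitted slope."""
    ys = [math.log(c * v_liters / dose_mg) for c in concs]
    n = len(times)
    mean_x = sum(times) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(times, ys)) / \
            sum((x - mean_x) ** 2 for x in times)
    return -slope

random.seed(1)
DOSE, VOLUME = 100.0, 42.0  # assumed dose (mg) and volume of distribution (L)
K_TRUE = 0.173              # "true" elimination rate, unknown a priori
K_PRIOR = 0.10              # hypothetical preclinical in silico prediction

# Step 1: predict first-in-human concentrations from the prior model.
times = [1.0, 2.0, 4.0, 8.0]
predicted = [predict_conc(DOSE, VOLUME, K_PRIOR, t) for t in times]

# Step 2: "run the experiment" -- observe noisy concentrations in volunteers.
observed = [predict_conc(DOSE, VOLUME, K_TRUE, t) * random.uniform(0.9, 1.1)
            for t in times]

# Step 3: update the model with the new data; the updated estimate feeds
# the prediction for the next phase, and the cycle repeats.
k_updated = fit_k(times, observed, DOSE, VOLUME)
print(f"prior k = {K_PRIOR}, updated k = {k_updated:.3f} (true {K_TRUE})")
```

Each pass through the loop replaces the prior parameter estimate with one informed by the newest experiment, which is the iterative process Jorga describes.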
In Silico Modeling Guides Pharma Trials

When Richard Ho, MD, PhD, wants to understand a disease, he gets quantitative—really quantitative. As head of medical informatics at Johnson & Johnson’s La Jolla, Calif., site, Ho employs large-scale physiological models from Entelos in Foster City, Calif., to validate targets, select biomarkers, and optimize clinical trials for type II diabetes, obesity, and anemia. Disease modeling is still relatively rare in the pharmaceutical industry, and it is hard to convince people to use it, Ho says, because engineering is not one of the industry’s normal areas of expertise.

“It is an interesting approach, and we’ve had some great insights and value come out of it, but it’s very difficult to drive forward because [disease modeling] is different from what pharma has done.”

The models start from plasma levels of glucose and insulin (or hemoglobin, in the case of anemia) and calculate metabolic rates and the resulting ebb and flow of fatty acid levels and glycogen stores during meals and fasting to arrive at an understanding of how changes at any point in the system affect the body. The models have validated a number of targets and suggested useful in-licensing opportunities, Ho says. Models have also predicted cases in which extrapolating from an animal gene knockout model to humans might not work, and the researchers have experiments to test those predictions. For clinical trials, the models can predict the speed and size of responses, sometimes allowing researchers to reduce the size of a trial. In an early phase I trial for a first-in-class diabetes drug, Ho says modeling predicted that side effects would show up only at very high doses, which allowed researchers to cut the trial size by two-thirds.

In the 1990s, the main tool Roche used was NONMEM (nonlinear mixed effects modeling), a software package first developed at the University of California at San Francisco in 1979. The primary architects of NONMEM are Stuart Beal, PhD, and Lewis Sheiner, MD; Sheiner is considered by many to be a pioneer in developing mathematical and statistical models to describe drug response.
NONMEM is a regression program that can be used to fit many different types of data and to simulate data. It includes a package of subroutines that can compute predictions for population pharmacokinetic data. The software has been licensed to GloboMax LLC, Hanover, Md., for distribution and further development. “This is the tool which brought us a big step forward in terms of population analysis,” Jorga says.
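The core idea behind NONMEM-style population analysis is that each subject's parameters scatter randomly around a typical population value. A toy simulation can illustrate what such a model describes; the parameter values and log-normal variability below are invented for illustration, and actually fitting such a model from sparse data is what NONMEM itself does:

```python
import math
import random

random.seed(7)

# Population ("fixed effect") parameters -- illustrative values only.
CL_POP = 5.0      # typical clearance, L/hr
V_POP = 50.0      # typical volume of distribution, L
OMEGA_CL = 0.3    # SD of between-subject variability on log-clearance
DOSE = 100.0      # mg, IV bolus

def simulate_subject():
    """Draw one subject's clearance from a log-normal distribution
    (the 'random effect'), then compute that subject's concentrations."""
    cl_i = CL_POP * math.exp(random.gauss(0.0, OMEGA_CL))
    k_i = cl_i / V_POP
    return [(t, (DOSE / V_POP) * math.exp(-k_i * t)) for t in (1, 4, 8, 12)]

# Simulate a small population and summarize the spread at t = 8 hr,
# the kind of between-subject variability a population model quantifies.
population = [simulate_subject() for _ in range(500)]
c8 = sorted(profile[2][1] for profile in population)
print(f"median C(8h) = {c8[250]:.2f} mg/L, "
      f"90% interval = [{c8[25]:.2f}, {c8[475]:.2f}] mg/L")
```

The width of that interval is exactly the information dose regimens for subpopulations depend on, which is why population analysis was such a step forward.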
A newer tool that they use is the Trial Simulator by Pharsight Corp., Mountain View, Calif. This software allows users to simultaneously model the effects of pharmacokinetic and pharmacodynamic variables, including genetic variation, so that they can investigate the sensitivity of a clinical trial design to various sources of uncertainty. Models can be modified to compare various scenarios and to explore alternative designs that might improve clinical trials, says Daniel Weiner, PhD, vice president of the company’s software business unit.
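The kind of question a trial simulator answers, namely how sensitive a design is to effect size, variability, and dropout, can be sketched with a small Monte Carlo simulation. The effect size, standard deviation, dropout rate, and simple z-test below are invented for illustration and are not Pharsight's methodology:

```python
import random
import statistics

random.seed(42)

def simulate_trial(n_per_arm, true_effect, sd, dropout_rate):
    """Simulate one two-arm trial; return True if a simple two-sample
    z-test detects the treatment effect at roughly p < 0.05."""
    def arm(mean):
        # Each enrolled subject independently drops out with the given rate.
        return [random.gauss(mean, sd) for _ in range(n_per_arm)
                if random.random() > dropout_rate]
    placebo, active = arm(0.0), arm(true_effect)
    if len(placebo) < 2 or len(active) < 2:
        return False
    diff = statistics.mean(active) - statistics.mean(placebo)
    se = (statistics.variance(active) / len(active) +
          statistics.variance(placebo) / len(placebo)) ** 0.5
    return abs(diff / se) > 1.96

def power(n_per_arm, **kwargs):
    """Estimate power as the fraction of simulated trials that succeed."""
    trials = 2000
    return sum(simulate_trial(n_per_arm, **kwargs) for _ in range(trials)) / trials

# Compare two candidate designs under an assumed effect size and 15% dropout.
results = {}
for n in (40, 80):
    results[n] = power(n, true_effect=5.0, sd=10.0, dropout_rate=0.15)
    print(f"n = {n} per arm -> estimated power {results[n]:.2f}")
```

Rerunning the comparison under different assumed effect sizes or dropout rates shows how robust each design is, which is the sensitivity analysis Weiner describes.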
Another technology provider is Simulations Plus Inc., Lancaster, Calif., a developer of ADME/Tox (absorption, distribution, metabolism, excretion, and toxicity) neural net and simulation software. ADMET Predictor is a structure-property program that predicts properties important for oral absorption, including pKa, as well as several pharmacokinetic properties and various aspects of toxicity. It can be used for high-throughput in silico screening of large compound libraries or to estimate ADMET properties for individual compounds. While applying these tools to pharmaceutical discovery may seem simple, it can actually be very difficult, says Walt Woltosz, CEO of Simulations Plus. “When you hear the expression, ‘It’s not rocket science,’ it’s the wrong expression,” says Woltosz, who came to pharma from the aerospace industry. “Rocket science is easy. Pharmaceutical science is terribly difficult, because you have biology thrown in. Biology is inconsistent.”
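ADMET Predictor's neural-net models are proprietary, but the general shape of structure-property prescreening can be shown with a far simpler, well-known heuristic, Lipinski's rule of five, which flags compounds likely to have poor oral absorption. The compound names and descriptor values below are made up for illustration:

```python
def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
    """Count rule-of-five violations for one compound: molecular weight
    over 500, logP over 5, more than 5 H-bond donors, more than 10
    H-bond acceptors."""
    return sum([
        mol_weight > 500,
        logp > 5,
        h_donors > 5,
        h_acceptors > 10,
    ])

# Hypothetical compound library: (name, MW, logP, H-bond donors, acceptors).
library = [
    ("cmpd-001", 342.4, 2.1, 2, 5),
    ("cmpd-002", 612.8, 5.7, 4, 11),
    ("cmpd-003", 488.5, 4.9, 1, 8),
]

# In silico prescreen: keep compounds with at most one violation.
passing = [name for name, *props in library
           if lipinski_violations(*props) <= 1]
print(passing)  # cmpd-002 has three violations and is filtered out
```

A real ADMET screen predicts many more properties from structure, but the workflow is the same: compute predicted properties for every library member and filter before anything is synthesized.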
Entering Comfort Zone
Like Hoffmann-La Roche, Pfizer was an enthusiastic early adopter of in silico technology, says Lalonde. “We have been working on promoting this approach to drug development for several years now, and what we’re seeing is almost a drastic increase in the use of these types of approaches and how comfortable people feel with making decisions based on them.” Lalonde estimates that his group has about 20 people who do in silico work, although that is not their sole responsibility, and adds that there is a joint effort between his team and the biostatistics group at the Ann Arbor site. “We’ve pooled our resources to try to do this better.”
The use of modeling has paid off for the company over the last few years. One example, Lalonde says, is Neurontin (gabapentin), which has been approved for a variety of neuropathic pain conditions, including post-herpetic neuralgia. “That was about three years ago, and there was a statement put on the label by the FDA saying that pharmacokinetic and pharmacodynamic modeling provided confirmatory evidence of efficacy . . . That was, we think, precedent setting at the time because it impacted the regulatory decision making.” Unfortunately, there is more demand for these techniques than there are skilled people to deliver them, so they are used in only about half the clinical studies done at Pfizer, Lalonde says.
Although companies have had success with in silico tools at the clinical end of the spectrum, their effect on preclinical research at Roche has not been fully realized, says Navita Mallalieu, PhD, senior principal scientist in discovery pharmacology at the company’s Nutley, N.J., site. “There have been organizational changes in support of modeling and simulation, but I feel that it’s still a pretty young department that has not yet spread throughout the preclinical organization.”
Mallalieu says doing simulations preclinically is more difficult than doing them clinically because clinical data are far superior and researchers have a tighter focus. “You’re talking about predicting the behavior of one molecule, where what we’re trying to do is predict the behavior of a hundred molecules . . . Our hurdle is a lot higher because the input, the quality of the data, the quantity of the data, just exponentially decreases.” She says modeling and simulation can have more of an impact in the later preclinical stages, where the quality and quantity of data improve.
The group in Nutley comprises Mallalieu and six other pharmacokineticists who perform preclinical PK and also provide modeling and simulation support. They have several postdocs fully dedicated to modeling and simulation. Early preclinical work focuses on using a combination of in vitro and in silico measurements that feed into commercially available and in-house modeling software before the models are validated with in-house and marketed compounds. If the validation predictions meet preset criteria, the group will then prospectively predict how the new compound is going to behave. “We’re still new at this, so I don’t believe we have made a significant impact on the project teams in terms of the selection criteria, which tend to be driven largely by activity.”
Improving Preclinical Prediction
Mallalieu’s team has been working on ways to improve the use of in silico modeling in preclinical research, including identifying assays with input parameters they deem “vital” to the quality of prediction, assays for which they hope to secure increased resources. She also wants to work on using in silico modeling to provide an overview. “We’re at a point where we get way too much information to comprehend across a variety of compounds, so if we do anything in preclinical development, we should be able to find a way to use modeling, because it’s the only way to tie it all together . . . It’s just so hard to look at 30 different data points for one compound and then do that for 100 compounds and somehow weigh one higher than the other.”
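One common way to "tie it all together," as Mallalieu puts it, is a weighted desirability score that collapses many measured properties into a single rank. The property names, ranges, and weights below are invented for illustration and are not Roche's actual criteria:

```python
def desirability(value, low, high, higher_is_better=True):
    """Linearly map a raw property value onto a 0-1 desirability scale,
    clamping values outside the stated range."""
    frac = max(0.0, min(1.0, (value - low) / (high - low)))
    return frac if higher_is_better else 1.0 - frac

# Hypothetical relative importance of each property.
WEIGHTS = {"potency": 0.5, "solubility": 0.3, "clearance": 0.2}

def score(compound):
    """Combine normalized properties into one weighted score."""
    return (WEIGHTS["potency"] * desirability(compound["potency"], 0, 100) +
            WEIGHTS["solubility"] * desirability(compound["solubility"], 0, 200) +
            # Lower predicted clearance is better in this toy scheme.
            WEIGHTS["clearance"] * desirability(compound["clearance"], 0, 50,
                                                higher_is_better=False))

compounds = [
    {"name": "A", "potency": 90, "solubility": 20, "clearance": 45},
    {"name": "B", "potency": 60, "solubility": 150, "clearance": 10},
]
ranked = sorted(compounds, key=score, reverse=True)
print([c["name"] for c in ranked])  # B's balanced profile outranks A's raw potency
```

The point of such a roll-up is exactly the one Mallalieu makes: a team cannot eyeball 30 data points across 100 compounds, but it can rank 100 scores, provided the weights reflect what the project actually values.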
Although the use of in silico modeling is becoming more common, a number of factors are slowing its adoption. The most important may be a lack of researchers trained in the skills particular to the field (see “Wanted: Mathematician/Biologist”). Another factor that needs to be addressed is software and hardware, says Jorga. “One of the limitations in this technique is always computer power and software, so you’re only as good as the software and the computer power that you have at the given time.”
In silico modeling will play a role in the future of pharmaceutical discovery and development, but the extent of that role remains to be seen. “At this point [it won’t] fizzle out,” says Mallalieu, “but I wish it spread faster than it has, and I think the reason that it hasn’t is that it hasn’t caught on. It’s a vicious cycle. You have to prove yourself to grow, but you need a certain critical mass in order to prove yourself.”
Pfizer’s Lalonde is optimistic. “The ones that can successfully implement this will probably be swallowing up other companies that are not so successful, because they will keep doing it the old-fashioned way and driving up the cost to astronomical levels, costs that will be very hard to justify in the marketplace. . . . All successful companies will have to do this routinely because it’s just too expensive to do it by trial and error, the way it’s often been done in the past.”