Drug failures due to toxicity run as high as 40%, making toxicity the leading cause of preclinical attrition. Researchers cite 10 different ways to bring those numbers down.
Patrick McGee
Senior Editor
After its approval in 1999, Vioxx (rofecoxib) rapidly became a blockbuster. Merck & Co. Inc., Whitehouse Station, N.J., estimates that 105 million prescriptions were written for the drug in the United States through August 2004. In 2003, Vioxx, a nonsteroidal anti-inflammatory COX-2 inhibitor, earned the company $2.5 billion. But that all ended last September, when Merck announced it would withdraw Vioxx after a study revealed that people taking the drug for more than 18 months had double the risk of cardiovascular events compared with placebo. A similar fate befell Bextra (valdecoxib). This April, Pfizer pulled its COX-2 inhibitor after reports of increased risk of adverse cardiovascular events in short-term coronary artery bypass surgery trials and of potentially life-threatening skin reactions, including deaths.
Seven of the 303 new molecular entities approved by the US Food and Drug Administration (FDA) between January 1994 and April 2004 were withdrawn from the market due to safety concerns. While the number is small, constituting only 2.3% of approvals, the accompanying harm to patients and the billions spent developing and marketing the drugs loom large. Despite attorneys working furiously to convince juries that such drugs were developed and approved too hastily, all of these drugs went through the FDA’s laborious approval process.
Those facts underscore the reality that, despite the general public’s perceptions, the benefits of drugs are not free of risk, however minimal. “It’s estimated that 100,000 people die every year in hospitals from drug side effects, and it’s estimated that there are billions of dollars of cost from drug side effects in this country. Most of those are from known side effects of the drugs. So clearly, although drugs have really advanced health and saved millions of lives, there is a price that we pay for that right now. The goal is to minimize that price,” says Janet Woodcock, MD, acting deputy commissioner for operations at the FDA.
Researchers in the pharmaceutical and biotech industry have been developing tools over the years to maximize the efficacy of drugs while minimizing toxicity, and advances have been made. A decade ago, the number of drugs failing preclinically due to poor pharmacokinetics was upwards of 40%, but improved in vitro and animal models have reduced that rate to about 10%. Failures due to toxicology, however, are still in the 30% to 40% range, making toxicology the number one reason for preclinical attrition. That disparity is likely due to outdated tools, says “Innovation and Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products,” a white paper published last year by the FDA: “Despite some efforts to develop better methods, most of the tools used for toxicology and human safety testing are decades old. Although traditional animal toxicology has a good track record for ensuring the safety of clinical trial volunteers, it is laborious, time-consuming, requires large quantities of product, and may fail to predict the specific safety problem that ultimately halts development.” The white paper noted that one pharmaceutical company estimated that clinical failures based on liver toxicity cost it more than $2 billion over the last decade.
In an effort to improve its performance, the Bristol-Myers Squibb Pharmaceutical Research Institute, Princeton, N.J., formed a discovery toxicology group that analyzed approximately 100 development compounds that failed during a 12-year period. The analysis helped the company determine which technologies to use early in discovery and development to reduce attrition, and it concluded that a combination of tools would be most effective. Many companies have adopted newer in vivo, in vitro, and in silico tools, while others are trying to harness the potential of newer fields such as pharmacogenomics and toxicogenomics.
But despite the promise of these newer tools, the goal is not necessarily to provide definitive answers in early preclinical stages, says Eric Blomme, DVM, PhD, project leader in cellular and molecular toxicology, Abbott Laboratories, Abbott Park, Ill. “The definitive answer is really at the end of the clinical trials. What we want to do here early on is to proceed to a rank-ordering exercise and select the candidates with an optimal toxicology profile to move into a preclinical safety study.” What follows is a snapshot of 10 approaches that pharmaceutical companies use to minimize drug failures due to safety and toxicity concerns.
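At its simplest, the rank-ordering exercise Blomme describes can be expressed as a weighted composite score across early safety endpoints. The sketch below is a minimal illustration in Python; the endpoints, weights, saturation limits, and compound names are hypothetical assumptions, not Abbott’s actual scheme.

```python
# Illustrative rank-ordering of candidates by early toxicology endpoints.
# Endpoints, weights, and compounds are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    herg_ic50_um: float        # higher is better (less hERG liability)
    hepatocyte_tc50_um: float  # higher is better (less cytotoxicity)
    ames_positive: bool        # genotoxicity flag

def tox_score(c: Candidate) -> float:
    """Composite score: larger means a cleaner early safety profile."""
    score = 0.4 * min(c.herg_ic50_um / 30.0, 1.0)           # saturate at 30 uM
    score += 0.4 * min(c.hepatocyte_tc50_um / 100.0, 1.0)   # saturate at 100 uM
    score -= 0.5 if c.ames_positive else 0.0                # heavy genotox penalty
    return score

candidates = [
    Candidate("cmpd-A", herg_ic50_um=2.0, hepatocyte_tc50_um=150.0, ames_positive=False),
    Candidate("cmpd-B", herg_ic50_um=25.0, hepatocyte_tc50_um=40.0, ames_positive=False),
    Candidate("cmpd-C", herg_ic50_um=50.0, hepatocyte_tc50_um=200.0, ames_positive=True),
]

for c in sorted(candidates, key=tox_score, reverse=True):
    print(f"{c.name}: score={tox_score(c):.2f}")
```

The point of such a score is not a definitive safety call but a consistent way to order candidates before committing them to a preclinical safety study.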
1) Intelligent chemistry
When it was founded 10 years ago, Millennium Pharmaceuticals Inc., Cambridge, Mass., was able to create a research and development model free from industry preconceptions, says Peter Smith, PhD, senior vice president of drug discovery and preclinical development. “Before we take a drug into development, we really do a much better job of what I call kicking the tires on the molecule.” Chemists and biologists work together on molecules, defining the problems inherent in them, going back into the lab, designing new molecules, and testing them again. “It’s an iterative process, with the goal being not so much quantity but quality. By the time we have a molecule go into development, the likelihood of that molecule succeeding is going to be higher.”
A key to that approach is what Millennium has dubbed DABS, or discovery assays by stage (see charts above and below). At every stage of discovery, from hit-to-lead through absorption, distribution, metabolism, elimination, and toxicity (ADME-Tox), researchers use a set series of assays to help decide whether to move molecules to the next stage. In the hit-to-lead stage, for example, they perform early animal pharmacokinetics, screen for hERG, run early genotoxicity assays, and supplement discovery pharmacology by looking at blood levels in their animal models, Smith says. “Getting there faster with an accurate read on whether the molecule is going to work is going to be a huge benefit for us, because it will tell us whether or not we need to accelerate development, spend more money and really push the program, or whether we need to kill the program.”
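To make the stage-gating idea concrete, the sketch below applies simple pass/fail criteria of the kind a DABS-like scheme might use at the hit-to-lead stage. The assay names, thresholds, and gating rules are hypothetical illustrations, not Millennium’s actual criteria.

```python
# Hypothetical hit-to-lead stage gate: a molecule advances only if all
# early assay criteria are met. Thresholds are illustrative, not real cutoffs.

def passes_hit_to_lead(assays: dict) -> tuple[bool, list[str]]:
    """Return (advance?, list of flagged liabilities) for one molecule."""
    flags = []
    if assays["herg_ic50_um"] < 10.0:               # hERG inhibition too potent
        flags.append("hERG liability")
    if assays["ames_positive"]:                     # bacterial mutagenicity signal
        flags.append("genotoxicity signal")
    if assays["oral_bioavailability_pct"] < 20.0:   # early rat PK too poor
        flags.append("poor oral exposure")
    return (len(flags) == 0, flags)

molecule = {
    "herg_ic50_um": 35.0,
    "ames_positive": False,
    "oral_bioavailability_pct": 42.0,
}
advance, flags = passes_hit_to_lead(molecule)
print("advance to lead optimization" if advance else f"hold: {', '.join(flags)}")
```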
2) Early in vitro testing
Smith says Millennium screens molecules in vitro at an early stage to avoid taking weak candidates into animal studies. Researchers perform studies in isolated hepatocytes and look at expression profiles to build a database. “When we compare that expression profile with known bad actors or known bad compounds, we can be pretty sure we’re not going to get a similar expression profile. We can therefore determine whether we would or would not take one of those molecules forward.” But while in vitro techniques have evolved more rapidly than in vivo techniques, they are still often of limited predictive use because they cannot accurately mimic the complex environments that drug candidates encounter in a living organism.
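Comparing a candidate’s hepatocyte expression profile against profiles of known bad actors is essentially a similarity search. The sketch below flags a compound when its profile correlates strongly with any reference toxicant; it assumes expression vectors over a common gene panel and uses synthetic data, and it is not Millennium’s actual pipeline.

```python
# Minimal similarity check of a candidate's expression profile against
# reference profiles of known hepatotoxicants (hypothetical, synthetic data).
import numpy as np

def flag_bad_actor(candidate: np.ndarray,
                   references: dict[str, np.ndarray],
                   threshold: float = 0.8) -> list[str]:
    """Return reference toxicants whose profiles correlate with the
    candidate's profile above the threshold (Pearson r)."""
    hits = []
    for name, profile in references.items():
        r = np.corrcoef(candidate, profile)[0, 1]
        if r >= threshold:
            hits.append(name)
    return hits

rng = np.random.default_rng(0)
n_genes = 200                                   # e.g., a curated gene panel
references = {"toxicant_A": rng.normal(size=n_genes),
              "toxicant_B": rng.normal(size=n_genes)}
candidate = references["toxicant_A"] + 0.3 * rng.normal(size=n_genes)

print(flag_bad_actor(candidate, references))    # likely ['toxicant_A']
```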
Hurel Corp., Beverly Hills, Calif., offers a microfluidic biochip with separate but fluidically interconnected “organ” or “tissue” compartments. Each compartment contains a culture of living cells drawn from or engineered to mimic the function or functions of the respective organ or tissue of a living animal. The device, also called a Hurel, has microfluidic channels between the compartments that permit a culture medium that serves as a blood surrogate to recirculate. Drug candidates are added to the medium and are distributed to the cells in the “organ” compartments. The effects of the compounds are detected by measuring or monitoring physiological events such as cell death, differentiation, immune response, or disturbances in metabolism or signal transduction pathways.
Instead of focusing on the chip level, other companies have been thinking big. Earlier this year, Thermo Electron Corp., Waltham, Mass., introduced the LeadStream, a turnkey ADME-Tox system. It combines three integrated instrumentation modules: the WorkCell, a fully automated, modular platform that conducts a variety of ADME-Tox assays; the Reformatter, which provides online preparation of plates for WorkCell; and a liquid chromatography/mass spec system. LeadStream also includes Orchestrator, software that manages the flow of samples through the laboratory.
In vitro efforts could be assisted by new ADME-Tox screens developed as part of a program funded by the National Institute of General Medical Sciences (NIGMS), part of the National Institutes of Health, Bethesda, Md. This year, the institute will divide $2 million in grant money among four to seven research programs lasting up to four years. “We really wanted our research community to think about new areas that they felt were needed to take this to a different level that could really help with predictive ADME and predictive toxicology,” says Richard Okita, PhD, program director in the Division of Pharmacology, Physiology, and Biological Chemistry at NIGMS. Grant announcements will be made in September.
3) Novel in vivo models
Mammalian models are key in predictive ADME-Tox, but they are expensive and labor intensive, require large quantities of compound, and are not always accurate. “Part of the problem is how predictive our animal models are. They’re not too bad, but they’re not perfect. That’s something to keep in mind,” says Smith. And while conventional cell-based assays can evaluate the potential effects of drugs in culture, they cannot accurately gauge the metabolic complexities that affect drug efficacy or cause toxicity. Over the last several years, however, researchers have been trying to address those shortcomings by developing new in vivo models (see related story).
Model systems such as zebrafish, roundworms, and fruit flies are much less sophisticated organisms, but they are inexpensive and easier to work with than mammals, says Randall Peterson, PhD, of the cardiovascular research center at Massachusetts General Hospital, Boston. “These are cheap, high-throughput things that are enabling entirely new kinds of applications. They are allowing people to look at toxicity much earlier in the process.” Peterson’s lab looked at QT prolongation by developing an automated way to measure heart rates in zebrafish in a 384-well plate and found the method to be highly accurate.
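Automated heart-rate readouts of this kind typically come down to counting beats in a per-well intensity trace. The sketch below estimates beats per minute by peak detection and flags wells with depressed rates relative to a control; the acquisition rate, thresholds, and synthetic traces are generic assumptions, not the lab’s actual analysis.

```python
# Estimate zebrafish heart rate per well from an intensity time series
# (e.g., pixel intensity over the heart region) and flag slowed hearts.
# Frame rate, thresholds, and data here are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

FPS = 30.0  # assumed frames per second of the plate imager

def beats_per_minute(trace: np.ndarray) -> float:
    """Count intensity peaks (one per heartbeat) and convert to BPM."""
    peaks, _ = find_peaks(trace, distance=FPS / 5, prominence=0.5)
    duration_min = len(trace) / FPS / 60.0
    return len(peaks) / duration_min

# Synthetic 10-second traces: control beats ~150 BPM, treated well is slower.
t = np.arange(0, 10, 1 / FPS)
control = np.sin(2 * np.pi * 2.5 * t) + 0.1 * np.random.randn(t.size)
treated = np.sin(2 * np.pi * 1.5 * t) + 0.1 * np.random.randn(t.size)

bpm_control, bpm_treated = beats_per_minute(control), beats_per_minute(treated)
print(f"control: {bpm_control:.0f} BPM, treated: {bpm_treated:.0f} BPM")
if bpm_treated < 0.8 * bpm_control:
    print("flag: depressed heart rate relative to control")
```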
Another company exploring novel in vivo tools is Hepaticus Inc., Woodbridge, Conn., which has developed a way to model human liver cell functionality using immunocompetent rats. While the technology has not yet been validated for ADME-Tox, the company believes it could be used for preclinical detection of human liver toxicity that is missed in assays using normal rats. Hepaticus is now seeking additional venture funding to further develop the technology.
4) In silico modeling
Another set of tools is in silico modeling and simulation, an approach other industries have used for decades. Smith says researchers are using a combination of tools to achieve their aims. Millennium uses an in silico method that combines three different software packages: Topkat, DEREK, and Multi-Case. “These are each stand-alone, in silico computer modeling systems for predicting genetic toxicity. We actually have a way, from a software basis, to link them all together. It gives us an even more accurate prediction of actual gene toxicity.”
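Linking several stand-alone predictors usually means combining their calls into a consensus. The sketch below shows one simple way to do that, a conservative vote in which any positive call flags the compound and agreement raises confidence. The predictor outputs are stubbed with hypothetical values; this is not how Millennium’s software linkage actually works.

```python
# Consensus genotoxicity call from three independent in silico predictors.
# The individual predictions here are hard-coded stand-ins; in practice each
# would come from a separate program's output file.

def consensus_genotox(predictions: dict[str, bool]) -> tuple[str, float]:
    """Combine binary positive/negative calls into a conservative consensus.

    Returns a label and the fraction of predictors agreeing with it.
    Any single positive call is enough to flag the compound for follow-up.
    """
    positives = sum(predictions.values())
    if positives == 0:
        return "negative", 1.0
    return "positive", positives / len(predictions)

# Hypothetical outputs for one compound from three different predictors.
calls = {"predictor_1": False, "predictor_2": True, "predictor_3": True}
label, agreement = consensus_genotox(calls)
print(f"consensus: {label} (agreement {agreement:.0%})")  # positive (67%)
```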
A number of other tools are available for screening (see the April 2005 cover story for details). Simulations Plus Inc., Lancaster, Calif., develops ADME-Tox neural net and simulation software. Its ADMET Predictor is a structure-property program that predicts properties important for oral absorption, including pKa, as well as several pharmacokinetic properties and various aspects of toxicity. It can be used for high-throughput in silico screening of large compound libraries or to estimate ADME-Tox properties for individual compounds.
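High-throughput in silico screening of a compound library often starts with computed physicochemical descriptors and simple property cutoffs. The sketch below uses the open-source RDKit toolkit and rule-of-five-style limits purely to illustrate that workflow; it is not ADMET Predictor or its models, and the cutoffs are generic assumptions.

```python
# Descriptor-based pre-screen of a small SMILES library using RDKit.
# The property cutoffs are generic rule-of-five-style limits, chosen only
# to illustrate bulk in silico filtering; they are not a vendor's model.
from rdkit import Chem
from rdkit.Chem import Descriptors

LIBRARY = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
    "decane": "CCCCCCCCCC",
}

def property_flags(smiles: str) -> list[str]:
    """Return descriptor-based liability flags for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return ["unparseable SMILES"]
    flags = []
    if Descriptors.MolWt(mol) > 500:
        flags.append("MW > 500")
    if Descriptors.MolLogP(mol) > 5:
        flags.append("logP > 5")
    if Descriptors.TPSA(mol) > 140:
        flags.append("TPSA > 140")
    if Descriptors.NumHDonors(mol) > 5:
        flags.append("HBD > 5")
    return flags

for name, smi in LIBRARY.items():
    flags = property_flags(smi)
    print(f"{name}: {'pass' if not flags else ', '.join(flags)}")
```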
Okita at NIGMS says investigators are making advances, but one problem with in silico modeling is the limited information available. “The information they want to have access to is, of course, in the drug companies, and they have limited test information. That’s one of their biggest hang-ups. There are studies in the literature that they can use to get some testing done but to really prove and validate how good their models are, they really need access to a much wider range of data.”
5) Exploring toxicogenomics
Toxicogenomics has emerged as a method for detecting and predicting compounds with toxic liabilities at a very early stage. For example, when studying dosing in animals, the typical endpoints are serum chemistry, hematology, histopathology, body weight changes, and food intake. But while it takes two to four weeks for these changes to occur, gene expression changes can be seen after one to three days of treatment, says Abbott’s Blomme. “The majority of the time you can detect gene expression changes way before these other traditional endpoints, so obviously there is a major difference between dosing for three days and dosing for two weeks as far as compound requirements and resources in general.”
In its toxicogenomics work, Blomme’s lab started by evaluating the liver, mainly because it is a common target of toxicity and 90% of its cells are hepatocytes. They treated rats for three days with toxic doses of a variety of hepatotoxicants, with non-hepatotoxicants as controls. Using microarrays, they quickly realized that they could easily differentiate the potent hepatotoxicants and the non-hepatotoxicants from a moderate hepatotoxicant. “We were looking at 10,000 genes, but really of all these genes, only 200 were needed.” Since then, their biostatistician has determined that they can separate these compounds using only 40 genes.
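Separating hepatotoxicants from non-hepatotoxicants on a small gene signature is, in machine-learning terms, a supervised classification problem over a compounds-by-genes expression matrix. The sketch below shows that idea with scikit-learn on synthetic data; the 40-gene panel size, the model choice, and the numbers are assumptions for illustration, not Abbott’s actual signature or statistics.

```python
# Classify compounds as hepatotoxic vs. non-hepatotoxic from a small
# gene-expression signature. Data are synthetic; in practice each row would
# be a compound's liver expression profile over a ~40-gene panel.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_tox, n_safe, n_genes = 30, 30, 40

# Simulate toxicants with a shifted mean on part of the signature genes.
X_tox = rng.normal(loc=0.0, scale=1.0, size=(n_tox, n_genes))
X_tox[:, :10] += 1.5                        # 10 genes respond to toxic treatment
X_safe = rng.normal(loc=0.0, scale=1.0, size=(n_safe, n_genes))

X = np.vstack([X_tox, X_safe])
y = np.array([1] * n_tox + [0] * n_safe)    # 1 = hepatotoxicant

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```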
Abbott’s Blomme says they have generated about a dozen gene signatures for toxicology and are now trying to implement an assay that could screen the desired number of compounds. They spent the last few years using microarrays to understand the system, generate the right endpoints, and validate the approach, but microarrays could not deliver the necessary throughput. They are now moving toward a new platform to evaluate gene expression changes and hope to have it automated in three to four years. (See the July 2004 cover story for details.)
6) Discovering toxicity biomarkers
The FDA’s Woodcock believes drug safety can be enhanced by using new sciences, something that will help move the industry from a more trial-and-error approach to
assessment to a more mechanistic assessment. “This gets down to the use of biomarkers, including imaging, in a much more rigorous way to assess the performance of products.” Biomarkers for HIV, HIV resistance, and HIV response have been of “tremendous” benefit, but researchers are lagging in incorporating other markers. FDA has been working, mainly behind the scenes, she says, to arrange the consortium collaborations necessary to rigorously study many biomarkers and see how they can be incorporated in drug development.
“This has to happen across sectors, and that’s one reason that it hasn’t happened,” Woodcock says. “The biopharmaceutical industry needs to partner with the diagnostic people and with government to share data and do collaborative work. One company alone is probably not going to be able to validate any given biomarker for any given situation.” FDA is now working with the NIH to set up collaborative ventures where people can come together and share data, information, and perhaps do research work together with pooled resources.
NIGMS’s Okita agrees that biomarkers are key. “We’re very hopeful that this will make a major advance for the scientific community when they’re trying to study how compounds are affecting different organs. Especially, of course, when it comes to toxicity biomarkers.” Biomarkers could have helped researchers recognize much earlier that COX-2 inhibitors were going to affect the prostacyclin system and alter the amount of prostacyclin formed. “If they had been aware of that much earlier, either the restrictions on the use of the drug could have been much wider, or they could have come out with different compounds that did not affect that system as much.”
7) Exploiting pharmacogenomics
Although all drugs go through the rigors of safety and efficacy testing before being approved, individual genetic idiosyncrasies lead to large numbers of adverse drug reactions yearly. “Pharmacogenomics will be the ideal risk-benefit tool,” says Janice Bush, MD, vice president of quality, education, and business support at Johnson & Johnson Pharmaceutical Research and Development (J&JPRD), Titusville, N.J. “Right now, there is no 100% guarantee that any particular drug is going to work in any particular person. People will take a drug and may or may not get the benefit, but the risk is still there.”

An Independent Safety Panel
In an effort to restore public confidence in the prescription drug supply, the US Food and Drug Administration (FDA) created an independent board to monitor the safety of drugs on the market. The Drug Safety Oversight Board (DSB) will advise the director of the FDA’s Center for Drug Evaluation and Research (CDER) and will manage the flow of safety information to patients and health care professionals through the agency’s recently proposed Drug Watch Web site. “The real issue here is providing some fresh eyes on different problems in drug safety as they develop postmarket,” says Janet Woodcock, MD, acting deputy commissioner for operations at the FDA. “This brings experienced physicians, primarily from different parts of the FDA, to look at some of these problems and how they’re being managed and to provide input and advice.”
The FDA’s actions on the board have been swift. Its formation was announced by Department of Health and Human Services secretary Michael Leavitt in mid-February, on the eve of a three-day meeting to discuss the safety of anti-inflammatories such as Vioxx (rofecoxib) from Merck & Co. Inc. Three months later, the FDA announced the board’s membership, which will include employees from CDER, the Veterans Administration, and the National Institutes of Health. Susan Cummins, MD, MPH, will serve as the board’s executive director.
Woodcock says the board may post information online regarding possible drug safety problems. “Of course, that’s caused a lot of angst in the industry because it’s emerging information. But what we’ve learned is that we can’t wait until we’ve dotted every i and crossed every t with the company about the label before we announce really serious emerging drug safety information.” On the whole, industry is wary of the board. Henry McKinnell, chairman of Pfizer Inc., told reporters the board would create “a giant game of ‘gotcha.’ And that’s not what we need.”
In June, the FDA approved BiDil, a cardiovascular drug that shows efficacy in blacks. BiDil, from NitroMed Inc. in Lexington, Mass., is a combination of two drugs, hydralazine and isosorbide dinitrate. The FDA had originally rejected the drug after a study involving all races showed little improvement, though the data indicated it might benefit blacks more; an ensuing study confirmed that. Earlier this year, the FDA approved the AmpliChip CYP450 test, the first microarray-based diagnostic test for detecting genetic variations that can influence drug efficacy and adverse drug reactions. The AmpliChip, from F. Hoffmann-La Roche Ltd., Basel, Switzerland, uses microarray technology from Affymetrix Inc., Santa Clara, Calif. Test results will allow physicians to use that genetic information to determine medications and doses for a variety of common conditions, including cardiac diseases, pain, and cancer.
“If we look back five years from now, we can probably say that this was a critical point in pharmacogenomics. Not necessarily because of the device itself, but because of all the implications that came with it,” says Felix Frueh, PhD, associate director for genomics in the office of clinical pharmacology and biopharmaceuticals at the Center for Drug Evaluation and Research. Frueh adds that the FDA has been doing a good deal of work in the area as well. In March, it released a final guidance on pharmacogenomic data submission. While many are skeptical, Frueh is not. “A lot of people will say you’re never going to find out because this is idiosyncratic, there are familial mutations, you’re never going to be able to do that. I agree that there are a lot of obstacles in the way, but I’m an eternal optimist.”
8) Translational medicine
Traditionally, pharmaceutical discovery and clinical organizations worked in silos. The discovery scientists discovered and refined chemical structures to create drug-like molecules, tested them in cellular and animal models, and then passed them along to the clinical organization for testing in humans. Once a compound was thrown over the fence, there was very little interaction between scientists and clinicians; the flow of information remained strictly unidirectional.
The emergence of translational medicine is changing this by increasing communication and bridging the information gap between the two sides. “We are now moving into a phase where all the excitement is at the interface between discovery and the clinic,” says Leslie Hughes, PhD, head of global oncology research at AstraZeneca Plc., Macclesfield, UK. “The ability to feed information back from the clinical trials into the discovery organization is going to be very valuable. It’s all about humanizing the discovery process.” By using tools like biomarkers and by effectively capturing and mining clinical data, translational medicine also aims to identify patient subgroups and find optimal dosing regimens least likely to cause adverse drug effects.
9) Performing attrition analysis
So what’s to be done after research teams have run compounds through in vivo and in vitro screens, predictive modeling, and a variety of other tests, and the compounds still fail? Shama Kajiji, PhD, head of the attrition analysis office at Pfizer Inc., examines failed compounds to learn what to avoid in future projects. “Attrition cannot be avoided, but it can be managed. That’s the finding I share with people who are looking at this information,” she says. “Yes, you can get incremental improvements, but you’re not going to go from 95% attrition in our business to, say, 20%. There’s no strategy that’s going to change it that way, but what you can do is manage it and de-risk your portfolio.”
In 2000, Pfizer formed Attrition Task Force 1 (ATF1) to look at new technologies and develop firmer guidelines and decision-making processes to improve pharmacokinetics and reduce attrition. ATF2 followed in 2001, and one of the ideas that came out of it was the attrition analysis office. Kajiji and another employee work with company sites to gather information on attrition and productivity metrics and to inform them of potential strategies. Another part of the equation is the productivity and attrition coordination team, a senior management group whose primary role is implementing recommendations.
The position did not exist at Pfizer before Kajiji took it, and over the course of delivering talks on the office at a number of meetings, she has gotten the sense that very little work of this type is being done at other companies. “It’s obviously a very sensitive job . . . The first thing I had to do was establish credibility, so part of my credibility is as a scientist. I was really able to talk the talk. That gave me a bit of a calling card.”
10) Balancing risk management
J&JPRD’s Bush says that, on the whole, industry has become more proactive when it comes to risk management. “Industry and people in industry are very cognizant of risk management, not just as a way to make sure that patients can get the right medicines in the right dose at the right times, but also as a tool for industry to really ensure that we put our programs together correctly, that we think of all the different aspects, and that as we launch drugs, we have a program in place that helps it be used appropriately.”
Bush also believes that risk management is going to become second nature throughout industry. “Right now it’s kind of a new topic and people are just starting to learn about it and embrace it, but I think in the next several years, it will just become part of what we do and we’ll look for better ways to do surveillance.” She also hopes that more realistic models that allow researchers to look more quantitatively at benefit-risk management will be developed, because it is now more an art than a science.
Her group sits within J&J’s benefit-risk management organization, which has about 350 people worldwide. Bush says J&J decided about two years ago that its top 10 drugs should have risk-management plans. Over the last year, the company has begun putting together risk-management plans for the rest of its drugs using a formal process. “We’ve put together an internal guidance as well as a template so that when we do our risk-management plans, we do them in accordance with final guidances from the FDA,” as well as from international regulatory agencies, so that the plans follow a systematic and consistent format.