Whatever the data source, statistical data mining methods, including Bayesian modeling, can efficiently extract safety information.
Assessment of drug safety is a primary goal in drug development and post-marketing. However, in contrast to the rigorous design and analysis of efficacy endpoints, safety events are typically unplanned and occur spontaneously. Safety data are also highly multivariate, with many possible adverse event terms, most of which are very sparsely populated in clinical trial and post-marketing data sources.
These characteristics make safety data challenging to analyze and report. As a result, while large amounts of safety data are collected across all phases of clinical trials and post-marketing studies, basic information, such as relationships between drug treatments and adverse events, is often not well captured in statistical analyses and clinical study reports.
Novel data mining techniques can help address some of the issues with safety data and extract the salient clinical information from safety data efficiently. This helps identify potential safety issues sooner and more accurately.
This article outlines some of the issues with adverse event data analysis in pre- and post-marketing settings and describes data mining methods that address these challenges. Data mining and graphical analysis methods aim to move drug candidates through the development process faster while highlighting safety issues sooner. In this context, patients get new, safe therapies faster, and drug development and post-marketing analysis is streamlined for sponsors, researchers, and regulatory agencies.
Pre-marketing safety data analysis
Pre-marketing data come from prospective, randomized, double-blind clinical trials. These data are high-quality and can be used to assess causal relationships between drug treatment and drug safety. Safety data are challenging to work with, however, and much care needs to be taken in order to provide an accurate and thorough assessment. Some challenging aspects in the clinical trial setting are:
• there are many different safety endpoints (e.g., hundreds or thousands of adverse event types);
• safety data, such as adverse event counts, can be quite sparse; and
• clinical trial populations are small compared to the post-market drug-use population.
With many different endpoints under examination, safety issues can arise by chance alone. Such false positives are particularly problematic under a formal hypothesis testing framework. Sparse data and small trial populations are perhaps best illustrated by the “law of three,” whereby, if an adverse event occurs one in 100 times in the population at large, about 300 subjects are needed in a clinical trial to have 95% confidence of observing at least one adverse event of this type.
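To see the arithmetic behind this, note that the probability of observing at least one event in n subjects is 1 − (1 − p)^n. The short calculation below (a hypothetical illustration, not part of the original analysis) solves this for n at a 1-in-100 event rate:

```python
import math

# Illustration of the "law of three" arithmetic:
# P(at least one event in n subjects) = 1 - (1 - p)^n.
p = 1 / 100          # true adverse event rate in the population
target = 0.95        # desired probability of seeing at least one event

# Solve 1 - (1 - p)^n >= target for n.
n = math.ceil(math.log(1 - target) / math.log(1 - p))
print(n)             # ~299 subjects
print(round(3 / p))  # the familiar "3/p" shortcut gives 300
```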
These data issues motivate the statistical and graphical analyses and must be addressed in any meaningful analysis of safety data in clinical trials. The current thinking is that statistical modeling and data mining methods are better suited to the analysis of safety data than formal statistical hypothesis testing, and that interactive graphical analysis and review of clinical data enable potential safety signals to surface early in the process (Feng et al., 2007; O’Connell, 2007).
There are two key questions in adverse event (AE) analysis: Which adverse events are elevated in treatment vs. placebo? What are the patterns of AE onset in treatment vs. placebo? These questions involve treatment effect patterns and population-level analysis, with attendant subject-level review.
New statistical analysis and data mining techniques address these questions, while taking into account the multiplicity and/or sparseness issues. Bayesian hierarchical models can highlight adverse events related to treatment and facilitate detailed patient-level review.
Hierarchical Bayes modeling
Hierarchical Bayes models are attractive for AE analysis because they handle sparse data naturally and shrink estimates according to model-parameter hierarchies. For example, one may set up a hierarchy in which AE preferred terms are nested within body systems (MedDRA SOCs) or within standardized MedDRA queries (SMQs). This provides stable estimates of adverse event rates, risk differences, and relative risks that borrow strength within the SOCs/SMQs. In many cases, this is the only way to calculate a relative risk, owing to the many zero counts observed for some preferred terms.
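As a rough illustration of what such a model can look like, here is a minimal sketch in PyMC, assuming simulated counts for eight preferred terms nested in three body systems; the data, the groupings, and the prior choices are illustrative assumptions, not the model behind the analyses described here.

```python
import numpy as np
import pymc as pm

# Illustrative data: AE counts for 8 preferred terms nested in 3 body systems (SOCs).
soc_idx = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # SOC membership of each preferred term
y_trt = np.array([4, 0, 2, 6, 1, 0, 3, 1])     # AE counts, treatment arm
y_pbo = np.array([1, 0, 1, 2, 0, 0, 1, 0])     # AE counts, placebo arm
n_trt, n_pbo = 100, 100                         # subjects per arm

with pm.Model() as hier_ae:
    # SOC-level means: preferred terms borrow strength within their body system.
    mu_soc = pm.Normal("mu_soc", mu=0.0, sigma=2.0, shape=3)
    sigma_pt = pm.HalfNormal("sigma_pt", sigma=1.0)

    # Placebo log-odds and treatment log-odds-ratio for each preferred term.
    logit_pbo = pm.Normal("logit_pbo", mu=-3.0, sigma=2.0, shape=8)
    log_or = pm.Normal("log_or", mu=mu_soc[soc_idx], sigma=sigma_pt, shape=8)

    p_pbo = pm.Deterministic("p_pbo", pm.math.invlogit(logit_pbo))
    p_trt = pm.Deterministic("p_trt", pm.math.invlogit(logit_pbo + log_or))

    pm.Binomial("obs_pbo", n=n_pbo, p=p_pbo, observed=y_pbo)
    pm.Binomial("obs_trt", n=n_trt, p=p_trt, observed=y_trt)

    # The relative risk has a full posterior even for terms with zero placebo counts.
    pm.Deterministic("rel_risk", p_trt / p_pbo)

    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=1)
```

The SOC-level means are what let sparsely observed preferred terms borrow strength from related events in the same body system.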
The multiplicity issue may also be accommodated directly within the Bayes model by specifying prior distributions that treat some of the adverse events as coming from a population of events that is affected by the treatment, and the rest from a population of events that is not. A mixing parameter controlling the proportion of events affected by the treatment can be given its own prior, and all of these parameters can be estimated from the data. This approach models the affected and unaffected populations directly, and it avoids the problematic non-Bayesian practice in which probability statements made about individual adverse event types, when viewed together, ascribe tiny probabilities to the quite real situation in which many adverse events are not affected by the treatment.
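A minimal sketch of that mixture idea, under the same illustrative data and assumptions as above (and only loosely in the spirit of published mixture-prior AE models, not a specific published specification), places the treatment effect at exactly zero for unaffected AE types via a latent indicator:

```python
import numpy as np
import pymc as pm

# Same illustrative counts as the previous sketch.
y_trt = np.array([4, 0, 2, 6, 1, 0, 3, 1])
y_pbo = np.array([1, 0, 1, 2, 0, 0, 1, 0])
n_trt, n_pbo = 100, 100

with pm.Model() as mixture_ae:
    # Mixing weight: the proportion of AE types affected by treatment, estimated from the data.
    pi_affected = pm.Beta("pi_affected", alpha=1.0, beta=1.0)

    # Latent indicator per preferred term: 1 = drawn from the treatment-affected population.
    affected = pm.Bernoulli("affected", p=pi_affected, shape=8)

    # Treatment effect is zero for unaffected terms and a free log-odds-ratio otherwise.
    theta = pm.Normal("theta", mu=0.5, sigma=1.0, shape=8)
    log_or = pm.Deterministic("log_or", affected * theta)

    logit_pbo = pm.Normal("logit_pbo", mu=-3.0, sigma=2.0, shape=8)
    pm.Binomial("obs_pbo", n=n_pbo, p=pm.math.invlogit(logit_pbo), observed=y_pbo)
    pm.Binomial("obs_trt", n=n_trt, p=pm.math.invlogit(logit_pbo + log_or), observed=y_trt)

    # PyMC assigns a discrete sampler to the Bernoulli indicators automatically.
    idata_mix = pm.sample(1000, tune=1000, random_seed=1)
```

The posterior mean of `affected` for each preferred term can then be read as a posterior probability that the AE type belongs to the treatment-affected population.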
Figure 1 shows a p-Risk plot summarizing the results of a hierarchical Bayes analysis of a Phase 2 clinical trial involving a hypothetical drug, Prostinol. The plot shows the relative risk on the x-axis versus the confidence ascribed to the relative risk on the y-axis; both summaries are pulled from the posterior distribution of the relative risk in the Bayes analysis. The x-axis is a “clinician” axis: AE points toward the right end of this axis represent AEs with a higher relative risk of occurring under the treatment as compared to the placebo. The y-axis is a “statistician” axis showing the confidence ascribed to the relative risk; this confidence increases as more observations of the AE accumulate. AEs in the top right-hand corner of the plot are thus AEs with high relative risk and high confidence that the AE is treatment-emergent.
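A display of this kind can be assembled directly from the posterior draws. The sketch below is hypothetical, continuing from the `idata` object in the hierarchical sketch above; the labels and layout are assumptions, not a reproduction of Figure 1.

```python
import matplotlib.pyplot as plt

# Posterior draws of relative risk from the hierarchical sketch above,
# flattened to (draws, number of AE terms).
rr = idata.posterior["rel_risk"].values.reshape(-1, 8)

rr_mean = rr.mean(axis=0)               # x-axis: "clinician" axis, posterior mean relative risk
prob_elevated = (rr > 1).mean(axis=0)   # y-axis: "statistician" axis, P(relative risk > 1)

plt.scatter(rr_mean, prob_elevated)
for i in range(8):
    plt.annotate(f"AE {i + 1}", (rr_mean[i], prob_elevated[i]))
plt.xscale("log")
plt.xlabel("Posterior mean relative risk (treatment vs. placebo)")
plt.ylabel("Posterior probability that relative risk > 1")
plt.title("p-Risk plot (illustrative)")
plt.show()
```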
Note that in a non-Bayes analysis, a relative risk for some of these AEs could not even be calculated because of zero counts in the placebo arm. The Bayes analysis provides an estimate and credible interval for the relative risk by borrowing strength from AEs within the same body system.
The p-Risk plot in Figure 1 highlights the adverse events that are elevated by the treatment. Each affected AE can be explored further through interactive graphics and drill-down to patient-level details, as part of a clinical data review.
Post-marketing safety
There are three types of data often available for assessing post-marketing safety:
• Phase 4 clinical trials,
• spontaneous adverse events reporting systems (e.g., the FDA AERS database), and
• insurance claims data.
Phase 3/4 clinical trial data are by far the best quality, albeit with much lower sample sizes. The spontaneous AE reporting system data are of very poor quality (Feng et al., 2007) and are now badly contaminated with entries related to drugs involved in court cases. As such, analyses of these reporting system data sources have started to lose credibility and interest in the post-marketing safety community.
In recent times, insurance claims data have attracted interest for analysis of adverse events (Ryan et al., 2007; Obenchain, 2007). These data are much higher quality than spontaneous reporting databases. They have large populations of patients (millions) and long subject-level data sequences (often years). Such data enable estimation of incidence rates, drug-condition association, and rare adverse events. These are observational data, however, and do not facilitate the type of causal inference available with prospective, randomized, double-blind clinical trials.
The biggest issue is the so-called “drug-channeling effect,” whereby the sickest patients are more likely to get the newer, top-shelf drugs. Adverse events are typically more prevalent in sicker patients, so raw adverse event rates can often be higher for the drug that is actually the better or safer treatment. Some corrections can be made for this problem, for example, a pre-analysis propensity scoring step in which subjects are matched before drug effects on adverse events are compared. Such matching can identify populations or case-control pairs that provide a more apples-to-apples comparison of drug effects on adverse events. The matching may not be possible for a particular comparison of interest, however, and much care should be taken in this type of analysis. Propensity scoring and causal inference are the subject of much current statistical research (Obenchain, 2007; Ryan et al., 2007), and these methods hold some promise for analyzing observational insurance claims data to assess drug effects on adverse events.
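As a rough sketch of what such a pre-analysis matching step can look like, the example below uses scikit-learn on simulated claims-style data; the covariates, the logistic propensity model, and the one-to-one nearest-neighbor matching are illustrative assumptions rather than a recommended protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated claims-style data: sicker patients (higher comorbidity score) are
# channeled toward the newer drug (treated = 1), mimicking the channeling effect.
n = 5000
age = rng.normal(60, 10, n)
comorbidity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(-1.0 + 0.03 * (age - 60) + 1.2 * comorbidity)))
treated = rng.binomial(1, p_treat)

X = np.column_stack([age, comorbidity])

# Step 1: estimate propensity scores, i.e., the probability of receiving the newer drug.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated patient to the untreated patient with the nearest propensity score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matched = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matched.ravel()]

# The matched cohort (treated_idx plus matched_controls) supports a more
# apples-to-apples comparison of adverse event rates between the two drugs.
```

Comparing adverse event rates within the matched cohort, rather than across all patients, mitigates (though does not eliminate) the channeling bias.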
Summary
Safety data are challenging to work with and much care needs to be taken in order to provide an accurate and thorough safety assessment. Prospective, randomized, double-blind clinical trials (Phase 3 and 4) provide by far the best quality data for assessment of drug effects on adverse events. However, the size of these studies is usually small as compared to the real-world population taking the drug. Insurance claims data show promise as an alternate data source, with higher sample sizes. However, these data are observational and not from prospective, randomized, double-blind trials. As such, great care needs to be taken in their analysis and pre-analysis matching of subjects is required as a minimal data quality effort.
Statistical data mining methods, including Bayesian modeling, can be used to efficiently extract safety information from most data sources. These models naturally handle sparse data in small and large datasets, shrinking estimates according to sensible hierarchies that are pre-specified in the analysis.
About the Author
Michael O’Connell has been working in the pharmaceutical arena for the past 15 years. He has written a number of statistical methods papers and software packages, most recently in the area of safety data analysis and reporting.
This article was published in Drug Discovery & Development magazine: Vol. 11, No. 4, April, 2008, pp. 36-38.