Saturday, May 29, 2010

Some clarifications on Non-inferiority (NI) Clinical Trial Design

I noted that FDA recently issued its draft guidance on non-inferiority clinical trial design. Two weeks ago, I attended the DIA webinar "Understand the Primary Challenges Facing Non-inferiority Studies", which featured presentations by Drs. Bob Temple, Bob O'Neill, and Ed Cox from FDA. Several issues that often prevent us from considering an NI trial are now clarified:

1. Can we use an active control when the active control is not approved for the indication, but is used off-label as standard of care?

This was answered in section V of the draft guidance. "The active control does not have to be labeled for the indication being studied in the NI study, as long as there are adequate data to support the chosen NI margin. FDA does, in some cases, rely on published literature and has done so in carrying out the meta-analyses of the active control used to define NI margins. "

2. When published literature is used to support the choice of the NI margin, what if the endpoints differ across the historical studies?

In Section V of the draft guidance, it says "...among these considerations are the quality of the publications (the level of detail provided), the difficulty of assessing the endpoints used, changes in practice between the present and the time of the studies, whether FDA has reviewed some or all of the studies, and whether FDA and sponsor have access to the original data. As noted above, the endpoint for the NI study could be different (e.g., death, heart attack, and stroke) from the primary endpoint (cardiovascular death) in the studies if the alternative endpoint is well assessed".

3. What if there is no historical clinical trial that directly compares the active control with placebo?
We would typically think that in this situation an NI design is no longer an option, because there is no way to estimate the NI margin (more precisely, the M1 margin). However, Dr. Ed Cox presented an example during the webinar of estimating the NI margin in an indirect way. While there is no clinical trial directly comparing the active control with placebo, we can still estimate the treatment effect of the active control by searching for evidence separately for the active-control group and for the placebo group. For example, in the anti-infective area, many antibiotics have been used for years (perhaps even before FDA was formed), and there may never have been a formal clinical trial showing that a given antibiotic is better than placebo. In pursuing a new antibiotic for such an indication, a placebo-controlled study is not ethical (since other antibiotic products are the standard of care), so choosing the NI margin for an NI study is challenging. The suggestion from Dr. Cox's presentation is to derive an estimate of the effect of the active control over placebo by:
  • Estimating the placebo response rate, i.e., the response rate if untreated
  • Estimating the response rate in the setting of "inadequate" or "inappropriate" therapy
  • Estimating the response rate of the active-control therapy from published studies that report it
A recent paper in the Drug Information Journal detailed a similar approach; see the link below for the paper "Noninferiority margin for clinical trials of antibacterial drugs for Nosocomial Pneumonia". FDA's guidance "Community-Acquired Bacterial Pneumonia: Developing Drugs for Treatment" also has a section on the non-inferiority margin issue for antibacterial drugs.
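To make the arithmetic concrete, here is a minimal SAS sketch of the indirect margin derivation. All of the response rates are made-up numbers for illustration only (they do not come from Dr. Cox's presentation or the cited paper), and discounting M2 to 50% of M1 is just one common convention:

   data ni_margin;
      p_untreated = 0.30;   /* step 1: estimated response rate if untreated           */
      p_active_lo = 0.60;   /* step 3: lower 95% CI bound of active-control response  */
      M1 = p_active_lo - p_untreated;   /* conservative estimate of the active effect */
      M2 = 0.5 * M1;                    /* clinical margin preserving 50% of M1       */
      put M1= M2=;
   run;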

Notice that each of these estimates requires analyzing a collection of published studies, and often a meta-analysis is required; the meta-analysis may call for a random-effects estimate. In Dr. Cox's presentation, DerSimonian-Laird random-effects estimates were used. This approach is described in the original paper as well as in many books on meta-analysis (for example, "Meta-analysis of Controlled Clinical Trials" by Anne Whitehead). My colleague wrote a SAS program for this approach. A SAS paper compared the results from the DerSimonian-Laird approach with the results from SAS Proc Mixed and Proc NLMixed.
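For readers who want to see the mechanics, below is a small SAS/IML sketch of the DerSimonian-Laird computation. The study-level log risk ratios and standard errors are made-up numbers for illustration only:

   proc iml;
      y  = {-0.40, -0.25, -0.55, -0.10, -0.35};   /* hypothetical log risk ratios */
      se = { 0.15,  0.20,  0.25,  0.18,  0.22};   /* hypothetical standard errors */
      k  = nrow(y);
      w  = 1/(se#se);                             /* fixed-effect weights         */
      ybar = sum(w#y)/sum(w);                     /* fixed-effect pooled estimate */
      Q  = sum(w#((y-ybar)##2));                  /* Cochran's Q statistic        */
      tau2 = max(0, (Q-(k-1))/(sum(w)-sum(w#w)/sum(w)));  /* DL between-study variance */
      wstar = 1/(se#se + tau2);                   /* random-effects weights       */
      est = sum(wstar#y)/sum(wstar);              /* DL pooled estimate           */
      sep = sqrt(1/sum(wstar));                   /* standard error of estimate   */
      lcl = est - 1.96*sep;  ucl = est + 1.96*sep;
      print tau2 est sep lcl ucl;
   quit;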

Sunday, May 16, 2010

Hodges-Lehmann Estimator

According to Wikipedia, "the Hodges–Lehmann estimator is a method of robust estimation. The principal form of this estimator is used to give an estimate of the difference between the values in two sets of data. If the two sets of data contain m and n data points respectively, m × n pairs of points (one from each set) can be formed and each pair gives a difference of values. The Hodges–Lehmann estimator for the difference is defined as the median of the m × n differences.
A second type of estimate which has also been called by the name "Hodges–Lehmann" relates to defining a location estimate for a single dataset. In this case, if the dataset contains n data points, it is possible to define n(n + 1)/2 pairs within the data set, allowing each item to pair with itself. The average value is calculated for each pair and the final estimate of location is the median of the n(n + 1)/2 averages. (Note that the two-sample Hodges–Lehmann estimator does not estimate the difference of the means or the difference of the medians (it estimates the median of the differences, which, if the underlying distributions are asymmetric, is a different quantity), while the one-sample Hodges–Lehmann estimator does not estimate either the mean or the median.)"
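As a quick illustration of the two-sample definition, this SAS/IML sketch computes the median of all m × n pairwise differences for two small made-up samples:

   proc iml;
      x = {1.0, 1.5, 2.0, 3.0};        /* first sample,  m = 4 */
      y = {1.5, 2.5, 3.5, 4.0, 5.0};   /* second sample, n = 5 */
      m = nrow(x);  n = nrow(y);
      d = j(m*n, 1, .);
      do i = 1 to m;
         do k = 1 to n;
            d[(i-1)*n + k] = y[k] - x[i];   /* one of the m*n differences */
         end;
      end;
      call sort(d, 1);                      /* sort the differences ascending */
      nd = m*n;
      if mod(nd, 2) = 1 then hl = d[(nd+1)/2];
      else hl = (d[nd/2] + d[nd/2 + 1])/2;  /* median of the differences   */
      print hl;                             /* the two-sample HL estimate  */
   quit;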

The first time I heard of this estimator was in a pharmacokinetic bioequivalence study where we had to compare Tmax between two treatment groups. Typically, we do not need to compare Tmax between treatment groups, since bioequivalence is typically based on AUC (area under the plasma concentration-time curve) and/or Cmax (maximum concentration). Assessment of Tmax was mandatory only if
  • either a clinical claim was made (e.g., rapid onset, as for some analgesics),
  • or based on safety grounds (e.g., IR nifedipine).

Tmax is the time to reach the maximum concentration (Cmax) after drug administration. Tmax certainly does not follow a normal distribution and usually takes only one of several pre-specified sampling time points (depending on how many time points are specified in obtaining the PK profile). In this case, a distribution-free, non-parametric method needs to be used, and the Hodges-Lehmann estimator fits this situation. In addition to Tmax, Hodges-Lehmann can also be used to test the difference in Thalf (t1/2).

In the old days, we had to write the SAS program ourselves. In the latest version, SAS 9.2, Proc NPAR1WAY can be used to calculate the Hodges-Lehmann estimator and its confidence interval. See "Hodges-Lehmann Estimation of Location Shift" for details about the calculation and an example of "Hodges-Lehmann Estimation" from the SAS website.

With the HL option and the EXACT HL statement in SAS Proc NPAR1WAY, the Hodges-Lehmann estimator (location shift) can be estimated, and its confidence intervals are provided (asymptotic (Moses) for large samples and exact for small samples). However, the procedure does not provide a p-value; a p-value may be obtained from the Wilcoxon rank-sum test.
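A minimal example of the syntax might look like the following, where the data set pk and the variables treatment and tmax are hypothetical names:

   proc npar1way data=pk hl wilcoxon;   /* HL location shift plus Wilcoxon test */
      class treatment;
      var tmax;
      exact hl wilcoxon;   /* exact CI and p-value; practical only for small samples */
   run;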

Also see a newer post regarding "Hodges-Lehmann estimator of location shift: median of differences versus the difference in medians or median difference"

Sunday, May 09, 2010

Stronger Bioequivalence Standard?

In April 2010, the Pharmaceutical Science and Clinical Pharmacology Advisory Committee (PSCPAC) discussed issues related to the bioequivalence standard.

"The statistical analysis and acceptance criteria seem to be the most confusing aspects of regulatory bioequivalence evaluation. The current statistical analysis, the two one-sided tests procedure, is a specialized statistical method that is capable of testing for “sameness” or equivalence between the two comparator products. The pharmacokinetic parameters, calculated from the bioequivalence study data, area under the plasma concentration-time curve, (AUC) and maximum plasma concentration (Cmax) represent the extent and rate of drug availability, respectively. All data is log-transformed and the analysis of variance (ANOVA) is used to calculate the 90% confidence intervals of the data for both AUC and Cmax. To be confirmed as bioequivalent, the 90% confidence intervals for the test (generic product) to reference (marketed innovator product) ratio must fall between 80 to 125%. This seemingly unsymmetrical criteria is due to the logtransformation of the data."

However, this one-size-fits-all approach may not be adequate for all pharmaceutical products. One category of pharmaceutical products is called "critical dose (CD) drugs". CD drugs, also called "narrow therapeutic index (NTI) drugs", are medicines for which comparatively small differences in dose or concentration may lead to serious therapeutic failures and/or serious adverse drug reactions. It is reasonable to assume that more stringent bioequivalence criteria should be employed to ensure the safety of such products.

According to the voting results, the advisory committee agreed that CD drugs are a distinct group of products, that the FDA should develop a list of CD drugs, and that the current BE standards are not sufficient for CD drugs.

The FDA proposed that, in addition to the 80-125% criteria based on the 90% confidence interval, a limit of 90-111% on the geometric mean (point estimate) of all BE parameters (i.e., Cmax, AUC0-t, AUC0-∞) be added as a more stringent bioequivalence criterion. However, the advisory committee did not agree with this proposal. Panelists commented that the scientific basis for the proposed limit of 90-111% was not justified. Some members specified that they did not favor the use of Cmax in the proposal, but likely would have been swayed if it had focused solely on AUC.
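The proposal would amount to a two-part check, as in this hypothetical SAS sketch (the geometric mean ratio and CI limits are made-up numbers on the ratio scale):

   data nti_check;
      gmr = 0.93;  lcl = 0.84;  ucl = 1.03;           /* from the BE analysis    */
      pass_current  = (lcl >= 0.80 and ucl <= 1.25);  /* current 80-125% CI rule */
      pass_proposed = pass_current and (0.90 <= gmr <= 1.11);  /* add 90-111% on the point estimate */
      put pass_current= pass_proposed=;
   run;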

To claim bioequivalence, should both AUC and Cmax meet the bioequivalence criteria? While regulatory guidance mentions that AUC and Cmax are the typical parameters for evaluating bioequivalence, there is no guidance formally requiring that equivalence be demonstrated for both AUC and Cmax. In some situations, Cmax may not be applicable for showing bioequivalence. For example, when comparing a drug given by different administration routes (intravenous vs. subcutaneous), equivalence in AUC could be established while equivalence in Cmax could not.

Sunday, May 02, 2010

CDISC beyond the data


CDISC stands for the Clinical Data Interchange Standards Consortium. I had always thought that CDISC was about data standards and data structure and had nothing to do with the protocol, case report form, and so on.

However, over the last several years, CDISC has expanded its reach into the entire flow of the clinical trial. For each step in a clinical trial, there is a counterpart CDISC standard:

  • Protocol: Protocol Representation Model (PRM)
  • Case Report Form: Clinical Data Acquisition Standards Harmonization (CDASH)
  • Data management data set: Study Data Tabulation Model (SDTM)
  • Analysis data set: Analysis Data Model (ADaM)
The Protocol Representation Model is pretty new and was just recently released. PRM is actually now a sub-domain of the BRIDG model. BRIDG stands for the Biomedical Research Integrated Domain Group, a collaborative effort engaging stakeholders from the Clinical Data Interchange Standards Consortium (CDISC), the HL7 Regulated Clinical Research Information Management Technical Committee (RCRIM TC), the National Cancer Institute (NCI) and its Cancer Biomedical Informatics Grid (caBIG®), and the US Food and Drug Administration (FDA).

For each clinical trial, the study protocol is key. The study protocol is typically a text document developed from a protocol template. The protocol is thus treated as a document, not as data. PRM is trying to change this.

The PRM is NOT a specific protocol template; rather, when a template is designed to meet the purposes of a given organization or study type, the use of the PRM common elements will enable and facilitate information re-use without constraining the design of the study or the style of the document. The PRM elements have been found to be typical across study protocols, but they do not reflect either a minimum or a maximum set of elements.

There are four major components of the PRM v1.0—that is, four major areas of a protocol that the elements are related to:


  • Clinical Trial/Study Registry: Elements related to the background information of a study, based on the requirements from the WHO and ClinicalTrials.gov. Examples of elements in this area include Study Type, Registration ID, Sponsors, and Date of First Enrollment.
  • Eligibility: Elements related to eligibility criteria such as minimum age, maximum age, and subject ethnicity.
  • Study Design Part 1: Elements related to a study’s experimental design, such as Arms and Epochs.
  • Study Design Part 2: Elements related to a study’s Schedule of Events and Activities.
It is envisioned that with PRM, the key elements of the protocol can be treated as data strings, stored in data sets, and re-used. The statistical analysis plan could then be developed by importing the key elements from the protocol. However, getting all companies to follow this standard will take time, and there may be many challenges in implementing it. This standard needs to be endorsed by the medical writers and medical directors (not the data managers and statisticians) who actually develop the study protocol.
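As a toy illustration of the protocol-elements-as-data idea (this is not an official PRM schema; the element names simply echo the examples above, and the values are placeholders):

   data protocol_elements;
      length element $30 value $40;
      infile datalines delimiter=',' dsd;
      input element $ value $;
      datalines;
   Study Type,Interventional
   Registration ID,NCT00000000
   Minimum Age,18 years
   Maximum Age,75 years
   ;
   run;
   /* Downstream documents (e.g., a statistical analysis plan shell) could
      then pull these values from the data set instead of re-typing them. */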